Process mining backgrounder – recovering from my #PowerPointFail

Did you ever make some last-minute changes before a presentation, only to look on in horror when a slide popped up that was not exactly what you were expecting? That was me today, on a webinar about process intelligence that I did with Signavio. In the webinar, I was taking a step back from process automation — my usual topic of discussion — to talk more about analytical tools such as process mining. This morning, I decided that I wanted to add a brief backgrounder on process mining, and pulled in some slides that I had created for a presentation back in 2013 on (what were then) evolving technologies related to process management. I got a bit too fancy and created a four-image build, but accidentally didn't set the animation on what should have been the last image in the build, so it obscured all the good stuff on the slide.

I thought it was a pretty interesting topic, and I rebuilt the slide and recorded it. Check it out (it’s only 3-1/2 minutes long):

It’s webinar week! Check out my process intelligence webinar with @Signavio on Thursday

On Thursday, I’m presenting a webinar on process intelligence with Signavio. Here’s the description:

How do you get a handle on your company’s disrupted processes? How do you get real-time visibility into your organization’s strengths and weaknesses? How do you confidently chart a path to the future? The key is process intelligence: seeing your processes clearly and understanding what is actually happening versus what’s supposed to happen.

For example, your order-to-cash process is showing increased sales but decreasing customer satisfaction. Why? What is the root cause? Or, you have an opportunity to offer a new product but aren’t sure if your manufacturing process can handle it. To make this decision, you need a clear line of sight into what your organization can do. These areas are where process intelligence shines.

This webinar will help you answer questions like these, showing you – with examples – how process intelligence can help you drive real business results.

Rather than my usual focus on process automation, I'm digging a bit more into the process analysis side, particularly around process mining. With many businesses now operating with a largely distributed workforce, processes have changed, and it's not possible to do Gemba walks or job shadowing to collect information on what the adjusted processes look like. Process mining and task mining provide the capabilities to do that remotely and accurately, identify any problems with conformance/compliance, and discover root causes. You can sign up at the link above to attend, or to receive the on-demand replay after the event.

I also posted last week about the webinar that I’m presenting on Wednesday for ABBYY on digital intelligence in the insurance industry, which is a related but different spin on the same issue: how are processes changing now, and what methodologies and technologies are available to handle this disruption. In case it’s not obvious, I don’t work for either of these vendors (who have some overlap in products) but provide “thought leadership” presentations to help introduce and clarify concepts for audiences. Looking forward to seeing everyone on either or both of these webinars later this week.

#PegaWorld iNspire 2020

PegaWorld, in shifting from an in-person to a virtual event, dropped down to a short 2.5 hours. The keynotes and many of the breakouts appeared to be mostly pre-recorded, hosted live by CTO Don Schuerman, who provided some welcome comic relief and moderated live Q&A with each of the speakers after their session.

The first session was a short keynote with CEO Alan Trefler. It's been a while since I've had a briefing with Pega, and their message has shifted strongly to the combination of AI and case management as the core of their digital platform capabilities. Trefler also announced Pega Process Fabric, which allows the integration of multiple systems, not just from Pega but from other vendors as well.

Next up was SVP of Products Kerim Akgonul, discussing their low-code Pega Express approach and how it’s helping customers to stand up applications faster. We heard briefly from Anna Gleiss, Global IT Head of Master Data Management at Siemens, who talked about how they are leveraging Pega to ensure reusability and speed deployment across the 30 different applications that they’re running in the Pega Cloud. Akgonul continued with use cases for self-service — especially important with the explosion in customer service in some industries due to the pandemic — and some of their customers such as Aflac who are using Pega to further their self-service efforts.

There was a keynote by Rich Gilbert, Chief Digital and Information Officer at Aflac, on the reinvention that they have gone through. There’s a lot of disruption in the insurance industry now, and they’ve been addressing this by creating a service-based operating model to deliver digital services as a collaboration between business and IT. They’ve been using Pega to help them with their key business drivers of settling claims faster and providing excellent customer service with offerings such as “Claims Guest Checkout”, which lets someone initiate a claim through self-service without knowing their policy number or logging in, and a Claims Status Tracker available on their mobile app or website. They’ve created a new customer service experience using a combination of live chat and virtual assistants, the latter of which is resolving 86% of inquiries without moving to a live agent.

Akgonul also provided a bit more information on the Process Fabric, which acts as a universal task manager for individual workers, with a work management dashboard for managers. There was no live Q&A at this point; it was deferred to a Tech Talk later in the agenda. In the interim was a one-hour block of breakouts with one track of three live build sessions, plus a large number of short prerecorded sessions from Pega, partners and customers. I'm interested in more information on the Process Fabric, which I believe will be covered in the later Tech Talk, although I did grab some screenshots from Akgonul's keynote:

The live build sessions seemed to be overloaded and there was a long delay getting into them, but once started, they were good-quality demos of building Pega applications. I came in part way through the first one on low-code using App Studio, and it was quite interactive, with a moderator dropping in occasionally with live questions, and eventually hurrying the presenter along to finish on time. I was only going to stay for a couple of minutes, but it was pretty engaging and I watched all of it. The next live demo was on data and integration, and built on the previous demo’s vehicle fleet manager use case to add data from a variety of back-end sources. The visuals were fun, too: the presenter’s demo was most of the screen, with a bubble at the bottom right containing a video of the speaker, then a bubble popping in at the bottom left with the moderator when he had a question or comment. Questions from the audience helped to drive the presentation, making it very interactive. The third live demo was on user experience, which had a few connectivity issues so I’m not sure we saw the entire demo as planned, but it showed the creation of the user interface for the vehicle manager app using the Cosmos system, moving a lot of logic out of the UI and into the case model.

The final session was the Tech Talk on product vision and roadmap with Kerim Akgonul, moderated by Stephanie Louis, Senior Director of Pega’s Community and Developer Programs. He discussed Process Fabric, Project Phoenix, Cosmos and other new product releases in addition to fielding questions from social media and Pega’s online community. This was very interactive and engaging, much more so than his earlier keynote which seemed a bit stiff and over-rehearsed. More of this format, please.

In general, I didn’t find the prerecorded sessions to be very compelling. Conference organizers may think that prerecording sessions reduces risk, but it also reduces spontaneity and energy from the presenters, which is a lot of what makes live presentations work so well. The live Q&A interspersed with the keynotes was okay, and the live demos in the middle breakout section as well as the live Tech Talk were really good. PegaWorld also benefited from Pega’s own online community, which provided a more comprehensive discussion platform than the broadcast platform chat or Q&A. If you missed today’s event, you should be able to find all of the content on demand on the PegaWorld site within the next day or two.

Using Digital Intelligence to Navigate the Insurance Industry’s Perfect Storm: my upcoming webinar with @ABBYY_Software

I have a long history of working with insurance companies on their digitization and process automation initiatives, and there are a lot of interesting things happening in insurance as a result of the pandemic and associated lockdown: more automation of underwriting and claims, increased use of digital documents instead of paper, and the search for the "new normal" in insurance processes as we move to a world where the workforce will remain, at least in part, distributed for some time to come. At the same time, there are increases in some types of insurance business activity and decreases in others, requiring reallocation of resources.

On June 17, I’ll be presenting a webinar for ABBYY on some of the ways that insurance companies can navigate this perfect storm of business and societal disruption using digital intelligence technologies including smarter content capture and process intelligence. Here’s what we plan to cover:

  • Helping you understand how to transform processes, instead of falling into the trap of just automating existing, often broken processes
  • Getting your organization one step ahead of your competition with the latest content intelligence capabilities that help transform your customer experience and operational effectiveness
  • Completely automating your handling of essential documents used in onboarding, policy underwriting, claims, adjudication, and compliance
  • Having a direct, real-time overview of your processes to discover where bottlenecks and repetitions occur, where content needs to be processed, and where automation can be most effective

Register at the link, and see you on the 17th.

Around the world with Signavio Live 2020

I missed last year's Signavio Live event, which it turns out gave them a year's head start on the virtual conference format now being adopted by other tech vendors. Now that everyone has switched to online conferences, many have decided to go the splashy-but-prerecorded route, with lots of flying graphics and catchy music but canned presentations that fall a bit flat. Signavio went with a low-key format of live presentations that started at 11am Sydney time with a presentation by Property Exchange Australia: I tuned in from my timezone at 9pm last night, stayed for the Deloitte Australia presentation, then took a break until the last part of the Coca-Cola European Partners presentation that started at 8am my time. In between, there were continuous presentations from APAC and Europe, with the speakers all presenting live in their own regular business hours.

Signavio started their product journey with a really good process modeler, and now have process mining and some degree of operational monitoring for a more complete process intelligence suite. In his keynote, CEO Gero Decker talked about how the current pandemic — even as many countries start to emerge from it — is highlighting the need for operational resilience: companies need to design for flexibility, not just efficiency. For example, many companies are reinventing customer touchpoints, such as curbside delivery for retail as an alternative to in-store shopping, or virtual walk-throughs for looking at real estate. Companies are also reinventing products and services, allowing businesses that rely on in-person interactions to take their services online; I’ve been seeing this shift with everything from yoga classes to art gallery lectures. Decker highlighted two key factors to focus on in order to emerge from the crisis stronger: operational excellence, and customer experience. One without the other does not provide the benefit, but they need to be combined into the concept of “Customer Excellence”. In the Q&A, he discussed how many companies started really stepping up their process intelligence efforts in order to deal with the COVID-19 crisis, then realized that they should be doing this in the normal course of business.

There was a session with Jan ten Sythoff, Senior TEI Consultant at Forrester, and Signavio's Global SVP of Customer Service, Stefan Krumnow, on the total economic impact of the Signavio Suite (TEI is the Forrester take on ROI). Krumnow started with the different ways that a customer organization might derive value from Signavio — RPA at scale, operational excellence, risk and compliance, ERP transformation, and customer excellence — then ten Sythoff discussed the specific TEI report that Forrester created for Signavio in October 2019, with a few updates for the current situation. The key quantifiable benefits identified by Forrester were external resources cost avoidance, higher efficiency in implementing new processes, and cost avoidance of alternative tools; they also found non-quantifiable benefits such as a better culture of process improvement across organizations. For their aggregate case study, created from all of their interviews, they calculated a payback of less than six months for implementing Signavio: this would depend, of course, on how closely a particular organization matched their fictitious use case, which was a 100,000-employee company.

There are a number of additional sessions running until 5pm Eastern North American time; I might peek back in for a few of those, and will write another post if there’s anything of particular interest. I expect that everything will be available on demand after the event if you want to check out any of the sessions.

On the conference format, there is a single track of live presentations, and a Signavio moderator on each one to introduce the speaker and help wrangle the Q&A. Each presentation is 40 minutes plus 10 minutes of Q&A, with a 10-minute break between each one. Great format, schedule-wise, and the live sessions make it very engaging. They’re using GoToWebinar, and I’m using it on my tablet where it works really well: I can control the screen split between speaker video and slides (assuming the speaker is sharing their video), it supports multiple simultaneous speakers, I can see at the top of the screen who is talking in case I join a presentation after the introduction, and the moderator can collect feedback via polls and surveys. Because it’s a single track, it’s a single GTW link, allowing attendees to drop in and out easily throughout the day. The only thing missing is a proper discussion platform — I have mentioned this about several of the online conferences that I’ve attended, and liked what Camunda did with a Slack workspace that started before and continued after the conference — although you can ask questions via the GTW Question panel. To be fair, there is very little social media engagement (the Twitter hashtag for the conference is mostly me and Signavio people), so possibly the attendees wouldn’t get engaged in a full discussion platform either. Without audience engagement, a discussion platform can be a pretty lonely place. In summary, the GTW platform seems to behave well and is a streamlined experience if you don’t expect a lot of customer engagement, or you could use it with a separate discussion platform.

Disclaimer: Signavio is my customer, and I’ve created several webinars for them over the past few months. We have another one coming up next month on Process Intelligence. However, they have not compensated me in any way for listening in on Signavio Live today, or writing about it here.

My upcoming webinar sponsored by Signavio – How to Thrive During Times of Rapid Change

This will be the fourth in a series of webinars that I’m doing for Signavio, this time focused on the high-tech industry but with lessons that span other industries. From the description:

High-Tech businesses are renowned disruptors. But what happens when the disruptors become the disrupted? For example, let’s say a global pandemic surfaces and suddenly changes your market dynamics and your business model.

Can your business handle an instant slowdown or a hyper-growth spurt? What about your operating systems? Are they nimble enough for you to scale? Can you onboard new customers en masse or handle a high volume of service tickets overnight? What about your supply chain; how agile are your systems and supplier relationships?

The first two webinars covered banking in February and insurance in March, and the role that intelligent processes play in improving business, with a brief mention in the March webinar of addressing business disruption caused by the pandemic. By the time we hit the third webinar, on financial services in April, we had pivoted to look at the necessity of process improvement technologies and methodologies in times of business disruption such as the current crisis. Unlike a lot of industries, many high-tech sectors have been booming during the pandemic: their problems are around being able to scale up operations to meet demand without sacrificing customer service. Although they share some of the issues that I looked at in the earlier webinars, they have some unique issues where process intelligence and automation can help.

Tune in on May 20th at 11am Eastern; if you can’t make it then, sign up anyway and you’ll get a link to the on-demand version.

CelosphereLive 2020 — Day 3: extending process mining with multi-event logs and task mining

Traditionally, process mining is fed from the history logs of a single system. However, most businesses aren't run on a single system, and Celonis Product Lead for Discovery Sabeth Steiner discussed how they are enabling multi-event log process mining, where logs from multiple systems are ingested and correlated for a more comprehensive analysis. This can be useful for finding friction between parallel (inbound) procurement and (outbound) sales processes, or customer service requests that span multiple process silos. Different parallel processes appear in Celonis process discovery in different colors, with the crossover points between them highlighted.

Each of the processes can be analyzed independently, but the power comes when they are analyzed in tandem: optimizing the delivery time within an order-to-cash process while seeing the points that it interacts with the procure-to-pay process of the vendors providing materials for that order. Jessica Kaufmann, Senior Software Developer, joined Steiner to show the integrated data model that exists behind the integrated analysis of multiple processes, and how to set this up for multiple event logs. She discussed the different types of visualization: whether to visualize the different processes as a single process (by merging the event logs), or as multiple interacting processes. KPIs can also be combined, so that overall KPIs of multiple interacting processes can be tracked. Great Q&A at the end where they addressed a number of audience questions on the mechanics of using multi-event logs, and they confirmed that this will be available in the free Celonis Snap offering.
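Since the session didn't go into implementation details, here's my own back-of-the-napkin illustration of the core idea: not Celonis code, just a few lines of Python with pandas, with invented case IDs and activities. Two event logs are correlated through a link table (the step that multi-event-log mining automates) and interleaved into a single end-to-end view:

```python
import pandas as pd

# Hypothetical event logs from two systems; in a real analysis these
# would be extracts from, e.g., an ERP and an order management system.
p2p = pd.DataFrame({
    "case_id": ["PO-1", "PO-1", "PO-2"],
    "activity": ["Create PO", "Receive Goods", "Create PO"],
    "timestamp": pd.to_datetime(["2020-05-01", "2020-05-05", "2020-05-02"]),
})
o2c = pd.DataFrame({
    "case_id": ["SO-9", "SO-9"],
    "activity": ["Create Sales Order", "Ship Order"],
    "timestamp": pd.to_datetime(["2020-05-02", "2020-05-06"]),
})

# The correlation step that multi-event-log mining automates,
# hand-coded here as a simple link table between related cases.
links = pd.DataFrame({"p2p_case": ["PO-1"], "o2c_case": ["SO-9"]})

# Tag each log with its source process and merge into one timeline.
p2p["process"] = "procure-to-pay"
o2c["process"] = "order-to-cash"
combined = pd.concat([p2p, o2c]).sort_values("timestamp")

# Interleaved end-to-end view for one linked pair of cases.
pair = links.iloc[0]
view = combined[combined["case_id"].isin([pair["p2p_case"], pair["o2c_case"]])]
print(view[["process", "case_id", "activity", "timestamp"]])
```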

Another analysis capability not traditionally covered by process mining is task mining: what are the users doing on the desktop to interact with multiple systems? Andreas Plieninger, Product Manager, talked about how they capture user interaction data with their new Celonis Task Mining. I've been seeing user interaction capture being done by a few different vendors, both process mining/analysis and RPA vendors, and this really is the missing link in understanding processes: lack of this type of data capture is the reason that I spend a lot of time job shadowing when I'm looking at an enterprise customer's processes.

Task Mining is installed on the user's desktop (Windows only for now), and when certain white-listed applications are used, the interaction information is captured along with data from desktop files, such as Excel spreadsheets. AI/ML helps to group the activities together and match them to other system processes, providing context for analysis. "Spyware" that tracks user actions on the desktop is not uncommon in productivity monitoring, but Celonis Task Mining is a much more secure and restricted version of that, capturing just the data required for analyzing processes, and respecting the privacy of both the user and the data on their screen.

Once the user interaction data is captured, it can be analyzed in the same way as a process event log: it can discover the process and its variants, and trigger alerts if process compliance rules are violated. It's in the same data layer as process mining data, so it can be analyzed and exposed using the same AI, boards and apps structure as process data. Task Mining also captures screen snapshots to show what was actually happening as the user clicked around and entered data, and these can be used to check what the user was seeing while they were working. This can be used to determine root causes for the longer-running variants, find opportunities for task automation, and check compliance.
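To show why this works, here's a tiny sketch of the idea (my own illustration, not the Celonis implementation): captured desktop interactions have exactly the shape of an event log, so the same variant analysis and rule checking apply. The activities and the compliance rule are invented for the example:

```python
import pandas as pd

# Hypothetical captured desktop interactions (task mining output);
# each row is one user action in a white-listed application.
clicks = pd.DataFrame({
    "case_id":  ["INV-1", "INV-1", "INV-2", "INV-2", "INV-2"],
    "activity": ["Open invoice", "Approve invoice",
                 "Open invoice", "Edit amount", "Approve invoice"],
    "timestamp": pd.to_datetime([
        "2020-05-04 09:00", "2020-05-04 09:01",
        "2020-05-04 10:00", "2020-05-04 10:05", "2020-05-04 10:06"]),
})

# Derive the variant (ordered activity sequence) per case, exactly as
# you would for a system event log in process mining.
variants = (clicks.sort_values("timestamp")
                  .groupby("case_id")["activity"].agg(list))

# A toy compliance rule: the invoice amount must never be edited in
# the same session that approves it (illustrative rule only).
for case_id, trace in variants.items():
    if "Edit amount" in trace and "Approve invoice" in trace:
        print(f"compliance alert for {case_id}: {trace}")
```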

He showed a use case for finding automation opportunities in a procure-to-pay process, similar to the concept of multi-event logs where one of those logs is the user interaction data. The user interaction data is treated a bit differently, however, since it represents manual activities where you may want to apply automation. A Celonis automation could then be used to address some of the problem areas identified by the task mining, where some of the cases are completely automated while others require human intervention. This ability to triage cases, sending only those that really need human input to a person while pushing actions back to the core systems to complete the others automatically, can result in significant cost savings and shortened cycle times.

Celonis Task Mining is still in an early adopter program, but is expected to be in beta by August 2020 and generally available in November. I’m predicting a big uptake in this capability, since remote work is removing the ability to use techniques such as job shadowing to understand what steps workers are taking to complete tasks. Adding Task Mining data to Process Mining data creates the complete picture of how work is actually getting done.

That’s it for me at CelosphereLive 2020; you can see replays of the presentation videos on the conference site, with the last of them likely to be published by tomorrow. Good work by Celonis on a marathon event: this ran for several hours per day over three days, although the individual presentations were pre-recorded then followed by live Q&A. Lots of logistics and good production quality, but it could have had better audience engagement through a more interactive platform such as Slack.

CelosphereLive 2020 – Day 2: From process mining to intelligent operations

I'm back for the Celonis online conference, CelosphereLive, for a second day. They started much earlier today since they're running on a European time zone, but I started in time to catch the Q&A portion of the presentation by Ritu Nibber (VP of Global Process and Controls at Reckitt Benckiser), and may go back to watch the rest of it since there were a lot of interesting questions that came up.

There was a 15-minute session back in their studio with Celonis co-CEO Bastian Nominacher and VP of Professional Services Sebastian Walter, then on to a presentation by Peter Tasev, SVP of Procure to Pay at Deutsche Telekom Services Europe. DTSE is a shared services organization providing process and service automation across many of their regional organizations, and they are now using Celonis to provide three key capabilities to their “process bionics”:

  1. Monitor the end-to-end operation and efficiency of their large, heterogeneous processes such as procure-to-pay. They went through the process of identifying the end-to-end KPIs to include in an operational monitoring view, then used the dashboard and reports to support data-driven decisions.
  2. Use of process mining to “x-ray” their actual processes, allowing for process discovery, conformance checking and process enhancement.
  3. Track real-time breaches of rules in the process, and alert the appropriate people or trigger automated activities (a minimal sketch of this idea follows below).
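The third capability is easy to picture in code. Here's a minimal sketch of the general idea, definitely not DTSE's actual implementation; the invoice SLA rule and field names are my own invention:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# A hypothetical rule for capability 3: invoices must be approved
# within 48 hours of receipt, otherwise alert the process owner.
@dataclass
class Case:
    case_id: str
    received: datetime
    approved: Optional[datetime] = None

def breaches(cases, now, sla=timedelta(hours=48)):
    """Yield open cases that have exceeded the approval SLA."""
    for c in cases:
        if c.approved is None and now - c.received > sla:
            yield c

open_cases = [
    Case("INV-100", datetime(2020, 5, 1, 9, 0)),   # breached
    Case("INV-101", datetime(2020, 5, 4, 9, 0)),   # still within SLA
]
for c in breaches(open_cases, now=datetime(2020, 5, 4, 12, 0)):
    # In a real deployment, this would notify someone or trigger an
    # automated activity rather than printing.
    print(f"alert: {c.case_id} exceeded the 48h approval SLA")
```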

Interesting to see their architecture and roadmap, but also how they have structured their center of excellence with business analysts being the key “translator” between business needs and the data analysts/scientists, crossing the boundary between the business areas and the CoE.

He went through their financial savings, which were significant, and also mentioned the ability of process mining to identify activities that were not necessary or could be automated, thereby freeing up the workforce to do more value-added activities such as negotiating prices. Definitely worth watching the replay of this presentation to understand the journey from process mining to real-time operational monitoring and alerting.

It's clear that Celonis is repositioning from just process mining — a tool for a small number of business analysts in an organization — into operational process intelligence that would be a daily dashboard tool for a much larger portion of the workforce. Many other process mining products are attempting an equivalent pivot, although Celonis seems to be a bit farther along than most.

Martin Klenk, Celonis CTO, gave an update on their technology strategy, with an initial focus on how the Celonis architecture enables the creation of these real-time operational apps: real-time connectors feed into a data layer, which is analyzed by the Process AI Engine, and then exposed through Boards that integrate data and other capabilities for visualization. Operational and analytical apps are then created based on Boards. Although Celonis has just released two initial Accounts Payable and Supply Chain operational apps, this is something that customers and partners can build in order to address their particular needs.

He showed how a custom operational app can be created for a CFO, using real-time connectors to Salesforce for order data and Jira for support tickets. He also showed their multi-event log analytical capability, which makes it much easier to bring together data sources from different systems and automatically correlate them without a lot of manual data cleansing — the links between processes in different systems are identified without human intervention. This allows detection of anomalies that occur on the boundaries between systems, rather than just within systems.

Signals can be created based on pre-defined patterns or from scratch, allowing a real-time, data-driven alert to be issued when required, or an automation push to another system to be triggered. This automation capability is a critical differentiator: it allows for a simple workflow based on connector steps, and can replace the need for some other process automation technologies such as RPA in cases where those are not a good fit.

He was joined by Martin Rowlson, Global Head of Process Excellence at Uber; they are consolidating data from all of their operational arms (drive, eats, etc.) to analyze their end-to-end processes, and using process mining and task mining to identify areas for process improvement. They are analyzing some critical processes, such as driver onboarding and customer support, to reduce friction and improve the process for both Uber and the driver or customer.

Klenk's next guest was Philipp Grindemann, Head of Business Development at Lufthansa CityLine, discussing how they are using Celonis to optimize their core operations. They track maintenance events on their aircraft, plus all ground operations activities. Ground operations are particularly complex due to the high degree of parallelism: an aircraft may be refueled at the same time that cargo is being loaded. I have to guess that their operations are changing radically right now and they are having to restructure their processes, although that wasn't discussed.

His last guest was Dr. Lars Reinkemeyer, author of Process Mining in Action — his book has collected and documented many real-world use cases for process mining — to discuss some of the expected directions of process mining beyond just analytics.

They then returned to a studio session for a bit more interactive Q&A; the previous technology roadmap keynote was pre-recorded and didn’t allow for any audience questions, although I think that the customers that he interviewed will have full presentations later in the conference.

#CelosphereLive lunch break

As we saw at CamundaCon Live last week, there is no break time in the schedule: if you want to catch all of the presentations and discussions in real time, be prepared to carry your laptop with you everywhere during the day. The "Live from the Studio" sessions in between presentations are actually really interesting, and I don't want to miss those. Today, I'm using their mobile app on my tablet just for the streaming video, which lets me take screenshots as well as carry it around with me, and using my computer for blogging, Twitter, screen snap editing and general research. This means that I can't use their chat or Q&A functions, since the app does not let you stream the video and use the chat at the same time, but the chat wasn't very interesting yesterday anyway.

The next presentation was by Zalando, a European online fashion retailer, with Laura Henkel, their Process Mining Lead, and Alejandro Basterrechea, Head of Procurement Operations. They have moved beyond just process mining, and are using Celonis to create machine learning recommendations to optimize procurement workflows: the example that we saw provided Amazon-like recommendations for internal buyers. They also use the process automation capabilities to write information back to the source systems, showing how Celonis can be used for automating multi-system integration where you don't already have process automation technology in place to handle this. Their key benefits in adding Celonis to their procurement processes have been efficiency, quality and value creation. Good interactive audience Q&A at the end, where they discussed their journey and what they have planned next with the ML/AI capabilities. It worked well with two co-presenters, since one could be identifying a question for their area while the other was responding to a different question, leaving few gaps in the conversation.

We broke into two tracks, and I attended the session with Michael Götz, Engineering Operations Officer at Celonis, providing a product roadmap. He highlighted their new operational apps, and how they collaborated with customers to create them from real use cases. There is a strong theme of moving from just analytical apps to operational apps that sense and act. He walked through a broad set of the new and upcoming features, starting with data and connectivity, through the process AI engine, and on to boards and the operational apps. I’ve shown some of his slides that I captured below, but if you’re a Celonis customer, you’ll want to watch this presentation and hear what he has to say about specific features. Pretty exciting stuff.

I skipped the full-length Uber customer presentation to see the strategies for leveraging Celonis when migrating legacy systems such as CRM or ERP, presented by Celonis Data Scientist Christoph Hakes. As he pointed out, moving between systems isn't just about migrating the data; it also requires changing (and improving) processes. One of the biggest areas of risk in these large-scale migrations is understanding and documenting the existing and future-state processes: if you're not sure what you're doing now, then likely anything that you design for the new system is going to be wrong. 60% of migrations fail to meet the needs of the business, in part due to that lack of understanding, and 70% fail to achieve their goals due to resistance from employees and management. Using process mining to explore the actual current process and — more importantly — understand the variants means that at least you're starting from an accurate view of the current state. They've created a Process Repository for storing process models, including additional data and attachments.

Hakes moved on to talk about their redesign tools, such as process conformance checking to align the existing processes to the designed future state. After rollout, their real-time dashboards can monitor adoption to locate the trouble spots, and send out alerts to attempt remediation. All in all, they've put together a good set of tools and best practices: their customer Schlumberger saved $40M by controlling migration costs, driving user adoption and performing ongoing optimization using Celonis. Large-scale ERP system migration is a great use case for process mining in the pre-migration and redesign phases, and Celonis' monitoring capabilities also make it valuable for post-migration conformance monitoring.
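For anyone who hasn't seen conformance checking in action, here's a stripped-down illustration of the concept; my own sketch, not the Celonis algorithm, which does far more sophisticated trace alignment. The to-be model is reduced to a set of allowed transitions, and observed traces are checked against it:

```python
# The designed (to-be) process modeled as allowed direct-follows
# transitions; activity names are invented for illustration.
ALLOWED = {
    ("Create Requisition", "Approve Requisition"),
    ("Approve Requisition", "Create PO"),
    ("Create PO", "Receive Goods"),
    ("Receive Goods", "Pay Invoice"),
}

def nonconforming_steps(trace):
    """Return the observed transitions that the to-be model forbids."""
    return [pair for pair in zip(trace, trace[1:]) if pair not in ALLOWED]

observed = ["Create Requisition", "Create PO", "Receive Goods", "Pay Invoice"]
print(nonconforming_steps(observed))
# -> [('Create Requisition', 'Create PO')]  i.e. the approval was skipped
```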

The last session of the day was also a dual track, and I selected the best practices presentation on how to get your organization ready for process mining, featuring Celonis Director of Customer Success Ankur Patel. The concurrent session was Erin Ndrio on getting started with Celonis Snap, and I covered that based on a webinar last month. Patel’s session was mostly for existing customers, although he had some good general points on creating a center of excellence, and how to foster adoption and governance for process mining practices throughout the organization. Some of this was about how a customer can work with Celonis, including professional services, training courses, the partner network and their app store, to move their initiatives along. He finished with a message about internal promotion: you need to make people want to use Celonis because they see benefits to their own part of the business. This is no different than the internal evangelism that needs to be done for any new product and methodology, but Patel actually laid out methods for how some of their customers are doing this, such as road shows, hackathons and discussion groups, and how the Celonis customer marketing team can help.

He wrapped up with thoughts on a Celonis CoE. I’m not a big fan of product-specific CoEs, instead believing that there should be a more general “business automation” or “process optimization” CoE that covers a range of process improvement and automation tools. Otherwise, you tend to end up with pockets of overlapping technologies cropping up all over a large organization, and no guidance on how best to combine them. I wrote about this in a guest post on the Trisotech blog last month. I do think that Patel had some good thoughts on a centralized CoE in general to support governance and adoption for a range of personas.

I will check back in for a few sessions tomorrow, but have a previous commitment to attend Alfresco Modernize for a couple of hours. Next week is IBM Think Digital, the following week is Appian World, then Signavio Live near the end of May, so it’s going to be a busy few weeks. This would normally be the time when I am flying all over to attend these events in person, and it’s nice to be able to do it from home although some of the events are more engaging than others. I’m gathering a list of best practices for online conferences, including the things that work and those that don’t, and I’ll publish that after this round of virtual events. So far, I think that Camunda and Celonis have both done a great job, but for very different reasons: Camunda had much better audience engagement and more of a “live” feel, while Celonis showed how to incorporate higher production quality and studio interviews to good effect, even though I think it’s a bit early to be having in-person interviews.

CelosphereLive 2020 – Day 1

I expect to be participating in a lot of virtual vendor conferences over the next few months, and today I tuned in to the Celonis CelosphereLive. They are running on European time, with today's half day starting at a reasonable 9am Eastern North American time, but the next two days will be starting at 4am my time, so I may be catching some of the sessions on demand.

We had a keynote from co-CEO Alexander Rinke that included a short discussion with the godfather of process mining, Wil van der Aalst. I liked Rinke’s characterization that every process in every company is waiting to be improved: this is what process mining (aka process intelligence, depending on which vendor is talking) is all about in terms of discovering processes. Miguel Milano, Celonis Chief Revenue Officer, joined him to talk about their new Celonis Value Architects certification program. The fact that this announcement takes such a prominent place in the keynote highlights that there’s still a certain amount of expertise required to do process mining effectively, even though the tools have become much easier to use.

There were also some new product announcements, first around the availability of their real-time data connectors. This is a direction that many of the process mining vendors are taking, moving from a purely analytical process discovery role to more of an operational monitoring process intelligence role. Chief Product Officer Hala Zeine joined Rinke to talk about their connectivity — out of the box, the product connects to 80 different data sources — and their process AI engine that fits the data to a set of desired outcomes and makes recommendations. Their visualization boards then let you view the analysis and explore root causes of problem areas.

Their process AI engine does some amount of automation, and they have just released operational apps that help to automate some activities of the workflow. These operational apps are an overlay on business processes: they monitor work in progress, and display analysis of the state of the (long-running) processes being monitored. The example shown was an Accounts Payable operational app that looks at invoices that are scheduled for payment, and allows a financial person to change parameters (such as date of payment) in bulk, which then pushes that update back to the underlying A/P system. Think of these operational apps as smart dashboards, where you can do some complex analysis for monitoring work in progress, and also push some updates and actions back to the underlying operational system. The first two apps are already available to Celonis customers in their app store, and tomorrow there will be a session with the CTO showing how to build your own operational app.
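To make the "smart dashboard" idea concrete, here's a rough sketch of what the A/P example is doing conceptually; the data, field names and write-back function are all invented for illustration:

```python
import pandas as pd

# Hypothetical A/P data as an operational app might see it, mirroring
# (not replacing) the underlying core system such as SAP.
invoices = pd.DataFrame({
    "invoice_id": ["I-1", "I-2", "I-3"],
    "pay_date": pd.to_datetime(["2020-06-30", "2020-06-10", "2020-07-15"]),
    "discount_deadline": pd.to_datetime(["2020-06-15", "2020-06-12", "2020-06-20"]),
})

# Find invoices scheduled to pay after their early-payment discount
# deadline: the candidates for a bulk parameter change.
eligible = invoices[invoices["pay_date"] > invoices["discount_deadline"]]

def push_payment_date(invoice_id, new_date):
    # Stand-in for the write-back connector to the underlying A/P
    # system; the real mechanism is product-specific.
    print(f"update {invoice_id}: pay_date -> {new_date.date()}")

# The "change parameters in bulk" action from the dashboard.
for row in eligible.itertuples():
    push_payment_date(row.invoice_id, row.discount_deadline)
```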

To finish off the day, we had two product demos/discussions. First was JP Thomsen, Celonis' VP Business Models, giving a more in-depth demo of their Accounts Payable operational application. He was joined by Jan Fuhr, Process Mining Lead at global healthcare company Fresenius Kabi, which collaborated on the creation of the A/P operational application; Fuhr discussed their process mining journey and how they are now able to better support their A/P function and manage cash flow. The sweet spot for these operational apps appears to be when you don't have end-to-end management of your process in another system such as a BPMS: the operational app monitors what's happening in the core systems (such as SAP) and replaces ad hoc "management by spreadsheet" with AI and rules that highlight problem areas and make suggestions for remediation. They've had some great cost savings through taking advantage of early-payment discounts and optimizing their payment terms.

Last up was Trevor Miles, Celonis' head of Supply Chain and Manufacturing Solutions, talking about the supply chain operational application: obviously these operational apps are a big deal for Celonis and their customers, since they've been the focus of most of this first half-day. Process mining can provide significant value in supply chain management, since it typically involves a number of different systems without an explicit end-to-end process orchestration, and can have a lot of exceptions or variants. Understanding those variants, and being able to analyze and reroute things on the fly, is critical to maintaining a resilient supply chain. This has been highlighted during the COVID-19 pandemic, where supply chains have been disrupted, overloaded or underused, depending on the commodity and the route.

Process mining is used to generate a digital twin of the supply chain, which can then be used to analyze past performance and as a point of comparison with current activities. The Celonis operational app for supply chain is about closing the gap between sensing and action, so that process mining and simulation isn't just an analytical tool, but a tool for guiding actions to improve processes. It's also a tool for bridging across multiple systems of the same type: many large organizations have, for example, multiple instances of SAP for different parts of their processes, and need to knit together all of that information to make better decisions.
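Conceptually, the digital twin gives you a mined baseline to compare live behavior against. A minimal illustration of that comparison, with routes and numbers invented by me:

```python
import pandas as pd

# Hypothetical shipment legs with cycle times in days: "baseline" rows
# are mined from history (the digital twin), "current" rows are live.
legs = pd.DataFrame({
    "route":  ["A->B", "A->B", "A->B", "B->C", "B->C", "B->C"],
    "period": ["baseline", "baseline", "current",
               "baseline", "baseline", "current"],
    "days":   [3.0, 3.5, 7.0, 2.0, 2.5, 2.2],
})

# Compare current cycle times against each route's own history and
# flag routes running far behind their past behavior.
stats = legs.pivot_table(index="route", columns="period", values="days")
stats["slowdown"] = stats["current"] / stats["baseline"]
print(stats[stats["slowdown"] > 1.5])  # the disrupted routes
```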

Not quite social distancing…

They finished up with a discussion in the studio between Hala Zeine, co-CEO Bastian Nominacher and CTO Martin Klenk, covering some of the new announcements and what’s coming up in the next two days. I’ll be back for some of the sessions tomorrow, although likely not before 8am Eastern.

A few notes on the virtual conference format. Last week’s CamundaCon Live had sessions broadcast directly from each speaker’s home plus a multi-channel Slack workspace for discussion: casual and engaging. Celonis has made it more like an in-person conference by live-broadcasting the “main stage” from a studio with multiple camera angles; this actually worked quite well, and the moderator was able to inject live audience questions. Some of the sessions appeared to be pre-recorded, and there’s definitely not the same level of audience engagement without a proper discussion channel like Slack — at an in-person event, we would have informal discussions in the hallways between sessions that just can’t happen in this environment. Unfortunately, the only live chat is via their own conference app, which is mobile-only and has a single chat channel, plus a separate Q&A channel (via in-app Slido) for speakers that is separated by session and is really more of a webinar-style Q&A than a discussion. I abandoned the mobile app early and took to Twitter. I think the Celosphere model is probably what we’re going to see from larger companies in their online conferences, where they want to (attempt to) tightly control the discussion and demonstrate the sort of high-end production quality that you’d have at a large in-person conference. However, I think there’s an opportunity to combine that level of production quality with an open discussion platform like Slack to really improve the audience experience.

CamundaCon Live 2020 – Day 2: blockchain interrupted, and customer case studies with Capital One and Goldman Sachs

It was a tough choice with the first post-break session at CamundaCon Live: I wanted to listen in on Rick Weinberg, Camunda VP of Products, as he talked about their product direction roadmap, but I decided on the presentation by Muthukumar Vaidhianathan and Tandeep Sidhu from Capital One instead, focused on their process automation modernization with Camunda. I’ll catch the recorded version of Weinberg’s session later, along with a few others that I want to see.

Vaidhianathan and Sidhu talked about some of the problems that they were having with case management using their legacy infrastructure, and how they selected and deployed Camunda. Sidhu talked quite a bit about getting the technical teams up and running with Camunda, and some of the team and DevOps scalability issues. They use a single consolidated UI (actually, one each for front office and back office workers), then a case orchestration layer that connects to the multiple Camunda-based applications: Bank Case Manager, Fraud, Collections, and Legal.

Vaidhianathan then took us through their Legal Case Management application, and how they use BPMN and DMN for automating complex business rules. Decision tables are used to decide, for example, on the course of action for a particular case based on the data about that case (see the sketch below for the general shape of a decision table). They feel that it's important for product owners to own their BPMN and DMN models, while building strong relationships between developers and the product owners on the business side. Some good lessons learned from their journey at the end.
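If you haven't worked with DMN, a decision table is essentially an ordered set of rules evaluated against the case data. Here's the flavor of it in a few lines of Python; the rules and fields are hypothetical, since their real tables are modeled in DMN and run in Camunda's engine:

```python
# A decision table with a "first" hit policy, in plain Python. The
# rules and case fields are invented for illustration only.
RULES = [
    # (predicate over case data, resulting course of action)
    (lambda c: c["amount"] > 50_000,    "escalate to senior counsel"),
    (lambda c: c["type"] == "subpoena", "route to legal operations"),
    (lambda c: c["days_open"] > 30,     "manager review"),
    (lambda c: True,                    "standard handling"),  # default rule
]

def decide(case_data):
    """Evaluate the table top-down and return the first matching action."""
    for predicate, action in RULES:
        if predicate(case_data):
            return action

print(decide({"amount": 12_000, "type": "subpoena", "days_open": 3}))
# -> route to legal operations
```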

Capital One also presented at the Camunda Day in NYC last summer, but talked about how they organized a Camunda hackathon rather than about their business applications — I think they were much earlier in their journey then, and weren't ready to talk about business applications yet.

I’ve been interested in blockchain and BPM for a while now, and listened in on Patrick Millar of the non-profit consortium RiskStream Collaborative as he presented on ledger automation using Camunda. Their parent organization is The Institutes, which provides risk management and insurance education, and is guided by senior executives from the property and casualty industry. RiskStream is creating a distributed network platform, called Canopy, that allows their insurance company members to share data privately and securely, and participate in shared business processes. Whenever you have multiple insurance companies in an insurance process, like a claim for a multi-vehicle accident, having shared business processes — such as first notice of loss and proof of insurance — between the multiple insurers means that claims can be settled quicker and at a much lower cost.

In addition to personal lines of insurance, they are also looking at applications in commercial lines and reinsurance. There are pretty significant savings if they get 100% market adoption (not an unrealistic goal, since the market is made up of their members): $300M in personal lines auto for FNOL and proof of insurance, $384M in commercial lines for certificates of insurance, and $97M in reinsurance for placement.

Unfortunately, we lost the audio/video connection to the presenter in the middle of the session (yes, this really is happening live, and shit happens) and they had to close the session, just as I was really getting into the topic. Also, he never got to the part about how they’re using Camunda. We’ve already heard from Camunda that they will have him record his presentation and have it added to the on-demand videos.

The next session brought both tracks back together for a panel on digital transformation, featuring Mike Ryan, VP Software Engineering at JP Morgan Chase; Christine Yen, CEO of Honeycomb; and Camunda's Bernd Rücker. Mary Thengvall, Camunda's Director of Developer Relations, moderated the panel. Here are some of the points that came up:

  • We’ve built up these massive monolithic systems over the last few decades, but now need to break up these legacy pieces while still supporting them, all while adding new functionality in a more agile manner. This is making it difficult for many of the established companies to compete with the new competitors, such as older financial services companies competing with fintechs. (By the way, I talked about this on a recent webinar, and see it with my own enterprise customers)
  • There’s a need to protect — and improve — the customer experience while the monolith is being replaced piece by piece. In my opinion, “big bang” as a deployment model is dead: gradual migrations without disrupting the user experience should be the general method.
  • There has been a lot of change in roles and communication within organizations. DevOps is part of that, changing what people are responsible for, as is the concept of process owners being responsible for end-to-end metrics. Microservices (and service-oriented architecture in general) mean that systems can be more targeted, since they're assembled from shared services for a unique purpose.
  • There are a lot of great tools and methodologies now, but many companies are not yet ready to implement them. Microservices, serverless architectures, etc. are changing how we design systems for future state.
  • The current pandemic crisis is driving some amount of digital transformation, and companies are having to decide what is critical for survival now versus what can wait. Ryan said that JP Morgan sent 300,000 employees home to work, and they are rethinking how productive people can be in distributed environments, and how teams can still work collaboratively. As a financial company, they need to keep serving customers who need access to financial transactions, and are probably having to scale up their online customer experiences to accommodate that. Yen believes there is as much of a focus on how people work together remotely to build applications as there is on the technology itself.

The panel felt a bit unfocused, and wasn’t as engaging as yesterday’s panel. Possibly I’m not quite as fresh after live-blogging 6,000 words over two days.

The last presentation of the day, and of CamundaCon Live 2020, was Richard Tarling, co-head of Workflow Engineering at Goldman Sachs, on the process automation platform that they built with Camunda at its core. He is focused on workflow at enterprise scale: they have 60,000 users (the entire firm) with 8,000 daily users, participating in 10M new activities and 250M decisions per day spread over 650 compute servers. This includes 3,000 process models, 1,000 decision models, 6,000 forms models and 125 RPA bot automations, all created and supported by 1,500 platform developers. Yowsa.

Their goal with creating their digital automation platform was to accelerate developers, but also support non-technical/citizen developers. This means that they embraced model-driven development by creating six key design tools:

  • Workflow Control Centre
  • Workflow Application Project Modeler
  • Workflow Designer, based on bpmn.io
  • Data Modeler
  • Decision Designer, based on dmn.io
  • Forms Designer

They built some engine extensions for their implementation, specifically around using a stripped-down embedded BPM engine to implement decision flows with high performance, plus the creation of an open-source jDMN execution engine.

He walked through their overall design-time and execution platform architecture, and some of the things that they did to maximize performance while maximizing (developer) usability. Decision services is a big part of their platform, and he discussed their enterprise-wide decision services execution platform. Their architecture wasn’t born in the cloud, but he feels that their use of microservices design principles means that they could move into the cloud in a straightforward manner.

They have a number of different UI personas that they’ve developed for, resulting in a “zero inbox” persona versus a “power user”. They’ve recently redesigned these UIs with a mobile-first focus. They’re also supporting citizen developers for creating their own case management applications through a combination of model-driven design and pre-built components, plus a governed software development lifecycle built on GitLab. They’ve also built their own provisioning, runtime management and monitoring tools — they even use a BPMN-based process for provisioning.

If you’re building your own large-scale digital process/decision automation platform, definitely go and watch the replay of this presentation — Tarling has been in the trenches and has a ton of great advice. Lots of great Q&A at the end, too.


Jakob Freund came back briefly to chat with Mary Thengvall and wrap the conference: thanks for giving a shout out to this blog (and my cat, who made a brief appearance on the Slack channel). And congrats to all for a great virtual conference that was much, much more than a long series of webinars.

I mentioned on Twitter today that CamundaCon is now the gold standard for online conferences: all you other vendors who have conferences coming up, take note. I believe that the key contributors to this success were live (not pre-recorded) presentations, use of a discussion platform like Slack or Discord alongside the broadcast platform, full engagement of a large number of company participants in the discussion platform before/during/after presentations, and fast upload of the videos for on-demand watching. Keep in mind that a successful conference, whether in-person or online, allows people to have unscripted interactions: it's not a one-way broadcast, it's a big messy collaborative conversation.