My writing on the Trisotech blog: better analysis and design of processes

I’ve been writing some guest posts over on the Trisotech blog, but haven’t mentioned them here for a while. Here’s a recap of what I’ve posted over there the past few months:

In May, I wrote about designing loosely-coupled processes to reduce fragility. I had previously written about Conway’s Law and the problems of functional silos within an organization, but then the pandemic disruption hit and I wanted to highlight how we can avoid the sorts of cascading supply chain process failures that we saw early on. A big part of this is not having tightly-coupled end-to-end processes, but separating out different parts of the process so that they can be changed and scaled independently of each other while still forming part of an overall process.

In July, I helped to organize the DecisionCAMP conference, and wrote about the BPMN-CMMN-DMN “triple crown”: not just the mechanics of how the three standards work together, but why you would choose one over another in a specific design situation. This raises particular challenges for the skill sets of business analysts who are expected to model organizations using these standards, since they will tend to use the one that they’re most familiar with regardless of its suitability to the task at hand. It also creates understandability challenges for multi-model representations, which require a business operations reader to be able to see how this BPMN diagram, that CMMN model and this other DMN definition all fit together.

In August, I focused on better process analysis using analytical techniques, namely process mining, and gave a quick intro to process mining for those who haven’t seen it in action. For several months now, we haven’t been able to do much “as is” business analysis through job shadowing and interviews, and I put forward the idea that this is the time for business analysts to start learning about process mining as another tool in their kit of analysis techniques.

In early September, I wrote about another problem that can arise due to the current trend towards distributed (work from home) processes: business email compromise fraud, and how to foil it with better processes. I don’t usually write about cybersecurity topics, but I have my own in-home specialist, and this topic overlapped nicely with my process focus and the need for different types of compliance checks to be built in.

Then, at the end of September, I finished up the latest run of posts with one about the process mining research that I had seen at the (virtual) academic BPM 2020 conference: mining processes out of unstructured emails, and queue mining to see the impact of queue congestion on processes.

Recently, I gave a keynote on aligning intelligent automation with incentives and business outcomes at the Bizagi Catalyst virtual conference, and I’ve been putting together some more detailed thoughts on that topic for this month’s post. Stay tuned.

Disclosure: Trisotech is my customer, and I am compensated for writing posts for publication on their site. However, they have no editorial control or input into the topics that I write about, and no input into what I write here on my own blog.

Closing comments from Bizagi Catalyst 2020

By the time we got to day 3 of the virtual Bizagi Catalyst 2020, Bizagi had given up on their event streaming platform and just published all of the pre-recorded presentations for on-demand viewing. We were supposed to do a live wrap-up at the end of the day with Rob Koplowitz of Forrester Research, Bizagi CEO Gustavo Gómez and myself, moderated by Bizagi’s Senior Director of Product Marketing Rachel Brennan, so we went ahead and recorded that yesterday. It’s now up on the on-demand page; check it out:

This was my first time speaking at — or attending! — Bizagi Catalyst, and I’m looking forward to more of them in the future. Hopefully somewhere more exciting than my own home office.

Bizagi Catalyst 2020 Day 2 keynotes and hackathon

With a quick nod to Ada Lovelace Day (which was yesterday) and women in technology, Bizagi’s Catalyst virtual conference kicked off today with a presentation by Rachel Brennan, Senior Director of Product Marketing, and Marlando Rhule, Professional Services Director, on some of the new industry accelerators available from Bizagi. The first of these was an onboarding and KYC (know your client) process, including verification and risk assessment of both business and individual clients. The second was a permit lifecycle management process, specifically for building (and related) permits for municipal and state governments; it orchestrates communications between multiple applications for zoning and inspections, gathers information and approvals, generates letters and permits, and drives the overall process.

Coming soon, they will be releasing the APQC Process Classification Framework for Bizagi Modeler: the APQC frameworks are a good source of pre-built processes for specific industries as well as cross-industry frameworks.

Rachel also announced the Bizagi hackathon, which runs from October 19 to November 14. From the hackathon website:

It can be a new and innovative Widget to improve Bizagi’s forms, a new connector that extends Bizagi’s capabilities to connect with external systems, or an experience-centric process using Bizagi Sites and Stakeholder concepts.

As with yesterday, the platform was pretty unstable, but eventually the second session started with Luigi Mule of Blue Prism and their customer Royston Clark from Old Mutual, a financial services group in African and Asian markets. As I mentioned yesterday (and last week at another virtual conference), BPM vendors and RPA vendors are finally learning how to cooperate rather than position themselves as competitors: BPM orchestrates processes, and invokes RPA bots to perform tasks as the steps in the process. Eventually, many of the bots will be replaced with proper APIs, but in the meantime, bots provide value through integrating with legacy systems that don’t have exposed APIs.

At Old Mutual, they have 170 bots, 70% of which are integrated in a Bizagi process. Since they started with Blue Prism, they have automated the equivalent of eight million minutes of worker time: effectively, they have given that amount of time back to the business for more value-added activities. The combination of Bizagi and Blue Prism has given them a huge increase in agility, able to change and automate processes in a very short time frame.

Next up was supposed to be my keynote on aligning intelligent automation with incentives and business outcomes, but the broadcast platform failed quite spectacularly and Bizagi had to cancel the remainder of the day, which included the planned live Q&A after my presentation (I’d like to imagine that I’m so popular that I broke the internet). Since we missed the Q&A, feel free to ask questions in the comments here, or on Twitter. You can see my slides below, and the keynote is recorded and available for replay.

I’ve been writing and presenting about aligning incentives with business processes for a long time, since I recognized that more collaborative and ad hoc processes needed to have vastly different metrics than our old-school productivity widget-counting. This was a good opportunity to revisit and update some of those ideas through the pandemic lens, since worker metrics and incentives have shifted quite a bit with work from home scenarios.

Assuming they have the event platform back online tomorrow, I’ll be back for a few of the sessions, and to catch up on some of today’s sessions that I missed.

Bizagi Catalyst 2020, Day 1

This week, I’m attending the virtual Bizagi Catalyst event, and I’ll be giving a short keynote and interactive discussion tomorrow. Today, the event kicked off with an address by CEO Gustavo Gómez on the impact of technology innovation, and the need for rapid response. This is a message that really resonates right now, as companies need to innovate and modernize, or they won’t make it through this current crisis. Supply chains are upside-down, workforces are disrupted, and this means that businesses need to change quickly to adapt. Gómez’s message was to examine your customer-facing processes in order to make them more responsive: eliminate unnecessary steps; postpone steps that don’t require customer interaction; and automate tasks. These three process design principles will improve your customer experience by reducing the time that customers spend waiting to complete a transaction, and will also improve the efficiency and accuracy of your processes.

He had the same message as I’ve had for several months: don’t stand still, but use this disruption to innovate. The success of companies is now based on their ability to change, not on their success at repetition: I’m paraphrasing a quote that he gave, and I can’t recall the original source although it’s likely Bill Drayton, who said “change begets change as much as repetition reinforces repetition”.

I unfortunately missed quite a bit of the following session, by Mata Veleta of insurance provider SCOR, due to a glitchy broadcast platform. I did see the part of her presentation on how Bizagi supports them on their transformation journey, with the digitalization of a claims assessment application that went from design to go-live in six weeks during the pandemic — very impressive. They are embracing the motto “think big, start small, move fast”, and making the agile approach a mindset across the business in addition to an application development principle. They’re building another new application for medical underwriting, and have many others under consideration now that they see how quickly they can roll things out.

The broadcast platform then fell over completely, and I missed the product roadmap session; I’m not sure if Bizagi should be happy that they had so many attendees that they broke the platform, or furious with the platform vendor for offering something that they couldn’t deliver. The “all-singing, all-dancing” platforms look nice when you see the demo, but they may not be scalable enough.

I went back later in the day and watched the roadmap session replay, with Ed Gower, VP Solutions Consulting, and Andrea Dominguez, Product Manager. Their roadmap has a few guiding themes: intelligent automation orchestration primarily through improved connectors to other automation components including RPA; governance to provide visibility into this orchestration; and a refreshed user experience on all devices. Successful low-code is really about what you can integrate with, so the focus on connectors isn’t a big surprise. They have a new connector with ABBYY for capture, which provides best-of-breed classification and extraction from documents. They also have a Microsoft Cognitive Services Connector for adding natural language processing to Bizagi applications, including features such as sentiment analysis. There are some new features coming up in the Bizagi Modeler (in December), including value stream visualizations.

The session by Tom Spolar and Tyler Rudkin of HSA Webster Bank was very good: a case study on how they use Bizagi for their low-code development requirements. They stated that they use another product for the heavy-duty integration applications, which means that Bizagi is used as true no/low-code as well as their collaborative BPMN modeling environment. They shared a lot of best practices, including what they do and don’t do with Bizagi: some types of projects are just considered a poor fit for the platform, which is a refreshing attitude when most organizations get locked into a Maslow’s hammer cognitive bias. They’ve had measurable results: several live deployments, the creation of reusable BPMN capabilities, and reduced case duration.

The final session of the day was a sneak peek at upcoming Bizagi capabilities with Kevin Guerrero, Technical Marketing Manager, and Francisco Rodriguez, Connectors Product Manager. Two of the four items that they covered were RPA-related, including integration with both UiPath and Automation Anywhere. As I saw at the CamundaCon conference last week, BPM vendors are realizing that integration with the mainstream RPA platforms is important for task automation/assistance, even if the RPA bots may eventually be replaced with APIs. Bizagi will be able to trigger UiPath attended bots on the user’s desktop, and start bots from the Bizagi Work Portal to exchange data. We saw a demo of how this is created in Bizagi Studio, including graphical mapping of input/output parameters with the bot, then what it looks like in the user runtime environment. They also discussed their upcoming integration with the cloud-based Automation Anywhere Enterprise A2019, calling cloud-based bots from Bizagi.

Moving on from RPA, they showed their connector with Microsoft Cognitive Services Form Recognizer, allowing for extraction of text and data from scanned forms if you’re using an Azure and Cognitive Services environment. There are a number of pre-defined standard forms, but you can also train Form Recognizer if you have customized versions of these forms, or even new forms altogether. They finished up with their new SAP Cloud Connector, which works with S/4HANA. We saw a demo of this, with the SAP connection being set up directly in Bizagi Studio. This is similar to their existing SAP connector, but with a direct connection to SAP Cloud.

I’ll be back for some of the sessions tomorrow, but since I have a keynote and interactive Q&A, I may not be blogging much.

Disclosure: I am being compensated for my keynote presentation, but not for anything that I blog here. These are my own opinions, as always.

CamundaCon 2020.2 Day 2: roadmap, business-driven DMN and ethical algorithms

I split off the first part of CamundaCon day 2 since it was getting a bit long: I had a briefing with Daniel Meyer earlier in the week on the new RPA integration, and had a lot of thoughts on that already. I rejoined for Camunda VP of Product Management Rick Weinberg’s roadmap presentation, which covered what’s coming in 2021. If you’re a Camunda customer, or thinking about becoming one, you should check out the replay of his session if you missed it. Expect to see updates to decision automation, developer experience, process monitoring and interoperability.

I tuned in to the business architecture track for a presentation by David Ibl, Enterprise Architect at LV 1871 (a German insurance company) on how they enabled their business specialists to perform decision model simulation and test case definition using their own DMN Manager based on the Camunda modeler toolkit. Their business people were already using BPMN for modeling processes, but were modeling business decisions as part of the process, and needed to externalize the rules from the processes in order to simplify the processes. This was initially done by moving the decisions to code, then calling that from within the process, but that made the decisions much less transparent to the business. Now, the business specialists model both BPMN and DMN in Signavio, which are then committed to git; these models are then pulled from git both for deployment and for testing and simulation directly by the business people. You can read a much better description of it written by David a few months ago. A good example (and demo) of how business people can model, test and simulate their own decisions as well as processes. And, since they’re committed to open source, you can find the code for it on GitHub.
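To make the round trip concrete, here’s a minimal sketch of evaluating a deployed DMN decision through Camunda 7’s REST API, the kind of call that testing and simulation tooling like this sits on top of. The engine URL, decision key (`premium-discount`) and input variables are my own illustrative assumptions, not details from the LV 1871 implementation:

```python
import json
from urllib import request

CAMUNDA_URL = "http://localhost:8080/engine-rest"  # assumed engine location

def build_variables(**inputs):
    """Wrap plain Python values in Camunda's typed-variable JSON format."""
    type_map = {bool: "Boolean", int: "Integer", float: "Double", str: "String"}
    return {name: {"value": value, "type": type_map[type(value)]}
            for name, value in inputs.items()}

def evaluate_decision(key, **inputs):
    """POST to /decision-definition/key/{key}/evaluate; returns matched rule rows."""
    body = json.dumps({"variables": build_variables(**inputs)}).encode()
    req = request.Request(
        f"{CAMUNDA_URL}/decision-definition/key/{key}/evaluate",
        data=body, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)  # list of result rows, one dict per matched rule

if __name__ == "__main__":
    # Hypothetical decision key and inputs; requires a running engine.
    print(evaluate_decision("premium-discount", customerYears=7, vip=True))
```

Because the decision lives in the engine rather than in application code, the same endpoint serves both automated test cases and ad hoc simulation by business specialists.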

I also attended a session by Omid Tansaz of Nexxbiz, a Camunda consulting services partner, on their insurance process monitoring capability that allows systems across the entire end-to-end chain of insurance processes to be monitored in a consolidated fashion. This includes broker systems and front- and back-office systems within the insurer, as well as microservices. They were already using Camunda’s BPM engine, and started using Optimize for process visualization since Optimize 3.0 can include external event sources (from all of the other systems in the end-to-end process) as well as the Camunda BPM processes. This is one of the first case studies of the external event capability in Optimize, since that was only released in April, and it shows the potential for having a consolidated view across multiple systems: not just visibility, but compliance auditing, bottleneck analysis, and real-time issue prevention.

The conference closed with a keynote by Michael Kearns from the University of Pennsylvania on the science of socially-aware algorithm design. Ethical algorithms (the topic of his recent book written with Aaron Roth) are not just an abstract concept, but impact businesses from risk mitigation through to implementation patterns. There are many cases of how algorithmic decision-making shows definite biases, and instead of punting to legal and regulatory controls, their research looks at technical solutions to the problem in the form of better algorithms. This is a non-trivial issue, since algorithms often have outcomes that are difficult to predict, especially when machine learning is involved. This is exactly why software testing is often so bad (just to inject my own opinion): developers can’t or don’t consider the entire envelope of possible outcomes, and often just test the “happy path” and a few variants.

Kearns’ research proposes embedding social values in algorithms: privacy, fairness, accountability, interpretability and morality. This requires defining what these social values mean in precise mathematical terms. There’s already been some amount of work on privacy by design, spearheaded by the former Ontario Information and Privacy Commissioner Ann Cavoukian, since privacy is one of the better-understood algorithmic concepts.

Kearns walked us through issues around algorithmic privacy, including the idea that “anonymized” data often isn’t actually anonymized, since the techniques used for this assume that there is only a single source of data. For example, redacting data within a data set can make it anonymous if that’s the only data set that you have; as soon as other data sets exist that contain one or more of the same unredacted data values, you can start to correlate the data sets and de-anonymize the data. In short, anonymization doesn’t work, in general.
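This linkage risk is easy to demonstrate. Here’s a toy sketch, with entirely made-up records, of re-identifying a “redacted” data set by joining it to a second data set on quasi-identifiers such as postal code, birth date and sex:

```python
# "Anonymized" medical records: names removed, quasi-identifiers retained.
medical = [
    {"zip": "02138", "birth": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "10001", "birth": "1972-03-15", "sex": "M", "diagnosis": "asthma"},
]

# A second, public data set (e.g., a voter roll) with names attached.
voters = [
    {"name": "Jane Doe", "zip": "02138", "birth": "1945-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "10001", "birth": "1972-03-15", "sex": "M"},
]

def link(records, roll, keys=("zip", "birth", "sex")):
    """Re-identify redacted records by joining on shared quasi-identifiers."""
    index = {tuple(person[k] for k in keys): person["name"] for person in roll}
    return [(index.get(tuple(r[k] for k in keys)), r["diagnosis"]) for r in records]

print(link(medical, voters))  # each diagnosis now has a name attached
```

The redaction was fine in isolation; the breach comes entirely from the second data set, which the publisher of the first one doesn’t control.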

He then looked at “differential privacy”, which compares the results of an algorithm with and without a specific person’s data: if an observer can’t discern between the outcomes, then the algorithm is preserving the privacy of that person’s data. Differential privacy can be implemented by adding a small amount of random noise to each data point, which makes it impossible to figure out the contribution of any specific data point, and the noise contributions will cancel out of the results when a large number of data points are analyzed. Problems can occur, however, with data points that have very small values, which may be swamped by the size of the noise.
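As an illustration of the idea (my own sketch, not Kearns’ code), here’s a minimal Laplace-mechanism version of a differentially private mean; the function names and parameters are illustrative:

```python
import math
import random

def laplace_noise(scale):
    """Sample from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, epsilon, lower, upper):
    """Differentially private mean of bounded values (Laplace mechanism).

    Clipping bounds any one person's influence on the mean to
    (upper - lower) / n, so adding noise of scale sensitivity / epsilon
    masks any single contribution.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clipped)
    return sum(clipped) / len(clipped) + laplace_noise(sensitivity / epsilon)

# With many data points the noise averages out; with few, it dominates.
ages = [random.randint(20, 60) for _ in range(100_000)]
print(private_mean(ages, epsilon=0.1, lower=0, upper=100))
```

This also shows the small-values problem from the talk: run it on a list of ten values instead of 100,000 and the noise term is no longer negligible relative to the answer.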

He moved on to look at algorithmic fairness, which is trickier: there’s no agreed-upon definition of fairness, and we’re only just beginning to understand tradeoffs, e.g., between race and gender fairness, or between fairness and accuracy. He had a great example of college admissions based on SAT and GPA scores, with two different data sets: one for more financially-advantaged students, and the other for students from modest financial situations. The important thing to note is that the family financial background of a student has a strong correlation with race, and in the US, as in other countries, using race as an explicit differentiator is not allowed in many decisions due to “fairness”. However, it’s not really fair if there are inherent advantages to being in one data set over the other, since those data points are artificially elevated.

There was a question at the end about the role of open source in these algorithms: Kearns mentioned OpenDP, an open source toolset for implementing differential privacy, and AI Fairness 360, an open source toolkit for finding and mitigating discrimination and bias in machine learning models. He also discussed some techniques for determining if your algorithms adhere to both privacy and fairness requirements, and the importance of auditing algorithmic results on an ongoing basis.

CamundaCon 2020.2 Day 2 opening keynotes: BPM patterns and RPA integration

I’m back at CamundaCon 2020.2 for day 2, which kicked off with a keynote by Camunda co-founder and developer advocate Bernd Rücker. He’s a big fan of BPM and graphical models (obviously), but not of low-code: his view is that the best way to build robust process-based applications is with a stateful workflow engine, a graphical process modeler, and code. In my opinion, he’s not wrong for complex core applications, although I believe there are a lot of use cases for low code, too. He covered a number of different implementation architectures and patterns with their strengths and weaknesses, especially different types of event-driven architectures and how they are best combined with workflow systems. You can see the same concepts covered in some of his previous presentations, although every time I hear him give a presentation, there are some interesting new ideas. He’s currently writing a book called Practical Process Automation, which appears to be gathering many of these ideas together.

CTO Daniel Meyer was up next with details of the upcoming 7.14 release, particularly the RPA integration that they are releasing. He positions Camunda as having the ability to orchestrate any type of process, which may include endpoints (i.e., non-Camunda components for task execution) ranging from human work to microservices to RPA bots. Daniel and I have had a number of conversations about the future of different technologies, and although we have some disagreements, we are in agreement that RPA is an interim technology: it’s a stop-gap for integrating systems that don’t have APIs. RPA tends to be brittle, as pointed out by Camunda customer Goldman Sachs at the CamundaCon Live earlier this year, with a tendency to fail when anything in the environment changes, and no proper state maintained when failures occur. Daniel included a quote from a Forrester report that claims that 45% of organizations using RPA deal with breakage on at least a weekly basis.

As legacy systems are replaced, or APIs created for them, RPA bots will gradually be replaced as the IT infrastructure is modernized. In the meantime, however, we need to deal with RPA bots and limit the technical debt of converting the bots in the future when APIs are available. Camunda’s solution is to orchestrate the bots as external tasks; my advice would also be to refactor the bots to push as much process and decision logic as possible into the Camunda engine, leaving only the integration/screen scraping capabilities in the bots, which would further reduce the future effort required to replace them with APIs. This would require that RPA low-code developers learn some of the Camunda process and decision modeling, but this is done in the graphical modelers and would be a reasonable fit with their skills.

The new release includes task templates for adding RPA bots to processes in the modeler, plus an RPA bridge service that connects to the UiPath orchestrator, which in turn manages UiPath bots. Camunda will (I assume) extend their bridge to integrate with other RPA vendors’ orchestrators in the future, such as Automation Anywhere and Blue Prism. What’s interesting, however, is the current architecture: the RPA task in a process is an external task — a task that relies on an external agent to poll for work, rather than invoking a service call directly — then the Camunda RPA bridge invokes the RPA vendor’s orchestrator, then the RPA bots poll their own orchestrator. If you are using a different RPA platform, especially one that doesn’t have an orchestrator, you could configure the bots to poll Camunda directly at the external task endpoint. In short, although the 7.14 release will add some capabilities that make this easier (for enterprise customers only), especially if you’re using UiPath, you should be able to do this already with any RPA product and external tasks in Camunda.
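To illustrate the external task pattern, here’s a rough sketch of a worker that polls Camunda 7’s REST API, runs a bot, and reports completion. The engine URL, worker id and topic name are hypothetical, and `run_bot_task` is a stand-in for the actual bot invocation:

```python
import json
import time
from urllib import request

CAMUNDA_URL = "http://localhost:8080/engine-rest"  # assumed engine location
WORKER_ID = "rpa-bridge-worker"                    # hypothetical worker name

def fetch_and_lock_body(topic, lock_ms=60_000, max_tasks=5):
    """Request body for POST /external-task/fetchAndLock."""
    return {"workerId": WORKER_ID, "maxTasks": max_tasks,
            "topics": [{"topicName": topic, "lockDuration": lock_ms}]}

def post(path, body):
    """POST JSON to the engine; return parsed JSON for 200, None for 204."""
    req = request.Request(CAMUNDA_URL + path, data=json.dumps(body).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp) if resp.status == 200 else None

def run_bot_task(task):
    """Placeholder for the actual bot invocation (screen scraping, etc.)."""
    return {"botResult": {"value": "done", "type": "String"}}

if __name__ == "__main__":
    while True:  # lock waiting tasks, run the bot, report completion
        tasks = post("/external-task/fetchAndLock",
                     fetch_and_lock_body("rpa-invoice-entry")) or []
        for task in tasks:
            post(f"/external-task/{task['id']}/complete",
                 {"workerId": WORKER_ID, "variables": run_bot_task(task)})
        time.sleep(5)
```

Since the worker initiates every connection, this pattern works even when the bots run on locked-down desktops that the engine can’t reach directly, which is exactly why it suits RPA.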

Daniel laid out a modernization journey for companies with an existing army of bots: first, add in monitoring of the bot activities using Camunda Optimize, which now has the capability to monitor external events, in order to gain insights into the existing bot activities across the organization. Then, orchestrate the bots using the Camunda workflow engine (both BPMN and DMN models) using the tools and techniques described above. Lastly, as API replacements become available for bots, switch them out, which could require some refactoring of the Camunda models. There will likely be some bots left hanging around for legacy systems that are never going to have APIs, but that number should dwindle over the years.

Daniel also teased some of the “smart low-code” capabilities that are on the Camunda roadmap, which will be covered in more detail later by Rick Weinberg, since support for RPA low-code developers is going to push them further into this territory. They’re probably never going to be a low-code platform, but are becoming more inclusive for low-code developers to perform certain tasks within a Camunda implementation, while professional developers are still there for most of the application development.

This is getting a bit long, so I’m going to publish this and start a new post for later sessions. On a technical note, the conference platform is a bit wonky on a mobile browser (Chrome on iPad); although it’s “powered by Zoom”, it appears in the browser as an embedded Vimeo window that sometimes just doesn’t load. Also, the screen resolution appears to be much lower than at the previous CamundaCon, with the embedded video settings maxing out at 720p: if you compare some of my screen shots from the two different conferences, the earlier ones are higher resolution, making them much more readable for smaller text and demos. In both cases, I was mostly watching and screen capping on iPad.

CamundaCon 2020.2 Day 1

I listened to Camunda CEO Jakob Freund’s opening keynote from the virtual CamundaCon 2020.2 (the October edition), and he really hit it out of the park. I’ve known Jakob a long time and many of our ideas are aligned, so much of his keynote resonated with me. He used the phrase “reinvent [your business] or die”, whereas I’ve been using “modernize or perish”, with a focus not just on legacy systems and infrastructure, but also legacy organizational culture. Not to hijack this post with a plug for another company, but I’m doing a keynote at the virtual Bizagi Catalyst next week on aligning intelligent automation with incentives and business outcomes, which looks at issues of legacy organizational culture as well as the technology around automation. Processes are, as he pointed out, the algorithms of an organization: they touch everything and are everywhere (even if you haven’t automated them), and a lot of digital-native companies are successful precisely because they have optimized those algorithms.

Jakob’s advice in achieving reinvention/modernization is to do a gradual transformation, not try to do a big bang approach that fails more often than it succeeds, and positions Camunda (of course) as the bridge between the worlds of legacy and new technology. In my years of technology consulting on BPM implementations, I also recommend using a gradual approach by building bridges between new and old technology, then swapping out the legacy bits as you develop or buy replacements. This is where, for example, you can use RPA to create stop-gap task automation with your existing legacy systems, then gradually replace the underlying legacy or at least create APIs to replace the RPA bots.

The second opening keynote was with Marco Einacker and Christoph Anzer of Deutsche Telekom, discussing how they are using process and task automation by combining Camunda for the process layer and RPA at the task layer. They started out with using RPA for automating tasks and processes, ending up with more than 3,000 bots and an estimated €93 million in savings. It was a very decentralized approach, with bots initially being created by business areas without IT involvement, but as they scaled up, they started to look for ways to centralize some of the ideas and technology. The first was to identify the most important tasks to start with, namely those that were true pain points in the business (Einacker used the phrase “look for the shittiest, most painful process and start there”), not just the easy copy-paste applications. They also looked at how other smart technologies, such as OCR and AI, could be integrated to create completely unattended bots that add significant value.

The decentralized approach resulted in seven different RPA platforms and too much process automation happening in the RPA layer, which increased the amount of technical debt, so they adapted their strategy to consolidate RPA platforms and separate the process layer from the bot layer. In short, they are now using Camunda for process orchestration, and the RPA bots have become tasks that are orchestrated by the process engine. Gradually, they are (or will be) replacing the RPA bots with APIs, which moves the integration from front-end to back-end, making it more robust with less maintenance.

I moved off to the business architecture track for a presentation by Srivatsan Vijayaraghavan of Intuit, where they are using Camunda for three different use cases: their own internal processes, some customer-facing processes for interacting with Intuit, and — most interesting to me — enabling their customers to create their own workflows across different applications. Their QuickBooks customers are primarily small and mid-sized businesses that don’t have the skills to set up their own BPM system (although arguably they could use one of the many low-code process automation platforms to do at least part of this), which opened the opportunity for Intuit to offer a workflow solution based on Camunda but customizable by the individual customer organizations. Invoice approvals was an obvious place to start, since Accounts Payable is a problem area in many companies, then they expanded to other approval types and integration with non-Intuit apps such as e-signature and CRM. Customers can even build their own workflows: a true workflow as a service model, with pre-built templates for common workflows, integration with all Intuit services, and a simplified workflow designer.

Intuit customers don’t interact directly with Camunda services; Camunda is a separately hosted and abstracted service, and they’ve used Kafka messages and external task patterns to create the cut-out layer. They’ve created a wrapper around the modeling tools, so that customers use a simplified workflow designer instead of the BPMN designer to configure the process templates. There is an issue with a proliferation of process definitions as each customer creates their own version of, for example, an invoice approval workflow — he mentioned 70,000 process definitions — and they will likely need to do some sort of automated cleanup as the platform matures. Really interesting use case, and one that could be used by large companies that want their internal customers to be able to create/customize their own workflows.

The next presentation was by Stephen Donovan of Fidelity Investments and James Watson of Doculabs. I worked with Fidelity in 2018-19 to help create the architecture for their digital automation platform (in my other life, I’m a technical architecture/strategy consultant); it appears that they’re not up and running with anything yet, but they have been engaging the business units on thinking about digital transformation and how the features of the new Camunda-based platform can be leveraged when the time comes to migrate applications from their legacy workflow platform. This doesn’t seem to have advanced much since they talked about it at the April CamundaCon, although Donovan had more detailed insights into how they are doing this.

At the April CamundaCon, I watched Patrick Millar’s presentation on using Camunda for blockchain ledger automation, or rather I watched part of it: his internet died partway through and I missed the part about how they are using Camunda, so I’m back to see it now. The RiskStream Collaborative is a not-for-profit consortium collaborating on the use of blockchain in the insurance industry; their parent organization, The Institutes, provides risk management and insurance education and is guided by senior executives from the property and casualty industry. To copy from my original post, RiskStream is creating a distributed network platform, called Canopy, that allows their insurance company members to share data privately and securely, and participate in shared business processes. Whenever you have multiple insurance companies in an insurance process, like a claim for a multi-vehicle accident, having shared business processes — such as first notice of loss and proof of insurance — between the multiple insurers means that claims can be settled quicker and at a much lower cost.

I do a lot of work with insurance companies, as well as with BPM vendors to help them understand insurance operations, and this really resonates: the FNOL (first notice of loss) process for multi-party claims continues to be a problem in almost every company, and using enterprise blockchain to facilitate interactions between the multiple insurers makes a lot of sense. Note that they are not creating or replacing claims systems in any way; rather, they are connecting the multiple insurance companies, who would then integrate Canopy with their internal claims systems such as Guidewire.

Camunda is used in the control framework layer of Canopy to manage the flows within the applications, such as the FNOL application. The control framework is just one slice of the platform: there’s the core distributed ledger layer below that, where the blockchain data is persisted, and an integration layer above it to integrate with insurers’ claims systems as well as the identity and authorization registry.

There was a Gartner keynote, which gave me an opportunity to tidy up the writing and images for the rest of this post, then I tuned back in for Niall Deehan's session on Camunda Hackdays over on the community tech track, and some of the interesting creations that came out of the recent virtual edition. This drives home the point that Camunda is, at its heart, open source software that relies on a community of developers both within and outside Camunda to extend and enhance the core product. The examples presented here were all done by Camunda employees, although many of them are not part of the development team, but come from areas such as customer-facing consulting. These were pretty quick demos so I won't go into detail, but here are the projects on GitHub:

If you're a Camunda customer (open source or commercial) and you like one of these ideas, head on over to the related GitHub page and star it to show your interest.

There was a closing keynote by Capgemini; like the Gartner keynote, I felt that it wasn’t a great fit for the audience, but those are my only real criticisms of the conference so far.

Jakob Freund came back for a conversation with Mary Thengvall to recap the day. If you want to see the recorded videos of the live sessions, head over to the agenda page and click on Watch Now for any session.

There’s a lot of great stuff on the agenda for tomorrow, including CTO Daniel Meyer talking about their new RPA orchestration capabilities, and I’ll be back for that.

Process Mining Camp 2020: @Fluxicon takes it online, with keynote by @wvdaalst

Last week, I was busy preparing and presenting webinars for two different clients, so I ended up missing Software AG's ARIS international user groups (IUG) conference and most of Fluxicon's Process Mining Camp online conference, although I did catch a bit of the Lufthansa presentation. However, Process Mining Camp continues this week, giving me a chance to tune in for the remaining sessions. The format is interesting: there is only one presentation each day, presented live using YouTube Live (no registration required), with some Q&A at the end. The next day starts with Process Mining Café, which is an extended Q&A with the previous day's presenter based on the conversations in the related Slack workspace (which you do need to register to join), then a break before moving on to that day's presentation. The presentations are available on YouTube almost as soon as they are finished, but are being shared via Slack using unlisted links, so I'll let Fluxicon make them public at their own pace (subscribe to their YouTube channel since they will likely end up there).

Anne Rozinat, co-founder of Fluxicon, was moderator for the event, and was able to bring life to the Q&A since she's an expert in the subject matter and had questions of her own. Each day's session runs a maximum of two hours starting at 10am Eastern, which makes it a reasonable time for all of Europe and North America (having lived in California, I know the west coasters are used to getting up for 7am events to sync with east coast times). Also, each presentation is by a practitioner who uses process mining (specifically, Fluxicon's Disco product) in real applications, meaning that they have stories to share about their data analysis, and what worked and didn't work.

Monday started with Q&A with Zsolt Varga of the European Court of Auditors, who presented last Friday. It was a great discussion and made me want to go back and see Varga's presentation: he had some interesting comments on how they track and resolve missing historical data, as well as one of the more interesting backgrounds. There was then a presentation by Hilda Klasky of the Oak Ridge National Laboratory on process mining for electronic health records, with some cool data clustering and abstraction to extract case management state transition patterns from what seemed to be a massive spaghetti mess. Tuesday, Klasky returned for Q&A, followed by a presentation by Harm Hoebergen and Redmar Draaisma of Freo (an online loans subsidiary of Rabobank) on loan and credit processes across multiple channels. It was great to track Slack during a presentation and see the back-and-forth conversations as well as watch the questions accumulate for the presenter; after each presentation, it was common to see the presenter respond to questions and discussion points that weren't covered in the live Q&A. For online conferences, this type of "chaotic engagement" (rather than tightly controlled broadcasts from the vendor, or non-functional single-threaded chat streams) replaces the "hallway chats" and is essential for turning a non-engaging set of online presentations into a more immersive conference experience.

The conference closing keynote today was by Wil van der Aalst, who headed the process mining group at Eindhoven University of Technology where Fluxicon's co-founders did their Ph.D. studies. He's now at RWTH Aachen University, although he remains affiliated with Eindhoven. I've had the pleasure of meeting van der Aalst several times at the academic/research BPM conferences (including last year in Vienna), and always enjoy hearing him present. He spoke about some of the latest research in object-centric process mining, which addresses the issue of handling events that refer to multiple business objects, such as multiple items in a single order that may be split into multiple deliveries. Traditionally in process mining, each event record from a history log that forms the process mining data has a single case ID, plus a timestamp and an activity name. But what happens if an event impacts multiple cases?

He started with an overview of process mining and many of the existing challenges, such as performance issues with conformance checking, and the fact that data collection/cleansing still takes 80% of the effort. However, process mining (and, I believe, task mining as a secondary method of data collection) may have to work with event logs where an event refers to multiple cases, requiring that the data be "flattened": pick one of the cases as the identifier for the event record, then duplicate the record for each case referred to in the event. The problem arises because events can disappear when cases are merged again, which will cause problems in generating accurate process models. Consider your standard Amazon order, like the one that I'm waiting for right now. I placed a single order containing eight items a couple of days ago, which were supposed to be delivered in a single shipment tomorrow. However, the single order was split into three separate orders the day after I placed the order, then two of the orders are being sent in a single shipment that is arriving today, while the third order will be in its own shipment tomorrow. Think about the complexity of tracking by order, or item, or shipment: processes diverge and converge in these many-to-many relationships. Is this one process (my original order), or two (shipments), or three (final orders)?
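The flattening problem can be made concrete with a tiny example. This is my own sketch, not van der Aalst's code: an object-centric log where one event touches several objects (an order with multiple items) gets its events duplicated when we force a classic single-case view, so the same five events become a different number of rows depending on which case notion we pick:

```python
# Object-centric events: one "place order" event touches three items at once,
# and one "ship" event covers two items. All names here are invented for illustration.
events = [
    {"activity": "place order", "timestamp": 1, "orders": ["o1"], "items": ["i1", "i2", "i3"]},
    {"activity": "pick item",   "timestamp": 2, "orders": ["o1"], "items": ["i1"]},
    {"activity": "pick item",   "timestamp": 3, "orders": ["o1"], "items": ["i2"]},
    {"activity": "ship",        "timestamp": 4, "orders": ["o1"], "items": ["i1", "i2"]},
    {"activity": "ship",        "timestamp": 5, "orders": ["o1"], "items": ["i3"]},
]

def flatten(events, case_notion):
    """Force a classic single-case log: one row per (event, case object) pair."""
    rows = []
    for e in events:
        for case_id in e[case_notion]:  # duplicate the event for every object it touches
            rows.append({"case": case_id,
                         "activity": e["activity"],
                         "timestamp": e["timestamp"]})
    return rows

by_order = flatten(events, "orders")  # 5 rows: one case, no duplication
by_item = flatten(events, "items")    # 8 rows: shared events duplicated per item
print(len(events), len(by_order), len(by_item))  # prints: 5 5 8
```

Under the "item" notion, the single "place order" event now appears three times, and a discovery algorithm would see three independent cases each "placing an order" — which is exactly the kind of convergence distortion that object-centric process mining aims to avoid by keeping the many-to-many relationships in the log.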

The really great part was engaging in the Slack discussion while the keynote was going on. A few people were asking questions (including me), and Mieke Jans posted a link to a post that she wrote on a procedure for cleansing event logs for multi-case processes – not the same as what van der Aalst was talking about, but a related topic. Anne Rozinat posted a link to more reading on these types of many-to-many situations in the context of their process mining product from their "Process Mining in Practice" online book. Not surprisingly, there was almost no discussion on the Twitter hashtag, since the attendees had a proper discussion platform; contrast this with some of the other conferences where attendees had to resort to Twitter to have a conversation about the content. After the keynote, van der Aalst even joined in the discussion and answered a few questions, plus added the link for the IEEE task force on process mining that promotes research, development, education and understanding of process mining: definitely of interest if you want to get plugged into more of the research in the field. As a special treat, Ferry Timp created visual notes for each day and posted them to the related Slack channel – you can see the one from today at the left.

Great keynote and discussion afterwards, I recommend tracking Fluxicon’s blog and/or YouTube channel to watch it – and all of the other presentations – when published.

Process mining backgrounder – recovering from my #PowerPointFail

Did you ever make some last-minute changes before a presentation, only to look on in horror when a slide pops up that was not exactly what you were expecting? That was me today, on a webinar about process intelligence that I did with Signavio. In the webinar, I was taking a step back from process automation — my usual topic of discussion — to talk more about the analytical tools such as process mining. This morning, I decided that I wanted to add a brief backgrounder on process mining, and pulled in some slides that I had created for a presentation back in 2013 on (what were then) evolving technologies related to process management. I got a bit too fancy, and created a four-image build but accidentally didn’t have the animation set on what should have been the last image added to the build, so it obscured all the good stuff on the slide.

I thought it was a pretty interesting topic, and I rebuilt the slide and recorded it. Check it out (it’s only 3-1/2 minutes long):

It’s webinar week! Check out my process intelligence webinar with @Signavio on Thursday

On Thursday, I’m presenting a webinar on process intelligence with Signavio. Here’s the description:

How do you get a handle on your company’s disrupted processes? How do you get real-time visibility into your organization’s strengths and weaknesses? How do you confidently chart a path to the future? The key is process intelligence: seeing your processes clearly and understanding what is actually happening versus what’s supposed to happen.

For example, your order-to-cash process is showing increased sales but decreasing customer satisfaction. Why? What is the root cause? Or, you have an opportunity to offer a new product but aren’t sure if your manufacturing process can handle it. To make this decision, you need a clear line of sight into what your organization can do. These areas are where process intelligence shines.

This webinar will help you answer questions like these, showing you – with examples – how process intelligence can help you drive real business results.

Rather than my usual focus on process automation, I'm digging a bit more into the process analysis side, particularly around process mining. With a largely distributed workforce at many businesses, processes have changed, and it's not possible to do Gemba walks or job shadowing to collect information on what the adjusted processes look like. Process mining and task mining provide the capabilities to do that remotely and accurately, and identify any problems with conformance/compliance as well as discover root causes. You can sign up at the link above to attend or receive the on-demand replay after the event.

I also posted last week about the webinar that I’m presenting on Wednesday for ABBYY on digital intelligence in the insurance industry, which is a related but different spin on the same issue: how are processes changing now, and what methodologies and technologies are available to handle this disruption. In case it’s not obvious, I don’t work for either of these vendors (who have some overlap in products) but provide “thought leadership” presentations to help introduce and clarify concepts for audiences. Looking forward to seeing everyone on either or both of these webinars later this week.