When a customer presenter was unable to attend at the last minute, Joe Pappas stepped in and gave an impromptu presentation and demo on building connectors. The Camunda Marketplace (formerly the Community Hub) is the home for some of the expanding set of shared resources that Jakob Freund spoke about in the keynote: there’s one section for Camunda-provided connectors, one for those provided by partners, and one for those from the community.
Joe walked through some of the connectors that he has built and published, showing the code and then demonstrating the functionality. You can find his connectors in the Community section of the Marketplace, including NATS inbound/outbound, email inbound, file watch inbound, and database inbound from Postgres or MongoDB. Not much to write about since it was mostly a code demo, but cool stuff!
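If you’re curious what building one of these involves: at its core, an outbound connector is a worker bound to a service-task type in the process model, with some input/output variable mapping around it. Joe’s connectors are built with Camunda’s Java connector SDK; as a rough sketch of the same pattern, here’s what a minimal worker looks like using the community pyzeebe Python client (the task type and variable names are made up for illustration, not taken from his code):

```python
# Minimal sketch of the pattern behind an outbound connector: a worker
# subscribed to a service-task type, which receives process variables,
# does the integration work, and returns new variables to the process.
# Assumes a Zeebe gateway on localhost:26500; the task type is hypothetical.
import asyncio
from pyzeebe import ZeebeWorker, create_insecure_channel

channel = create_insecure_channel()
worker = ZeebeWorker(channel)

@worker.task(task_type="io.example:nats-publish")
async def publish_to_nats(subject: str, payload: dict) -> dict:
    # A real connector would publish `payload` to the NATS subject here,
    # and raise errors so that the engine can create an incident.
    print(f"would publish {payload} to {subject}")
    return {"published": True}  # merged back into the process variables

asyncio.run(worker.work())
```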
I feel like I’m barely back from the academic research BPM conference in Utrecht, and I’m already at Camunda’s annual CamundaCon, being held in New York (Brooklyn, actually) — the first time for the main conference outside of Germany. The location change from Berlin is a bit of a tough call since they will lose some of the European customers who don’t have a budget for international travel, but the opportunity to see their North American customers will make up for it. They’re also running the conference virtually for those of you who can’t be here in person, and you can sign up for free to attend the presentations online.
Although I don’t blog about anything that happens after the bar is open, I did have a couple of interesting conversations at the networking event last night about my relationship with Camunda. I’m here this week as an independent analyst, and although they are covering my travel expenses, I’m not being paid for my time and (as usual) the opinions that I write here are my own. This is the same arrangement I have with any vendor whose conference I attend, although I’ve become a bit pickier about which locations I’m willing to travel to (hint: not Vegas). I’ve been covering Camunda for a long time, starting 10 years ago with their fork from Activiti, back when they didn’t capitalize their name. They’ve been a client of mine in the past for creating white papers, presenting webinars and speaking at their conference. I’ve also worked with some of their clients on technical strategy and architecture, which is the other side of my business.
The first day opened with a keynote from Camunda CEO Jakob Freund, giving a brief retrospective of the last 10 years of their growth and especially their current presence in North America. There are more than 200 people attending in person today at the 74Wythe event space, plus an online contingent of attendees. He started with a vision of the automated enterprise, and how this is made difficult by the complexity of mission-critical processes that cross multiple silos of systems and organizational departments. Process orchestration allows for automation of the end-to-end processes by acting as a controller that can invoke the right resource — whether a person or a system — at the right time while maintaining end-to-end visibility and management. If you’re not embracing process orchestration, you run the risk of having broken processes that have a significant impact on your customer satisfaction, efficiency and innovation.
Camunda has more than 500 customers globally now, and has amassed over 5000 use cases for how those organizations are using Camunda’s software. This has allowed them to develop a process orchestration maturity model: from single projects, to broader initiatives, to distributed adoption, to a strategic scaled adoption of process orchestration. Although obviously Jakob sees the Camunda Process Orchestration Platform as a foundational platform, he looked at a number of other non-technical components such as stakeholder buy-in, plus technical add-ons and integration partners. I like that he started with strategic alignment and ended with value monitoring that wraps back to that alignment; this type of connection between strategic goals and operational metrics is something that I strongly believe in and have written about quite a bit.
Since we’re in New York, his “process orchestration in action” segment focused on financial services, although with lessons for many other industries. I work a lot with my own financial services clients, and the challenges he listed are very familiar. He walked through case studies of Desjardins (legacy BPMS replacement), Truist (merging systems from two merged banks), National Bank of Canada (an automation CoE to radically reduce project development time), and NatWest (a CoE to aid self-service projects).
He moved on to talk about the innovation that Camunda is introducing through their technology. They now address more of the BPM lifecycle than they started out with — which was purely as a developer tool — and now provide more tools for business and IT to collaborate on process improvement/automation projects. They are also addressing the acceleration of solution development through some low-code aspects; this was a necessary move for them in the face of the current market. Their challenge will be keeping the low-code tooling from getting in the way of the developers, and keeping the technical details from getting in the way of the business people.
No technical conference today is complete without at least one slide on AI, and Jakob did not disappoint. He walked through how they see AI as it applies to process orchestration: predictive AI (e.g., process mining and decisioning), generative AI (e.g., form generator from simple language), and assistive AI (e.g., knowledge worker helper).
He described their connectors marketplace, which includes connectors created by Camunda as well as curated connectors from their partners. Connectors are essential for integration, but their roadmap also includes process templates, internal marketplaces within an organization, and entire industry solutions and applications. This is an ambitious undertaking that a lot of vendors have done badly, and I’ll be very interested in seeing how this develops.
He finished up with some larger architecture issues: cloud support, security and compliance, multi-tenancy and how this allows them to support organizations both big and small. Their roadmap shows a lot of components that are targeted at broadening their reach while still supporting their long-term technical customers.
It’s now really the last day of BPM2023 in Utrecht, and we’re at the Utrecht Science Park campus for the Industry Day. The goal of Industry Day is to have researchers and practitioners come together to discuss issues of overlap and integration. Jan vom Brocke gave the keynote “Generating Business Value with Business Process Management (BPM) – How to Engage with Universities to Continuously Develop BPM Capabilities”. I don’t think that I’ve seen Jan since we were both presenting at a series of seminars in Brazil ten years ago. His keynote was about how everything is a process (bad or good), and how we can leverage that to understand and improve processes with process management. This doesn’t mean that we want to draw a process model for everything and force it into a standardized way of running, but that we need to understand all types of processes and modes of operation. His work at ERCIS is encouraging the convergence of research and practice, which means (in part) bringing together researchers and practitioners in forums like today’s Industry Day, but also in longer-running initiatives.
He discussed the “BPM billboard”, which shows how BPM can deliver significant value to organizations through real-world experience, academic discourse and in-depth case studies. Many businesses — particularly business executives — aren’t interested in process models or the technical details of process improvement methodologies, but rather in strategy in their own business context: how BPM can be brought to bear on solving their strategic problems. This requires the identification or development of process-centric capabilities within the organization, including alignment, governance, methods, technology, people and culture. The strategic issues can then be turned into actionable projects, and those projects into measurable results.
He moved on to talk about the BPM context matrix, with a focus on how to make people love the BPM initiative. This requires recognizing the diversity in processes, and also the diversity in methods and intentions that should be applied to processes. He showed a chart of two process dimensions — frequency and variability — creating four distinct clusters of process types, then used the BPM billboard to map specific approaches onto each cluster. Developing more appropriate approaches for the specific business context then allows the organizations involved to understand how BPM can bring value, and to fully buy in to the initiatives.
His third topic was on establishing the value of process mining, or how to turn data into value. Many companies are interested in process mining and may have started to work on some projects in their innovation areas, but it’s a challenge for many of them to actually demonstrate the value. Process mining research tends to focus on the technical aspects, but it needs to expand to the other aspects: how it impacts individuals, groups and high-level value chains.
His conclusion, which I completely agree with, is that we need to have both research and practice involved in order to move BPM forward. Practice informs research, and research supports practice: a dance that involves both equally.
Following the keynote, I was on a panel with Jan, along with Jasper van Hattem from Apolix and Jason Dietz of Tesco. Lots of good conversation about BPM in practice, some of the challenges, and how research can better support practice.
The rest of the day was dedicated to breakouts to work on industry challenges. Representatives from four different organizations (Air France KLM Martinair Cargo, Tesco, GEMMA, and Dutch Railways) presented their existing challenges in process management, then the attendees broke into groups to brainstorm solutions and directions before a closing session to present the findings.
I didn’t stick around for the breakouts: it had been a long week and my brain was full. Instead, I visited the Rietveld Schröderhuis with its amazing architectural design, and had a lovely long walk through Utrecht.
I did have a few people ask me throughout the week how many of these conferences I’ve been to (probably because they were too polite to ask WHY I’m here), and I just did a count of seven: 2008 in Milan, 2009 in Ulm, 2010 in Hoboken, 2011 in Clermont-Ferrand (where I gave a keynote in the industry track), 2012 in Tallinn, then a long break until 2019 in Vienna, then this year in Utrecht.
I moved to the BPM Forum session for another rapid-fire succession of 15-minute presentations, a similar format to yesterday’s Journal First session. No detailed notes in such short presentations but I captured a few photos as things progressed. So many great research ideas!
Conversational Process Modelling: State of the Art, Applications, and Implications in Practice (Nataliia Klievtsova, Janik-Vasily Benzin, Timotheus Kampik, Juergen Mangler and Stefanie Rinderle-Ma), presented by Nataliia Klievtsova.
Large Language Models for Business Process Management: Opportunities and Challenges (Maxim Vidgof, Stefan Bachhofner and Jan Mendling), presented by Maxim Vidgof.
Towards a Theory on Process Automation Effects (Hoang Vu, Jennifer Haase, Henrik Leopold and Jan Mendling), presented by Hoang Vu.
Process Mining and the Transformation of Management Accounting: A Maturity Model for a Holistic Process Performance Measurement System, presented by Simon Wahrstoetter.
Business Process Management Maturity and Process Performance – A Longitudinal Study (Arjen Maris, Guido Ongena and Pascal Ravesteijn), presented by Arjen Maris.
From Automatic Workaround Detection to Process Improvement: A Case Study (Nesi Outmazgin, Wouter van der Waal, Iris Beerepoot, Irit Hadar, Inge van de Weerd and Pnina Soffer), presented by Pnina Soffer.
Detecting Weasels at Work: A Theory-driven Behavioural Process Mining Approach (Michael Leyer, Arthur H. M. ter Hofstede and Rehan Syed), presented by Michael Leyer.
It’s the last day of the main BPM2023 conference in Utrecht; tomorrow is the Industry Day where I will be speaking on a panel (although I would really like to talk to next year’s organizers about having a concurrent industry track rather than a separate day after many of the researchers and academics have departed). This morning, I attended the keynote by Matthias Weidlich of Humboldt-Universität zu Berlin on database systems and BPM.
He covered some of the history of database systems and process software, then the last 20 years of iBPMS, where workflow software was expanded to include many other integrated technologies. At some point, some researchers and industry practitioners started to realize that data and process are really two sides of the same coin: you can’t do process analytics or mining without a deep understanding of the data, for example, nor can you execute processes and their decisions without data. Processes have both metadata (e.g., state) and business data (process instance payload), and also link to other sources of data during execution. This data is particularly important as processes become more complex, where there may be multiple interacting processes within a single transaction.
He highlighted some of the opportunities for a tighter integration of process and data during execution, including the potential to use a database for event-based processing rather than the equivalent functionality within a process management system. One interesting development is the creation of query languages specifically for processes, both for mining and execution. Examining how existing query models can be used may allow some of the query work to be pushed down to a database system that is optimized for that functionality.
He finished by stating that data is everywhere in process management, and we should embrace database technology in process analysis and implementation. I’m of two minds about this: we don’t want to be spending a lot of time making one type of system perform an activity that is easier/better done in a different type, but we also don’t want to go down the rathole of just building our own process management system in a database engine. And yes, I’ve seen this done in practice, and it is always a terrible mistake. However, using a database to recognize and trigger events that are then managed in a process management system, or using a standardized query engine for process queries, falls into the sweet spot of using both types of systems for what they do best in an integrated fashion.
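As a small, concrete example of that sweet spot, consider pushing a process-mining-style query down to the database rather than replaying the event log in application code. This sketch (my own illustration with an assumed event log layout, not anything shown in the keynote) computes directly-follows frequencies with a single window-function query:

```python
# Sketch: let the database do a process-mining-style query. Computes
# directly-follows frequencies from a (case_id, activity, ts) event log
# using a window function. Requires SQLite >= 3.25 for LEAD().
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE event_log (case_id TEXT, activity TEXT, ts INTEGER);
INSERT INTO event_log VALUES
  ('c1','receive',1),('c1','check',2),('c1','approve',3),
  ('c2','receive',1),('c2','check',2),('c2','reject',3);
""")

rows = conn.execute("""
SELECT activity, next_activity, COUNT(*) AS freq
FROM (
  SELECT activity,
         LEAD(activity) OVER (PARTITION BY case_id ORDER BY ts) AS next_activity
  FROM event_log
)
WHERE next_activity IS NOT NULL
GROUP BY activity, next_activity
ORDER BY freq DESC
""").fetchall()

for a, b, n in rows:
    print(f"{a} -> {b}: {n}")   # receive -> check: 2, check -> approve: 1, ...
```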
Lots to think about, and good opportunities to see how database and process researchers and practitioners can work together towards “best of both worlds” solutions.
In the last session of the day, I attended another part of the RPA Forum, with two presentations.
The first presentation was “Is RPA Causing Process Knowledge Loss? Insights from RPA Experts” (Ishadi Mirispelakotuwa, Rehan Syed, Moe T. Wynn), presented by Moe Wynn. RPA has a lot of measurable benefits – efficiency, compliance, quality – but what about the “dark side” of RPA? Can it make organizations lose knowledge and control over their processes because people have been taken out of the loop? RPA is often quite brittle, and when (not if) it stops working, it’s possible that organizational amnesia has set in: no one remembers how the process works well enough to do it manually. The resulting process knowledge loss (PKL) can have a number of negative organizational impacts.
The study created a conceptual model for RPA-related PKL, and she walked us through the sets of human, organizational and process factors that may contribute to it. In layman’s terms: use it or lose it.
In my opinion, this is different from back-end or more technical automation (e.g., deploying a BPMS or creating APIs into enterprise system functionality) in that back-end automation is usually fully specified, rigorously coded and tested, and maintained as a part of the organization’s enterprise systems. Conversely, RPA is often created by the business areas directly and can be inherently brittle due to changes in the systems with which it interfaces. If an automated process goes down, there are likely service level agreements in place and IT steps in to get the system back online. If an RPA bot goes down, a person is expected to do the tasks manually that had been done by the bot, and there is less likely to be a robust SLA for getting the bot fixed and back online. Interesting discussion around this in the Q&A, although not part of the area of study for the paper as presented.
The second presentation was “A Business Model of Robotic Process Automation” (Helbig & Braun), presented by Eva Katarina Helbig of BurdaSolutions, an internal IT service provider for an international media group. Their work was based on a case study within their own organization, looking at establishing RPA as a driver of digitization and automation within a company, based on an iterative, holistic view of business models and using the Business Model Canvas as the analysis tool.
They interviewed several people across the organization, mostly in operational areas, to develop a more structured model for how to approach, develop and deploy RPA projects, starting with the value proposition and expanding out to identify the customers, resources and key activities.
That’s it for day two of the main BPM2023 conference, and we’re off later to the Spoorwegmuseum for the conference dinner and a tour of the railway museum.
After the keynote, I attended the Journal First session, which was a collection of eight 15-minute presentations of papers that have been accepted by relevant journals (in contrast to the regular research papers seen in other presentations). It was like the speed-dating of presentations and I didn’t take any specific notes, but did snap a few photos and linked to the papers where I could find them. Lots of interesting ideas, in small snippets.
The second day of the main conference kicked off with a keynote by Marta Kwiatkowska, Professor of Computer Science at Oxford, on AI and machine learning in BPM. She started with some background on AI and deep learning, and linked this to automated process model discovery (process mining), simulation, what-if analysis, predictions and automated decisions. She posed the question of whether we should be worried about the safety of AI decisions, or at least advance the formal methods for provable guarantees in machine learning, and the more challenging topic of formal verification for neural networks.
She has done significant research on robustness for neural networks and the development of provable guarantees, and offered some recent directions of these applications in BPM. She showed the basics of calculating and applying robustness guarantees for image and video classification, and also for text classification/replacement. In the BPM world, she discussed using language-type prediction models for event logs, evaluating the robustness of decision functions to causal interventions, and the concept of reinforcement learning for teaching agents how to choose an action.
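Her material went much deeper than I can cover here, but the flavor of the simplest kind of provable guarantee can be sketched. Interval bound propagation (one basic technique in this space, and my own toy illustration rather than her specific methods) pushes a whole L-infinity ball of perturbed inputs through a ReLU network and checks whether the prediction could possibly flip anywhere in that ball:

```python
# Toy interval bound propagation (IBP): certify that a small ReLU
# network's prediction cannot change for any perturbation of x within
# an L-infinity ball of radius eps. Weights here are random stand-ins.
import numpy as np

def forward(x, layers):
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0)                     # ReLU on hidden layers
    return x

def certify(x, eps, layers):
    pred = int(np.argmax(forward(x, layers)))        # clean prediction
    lo, hi = x - eps, x + eps                        # input interval
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)  # interval arithmetic
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    # Robust iff the predicted logit's lower bound beats every other
    # logit's upper bound over the entire perturbation ball.
    return bool(lo[pred] > np.delete(hi, pred).max())

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),
          (rng.normal(size=(3, 8)), np.zeros(3))]
x = rng.normal(size=4)
print(certify(x, eps=0.01, layers=layers))           # True => provably robust
```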
As expected, much of the application of AI to process execution is to the decisions within processes – automating decisions, or providing “next best action” recommendations to human actors at a particular process activity. Safety assurances and accountability/explainability are particularly important in these scenarios.
Given the popularity of AI in general, a very timely look at how it can be applied to BPM in ways that maintain robustness and correctness.
In the afternoon breakouts, I attended the RPA (robotic process automation) forum for three presentations.
The first presentation was “What Are You Gazing At? An Approach to Use Eye-tracking for Robotic Process Automation”, presented by Antonio Martínez-Rojas. RPA typically includes a training agent that captures what and where a human operator is typing based on UI logs, and uses that to create the script of actions that should be executed when that task is automated using the RPA “bot” without the person being involved – a type of process mining but based on UI event logs. In this presentation, we heard about using eye tracking — what the person is looking at and focusing on — during the training phase to understand where they are looking for information. This is especially interesting in less structured environments such as reading a letter or email, where the information may be buried in non-relevant text, and it’s difficult to filter out the relevant information. Unlike the UI event log methods, this can find what the user is focusing on while they are working, which may not be the same things on the screen that they are clicking on – an important distinction.
The second presentation was “Accelerating The Support of Conversational Interfaces For RPAs Through APIs”, presented by Yara Rizk. She presented the problem that many business people could be better supported through easier access to all types of APIs, including unattended RPA bots, and proposed a chatbot interface to APIs. The chatbot’s intents can be bootstrapped by automatically interrogating the OpenAPI specifications, with the optional addition of phrases from people, to create natural language sentences: the intent of each action is derived from the API endpoint name and description, plus sample sentences provided by people. The sentences are then analyzed and filtered, typically with some involvement from human experts, and used to train the intent recognition models required to drive a chatbot interface.
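The core of the idea fits in a few lines: walk the OpenAPI spec, and pair each operation with its summary and description text as seed utterances for the intent model. This sketch is my own simplification of the concept; the presented pipeline adds the analysis, filtering and expert curation on top:

```python
# Sketch: mine candidate intent-training utterances from an OpenAPI
# spec. Uses only standard OpenAPI fields (paths -> method -> operation
# with operationId/summary/description); the sample spec is made up.
import json

def utterances_from_spec(spec: dict) -> list[tuple[str, str]]:
    pairs = []                                   # (intent, candidate sentence)
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            intent = op.get("operationId", f"{method} {path}")
            for text in (op.get("summary"), op.get("description")):
                if text:
                    pairs.append((intent, text))
    return pairs

spec = json.loads("""{
  "paths": {
    "/invoices/{id}/approve": {
      "post": {
        "operationId": "approveInvoice",
        "summary": "Approve a pending invoice"
      }
    }
  }
}""")
print(utterances_from_spec(spec))   # [('approveInvoice', 'Approve a pending invoice')]
```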
The last presentation in this session was “Migrating from RPA to Backend Automation: An Exploratory Study”, presented by Andre Strothmann. He discussed how RPA robots need to be designed and prioritized so that they can be easily replaceable, with the goal to move to back-end automation as soon as it is available. I’ve written and presented many times about how RPA is a bridging technology, and most of it will go away in the 5-10 year horizon, so I’m pretty happy to see this presented in a more rigorous way than my usual hand-waving. He discussed the analysis of their interview data that resulted in some fundamental design requirements for RPA bots, design guidelines for the processes that orchestrate those bots, and migration considerations when moving from RPA bots to APIs. If you’re developing RPA bots now and understand that they are only a stopgap solution, you should be following this research.
We’ve started the breakout paper presentations and I’m in the session on design patterns and modeling. For these breakouts, I’ll mostly just offer a few notes, since it’s difficult to get an in-depth sense of the research in such a short time; I’ll provide the paper and author names in case you want to investigate further. Note that some of the images that I include are screenshots from the conference proceedings: although the same information was shown in the presentations, the screenshots are much more legible than my photos taken during the presentations.
The first paper is “Not Here, But There: Human Resource Allocation Patterns” (Kanika Goel, Tobias Fehrer, Maximilian Röglinger, and Moe Thandar Wynn), presented by Tobias Fehrer. Patterns help to document BPM best practices, and they are creating a set of patterns specifically for human resource allocation within processes. They did a broad literature review and analysis to distill out 15 patterns, then evaluated and refined these through interviews with process improvement specialists to determine usefulness and pervasiveness. The resulting patterns fall into five categories: capability (expertise, role, preference), utilization (workload, execution constraints), reorganization (empower individual workers to make decisions to avoid bottlenecks), productivity (efficiency/quality based on historical data), and collaboration (based on historical interactions within teams or with external resources). This is a really important topic in human tasks within processes: just giving the same work to the same person/role all the time isn’t necessarily the best way to go about it. Their paper summarizes the patterns and their usefulness and pervasiveness measures, and also considers human factors such as the well-being and “happiness” of the process participants, and identifying opportunities for upskilling. Although he said explicitly that this is intended for a priori process design, there’s likely knowledge that can also be applied to dynamic runtime resource allocation.
The second presentation was “POWL: Partially Ordered Workflow Language” (Humam Kourani and Sebastiaan van Zelst), presented by Humam Kourani. He introduced their new modeling language, POWL, that allows for better discovery and representation of partial orders, that is, where some activities have a strict order, while others may happen in any order. This is fairly typical in semi-structured case management, where there can be a combination of sets of tasks that can be performed in any order plus some predefined process segments.
The third presentation was “Benevolent Business Processes – Design Guidelines Beyond Transactional Value” (Michael Rosemann, Wasana Bandara, Nadine Ostern, and Marleen Voss), presented by Michael Rosemann. Benevolent processes consider the needs of the customer as being as important as (or even more important than) the needs of the “provider”, that is, the organization that owns the process. BPM has historically been about improving efficiency, but many are looking at other metrics such as customer satisfaction. In my own writing and presentations, I make an explicit link between customer satisfaction and high-level revenue/cost metrics, and the concept of benevolent processes fits well with that. Benevolence goes beyond customer-centric process design to provide an immediate, unexpected and optional benefit to the recipient. A thought-provoking view on designing processes that will create fiercely loyal customers.
The final presentation in this session was “Action-Evolution Petri Nets: a Framework for Modeling and Solving Dynamic Task Assignment Problems” (Riccardo Lo Bianco, Remco Dijkman, Wim Nuijten, and Willem Van Jaarsveld), presented by Riccardo Lo Bianco. Petri nets have no mechanisms for calculating assignment decisions, so their work looks at how to model task assignment in a way that attempts to optimize that assignment. For example, if there are two types of tasks and two resources, where one resource can only perform one type of task and the other resource can perform either type, how is the work best assigned? A standard approach would just randomly assign tasks to resources, filtered by resource capability, but that may produce poor results depending on the composition of the tasks waiting in the queue. They have developed and shared a framework for modeling and solving dynamic task assignment problems.
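To make the intuition concrete, here’s a toy simulation of that two-task-type, two-resource example (my own illustration with made-up numbers, not code from the paper). A capability-filtered random policy lets the flexible resource burn time on type-A tasks while type-B work, which only it can do, piles up; a simple rule that reserves the flexible resource for B keeps the backlog down:

```python
# Toy illustration: r1 can only do type-A tasks, r2 can do A or B.
# Compare random (capability-filtered) assignment against a rule that
# reserves the flexible resource r2 for B whenever B work is waiting.
import random

def simulate(policy: str, steps: int = 20_000, seed: int = 1):
    rng = random.Random(seed)
    queue, done = [], 0
    for _ in range(steps):
        queue += [rng.choice("AB"), rng.choice("AB")]      # two arrivals/step
        for resource, can_do in (("r1", "A"), ("r2", "AB")):
            eligible = [t for t in queue if t in can_do]   # capability filter
            if not eligible:
                continue                                   # resource idles
            if policy == "random":
                task = rng.choice(eligible)
            else:  # "reserve": r2 takes a B first if any B is waiting
                task = "B" if (resource == "r2" and "B" in eligible) else eligible[0]
            queue.remove(task)
            done += 1
    return {"completed": done, "backlog": len(queue)}

print("random: ", simulate("random"))    # typically a much larger backlog
print("reserve:", simulate("reserve"))
```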
Good start to the breakout sessions, and excellent insights on some difficult process modeling research problems.