BPM2023 Industry Day Keynote and Conference Wrapup

It’s now really the last day of BPM2023 in Utrecht, and we’re at the Utrecht Science Park campus for the Industry Day. The goal of Industry Day is to have researchers and practitioners come together to discuss issues of overlap and integration. Jan vom Brocke gave the keynote “Generating Business Value with Business Process Management (BPM) – How to Engage with Universities to Continuously Develop BPM Capabilities”. I don’t think that I’ve seen Jan since we were both presenting at a series of seminars in Brazil ten years ago. His keynote was about how everything is a process (bad or good), but we need to consider how to leverage the opportunity to understand and improve processes with process management. This doesn’t mean that we want to draw a process model for everything and force it into a standardized way of running, but that we need to understand all types of processes and modes of operation. His work at ERCIS is encouraging the convergence of research and practice, which means (in part) bringing together the researchers and the practitioners in forums like today’s industry day, but also in more long-running initiatives.

He discussed the “BPM billboard” which shows how BPM can deliver significant value to organizations through real-world experience, academic discourse and in-depth case studies. Many businesses — particularly business executives — aren’t interested in process models or technical details of process improvement methodologies, but rather in strategy in their own business context: how can BPM be brought to bear on solving their strategic problems. This requires the identification or development of process-centric capabilities within the organization, including alignment, governance, methods, technology, people and culture. The strategic issues can then be translated into actionable projects, and the results of those projects demonstrate the value of BPM.

He moved on to talk about the BPM context matrix, with a focus on how to make people love the BPM initiative. This requires recognizing the diversity in processes, and also the diversity in methods and intentions that should be applied to processes. He showed a chart of two process dimensions — frequency and variability — creating four distinct clusters of process types, then used the BPM billboard to map each cluster onto specific approaches. Developing more appropriate approaches in the specific business context then allows the organizations involved to understand how BPM can bring value, and fully buy in to the initiatives.

His third topic was on establishing the value of process mining, or how to turn data into value. Many companies are interested in process mining and may have started some projects in their innovation areas, but it’s a challenge for many of them to actually demonstrate the value. Process mining research tends to focus on the technical aspects, but there needs to be expansion into the other aspects: how it impacts individuals, groups and high-level value chains.

His conclusion, which I completely agree with, is that we need to have both research and practice involved in order to move BPM forward. Practice informs research, and research supports practice: a dance that involves both equally.

Following the keynote, I was on a panel with Jan in addition to Jasper van Hattem from Apolix and Jason Dietz of Tesco. Lots of good conversation about BPM in practice, some of the challenges, and how research can better support practice.

The rest of the day was dedicated to breakouts to work on industry challenges. Representatives from four different organizations (Air France KLM Martinair Cargo, Tesco, GEMMA, and Dutch Railways) presented their existing challenges in process management, then the attendees joined into groups to brainstorm solutions and directions before a closing session to present the findings.

I didn’t stick around for the breakouts, it’s been a long week and my brain was full. Instead, I visited Rietveld Schröderhuis with its amazing architectural design and had a lovely long walk through Utrecht.

I did have a few people ask me throughout the week how many of these conferences that I’ve been to (probably because they were too polite to ask WHY I’m here), and I just did a count of seven: 2008 in Milan, 2009 in Ulm, 2010 in Hoboken, 2011 in Clermont-Ferrand (where I gave a keynote in the industry track), 2012 in Tallinn, then a long break until 2019 in Vienna, then this year in Utrecht.

BPM2023 Day 3 BPM Forum: come for the chatbots, stay for the weasels

I moved to the BPM Forum session for another rapid-fire succession of 15-minute presentations, a similar format to yesterday’s Journal First session. No detailed notes in such short presentations but I captured a few photos as things progressed. So many great research ideas!

Conversational Process Modelling: State of the Art, Applications, and Implications in Practice (Nataliia Klievtsova, Janik-Vasily Benzin, Timotheus Kampik, Juergen Mangler and Stefanie Rinderle-Ma), presented by Nataliia Klievtsova.

Large Language Models for Business Process Management: Opportunities and Challenges (Maxim Vidgof, Stefan Bachhofner and Jan Mendling), presented by Maxim Vidgof.

Towards a Theory on Process Automation Effects (Hoang Vu, Jennifer Haase, Henrik Leopold and Jan Mendling), presented by Hoang Vu.

Process Mining and the Transformation of Management Accounting: A Maturity Model for a Holistic Process Performance Measurement System, presented by Simon Wahrstoetter.

Business Process Management Maturity and Process Performance – A Longitudinal Study (Arjen Maris, Guido Ongena and Pascal Ravesteijn), presented by Arjen Maris.

From Automatic Workaround Detection to Process Improvement: A Case Study (Nesi Outmazgin, Wouter van der Waal, Iris Beerepoot, Irit Hadar, Inge van de Weerd and Pnina Soffer), presented by Pnina Soffer.

Detecting Weasels at Work: A Theory-driven Behavioural Process Mining Approach (Michael Leyer, Arthur H. M. ter Hofstede and Rehan Syed), presented by Michael Leyer.

BPM2023 Day 3 Keynote: Data Meets Process

It’s the last day of the main BPM2023 conference in Utrecht; tomorrow is the Industry Day where I will be speaking on a panel (although I would really like to talk to next year’s organizers about having a concurrent industry track rather than a separate day after many of the researchers and academics have departed). This morning, I attended the keynote by Matthias Weidlich of Humboldt-Universität zu Berlin on database systems and BPM.

He covered some of the history of database systems and process software, then the last 20 years of iBPMS where workflow software was expanded to include many other integrated technologies. At some point, some researchers and industry practitioners started to realize that data and process are really two sides of the same coin: you can’t do process analytics or mining without a deep understanding of the data, for example, nor can you execute processes and their decisions without data. Processes have both metadata (e.g., state) and business data (process instance payload), and also link to other sources of data during execution. This data is particularly important as processes become more complex, where there may be multiple interacting processes within a single transaction.

He highlighted some of the opportunities for a tighter integration of process and data during execution, including the potential to use a database for event-based processing rather than the equivalent functionality within a process management system. One interesting development is the creation of query languages specifically for processes, both for mining and execution. Examining how existing query models can be used may allow some of the query work to be pushed down to a database system that is optimized for that functionality.
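As a rough illustration of what pushing a process query down to a database engine could look like, here’s a minimal sketch using SQLite; the event_log table, its columns and the data are all invented for the example, not taken from the talk.

```python
import sqlite3

# Hypothetical event log table: one row per event (case_id, activity, timestamp).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event_log (case_id TEXT, activity TEXT, ts TEXT)")
conn.executemany(
    "INSERT INTO event_log VALUES (?, ?, ?)",
    [
        ("c1", "Receive order", "2023-09-01 09:00"),
        ("c1", "Approve",       "2023-09-01 11:30"),
        ("c1", "Ship",          "2023-09-02 10:00"),
        ("c2", "Receive order", "2023-09-01 10:00"),
        ("c2", "Ship",          "2023-09-01 16:00"),
    ],
)

# A "process query" handled entirely by the database engine:
# cycle time per case, as the span between its first and last event.
cycle_times = conn.execute(
    """
    SELECT case_id,
           ROUND((julianday(MAX(ts)) - julianday(MIN(ts))) * 24, 1) AS hours
    FROM event_log
    GROUP BY case_id
    ORDER BY case_id
    """
).fetchall()

print(cycle_times)  # [('c1', 25.0), ('c2', 6.0)]
```

The grouping, sorting and date arithmetic here are exactly the kind of work a database engine is already optimized for, which is the appeal of not reimplementing it inside a process management system.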

He finished by stating that data is everywhere in process management, and we should embrace database technology in process analysis and implementation. I’m of two minds about this: we don’t want to be spending a lot of time making one type of system perform an activity that is easier/better done in a different type, but we also don’t want to go down the rathole of just building our own process management system in a database engine. And yes, I’ve seen this done in practice, and it is always a terrible mistake. However, using a database to recognize and trigger events that are then managed in a process management system, or using a standardized query engine for process queries, falls into the sweet spot of using both types of systems for what they do best in an integrated fashion.

Lots to think about, and good opportunities to see how database and process researchers and practitioners can work together towards “best of both worlds” solutions.

BPM2023 Day 2: RPA Forum

In the last session of the day, I attended another part of the RPA Forum, with two presentations. 

The first presentation was “Is RPA Causing Process Knowledge Loss? Insights from RPA Experts” (Ishadi Mirispelakotuwa, Rehan Syed, Moe T. Wynn), presented by Moe Wynn. RPA has a lot of measurable benefits – efficiency, compliance, quality – but what about the “dark side” of RPA? Can it make organizations lose knowledge and control over their processes because people have been taken out of the loop? RPA is often quite brittle, and when (not if) it stops working, it’s possible that organizational amnesia has set in: no one remembers how the process works well enough to do it manually. The resulting process knowledge loss (PKL) can have a number of negative organizational impacts.

The study created a conceptual model for RPA-related PKL, and she walked us through the sets of human, organizational and process factors that may contribute. In layman’s terms, use it or lose it.

In my opinion, this is different from back-end or more technical automation (e.g., deploying a BPMS or creating APIs into enterprise system functionality) in that back-end automation is usually fully specified, rigorously coded and tested, and maintained as a part of the organization’s enterprise systems. Conversely, RPA is often created by the business areas directly and can be inherently brittle due to changes in the systems with which it interfaces. If an automated process goes down, there are likely service level agreements in place and IT steps in to get the system back online. If an RPA bot goes down, a person is expected to manually do the tasks that had been done by the bot, and there is less likely to be a robust SLA for getting the bot fixed and back online. There was interesting discussion around this in the Q&A, although not part of the area of study for the paper as presented.

The second presentation was “A Business Model of Robotic Process Automation” (Helbig & Braun), presented by Eva Katarina Helbig of BurdaSolutions, an internal IT service provider for an international media group. Their work was based on a case study within their own organization, looking at establishing RPA as a driver of digitization and automation within a company based on an iterative, holistic view of business models with the Business Model Canvas as analysis tool.

They interviewed several people across the organization, mostly in operational areas, to develop a more structured model for how to approach, develop and deploy RPA projects, starting with the value proposition and expanding out to identify the customers, resources and key activities.

That’s it for day two of the main BPM2023 conference, and we’re off later to the Spoorwegmuseum for the conference dinner and a tour of the railway museum.

BPM2023 Day 2: Journal First Breakout

After the keynote, I attended the Journal First session, which was a collection of eight 15-minute presentations of papers that have been accepted by relevant journals (in contrast to the regular research papers seen in other presentations). It was like the speed-dating of presentations and I didn’t take any specific notes, but did snap a few photos and linked to the papers where I could find them. Lots of interesting ideas, in small snippets.

The biggest business process management problems to solve before we die (Iris Beerepoot et al.), presented by Iris Beerepoot.

Methods that bridge business models and business processes: a synthesis of the literature (Paola Lara Machado, Montijn van de Ven, Banu Aysolmaz, Alexia Athanasopoulou, Baris Ozkan and Oktay Turetken), presented by Banu Aysolmaz.

Conformance checking of process event streams with constraints on data retention (Rashid Zaman, Marwan Hassani and Boudewijn Van Dongen), presented by Rashid Zaman.

ProcessGAN: Supporting the creation of business process improvement ideas through generative machine learning (Christopher van Dun, Linda Moder, Wolfgang Kratsch and Maximilian Röglinger), presented by Wolfgang Kratsch.

Quantifying chatbots’ ability to learn business processes (Christoph Kecht, Andreas Egger, Wolfgang Kratsch and Maximilian Roeglinger), also presented by Wolfgang Kratsch.

Extracting Reusable Fragments From Data-Centric Process Variants (Rik Eshuis), presented by Rik Eshuis.

Security and privacy concerns in cloud-based scientific and business workflows: A systematic review (Nafiseh Soveizi, Fatih Turkmen and Dimka Karastoyanova), presented by Nafiseh Soveizi.

Process fragments discovery from emails: Functional, data and behavioral perspectives discovery (Marwa Elleuch, Oumaima Alaoui Ismaili, Nassim Laga and Walid Gaaloul), presented by Marwa Elleuch. I found this paper really fascinating since so many business processes exist only in emails and spreadsheets, not enterprise systems or BPMS.

BPM2023 Day 2 Keynote: AI in Processes

The second day of the main conference kicked off with a keynote by Marta Kwiatkowska, Professor of Computer Science at Oxford, on AI and machine learning in BPM. She started with some background on AI and deep learning, and linked this to automated process model discovery (process mining), simulation, what-if analysis, predictions and automated decisions. She posed the question of whether we should be worried about the safety of AI decisions, or at least advance the formal methods for provable guarantees in machine learning, and the more challenging topic of formal verification for neural networks.

She has done significant research on robustness for neural networks and the development of provable guarantees, and offered some recent directions of these applications in BPM. She showed the basics of calculating and applying robustness guarantees for image and video classification, and also for text classification/replacement. In the BPM world, she discussed using language-type prediction models for event logs, evaluating the robustness of decision functions to causal interventions, and the concept of reinforcement learning for teaching agents how to choose an action.

As expected, much of the application of AI to process execution is to the decisions within processes – automating decisions, or providing “next best action” recommendations to human actors at a particular process activity. Safety assurances and accountability/explainability are particularly important in these scenarios.

Given the popularity of AI in general, a very timely look at how it can be applied to BPM in ways that maintain robustness and correctness.

BPM2023 Day 1: RPA Forum

In the afternoon breakouts, I attended the RPA (robotic process automation) forum for three presentations.

The first presentation was “What Are You Gazing At? An Approach to Use Eye-tracking for Robotic Process Automation”, presented by Antonio Martínez-Rojas. RPA typically includes a training agent that captures what and where a human operator is typing based on UI logs, and uses that to create the script of actions that should be executed when that task is automated using the RPA “bot” without the person being involved – a type of process mining but based on UI event logs. In this presentation, we heard about using eye tracking — what the person is looking at and focusing on — during the training phase to understand where they are looking for information. This is especially interesting in less structured environments such as reading a letter or email, where the information may be buried in non-relevant text, and it’s difficult to extract the relevant information. Unlike the UI event log methods, this can find what the user is focusing on while they are working, which may not be the same things on the screen that they are clicking on – an important distinction.

The second presentation was “Accelerating The Support of Conversational Interfaces For RPAs Through APIs”, presented by Yara Rizk. She presented the problem that many business people could be better supported through easier access to all types of APIs, including unattended RPA bots, and proposed a chatbot interface to APIs. The chatbot’s intents can be extracted by automatically interrogating the OpenAPI specifications, with some optional addition of phrases from people, to create natural language sentences: the intent of each action is derived from the API endpoint name and description, plus sample sentences provided by the people. Then the sentences are analyzed and filtered, typically with some involvement from human experts, and used to train the intent recognition models required to drive a chatbot interface.
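To make the extraction step concrete, here’s a minimal sketch of deriving intents and candidate training sentences from an OpenAPI fragment; the endpoints, operation IDs and summaries are invented for the example, and the real approach presumably does far more analysis and filtering.

```python
# Hypothetical OpenAPI fragment; all names and descriptions are invented.
openapi_spec = {
    "paths": {
        "/invoices/{id}/approve": {
            "post": {
                "operationId": "approveInvoice",
                "summary": "Approve a pending invoice",
            }
        },
        "/orders": {
            "get": {
                "operationId": "listOrders",
                "summary": "List all open orders",
            }
        },
    }
}

def extract_intents(spec, extra_phrases=None):
    """Turn each API operation into an intent with candidate training sentences."""
    extra_phrases = extra_phrases or {}
    intents = {}
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            intent = op["operationId"]
            # Seed sentences from the operation summary, then add any
            # optional human-provided phrases for this intent.
            sentences = [op["summary"].lower()]
            sentences += extra_phrases.get(intent, [])
            intents[intent] = sentences
    return intents

intents = extract_intents(
    openapi_spec,
    extra_phrases={"approveInvoice": ["please sign off on this invoice"]},
)
```

The resulting sentence lists would then be filtered and reviewed before training the intent recognition model.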

The last presentation in this session was “Migrating from RPA to Backend Automation: An Exploratory Study”, presented by Andre Strothmann. He discussed how RPA robots need to be designed and prioritized so that they can be easily replaced, with the goal of moving to back-end automation as soon as it is available. I’ve written and presented many times about how RPA is a bridging technology, and most of it will go away in the 5-10 year horizon, so I’m pretty happy to see this presented in a more rigorous way than my usual hand-waving. He discussed the analysis of their interview data that resulted in some fundamental design requirements for RPA bots, design guidelines for the processes that orchestrate those bots, and migration considerations when moving from RPA bots to APIs. If you’re developing RPA bots now and understand that they are only a stopgap solution, you should be following this research.

BPM2023 Day 1: Design Patterns and Modeling

We’ve started the breakout paper presentations and I’m in the session on design patterns and modeling. For these breakouts, I’ll mostly just offer a few notes since it’s difficult to get an in-depth sense in such a short time. I’ll provide the paper and author names in case you want to investigate further. Note that some of the images that I include are screenshots from the conference proceedings: although the same information was shown in the presentations, the screenshots are much more legible than the photos I took during the presentations.

The first paper is “Not Here, But There: Human Resource Allocation Patterns” (Kanika Goel, Tobias Fehrer, Maximilian Röglinger, and Moe Thandar Wynn), presented by Tobias Fehrer. Patterns help to document BPM best practices, and they are creating a set of patterns specifically for human resource allocation within processes. They did a broad literature review and analysis to distill out 15 patterns, then evaluated and refined these through interviews with process improvement specialists to determine usefulness and pervasiveness. The resulting patterns fall into five categories: capability (expertise, role, preference), utilization (workload, execution constraints), reorganization (empower individual workers to make decisions to avoid bottlenecks), productivity (efficiency/quality based on historical data), and collaboration (based on historical interactions within teams or with external resources). This is a really important topic in human tasks within processes: just giving the same work to the same person/role all the time isn’t necessarily the best way to go about it. Their paper summarizes the patterns and their usefulness and pervasiveness measures, and also considers human factors such as the well-being and “happiness” of the process participants, and identifying opportunities for upskilling. Although he said explicitly that this is intended for a priori process design, there’s likely knowledge that can also be applied to dynamic runtime resource allocation.

The second presentation was “POWL: Partially Ordered Workflow Language” (Humam Kourani and Sebastiaan van Zelst), presented by Humam Kourani. He introduced their new modeling language, POWL, which allows for better discovery and representation of partial orders, that is, where some activities have a strict order while others may happen in any order. This is fairly typical in semi-structured case management, where there can be a combination of sets of tasks that can be performed in any order plus some predefined process segments.
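To illustrate the idea of a partial order over activities (this is just a sketch of the concept, not actual POWL notation, and the activity names are invented):

```python
# Ordered pairs: each (a, b) means a must occur before b.
edges = {("Receive", "Assess"), ("Assess", "Decide")}
# "Collect documents" and "Check credit" are unordered relative to each other,
# but both must happen after "Receive" and before "Decide".
edges |= {("Receive", "Collect documents"), ("Receive", "Check credit"),
          ("Collect documents", "Decide"), ("Check credit", "Decide")}

def respects_order(trace, edges):
    """True if every ordered pair of activities appears in the required order."""
    pos = {a: i for i, a in enumerate(trace)}
    return all(pos[a] < pos[b] for a, b in edges if a in pos and b in pos)

# The two document-gathering tasks may be interleaved freely...
t1 = ["Receive", "Check credit", "Collect documents", "Assess", "Decide"]
# ...but deciding before checking credit violates the partial order.
t2 = ["Receive", "Assess", "Decide", "Check credit", "Collect documents"]
```

Any pair of activities with no path between them in the order relation can execute in either sequence, which is exactly the flexibility a strictly sequential model can’t express.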

The third presentation was “Benevolent Business Processes – Design Guidelines Beyond Transactional Value” (Michael Rosemann, Wasana Bandara, Nadine Ostern, and Marleen Voss), presented by Michael Rosemann. Benevolent processes consider the needs of the customer as being as important as (or even more important than) the needs of the “provider”, that is, the organization that owns the process. BPM has historically been about improving efficiency, but many are looking at other metrics such as customer satisfaction. In my own writing and presentations, I make an explicit link between customer satisfaction and high-level revenue/cost metrics, and the concept of benevolent processes fits well with that. Benevolence goes beyond customer-centric process design to provide an immediate, unexpected and optional benefit to the recipient. A thought-provoking view on designing processes that will create fiercely loyal customers.

The final presentation in this session was “Action-Evolution Petri Nets: a Framework for Modeling and Solving Dynamic Task Assignment Problems” (Riccardo Lo Bianco, Remco Dijkman, Wim Nuijten, and Willem Van Jaarsveld), presented by Riccardo Lo Bianco. Petri nets have no mechanisms for calculating assignment decisions, so their work looks at how to model task assignment that attempts to optimize that assignment. For example, if there are two types of tasks and two resources, where one resource can only perform one type of task, and another resource can perform either type of task, how is the work best assigned? A standard assignment would just randomly assign tasks to resources, filtered by resource capability, but that may result in poor results depending on the composition of the tasks waiting in the queue. They have developed and shared a framework for modeling and solving dynamic task assignment problems.
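A toy version of that two-resource example, sketched in Python; the greedy rule here is my own illustration of why capability-aware assignment beats random assignment, not the paper’s Action-Evolution Petri net method.

```python
# r1 can only perform task type A; r2 can perform either A or B.
capabilities = {"r1": {"A"}, "r2": {"A", "B"}}

def assign(queue, capabilities):
    """Greedy assignment: match scarce-capability tasks first, and prefer
    the least flexible capable resource, keeping flexible resources free."""
    assignments = {}
    free = dict(capabilities)
    # Handle the tasks that fewer resources can perform before the rest.
    for task in sorted(queue, key=lambda t: sum(t in c for c in free.values())):
        capable = [r for r, caps in free.items() if task in caps]
        if capable:
            r = min(capable, key=lambda r: len(free[r]))
            assignments[r] = task
            del free[r]
    return assignments

# With one A and one B waiting, a naive assignment might give A to r2 and
# leave B unassignable; the greedy rule gives B to r2 and A to r1.
print(assign(["A", "B"], capabilities))  # {'r2': 'B', 'r1': 'A'}
```

Even this trivial heuristic shows the core issue: a random capability-filtered assignment can strand work in the queue, which is why the assignment decision itself is worth modeling and optimizing.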

Good start to the breakout sessions, and excellent insights on some difficult process modeling research problems.

BPM2023 Day 1 Keynote: Pfizer Vaccine Development Process Optimization

Following yesterday’s workshops, the main conference kicked off today with an introduction from the organizers, which included a diagram of the paper acceptance process that was mined from the actual activities in the paper submission and review platform — did anyone else find this funny? We’ve also moved from the beautiful, historic and non-air-conditioned university buildings to the artistic and somewhat cooler TivoliVredenburg concert hall.

We then had the opening keynote by Marc Kaptein, Medical Director at Pfizer. He was involved in vaccine development, among other activities at Pfizer, and provided some insights into how process optimization allowed them to manufacture billions of vaccine doses in record time. He is a medical doctor but also a Six Sigma black belt, which means he has a strong sense of how good processes matter.

He walked us through the COVID crisis from the initial thoughts that it could be isolated geographically (spoiler: it couldn’t), to the research that led to the development of mRNA type vaccines. He mentioned a number of the international researchers in different organizations who contributed along the way, and gave a great explanation of how mRNA vaccines work. He credited the CEO of Pfizer with going all in with whatever funding was required to develop a vaccine — “If we don’t do it, who will?” — and for refusing US government funding since the concomitant oversight/interference would have slowed them down. (Moderna took the funding and ended up releasing later than Pfizer.)

Once they had done their initial trials and proved the efficacy of the vaccine, they were then faced with the manufacturing process problem: how to get from one to one billion in the shortest period of time? Even before they had trialed the vaccine, they were readying the production facilities to ramp up for this volume. They also invested in ensuring the supply of the lipids and mRNA, including ways of decreasing the time required to create mRNA. They also optimized more of the mechanical aspects of production, such as speeding up the vial filling process, moving to 100% automated inspection, and improving the capping and labeling processes. They improved their production maintenance cycle from a month to five days. They expanded their freezer farm facilities, and ended up building their own dry ice manufacturing facility since their suppliers couldn’t keep up with their demands.

All of these improvements led to almost 50% reduction in production cycle time from 110 days down to 60 days, which allowed them to produce an amazing 1.2 billion doses in 2021 and 2 billion in 2022. Kaptein pointed out that this won’t be the last pandemic, and these improvements to the process and production speed will serve everyone in the future.

Disclaimer: I’ve had five vaccinations including AstraZeneca, Pfizer and Moderna, so I’m completely non-partisan about big pharma. 🙂

BPM2023 Utrecht Workshop on Business Process Optimization

11 years ago, Marlon Dumas from the University of Tartu chaired the BPM2012 conference in Tallinn, Estonia, which I was able to attend. We had met at a previous conference (maybe Milan?), and since then I’ve worked with his company Apromore to create a white paper on process mining. Today, he gave a keynote at the workshop here at BPM2023 on the status and perspectives of business process optimization.

He started with the origins of process optimization from 20+ years ago: start with discovery to create the as-is model, then development of the to-be model, testing of the new model, and eventually deployment. Adding simulation to the loop allows for some testing and predicted performance to feed back into the design prior to deployment. This type of process analysis had a lot of flaws due to some fundamentally flawed assumptions about the correctness of the process model and simulation parameters, the skills and behaviour of the process participants, and general resource management. Marlon (and many others) have endorsed these imperfect methods in the past, and he invited us to tear up his earlier book on BPM. 😆

Since then, he has been working on a better sort of simulation model based on discovery from event logs: think of it as using process mining as an automated generator for more complex simulation parameters rather than just the base process model. They have shared their work for other researchers to review and extend.
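As a simplified illustration of mining simulation parameters from an event log: the log and numbers below are invented, and real approaches fit full probability distributions and branching frequencies rather than just means.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Tiny invented event log: (case_id, activity, start, end).
fmt = "%Y-%m-%d %H:%M"
log = [
    ("c1", "Review",  "2023-09-01 09:00", "2023-09-01 09:30"),
    ("c2", "Review",  "2023-09-01 10:00", "2023-09-01 10:50"),
    ("c1", "Approve", "2023-09-01 11:00", "2023-09-01 11:10"),
    ("c2", "Approve", "2023-09-01 12:00", "2023-09-01 12:20"),
]

# Derive per-activity mean processing time (minutes) as a simulation parameter,
# instead of asking an analyst to guess it.
durations = defaultdict(list)
for case, activity, start, end in log:
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    durations[activity].append(delta.total_seconds() / 60)

params = {a: mean(d) for a, d in durations.items()}
print(params)  # {'Review': 40.0, 'Approve': 15.0}
```

The point is that the same event log that yields the base process model can also yield the simulation parameters, replacing hand-estimated inputs with observed behaviour.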

This has opened the door to more automated process optimization techniques, such as search-based optimization, which adds domain knowledge to the simulation model discovery to generate a set of possible process changes that can then be simulated and tested to determine the best improvement opportunities. Optimization, as he pointed out, is a multi-dimensional problem since we are always working towards the improvement of more than one performance indicator. Dimensions of improvement may include optimization of decision rules, flow, tasks and/or resources. They’ve done some additional work on an optimization engine that’s also shared on GitHub.

He moved on to talking about conversational process optimization, which makes search-based optimization just a step in a broader approach that puts a human expert in the loop to guide the exploration of the optimization space. In this approach, a conversational UI has an interactive discussion with a human expert, then combines that with the search-based optimization techniques, then presents that back to the expert for review and further conversation and optimization.

As the presentation finished and we were moving to questions, security kicked us out of our room for overcrowding, so we adjourned to the outdoor square. Lots of great discussion, with Marlon mentioning that the field of Operations Research is okay except that it’s the domain of a bunch of mathematicians, and urging us to cast off the shackles of process models. Also a good bit about the optimization of resource workload and allocation to maximize efficiency: people work best when they are “happy” (a proxy for “unstressed and productive”), which means having neither too much nor too little work, and the right mix of work. 

Marlon published his slide deck on Slideshare, which allowed me to steal a few screenshots rather than trying to photograph the live presentation.