BPM2023 Utrecht Workshop on BPM and Social Software

It’s been a minute! The last time I attended the BPM academic research conference was in the “before times”, back in 2019 in Vienna. This week, I’m in Utrecht for this year’s edition, and it’s lovely here – beautiful historic buildings and stroopwafels in the conference bag!

I’m starting with the workshop on BPM and social software. This was the first workshop that I attended on my first trip to the BPM conference, back in 2008 in Milan, and it was chaired by Rainer Schmidt then as well.

All three of the presentations looked at different aspects of how traditional structured process modeling and orchestration fall short of end-to-end process management, and how social (including social media) constructs can help. In the first presentation, processes are too unstructured for (easy) automation; in the second, there’s a need for better ways to feed input from process participants back into the design; and in the third, organizations can’t even figure out how to get started with BPM.

The first presenter in the workshop was Joklan Imelda Camelia Goni, accompanied by her supervisor Amy van Looy, on “Towards a Measurement Instrument for Assessing Capabilities when Innovating Less-Structured Business Processes”. Some of the background research for this work was a Delphi study that I participated in during 2021-2022, so it was interesting to see how her research is advancing. The focus is on capabilities within organizations: how capable certain people or departments are at recognizing the need for innovation and at creating innovations in (often manual) processes.

Next was Mehran Majidian Eidgahi on “Integrating Social Media and Business Process Management: Exploring the Role of AI Agents and the Benefits for Agility” (other paper contributors are Anne-Marie Barthe-Delanoë, Dominik Bork, Sina Namaki Araghi, Guillaume Mace-Ramete and Frédérick Bénaben). This looks at the problem of structured business process models that have been orchestrated/automated, but that need some degree of agility for process changes. He characterizes BPM agility in three stages – discovering, deciding and implementing – and sees that much of the work to date has focused on discovering (process mining) and implementing, but not as much on deciding (that is, analysis or design). Socializing BPM with the participants can bring their ideas and feedback into the process design, and they propose a social BPM platform for providing reactions, feedback and suggestions on processes. I’ve seen structures similar to this in some commercial BPM products, but one of the main issues is that the actual executing model is not how the participants envision it: it may be much more event-driven than a traditional flow model. He also presented some of their other research on bringing AI to the platform and framework, which provides a good overview of the different areas in which AI may be applied.

The last presentation in the workshop was by Sebastian Dunzer on “Design Principles for Using Business Process Management Systems” (other paper contributors are Willi Tang, Nico Höchstädter, Sandra Zilker and Martin Matzner). He looks at the “pre-BPM” problem of getting organizations to understand how they could use BPM to improve their operations: in his words, “practice knows of BPM, but it remains unclear how to get started”. This resonates with me, since much of my consulting over the years has included some aspect of explaining that link between operational business problems and the available technologies. They did an issue-tracking project with a medium-sized company, which let them ground the work in a practical application while generating research insights. Their research outcome was a set of design principles that link IT artifacts and users through functional relationships.

Many thanks to the conference chair, Hajo Reijers, for extending an invitation to me for the conference. I’ll be at more workshops later today, and the rest of the conference throughout the week.

150 episodes of the Process Pioneers podcast

The Process Pioneers podcast recently published their 150th episode, which is a significant milestone considering that most podcasts wither into inactivity pretty quickly. It’s hosted by Daniel Rayner, managing director of APAC for GBTEC, so I suppose that technically they sponsor it, but it’s really just a free-ranging discussion that doesn’t talk about their products (or at least didn’t when I was on it).

You can see my interview with Daniel on the podcast from last year; it was a lot of fun, and we delved into some interesting points.

DecisionCAMP 2019: DMN TCK, BPO with AI and rules, and business logic hidden in spreadsheets

Close Is Not Close Enough. Keith Swenson, Fujitsu

A few months ago at bpmNEXT, I saw Keith Swenson give an update on the DMN Technology Compatibility Kit, and we’re seeing a bit of a repeat of that presentation here at DecisionCAMP. The TCK defines a set of test cases (as DMN decision models, input data and expected results) that assure conformance to the specification, plus a sample runner application that will pass the models and data to the vendor’s engine and evaluate the results.

DMN TCK. From Keith Swenson’s presentation.

There are about 120 test models and 1600 test cases, currently supporting only DMN 1.2; these tests come from examining the specification as well as from cases seen in practice. It’s easy for a vendor to get involved in the TCK, both by running it against their engine and by submitting new test models and cases. You can see the vendors that have submitted their results; although many more vendors claim that they “have DMN”, their actual level of compatibility may be suspect.
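Conceptually, the runner’s job is a strict compare loop. Here’s a minimal Python sketch of that idea, assuming the vendor engine is wrapped in an evaluate(model, inputs) callable; the real TCK runner and its XML test-case format are more involved than this:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    model: str      # path to the DMN model file
    inputs: dict    # values for the model's input data nodes
    expected: dict  # expected values of the decision result nodes

def run_tck(cases, evaluate):
    """Run each case against a vendor engine via evaluate(model, inputs)
    and compare actual results to expected results."""
    passed, failures = 0, []
    for case in cases:
        actual = evaluate(case.model, case.inputs)
        if actual == case.expected:  # exact match: "close" is not close enough
            passed += 1
        else:
            failures.append((case, actual))
    return passed, failures

# Example with a stub engine that just formats a greeting:
cases = [TestCase("greet.dmn", {"name": "Ada"}, {"greeting": "Hello Ada"})]
print(run_tck(cases, lambda m, i: {"greeting": f"Hello {i['name']}"}))  # (1, [])
```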

The TCK committee is getting ready for DMN 1.3, and considering tests for modeling tools in addition to the current tests for the engine. He also floated the idea of a standardized API for DMN as a service, so that the calling application doesn’t need to know which engine it’s calling — possibly something that’s not going to be a big hit with vendors.
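No such standardized API exists yet, but to make the idea concrete, here’s a purely hypothetical sketch of what a vendor-neutral decision service call might look like; the endpoint, payload shape and field names are all invented for illustration:

```python
import requests

# Hypothetical vendor-neutral endpoint; not a real or standardized API.
DECISION_SERVICE = "https://decisions.example.com/dmn/v1/evaluate"

payload = {
    "model": "loan-approval",  # model identifier, assumed convention
    "inputs": {"creditScore": 710, "income": 85000},
}

response = requests.post(DECISION_SERVICE, json=payload, timeout=10)
response.raise_for_status()
print(response.json())  # e.g. {"results": {"approval": "APPROVED"}}
```

The appeal of the idea is that the caller only knows the model identifier and its inputs, so the engine behind the endpoint could be swapped without changing any calling code – which is also why vendors might not love it.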

Business innovation of BPO realized by Task Center and AI and Rule Engine. Yoshihito Nakayama, NTT DATA INTRAMART

Yoshihito Nakayama presented on the current challenges of BPO with respect to improving productivity, and how they are resolving this using AI and a rules engine to aggregate and assign human tasks from multiple systems to different team members. This removes the requirement to manually review and assign work, and also provides a dashboard for visualizing work in progress and future forecasts.

Intramart’s Task Center for aggregating and assigning work. From Yoshihito Nakayama’s presentation.

AI is used to predict and optimize task classification and assignment, based on the time required to complete each task and on individual workers’ skill levels and productivity. It is also used to forecast workload by task type and by individual worker.
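To illustrate the kind of logic involved, here’s a toy Python heuristic that routes a task to the available worker with the lowest predicted completion time; the field names and weighting are my own assumptions, not Intramart’s actual model:

```python
# Toy scoring heuristic: more skilled workers are predicted to finish faster.
def predicted_minutes(task, worker):
    base = task["avg_minutes"]                      # historical average for this task type
    skill = worker["skill"].get(task["type"], 0.5)  # proficiency in 0..1
    return base / (0.5 + skill)

def assign(task, workers):
    # Pick the available worker with the lowest predicted completion time.
    candidates = [w for w in workers if w["available"]]
    return min(candidates, key=lambda w: predicted_minutes(task, w))

workers = [
    {"name": "A", "skill": {"invoice": 0.9}, "available": True},
    {"name": "B", "skill": {"invoice": 0.4}, "available": True},
]
task = {"type": "invoice", "avg_minutes": 30}
print(assign(task, workers)["name"])  # -> A
```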

Their visualization dashboard shows drilldowns on current and past productivity, plus future forecasts. The simulation models for forecasting can be fine-tuned to optimize for cost, performance and other factors. It brings together work monitoring from all systems, including RPA processes. They’re also using process mining on a variety of systems to create a digital twin of the organization for better tracking and predictions, as well as other tools such as voice and image identification to recognize what tasks are being done that are not being recorded in any system logs.

They have a variety of case studies across industries, looking at automating non-routine work using case management, BPM, RPA, AI and rules.

Spaghetti Spreadsheets Untangled – Benefits of decision modeling when uncovering complex business logic hidden in spreadsheets. Charlotte Bouvy, M.C. Bouvy Consultancy

Charlotte Bouvy presented on her work with SVB, the Netherlands’ social insurance administrator, on implementing business rules management. They are using DMN-based wizards to support 1,500 case workers, and the specific case was in the operational control and audit departments, around the “lawfulness” of how assessment work is done. This had been done in Excel spreadsheets, which were error-prone and lacked domain-specific business logic. They implemented their SARA system, built on Oracle OPA, to replace the spreadsheets; this allowed them to represent knowledge more accurately, and to separate the data from the decision model while still creating an executable model.

Decision model to determine lawfulness. From Charlotte Bouvy’s presentation.

These types of audit processes require sampling across a wide variety of case files to compare actual payments against expected amounts, with some degree of aggregation within the specific laws being applied. Moving to a rules engine allowed them to model calculations and decisions, and to separate data from model, avoiding the errors that occurred when copying and pasting data between spreadsheets. The executable model is now a single source of truth to which version control and change management can be applied. They are trying out different ways of using the SARA system: directly in Oracle Policy Modeler for building and debugging; via a web interview and an RPA robot for data input; and eventually via direct integration with SVB’s case management system to load data.
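As a toy illustration of that data/model separation, here’s a Python sketch in which the decision logic lives in one versioned function while each sampled case file is just data; the field names and tolerance are invented for the example, not taken from SVB’s actual models:

```python
# The "model": one versioned piece of decision logic applied to every case.
def lawfulness_decision(case):
    deviation = abs(case["actual_payment"] - case["expected_payment"])
    return "lawful" if deviation <= case["tolerance"] else "review"

# The "data": sampled case files, kept entirely separate from the logic.
case_files = [
    {"id": 1, "actual_payment": 812.50, "expected_payment": 812.50, "tolerance": 0.01},
    {"id": 2, "actual_payment": 640.00, "expected_payment": 655.00, "tolerance": 0.01},
]

# Every sampled case runs through the same logic, rather than through
# copy-pasted spreadsheet formulas that can drift out of sync.
results = {c["id"]: lawfulness_decision(c) for c in case_files}
print(results)  # {1: 'lawful', 2: 'review'}
```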