BPM Milan: The Future of BPM

Peter Dadam of the University of Ulm opened the last day of the conference (and my last session, since I’m headed out at the morning break) with a keynote on the future of BPM: Flying with the Eagles, or Scratching with the Chickens?

He went through some of his history in getting into research (in the IBM DB2 area), with the conclusion that when you ask current users what they want, they tend to treat the current technology as a given, and only request workarounds within the constraints of the existing solution. The role of research is, in part, to disseminate knowledge about what is possible: the new paradigm for the future. Anyone who has worked on the bleeding edge of innovation recognizes this, and realizes that you first have to educate the market on what’s possible before you can begin developing the use cases for it.

He discussed the nature of university research versus industrial research, where the pendulum has swung from research being done in universities, to the more significant research efforts being done (or being perceived as being done) in industrial research centers, to the closing of many industrial research labs and a refocusing on pragmatic, product-oriented research by the rest. This puts the universities back in the position of being able to offer more visionary research, but there is a risk of just being the research tail that the industry dog wags.

Moving on to BPM, and looking at it against a historical background, we have the current SOA frenzy in industry, but many enterprises implementing it are hard-pressed to say why their current SOA infrastructure provides anything for them that CORBA didn’t. There’s a big push to bring in BPM tools, particularly modeling tools, without considering the consequences of putting tools like this in the hands of users who don’t understand the impact of certain design decisions. We need to keep both the manual and automated processes in mind, and consider that exceptions are often not predictable; enterprises cannot take the risk of becoming less flexible through the implementation of BPM because they make the mistake of designing completely structured and rigid processes.

There’s also the issue of how the nature of web services can trivialize the larger relationship between a company and its suppliers: realistically, you don’t replace one supplier with another just because they have the same web services interface, without significant other changes (the exception to this is, of course, when the product provided by the supplier is the web service itself).

He sees that there is a significant risk that BPM technology will not develop properly, and that the current commercial systems are not suitable for advanced applications. He described several challenges in implementing BPM (e.g., complex structured processes; exceptions cannot be completely anticipated), and the implications in terms of what must exist in the system in order to overcome this challenge (e.g., expressive process meta model; ad-hoc deviations from the pre-planned execution sequence must be possible). He discussed their research (more than 10 years ago now) in addressing these issues, considering a number of different tools and approaches, how that resulted in the ADEPT process meta model and eventually the AristaFlow process management system. He then gave us a demo of the AristaFlow process modeler — not something that you see often in a keynote — before moving on to discuss how some of the previously stated challenges are handled, and how the original ADEPT research projects fed into the AristaFlow project. The AristaFlow website describes the motivation for this joint university-industry project:

In particular, in dynamic environments it must be possible to quickly implement and deploy new processes, to enable ad-hoc modifications of single process instances at runtime (e.g. to add, delete or shift process steps), and to support process schema evolution with instance migration, i.e. to propagate process schema changes to already running instances. These requirements must be met without affecting process consistency and by preserving the robustness of the process management system.

Although lagging behind many commercial systems in terms of user interface and some functionality, this provides much more dynamic capabilities in areas such as allowing a user to make minor modifications to the process instance that they are currently running.
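To illustrate the general idea of an ad-hoc deviation on a single running instance, here is a deliberately minimal sketch in Python. The class, method and step names are all hypothetical and have nothing to do with the actual AristaFlow API; the point is only that a change is applied to one instance, and only after a simple consistency check.

```python
# Conceptual sketch only: an ad-hoc deviation applied to a single running
# instance, guarded by a simple consistency check. Names are invented for
# illustration and are not the AristaFlow API.

class ProcessInstance:
    def __init__(self, schema, completed, remaining):
        self.schema = schema          # name of the process template
        self.completed = completed    # steps already executed
        self.remaining = remaining    # steps still to be executed, in order

    def insert_step(self, new_step, before, provided_by):
        """Ad-hoc insertion of new_step before an upcoming step, allowed only
        if the data it needs was produced by an already-completed step."""
        if before not in self.remaining:
            raise ValueError(f"{before} is not an upcoming step")
        if provided_by not in self.completed:
            raise ValueError("required input data is not yet available")
        self.remaining.insert(self.remaining.index(before), new_step)


# Deviate from the pre-planned sequence for this one instance only:
loan = ProcessInstance("loan_approval",
                       completed=["receive_application", "check_credit"],
                       remaining=["approve", "notify_customer"])
loan.insert_step("request_second_opinion", before="approve",
                 provided_by="check_credit")
print(loan.remaining)  # ['request_second_opinion', 'approve', 'notify_customer']
```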

He concluded with the idea that BPM technology could become as important as database technology, if done correctly, but it’s a very complex issue due to the impact on the work habits of the people involved, and the desire not to limit flexibility while still providing the benefits of process automation and governance. It’s difficult to predict what real-world process exceptions will occur and therefore what type of flexibility will be required during execution. By providing a process template rather than a rigidly-structured process instance, some of this flexibility can be achieved within the framework of the BPMS rather than forcing the users to break the process in order to handle exceptions.

BPM Milan: Managing Process Variability and Compliance

We finished the day with a panel on Managing Process Variability and Compliance in the Enterprise – An Opportunity Not To Be Missed, or a Fool’s Errand? It was moderated by Heiko Ludwig and Chris Ward of IBM Research, and included Manfred Reichert of the University of Ulm, Schahram Dustdar of Vienna University of Technology, Jyoti Bhat of Infosys, and Claudio Bartolini of HP.

Any multinational company ends up with tools and business processes that are specific to each region or country, adopted typically to respond to the local regulatory environment. This presents challenges in establishing enterprise-wide best practices, process standardization and compliance: the issue is to either establish compliance, or accept and manage variability.

The consensus seems to be “it depends”: compliance provides better auditability on high-value processes, whereas variability provides benefits for processes that need to be highly flexible and agile, and you may not be able to apply the same principles across all business processes. It’s only possible to enforce enterprise-wide process compliance when there is a vital business need; it’s not something to be taken on lightly, since it will almost certainly decrease process agility, which will not have the support of regional management. Even with “compliant” processes, there will be variability across regions, particularly those greatly different in size; compliance may then be defined in terms of certain milestones and quality standards being met rather than a step-by-step identical process.

The panel was run in my least favorite form, namely serial individual presentations (which were fairly repetitive), followed by direct questions from the moderator to each of the panelists. Very little interaction between panelists, no fisticuffs, and not enough stimulating conversation.

BPM Milan: Diagnosing Differences between Business Process Models

Remco Dijkman of the Technical University of Eindhoven presented a paper on Diagnosing Differences between Business Process Models, focusing on behavioral differences rather than the structural differences that were examined in the previous paper by IBM. The problem is the same: there are two process models, likely two versions of the same model, and there is a need to detect and characterize the differences between them.

He developed a taxonomy of differences between processes, both from similar processes in practice and from completed trace inequivalences. This includes skipped functions, different conditions (gateway type with same number of paths traversed), additional conditions (gateway conditions with a potentially larger number of paths traversed), additional start condition, different dependencies, and iterative versus once-off.

You can tell it’s getting near the end of the day — my posts are getting shorter and shorter — and we have only a panel left to finish off.

BPM Milan: Detecting and Resolving Process Model Differences

Jochen Kuester of IBM Zurich Research presented a paper on Detecting and Resolving Process Model Differences in the Absence of a Change Log, co-authored by Christian Gerth, Alexander Foerster and Gregor Engels. The need arises when a process model has been changed and the differences between the resulting versions must be detected and resolved. They focus on detection, visualization and resolution of differences between the process models.

Detection of differences between the process models involves reconstructing the change log that transforms one version into the other. This is done by computing fragments for the process models, similar to the process structure tree methods that we saw from other IBM researchers yesterday, then identifying elements that are identical in both models (even if in a different part of the model), elements that are in the first model but not the second, and those that are in the second model but not the first. This allows correspondences to be derived for the fragments in the process structure tree. From there, they can detect differences in actions/fragments, whether an insertion, deletion or move of an action within or between fragments.

They have a grammar of compound operations describing these differences, which can then be used to create a change log: a joint process structure tree is formed by combining the process structure trees of both models, the nodes are tagged with the operations, and the position parameters of each operation are determined.

They’ve prototyped this in IBM WebSphere Business Modeler.
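To give a flavor of the core idea, here is a deliberately simplified sketch: it compares the element sets of two model versions and derives basic insert/delete operations. The actual approach works on fragments of a process structure tree and also distinguishes moves, none of which is reproduced here; the models and activity names are invented for illustration.

```python
# Minimal sketch: reconstruct simple change operations by comparing the
# element sets of two versions of a process model. The paper's approach
# works on process structure tree fragments and detects moves as well;
# this only shows the basic set-difference idea.

def diff_models(v1, v2):
    """Return 'DeleteActivity'/'InsertActivity' operations that, applied to
    v1, yield the activities of v2 (moves are not distinguished here)."""
    ops = []
    for element in sorted(v1 - v2):
        ops.append(("DeleteActivity", element))
    for element in sorted(v2 - v1):
        ops.append(("InsertActivity", element))
    return ops


v1 = {"receive_claim", "check_policy", "settle_claim"}
v2 = {"receive_claim", "check_policy", "assess_damage", "settle_claim"}

for op in diff_models(v1, v2):
    print(op)   # ('InsertActivity', 'assess_damage')
```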

BPM Milan: Workflow Simulation for Operational Decision Support

The afternoon started with the section on Quantitative Analysis, beginning with a presentation by Anne Rozinat from the Technical University of Eindhoven on Workflow Simulation for Operational Decision Support Using Design, Historic and State Information, with the paper co-authored by Moe Wynn, Wil van der Aalst, Arthur ter Hofstede and Colin Fidge.

As she points out, few organizations are using simulation in a structured and organized way; I’ve definitely seen this in practice, where process simulation is used much more during the sales demo than in customer implementations. She sees three issues with how simulation is done now: resources are modeled incorrectly, simulation models may have to be created from scratch, and the focus is more on design than on operational decisions because historical operational data is not integrated back into the model. I am seeing these last two problems solved in many commercial systems already: rarely is it necessary to model the simulation separately from the process model, and a number of modeling systems allow historical execution data to be reintegrated to drive the simulation.

Their approach uses three types of information:

  • design information, from the original process model
  • historic information, from historical execution data
  • state information, from currently executing workflows, primarily for setting the initial state of the simulation

They have created a prototype of this using YAWL and ProM, and she walked through the specifics of how this information is extracted from the systems, how the simulation model is generated, and how the current state is loaded without changing the simulation model: this latter step can happen often in order to create a new starting point for the simulation that corresponds to the current state in the operational system.

This last factor has the potential to turn simulation into a much more interactive and frequently-used capability: consider being able to run a simulation forward from the current state in the operational system in order to predict behavior over the upcoming period of time. For example, you could use the current state as the initial conditions of the simulation, then add resources to predict how long it will take to clear the actual backlog of work, in order to determine the optimal number of people to add to the process at this point in time. This turns short-term simulation into an operational decision-making tool, rather than just a design tool.
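As a rough illustration of that “clear the backlog” scenario, here is a small sketch that seeds a toy simulation with the current backlog and a historical mean handling time, then tries different staffing levels. The single-queue model, the numbers and the function names are all assumptions for illustration; this is not the YAWL/ProM prototype described in the paper.

```python
import random

# Toy short-term simulation seeded from the current operational state:
# estimate how long different staffing levels would take to clear today's
# backlog, given a historical mean handling time. Purely illustrative.

def time_to_clear(backlog, workers, mean_handling_hours, runs=1000):
    estimates = []
    for _ in range(runs):
        busy_until = [0.0] * workers
        for _ in range(backlog):
            w = busy_until.index(min(busy_until))   # next free worker
            busy_until[w] += random.expovariate(1.0 / mean_handling_hours)
        estimates.append(max(busy_until))
    return sum(estimates) / len(estimates)

backlog = 250                    # current work items (the "state information")
for workers in (5, 8, 12):
    hours = time_to_clear(backlog, workers, mean_handling_hours=1.5)
    print(f"{workers} people -> ~{hours:.0f} hours to clear the backlog")
```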

BPM Milan: Setting Temporal Constraints in Scientific Workflows

Xiao Liu from Swinburne University of Technology presented his paper on A Probabilistic Strategy for Setting Temporal Constraints in Scientific Workflows, co-authored by Jinjun Chen and Yun Yang. This is motivated by the problem of using only a few overall user-specified temporal constraints on a process without considering system performance and issues of local fine-grained control: this can result in frequent temporal variations and huge exception-handling costs.

They established two basic requirements: temporal constraints must allow for both coarse-grained and fine-grained control, and they must consider both user requirements and system performance. They used some probabilistic assumptions, such as normally distributed activity durations. They determined the weighted joint normal distribution that estimates the overall completion time of the entire workflow based on the time required for each activity, the probability of iterations and the probability of different choice paths: assuming normally distributed durations as stated earlier, this allows maximum and minimum durations to be calculated from the mean, since almost all process instance durations will fall within ±3σ. After aggregating to set the coarse-grained temporal constraints, they can propagate to set the fine-grained temporal constraints on each activity. There are modifications to the models if, for example, it’s known that activity durations are not normally distributed.
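Here is a minimal sketch of just the basic aggregation step: under the assumption of independent, normally distributed activity durations, a sequence of activities has summed means and variances, and a coarse-grained constraint can be set at mean + 3σ. The weighting by iteration and choice probabilities from the paper is omitted, and the activity figures are invented.

```python
from math import sqrt

# Sketch: aggregate normally distributed activity durations for a simple
# sequence (independence assumed), then set a coarse-grained temporal
# constraint at mean + 3 sigma (~99.7% of instances). Figures are made up;
# the paper additionally weights branches by choice/iteration probabilities.

activities = {            # name: (mean hours, std dev hours)
    "register": (1.0, 0.2),
    "assess":   (4.0, 1.0),
    "approve":  (2.0, 0.5),
}

mean_total = sum(m for m, _ in activities.values())
std_total = sqrt(sum(s ** 2 for _, s in activities.values()))

upper_constraint = mean_total + 3 * std_total
print(f"expected duration: {mean_total:.1f}h, "
      f"coarse-grained constraint: {upper_constraint:.1f}h")
```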

This becomes relevant in practice when you consider setting service level agreements (SLAs) for processes: if you don’t have a good idea of how long a process is going to take and the variability from the mean, then you can’t set a reasonable SLA for that process. In cases where a violation of an SLA impacts a company financially, either immediately through compliance penalties or in the longer term through loss of revenue, this is particularly important.

BPM Milan: Instantiation Semantics for Process Models

Jan Mendling of Queensland University of Technology presented a paper on Instantiation Semantics for Process Models, co-authored with Gero Decker of HPI Potsdam. Their main focus was on determining the soundness of process models, particularly based on the entry points to processes.

They considered six different process notations and syntax: open workflow nets, YAWL, event-driven process chains, BPEL (the code, not a graphical representation), UML activity diagrams, and BPMN. They determined how an entry point is represented in each of these notations, with three different types of entry points: a start place (such as in open workflow nets), a start event (such as in BPMN), and a start condition (such as in event-driven process chains). He walked through a generic process execution environment, showing the entry points to process execution.

They created a framework called CASU: Creation (what triggers a new process instance), Activation (which of the multiple entry points are activated on creation), Subscription (which other start events are waited for upon the triggering of one start event), and Unsubscription (how long are the other start events waited for). Each of these four activities has several possible patterns, e.g., Creation can be based on a single condition, multiple events, or other patterns of events.

The CASU framework allows for the classification of the instantiation semantics of different modeling languages; he showed a classification table that evaluated each of the six process notations against the 5 Creation patterns, 5 Activation patterns, 3 Subscription patterns and 5 Unsubscription patterns, showing how well each notation supports each pattern. One important note is that BPEL and BPMN do not support the same patterns, meaning that there is not a 100% mapping between BPMN and BPEL: we all knew that, but it’s nice to see more research backing it up. 🙂
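A CASU-style classification table lends itself to a simple mechanical check: if one notation supports an instantiation pattern that another does not, a lossless mapping between the two cannot exist. The sketch below shows that check in Python; the pattern names and the support sets are placeholders, not the actual classification from the paper.

```python
# Sketch: compare two notations' supported instantiation patterns to see
# where a mapping must lose information. Pattern names and support sets
# below are invented placeholders, not the paper's classification table.

support = {
    "BPMN": {"creation: single event", "creation: multiple events",
             "activation: all entry points"},
    "BPEL": {"creation: single event", "activation: all entry points",
             "subscription: wait for all"},
}

only_in_bpmn = support["BPMN"] - support["BPEL"]
only_in_bpel = support["BPEL"] - support["BPMN"]

print("patterns with no BPEL counterpart:", only_in_bpmn)
print("patterns with no BPMN counterpart:", only_in_bpel)
```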

Having multiple start events in a process causes all sorts of problems in terms of understandability and soundness, and he doesn’t recommend this in general; however, since the notations support it and therefore it can be done in practice, analysis of multi-start point instantiation semantics is important to understand how the different modeling languages handle these situations.

BPM Milan: Predicting Coupling of Object-Centric Business Process Implementations

Ksenia Wahler of the IBM Zurich Research lab presented the first paper in the Modelling Paradigms & Issues section, on Predicting Coupling of Object-Centric Business Process Implementations, co-authored by Jochen Kuester.

Although activity-centric approaches are in the mainstream — e.g., BPMN for modeling and BPEL for implementation — object-centric approaches are emerging. The main principles of object-centric approaches are that process logic is distributed among concurrently running components, each component represents the life cycle of a particular object, and component interaction ensures that the overall logic is correctly implemented.

They are using Business State Machines (BSM) in IBM WebSphere Integration Developer to model this: object-centric modeling is offered as an alternative to BPEL for service orchestration. It uses finite state automata, tailored for execution in a service-oriented environment, with event-condition-action transitions. The advantages of this approach are that it is distributable, adaptable, and maintainable. However, this works when objects are independent, which is rarely the case; hence the research into managing the coupling of objects. What they found is that rather than using a unidirectional mapping from the activity-centric view to the object-centric implementation, in which the models can get out of sync, their approach feeds any changes in the object-centric implementation back into the process model. They needed to establish a coupling metric in order to assess the coupling density of the object model, as well as develop the mapping from activity-centric process models to object life cycle components, which they have based on workflow patterns.

She showed examples of translation from activity-centric to object-centric models: starting with a BPMN process model, consider the objects whose state each activity changes, and re-model to show the state changes for each object and the interactions between the objects based on their state changes. Each state-changing object becomes a component, and interactions between objects in terms of control handovers and decision notifications become wires (connections) between components in the assembly model. The degree of coupling is calculated from the interactions between the components, and a threshold can be set for the maximum acceptable degree of coupling. Objects with a high degree of coupling may be candidates for merging into a single life cycle, or the process model may be refactored (not true refactoring, since it doesn’t preserve behavior; more like simplification) in order to reduce control handovers and decision notifications.
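As a rough illustration of what a coupling check might look like, here is a small sketch that counts the component pairs that actually interact relative to all possible pairs, and flags the model when a threshold is exceeded. This is a simplified stand-in, not the metric defined in the paper; the components, wires and threshold are all invented.

```python
from itertools import combinations

# Illustrative coupling ratio for object life-cycle components: the share of
# component pairs that interact via control handovers or decision
# notifications, flagged against a chosen threshold. Simplified stand-in for
# the paper's metric; components, wires and threshold are made up.

components = ["Order", "Shipment", "Invoice", "Customer"]
wires = {("Order", "Shipment"), ("Order", "Invoice"), ("Shipment", "Invoice")}

pairs = list(combinations(components, 2))
coupled = sum(1 for a, b in pairs if (a, b) in wires or (b, a) in wires)
coupling = coupled / len(pairs)

THRESHOLD = 0.4
print(f"coupling = {coupling:.2f}")
if coupling > THRESHOLD:
    print("consider merging tightly coupled life cycles or simplifying handovers")
```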

This type of object-centric approach is new to me, and although it is starting to make sense, I’m not sure that these notes will make sense to anyone else. It’s not clear (and the speaker couldn’t really clarify) the benefit of using this approach over an activity-centric approach.

BPM Milan: Michael Rosemann keynote

Michael Rosemann from the BPM Research Group at Queensland University of Technology gave us today’s opening keynote on Understanding and Impacting the Practice of BPM, exploring the link between academia and industry. QUT hosted this conference last year, and has a strong BPM program.

He believes that research can be both rigorous and relevant, satisfying the requirements of both industry and academia, and his group works closely with industry, in part by offering BPM training but also looking to foster networking across the divide. It can be a mutually beneficial relationship: research impacts practice through their findings, and practice provides understanding of the practical applicability and empirical evidence to research.

Obviously I’m in strong agreement with this position: part of why I’m here is to bring awareness of this conference and the underlying research to my largely commercial audience. I had an interesting conversation earlier today about how vendors could become more involved in this conference; at the very least, I believe that BPM vendors should be sending a product development engineer to sit in the audience and soak up the ideas, but there’s probably also room for some low level of corporate sponsorship and a potential for recruitment.

Rosemann discussed how research can (and should) be influenced by industry demand, although there’s not a direct correlation between research topics and what’s missing in industry practice. There is some great research going on around process analysis and modeling, and some smaller amount (or so it seems) focused on process execution. He looks at the distinction between design science research — where the goal is utility — and behavioral science research — where the goal is truth — and the relationship between them: design science produces BPM artifacts to provide utility to behavioral science, which in turn provides truth through BPM theories.

BPM artifacts produced by research include constructs (vocabulary and symbols) such as process modeling techniques, models (abstractions and representations) such as process reference models, methods (algorithms and practices) such as process modeling methodologies, and instantiations (implemented and prototype systems) such as workflow prototype systems. Through an artifact’s lifecycle, design scientists test its soundness (internal consistency) and completeness (general applicability), and the behavioral scientists test its effectiveness at solving the problem and adoption in practice. In other words, design scientists create the artifacts, the artifacts are implemented (in a research facility or by industry), and behavioral scientists test how well they work.

There is a BPM community of practice in Australia that hosts events about BPM: originally just a showcase to expose QUT research to industry and government, it has become a much more collaborative community where the practitioners can indicate their interest in the research areas. All of the QUT research students have to have their elevator pitch worked out — why their research is important, and its applicability — which starts to tune their thinking towards where their research might (eventually) end up in practice.

He showed a BPM capability framework, showing various capability areas mapped against key success factors of strategic alignment, governance, methods, IT, people and culture; this framework has been replicated by a number of different organizations, including Gartner, to show areas on which companies need to focus when they are implementing BPM. He discussed other areas and methods of research, and the value of open debate with the practitioners; as always, it’s gratifying to see my blog used as an example in a presentation, and he used a snapshot of my post on the great BPMN debate as well as posts by Bruce Silver and Michael zur Muehlen. He walked through a number of other examples of interaction between research and industry, using a variety of techniques, and even the concept of private (consumer) process modeling.

He ended with a number of recommendations for BPM researchers:

  • have a BPM research vision
  • design a BPM research portfolio
  • conduct critical-path research
  • seek impact without compromising research rigor
  • build and maintain an industry network
  • collaborate with complementary research partners

Interestingly, a question at the end resulted in a discussion on BPM vendors and how they have the potential to span boundaries between research and practice. The larger vendors with significant research facilities are represented here: apparently almost 40% of the attendees here are from industry, although it appears that most are from deep within the research areas rather than product development or any customer-facing areas.

BPM Milan: Formal Methods and demos

There were two other papers presented in the Formal Methods section — Covering Places and Transitions in Open Nets by Christian Stahl and Karsten Wolf, and Correcting Deadlocking Service Choreographies Using a Simulation-Based Graph Edit Distance by Niels Lohmann — but we were hip-deep in mathematical notation, graph theory, automata sets and Boolean formulae (who decided to put this section at the end of the day?), and I lost the will to blog.

We’re moving off to a demo session to close the day, which will include:

  • Business Transformation Workbench: A Practitioner’s Tool for Business Transformation, by Juhnyoung Lee, Rama Akkiraju, Chun Hua Tian, Shun Jiang, Sivaprashanth Danturthy, and Ponn Sundhararajan
  • Oryx – An Open Modeling Platform for the BPM Community, by Gero Decker, Hagen Overdick and Mathias Weske
  • COREPROSim: A Tool for Modeling, Simulating and Adapting Data-driven Process Structures, by Dominic Müller, Manfred Reichert, Joachim Herbst, Detlef Köntges and Andreas Neubert
  • A Tool for Transforming BPMN to YAWL, Gero Decker, Remco Dijkman, Marlon Dumas and Luciano García-Bañuelos
  • BESERIAL: Behavioural Service Interface Analyser, by Ali Aït-Bachir, Marlon Dumas and Marie-Christine Fauvet
  • Goal-Oriented Autonomic Business Process Modeling and Execution: Engineering Change Management Demonstration, by Dominic Greenwood

That’s it for blogging today; after the demos, I’ll be off to celebrate today’s real event, my birthday. And before you ask, I just turned 30 (in hexadecimal).