BPM Milan: Refined Process Structure Tree

Jussi Vanhatalo of the IBM Zurich Research Lab presented a paper on the Refined Process Structure Tree, co-authored by Hagen Voelzer and Jana Koehler. We’re in the last section of the day, on formal methods.

The research looks at the issues of parsing a business process model, and they offer a new parsing technique called the refined process structure tree that provides a more fine-grained model. Applications for parsing include:

  • translating a graph-based process model (e.g., BPMN) into a block-based process model (e.g., BPEL)
  • speeding up control-flow analysis
  • pattern-based editing
  • process merging
  • understanding large process models
  • subprocess detection

He showed us an example of the last use case, subprocess detection, where sections of a process are detected and replaced by subprocesses, making the process more understandable (as we saw in the earlier paper on modularity).

There are a few requirements for parsing:

  • uniqueness: e.g., the same BPMN model is always translated to the same BPEL process
  • modularity: e.g., a local change in BPMN translates to a local change in BPEL
  • fast computation of parse tree, e.g., for process version merging, pattern-based editing, or control-flow analysis
  • granularity

The Normal Process Structure Tree, which they have presented in earlier research, is both unique and modular, and represents a hierarchy of canonical (non-overlapping) fragments. Its computing time is linear.

The Refined Process Structure Tree uses a relaxed notion of a fragment through specific definitions of boundary (entry and exit) nodes, and allows only for non-overlapping fragments that can be assembled into a hierarchy. Like the NPST, it is unique and modular, but is more fine-grained than the NPST (presumably because of the relaxed definition of a fragment). It can also be computed in linear time, and he walked through a linear time algorithm for computing the RPST.
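To make the fragment-hierarchy idea concrete, here's a minimal sketch of what a process-structure-tree style decomposition might look like as a data structure. The names and representation are my own simplification for illustration, not the authors' algorithm: each fragment is a region with designated entry and exit boundary nodes, and fragments nest without overlapping, forming a tree.

```python
# Illustrative sketch of a process-structure-tree style fragment hierarchy.
# This is my own simplification, not the RPST algorithm itself.

from dataclasses import dataclass, field

@dataclass
class Fragment:
    entry: str                 # boundary node where control enters
    exit: str                  # boundary node where control leaves
    children: list = field(default_factory=list)  # nested, non-overlapping sub-fragments

    def add(self, child: "Fragment") -> "Fragment":
        self.children.append(child)
        return child

    def depth(self) -> int:
        # height of the hierarchy rooted at this fragment
        return 1 + max((c.depth() for c in self.children), default=0)

# A toy process: the root fragment spans start..end, with two nested
# fragments (say, a loop and a parallel block) that do not overlap.
root = Fragment("start", "end")
loop = root.add(Fragment("g1", "g2"))
par = root.add(Fragment("g3", "g4"))
par.add(Fragment("a", "b"))

print(root.depth())  # → 3
```

The point of the relaxed fragment definition in the RPST is that more regions of the graph qualify as fragments, so the resulting tree is deeper and more fine-grained than with canonical fragments alone.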

In this paper, they assumed that there is only one start and one end node in a process, and that loops have separate entry and exit nodes; since the publication of this paper, their research has progressed and both of these restrictions have been lifted.

BPM Milan: From Personal Task Management to End User Driven Business Process Modeling

Todor Stoitsev of SAP Research presented the last of the flexibility and user interaction papers, From Personal Task Management to End User Driven Business Process Modeling. This is based on research about end-user development, but posits that BPMN is not appropriate for end users to work with directly for ad hoc process modeling.

There is quite a bit of research related to this: workflow patterns, ad hoc workflow, email-based workflow, instance-based task research, process mining, and other methods that provide better collaboration with the end users during process modeling. In this case, they've based their research on programming by example, where processes are inferred by capturing the activities of process participants. This involves not just the process participants (business users), but also a domain expert who uses the captured ad hoc activities to work towards a process model, which is eventually refined in concert with a programmer and turned into a formal workflow model. In formalizing ad hoc processes, it's critical to consider issues such as pattern reuse, and they have built tools for exploring task patterns as well as moving through to process definition, the latter of which is prototyped using jBPM.

As with most of the other papers today, I can’t do justice to the level of technical detail presented here; I’m sure that the proceedings are available in some form, or you can track down the authors for more information on their papers.

BPM Milan: Visual Support for Work Assignment

Massimiliano de Leoni presented a paper on Visual Support for Work Assignment in Process-Aware Information Systems, co-authored by Wil van der Aalst and Arthur ter Hofstede.

This is relevant for systems with worklists, where it may not be clear to the user which work item to select based on a variety of motivations. In most commercial BPMS, a worklist contains just a list of items, each with a short description of some sort; he is proposing a visual map of work items and resources, where any number of maps can be defined based on different metrics. In such a map, the user can select the work item based on its location on the map, which represents its suitability for processing by that user at that time.

He walked us through the theory behind this, then the structure of the visualization framework as implemented. He walked us through an example of how this would appear on a geographic map, which was a surprise to me: I was thinking about more abstract mapping concepts, but he had a geographic example that used a Google map to visualize the location of the resources (people who can process the work item) and work items. He also showed a timeline map, where work items were positioned based on time remaining to a deadline.
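As a hedged illustration of the timeline-map idea, here's how work items might be positioned along an axis by time remaining to their deadlines. The metric, the clamping to a fixed horizon, and all the names are my own assumptions for the sketch, not the framework's actual implementation.

```python
# Sketch of a "timeline map": position each work item on a horizontal axis
# by time remaining to its deadline. The horizon and scaling are assumptions.

from datetime import datetime, timedelta

def timeline_position(deadline: datetime, now: datetime,
                      horizon: timedelta = timedelta(hours=8)) -> float:
    """Map time-to-deadline onto [0.0, 1.0]: 0.0 = overdue, 1.0 = far away."""
    remaining = (deadline - now) / horizon
    return max(0.0, min(1.0, remaining))

now = datetime(2008, 9, 3, 9, 0)
items = {
    "approve-claim": datetime(2008, 9, 3, 10, 0),  # due in 1 hour
    "review-policy": datetime(2008, 9, 3, 17, 0),  # due in 8 hours
    "file-report":   datetime(2008, 9, 2, 9, 0),   # already overdue
}
positions = {name: timeline_position(d, now) for name, d in items.items()}
print(sorted(positions, key=positions.get))  # most urgent first
```

Any metric that maps a work item to a coordinate (distance from the user, urgency, required role match) could drive such a map in the same way.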

Maybe I’m just not a visual person, but I don’t see why the same information can’t be conveyed by sorting the worklist (where the user would then choose the first item in the list as being the highest recommendation), although future research in turning a time-lapse of the maps into a movie for process mining is a cool concept.

BPM Milan: Supporting Flexible Processes Through Log-Based Recommendations

For the first of the papers in the session on Flexibility and User Interaction, Helen Schonenberg of Eindhoven University of Technology presented a paper on Supporting Flexible Processes Through Log-Based Recommendations, co-authored by Barbara Weber, Boudewijn van Dongen and Wil van der Aalst.

This is related to the research on automated process mining (or process discovery) based on system logs, which is similar to the type of work being done by Fujitsu with their process discovery product/service.

She started by discussing recommender systems, such as we are all familiar with from sites like Amazon: the user provides some input, and based on their past behavior and that of users who are similar in some way, the system recommends items. Recommendation algorithms are based on filtering of user/item matrices and aggregation of the results.

In the case of a process recommender, there is a key goal such as minimizing throughput time; from this and a filtered view of the history log of this and other users’ past performance, a next step can be recommended. Their current work is focused on fine-tuning the filtering algorithms by which the possible paths in the log are filtered for use as recommendations, and the weighted aggregation algorithms.
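A minimal sketch of the log-based recommendation idea, under my own simplifying assumptions rather than the authors' exact algorithms: filter the history log to traces whose prefix matches the current partial trace, then score each candidate next activity by the average throughput time of the traces in which it appeared, recommending the lowest first.

```python
# Hedged sketch of log-based next-step recommendation. The prefix filter and
# mean-throughput aggregation are simplifications of the paper's algorithms.

from collections import defaultdict

def recommend_next(log, partial_trace):
    """log: list of (trace, throughput_time); returns candidate activities best-first."""
    scores = defaultdict(list)
    n = len(partial_trace)
    for trace, throughput in log:
        if list(trace[:n]) == list(partial_trace) and len(trace) > n:
            scores[trace[n]].append(throughput)  # candidate next step observed in history
    # weighted aggregation: here simply the mean throughput time per candidate,
    # since the goal is to minimize throughput time
    return sorted(scores, key=lambda a: sum(scores[a]) / len(scores[a]))

history = [
    (["register", "check", "approve"], 5.0),
    (["register", "check", "reject"], 9.0),
    (["register", "call", "approve"], 3.0),
]
print(recommend_next(history, ["register"]))  # → ['call', 'check']
```

Their actual work explores much richer filtering (e.g., how strictly the partial trace must match the log) and weighting schemes; this only shows the overall shape of the computation.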

She walked us through their experimental setup and results, and showed that it is possible to improve processes by the use of runtime recommendations, in the case where users have the choice of which activity to execute next. This can be used in any system that has a logging system and uses a worklist for task presentation.

BPM Milan: Model Driven Business Transformation

The last paper in this session, a case study on Model-Driven Business Transformation, was an industry paper presented by Juliane Siegeris of gematik GmbH, co-authored by Oliver Grasl. gematik provides IT services related to the implementation of German health cards and other healthcare applications, but the case study is their own internal business reorganization into a matrix structure.

Each department modeled their own processes, which were then assembled into enterprise-wide process models: there were some issues related to different levels of experience between modelers in different departments. They used the Enterprise Architect tool (which they already used within their organization for IT specifications), and BPMN 1.0. They had some major challenges along the way, such as the need for large-scale modeling guidelines, support for organizational modeling, and methods for documenting processes beyond the BPMN diagrams; this resulted in the use of UML notation for some modeling and the creation of an online repository of process documentation.

She went through a number of the techniques that they used to ensure consistency, completeness and correctness in process models: guidelines, shared methods and templates, and a management and control structure around the modeling process. They are in the middle of this process modeling exercise, with a target date of the end of this month: considering that only 12% of their process models are complete and approved, this seems like a bit of an ambitious schedule.

BPM Milan: Modularity in Process Models

The second paper in this section on modeling guidelines was a review of modularity in process models, by Hajo Reijers and Jan Mendling. This was focused on factors related to modeling, including methodology, language and tools, and how they affect model quality; the goal being to provide guidance to process modelers for creating better models.

He first showed a general definition of model quality, but pointed out that they focused on error occurrence and understandability as measures of quality. Both errors and understandability are impacted by model size — bigger models have more errors and are less understandable — but density, average connector degree, cross-connectivity, and modeler education (but not education in a specific modeling technique) also impact these factors.

Looking specifically at modularity — the design principle of breaking down a process model into independently managed subprocesses — they hypothesized that use of modularization does not impact understandability. They created an experiment that showed participants one of two versions of two large process models (more than 100 tasks each): one with subprocesses, the other flattened into a single process model. They then tested the subjects' understanding of the processes by asking 12 questions about each model; the subjects were consultants experienced in process modeling, hence accustomed to working with process models and familiar with the visual syntax. They found that the average percentage of correct answers was higher for the modular version than for the flattened one, but the difference was statistically significant for only one of the two models.
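The significance results are from the authors' own analysis; purely as an illustration of how one might test whether two proportions of correct answers differ, here is a standard two-proportion z-test using only the standard library. The counts are made-up, hypothetical numbers, not the paper's data.

```python
# Illustrative two-proportion z-test (hypothetical counts, not the paper's data).

import math

def two_proportion_z_test(correct_a, n_a, correct_b, n_b):
    """Return (z, two-sided p-value) for H0: the two proportions are equal."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p_value

# hypothetical: 80/100 correct on the modular version vs. 65/100 on the flattened one
z, p = two_proportion_z_test(80, 100, 65, 100)
print(round(z, 2), round(p, 4))
```

With these hypothetical counts the difference would be significant at the usual 5% level; with a smaller gap or smaller samples it would not be, which matches the paper's finding that the effect only shows up clearly in one of the two models.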

This disproved their hypothesis, since modularity improved model understandability for one of the two complex models, but raised the question of why it was important for one model and not the other. The process whose understandability improved with modularization had more subprocesses (hence was more modularized) than the one that didn't, suggesting a new hypothesis for future testing. They also found some correlation between success at answering "local" questions (those related to portions of the process rather than the overall process) and the degree of modularization.

Their conclusions:

  • Modularity in a process model appears to have a positive connection with its understandability
  • The effect manifests itself in large models if modularity is applied to a sufficiently high extent
  • Modularity seems to support comprehension that requires insight into local parts of the model

In the future, they will be relating this work to the semi-automatic modularization of process models.

BPM Milan: Applying Patterns During Business Process Modeling

Thomas Gschwind of IBM Research Zurich presented a paper on applying patterns during process modeling, co-authored by Jana Koehler and Janette Wong. This research was motivated by their customers' concern for the quality of process models, and their first prototype, using IBM WebSphere Business Modeler, shows that 10% of total modeling time can be saved, corresponding to about 70% of the pure editing time.

There are well-known basic workflow patterns, such as splitting and merging, but these are too fine-grained in many cases, and they were looking for pattern compounds that could be easily reused. He walked us through three pattern application scenarios, showing both the process flow and the process structure tree:

  • Compound patterns, including sequence (a set of steps in a fixed order), alternative compound (split and merge several alternative paths), parallel compound (split and merge several paths in parallel), and cyclic compound (loop). This represents the four most common of the basic workflow patterns, which is obviously just a starting point.
  • Gateway-guarded branches, which support the creation of unstructured models such as routing across the branches in a parallel split, including an alternative branch model pattern and parallel branch pattern. This can cause problems with the process if not used properly, although there are some constraints such as not allowing the parallel branch to flow backwards.
  • Closing a set of edges with a gateway, which is not always possible and is only implemented for some special cases.
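To make the first of these concrete, here's a hedged sketch of applying a parallel compound pattern to a process represented as an edge list: a single edge is replaced by a parallel split, one branch per task, and a matching parallel merge. The names and the edge-list representation are my own assumptions for illustration; the actual plugin operates on WebSphere Business Modeler models, not edge lists.

```python
# Illustrative sketch of applying a "parallel compound" pattern: replace one
# edge with a parallel split, several branches, and a matching parallel merge.

def apply_parallel_compound(edges, edge, branches, tag="p"):
    """edges: list of (src, dst) pairs; insert split/merge gateways on `edge`."""
    src, dst = edge
    split, merge = f"AND-split-{tag}", f"AND-merge-{tag}"
    new_edges = [e for e in edges if e != edge]
    new_edges.append((src, split))
    for task in branches:              # one branch per parallel task
        new_edges.append((split, task))
        new_edges.append((task, merge))
    new_edges.append((merge, dst))
    return new_edges

process = [("start", "assess"), ("assess", "end")]
process = apply_parallel_compound(process, ("assess", "end"),
                                  ["credit-check", "title-search"])
print(process)
```

Because the pattern inserts the split and merge as a matched pair, the result stays structured by construction, which is part of what makes compound patterns safer than hand-drawing the gateways individually.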

He gave a live demo of creating a mortgage approval process using these patterns: he dragged a number of pre-defined tasks onto the workspace, then used an auto-linking function to create a basic process flow based on (I assume) the spatial arrangement of the tasks. Changing a split gateway using the transformations also changed the matching merge gateway to the corresponding type. A wizard-style dialog prompted for some parameters about a set of activities, then generated the process map to match. He applied compound patterns and gateway-guarded patterns at various points in the process.

This definitely reduced some of the effort in the process map drawing, and allowed users to create unstructured as well as structured processes. It’s available as a plugin for WebSphere Business Modeler, and is part of a comprehensive library of patterns, transformations and refactoring operations.

BPM Milan: Paul Harmon keynote

After a few brief introductions from the various conference organizers (in which we learned that next year’s conference is in Ulm, Germany), we had a keynote from Paul Harmon on the current state and future of BPM. It covered a lot of the past, too: from the origins of quality management and process improvement through every technique used in the past 100 years to the current methods and best practices. A reasonable summary of how we got to where we are.

His “future promise”, however, isn’t all that futuristic: he talks about orchestrating ERP processes with a BPMS, something that’s already well-understood functionality, if not widely implemented. He points out (and I agree) that many uses of BPMS today are not that innovative: they’re being used the same way as the workflow and EAI systems of 5 years ago, namely, as better programming tools to automate a process. He sees the value of today’s BPMS as helping managers to manage processes, both in terms of visibility and agility; of course, it’s hard to do that unless the basic automation is in place first, but a lot of companies spend too much effort on that first level of just automating their processes, and never get to the management part of BPM.

He discussed the importance of BPMN in moving BPMS into the hands of managers and business analysts, in that a basic — but still standards-compliant — BPMN diagram can be created without adornment by someone on the business side without having to consider many of the exception flows or technical implementation details: this “happy path” process will execute as it is, but won’t handle all situations. The exceptions and technical details can be added at a second modeling/design phase while still maintaining the core process as originally designed by the business person.

He also showed a different view of a business process: instead of modeling the internal processes, model the customer processes — what the customer goes through in order to achieve their goals — and align that with what goes on internally and what could be done to improve the customer experience. Since the focus is on the customer process and not the internal process, the need for change to internal process can become more evident: a variation on walking a mile in their shoes.

His definition of BPM is very broad, encompassing not just the core processes, but performance management, people, technology, facilities, management and suppliers/partners: an integration of quality, management and IT. Because of the broad involvement of people across an organization, it’s key to find a common language about process that spans IT and business management.

Although they’re not there yet, you can find a copy of his slides later this week by searching for BPM2008HarmonKeynote at BPtrends.com.

BPM Milan: Workshop wrap-up

This workshop is intended to be the starting point for collaborating on research in BPM and social software, and we wrapped up the day with a discussion of how the authors in the room can collaborate on a single paper to submit for journal publication, based on their existing research and the discussions that we had here today. This, of course, devolved into a discussion of the social tools that would be used in order to do this, and the game theory that applies to the collaborative authoring of papers.

I’m sure that the remainder of the conference will be quite different in nature from this highly interactive workshop, although equally valuable, but I can’t help wondering why more BPM vendors (or their more advanced customers) aren’t taking advantage of the opportunity to attend this conference. Although a great deal of innovation goes on within some vendor organizations already, even more could undoubtedly result from exposure to the research going on in the academic world.

That’s it for today: time for a quick nap, then off to the evening reception.

BPM Milan: Enterprise 2.0 in practice

Simone Happ from T-Systems Multimedia — the only other non-academic in the room — gave a presentation on Enterprise 2.0 initiatives that her company is seeing in practice. She started with some pretty general stuff on Web 2.0 and Enterprise 2.0, but moved on to some examples of how they are using wikis to manage/document customer requirements and report on project status, and how the immediacy of publication was important for both of those applications. She also covered some public examples of companies using Web 2.0 to interact with their customers, such as Dell’s Ideastorm, and sites that promote completely new business models by allowing anyone to publish their own ideas for co-monetization with the host company, such as SpreadShirt (or the US equivalent, Threadless) and MyMuesli.

I was expecting a few more concrete examples of Enterprise 2.0 within customer organizations (and maybe something about BPM, since this is a BPM conference); the presentation would have been appropriate as an intro to Enterprise 2.0 for a more general audience, but came off as a bit lightweight compared to the academic fare of the rest of the day.

The session ended with an interesting discussion on Enterprise 2.0, the issues with adoption and some of the success stories; nothing new here, but good to hear the opinions of the dozen or so in the room.