The research sessions have started here at BPM 2010, and I’m in the session of research papers on business process design. There are three 30-minute presentations (including Q&A) based on the presenter’s research paper; although I have the full text of the research papers, it’s always useful to hear the author’s take on it.
I haven’t listed all the author names on the presentations, but you can find the full list on the conference site’s research program section.
From Informal Process Diagrams To Formal Process Models
This paper from IBM Research (in India and the US) presents an approach for automatically converting informal process diagrams – such as are done in Visio – to formal process models that can be managed in a BPMS. This requires two main tasks: inference of the structure, that is, identifying the nodes and edges, and semantic interpretation to associate process modeling semantics.
Informal process diagrams contain many structural and semantic ambiguities that have to be resolved. Most existing process modeling tools use shape names to interpret the semantics when importing Visio diagrams, but untrained process modelers will often use a variety of shapes to mean a single element type, or use the same shape for multiple element types.
The authors use standard pattern classification techniques to interpret the process semantics, mimicking human reasoning and testing with both supervised and unsupervised clustering. They’ve created a tool, iDiscover, and compared it to a popular modeling tool’s Visio import capability to see which did a better job of inferring the formal process model from 185 process diagrams found in practice. Overall, their supervised classification method achieved accuracy rates over 90%, whereas the other modeling tool was in the 60% range.
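To make the classification idea concrete, here is a toy sketch — not the iDiscover implementation, whose features and methods are far richer — of how a supervised classifier might map ad hoc Visio shapes onto process element types using simple geometric features. The feature choices (aspect ratio, corner count) and labels are hypothetical.

```python
# Toy nearest-neighbour classification of diagram shapes into process
# element types. Feature vectors here are hypothetical: a wide rounded
# rectangle is probably a task, a rotated square a gateway, a circle an
# event -- regardless of what the modeler named the shape.

import math

# (aspect_ratio, corner_count) -> labelled training shapes
TRAINING = [
    ((2.0, 4), "task"),      # wide rounded rectangle
    ((1.8, 4), "task"),
    ((1.0, 4), "gateway"),   # diamond drawn as a rotated square
    ((1.1, 4), "gateway"),
    ((1.0, 0), "event"),     # circle: no corners
    ((0.9, 0), "event"),
]

def classify(features):
    """Return the label of the closest labelled training shape (1-NN)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TRAINING, key=lambda item: dist(item[0], features))[1]

print(classify((1.9, 4)))   # a wide rectangle -> "task"
print(classify((0.95, 0)))  # a circle -> "event"
```

In practice the supervised approach is trained on shapes labelled by experts, which is why it outperforms a naive shape-name lookup when modelers use shapes inconsistently.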
Given that a lot of people will continue to use Visio to do their original process diagrams, no matter how many nice process discovery tools we give them, this can have a clear benefit in reducing translation errors and reducing the time required to manually correct the formal process models after translation.
Machine-Assisted Design of Business Process Models Using Descriptor Space Analysis
The next paper, from Technion and Ort Braude College in Israel, presented a method for assisting a process analyst in designing new processes based on an analysis of existing process models within an organization. Linguistic analysis of relationships within processes allows several models to be derived from the objects and actions in existing processes: an object hierarchy model, an object lifecycle model, an action hierarchy model and an action lifecycle model. Insert some fancy mathematics on the resulting four-dimensional descriptor space, and you end up with a system that can either refine an existing activity (e.g., a more or less specific form of an object or action) or suggest a next process step (e.g., an action-object pair), depending on the context. This allows a process designer to be led through the design of a completely new process using a wizard-like interface: the designer specifies the goal of the process, then is presented with suggested objects and actions as they step through refinements to the process.
Assuming that the existing process models cover a broad range of an organization’s typical objects and activities, and that new processes are typically similar in some way to existing processes, it makes sense that you’d be able to present a designer with something close to what they want; the key is in minimizing the number of refinements they would have to apply to the suggestions. Their experiments showed that this method required stepping through a number of refinement steps in order to achieve accurate models; their conclusion is that this is a useful starting point, but needs further research and experimentation for real-world business usage.
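As a rough intuition for the suggestion step — this is a deliberately simplified stand-in, not the paper's descriptor-space analysis — you could mine existing process models for which action-object step most often follows a given step, and offer that as the wizard's suggestion. The process data below is invented for illustration.

```python
# Toy next-step suggestion from a repository of existing process models,
# each modelled as an ordered list of (action, object) steps. The paper's
# actual method works in a descriptor space built from linguistic
# analysis; this frequency count is only a crude analogy.

from collections import Counter, defaultdict

# Hypothetical existing process models.
PROCESSES = [
    [("receive", "order"), ("check", "inventory"), ("ship", "order")],
    [("receive", "order"), ("check", "credit"), ("ship", "order")],
    [("receive", "order"), ("check", "inventory"), ("ship", "order")],
    [("receive", "return"), ("check", "inventory"), ("refund", "payment")],
]

# Count which step most often follows each step across all models.
successors = defaultdict(Counter)
for process in PROCESSES:
    for current, nxt in zip(process, process[1:]):
        successors[current][nxt] += 1

def suggest_next(step):
    """Return the most frequent successor of `step`, or None if unseen."""
    if step not in successors:
        return None
    return successors[step].most_common(1)[0][0]

print(suggest_next(("check", "inventory")))  # -> ("ship", "order")
```

The designer would still confirm or refine each suggestion, which is where the number of refinement steps the authors measured becomes the critical usability factor.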
Impact of Granularity on Adjustment Behavior in Adaptive Reuse of Business Process Models
The last paper of this session, from Technische Universität Berlin, looked at the reuse of process models. Cognitive biases tend to limit our ability to adjust processes when certain anchoring activities are involved, and the granularity of a process model affects how many of those anchors are present. Completely logical: a very high-level process model is more likely to apply to a number of different real-world processes (although it may be of questionable value), whereas a more detailed process model will contain activities that tie it to a smaller number of real-world processes.
Their experiments showed that the granularity of a model has a significant impact on the percentage of correct adjustments when reusing a model, with more granular models resulting in more extraneous tasks being left in even though they may not apply to the target real-world process. In other words, if a designer takes a detailed process model as the starting point for a new process model, they are more likely to leave in a lot of unnecessary crap, which makes it harder to read the model as well as making it potentially inaccurate.
Good first session; over in the industry case studies track, there was a session on BPM in practice, featuring Nick Malik of Microsoft (which I was sorry to miss since I am an avid reader of his blog on enterprise architecture), Adelle Elia and Sandra Lyons of GTSI, and Paul Tazbaz of Wells Fargo.