BPM Milan: Model Driven Business Transformation

The last paper in this session, a case study on Model-Driven Business Transformation, was an industry paper presented by Juliane Siegeris of gematik GmbH, co-authored by Oliver Grasl. gematik provides IT services related to the implementation of the German health card and other healthcare applications, but the case study covered their own internal reorganization into a matrix structure.

Each department modeled their own processes, which were then assembled into enterprise-wide process models; there were some issues arising from the differing levels of modeling experience across departments. They used the Enterprise Architect tool (which they already used within their organization for IT specifications) and BPMN 1.0. They faced some major challenges along the way, such as the need for large-scale modeling guidelines, support for organizational modeling, and methods for documenting processes beyond the BPMN diagrams; this resulted in the use of UML notation for some modeling and the creation of an online repository of process documentation.

She went through a number of the techniques that they used to ensure consistency, completeness and correctness in process models: guidelines, shared methods and templates, and a management and control structure around the modeling process. They are in the middle of this process modeling exercise, with a target date of the end of this month: considering that only 12% of their process models are complete and approved, this seems like a bit of an ambitious schedule.

BPM Milan: Modularity in Process Models

The second paper in this session on modeling guidelines was a review of modularity in process models, by Hajo Reijers and Jan Mendling. It focused on factors related to modeling, including methodology, language and tools, and how they affect model quality, with the goal of providing guidance to process modelers for creating better models.

He first showed a general definition of model quality, but pointed out that they focused on error occurrence and understandability as measures of quality. Both errors and understandability are impacted by model size — bigger models have more errors and are less understandable — but density, average connector degree, cross-connectivity, and modeler education (but not education in a specific modeling technique) also impact these factors.
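
As a rough illustration of two of the structural metrics mentioned (my own sketch with an invented model, not something from the paper), density and average connector degree can be computed directly from a process graph:

    # Sketch: computing two structural metrics for a process model graph.
    # Density here is arcs / (nodes * (nodes - 1)) for a directed graph;
    # average connector degree is the mean number of arcs attached to gateways.
    from statistics import mean

    # hypothetical model: tasks (T*) and gateways (G*) connected by directed arcs
    nodes = ["start", "T1", "G1", "T2", "T3", "G2", "end"]
    arcs = [("start", "T1"), ("T1", "G1"), ("G1", "T2"), ("G1", "T3"),
            ("T2", "G2"), ("T3", "G2"), ("G2", "end")]
    gateways = {"G1", "G2"}

    density = len(arcs) / (len(nodes) * (len(nodes) - 1))

    def degree(node):
        # count arcs that touch this node, in either direction
        return sum(1 for a, b in arcs if node in (a, b))

    avg_connector_degree = mean(degree(g) for g in gateways)

    print(f"density={density:.3f}, average connector degree={avg_connector_degree:.1f}")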

Looking specifically at modularity — the design principle of breaking down a process model into independently managed subprocesses — they hypothesized that the use of modularization does not impact understandability. They created an experiment that showed participants one of two versions of two large process models (more than 100 tasks): one with subprocesses, the other flattened into a single process model. They then tested the subjects’ understanding of the processes by asking 12 questions about each model; the subjects were consultants experienced in process modeling, hence accustomed to working with process models and familiar with the visual syntax. They found that the average percentage of correct answers was higher for the modular version than for the flattened one, but the difference was statistically significant for only one of the two models.
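
Just to make that comparison concrete, here's a minimal sketch of the kind of group comparison involved (the scores are invented, and the authors may well have used a different test statistic):

    # Minimal sketch of comparing comprehension scores for the modular vs.
    # flattened versions of a model (scores are invented for illustration).
    from scipy import stats

    modular_scores   = [0.83, 0.75, 0.92, 0.67, 0.83, 0.75]  # fraction of 12 questions correct
    flattened_scores = [0.67, 0.58, 0.75, 0.67, 0.58, 0.50]

    t, p = stats.ttest_ind(modular_scores, flattened_scores)
    print(f"mean modular={sum(modular_scores)/len(modular_scores):.2f}, "
          f"mean flattened={sum(flattened_scores)/len(flattened_scores):.2f}, p={p:.3f}")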

This disproved their hypothesis, since modularity was important to the understandability of one of the two complex models, but it raised the question of why it mattered for one model and not the other. The process whose understandability improved with modularization had more subprocesses (hence was more modularized) than the one that didn’t, suggesting a new hypothesis for future testing. They also found some correlation between success at answering “local” questions (those related to portions of the process rather than the overall process) and the degree of modularization.

Their conclusions:

  • Modularity in a process model appears to have a positive connection with its understandability
  • The effect manifests itself in large models if modularity is applied to a sufficiently high extent
  • Modularity seems to support comprehension that requires insight into local parts of the model

In the future, they will be relating this work to semi-automatic modularization.

BPM Milan: Applying Patterns During Business Process Modeling

Thomas Gschwind of IBM Research Zurich presented a paper on applying patterns during process modeling, co-authored by Jana Koehler and Janette Wong. This research was motivated by their customers’ concerns about the quality of process models, and their first prototype, using IBM WebSphere Business Modeler, shows that 10% of the modeling time can be saved, which corresponds to about 70% of the pure editing time.

There are well-known basic workflow patterns, such as splitting and merging, but these are too fine-grained in many cases, so they were looking for pattern compounds that could be easily reused. He walked us through three pattern application scenarios, showing both the process flow and the process structure tree (a rough sketch of one way such a tree might be represented follows the list):

  • Compound patterns, including sequence (a set of steps in a fixed order), alternative compound (split and merge several alternative paths), parallel compound (split and merge several paths in parallel), and cyclic compound (loop). This represents the four most common of the basic workflow patterns, which is obviously just a starting point.
  • Gateway-guarded branches, which support the creation of unstructured models, such as routing across the branches in a parallel split, including an alternative branch pattern and a parallel branch pattern. These can cause problems in the process if not used properly, although there are some constraints, such as not allowing a parallel branch to flow backwards.
  • Closing a set of edges with a gateway, which is not always possible and is only implemented for some special cases.
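
To make the idea of pattern compounds in a process structure tree a bit more concrete, here's a rough sketch of my own (not the authors' implementation; the node types and example tasks are invented) of how such a tree might be represented:

    # Rough sketch (not the authors' implementation): a process structure tree
    # whose inner nodes are pattern compounds and whose leaves are tasks.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        kind: str                      # "task", "sequence", "alternative", "parallel", "cycle"
        label: str = ""
        children: List["Node"] = field(default_factory=list)

    def show(node, indent=0):
        # print the tree with one level of indentation per nesting level
        text = f"{node.kind}: {node.label}" if node.label else node.kind
        print("  " * indent + text)
        for child in node.children:
            show(child, indent + 1)

    # a "sequence" compound containing an "alternative" compound
    tree = Node("sequence", "handle mortgage request", [
        Node("task", "check application"),
        Node("alternative", "risk decision", [
            Node("task", "standard approval"),
            Node("task", "manual review"),
        ]),
        Node("task", "notify applicant"),
    ])

    show(tree)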

He gave a live demo of creating a mortgage approval process using these patterns: he dragged a number of pre-defined tasks onto the workspace, then used an auto-linking function to create a basic process flow based on (I assume) the spatial arrangement of the tasks. Changing a split gateway using the transformations also changed the corresponding merge gateway to the matching type. A wizard-style dialog prompted for some parameters about a set of activities, then generated the process map to match. He applied compound patterns and gateway-guarded patterns at various points in the process.
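
If the auto-linking really is based on spatial position (my assumption, not something confirmed in the demo), its core could be as simple as ordering the tasks by position and chaining them, roughly like this:

    # Guess at spatial auto-linking (an assumption, not the actual WebSphere
    # Business Modeler behaviour): order tasks left to right, then connect
    # each task to the next one in that order.
    tasks = [
        {"name": "check credit", "x": 120, "y": 80},
        {"name": "receive application", "x": 20, "y": 80},
        {"name": "approve mortgage", "x": 240, "y": 80},
    ]

    ordered = sorted(tasks, key=lambda t: (t["x"], t["y"]))
    links = [(a["name"], b["name"]) for a, b in zip(ordered, ordered[1:])]
    print(links)  # [('receive application', 'check credit'), ('check credit', 'approve mortgage')]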

These patterns definitely reduced some of the effort in drawing the process map, and allowed users to create unstructured as well as structured processes. The capability is available as a plugin for WebSphere Business Modeler, and is part of a comprehensive library of patterns, transformations and refactoring operations.

BPM Milan: Paul Harmon keynote

After a few brief introductions from the various conference organizers (in which we learned that next year’s conference is in Ulm, Germany), we had a keynote from Paul Harmon on the current state and future of BPM. It covered a lot of the past, too: from the origins of quality management and process improvement through every technique used in the past 100 years to the current methods and best practices. A reasonable summary of how we got to where we are.

His “future promise”, however, isn’t all that future: he talks about orchestrating ERP processes with a BPMS, something that’s already well-understood functionality, if not widely implemented. He points out (and I agree) that many uses of BPMS today are not that innovative: they’re being used the same way as the workflow and EAI systems of five years ago, namely as better programming tools to automate a process. He sees the value of today’s BPMS as helping managers to manage processes, both in terms of visibility and agility; of course, it’s hard to do that unless you have the automation in place, but a lot of companies spend too much effort on that first level of just automating their processes and never get to the management part of BPM.

He discussed the importance of BPMN in moving BPMS into the hands of managers and business analysts, in that a basic — but still standards-compliant — BPMN diagram can be created without adornment by someone on the business side without having to consider many of the exception flows or technical implementation details: this “happy path” process will execute as it is, but won’t handle all situations. The exceptions and technical details can be added at a second modeling/design phase while still maintaining the core process as originally designed by the business person.

He also showed a different view of a business process: instead of modeling the internal processes, model the customer processes — what the customer goes through in order to achieve their goals — and align that with what goes on internally and what could be done to improve the customer experience. Since the focus is on the customer process and not the internal process, the need for change to internal process can become more evident: a variation on walking a mile in their shoes.

His definition of BPM is very broad, encompassing not just the core processes, but performance management, people, technology, facilities, management and suppliers/partners: an integration of quality, management and IT. Because of the broad involvement of people across an organization, it’s key to find a common language about process that spans IT and business management.

The slides aren’t posted yet, but you should be able to find a copy later this week by searching for BPM2008HarmonKeynote at BPtrends.com.

BPM Milan: Workshop wrap-up

This workshop is intended to be the starting point for collaborating on research in BPM and social software, and we wrapped up the day with a discussion of how the authors in the room can collaborate on a single paper to submit for journal publication, based on their existing research and the discussions that we had here today. This, of course, devolved into a discussion of the social tools that would be used in order to do this, and the game theory that applies to the collaborative authoring of papers.

I’m sure that the remainder of the conference will be quite different in nature from this highly interactive workshop, although equally valuable, but I can’t help wondering why there aren’t more BPM vendors (or more advanced customers) taking advantage of the opportunity to attend this conference. Although a great deal of innovation goes on within some vendor organizations already, even more could undoubtedly result from exposure to the research going on in the academic world.

That’s it for today: time for a quick nap, then off to the evening reception.

BPM Milan: Enterprise 2.0 in practice

Simone Happ from T-Systems Multimedia — the only other non-academic in the room — gave a presentation on Enterprise 2.0 initiatives that her company is seeing in practice. She started with some pretty general stuff on Web 2.0 and Enterprise 2.0, but moved on to some examples of how they are using wikis to manage/document customer requirements and report on project status, and how the immediacy of publication was important for both of those applications. She also covered some public examples of companies using Web 2.0 to interact with their customers, such as Dell’s Ideastorm, and sites that promote completely new business models by allowing anyone to publish their own ideas for co-monetization with the host company, such as SpreadShirt (or the US equivalent, Threadless) and MyMuesli.

I was expecting a few more concrete examples of Enterprise 2.0 within customer organizations (and maybe something about BPM, since this is a BPM conference); the presentation would have been appropriate as an intro to Enterprise 2.0 for a more general audience, but came off as a bit lightweight compared to the academic fare of the rest of the day.

The session ended with an interesting discussion on Enterprise 2.0, the issues with adoption and some of the success stories; nothing new here, but good to hear the opinions of the dozen or so in the room.

BPM Milan: Workflow Enactment in a Social Software Environment

Davide Rossi of the Università di Bologna presented a paper on Workflow Enactment in a Social Software Environment, co-authored with Fabio Vitali. Davide took me to task earlier today for not responding to his comment on a post from last month: at least I know that someone here reads my blog. 🙂

He started out discussing why enterprises like social software — ease of use, flexibility — but also some of the problems with acceptance in enterprises, such as traceability and enactment. Furthermore, when you compare enterprise software such as a BPMS with social software, there are some distinct differences in structure, governance and user interaction. To bring these together, you need to consider the current methods of structured coordination as compared to emergent coordination, where there is no pre-defined process but the users create their own processes in order to achieve the stated goal. Although there will need to be a few iterations before the process takes shape, eventually this “tools-first” approach can achieve results. There is a problem, however: the BPMS tools themselves, which are not really suited to being put in the hands of end users. Yes, I know that the vendor told you that would work, but you’ve probably already discovered that it doesn’t (usually).

The approach described is not to create a new tool, however, but to create a social layer on top of existing tools: a mashup of data and functions from a number of sources using X-Folders, which provide feed aggregation and filtering, storage management, and a reaction engine for interacting with external web services. This is not intended for transactional structured workflow, of course, but for evolving lightweight workflow processes between peers. The example shown in the paper used Google Calendar to set up events representing time-based milestones in a business process, then fed the calendar into an X-Folder so that, when the events occurred, reactions fired that called web services to update activities in a discussion forum.
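
As a rough illustration of the reaction idea (my own sketch, not the actual X-Folders implementation; the feed and forum URLs are hypothetical), a reaction could be little more than a loop that watches a feed and calls a web service for each new entry:

    # Sketch of a feed-driven "reaction" (not the actual X-Folders code):
    # poll an Atom feed of calendar events and, for each new entry, call a
    # web service that updates an activity in a discussion forum.
    import time
    import urllib.request
    from xml.etree import ElementTree

    FEED_URL = "https://example.com/calendar/feed.xml"    # hypothetical calendar feed
    FORUM_HOOK = "https://example.com/forum/api/update"   # hypothetical forum web service

    def poll_once(seen):
        with urllib.request.urlopen(FEED_URL) as response:
            tree = ElementTree.parse(response)
        ns = {"atom": "http://www.w3.org/2005/Atom"}
        for entry in tree.findall("atom:entry", ns):
            entry_id = entry.findtext("atom:id", default="", namespaces=ns)
            if entry_id and entry_id not in seen:
                seen.add(entry_id)
                # the "reaction": tell the forum about the new milestone event
                request = urllib.request.Request(FORUM_HOOK, data=entry_id.encode(), method="POST")
                urllib.request.urlopen(request)

    seen_entries = set()
    while True:
        poll_once(seen_entries)
        time.sleep(300)  # re-check the feed every five minutes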

BPM Milan: Firm-hosted Online Communities

Sami Jantunen of Lappeenranta University of Technology presented a paper on Utilizing Firm-hosted Online Communities in Software Product Business: A Dimensional View, co-authored by Kari Smolander.

This is specifically about online communities hosted by companies for business purposes, ranging from product development to broader business goals, and involving either internal resources only or a mix of internal and external participants. These can include distributed/open source product development, product maintenance (including peer support), user community support, and brand building.

Issues in building an online community range from how to attract users and other external participants, to how to create a community suited to collaborative product development. This involves research in a number of different areas: studies of the social aspects of community building, but also product management and software engineering for situations where the community will be contributing to product development.

They worked with three companies in looking at these online communities: Nokia, which covers the full range of objectives and stakeholders; SanomaWSOY (a media company), which is focused on business objectives and the user community; and Tekla (a software company), which is focused on collaborative product development.

They are creating an interactive research forum for supporting the development of firm-hosted online communities, providing some of their experiences but also a place for open discussion.

BPM Milan: From a social wiki to a social workflow system

Selim Erol of the Vienna University of Economics and Business Administration presented the first paper of the afternoon, co-authored with Gustaf Neumann, on using wikis in an organizational context and which aspects influence the success of such an implementation. They have also developed a prototype of a wiki-based editor for workflow definitions, including enactment of a web-based workflow based on those definitions.

He gave a summary of wikis — again, likely unnecessary for an audience of academics who are all presenting papers on BPM and social software — and used Wikipedia as an example of how placing content authoring in an open (public) space ensures a critical mass of community, which in turn ensures a critical mass of content and artifacts, and how mutual control enables content negotiation and self-healing.

He summarized the characteristics of BPM, then looked at applying wiki characteristics to BPM, particularly in process (and rules) design. He sees a number of aspects that determine the degree to which collective intelligence can be used in a wiki environment:

  • Size of crowd/community participating
  • Level of crowd/community organization
  • Degree of objects’ structuredness/specificity
  • Degree of objects’ completeness

The risks of wiki use are quite different in public and enterprise settings, however: in a public domain such as Wikipedia, there are issues such as edit wars and vandalism, whereas in an enterprise environment, the issues are more about lack of objectivity, domination based on corporate rank, and desertion by the community due to its smaller size and more politicized environment.

He gave a brief demonstration of the XoWiki-based workflow system that they have created, providing a wiki environment for specifying process flow collaboratively. It still has a bit of a code-like interface, although it also provides a graphical representation, but it’s great to see process modeling done in a more generalized wiki context. I think that there needs to be more crossover between academia and the vendor world, however: he cited being web-based as a key differentiator, but a number of BPMS vendors have web-based process modelers now.

BPM Milan: Digital Identity

Ben Jennings of University College London presented a paper on Digital Identity and Reputation in the Context of a Bounded Social Ecosystem, co-authored by Anthony Finkelstein.

He started with a discussion about digital identity that reminded me briefly of Dick Hardt’s Identity 2.0 presentation: using himself as an example, he showed how he appears in different contexts on the web, such as Flickr, Facebook and YouTube. We all have the same problem of reconciling multiple digital identities: maintaining multiple profiles and multiple social graphs across multiple social networks.

Within some sort of bounded social ecosystem — where we have common goals, such as within an enterprise — the digital identity concept changes: your identity is at least partially pre-created (e.g., through your local network credentials), but this isn’t enough in a large organization where not everyone knows everyone else personally and where there may be multiple systems that don’t share credentials. There are still issues of disambiguating and unifying identities between the systems in use within the bounded social context, especially if it’s not a closed enterprise: there must be some fairly complex pattern recognition even to match up email addresses, which can be specified in a number of different formats.
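
To give a feel for even the simplest corner of that matching problem, here's a small sketch of my own (not the authors' approach; the names and addresses are made up) that tries to reconcile two differently formatted identity records:

    # Sketch (not the paper's method): crude normalization to decide whether
    # two identity records from different systems might be the same person.
    import re

    def normalize(record):
        """Reduce a record like 'Doe, Jane <Jane.Doe@EXAMPLE.COM>' to a comparable key."""
        email = re.search(r"[\w.+-]+@[\w.-]+", record)
        if email:
            local, _, domain = email.group(0).lower().partition("@")
            local = local.replace(".", "").replace("+", "")  # jane.doe ~ janedoe
            return (local, domain)
        return (re.sub(r"[^a-z]", "", record.lower()), None)  # fall back to the bare name

    a = normalize("Jane Doe <jane.doe@example.com>")
    b = normalize("Doe, Jane <Jane.Doe@EXAMPLE.COM>")
    print(a == b)  # True: same local part and domain after normalization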

Once you’ve established digital identity, then you can start on the larger issue of trust and reputation; so far, the research has only reached the stage of automating the recognition of digital identity, but will be expanded to (for example) selecting the most appropriate person for a specific task in a process, based on their reputation as derived from their contributions to many other systems.