BPMN 1.1 poster

Previously, I posted about the free BPMN 1.0 poster available for download from ITPoster.net, and now the Business Process Technology Group at the Hasso Plattner Institute has published one for BPMN 1.1. Both provide a good quick reference; the BPT version has just the graphical object notation, while the ITPoster version also includes some patterns and antipatterns.

Also, check out BPT’s BPMN Corner, which has a number of good BPMN links, including Oryx, a web-based BPMN editor, and BPMN stencils for Visio and OmniGraffle.

Oracle BEA Strategy Briefing

Not only did Oracle schedule this briefing on Canada Day, the biggest holiday in Canada, but they forced me to download the Real Player plug-in in order to participate. The good part, however, is that it was full streaming audio and video alongside the slides.

Charles Phillips, Oracle President, kicked off with a welcome and some background on Oracle, including their focus on database, middleware and applications, and how middleware is the fastest-growing of these three product pillars. He described how Oracle Fusion middleware is used both by their own applications as well as ISVs and customers implementing their own SOA initiatives.

He outlined their rationale for acquiring BEA: complementary products and architecture, internal expertise, strategic markets such as Asia, and the partner and channel ecosystem. He stated that they will continue to support BEA products under the existing support lifetimes, with no forced migration policies to move off of BEA platforms. They now consider themselves #1 in the middleware market in terms of both size and technology leadership, and Phillips gave a gentle slam to IBM for over-inflating their middleware market size by including everything but the kitchen sink in what they consider to be middleware.

The BEA developer and architect online communities will be merged into the Oracle Technology Network: Dev2Dev will be merged into the Oracle Java Developer community, and Arch2Arch will be broadened to the Oracle community.

Retaining all the BEA development centers, they now have 4,500 middleware developers; most BEA sales, consulting and support staff were also retained and integrated into the Fusion middleware teams.

Next up was Thomas Kurian, SVP of Product Development, on Fusion Middleware and BEA product directions, with a more detailed view of the Oracle middleware products and strategy. Their basic philosophy for middleware is that it’s a unified suite rather than a collection of disjoint products, it’s modular from a purchasing and deployment standpoint, and it’s standards-based and open. He started to talk about applications enabled by their products, unifying SOA, process management, business intelligence, content management and Enterprise 2.0.

They’ve categorized middleware products into three categories on their product roadmap (which I have reproduced here directly from Kurian’s slide):

  • Strategic products
    • BEA products being adopted immediately with limited re-design into Oracle Fusion middleware
    • No corresponding Oracle products exist in majority of cases
    • Corresponding Oracle products converge with BEA products with rapid integration over 12-18 months
  • Continue and converge products
    • BEA products being incrementally re-designed to integrate with Oracle Fusion middleware
    • Gradual integration with existing Oracle Fusion middleware technology to broaden features with automated upgrades
    • Continue development and maintenance for at least 9 years
  • Maintenance products
    • Products that BEA had end-of-life’d due to limited adoption prior to the Oracle acquisition
    • Continued maintenance with appropriate fixes for 5 years

The “continue and converge” category is, of course, a bit different from “no forced migration”, but this is to be expected. My issue is with the overlap between the “strategic” category, which can include a convergence of an Oracle and a BEA product, and the “continue and converge” category, which includes products that will be converged into another product: when is a converged product considered “strategic” rather than “continue and converge”? Or is this just the spin they’re putting on things so as not to freak out BEA customers who have made huge investments in a BEA product that is going to be converged into an existing Oracle product?

He went on to discuss how each individual Oracle and BEA product would be handled under this categorization. I’ve skipped the parts on development tools, transaction processing, identity management, systems management and service delivery, and gone right to their plans for the Service-Oriented Architecture products:

Oracle SOA product strategy

  • Strategic:
    • Oracle Data Integrator for data integration and batch ETL
    • Oracle Service Bus, which unifies AquaLogic Service Bus and Oracle Enterprise Service Bus
    • Oracle BPEL Process Manager for service orchestration and composite application infrastructure
    • Oracle Complex Event Processor for in-memory event computation, integrated with WebLogic Event Server
    • Oracle Business Activity Monitoring for dashboards to monitor business events and business process KPIs
  • Continue and converge:
    • BEA WL-Integration will be converged with the Oracle BPEL Process Manager
  • Maintenance:
    • BEA Cyclone
    • BEA RFID Server

Note that the Oracle Service Bus is in the “strategic” category, but is a convergence of AL-SB and Oracle ESB, which means that customers of one of those two products (or maybe both) are not going to be happy.

Kurian stated that Oracle sees four types of business processes — system-centric, human-centric, document-centric and decision-centric (which match the Forrester divisions) — but believes that a single product/engine that can handle all of these is the way to go, since few processes fall purely into one of these four categories. They support BPEL for service orchestration and BPMN for modeling, and their plan is to converge a single platform that supports both BPEL and BPMN (I assume that he means both service orchestration and human-facing workflow). Given that, here’s their strategy for Business Process Management products:

Oracle BPM product strategy

  • Strategic:
    • Oracle BPA Designer for process modeling and simulation
    • BEA AL-BPM Designer for iterative process modeling
    • Oracle BPM, which will be the convergence of BEA AquaLogic BPM and Oracle BPEL Process Manager in a single runtime engine
    • Oracle Document Capture & Imaging for document capture, imaging and document workflow with ERP integration [emphasis mine]
    • Oracle Business Rules as a declarative rules engine
    • Oracle Business Activity Monitoring [same as in SOA section]
    • Oracle WebCenter as a process portal interface to visualize composite processes

Similar to the ESB categorization, I find the classification of the converged Oracle BPM product (BEA AL-BPM and Oracle BPEL PM) as “strategic” to be at odds with his original definition: it should be in the “continue & converge” category since the products are being converged. This convergence is not, however, unexpected: having two separate BPM platforms would just be asking for trouble. In fact, I would say that having two process modelers is also a recipe for trouble: they should look at how to converge the Oracle BPA Designer and the BEA AL-BPM Designer.

In the portals and Enterprise 2.0 product area, Kurian was a bit more up-front about how WebLogic Portal and AquaLogic UI are going to be merged into the corresponding Oracle products:

Oracle portal and Enterprise 2.0 product strategy

  • Strategic:
    • Oracle Universal Content Management for content management repository, security, publishing, imaging, records and archival
    • Oracle WebCenter Framework for portal development and Enterprise 2.0 services
    • Oracle WebCenter Spaces & Suite as a packaged self-service portal environment with social computing services
    • BEA Ensemble for lightweight REST-based portal assembly
    • BEA Pathways for social interaction analytics
  • Continue and converge:
    • BEA WebLogic Portal will be integrated into the WebCenter framework
    • BEA AquaLogic User Interaction (AL-UI) will be integrated into WebCenter Spaces & Suite
  • Maintenance:
    • BEA Commerce Services
    • BEA Collabra

In SOA governance:

  • Strategic:
    • BEA AquaLogic Enterprise Repository to capture, share and manage the change of SOA artifacts throughout their lifecycle
    • Oracle Service Registry for UDDI
    • Oracle Web Services Manager for security and QOS policy management on services
    • EM Service Level Management Pack as a management console for service level response time and availability
    • EM SOA Management Pack as a management console for monitoring, tracing and change managing SOA
  • Maintenance:
    • BEA AquaLogic Services Manager

Kurian discussed the implications of this product strategy on Oracle Applications customers: much of this will be transparent to Oracle Applications, since many of these products form the framework on which the applications are built, but are isolated so that customizations don’t touch them. For those changes that will impact the applications, they’ll be introduced gradually. Of course, some Oracle Apps are already certified with BEA products that are now designated as strategic Oracle products.

Oracle has also simplified their middleware pricing and packaging, with products structured into 12 suites:

Oracle Middleware Suites

He summed up with their key messages:

  • They have a clear, well-defined, integrated product strategy
  • They are protecting and enhancing existing customer investments
  • They are broadening Oracle and BEA investment in middleware
  • There is a broad range of choice for customers

The entire briefing will be available soon for replay on Oracle’s website if you’re interested in seeing the full hour and 45 minutes. There’s more information about the middleware products here, and you can sign up to attend an Oracle BEA welcome event in your city.

BPMN survey results

I really didn’t sit down this afternoon to write that last enormous post on the Great BPMN Debate; rather, I remembered that Jan Recker (co-author on the research paper that sparked the debate, although not a participant in the debate) had sent me a pre-release copy of a paper that he authored, “BPMN Modeling — Who, Where, How and Why”, which summarizes the results of the survey that he conducted last year. One thought led to another, and before you know it, I’d written an essay on the most exciting thing to happen in BPM standards in ages.

Back to Jan’s paper, however, which will be published this month on BPTrends. He surveyed 590 process modelers using BPMN from over 30 countries, and found some interesting results:

  • BPMN usage is split approximately in half over business and IT, which is a much higher percentage of IT users than I would have guessed. Business people are using it for process documentation, improvement, business analysis and stakeholder communications, whereas IT people are using it for process simulation, service analysis and workflow engineering.
  • As you might expect given that result, there’s a wide variation in the amount of BPMN used, ranging from just the core set for basic process models, to an extended set, to the full BPMN set. It would be interesting to see a correlation between this self-assessment and usage statistics based on the actual BPMN diagrams created, although as far as I know, the survey respondents didn’t submit any examples of their diagrams.
  • Not surprisingly, only 13.6% received any formal BPMN training, and I believe that this is the primary reason that most people are still using only a tiny subset of the BPMN constructs in order to create what are effectively old-fashioned flowcharts rather than full BPMN diagrams.

He finished with a list of the major obstacles that the respondents reported in using BPMN, or places that they would like to see improvement:

  • Support for specifying business rules, which echoes many of the other discussions that I’ve seen around having some standardization between process and rule vocabularies and modeling languages.
  • Support for process decomposition, although I really didn’t follow his argument on what this means.
  • Support for organizational modeling, particularly as that relates to the use of pools and lanes: sometimes, for example, a lane indicates a role; other times, a department. There are some things happening at OMG with the Business Motivation Model and the Organization Structure Metamodel that may help here.
  • There are some BPMN constructs that are less often used, although it’s not clear that anyone recommended getting rid of them.
  • The large number of different event types is problematic: “ease of use of process modeling is sacrificed for sheer expressive power”. This is a variation on the previous point (and on the crux of the Great BPMN Debate), indicating that actual BPMN users are a bit overwhelmed by the number of symbols.

I’ll publish a link to the paper when it appears on BPTrends; it’s fairly short and worth the read.

The Great BPMN Debate

If you have even a passing interest in BPMN, you’re probably aware of the great debate happening amongst a few of the BPM bloggers in the past week:

Michael zur Muehlen and Jan Recker publish an academic research paper on BPMN usage, “How Much Language is Enough? Theoretical and Practical Use of the Business Process Modeling Notation”, to be presented at an upcoming conference. To quote the introduction in the paper, its aim is “to examine, using statistical techniques, which elements of BPMN are used in practice”, and they laid out their methods for gathering the underlying data. They used some standard cluster analysis techniques to identify clusters of BPMN objects based on usage, and determined that the practical complexity (what’s really used) was significantly different from the theoretical complexity (the total set) of BPMN. Michael teaches in the BPM program at Stevens Institute of Technology, so I wasn’t surprised to see a stated objective related to BPMN training: “BPMN training programs could benefit from a structure that introduces students to the most commonly used subset first before moving on to advanced modeling concepts.” Note that he says “before moving on to”, not “while completely disregarding”.

Michael then blogged about the paper but went further by listing three implications that were not expressed in the paper:

  • Practitioners should start with the more commonly-used BPMN elements, and leave the more specialized constructs for analysts who will presumably be doing more complex modeling.
  • Vendors that support BPMN can make a realistic determination of what percentage of BPMN diagrams can be represented in their tool based on today’s usage of BPMN.
  • Standards bodies should consider if they should be creating additional complexity if no one is using it.

It was these implications that sparked the arguments that followed, starting with Bruce Silver’s post directly challenging much of what Michael said in his post. It appeared to me that Bruce hadn’t read the full research paper, but was commenting only on Michael’s blog post, hence didn’t fully appreciate that the paper was really just analyzing what people are doing now, not making any value judgements about it. Bruce was a bit harsh, especially where he suggests that Michael’s “BPMN Overhead” label on the least-used objects was “clearly meant to mean ‘useless appendages’.” Bruce had some valid rebuttals to Michael’s three implications, and although I disagree somewhat with Bruce’s first two points (as I commented on his post, and was rewarded by Bruce telling me that I was stating the bloody obvious), I agree that the standard makers have not included unnecessary complexity, but that they have created a standard that the market still needs to grow into. However, I find the BPMN specification to be overly verbose, creating a greater degree of perceived complexity than may actually exist.

Michael responded to Bruce’s post by pointing out that the aim of their research was to find out how people actually use BPMN, not how vendors, consultants and standards bodies think that they use it (or think that they should use it). Michael restates his three implications in greater detail, the first two of which seem to align with what I thought that he said (and stated in my comment on Bruce’s original post). His clarification on his third point was interesting:

We actually like BPMN’s advanced vocabulary. But have you asked end users what they think? Well, we did. Not only in this study but also in Jan’s large-scale BPMN usability studies we did find that users are in fact very troubled by the sheer number of, for example, event constructs. Are they used at a large scale? No. Do users understand their full capacity? Typically not. Why is this not at all reflected in BPMN development? That is exactly our point. Sure, our argument is a somewhat provocative statement. But if it helps to channel some attention to end usage, that’s fair by our standards.

Bruce responds in turn, saying that if Michael had presented this as “statistical correlations between diagram elements in a sample of BPMN collected in the wild”, then it would have been okay, but that the conclusions drawn from the data are flawed. In other words, he’s saying that the research paper is valid and interesting, but the post that Michael wrote promoting the paper (and including those unintentionally provocative implications) is problematic. As it turns out, in terms of Michael’s group of the 17 least-used BPMN constructs, Bruce could live without 15 of them, but will fight to the end for the other two: exception flow and intermediate error event. However, Michael doesn’t say that these are useless — that’s Bruce’s paraphrasing — just that they’re not used.

There’s a bit of chicken-and-egg going on here, since I believe that business analysts aren’t using these constructs because they don’t know that they exist, not because they’re useless. Many analysts don’t receive any sort of formal training in BPMN, but are given a BPMN-compliant tool and just use the things that they know from their swimlane flowcharting experience.

Anyway, Bruce finishes up by misinterpreting (I believe) the conclusion of Michael’s post:

Michael ends his piece by asserting that the real BPMN is not what vendors, consultants, and trainers like me say it is, but the way untrained practitioners are using it today.

What Michael actually said was:

[O]ur own experiences with BPMN and with those organizations using it gave us this hunch that the theoretical usage (what vendors and consultants and trainers tell us) often has little to do with what the end users think or do (the practical usage). And why is it important to know what the end users think and do? Because it can help the researchers, vendors, consultants and trainers of this world to channel their attention and efforts to those problems real users face. Instead of the problems we think exist in practice.

Although it’s not completely clear, I believe that Michael is saying that we need to understand what people are doing with BPMN now in order to design both training and systems.

This was an interesting debate to watch, since I know and respect both Michael and Bruce, and I found merit in the arguments on both sides although I don’t fully agree with either.

There was an interesting coda on the validity of BPMN for model-driven development with Tom Baeyens weighing in on the debate and stating that BPMN should stick to being a modeling notation and not be concerned with the underlying execution details: “[t]he properties can be removed and the mapping approach to concrete executable process languages should be left up to the vendors.” Bruce responded with some thoughts on model portability, and how that drives his categorization of BPMN constructs.

If you’re at all interested in BPMN, it’s worth taking the time to work your way through this debate, and keep an eye on Bruce’s and Michael’s blogs for any further commentary.

BPMN and the Business Process Expert

There’s something funny about chatting via IM with someone as you’re listening to them give a public webinar, even when you know that the presentation is pre-recorded — I was on Skype with Bruce Silver today during his webinar The Business Process Expert and the Future of BPM on ebizQ, where he was speaking with Marco ten Vaanholt of SAP’s BPX community.

Except for one “happy smiling faces” graphic worthy only of Jim Sinur’s blog-pimping marketing team, I really enjoyed Bruce’s presentation, although I’ve heard at least parts of it before. He started with a comprehensive description of BPM and why model-driven design is so critical to process agility, then segued into a description of BPMN and its importance in making process models executable: the heart of model-driven design. He feels that it’s necessary to define the role of Business Process Expert (BPX): someone who bridges between business and IT, creating executable requirements for BPM solutions. Obviously, BPMN is a critical skill for the BPX, and Bruce offers a number of resources, including a free series of articles and e-learning modules that he’s done on the SAP BPX community, plus the longer paid courses that he offers online and public classes through BPM Institute. No wonder he hasn’t blogged for months: he’s been too busy creating all this.

Marco ten Vaanholt talked about the importance of BPM and SOA — fairly motherhood sort of stuff — then dug into some details of the SAP BPX community, which is an incredibly well-developed resource for anyone involved in BPM, whether you’re an SAP customer or not. The core of the BPX community is collaboration and collective learning on business scenarios, process lifecycles, change leadership, social responsibility, horizontal and vertical practices, modeling tools, methodologies and a variety of other topics. It’s not just a discussion forum, however: there’s a lot of really valuable content, such as Bruce’s articles and e-learning, from both SAP and the community in general.

Marilyn Pratt, the BPX community evangelist, has been keeping me up to date on what’s happening on BPX and the worldwide community events in which she’s been involved, and I’m looking forward to catching up with her and seeing more of BPX in action when I attend SAPPHIRE in May.

There was some good Q&A at the end about process modeling and the BPX community. Definitely worth watching the replay, which should be available online at the original webinar link above.

IIR BPM: Facilitated session on standards

Alec Sharp led a facilitated session on standards that we love, hate, or wish were there (or don’t care about). This was a bit similar to the BPM Think Tank roundtables, but we were seated at about six small tables, so we had a chance for some mini-break-out sessions to discuss ideas, then gather them together.

The notes that came out of this:

  • One group had some general comments about standards, stating that a common language can simplify, but that the alphabet soup of standards is too complicated and IT driven.
  • Another group hates BPMN because they feel that a 200-page specification isn’t understandable by business users, and that BPMN is really for specifying automated process execution but is not for business consumption. It’s stifling and constrains what can be modelled.
  • Standards aren’t written in plain English. There are two sets of standards, methodology standards and tool standards, and we often confuse the two: one is focussed on human-driven processes, and the other on technology-driven processes. A great analogy: the people coming up with the tools have never baked the cake, or even eaten one.
  • Standards are often misunderstood, both in terms of who they’re for and what they’re for: they’re misinterpreted by marketing types. [I see this a lot with BPEL having become a standard “check box” on BPM RFPs rather than a real requirement.]
  • Standards can seem inflexible.
  • Interchange standards are either insufficient or improperly used by the tools, making it near-impossible to do round-tripping between different tools. They’re intended to be used for translation between business and technology domains, but notational standards are possibly becoming less understandable because they are targeted at flowing into interchange standards. [I’m not sure that I agree with this: IT may require that business model in specific forms rather than just allowing business to use BPMN in the way that best fits the organization.]
  • Standards should be discovered, not invented [Vint Cerf, via Michael zur Muehlen], and BPM standards have been mostly invented.
  • In defense of standards, one person noted that the form of a sonnet is one of the most constrained/standardized forms of writing, but that Shakespeare wrote some of his most beautiful works as sonnets.
  • I got in a few comments about the importance of interchange standards, and how round-tripping is one of the primary problems with these standards — or rather their implementation within the BPA and BPM tools.
  • There’s an issue with the priority when adopting standards: is it to empower the business users, or to support IT implementation? If the former, then it will likely work out, but if it’s for the latter, then the business is not going to totally buy in to the standards.
  • The relationship with the business has changed: it used to be treated as a black box, but now has to be more integrated with IT, which means that they have to bite the bullet and start using some of these standards rather than abdicate responsibility for process modelling.

I don’t necessarily agree with all of these points, since this turned into mostly a standards-bashing session, but it was an interesting debate.

BPM Think Tank Day 2: BPMN/BPDM Roundtable

I’m just getting to the last of the BPM Think Tank sessions, namely, the roundtables and one lunch session that I had documented on paper. The three sessions of roundtables spanned Tuesday and Wednesday afternoons, and were some of the best conversations that I had at the conference. I’ll cover each of the ones that I attended in a separate post, then the summaries of the others in another post, just to keep things from getting too long. These were fairly unstructured, general sessions, so the notes might be a bit fragmented.

The first roundtable that I attended was BPMN and BPDM, with Stephen White of IBM and Antoine Lonjon of MEGA.

There are insufficient books and tools for educating the community on how to use BPMN for different purposes. There is a requirement for a reference document to educate end-user organizations that is smaller and more understandable than the specification (possibly both a business-oriented primer and a technical reference). Stephen stated that additional reference documents will be available within a few months. There is an HTML version of the specification online at ModelDriven.org.

Small consulting organizations and independents can’t realistically get involved in standards creation so we’re always “users” of the specification. I didn’t raise this point, but do agree with it — paying my own travel expenses and missing out on days of revenue to attend standards meetings several times each year is just not in my budget.

BPM vendors are unlikely to replace their own internal model formats with BPDM, but will translate to/from BPDM. Vendors need to review and understand BPDM and how it maps between different representations. There is a need for BPMN/BPDM conformance testing and certification of BPA/BPM products.

BPDM gives BPMN credibility as a modelling format since the specification is now “complete”. There was a great deal of discussion, both in this session and at other times during the think tank, around this same point: that BPMN was rushed out without a serialization format, and that may have been a short-term mistake. One person at the table was concerned that combining BPMN and BPDM, and thereby increasing complexity, may be a mistake.

A comment that Phil Gilbert left on my TIBCO webinar Q&A post made a valid point about how there are two main use cases for BPMN: non-executable process mapping and analysis by business analysts, and “visual coding” to create an executable process. We discussed this a bit at the roundtable, particularly around how business analysts could use the basic shapes (i.e., skip some of the internal graphic symbols that distinguish between different flavours of the shapes) and hence might benefit from a much simpler training program to get started. There was some discussion about how far up the chain BPMN will or can be used for modelling businesses, e.g., whether it can be extended to strategy and goals or whether that’s more the mandate of BMM (the Business Motivation Model).

I had an interesting side conversation with Antoine after the roundtable ended about adoption patterns for BPMN and BPDM. Although standards organizations tend to have the “if you build it, they will come” attitude towards standards adoption, I believe that there needs to be some good reasons put forward for why BPDM provides benefits to the end customer and for the BPM vendors before we can expect to see widespread adoption.

BPM Think Tank Day 1: Modeling Notations/Metamodels

For the next two sessions, we’ve split into two tracks, business and technical, and I’m in the technical track where Stephen White of IBM (the “father of BPMN”) is talking about modeling notations and metamodels, namely, BPMN, BPDM and UML.

White started out by listing all of the process-related standards both within OMG, and those external to OMG, such as BPEL, XPDL, ebXML BPSS (ebBP) and WS-CDL. I’m starting to think that they missed a great opportunity at lunch with the vegetable soup: a few letter-shaped noodles and we would have had alphabet soup for lunch as well as the dose of it that we’re having now. 🙂

He then focussed specifically on the three key process standards within OMG: UML, BPMN and BPDM.

UML’s been around quite a while; I know it primarily as a way to model software development concepts, and have never been happy with the attempts to shoehorn it into business analyst usages, since it is difficult to explain the visual syntax of some UML diagrams to business users when they need to review them. UML activity models were added as a variation of state diagrams, and were beefed up with some business semantics to allow business analysts to model business processes, but they’re not as functional as BPMN diagrams for business process models and have pretty much been replaced in that area by BPMN; I expect that these days they’re used more for modelling flows within software (by developers) than for business processes.

BPMI first developed BPML (an XML process execution language), which was later replaced by BPEL, and realized that a graphical notation standard was required, leading to the development of BPMN. BPMN was developed to be usable by the business community, and to be able to generate executable processes (through mapping to a language such as BPEL) by providing not just graphical elements, but also the attributes for those elements. BPMN is intended to be methodology-agnostic, and to allow a business analyst to model a process in as simple or as complex an amount of detail as they deem suitable for the application.
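The idea of graphical elements carrying non-graphical attributes can be illustrated with a toy sketch; the attribute names, the service task and the rendering function below are all hypothetical, and this is not the actual BPMN-to-BPEL mapping defined in the specifications — only an illustration of why those attributes are what make a diagram exportable to an executable language:

```python
# Illustrative sketch only (hypothetical attribute names, not the real
# BPMN-to-BPEL mapping): a BPMN shape carries attributes beyond its
# graphics, which an export tool can use to emit executable code such
# as a BPEL <invoke> element.

def task_to_bpel_invoke(task):
    """Render a BPMN service task's attributes as a BPEL-style invoke."""
    return ('<invoke partnerLink="{partner}" operation="{operation}" '
            'inputVariable="{input}"/>').format(**task)

task = {
    "partner": "CreditCheckService",   # hypothetical attribute values
    "operation": "checkCredit",
    "input": "creditRequest",
}

print(task_to_bpel_invoke(task))
# <invoke partnerLink="CreditCheckService" operation="checkCredit" inputVariable="creditRequest"/>
```

Without those attributes filled in, the same shape is still a perfectly good picture for the business audience — it just can’t be turned into something executable.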

White covered the basics of BPMN: the four core elements of activities, events, gateways and connectors represented as shapes, then variations on each of those, such as the border type and thickness of an event to indicate whether it is a start, intermediate or end event. I covered some of this in my recent webinar on business process modelling, or there are tons of more detailed BPMN references around the web, including Bruce Silver’s BPMN course.

This is turning into a bit too much of a BPMN primer, but I’m hanging on for the BPDM part. Also, I’m in the middle of the second row and can’t exactly sneak out unnoticed.

There seems to be a huge point of confusion among some of the audience members over pools and lanes in the swimlane elements of BPMN, and when to use a pool versus a lane; this seems like one of the most obvious things in the standard, so I’m not sure why it’s a problem. Pools, in general, represent separate spheres of control; a business process starts, ends and has all of its elements within a single pool. Pools are often used to represent separate organizations in a B2B process representation. Lanes are sub-partitions of pools, usually used to indicate an organizational role or department, or even specific systems that participate in the process in some way; the elements of a business process will be in different lanes of the same pool to indicate where (or by whom) each element is executed. Only message flows pass between elements in different pools, which implies a level of asynchronicity, whereas sequence flows are used to connect activities, gateways and events within the same pool, whether in the same lane or not.
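For anyone who still finds the pool/lane distinction slippery, here’s a minimal sketch in Python — not any vendor’s API, and with made-up pool names — of the connection rules just described:

```python
# Sketch of the BPMN pool rules: sequence flows must stay inside one pool,
# while message flows must cross pool boundaries. Pool names are hypothetical.

def flow_is_valid(flow_type, source_pool, target_pool):
    """Check a connection between two elements against the BPMN pool rules."""
    if flow_type == "sequence":
        # Sequence flows connect elements within the same pool,
        # regardless of which lane each element sits in.
        return source_pool == target_pool
    if flow_type == "message":
        # Only message flows pass between elements in different pools.
        return source_pool != target_pool
    raise ValueError(f"unknown flow type: {flow_type}")

# A task in the "Customer" pool can message a task in the "Supplier" pool...
assert flow_is_valid("message", "Customer", "Supplier")
# ...but cannot be connected to it by a sequence flow.
assert not flow_is_valid("sequence", "Customer", "Supplier")
# Within one pool, sequence flows are fine, even across lanes.
assert flow_is_valid("sequence", "Supplier", "Supplier")
```

Lanes don’t appear in the check at all, which is really the point: lanes only say who does a step, while pools determine which kind of flow is even legal.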

BPMN 1.1 was just completed, with a few notational changes:

  • Signal, a new event type, denoted by a black triangle within the event circle shape. A signal is used to broadcast a specific state to other processes outside the immediate scope.
  • Reduction in scope for link events, because of the inclusion of the signal event; these are now basically goto events within a process.
  • Visual distinction between events that throw and catch, indicated by whether the internal icon is filled or an outline.
  • Rule event is now called “conditional”.
  • Icon for multiple events is a pentagon rather than a 6-pointed star.

Nine minutes after the session was supposed to end, we finally start on BPDM, so I think that this is going to be quick. I’m starting to understand why standards are never released on schedule…

BPDM was started in 2003 as a metamodel of business processes, initially without a notation and later aligned with BPMN. It’s intended to support the specification of multi-party choreography (think of message flows between pools) as well as process orchestration: basically, orchestration is what goes on inside a pool, that is, internal business processes; choreography is what happens between pools, that is, B2B interactions. BPMN 2.0, which will include BPDM, will update how choreography processes are modelled.

BPDM, as the metamodel, defines the meaning of the notation and provides the standardized structure behind it that allows for translation between different modelling languages. In BPMN 2.0, the name will change to Business Process Model and Notation to indicate the inclusion of the BPDM metamodel into BPMN 2.0.

Webinar Q&A

I gave a webinar last week, sponsored by TIBCO, on business process modeling; you’ll be able to find a replay of the webinar, complete with the slides, here. Here are the questions that we received during the webinar that I didn’t have time to answer on the air:

Q: Any special considerations for “long-running” processes – tasks that take weeks or months to complete?

A: For modeling long-running processes, there are a few considerations. First, you need to be sure that you’re capturing sufficient information in the process model to allow the processes to be monitored adequately, since these processes may represent risk or revenue that must be accounted for in some way. Second, you need to ensure that you’re building in the right triggers to release the processes from any hold state, and that there’s some sort of manual override if a process needs to be released from the hold state early due to unforeseen events. Third, you need to consider what happens when your process model changes while processes are in flight, and whether those processes need to be updated to the new process model or continue on their existing path; this may require some decisions within the process that are based on a process version, for example.

Q: Do you have a recommendation for a requirements framework that guides analysts on these considerations, e.g. PRINCE2?

A: I find most of the existing requirements frameworks, such as use cases, not sufficiently process-oriented to be of much use with business process modeling. Note that PRINCE2 is a project management methodology, not a requirements framework.

Q: The main value proposition of SOA is widely believed to be service reuse. Some of the early adopters of SOA, though, have stated that they are only reusing a small number of services. Does this impact the value of the investment?

A: There’s been a lot written about the “myth” of service reuse, and it has proved to be more elusive than many people thought. There are a few different philosophies of service design that likely impact the level of reuse: some people believe in building all the services first, in isolation from any calling applications, whereas others believe in building only the services that are required to meet a specific application’s needs. If you do the former, then there’s a chance that you will build services that no one actually needs — unlike Field of Dreams, if you build it, they may not come. If you do the latter, then your chance of service reuse is greatly reduced, since you’re effectively building single-purpose services that will be useful to another application only by chance.

The best method is more of a hybrid approach: start with a general understanding of the services required by your key applications, and apply some good old-fashioned architectural/design common sense to map out a set of services that will maximize reusability without placing an undue burden on the calling applications. By considering the requirements of more than one application during this exercise, you will at least be forcing yourself to consider some level of reusability. There are a lot of arguments about how granular is too granular for services; again, that’s mostly a matter that can be resolved with some design/development experience and some common sense. It’s not, for that matter, fundamentally different from developing libraries of functions like we used to do in code (okay, like I used to do in code) — it’s only the calling mechanism that’s different; the principles of reusability and granularity have not changed. If you designed and built reusable function libraries in the past, then you probably have a lot of the knowledge that you need to design — at least at a conceptual level — reusable services. If you haven’t built reusable function libraries or services in the past, then find yourself a computer science major or computer engineer who has.

Once you have your base library of services, things start getting more interesting, since you need to make sure that you’re not rewriting services that already exist for each new application. That means that the services must be properly documented so that application designers and analysts are aware of their existence and functionality; they must provide backwards compatibility so that if new functionality is added into a service, it still works for existing applications that call it (without modifying or recompiling those applications); and most important of all, the team responsible for maintaining and creating new services must be agile enough to be able to respond to the requirements of application architects/designers who need new or modified services.

As I mentioned on the webinar, SOA is a great idea but it’s hard to justify the cost unless you have a “killer application” like BPM that makes use of the services.

Q: Can the service discovery part be completely automated… meaning no human interaction? Not just discovery, but service usage as well?

A: If services are registered in a directory (e.g., UDDI), then theoretically it’s possible to discover and use them in an automated fashion, although the difficulty lies in determining which service parameters map to which internal parameters in the calling application. It may be possible to make some of these connections based on name and parameter type, but every BPMS that I’ve seen requires that you manually hook up services to the process data fields at the point where the service is called.
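To show why name-and-type matching only gets you partway, here’s a toy sketch — all field and parameter names are hypothetical, and this is not how any real BPMS or UDDI client works — of automatically pairing a service’s parameters with a process’s data fields:

```python
# Hedged sketch of automated parameter matching: pair a service's input
# parameters with a process's data fields only where name AND type agree.
# Anything left unmatched still needs a human in a mapping UI.

def auto_map(service_params, process_fields):
    """Return {service parameter -> process field} for exact name/type matches."""
    mapping = {}
    for pname, ptype in service_params.items():
        if process_fields.get(pname) == ptype:  # same name, same type
            mapping[pname] = pname
    return mapping

service_params = {"customer_id": "string", "amount": "decimal"}
process_fields = {"customer_id": "string", "amount": "integer", "notes": "string"}

# Only customer_id matches on both name and type; amount differs in type,
# so that connection would still have to be resolved manually.
print(auto_map(service_params, process_fields))  # {'customer_id': 'customer_id'}
```

Even this optimistic version silently drops the `amount` parameter over a type mismatch, which is exactly the kind of ambiguity that keeps the hook-up step manual in practice.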

Q: I’d be interested to know if you’re aware of a solid intro or training in the use and application of BPMN. I’ve only found general intros that tend to use the examples in the standard.

A: Bruce Silver offers a comprehensive course in BPMN, which I believe is available as either an online or classroom course.

Q: Does Data Object mean adding external documentation like a Word document into the BPM flow?

A: The origin of the data object is, in part, to serve the requirements of document-centric BPM, where the data object may represent a document (electronic, scanned paper, or a physical paper document) that travels with the workflow. Data objects can be associated with a sequence flow object — the arrows that indicate the flow in a process map — to show that the data artifact moves along that path, or can be shown as inputs and outputs to a process to show that the process acts on that data object. In general, the data object would not be documentation about the process, but would be specific to each instance of the process.

Q: Where is the BPMN standard found?

A: BPMN is now maintained by OMG, although they still link through to the original BPMN website.

Q: What is the output of a BPMN process definition? Any standard file types?

A: BPMN does not specify a file type, and as I mentioned in the webinar, there are three main file formats that may be used. The most commonly used by BPA and BPM vendors, including TIBCO, is XPDL (XML Process Definition Language) from the Workflow Management Coalition. BPEL (Business Process Execution Language) from OASIS has gained popularity in the past year or so, but since it was originally designed as a web service orchestration language, it doesn’t support all of the BPMN constructs, so there may be some loss of information when mapping from BPMN into BPEL. BPDM (Business Process Definition Metamodel), a soon-to-be-released standard from OMG, promises to do everything that XPDL does and more, although it will be a while before its level of adoption nears that of XPDL.
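To give a flavour of what an XPDL-style interchange file contains, here’s a rough sketch of reading one. The fragment below is illustrative only — the real XPDL schema from WfMC is far richer and uses its own XML namespace — but the shape (a package of processes, each with activities and transitions) is the essence of the format:

```python
# Parse a simplified XPDL-like fragment and list its processes and activities.
# Element names follow the XPDL style, but this is a hypothetical, namespace-
# free fragment, not a conformant XPDL document.
import xml.etree.ElementTree as ET

xpdl_fragment = """
<Package Id="LoanPackage">
  <WorkflowProcesses>
    <WorkflowProcess Id="Approval" Name="Loan Approval">
      <Activities>
        <Activity Id="A1" Name="Review Application"/>
        <Activity Id="A2" Name="Approve Loan"/>
      </Activities>
      <Transitions>
        <Transition Id="T1" From="A1" To="A2"/>
      </Transitions>
    </WorkflowProcess>
  </WorkflowProcesses>
</Package>
"""

root = ET.fromstring(xpdl_fragment)
for proc in root.iter("WorkflowProcess"):
    activities = [a.get("Name") for a in proc.iter("Activity")]
    print(proc.get("Name"), activities)
# Loan Approval ['Review Application', 'Approve Loan']
```

The reason XPDL round-trips BPMN models better than BPEL is visible even here: transitions are stored as an arbitrary graph (`From`/`To` pairs), whereas BPEL wants a structured block orientation that not every BPMN diagram can be squeezed into.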

Q: What’s the proper perspective BPM implementers should have on BPMN, XPDL, BPEL, BPEL4People, and BPDM?

A: To sum up from the previous answer: BPMN is the only real contender as a process notation standard, and should be used whenever possible; XPDL is the current de facto standard for interchange of BPMN models between tools; BPDM is an emerging standard to watch that may eventually replace XPDL; BPEL is a web service orchestration language (rarely actually used as an execution language in spite of its name); and BPEL4People is a proposed extension to BPEL that’s trying to add in the ability to handle human-facing tasks, and the only standard that universally causes laughter when I name it aloud. This is, of course, my opinion; people from the integration camp will disagree — likely quite vociferously — with my characterization of BPEL, and those behind the BPDM standard will encourage us all to cast out our XPDL and convert immediately. Realistically, however, XPDL is here to stay for a while as an interchange format, and if you’re modeling with BPMN, then your tools should support XPDL if you plan to exchange process models between tools.

I’m headed for the BPM Think Tank next week, where all of these standards will be discussed, so stay tuned for more information.

Q: How would one link the business processes to the data elements or would this be a different artifact altogether?

A: The BPMN standard allows for the modeler to define custom properties, or data elements, with the scope depending on where the properties are defined: when defined at the process level, the properties are available to the tasks, objects and subprocesses within that process; when defined at the activity level, they’re local to that activity.
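A quick sketch of that scoping rule, with entirely hypothetical class and property names (BPMN itself defines no API for this — this is just the visibility rule from the answer above, expressed as code):

```python
# Property scoping as described above: properties defined on the process are
# visible to its activities; properties defined on an activity are local to it.

class Process:
    def __init__(self, properties):
        self.properties = properties  # process-level properties

class Activity:
    def __init__(self, name, process, properties=None):
        self.name = name
        self.process = process
        self.local = properties or {}  # activity-level (local) properties

    def lookup(self, prop):
        # Local properties take precedence; otherwise fall back to the process.
        if prop in self.local:
            return self.local[prop]
        return self.process.properties.get(prop)

p = Process({"customer_id": "C-123"})
a = Activity("Review", p, {"reviewer": "alice"})
b = Activity("Approve", p)

print(a.lookup("customer_id"))  # C-123 (inherited from the process scope)
print(a.lookup("reviewer"))     # alice (local to the Review activity)
print(b.lookup("reviewer"))     # None (not defined here or on the process)
```

Linking to the actual data model behind those properties — a customer master file, say — is indeed usually a separate artifact, with the process properties just carrying the keys.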

Q: I’ve seen some swim lane diagrams that confuse more than illuminate – lacking specific BPMN rules, do you have any personal usage recommendations?

A: Hard to say, unless you state what in particular you find confusing. Sometimes there is a tendency to try to put everything in one process map instead of using subprocesses to simplify things — an overly cluttered map is bound to be confusing. I’d recommend a high-level process map with a relatively small number of steps and few explicit data objects to show the overall process flow, where each of those steps might drill down into a subprocess for more detail.

Q: We’ve had problems in the past trying to model business processes at a level that’s too granular. We ended up making a distinction between workflow and screen flow. How would you determine the appropriate level of modeling in BPM?

A: This is likely asking a similar question to the previous one, that is, how to keep process maps from becoming too confusing, which is usually a result of too much detail in a single map. I have a lot of trouble with the concept of “screen flow” as it pertains to process modeling, since you should be modeling tasks, not system screens: including the screens in your process model implies that there’s not another way to do this, when in fact there may be a way to automate some steps that will completely eliminate the use of some screens. In general, I would model human tasks at a level where a task is done by a single person and represents some sort of atomic function that can’t be split between multiple people; a task may require that several screens be visited on a legacy system.

For example, in mutual funds transaction processing (a particular favorite of mine), there is usually a task “process purchase transaction” that indicates that a person enters the mutual fund purchase information to their transaction processing system. In one case, that might mean that they visit three different green screens on their legacy system. Or, if someone wrote a nice front-end to the legacy system, it might mean that they use a single graphical screen to enter all the data, which pushes it to the legacy system in the background. In both cases, the business process is the same, and should be modeled as such. The specific screens that they visit at that task in order to complete the task — i.e., the “screen flow” — shouldn’t be modeled as explicit separate steps, but would exist as documentation for how to execute that particular step.

Q: The military loves to be able to do self-service, can you elaborate on what is possible with that?

A: Military self-service, as in “the military just helped themselves to Poland?” 🙂 Seriously, BPM can enable self-service because it allows anyone to participate in part of a process while monitoring what’s happening at any given step. That allows you to create steps that flow out to anyone in the organization or even, with appropriate network security, to external contractors or other participants. I spoke in the webinar about creating process improvement by disintermediation; this is exactly what I was referring to, since you can remove the middle-man by allowing someone to participate directly in the process.

Q: In the real world, how reliable are business process simulations in predicting actual cycle times and throughput?

A: (From Emily) It really depends on the accuracy of your information about the averages of your cycles. If they are relatively accurate, then it can be useful. Additionally, simulation can be useful in helping you to identify potential problems, e.g. breakpoints of volume that cause significant bottlenecks given your average cycle times.

I would add that one of the most difficult things to estimate is the arrival time of new process instances, since rarely do they follow those nice even distributions that you see when vendors demonstrate simulation. If you can use actual historical data for arrivals in the simulation, it will improve the accuracy considerably.
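To make that concrete, here’s a minimal single-server queue sketch — all numbers hypothetical, and far simpler than what a real simulation engine does — comparing smooth demo-style arrivals against bursty “historical” arrivals with the same average rate:

```python
# Compare average cycle time under smooth (exponential) arrivals versus
# bursty arrivals with the same mean inter-arrival time. Service time is
# fixed; a real simulation would vary it and model multiple servers.
import random

def simulate(interarrivals, service_time):
    """Average cycle time (wait + service) over a stream of arrivals."""
    clock = free_at = total = 0.0
    for gap in interarrivals:
        clock += gap                 # next process instance arrives
        start = max(clock, free_at)  # it waits if the server is busy
        free_at = start + service_time
        total += free_at - clock     # cycle time for this instance
    return total / len(interarrivals)

random.seed(1)
# Demo-style arrivals: exponential gaps, one instance every 10 minutes on average.
smooth = [random.expovariate(1 / 10) for _ in range(1000)]
# "Historical" arrivals: same average rate, but bursts of ten instances one
# minute apart (think of the morning mail opening), then a long quiet gap.
bursty = [1 if i % 10 else 91 for i in range(1000)]

print(simulate(smooth, 8))  # cycle time under smooth arrivals
print(simulate(bursty, 8))  # instances within each burst queue up behind each other
```

Both streams average one arrival every 10 minutes against an 8-minute service time, yet the bursty stream produces very different queueing behaviour — which is exactly why plugging in real historical arrival data beats assuming a tidy distribution.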

Q: Would you have multiple lanes for one system? I.e., a legacy system that has many applications in it, and therefore many lanes in the legacy pool?

A: It depends on how granular you want to be in modeling your systems, and whether the multiple systems are relevant to the process analysis efforts. If you’re looking to replace some of those systems as part of the improvement efforts, or if you need to model the interactions between the systems, then definitely model them separately. If the applications are treated as a single monolithic system for the purposes of the analysis, then you may not need to break them out.

Q: Do you initially model the current process as-is in the modeling tool?

A: I would recommend that you at least do some high-level process modeling of your existing process. First of all, you need to establish the metrics that you’re using for your ROI, and often these aren’t evident until you map out your process. Secondly, you may want to run simulations in the modeling tool on the existing process to verify your assumptions about the bottlenecks and costs of the process, and to establish a baseline against which to compare the future-state process.

Q: Business managers: concerns – failure to achieve ROI?

A: I’m not exactly sure what this question means, but I assume that it relates to the slide near the end of the webinar that discusses role changes caused by BPM. Management and executives are most concerned with risk around a project, and they may have concerns that the ROI is too ambitious (either because the new technology fails or too many “soft” ROI factors were used in the calculation) and that the BPM project will fail to meet the promises that they’ve likely made to the layers of management above them. The right choice of ROI metrics can go a long way towards calming their fears, as can educating them on the significant benefits of process governance that will result from the implementation of BPM. Management will now have an unprecedented view of the current state and performance of the end-to-end process. They’ll also have more comprehensive departmental performance statistics without manual logging or cutting and pasting from several team reports.

Q: I am a manager in a MNC and I wanted to know how this can help me in my management. How can I use it in my daily management? One example please?

A: By “MNC” I assume that you mean “multi-national corporation”. The answer is no different from that for any other type of organization, except that you’re likely to be collaborating with other parts of your organization in other countries, and hence have the potential to see even greater benefits. One key area for improvement that can be identified with business process modeling, then implemented in a BPMS, is all of the functional redundancy that typically occurs in multi-nationals, particularly those that grow by acquisition. Many functional areas, both administrative/support and line-of-business, will be repeated in multiple locations, for no better reason than that it wasn’t possible to combine them before technology was brought to bear on the problem. Process modeling will allow you to identify areas that have the potential to be combined across different geographies, and BPM technology allows processes to flow seamlessly from one location to another.

Q: How much detail is allowed in a process diagram (such as the name of the supplier used in a purchase order process or if the manager should be notified via email or SMS to approve a loan)? Is process visibility preferred compared to good classic technical design, in the BPM world?

A: A placeholder for the name of a supplier would certainly be modeled using a property of the process, as would any other custom data elements. As for the channel used for notifying the manager, that might be something that the manager can select himself (optimally) rather than having that fixed by the process; I would consider that to be more of an implementation detail although it could be included in the process model.

I find your second question interesting, because it implies that there’s somehow a conflict between good design and process visibility. Good design starts with the high-level process functional design, which is the job of the analyst who’s doing the process modeling; this person needs to have analytical and design skills even though it’s unlikely that they do technical design or write code. Process visibility usually refers to the ability of people to see what’s happening within executing processes, which would definitely be the result of a good design, as opposed to something that has to be traded off against good design. I might be missing the point of your question; feel free to add a comment to clarify.

Q: Are there any frameworks to develop a BPM solution?

A: Typically, the use of a BPMS implies (or imposes) a framework of sorts on your BPM implementation. For example, you’re using their modeling tool to draw out your process map, which creates all the underpinnings of the executable process without you writing any code to do so. Similarly, you typically use a graphical mapping functionality to map the process parameters onto web services parameters, which in turn creates the technical linkages. Since you’re working in a near-zero-code environment, there’s no real technical framework involved beyond the BPMS itself. I have seen cases where misguided systems integrators create large “frameworks” — actually custom solutions that always require a great deal of additional customization — on top of a BPMS, which tends to demote the BPMS to a simple queuing system. Not recommended.

There were also a few questions specifically about TIBCO, for which Emily Burns (TIBCO’s marketing manager, who moderated the webinar) provided answers:

Q: Is TIBCO Studio compatible with Windows Vista?

A: No, Vista is not yet supported.

Q: Are there some examples of ROI from the industry verticals?

A: On TIBCO’s web site, there are a variety of case studies that discuss ROI here: http://www.tibco.com/solutions/bpm/customers.jsp. Additionally, these are broken down into some of the major verticals here: http://www.tibco.com/solutions/bpm/bpm_your_industry.jsp

Q: Is there any kind of repository or library of “typical” process? I’m particularly interested in clinical trials.

A: TIBCO’s modeling product ships with a large variety of sample processes aggregated by industry.

And lastly, my own personal favorite question and answer, answered by Emily:

Q: What’s the TLA for BPM+SOA?

A: RAD 🙂