BPM Think Tank Day 1: Modeling Notations/Metamodels

For the next two sessions, we’ve split into two tracks, business and technical, and I’m in the technical track where Stephen White of IBM (the “father of BPMN”) is talking about modeling notations and metamodels, namely, BPMN, BPDM and UML.

White started out by listing all of the process-related standards, both within OMG and external to it, such as BPEL, XPDL, ebXML BPSS (ebBP) and WS-CDL. I’m starting to think that they missed a great opportunity at lunch with the vegetable soup: a few letter-shaped noodles and we would have had alphabet soup for lunch as well as the dose of it that we’re having now. 🙂

He then focussed specifically on the three key process standards within OMG: UML, BPMN and BPDM.

UML’s been around quite a while; I know it primarily as a way to model software development concepts, and have never been happy with the attempts to shoehorn it into business analysis, since it is difficult to explain the visual syntax of some UML diagrams to business users when they need to review them. UML activity models were added as a variation of state diagrams, and were beefed up with some business semantics to allow business analysts to model business processes, but they’re not as functional as BPMN diagrams for business process models and have pretty much been replaced in that area by BPMN; I expect that they’re used mostly for modelling flows within software (by developers) rather than for business processes these days.

BPMI first developed BPML (an XML process execution language), which was later replaced by BPEL, and realized that a graphical notation standard was required, leading to the development of BPMN. BPMN was developed to be usable by the business community, and to be able to generate executable processes (through mapping to a language such as BPEL) by providing not just graphical elements, but the attributes for those elements. BPMN is intended to be methodology-agnostic, and to allow a business analyst to model a process at whatever level of detail they deem suitable for the application.

White covered the basics of BPMN: the four core elements of activities, events, gateways and connectors represented as shapes, then variations on each of those, such as the border type and thickness of an event to indicate if it is a start, intermediate or end event. I covered some of this in my recent webinar on business process modelling, or there are tons of more detailed BPMN references around the web, including Bruce Silver’s BPMN course.

This is turning into a bit too much of a BPMN primer, but I’m hanging on for the BPDM part. Also, I’m in the middle of the second row and can’t exactly sneak out unnoticed.

There seems to be a huge point of confusion among some of the audience members over pools and lanes in the swimlane elements of BPMN, and when to use a pool versus a lane; this seems like one of the most obvious things in the standard, so I’m not sure why this is a problem. Pools, in general, represent separate spheres of control: a business process starts, ends and has all of its elements within a single pool, and pools are often used to represent separate organizations in a B2B process representation. Lanes are sub-partitions of pools, usually used to indicate an organizational role or department, or even specific systems that participate in the process in some way; elements of a business process will be in different lanes of the same pool to indicate where (or by whom) each element is executed. Only message flows pass between elements in different pools, which implies a level of asynchronicity, whereas sequence flows connect activities, gateways and events within the same pool, whether in the same lane or not.
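To make those connection rules concrete, here’s a minimal sketch in Python (the pool, lane and task names are made up; it illustrates the rules above, not any particular BPMS API):

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    pool: str  # sphere of control, e.g., an organization
    lane: str  # sub-partition of a pool, e.g., a role or department

def validate_flow(kind: str, source: Element, target: Element) -> None:
    """Enforce the BPMN rules described above: sequence flows stay
    within one pool; message flows must cross pool boundaries."""
    same_pool = source.pool == target.pool
    if kind == "sequence" and not same_pool:
        raise ValueError("sequence flows may not cross pool boundaries")
    if kind == "message" and same_pool:
        raise ValueError("message flows only connect different pools")

# A customer pool and a supplier pool, each with lanes:
order = Element("Place order", pool="Customer", lane="Purchasing")
receive = Element("Receive order", pool="Supplier", lane="Sales")
approve = Element("Approve order", pool="Supplier", lane="Manager")

validate_flow("message", order, receive)     # OK: crosses pools (B2B)
validate_flow("sequence", receive, approve)  # OK: same pool, different lanes
```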

BPMN 1.1 was just completed, with a few notational changes:

  • Signal, a new event type, denoted by a black triangle within the event circle shape. A signal is used to broadcast a specific state to other processes outside the immediate scope.
  • Reduction in scope for link events, because of the inclusion of the signal event; these are now basically goto events within a process.
  • Visual distinction between events that throw and catch: a filled internal icon indicates a throwing event, while an outlined icon indicates a catching one.
  • Rule event is now called “conditional”.
  • Icon for multiple events is a pentagon rather than a 6-pointed star.

Nine minutes after the session was supposed to end, we finally start on BPDM, so I think that this is going to be quick. I’m starting to understand why standards are never released on schedule…

BPDM was started in 2003 as a metamodel of business processes, initially without a notation and later aligned with BPMN. It’s intended to support the specification of multi-party choreography (think of message flows between pools) as well as process orchestration: basically, orchestration is what goes on inside a pool, that is, internal business processes; choreography is what happens between pools, that is, B2B interactions. BPMN 2.0, which will include BPDM, will update how choreography processes are modelled.

BPDM, as the metamodel, defines the meaning of the notation and provides the standardized structure behind it that allows for translation between different modelling languages. In BPMN 2.0, the expansion of the BPMN acronym will change to Business Process Model and Notation to indicate the inclusion of the BPDM metamodel.

Webinar Q&A

I gave a webinar last week, sponsored by TIBCO, on business process modeling; you’ll be able to find a replay of the webinar, complete with the slides, here. Here are the questions that we received during the webinar and that I didn’t have time to answer on the air:

Q: Any special considerations for “long-running” processes – tasks that take weeks or months to complete?

A: For modeling long-running processes, there are a few considerations. First, you need to be sure that you’re capturing sufficient information in the process model to allow the processes to be monitored adequately, since these processes may represent risk or revenue that must be accounted for in some way. Second, you need to ensure that you’re building in the right triggers to release the processes from any hold state, and that there’s some sort of manual override if a process needs to be released from the hold state early due to unforeseen events. Third, you need to consider what happens when your process model changes while processes are in flight, and whether those processes need to be updated to the new process model or continue on their existing path; this may require some decisions within the process that are based on a process version, for example.
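For that third consideration, here’s a hedged sketch of what version-based routing might look like; the function and version numbers are hypothetical, not taken from any particular BPMS:

```python
CURRENT_MODEL_VERSION = 3  # hypothetical latest deployed process model

def resolve_model_version(instance_version: int, migratable: set) -> int:
    """Decide whether an in-flight instance moves to the new process
    model or finishes on the path it started with."""
    if instance_version in migratable:
        return CURRENT_MODEL_VERSION  # safe to migrate this instance
    return instance_version           # let it complete on the old model

# An instance started under version 2, where only version 2 can migrate:
print(resolve_model_version(2, migratable={2}))  # -> 3 (migrates)
print(resolve_model_version(1, migratable={2}))  # -> 1 (stays)
```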

Q: Do you have a recommendation for a requirements framework that guides analysts on these considerations, e.g. PRINCE2?

A: I find most of the existing requirements frameworks, such as use cases, not sufficiently process-oriented to be of much use with business process modeling. PRINCE2 is a project management methodology, not a requirements framework.

Q: The main value proposition of SOA is widely believed to be service reuse. Some of the early adopters of SOA, though, have stated that they are only reusing a small number of services. Does this impact the value of the investment?

A: There’s been a lot written about the “myth” of service reuse, and it has proved to be more elusive than many people thought. There are a few different philosophies of service design that likely impact the level of reuse: some people believe in building all the services first, in isolation from any calling applications, whereas others believe in building only those services that are required to meet a specific application’s needs. If you do the former, then there’s a chance that you will build services that no one actually needs — unlike Field of Dreams, if you build it, they may not come. If you do the latter, then your chance of service reuse is greatly reduced, since you’re effectively building single-purpose services that will be useful to another application only by chance.

The best method is more of a hybrid approach: start with a general understanding of the services required by your key applications, and apply some good old-fashioned architectural/design common sense to map out a set of services that will maximize reusability without placing an undue burden on the calling applications. By considering the requirements of more than one application during this exercise, you will at least be forcing yourself to consider some level of reusability. There are a lot of arguments about how granular is too granular for services; again, that’s mostly a matter that can be resolved with some design/development experience and some common sense. It’s not, for that matter, fundamentally different from developing libraries of functions like we used to do in code (okay, like I used to do in code) — it’s only the calling mechanism that’s different, but the principles of reusability and granularity have not changed. If you designed and built reusable function libraries in the past, then you probably have a lot of the knowledge that you need to design — at least at a conceptual level — reusable services. If you haven’t built reusable function libraries or services in the past, then find yourself a computer science major or computer engineer who has.

Once you have your base library of services, things start getting more interesting, since you need to make sure that you’re not rewriting services that already exist for each new application. That means that the services must be properly documented so that application designers and analysts are aware of their existence and functionality; they must provide backwards compatibility so that if new functionality is added into a service, it still works for existing applications that call it (without modifying or recompiling those applications); and most important of all, the team responsible for maintaining and creating new services must be agile enough to be able to respond to the requirements of application architects/designers who need new or modified services.

As I mentioned on the webinar, SOA is a great idea but it’s hard to justify the cost unless you have a “killer application” like BPM that makes use of the services.

Q: Can the service discovery part be completely automated… meaning no human interaction? Not just discovery, but service usage as well?

A: If services are registered in a directory (e.g., UDDI), then theoretically it’s possible to discover and use them in an automated fashion, although the difficulty lies in determining which service parameters are mapped to which internal parameters in the calling application. It may be possible to make some of these connections based on name and parameter type, but every BPMS that I’ve seen requires that you manually hook up services to the process data fields at the point that the service is called.
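As a rough illustration of that name-and-type matching (all of the names here are invented, and real BPMS mapping tools are considerably more involved):

```python
def auto_map(service_params: dict, process_fields: dict):
    """Auto-map service parameters to process data fields when both
    name and type agree; everything else needs manual hookup."""
    mapped, manual = {}, []
    for name, param_type in service_params.items():
        if process_fields.get(name) is param_type:
            mapped[name] = name     # same name, same type: connect it
        else:
            manual.append(name)     # a human has to decide
    return mapped, manual

service = {"accountId": str, "amount": float, "currency": str}
process = {"accountId": str, "amount": float, "ccy": str}
print(auto_map(service, process))
# -> ({'accountId': 'accountId', 'amount': 'amount'}, ['currency'])
```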

Q: I’d be interested to know if you’re aware of a solid intro or training in the use and application of BPMN. I’ve only found general intros that tend to use the examples in the standard.

A: Bruce Silver offers a comprehensive course in BPMN, which I believe is available as either an online or classroom course.

Q: Does Data Object mean adding external documentation like a Word document into the BPM flow?

A: The origin of the data object is, in part, to serve the requirements of document-centric BPM, where the data object may represent a document (electronic, scanned paper, or a physical paper document) that travels with the workflow. Data objects can be associated with a sequence flow object — the arrows that indicate the flow in a process map — to show that the data artifact moves along that path, or can be shown as inputs and outputs to a process to show that the process acts on that data object. In general, the data object would not be documentation about the process, but would be specific to each instance of the process.

Q: Where is the BPMN standard found?

A: BPMN is now maintained by OMG, although they still link through to the original BPMN website.

Q: What is the output of a BPMN process definition? Any standard file types?

A: BPMN does not specify a file type, and as I mentioned in the webinar, there are three main file formats that may be used. The most commonly used by BPA and BPM vendors, including TIBCO, is XPDL (XML Process Definition Language) from the Workflow Management Coalition. BPEL (Business Process Execution Language) from OASIS has gained popularity in the past year or so, but since it was originally designed as a web service orchestration language, it doesn’t support all of the BPMN constructs, so there may be some loss of information when mapping from BPMN into BPEL. BPDM (Business Process Definition Metamodel), a soon-to-be-released standard from OMG, promises to do everything that XPDL does and more, although it will be a while before the level of adoption nears that of XPDL.
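To give a feel for what an XPDL serialization looks like, here’s a toy example generated in Python; the element names follow the general shape of XPDL, but this is a simplified illustration, not a schema-valid XPDL 2.0 document:

```python
import xml.etree.ElementTree as ET

# Build a tiny process definition: two activities joined by a transition.
pkg = ET.Element("Package", Id="demo", Name="Order Handling")
proc = ET.SubElement(pkg, "WorkflowProcess", Id="p1", Name="Approve Order")

acts = ET.SubElement(proc, "Activities")
ET.SubElement(acts, "Activity", Id="a1", Name="Review order")
ET.SubElement(acts, "Activity", Id="a2", Name="Approve order")

trans = ET.SubElement(proc, "Transitions")
ET.SubElement(trans, "Transition", Id="t1", From="a1", To="a2")

print(ET.tostring(pkg, encoding="unicode"))
```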

Q: What’s the proper perspective BPM implementers should have on BPMN, XPDL, BPEL, BPEL4People, and BPDM?

A: To sum up from the previous answer: BPMN is the only real contender as a process notation standard, and should be used whenever possible; XPDL is the current de facto standard for interchange of BPMN models between tools; BPDM is an emerging standard to watch that may eventually replace XPDL; BPEL is a web service orchestration language (rarely actually used as an execution language in spite of its name); and BPEL4People is a proposed extension to BPEL that’s trying to add in the ability to handle human-facing tasks, and the only standard that universally causes laughter when I name it aloud. This is, of course, my opinion; people from the integration camp will disagree — likely quite vociferously — with my characterization of BPEL, and those behind the BPDM standard will encourage us all to cast out our XPDL and convert immediately. Realistically, however, XPDL is here to stay for a while as an interchange format, and if you’re modeling with BPMN, then your tools should support XPDL if you plan to exchange process models between tools.

I’m headed for the BPM Think Tank next week, where all of these standards will be discussed, so stay tuned for more information.

Q: How would one link the business processes to the data elements or would this be a different artifact altogether?

A: The BPMN standard allows the modeler to define custom properties, or data elements, with the scope depending on where the properties are defined: when defined at the process level, the properties are available to the tasks, objects and subprocesses within that process; when defined at the activity level, they’re local to that activity.
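A minimal sketch of that scoping rule, with hypothetical property names (this mimics the lookup order described above, not any specific BPMS implementation):

```python
process_props = {"customerId": "C-1001", "region": "EMEA"}  # process scope
activity_props = {"retryCount": 0}  # local to a single activity

def lookup(name):
    """Resolve a property the way an activity would see it:
    local activity scope first, then the enclosing process."""
    if name in activity_props:
        return activity_props[name]
    return process_props[name]

print(lookup("retryCount"))  # 0, defined on the activity itself
print(lookup("customerId"))  # 'C-1001', inherited from the process
```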

Q: I’ve seen some swim lane diagrams that confuse more than illuminate – lacking specific BPMN rules, do you have any personal usage recommendations?

A: Hard to say, unless you state what in particular you find confusing. Sometimes there is a tendency to try to put everything in one process map instead of using subprocesses to simplify things — an overly-cluttered map is bound to be confusing. I’d recommend a high-level process map with a relatively small number of steps and few explicit data objects to show the overall process flow, where each of those steps might drill down into a subprocess for more detail.

Q: We’ve had problems in the past trying to model business processes at a level that’s too granular. We ended up making a distinction between workflow and screen flow. How would you determine the appropriate level of modeling in BPM?

A: This is likely asking a similar question to the previous one, that is, how to keep process maps from becoming too confusing, which is usually a result of too much detail in a single map. I have a lot of trouble with the concept of “screen flow” as it pertains to process modeling, since you should be modeling tasks, not system screens: including the screens in your process model implies that there’s not another way to do this, when in fact there may be a way to automate some steps that will completely eliminate the use of some screens. In general, I would model human tasks at a level where a task is done by a single person and represents some sort of atomic function that can’t be split between multiple people; a task may require that several screens be visited on a legacy system.

For example, in mutual funds transaction processing (a particular favorite of mine), there is usually a task “process purchase transaction” that indicates that a person enters the mutual fund purchase information into their transaction processing system. In one case, that might mean that they visit three different green screens on their legacy system. Or, if someone wrote a nice front-end to the legacy system, it might mean that they use a single graphical screen to enter all the data, which pushes it to the legacy system in the background. In both cases, the business process is the same, and should be modeled as such. The specific screens that they visit at that task in order to complete the task — i.e., the “screen flow” — shouldn’t be modeled as explicit separate steps, but would exist as documentation for how to execute that particular step.

Q: The military loves to be able to do self-service, can you elaborate on what is possible with that?

A: Military self-service, as in “the military just helped themselves to Poland?” 🙂 Seriously, BPM can enable self-service because it allows anyone to participate in part of a process while monitoring what’s happening at any given step. That allows you to create steps that flow out to anyone in the organization or even, with appropriate network security, to external contractors or other participants. I spoke in the webinar about creating process improvement by disintermediation; this is exactly what I was referring to, since you can remove the middle-man by allowing someone to participate directly in the process.

Q: In the real world, how reliable are business process simulations in predicting actual cycle times and throughput?

A: (From Emily) It really depends on the accuracy of your information about the averages of your cycles. If they are relatively accurate, then it can be useful. Additionally, simulation can be useful in helping you to identify potential problems, e.g. breakpoints of volume that cause significant bottlenecks given your average cycle times.

I would add that one of the most difficult things to estimate is the arrival time of new process instances, since rarely do they follow those nice even distributions that you see when vendors demonstrate simulation. If you can use actual historical data for arrivals in the simulation, it will improve the accuracy considerably.
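A quick sketch of the difference, using invented historical data: textbook simulations draw inter-arrival times from a smooth distribution, while resampling your own history preserves the bursts and lulls that actually stress a process:

```python
import random

# Minutes between successive arrivals, taken (hypothetically) from history;
# note the bursts (0.3-0.5) and the lulls (9.0, 12.0).
historical_gaps = [2.0, 0.5, 0.4, 9.0, 1.1, 0.3, 12.0]

def next_gap_textbook(mean_gap):
    # The smooth exponential arrivals you see in vendor demos.
    return random.expovariate(1.0 / mean_gap)

def next_gap_empirical():
    # Bootstrap-style resampling from actual historical data.
    return random.choice(historical_gaps)

random.seed(42)
print([round(next_gap_textbook(3.6), 1) for _ in range(5)])
print([next_gap_empirical() for _ in range(5)])
```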

Q: Would you have multiple lanes for one system? I.e., a legacy system that has many applications in it, and therefore many lanes in the legacy pool?

A: It depends on how granular you want to be in modeling your systems, and whether the multiple systems are relevant to the process analysis efforts. If you’re looking to replace some of those systems as part of the improvement efforts, or if you need to model the interactions between the systems, then definitely model them separately. If the applications are treated as a single monolithic system for the purposes of the analysis, then you may not need to break them out.

Q: Do you initially model the current process as-is in the modeling tool?

A: I would recommend that you at least do some high-level process modeling of your existing process. First of all, you need to establish the metrics that you’re using for your ROI calculation, and often these aren’t evident until you map out your process. Secondly, you may want to run simulations in the modeling tool on the existing process to verify your assumptions about the bottlenecks and costs of the process, and to establish a baseline against which to compare the future-state process.

Q: Business Managers : concerns – failure to achieve ROI ?

A: I’m not exactly sure what this question means, but assume that it relates to the slide near the end of the webinar that discusses role changes caused by BPM. Management and executives are most concerned with risk around a project, and they may have concerns that the ROI is too ambitious (either because the new technology fails or because too many “soft” ROI factors were used in the calculation) and that the BPM project will fail to meet the promises that they’ve likely made to the layers of management above them. The right choice of ROI metrics can go a long way toward calming their fears and educating them on the significant benefits of process governance that will result from the implementation of BPM. Management will now have an unprecedented view of the current state and performance of the end-to-end process. They’ll also have more comprehensive departmental performance statistics without manual logging or cutting and pasting from several team reports.

Q: I am a manager in a MNC and I wanted to know how this can help me in my management. How can I use it in my daily management? One example please?

A: By “MNC” I assume that you mean “multi-national corporation”. The answer is no different from that for any other type of organization, except that you’re likely to be collaborating with other parts of your organization in other countries, and hence have the potential to see even greater benefits. One key area for improvement that can be identified with business process modeling, then implemented in a BPMS, is all of the functional redundancy that typically occurs in multi-nationals, particularly those that grow by acquisition. Many functional areas, both administrative/support and line-of-business, will be repeated in multiple locations, for no better reason than that it wasn’t possible to combine them before technology was brought to bear. Process modeling will allow you to identify areas that have the potential to be combined across different geographies, and BPM technology allows processes to flow seamlessly from one location to another.

Q: How much detail is allowed in a process diagram (such as the name of the supplier used in a purchase order process or if the manager should be notified via email or SMS to approve a loan)? Is process visibility preferred compared to good classic technical design, in the BPM world?

A: A placeholder for the name of a supplier would certainly be modeled using a property of the process, as would any other custom data elements. As for the channel used for notifying the manager, that might optimally be something that the manager can select himself rather than having it fixed by the process; I would consider that to be more of an implementation detail, although it could be included in the process model.

I find your second question interesting, because it implies that there’s somehow a conflict between good design and process visibility. Good design starts with the high-level process functional design, which is the job of the analyst who’s doing the process modeling; this person needs to have analytical and design skills even though it’s unlikely that they do technical design or write code. Process visibility usually refers to the ability of people to see what’s happening within executing processes, which would definitely be the result of a good design, as opposed to something that has to be traded off against good design. I might be missing the point of your question; feel free to add a comment to clarify.

Q: Are there any frameworks to develop a BPM solution?

A: Typically, the use of a BPMS implies (or imposes) a framework of sorts on your BPM implementation. For example, you’re using their modeling tool to draw out your process map, which creates all the underpinnings of the executable process without you writing any code to do so. Similarly, you typically use a graphical mapping functionality to map the process parameters onto web services parameters, which in turn creates the technical linkages. Since you’re working in a near-zero-code environment, there’s no real technical framework involved beyond the BPMS itself. I have seen cases where misguided systems integrators create large “frameworks” — actually custom solutions that always require a great deal of additional customization — on top of a BPMS; these tend to demote the BPMS to a simple queuing system. Not recommended.

There were also a few questions specifically about TIBCO, for which Emily Burns (TIBCO’s marketing manager, who moderated the webinar) provided answers:

Q: Is TIBCO Studio compatible with Windows Vista?

A: No, Vista is not yet supported.

Q: Are there some examples of ROI from the industry verticals?

A: On TIBCO’s web site, there are a variety of case studies that discuss ROI here: http://www.tibco.com/solutions/bpm/customers.jsp. Additionally, these are broken down into some of the major verticals here: http://www.tibco.com/solutions/bpm/bpm_your_industry.jsp

Q: Is there any kind of repository or library of “typical” process? I’m particularly interested in clinical trials.

A: TIBCO’s modeling product ships with a large variety of sample processes aggregated by industry.

And lastly, my own personal favorite question and answer, answered by Emily:

Q: What’s the TLA for BPM+SOA?

A: RAD 🙂

Survey on BPMN

You may have seen this announced on other BPM blogs, but there’s currently a survey out on the use of, and satisfaction with, BPMN by process modellers. This is part of a PhD research project by Jan Recker at the Queensland University of Technology in Brisbane, Australia (a city that I remember fondly, in spite of the fact that it was pouring rain last time that I was there).

As a perk for completing the survey, you’ll get the summarized results of the survey, plus access to recent studies on BPMN, so it’s worth doing if you’re using BPMN. The details, from Jan’s request to me:

BPMN is gaining huge momentum in practitioner communities, to the point that even those vendors who were initially reluctant to adopt it can no longer completely ignore it. But what exactly are the factors that drive this acceptance? How satisfied are end users of BPMN with the notation? Do user experiences with BPMN match those reported by BPA tool vendors?

Jan Recker from the BPM Research Group at Queensland University of Technology is undertaking a worldwide survey on the use of BPMN by process modellers to shed light on these questions. You can help Jan by completing the survey available here:

http://www.bpm.fit.qut.edu.au/projects/acceptance/survey/BPMN/.

The best way to contact Jan is via email: [email protected]

I’m hoping that if I publish his request, maybe they’ll sponsor me to come down and speak at their BPM conference in September 🙂

TUCON: Tom Laffey and Matt Quinn

Last in the morning’s general session was Tom Laffey, TIBCO’s EVP of products and technologies, and Matt Quinn, VP of product management and strategy. Like Ranadivé’s talk earlier, they’re talking about enterprise virtualization: positioning messaging, for example, as virtualizing the network layer, and BPM as enterprise process virtualization. I’m not completely clear if virtualization is just the current analyst-created buzzword in this context.

Laffey and Quinn tag-teamed quite a bit during the talk, so I won’t attribute specific comments to either. TIBCO products cover a much broader spectrum than I do, so I’ll focus just on the comments about BPM and SOA.

TIBCO’s been doing messaging and ESB for a long time, and some amount of the SOA talk is about incremental feature improvements such as easier use of adapters. Apparently, Quinn made a prediction some months ago that SOA would grow so fast that it would swallow up BPM, so that BPM would just be a subset of SOA. Now, he believes (and most of us from the BPM side agree 🙂 ) that BPM and SOA are separate but extremely synergistic practices/technologies, and both need to be developed to a position of strength. To quote Ismael Ghalimi, BPM is SOA’s killer application, and SOA is BPM’s enabling infrastructure, a phrase that I’ve included in my presentation later today; like Ismael, I see BPM as a key consumer of what’s produced via SOA, but they’re not the same thing.

They touched on the new release of Business Studio, with its support for BPMN, XPDL and BPEL, as well as UML for some types of data modelling. There are some new intelligent workforce management features, and some advanced user interface creation functionality using intelligent forms, which I think ties in with their General Interface AJAX toolkit.

Laffey just defined “mashup” as a browser-based event bus, which is an interesting viewpoint, and likely one that resonates better with this audience than the trendier descriptions.

They discussed other functionality, including business rules management, dynamic virtual information spaces (the ability to tap into a real-time event message stream and extract just what you want), and the analytics that will be added with the acquisition of Spotfire. By the way, we now appear to be calling analytics “business insight”, which lets us keep the old BI acronym without the stigma of the business intelligence latency legacy. 🙂

They finished up with a 2-year roadmap of product releases, which I won’t reproduce here because I’d hate to have to embarrass them later, and some discussion of changes to their engineering and product development processes.

A Quick Peek at Cordys BPM

A month ago, I had a chance for a comprehensive demo of the Cordys BPMS via Webex, and I saw them briefly at the Gartner show last week. Their suite is of particular interest to me because the entire process life cycle of modelling, execution and monitoring is completely browser-based. I’ve been pushing browser-based process modelling/design for quite a while, since I think that this is the key to widespread collaboration in process modelling across all stakeholders of a process. I’ve reviewed a couple of browser-based process modellers — a full-featured version from Appian, and a front-end process mapping/sketch tool from Lombardi — and if it wasn’t already clear from what Appian has done, Cordys also proves that you can create a fully-functional process designer that runs in a browser and can have participants outside the corporate firewall. Like Appian, however, they currently only support Internet Explorer (and hence Windows), which will limit the collaboration capabilities at some point.

[Screenshot: Cordys BPMN modeller]

Cordys’ claim is that their modeller is BPMN compliant and supports the entire set of BPMN elements including all of the complex constructs such as transactions and compensation rollback, although I saw a few non-standard visual notations. They also support both XPDL 2.0 and BPEL for import and export, but no word on BPDM. Given this dedication to standards, I find it surprising that they can integrate only with their own ESB and business rules engine, although you could call third-party products via web services. They also have their own content repository (although you can integrate with any repository that allows object access via URL) and their own BAM. In general, I find that when a smaller vendor tries to build everything in a BPM suite themselves, some of the components are going to be lacking; furthermore, many organizations already have corporate standards for some or all of these, and you’d better integrate with the major players or you won’t get in the door.

Like most BPM suites, much of the Cordys process design environment is too complex for the average business user/analyst, and would probably be used by someone on the IT side with input from the business people; a business analyst might draw some of the process models, but as soon as you start clicking on objects and pulling up SOAP syntax, they’re going to be out of there. Like most BPMS vendors, Cordys claims that the process design environment is “targeted towards business people”, but vendors have been making that claim for years now, and the business people have yet to be convinced. To be fair, I was given the demo by the very enthusiastic product architect, who knew that I’m technical, so he pulled out every bell and whistle; business users likely see a very different version of the demo.

There’s a lot of functionality here, although nothing that I haven’t seen in some form in other products. There’s support for human-facing tasks either via browser-based inbox and search functions, or by forwarding the tasks to any email system via SMTP (like Outlook). There also appear to be shared worklists, but I didn’t get a sense of how automated work allocation could be performed, something that’s required to support high-volume transaction processing environments. There’s also support for web services orchestration to handle the system integration side of the BPM equation.

One thing that I like is the visual process debugger: although you have to hack a bit of XML to kick things off, you can step through a process, calling web services and popping up user interfaces as you hit the corresponding steps, and stepping over or into subprocesses (very reminiscent of a code debugger, but in a visual form).

They do a good job with the object repository as well, which helps increase reusability of objects, and allows you to search for processes and artifacts (such as forms or web services) to see where they’re used. Any process that’s built can also be exposed as a web service: just add inputs and outputs at the start and end points and the WSDL is auto-generated, allowing the process to be called as a service from any other application or service.

[Screenshot: Cordys mashup]

<geek>Another thing that I really liked is the AJAX-based framework and modelling layer for UI/forms design, which is an extension of XForms. In addition to a nice graphical UI design environment, you can generate a working user interface directly from the WSDL of a web service — something that I’ve seen in other products such as webMethods, but I still think is cool — and run it immediately in the designer. In the demo that I saw, the architect found an external currency conversion web service, introspected it with the designer and generated a form representing the web service inputs and outputs that he popped directly onto the page, where he could then run it directly in debug mode, or rearrange and change the form objects. Any web service in the internal repository — including a process — can be dragged from the repository directly onto the page to auto-generate the UI. Linked data objects on a form communicate directly (when possible) without returning to the server in a true AJAX fashion, and you can easily create mashups such as the example that I saw with the external currency converter, a database table, and MSN Messenger. For the hardcore among us, you can also jump directly to the underlying scripting code.</geek>

Unfortunately, the AJAX framework is not available as a separate offering, only as part of the BPMS; I think that Cordys could easily spin this off as a pretty nice browser-based development environment, particularly for mashups.

The Incredible BPMN

Last year, I developed a course on process standards for FileNet (now IBM) that they use to train their sales teams and partners. It included a bit on BPMN, among other standards, because FileNet will soon be launching an ability to model in BPMN through an integration with Visio.

Frighteningly, their recent press release says “New features support BPM standards for business process modeling (BPMN) and execution (XPDL) and BPM integration as part of an overall Service Oriented Architecture (SOA) strategy” — do they think that the X in XPDL stands for eXecution?

This morning, I received an email from someone at a FileNet reseller who recently took the course that I developed online. He said “Thanks for your really great webcast” (I love feedback like that!), and also created a BPMN diagram using icons from The Incredible Machine:

The thought of combining the whimsical — yet design-like — diagrams of TIM with those of BPMN gave me a giggle and really made my day.

ProcessWorld Day 1: Keynote with Prof. Scheer

The opening keynote this morning was by Prof. August-Wilhelm Scheer, the founder and serious brain trust behind IDS Scheer. You have to love this guy: not only is he brilliant and able to describe his ideas clearly, he opened and closed his session by playing sax in a jazz trio on stage.

He covered a lot of material in his talk, and I can’t begin to do it justice but will try to hit a few of the high points.

The goal of a modelling tool like ARIS is to support business processes from strategy to model to detailed description to implementation, including changes to any part of that chain and how the changes ripple through the other layers. The design-implementation-control life cycle of business processes, with a current strong focus on the optimization end of things, serves to bring together process modelling and execution like never before.

The business model at the top of any business process is the key competitive differentiator for an organization, requiring identification of the value proposition, supply chain, and target customer. This places the business model, and the surrounding business architecture, as part of an overall enterprise architecture. Looking at the business process architecture stack (think Zachman column 2), the business model leads to the business process, which requires/populates the business process repository. This, in turn, populates the IT-business process repository for the subset of the processes to be automated, through standardized modelling formats like BPMN and serialization formats like BPEL, which in turn connect to the enterprise service repository that documents the underlying services. Surrounding all this is the business process platform for service assembly/orchestration, portals, B2B, WFMS (wow, haven’t heard that term for a while: workflow management systems, for the youngsters in the crowd) and EAI.

IDS Scheer is involved with (or at least concerned with) a number of process-related standards, including ones such as BPMN and IDEF at the business process modelling level. I was interested to see whether they’re involved in the BPM Think Tank that OMG runs, such as the one coming up in July in San Francisco — an email exchange with someone from OMG a few minutes ago indicates that they’re not heavily involved in OMG standards. ARIS’ business model metamodel and their generally high level of innovation could almost certainly contribute to OMG standards development, if they’re not already.

One interesting point that Prof. Scheer finished with (well, before he started playing sax again) was that BPMS (i.e., process execution) vendor platforms will continue to be proprietary in spite of their “commitment” to standards (my quotation marks, since I agree with this thought), so products like ARIS are necessary in order to help facilitate the movement of models between execution systems. The business view needs to be open, while the implementation layer will remain proprietary.

BPMN-XPDL-BPEL value chain revisited

Right after I dissed the new for-pay incarnation of Business Integration Journal, Business Transformation and Innovation, it turns out that I’m mentioned in an article in the November/December issue.

For the past 12 to 18 months, there has been growing interest and discussion surrounding BPMN, XPDL and BPEL. What has begun to take form is the recognition of the BPMN-XPDL-BPEL Value Chain, a concept first credited to analyst Sandy Kemsley by XPDL expert Keith Swenson.

I normally don’t read this publication cover to cover, but I was checking my email subscriptions folder and saw a message with the title “The BPMN-XPDL-BPEL Value Chain”. Having coined the phrase “BPMN-XPDL-BPEL value chain” in a blog post covering the BPM Think Tank last May, I tend to notice when it crops up elsewhere 🙂

The BIJ article, written by Nathaniel Palmer of WfMC, discusses the three process standards and their interrelationship, particularly around how XPDL and BPEL are complementary, not competitive. Nothing here that I haven’t written before, but it’s a good overview/summary article on the subject.

Of course, being from WfMC, which authors the XPDL standard, he doesn’t mention OMG’s BPDM, which could prove eventually to be XPDL’s nemesis.