CASCON Workshop: Accelerate Service Integration In Your BPM and SOA Applications

I’m attending a workshop on the first morning of CASCON, the conference on software research hosted by IBM Canada. There’s quite a bit of good work done at the IBM Toronto software lab, and this annual conference gives them a chance to engage the academic and corporate communities and present that research.

The focus of this workshop is service integration, including enabling new services from existing applications and creating new services by composing existing services. Hacking together a few services into a solution is fairly simple, but your results may not be all that predictable; industrial-strength service integration is a bit more complex, and is concerned with everything from reusability to service level agreements. As Allen Chan of IBM put it when introducing the session: “How do we enable mere mortals to create a service integration solution with predictable results and enterprise-level reliability?”

The first presentation was by Mannie Kagan, an IBMer who is working with TD Bank on their service strategy and implementation; he walked us through a real-life example of how to integrate services into a complex technology environment that includes legacy systems as well as newer technologies. Based on this and a large number of other engagements, IBM has been able to discern patterns in service integration that can greatly aid in implementation. Patterns can appear at many levels of granularity, which they classify as primitive, subflow, flow, distributed flow, and connectivity topology. From there, they have created an ESB framework pattern toolkit: an Eclipse-based tool that allows for the creation of exemplars (templates) of service integration that can then be adapted for use in a specific instance.

He discussed two patterns that they’ve found to be particularly useful: web service notification (effectively, pub-sub over web services), and SCRUD (search, create, read, update, delete); think of these as some basic building blocks of many of the types of service integrations that you might want to create. This was presented in a specific IBM technology context, as you might imagine: DataPower SOA appliances for processing XML messages and legacy message transformations, and WebSphere Services Registry and Repository (WSRR) for service governance.
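To make the SCRUD idea concrete, here’s a minimal sketch of what such a service contract might look like; the record type and method names are my own invention for illustration, not artifacts from IBM’s toolkit.

```java
import java.util.List;

// Illustrative SCRUD (search, create, read, update, delete) service contract.
// The record type and method names are hypothetical, not from any IBM product.
public interface CustomerScrudService {

    // Search: return records matching a free-form query expression
    List<Customer> search(String query);

    // Create: persist a new record and return its generated identifier
    String create(Customer customer);

    // Read: fetch a single record by identifier
    Customer read(String id);

    // Update: replace the stored record with the supplied version
    void update(String id, Customer customer);

    // Delete: remove the record
    void delete(String id);
}

// Minimal record type used by the interface above
class Customer {
    String id;
    String name;
    String email;
}
```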

In his wrapup, he pointed out that not all patterns need to be created at the start, and that patterns can be created as required when there is evidence of reuse potential. Since patterns take more resources to create than a simple service integration, you need to be sure that there will be reuse before it is worth creating a template and adding it to the framework.

Next up was Hans-Arno Jacobsen of the University of Toronto, discussing their research in managing SLAs across services. He started with a business process example of loan application processing that included automated credit check services, and had an SLA in terms of parameters such as total service subprocess time, service roundtrip time, service cost and service uptime. They’re looking at how the SLAs can guide the efficient execution of processes, based in large part on event processing to detect and determine the events within the process (published state transitions). He gave quite a detailed description of content-based routing and publish-subscribe models, which underlie event-driven BPM, and their PADRES ESB stack, which hides the intricacies of the underlying network and system events from the business process execution by creating an overlay of pub-sub brokers that filters and distributes those events. In addition to the usual efficiencies created by the event pub-sub model, this allows (for example) the correlation of network slowdowns with business process delays, so that the root cause of a delay can be understood. Real-time business analytics can also be driven from the pub-sub brokers.
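The core of content-based routing is that subscribers express predicates over the content of events rather than subscribing to fixed topic names. Here’s a toy, single-process sketch of that idea; the event attributes and SLA threshold are invented for illustration, and PADRES itself is a distributed broker overlay with advertisements and routing optimizations that this doesn’t attempt to show.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Toy content-based broker: subscribers register a predicate over event
// attributes, and every published event is delivered to all matching
// subscribers. Attribute names and values below are illustrative only.
class ContentBasedBroker {

    private record Subscription(Predicate<Map<String, Object>> filter,
                                Consumer<Map<String, Object>> handler) {}

    private final List<Subscription> subscriptions = new ArrayList<>();

    void subscribe(Predicate<Map<String, Object>> filter,
                   Consumer<Map<String, Object>> handler) {
        subscriptions.add(new Subscription(filter, handler));
    }

    void publish(Map<String, Object> event) {
        for (Subscription s : subscriptions) {
            if (s.filter().test(event)) {
                s.handler().accept(event);
            }
        }
    }

    public static void main(String[] args) {
        ContentBasedBroker broker = new ContentBasedBroker();

        // Subscribe to loan-application state transitions that exceed a (hypothetical) SLA threshold
        broker.subscribe(
            e -> "loanApplication".equals(e.get("process"))
                    && ((Number) e.get("elapsedMs")).longValue() > 5000,
            e -> System.out.println("SLA alert: " + e));

        broker.publish(Map.of("process", "loanApplication",
                              "state", "creditCheckDone",
                              "elapsedMs", 7200L));
    }
}
```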

He finished by discussing how business processes can actually be guided by SLAs, that is, using SLAs at runtime rather than just for monitoring processes. If the process can be allocated to multiple resources in a fine-grained manner, then the ESB broker can dynamically determine the assignment of process parts to resources based on how well those resources are meeting their SLAs, or on expected performance based on other factors such as location of data or minimization of traffic. He gave an example of optimization based on minimizing traffic by measuring message hops, which takes into account both the rate of message hops and the distance between execution engines. This requires that the distributed execution engines include engine profiling capabilities that allow an engine to determine not only its own load and capacity, but that of other engines with which it communicates, in order to minimize cost over the entire distributed process. To fine-tune this sort of model, process steps that have a high probability of occurring in sequence can be dynamically bound to the same execution engine. In this situation, they’ve seen a 47% reduction in traffic, and a 50% reduction in cost relative to the static deployment model.
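As a rough illustration of that kind of SLA- and traffic-aware assignment, the sketch below picks an execution engine for a process step by weighing message rate against hop distance and current load; the cost formula, weights and engine names are invented for illustration, not the actual model from the research.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative engine-selection sketch: choose the execution engine that
// minimizes expected traffic cost for a process step, where cost is taken
// as (message rate for the step) x (hop distance to the engine), inflated
// by the engine's current load. The weighting is hypothetical.
class EngineSelector {

    record Engine(String name, int hopDistance, double loadFactor) {}

    static Engine cheapest(List<Engine> engines, double messagesPerSecond) {
        return engines.stream()
                .min(Comparator.comparingDouble(
                        e -> messagesPerSecond * e.hopDistance() * (1.0 + e.loadFactor())))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Engine> engines = List.of(
                new Engine("engine-A", 1, 0.8),   // close but heavily loaded
                new Engine("engine-B", 3, 0.1),   // farther but idle
                new Engine("engine-C", 2, 0.3));

        // For a chatty step, proximity tends to dominate; for a quiet one, load matters more
        System.out.println(cheapest(engines, 50.0).name());
    }
}
```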

After a brief break, Ignacio Silva-Lepe from IBM Research presented on federated SOA. SOA today is mostly used in a single domain within an organization, that is, it is fairly siloed in spite of the potential for services to be reused across domains. Whereas a single domain will typically have its own registry and repository, a federated SOA can’t assume that is the case, and must be able to discover and invoke services across multiple registries. This requires a federation manager to establish bridges across domains in order to make the service group shareable, and inject any cross-domain proxies required to invoke services across domains.

It’s not always appropriate to have a designated centralized federation manager, so there is also the need for domain autonomy, where each domain can decide what services to share and specify the services that it wants to reuse. The resulting cross-domain service management approach allows for this domain autonomy, while preserving location transparency, dynamic selection and other properties expected from federated SOA. In order to enable domain autonomy, the domain registry must not only have normal service registry functionality, but also references to required services that may be in other domains (possibly in multiple locations). The registries then need to be able to do a bilateral dissemination and matching of interest and availability information: it’s like internet dating for services.
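The “internet dating” matching described above can be pictured as pairing each domain’s required-service interests against the services other domains have chosen to share. Here’s a toy sketch of that bilateral matching, with hypothetical registry structures and domain names; real federated registries would also handle proxies, visibility policies and location transparency.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy cross-domain matching: each domain registry advertises the service
// interfaces it shares and the interfaces it requires; the matcher pairs
// interests with availability across domains. All names are hypothetical.
class FederatedRegistryMatcher {

    record Match(String consumerDomain, String providerDomain, String serviceInterface) {}

    static List<Match> match(Map<String, Set<String>> shared,      // domain -> shared interfaces
                             Map<String, Set<String>> required) {  // domain -> required interfaces
        List<Match> matches = new ArrayList<>();
        required.forEach((consumer, interests) -> {
            for (String iface : interests) {
                shared.forEach((provider, offers) -> {
                    if (!provider.equals(consumer) && offers.contains(iface)) {
                        matches.add(new Match(consumer, provider, iface));
                    }
                });
            }
        });
        return matches;
    }

    public static void main(String[] args) {
        var shared = Map.of(
                "payments", Set.of("CreditCheck", "FundsTransfer"),
                "lending", Set.of("LoanOrigination"));
        var required = Map.of(
                "lending", Set.of("CreditCheck"));

        // Expect: lending consumes CreditCheck from payments
        match(shared, required).forEach(System.out::println);
    }
}
```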

They have quite a bit of work planned for the future, beyond the fairly simple matching of interest to availability: allowing domains to restrict visibility of service specifications to authorized parties without using a centralized authority, for example.

Marsha Chechik, also from the University of Toronto, gave a presentation on automated integration determination; like Jacobsen, she collaborates with IBM Research on middleware and SOA research; unlike Jacobsen, however, she is presenting research that is at a much earlier stage. She started with a general description of integration, where a producer and a consumer share some interface characteristics. She went on to discuss interface characteristics (what already exists) and service exposition characteristics (what we want): the as-is and to-be states of service interfaces. For example, there may be a requirement for idempotence, where multiple “submit” events over an unreliable communications medium would result in only a single result. In order to resolve the differences in characteristics between the as-is and to-be, we can consider typical service interface patterns, such as data aggregation, mapping or choreography, to describe the resolution of any conflicts. The problem, however, is that there are too many patterns, too many choices and too many dependencies; the goal of their research is to identify essential integration characteristics and make a language out of them, identify a methodology for describing aspects of integration, identify the order in which patterns can be determined, identify decision trees for integration pattern determination, and determine cases where integration is impossible.
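Going back to the idempotence example for a moment: a common way to meet that requirement is to deduplicate on a client-supplied request identifier, so that retries over an unreliable transport return the original result instead of creating duplicates. A minimal sketch, with invented names and an in-memory store standing in for whatever persistence a real service would use:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal idempotent-submit sketch: repeated submissions carrying the same
// request id return the original result rather than creating a duplicate.
// Names and the in-memory store are illustrative only.
class IdempotentSubmitService {

    private final Map<String, String> resultsByRequestId = new ConcurrentHashMap<>();

    String submit(String requestId, String payload) {
        // computeIfAbsent guarantees the payload is processed at most once per request id,
        // even if the client retries over an unreliable transport
        return resultsByRequestId.computeIfAbsent(requestId, id -> process(payload));
    }

    private String process(String payload) {
        return "order-created-for:" + payload;
    }

    public static void main(String[] args) {
        IdempotentSubmitService svc = new IdempotentSubmitService();
        System.out.println(svc.submit("req-42", "widget"));
        System.out.println(svc.submit("req-42", "widget")); // retry returns the same result
    }
}
```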

Their first insight was to separate pattern-related concerns into physical and logical characteristics; every service has elements of both. They have a series of questions that begin to form a language for describing the service characteristics, and a classification for the results from those questions. The methodology contains a number of steps:

  1. Determine principal data flow
  2. Determine data integrity flow, e.g., stateful versus stateless
  3. Determine reliability flow, e.g., mean time between failure
  4. Determine efficiency, e.g., response time
  5. Determine maintainability

Each of these steps determines characteristics and mapping to integration patterns; once a step is completed and decisions made, revisiting it should be minimized while performing later steps.

It’s not always possible to provide a specific characteristic for any particular service; their research is working on generating decision trees for determining whether a service requirement can be fulfilled. This results in a pattern decision tree based on types of interactions; this provides a logical view but not any information on how to actually implement them. From there, however, patterns can be mapped to implementation alternatives. They are starting to see the potential for automated determination of integration patterns based on the initial language-constrained questions, but aren’t seeing any hard results yet. It will be interesting to revisit this research a year from now to see how it has progressed, especially if they’re able to bring in some targeted domain knowledge.
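To give a flavour of what a pattern decision tree might look like once the characteristics are captured, here’s a toy example that chains a few questions into a pattern recommendation; the questions and pattern names are invented purely for illustration, since deriving the real trees is exactly what this research is working on.

```java
// Toy decision tree over integration characteristics: a chain of questions
// that narrows down to a candidate integration pattern. The questions and
// pattern names are invented for illustration only.
class IntegrationPatternDecision {

    record Characteristics(boolean oneWay, boolean needsAggregation, boolean reliableTransport) {}

    static String recommend(Characteristics c) {
        if (c.oneWay()) {
            // Fire-and-forget interactions hinge on whether the transport is reliable
            return c.reliableTransport() ? "fire-and-forget over queue" : "publish-subscribe with retry";
        }
        if (c.needsAggregation()) {
            return "scatter-gather / data aggregation";
        }
        return "request-reply with mapping";
    }

    public static void main(String[] args) {
        System.out.println(recommend(new Characteristics(false, true, true)));
    }
}
```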

Last up in the workshop was Vadim Berestetsky of IBM’s ESB tools development group, presenting on support for patterns in IBM integration offerings. He started with a very brief description of an ESB, and WebSphere Message Broker as an example of an ESB that routes messages from anywhere to anywhere, doing transformations and mapping along the way. He basically walked through the usage of the product for creating and using patterns, and gave a demo (where I could see vestiges of the MQ naming conventions). A pattern specification typically includes some descriptive text and solution diagrams, and provides the ability to create a new instance from this pattern. The result is a service integration/orchestration map with many of the properties already filled in; obviously, if this is close to what you need, it can save you a lot of time, like any other template approach.

In addition to demonstrating pattern usage (instantiation), he also showed pattern creation by specifying the exposed properties, artifacts, points of variability, and (developer) user interface. Looks good, but nothing earth-shattering relative to other service and message broker application development environments.

There was an interesting question that goes to the heart of SOA application development: is there any control over what patterns are created and published to ensure that they are useful as well as unique? The answer, not surprisingly, is no: that sort of governance isn’t enforced in the tool since architects and developers who guide the purchase of this tool don’t want that sort of control over what they do. However, IBM may see very similar patterns being created by multiple customer organizations, and choose to include a general version of that pattern in the product in future. A discussion about using social collaboration to create and approve patterns followed, with Berestetsky hinting that something like that might be in the works.

That’s it for the workshop; we’re off to lunch. Overall, a great review of the research being done in the area of service integration.

This afternoon, there’s the keynote and a panel that I’ll be attending. Tomorrow, I’ll likely pop in for a couple of the technical papers and to view the technology showcase exhibits, then I’m back Wednesday morning for the workshop on practical ontologies, and the women in technology lunch panel. Did I mention that this is a great conference? And it’s free?

IBM FileNet BPM Product Update

With all the news this week about Case Manager, my old friend BPM seems to have been left on the sidelines, although it’s partially hidden within the new Case Manager offering. However, we have one session by Mike Fannon, BPM product manager, giving us an update on what’s happening with BPM.

The first thing is new OS platform support for Linux and zLinux; although this is important for many customers who have standardized on Linux – or want to integrate BPM with CM8 on their mainframes – you can imagine this is not the most exciting announcement to me. Yes, I have customers who will love this. Now move on. 🙂

Next is the port of the Process Engine to a standalone Java app (not J2EE), from its original C++ beginnings. Although on the surface this seems not much more exciting than the Linux support, it’s pretty significant, and not just for the performance boost that they’re seeing. It means simpler APIs and database interfaces, better standardization, and it brings PE in line architecturally with the Content Engine, even allowing PE and CE to share the same database. In the future, they’re considering moving it to a J2EE container, which provides a lot more flexibility for things like server farming.

They’re also supporting multi-tenancy, allowing multiple PEs to run on the same virtual server with a separate application environment, user space, backup and restore for each tenant. These PE Stores (analogous to CE Object Stores) seem to be replacing the old isolated regions paradigm, and there are procedures for moving isolated regions to separate PE Stores. If you’re an old BPM hack, then all your old VW-prefixed admin commands will be replaced as the vestiges of Visual WorkFlo are finally purged. As the owner of a small systems integration firm, I designed and wrote one of the first VW apps back in 1994, so this does bring a small tear to my eye, although it has obviously been a long time coming.

From an upgrade standpoint, there are supported upgrade paths (some staged, some direct) from eProcess 5.2 (can’t believe that’s still out there) as well as BPM 3.53 and later, including migration tools for in-flight process instances. There are changes to the data model of the underlying database tables, so if you’ve built any applications such as advanced analytics that directly hit the operational database, you’re going to have some refactoring to do.

Process Analyzer has been extended to add capabilities for Case Manager, such as aggregation based on case properties. In addition to using Excel pivot tables, which is how PA has always been used in the past, you can now use Cognos BI instead. Of course, since PA is based on a set of cubes in an MS SQL Server/MS Analysis Services engine that is trickle-fed from the PE database, this has always been possible, but I assume that it’s just better integrated now. Unsurprisingly, they do have a direction to eliminate Microsoft technology dependencies, so at some point in the future, I expect that you’ll see the PA data store ported off the SQL Server/Analysis Services platform. The Process Monitor dashboard has also been updated to handle Case Manager data, and better integrated with Cognos.

There were a number of enhancements to the ECM Widgets in March and June, such as support for Business Space instead of the Mashup Center, and some new widgets for process history and get next work item (finally). It looks like they’re building out the widget functionality to the point where it’s actually usable for real applications; without the get next work item, you couldn’t use it to build any sort of heads-down processing functionality.

There are really very few functional improvements to BPM; most of this is refactoring and platform porting. I think that a lot of the BPM creative juices are going towards Case Manager, and if you look at the direction of the 100% Java PE port and the ability to share databases with CE, it’s possible that we’ll see some sort of merging of ECM, BPM and Case Manager into a single engine in the future. IBM, of course, did not say that.

IBM Case Manager Technical Roundtable

Bill Lobig, Mike Marin, Peggy (didn’t catch her last name) and Lauren Mayes hosted a freeform roundtable for any technical questions about the new Case Manager product.

I had a chat with Mike prior to the talk about the genesis of Case Manager, and he reinforced this during the session: although a lot of the ideas came from the old BPF product, Mike and his team spent months interviewing the people who had used BPF to find out what worked and what didn’t, then built something new that incorporated the features most needed by customers. The object model for the case is now part of the basic server classes rather than being a higher-level (and therefore less efficient) custom object, there are new process classes to map properties between case folders and processes, and a number of other significant architectural changes and upgrades were made to make this happen. I see TIBCO going through this same pain right now with the lack of an upgrade path from iProcess to AMX BPM, and to the guy in the audience who said that it’s not fair that IBM gives you a crappy product, you use it and provide feedback on how to improve it, then they charge you for the new product: well, that’s just how software works sometimes, and vendors will never have true innovation if they always have to be supporting their (and your) entire legacy. There does need to be some sort of migration path at least for the completed case folder objects from BPF to Case Manager native case objects, although that hasn’t been announced, since these are long-term corporate assets that have to be managed the same as any other content; however, I would not expect any migration of the BPF apps themselves.

More process functionality is being built right into the content engine; this is significant in that you’ve always required both ECM and BPM to do any process management, but it sounds like some of that functionality is being drawn into the content engine. Does this mean that the content and process engines will eventually be merged into a single platform and a single product? That would drive further down the road of repositioning FileNet BPM as content-centric – originally done at the time of the FileNet acquisition, I believe, to avoid competition with WebSphere BPM – since if it’s truly content-centric, then why not just converge the engines, including the ACM capabilities? That would certainly make for a more seamless and consistent development environment, especially around issues like object modeling and security.

One consistent message that’s coming across in all the Case Manager sessions is accelerating the development time by allowing a business analyst to create a large part of a case application without involving IT; this is part of what BPF was trying to provide, and even BPM prior to that. I was FileNet’s evangelist for the launch of the eProcess product, which was the first version of the current generation of BPM, and we put forward the idea back in 2000 that a non-technical (or semi-technical) analyst could do some amount of the model-driven application development.

There are obviously still some rough edges in Case Manager, since version 1.0 isn’t even out yet. In a previous session, we saw some of the kludges for content analytics, dashboarding and business rules, and it sounds like role-based security and e-forms aren’t really fully integrated either. The implications of the latter two are tied up with the ease with which you can migrate a case application from one environment to another, such as from development to test to production: apparently, not completely seamless, although they are able to bundle part of a case application/template and move it between environments in a single operation. Every vendor needs to deal with this issue, and those that have a more tightly integrated set of objects making up an application have a much easier time with it, especially if they also offer a cloud version of their software and need to migrate easily between on-premise and cloud environments, such as TIBCO, Fujitsu and Appian. IBM is definitely playing catchup in the area of moving defined applications between environments, as well as in their overall integration strategy within Case Manager.

IOD ECM Keynote

Ron Ercanbrack, VP at IBM (my old boss from my brief tenure at FileNet in 2000-1, who once introduced me at a FileNet sales kickoff conference as the “Queen of BPM”), gave a brief ECM-focused keynote this morning. He covered quite a bit of the information that I was briefed on last week, including Case Manager, Content Analytics, improved content integration including CMIS, the Datacap and PSS acquisitions, enhancements to Content Collector, and more. He positioned Case Manager as a product “running on top of BPM”, which is a bit different than the ECM-centric message that I’ve heard so far, but likely also accurate: there are definitely significant components of each in there.

He was followed by Carl Kessler, VP of Development, to give a Case Manager demo; this covered the end-user case management environment (pretty much what we’ve seen in previous sessions, only live), plus Content Analytics for text mining which is not really integrated with Case Manager: it’s a separate app with a different look and feel. I missed the launch point, so I don’t know whether he launched this from a property value in Case Manager or had to start from scratch using the terms relevant to that case. It has some very nice text mining capabilities for searching through the content repository for correlation of terms, including some pretty graphs, but it’s a separate app.

We then went off to the Cognos Real-time Monitoring Dashboard, which is yet another non-integrated app with a different look and feel. He showed a dashboard that had a graph for average age of cases and allowed drill-down on different parameters such as industry type and dispute type, but that’s not really the same as a fully integrated product suite. Although all of the component applications are functional, this needs a lot more integration at the end-user level.

I did get a closer look at some of the Case Builder functionality than I’ve seen already: in the tasks definition section, there are required tasks, optional tasks and user-created tasks, although it’s not clear what user-created tasks are since this is design-time, not runtime.

Ercanbrack came back to the stage for a brief panel with three customers – Bank of America, State of North Dakota, and BlueCross BlueShield of Tennessee – talking about their ECM journeys. This was not specific to case management at all, but using records/retention management to reduce storage costs and risks in financial services, using e-discovery as part of a legal action in healthcare, and content management with a case management approach for allowing multiple state government agencies to share documents more effectively.

IBM Announcements: Case Manager, CMIS and More

I had a pre-IOD analyst briefing last week from IBM with updates to their ECM portfolio, given by Ken Bisconti, Dave Caldera and Craig Rhinehart. IOD – Information on Demand – is IBM’s conference covering business analytics and information management, the latter of which includes data management and content management. The former FileNet products fall into their content management portfolio, which includes FileNet BPM (repositioned as document-centric BPM following the acquisition so as not to compete with the WebSphere BPM products) as well as the case management capabilities of their Business Process Framework (BPF). I also had a one-to-one session with Bisconti while at IOD to get into a bit more detail.

The big announcement, at least to me, was the new Case Manager product, to ship in Q4 (probably November, although IBM won’t commit to that). IBM has been talking about an advanced case management strategy for several months now, and priming the pump about what “should” be in a case management product, but this is the first time that we’ve seen a real product as part of that strategy; I’m sure that the other ACM vendors with products already released are ROFL over IBM’s statement in the press release that this is the “industry’s first advanced case management product”. With FileNet Content Manager at the core for managing the case file and the associated content, they’ve drawn on a variety of offerings across different software groups and brands to create this product: ILOG rules, Cognos realtime monitoring, Lotus collaboration and social networking, and WebSphere Process Server to facilitate integration to multiple systems. This is one of their “industry solutions” that spans multiple software groups, and I can just imagine the internal political wrangling that went on to make this happen. As excited as they sounded about bringing all these assets together in a new product, they’ll need to demonstrate seamless integration and a common user experience so that this doesn’t end up looking like some weird FrankenECM. Judging from the comments at the previous session that I attended, it sounds like the ILOG integration, at the very least, is a bit shaky in the first release.

They’re providing analytics – both via the updated Content Analytics offering (discussed below) and Cognos – to allow views of individual case progression as well as analysis of persistent case information to detect patterns in case workload. It sounds like they’re using Cognos for analyzing the case metadata, and Content Analytics for analyzing the unstructured information, e.g., documents and emails, associated with the case.

A key capability of any case management system, and this is no exception, is the ability to handle unstructured work, allowing a case worker to use their own experience to determine the next steps to progress the case towards outcome. Workers can create tasks and activities that use the infrastructure of queues and inboxes; this infrastructure is apparently new as part of this offering, and not based on FileNet BPM. Once a case is complete, it remains in the underlying Content Manager repository, where it is subject to retention policies like any other content. They’ve made the case object and its tasks native content types, so like any other content class in FileNet Content Manager, you can trigger workflows (in BPM) based on the native event types of the content class, such as when the object is created or updated. The old Business Process Framework (BPF), which was the only prior IBM offering in the case management arena, isn’t being discontinued, but customers will definitely be encouraged to create any new case management applications on Case Manager rather than BPF, and eventually to rewrite their BPF applications to take advantage of new features.

As we’re seeing in many other BPM and case management products, they’ve created the ability to deploy reusable templates for vertical solutions in order to reduce the time required to deploy a solution from months down to days. IBM’s focus will initially be on the horizontal platform, and they’re relying on partners and customers to build the industry-specific templates. Partners in the early adoption program are already providing templates for claims, wealth management and other solutions. The templates are designed for use by business analysts, so that a BA can use a pre-defined template to create and deploy a case management solution with minimal IT involvement.

For user experience, they’re providing three distinct interfaces:

  • A workbench for BAs to create case solutions, based on the aforementioned templates, using a wizard-based interface. This includes building the end user portal environment with the IBM iWidget component (mashup) environment.
  • A role-based portal for end users, created by the BAs in the workbench, with personalization options for the case worker.
  • Analytics/reporting dashboards reporting on case infrastructure for managers and case workers, leveraging Cognos and Content Analytics.

They did have some other news aside from the Case Manager announcement; another major content-related announcement is support for the CMIS standard, allowing IBM content repositories (FileNet CM, IBM CM8 and CMOD) to integrate more easily with non-IBM systems. This is a technology preview only at this point, but since IBM co-authored the standard, you can expect full support for it in the future. I had a recent discussion with Pega indicating that they were supporting CMIS in their case management/BPM environment, and we’re seeing the same from other vendors, meaning that you’ll be able to integrate an industrial-strength repository like FileNet CM into the BPM or ACM platform of your choice.
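For a sense of what CMIS buys you in practice, here’s a small client sketch using Apache Chemistry’s OpenCMIS library (an open-source CMIS client); the endpoint URL, repository id and credentials are placeholders, and the exact binding details depend on how a given repository such as FileNet CM exposes its CMIS endpoint.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.Folder;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

public class CmisBrowseExample {
    public static void main(String[] args) {
        // Connection parameters are placeholders; a real CMIS endpoint,
        // repository id and credentials would go here.
        Map<String, String> params = new HashMap<>();
        params.put(SessionParameter.ATOMPUB_URL, "https://example.com/cmis/atom");
        params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
        params.put(SessionParameter.REPOSITORY_ID, "repo1");
        params.put(SessionParameter.USER, "user");
        params.put(SessionParameter.PASSWORD, "password");

        SessionFactory factory = SessionFactoryImpl.newInstance();
        Session session = factory.createSession(params);

        // List the children of the repository's root folder, regardless of vendor
        Folder root = session.getRootFolder();
        root.getChildren().forEach(obj ->
                System.out.println(obj.getName() + " (" + obj.getType().getId() + ")"));
    }
}
```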

They had a few other announcements and points to discuss on the call:

  • IBM recently acquired Datacap, a document capture (scanning) product company, which refreshes their high-performance document scanning and automated recognition capabilities. This integrates with FileNet CM, but also with the older IBM CM8 Content Manager and (soon) CMOD, plus other non-IBM content repositories. Datacap uses a rules-based capability for better content capture, recognition and classification.
  • There are improvements to Office Document Services; this is one of the areas where CMIS will help as well, allowing IBM to hold its nose and improve their integration with SharePoint and Exchange. There’s a big focus on content governance, such as managing retention lifecycles, including content federation across multiple heterogeneous repositories.
  • There are updates to the information lifecycle governance (ILG) portfolio, including Content Collector and eDiscovery. Content Collector has better content collection, analysis and management capabilities for office documents, email and SAP data. eDiscovery now provides better support for legal discovery cases, with enhanced security roles for granular content access, redaction APIs and better keyword identification. This ties back into governance, content lifecycle management and retention management: disposal of information at the appropriate times is key to reducing legal discovery costs, since you’re not having to retrieve, distribute and review a lot of content that is no longer legally required.
  • IBM’s recent acquisition of PSS Systems complements the existing records management and eDiscovery capabilities with retention-related analytics and policy solutions.
  • The relatively new IBM Content Analytics (ICA) product has been updated, providing analytics on content retention management (i.e., find what you need to decommission) as well as more general “BI for content” for advanced analytics on what’s in your content repositories and related contextual data from other sources. This integrates out of the box with Cognos (which begs the question, why isn’t this actually just Cognos) as well as the new Case Manager product to provide analytics for the manager dashboard views. The interesting thing is that “content” in this situation is more than just IBM content repositories, it’s also competitive content repositories and even things like Twitter feeds via IBM’s new BigInsights offering. They have a number of ICA technology demos here at IOD, including the BigInsights/Twitter analysis, and ICA running on Hadoop infrastructure for scalability.
  • The only announcement for FileNet BPM seemed to be expanding to some new Linux platforms, and I’ve heard that they’re refactoring the process engine to improve performance and maintenance, but there’s no whiff of new functionality aside from the Case Manager announcement. I plan to attend the BPM technical briefing this afternoon, and should have some more updates after that.

I still find the IBM ECM portfolio – much like their BPM and other portfolios – to contain too many products: clearly, some of these should be consolidated, although IBM’s strategy seems to be to never sunset a product if they have a couple of others that do almost the same thing and there’s a chance that they can sell you all of them.

Advanced Case Management Empowering The Business Analyst

We’re still a couple of hours away from the official announcement about the release of IBM Case Manager, and I’m at a session on how business analysts will work with Case Manager to build solutions based on templates.

Like the other ACM sessions, this one starts with an overview of IBM’s case management vision as well as the components that make up the Case Manager product: ECM underlying it all, with Lotus Sametime for real-time presence and chat, ILOG JRules for business rules, Cognos Real Time Monitor for dashboards, IBM Content Analytics for unstructured content analysis, IBM (Lotus) Mashup Center for user interface and some new case management task and workflow functionality that uses P8 BPM under the covers. Outside the core of Case Manager, WebSphere Process Server can be invoked for integration/SOA applications, although it appears that this is done by calling it from P8 BPM, which was existing functionality. On top of this, there are pre-built solutions and solution templates, as well as a vast array of services from IBM GBS and partners.

IBM Case Management Vision

The focus in this session is on the tools for the business analyst in the design-time environment, either based on a template or from scratch, including the user interface creation in the Mashup Center environment, analytics for both real-time and historical views of cases, and business rules. This allows a business analyst to capture requirements from the business users and create a working prototype that will form the shell of the final case application, if not the full executing application. The Case Builder environment that a business analyst works in to design case solutions also allows for testing and deploying the solution, although in most cases you won’t have your BAs deploying directly to a production environment.

Defining a case solution starts with the top-level case solution creation, including name, description and properties, then completing the following:

  • Define case types
  • Specify roles
    • Define role inbasket
  • Define personal inbasket
  • Define document types
  • Associate runtime UI pages

We didn’t see the ILOG JRules integration, and for good reason: in the Q&A, they admitted that this first version of Case Manager didn’t quite have that up to scratch, so I imagine that you have to work in both design environments, then call JRules from a BPM step or something of that nature.

The more that I see of Case Manager, the more I see the case management functionality that was starting to migrate into the FileNet ECM/BPM product from the Business Process Framework (BPF); I predicted that BPF would become part of the core product when I reviewed P8 BPM v4.5 a year and a half ago, and while this is being released as a separate product rather than part of the core ECM product, BPF is definitely being pushed to the side and IBM won’t be encouraging the creation of any new applications based on BPF. There’s no direct migration path from BPF to ACM; BPF technology is a bit old, and the time has come for it to be abandoned in favor of a more modern architecture, even if some of the functionality is replicated in the new system.

The step editor used to define the tasks associated with cases provides swimlanes for roles or workgroups (for underlying queue assignment, I assume), then allows the designer to add steps into the lanes and assign properties to the steps. The step properties are a simplified version of a step definition in P8 BPM, so I assume that this is actually a shared model (as opposed to export/import) that can be opened directly by the more technical BPM Process Designer. In P8 BPM 4.5, they introduced a “diagram mode” for business analysts in the Process Designer; this appears to be an even simpler process diagramming environment. It’s not BPMN compliant, which I think is a huge mistake; since it’s a workflow-style model with lanes, activities and support for splits and merges, this would have been a great opportunity to use the standard BPMN shapes to start getting BAs used to them.

I still have my notes from last week’s analyst briefing and from my meeting with Ken Bisconti yesterday, which I will publish; these are more aligned with the “official” announcement that will be coming out today in conjunction with the press release.

Gartner MQ for BPMS Leaders

Gartner is pretty sticky about allowing anyone to publish anything about their magic quadrants (even if you could argue that excerpts, such as the MQ graph, constitute fair use). However, three of the leaders have a lot to say about it:

  • Pegasystems Positioned as a Leader in Prominent Analyst Firm’s 2010 Magic Quadrant for Business Process Management Suites (which includes a mini version of the graph)
  • Software AG Named in the Leaders Quadrant for Business Process Management Suites
  • IBM Lombardi positioned in the Leaders Quadrant of Gartner Magic Quadrant for Business Process Management Suites (also containing a mini graph)

At some point, you could probably reconstruct the Leaders quadrant based on press releases; many of the vendors in the other quadrants don’t bother to do a release about it (do they have to pay Gartner for that?): consider that IBM placed all three of its major BPM products in this MQ, but I only saw a press release about the one in the Leaders quadrant.

Adam Deane published something about it, which I missed before he had to pull it; the comments on his post are particularly interesting, especially the one from a vendor who believes that they were dropped from the MQ because they stopped being a Gartner customer.

Integrating BPM and Enterprise Architecture

Michael zur Muehlen presented this morning on integrating BPM and enterprise architecture, based on work that he’s done with the US Department of Defense. Although they use the DoDAF architecture framework in particular, the concepts are applicable to other similar EA frameworks. Like the Zachman framework, DoDAF prescribes the perspectives that are required, but doesn’t specify the artifacts (models) required for each of those perspectives; this is particularly problematic in DoD EA initiatives where there are likely to be many contractors and subcontractors involved, all of whom may use different model types to represent the same EA perspective.

He talked briefly about what makes a good model: the information must be correct, relevant (and complete) and economical (with respect to level of detail), as well as clear, comparable (linked to reality) and systematic. From there, he moved on to their selection of BPMN as the dominant standard for process modeling, since it has better event handling than UML activity diagrams, better organizational modeling than IDEF0, and better cross-organizational modeling than simple flowcharts. However, many tools support only a subset of BPMN – particularly those intended for process execution rather than just process modeling – and some tools have non-standard enhancements to BPMN that inhibit interoperability. Another issue is that the BPMN specification is enormous, with over 100 elements, including different constructs that mean the same thing, such as explicit versus implicit gateways.

They set out to design primitives for the use of BPMN, “outlawing” the use of certain symbols such as complex gateways and developing best practices for BPMN usage. They also mapped the frequency of BPMN symbol usage in internal DoD models, in those that Michael sees in his practice as a professor of BPM at Stevens Institute of Technology, and in samples found on the web, and came up with a distribution of the BPMN elements by frequency of usage. This research led to the creation of the subsets that are now part of the BPMN standard, as well as usage guidelines for BPMN in terms of both primitives and patterns.

In addition to the BPMN subsets (e.g., the most commonly implemented Descriptive subclass), they developed naming conventions to use within models, driven by the vocabulary related to their domain content. This idea of separating the control of model structure from the vocabulary makes sense: the first is more targeted at an implementer, while the second is targeted at a domain/business expert; this in turn led to vocabulary-driven development, where the relationship between capabilities, activities, resources and performers (CARP analysis) is established as a starting point for the labels used in process models, data models (or ontologies/taxonomies), security models and more as the enterprise architecture artifacts are built out.

Having defined how to draw the right models and how to select the right words to put in the models, they looked at different levels of models to be used for different purposes: models focused on milestones, handoffs, decisions and procedures. These are not just more detailed versions of the same model, but rather different views on the process. The milestones view is a high-level view of the major process phases; handoffs looks at transitions between lanes, with all activities within a lane rolled up to a single activity, primarily showing the happy path; decisions looks at major decision points and exception/escalation paths; and procedures shows a full requirements-level view of the process, i.e., the greatest level of detail that a business analyst is likely to create before involving technical resources to add things such as service calls.

To finish up, he tied this back to the six measures of model quality and how this approach based on primitives conforms to those measures. They’ve achieved a number of benefits, including minimizing modeling errors, ensuring that models are clear and consistent, and ensuring that the models can be converted to an executable form. I’m seeing increased interest among my clients and in the marketplace in how BPM and EA can work together, so this was a great example of how one large organization manages to do it.

Michael posted earlier this year on the DoDAF subset of BPMN (in response to a review that I wrote of a BPMN update presentation by Robert Shapiro). If we go back a couple of years before that, there was quite a dust-up in the BPMN community when Michael first published the usage distribution statistics – definitely worth following the links to see where all this came from.

Building Process Skills To Scale Transformation

Connie Moore (or “Reverend Connie” as we now think of her 😉 ) gave a session this afternoon on process skills at multiple levels within your organization, and how entirely new process-centric career paths are emerging. Process expertise isn’t necessarily something that can be quickly learned and overlaid on existing knowledge; it requires a certain set of underlying skills, and a certain amount of practical experience. Furthermore, process skills are migrating out of IT and into the business areas, in roles such as process improvement specialists and business architects.

Forrester recently did a role deep dive to take a look at the process roles that exist within organizations, and found that different organizations have very different views of business process:

  • Immature, usually smaller organizations with a focus on automation, not the process; these follow a typical build cycle with business analysts as traditional requirements gatherers.
  • Aspiring organizations that understand the importance of process but don’t really know fully what to do with it: they’ve piloted BPM projects and may have started a center of excellence, but are still evolving the roles of business analysts and other participants, and searching for the right methodologies.
  • Mature organizations already have process methodologies, and the process groups sit directly in the business areas, with clear roles defined for all of the participants. They will have robust process centers of excellence with well-defined methodologies such as Lean, offering internal training on their process frameworks and methods.

She talked about the same five roles/actors that we saw in the Peters/Miers talk, and about how different types of business process professionals learn and develop skills in different ways. She mentioned the importance of certification and training programs, citing ABPMP as the up-and-coming player here with about 200 people certified to date (I’m also involved in a new effort to build a more open process body of knowledge), and listed the specific needs of the five actors in terms of their skills, job titles and business networks, using examples from some of the case studies that we’ve been hearing about such as Medco. The job titles, as simple as that seems, are pretty important: they’re part of the language that you create around process improvement within your organization.

Process roles are often concentrated in a process center of excellence, which can start small: Moore told the story of one organization that started with four developers, one business analyst and one enterprise architect. Audience members echoed that, with CoEs usually under 10 people, and many organizations without a CoE at all. You also need to have a mix of business and IT skills in a CoE: as one of her points stated, you can do this without coding, but that doesn’t mean that a business person can do it, which is especially true as you start using more complete versions of BPMN, for example. There’s definitely a correlation (although not necessarily causation) between a CoE and BPM project success; I talked about this and some other factors in building a BPM CoE in a webinar and white paper that I did for Appian last year.

She had a lot of great quotes from companies that they interviewed in their process roles study:

“These suites still required you to have [a] software engineering skill set”

“The biggest challenge is how to develop really good process architects”

“They [process/business analysts] usually analyze one process and have limited ability to see beyond the efforts in front of them”

“Process experts are a rare type of talent”

“We thought the traditional business analyst would be the right source, but we were horribly disappointed”

A number of these comments are focused on the shortcomings of trying to retrain more traditionally-skilled people, such as business analysts, for process work: it’s not as easy as it sounds, and requires significantly better tooling than they are likely using now. You probably don’t need the 20+ years of experience that I have in process projects, but you’re not going to just be able to take one of your developers or business analysts, send them on a 3-day course, and have them instantly become a process professional. There are ways to jump-start this: for example, looking at cloud-based BPM so that you need less of the back-end technical skills to get things going, and considering alternatives for mentoring and pairing with existing process experts (either internal or external) to speed things up.

Phil Gilbert On The Next Decade Of BPM

I missed Phil’s keynote at BPM 2010 in Hoboken a few weeks ago (although Keith Swenson very capably blogged it), so I was glad to be able to catch it here at the Forrester BP&AD forum. His verdict: the next decade of BPM will be social, visible and turbulent.

Over the past 40-50 years, hard-core developers have become highly leveraged, such that one developer can support about five other IT types, who in turn support 240 business end users. Most of the tools to build business technology, however, are focused on those 6 people on the technical side rather than the 240 business people. One way to change this is to allow for self-selected collaboration and listening: allowing anyone to “follow” whoever or whatever they’re interested in to create a stream of information that is customized to their needs and interests.

Earlier today, I received an email about IBM’s new announcement of IBM Blueworks Live, and Phil talked about how it incorporates this idea of stream communication to allow you to both post and follow information. It will include information from a variety of sources, such as BPM-related Twitter hashtags and links to information written by BPM thought leaders. Launching on November 20th, Blueworks Live will include both the current BPM BlueWorks site and the IBM Blueprint cloud-based process modeling capability. From their announcement email that went out to current Blueprint users:

The new version will be called IBM Blueworks Live and you’ll be automatically upgraded to it.  Just like in past releases, all your process data and account settings are preserved. All of the great Blueprint features you use today will be there, plus some new capabilities that I think you’ll be very excited to use.

Blueworks Live will allow your team to not only collaborate on daily tasks, but also gain visibility into the status of your work. You’ll be able to automate processes that you run over e-mail today using the new checklist and approval Process App templates. Plus, you’ll have real-time access to expert online business process communities right on your desktop, so you can participate in the conversation, share best practices, or ask questions.

It’s good to see IBM consolidating these social BPM efforts; the roadmap for doing this wasn’t really clear before, but now we’re seeing the IBM Blueworks community coming together with the Lombardi Blueprint tools. I’m sure that there will still be some glitches in integration, but this is a good first step. Also, Phil told me in the hallway before the session that he’s been made VP of BPM at IBM, with both product management and development oversight, which is a good move in general and likely required to keep a high-powered individual like Phil engaged.

With the announcement out of the way, he moved on with some of the same material from his BPM 2010 talk: a specific large multi-national organization has highly repeatable processes representing about 2.5% of their work, somewhat repeatable processes are 22.5%, while barely repeatable processes form the remaining 75%, and are mostly implemented with tools like Excel over email. Getting back to the issue from the beginning of the presentation, we need to have more and better tooling for those 75% of the processes that impact many more people than the highly repeatable processes that we’re spending so much time and money implementing.

With Blueworks Live, of course, you can automate these long tail processes in a matter of seconds 😉 but I think that the big news here is the social stream generated by these processes rather than the ease of creating the processes, which mostly already existed in Blueprint. Instant visibility through activity streams.