IBM BPM: Merging the Paths

“Is there any point to which you would wish to draw my attention?” “To the curious incident of the dog in the night-time.” “The dog did nothing in the night-time.” “That was the curious incident,” remarked Sherlock Holmes.

Silver Blaze, Sir Arthur Conan Doyle

And so the fact that I (and others) have not yet blogged about the IBM BPM release has itself become a point of discussion. 😉

To recount the history, I was briefed on the new IBM BPM strategy and product offerings a few weeks before the Impact conference, with a strict embargo until the first day of the conference when the announcements would be made. Then, the week before Impact, IBM updated their online product pages and the sharp-eyed Scott Francis noticed this and jumped to the obvious – and correct – conclusion: IBM was about to integrate their WebSphere BPM offerings. That prerelease of information certainly defused the urgency of writing about the release at the moment of announcement, and gave many of us the chance to sit back and think about it a bit more. I only had a brief day and a half at Impact before making my way back east for another conference where I was giving a workshop, and here I am a week later finally finishing up my thoughts on IBM BPM.

Others who were there have already written about it: Clay Richardson and his now-infamous “fresh coat of paint” post, which I’m sure did not make him any friends in some IBM circles; Neil Ward-Dutton with his counterpoint to Clay’s opinion; some quick notes from Scott Francis in the context of his keynote blogging (which also links to the video of Phil Gilbert making the announcement); and Tony Baer as part of his post on a week of BPM announcements.

It’s important to look at how the IBM organization has realigned to allow for the new product release: Phil Gilbert, former president and CTO of Lombardi, now has overall responsibility for all of WebSphere BPM – including both the former Lombardi and WebSphere BPM products – plus ILOG rules management. Neil Ward-Dutton referred to this as the reverse takeover of IBM by Lombardi; when I had a chance for a 1:1 with Phil at Impact, I told him that we’d all bet that he would be gone from IBM after a year. He admitted that he originally thought so too, until they gave him the opportunity to do exactly what he knew needed to be done: bring together all of the IBM BPM offerings into a unified offering. This new product announcement is the beginning of that unification, but they still have a ways to go.

Let’s take a look at the product offering, then. They’ve taken pretty much everything in the WebSphere BPM portfolio (Lombardi Edition, Dynamic Process Edition, Process Server, Integration Developer, Business Modeler, Business Compass, Business Fabric) and mostly rolled it into IBM BPM or replaced its functionality with something similar; there are a few exceptions, such as Business Compass, that have simply disappeared. This reduces the entire IBM BPM portfolio to the following:

  • IBM Business Process Manager (which I’m covering here)
  • IBM Case Manager (the rebranding of some specialized functionality built on the IBM FileNet BPM platform, which is separate from the above IBM BPM offering)
  • IBM Blueworks Live
  • IBM Business Monitor
  • IBM BPM Industry Packs

Combining most of the WebSphere BPM components into IBM BPM V7.5, the new product offering has both a BPMN Process Designer and a BPEL Integration Designer, a common repository, and a process server that includes both the BPMN and BPEL engines. Now you can see where Clay Richardson is coming from with the “fresh coat of paint” characterization: the issue of one versus two process “servers” seemed to occupy an inordinate amount of time in discussions with IBM representatives, who stoically recited the party line that it’s one server. For those of us who actually used to write code like this for a living, it’s clear that it’s two engines: one BPMN and one BPEL. However, from the customer/user standpoint, it’s wrapped into a single Process Server, so if IBM ever gets around to refactoring into a single engine, that could be made fairly transparent to their customers, and would likely have the benefit of reducing IBM’s internal engineering costs of maintaining two engines rather than one. Personally, I believe that there is enough commonality between process design and service orchestration that both the designers and the engines could be combined into something that offers the full spectrum of functionality while reducing the underlying product complexity.
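To make the one-server-versus-two-engines distinction a bit more concrete, here’s a minimal sketch – my own illustration with hypothetical class and method names, not IBM’s actual code – of how a single Process Server facade could delegate to separate BPMN and BPEL engines while appearing as one server to the customer:

```java
// Conceptual sketch only: a single "Process Server" facade fronting two engines.
interface ProcessEngine {
    void deploy(String artifactId);
    String start(String processId);
}

class BpmnEngine implements ProcessEngine {
    public void deploy(String artifactId) { System.out.println("BPMN engine deploys " + artifactId); }
    public String start(String processId) { return "bpmn-instance:" + processId; }
}

class BpelEngine implements ProcessEngine {
    public void deploy(String artifactId) { System.out.println("BPEL engine deploys " + artifactId); }
    public String start(String processId) { return "bpel-instance:" + processId; }
}

// The facade is what the customer sees as one Process Server: it inspects the
// artifact and quietly delegates to the appropriate engine behind the scenes.
public class ProcessServerFacade {
    private final ProcessEngine bpmn = new BpmnEngine();
    private final ProcessEngine bpel = new BpelEngine();

    private ProcessEngine engineFor(String artifactId) {
        // Hypothetical naming convention, purely for illustration.
        return artifactId.endsWith(".bpel") ? bpel : bpmn;
    }

    public String deployAndStart(String artifactId, String processId) {
        ProcessEngine engine = engineFor(artifactId);
        engine.deploy(artifactId);
        return engine.start(processId);
    }

    public static void main(String[] args) {
        ProcessServerFacade server = new ProcessServerFacade();
        System.out.println(server.deployAndStart("ClaimApproval.bpmn", "ClaimApproval"));
        System.out.println(server.deployAndStart("CreditCheck.bpel", "CreditCheck"));
    }
}
```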

In addition to the core process functionality, the ILOG rules engine is also present, plus monitoring tools and user interface options with both the process portal and the Business Space composite application environment.

I don’t want to understate their achievements in this product offering: the (Lombardi-flavored) Process Center with its shared repository and process governance is significant, allowing users to reuse artifacts from the two different sides of the BPM house: you can add a BPEL process orchestration created in Integration Designer to your BPMN process created in Process Designer, or you can include a business object created in Process Designer as a data definition in your BPEL service orchestration in Integration Designer, or call a BPMN process for human task handling. The fact remains, however, that this is still a slightly uneasy combination of the two major BPM platforms, and it will likely take another version or two to work out the bumps.

Since this is IBM, they can’t just have one product configuration, but offer three:

  • The Express edition, offered at a price point that is probably less than your last car, is for starter BPM projects: full functionality of the Process Designer to build and run BPMN processes, but only one server with no clustering, so unlikely to be used for any mission-critical applications. If you’re just getting started and are doing human-centric BPM, then this is for you.
  • The Standard edition, which is pretty much the same human BPM and lightweight integration functionality as the former Lombardi Edition BPMS. Existing Lombardi Edition customers will be able to upgrade to this version seamlessly.
  • The Advanced edition, which adds the Integration Designer and its ability to create a SOA layer of BPEL service/process orchestrations that can then be called from the BPMN processes or run independently.

In IBM’s product architecture diagram, the Advanced edition is the whole thing, whereas the Standard and Express editions are missing the Integration Designer; to complicate that further, current WebSphere Process Server/Integration Designer customers will be transitioned to the Advanced edition but with the Process Designer disabled, a fourth shadow configuration that will not be available for new customers but is offered only as an upgrade. Both engines are still there in all editions, but it appears that without both designers, you can’t actually design anything that will run in one of the engines. For current customers, IBM has published information on migrating your existing configuration to the new BPM; there is a license migration path for all customers who currently have BPM products, but for some coming from the traditional WebSphere products, the actual migration of their applications may be a bit rocky.

The web-based Process Center is used for managing, deploying and interacting with processes of both types, although the Process Designer and Integration Designer are still applications that must be downloaded and installed locally. Within the Process Designer, there’s the familiar Lombardi “iTunes-style” view of the assets and dependencies. It’s important to point out that the Toolkits are assets that could have originated in either the Process Designer or the Integration Designer; in other words, they could be human workflows running on the BPMN engine or service orchestrations running on the BPEL engine, and can simply be dragged and dropped onto BPMN processes as activities. The development environment includes versioning, shared concurrent editing so that you can see which assets other developers are editing that might impact your project, playback of previous process versions, and the ability to view all versions of processes for deployment in Process Center. The Process Center view is identical from either design tool, providing an initial common view between these two environments. Linking these two environments through sharing of assets in the Process Center also eases deployment: everything that a process application depends upon, regardless of its origin, can be deployed as a single package.

Not everything comes from the former Lombardi Edition, however: the user interface builder in IBM BPM is based on Business Space, IBM’s composite application development tool, instead of the old Lombardi forms and UI technology; this allows for easy reuse of widgets in portals, and there’s also a REST interface to roll your own UI. Also, the proprietary rules engine in Lombardi is being replaced with ILOG, with the rules editor built right into the design environments; the ILOG engine is included in the Process Server, but can only be called from processes, not by external applications, so as not to cannibalize the standalone ILOG BRMS business. I’m sure that they will be supporting the old UI and rules for a while, but if you’re using those, you’re going to be encouraged to start migrating at some point.

There is currently no (announced) plan for IBM BPM process execution in the cloud (except for the simple user-created workflows in Blueworks Live), which I think will impact IBM BPM at some point: I understand that many of the large IBM customers are unlikely to go off premise for a production system, but more and more organizations that I work with are considering cloud-based solutions that they can provision and decommission near-instantaneously as a platform for development and testing, at the very least. They need to rethink their strategy on this, and stop offering expensive custom hosted or private “cloud” platforms as their only cloud alternatives.

Finally, there is the red-headed stepchild in the IBM BPM portfolio: IBM FileNet BPM, which has mostly been made over as the IBM Case Manager product. Interestingly, some of the people from the FileNet product side were present at Impact (usually they would only attend the IOD conference, which covers the Information Management software portfolio in which FileNet BPM is entombed), and there was talk about how Case Manager and the rest of the BPM suite could work together. In my opinion, bringing FileNet BPM into the overall IBM BPM fold makes a lot of sense; as I blogged back in 2006 at the time of the acquisition, and in 2008 when comparing it to the Oracle acquisition, they should have done that from the start, but there seemed (at the time) to be some fundamental misunderstandings about the product capabilities, and they chose to refocus it on content-centric BPM rather than combining it with WebSphere Process Server. Of course, if they had done the latter, we likely would be seeing a very different IBM BPM product mix today.

Client-side Service Composition Using Generic Service Representative

I’m back for a second day at CASCON, attending the technical papers session focused on service oriented systems.

First of the three papers was Client-side Service Composition Using Generic Service Representative by Mehran Najafi and Kamran Sartipi of McMaster University; the concept is presented as a way to avoid the privacy and bandwidth problems that can occur when data is passed to a server-side composition. The approach relies on a client-side stateless task service and a service representative, doing whatever processing can be done locally before resorting to a remote web service call. Using the task services approach rather than a Javascript or RIA-based approach provides more flexibility in terms of local service composition. McMaster has a big medical school, and the example that he discussed was based on clinical data, where privacy is a big concern; being able to keep the patient data only on the client rather than having it flow through a server-side composition reduces the privacy concerns as well as improving performance.
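As a rough sketch of the idea (my own illustration, not the authors’ code), a client-side service representative could keep a registry of local stateless task services and only fall back to a remote web service when no local task applies, so sensitive data stays on the client wherever possible; all names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Sketch of a client-side "service representative": prefer local task services,
// fall back to a remote web service only when necessary.
public class ServiceRepresentative {
    private final Map<String, Function<Map<String, String>, String>> localTasks = new HashMap<>();

    public void registerLocalTask(String operation, Function<Map<String, String>, String> task) {
        localTasks.put(operation, task);
    }

    public String invoke(String operation, Map<String, String> data) {
        // Prefer a local task service: the data never leaves the client.
        Optional<Function<Map<String, String>, String>> local =
                Optional.ofNullable(localTasks.get(operation));
        return local.map(task -> task.apply(data))
                    .orElseGet(() -> callRemoteService(operation, data));
    }

    private String callRemoteService(String operation, Map<String, String> data) {
        // Placeholder for a real web service call; only unavoidable requests reach here.
        return "remote result for " + operation;
    }

    public static void main(String[] args) {
        ServiceRepresentative rep = new ServiceRepresentative();
        // A locally executable task: compute a value from patient data on the client.
        rep.registerLocalTask("computeBMI", d ->
                String.valueOf(Double.parseDouble(d.get("weightKg"))
                        / Math.pow(Double.parseDouble(d.get("heightM")), 2)));
        System.out.println(rep.invoke("computeBMI", Map.of("weightKg", "70", "heightM", "1.75")));
        System.out.println(rep.invoke("lookupDrugInteractions", Map.of("drug", "aspirin")));
    }
}
```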

I’ve seen this paradigm in use in a couple of different BPM systems that provide client-side screen flow (usually in Javascript at the client or in the web tier) within a single process activity, including TIBCO AMX/BPM’s Page Flow, Salesforce’s Visual Process Manager and Outsystems. Obviously, the service composition presented in the paper today is a more flexible approach, and is true client-side rather than on the web tier, but the ideas of local process management are already appearing in some BPM products.

There were some interesting questions about support for this approach on mobile platforms (possible as the mobile OS’s become more capable) and a discussion on what we’re giving up in terms of loose coupling by having a particular orchestration of multiple services bound to a single client-side activity.

CASCON Keynote: 20th Anniversary, Big Data and a Smarter Planet

With the morning workshop (and lunch) behind us, the first part of the afternoon is the opening keynote, starting with Judy Huber, who oversees the 5,000 people at the IBM Canada software labs, which includes the Centre for Advanced Studies (CAS) technology incubation lab that spawned this conference. This is the 20th year of CASCON, and some of the attendees have been here since the beginning, but there are a lot of younger faces who were barely born when CASCON started.

To recognize the achievements over the years, Joanna Ng, head of research at CAS, presented awards for the high-impact papers from the first decade of CASCON, one each for 1991 to 2000 inclusive. Many of the authors of those papers were present to receive the award. Ng also presented an award to Hausi Müller from University of Victoria for driving this review and selection process. The theme of this year’s conference is smarter technology for a smarter planet – I’ve seen that theme at all three IBM conferences that I’ve attended this year – and Ng challenged the audience to step up to making the smarter planet vision into reality. Echoing the words of Brenda Dietrich that I heard last week, she stated that it’s a great time to be in this type of research because of the exciting things that are happening, and the benefits that are accruing.

Following the awards, Rod Smith, VP of IBM emerging internet technologies and an IBM fellow, gave the keynote address. His research group, although it hasn’t been around as long as CAS, has a 15-year history of looking at emerging technology, with a current focus on “big data” analytics, mobile, and browser application environments. Since they’re not a product group, they’re able to take their ideas out to customers 12-18 months in advance of marketplace adoption to test the waters and fine-tune the products that will result from this.

They see big data analytics as a new class of application on the horizon, since they’re hearing customers ask for the ability to search, filter, remix and analyze vast quantities of data from disparate sources: something that the customers thought of as Google’s domain. Part of IBM’s BigInsights project (which I heard about a bit last week at IOD) is BigSheets, an insight engine for enabling ad hoc discovery for business users, on a web scale. It’s like a spreadsheet view on the web, which is a metaphor easily understood by most business users. They’re using the Hadoop open source project to power all of the BigInsights projects.

It wouldn’t be a technical conference in 2010 if someone didn’t mention Twitter, and this is no exception: Smith discussed using BigSheets to analyze and visualize Twitter streams related to specific products or companies. They also used IBM Content Analytics to create the analysis model, particularly to find tweets related to mobile phones with a “buy signal” in the message. They’ve also done work on a UK web archive for the British Library, automating the web page classification and making 128 TB of data available to researchers. In fact, any organization that has a lot of data, mostly unstructured, and wants to open it up for research and analysis is a target for these sort of big data solutions. It stands to reason that the more often you can generate business insights from the massive quantity of data constantly being generated, the greater the business value.

Next up was Christian Couturier, co-chair of the conference and Director General of the Institute of Information Technology at Canada’s National Research Council. NRC provides some of the funding to IBM Canada CAS Research, driven by the government’s digital economy strategy, which includes not just improving business productivity but creating high-paying jobs within Canada. He mentioned that Canadian businesses lag behind other countries in adoption of certain technologies, and I’m biting my tongue so that I don’t repeat my questions of two years ago at IT360, where I challenged the Director General of Industry Canada on what they were doing about the excessively high price of broadband and complete lack of net neutrality in Canada.

The program co-chairs presented the award for best paper at this show, on Testing Sequence Diagram to Colored Petri Nets Transformation, and the best student paper, on Integrating MapReduce and RDBMSs; I’ll check these out in the proceedings as well as a number of other interesting looking papers, even if I don’t get to the presentations.

Oh yeah, and in addition to being a great, free conference, there’s birthday cake to celebrate 20 years!

CASCON Workshop: Accelerate Service Integration In Your BPM and SOA Applications

I’m attending a workshop on the first morning of CASCON, the conference on software research hosted by IBM Canada. There’s quite a bit of good work done at the IBM Toronto software lab, and this annual conference gives them a chance to engage the academic and corporate community and present this research.

The focus of this workshop is service integration, including enabling new services from existing applications and creating new services by composing from existing services. Hacking together a few services into a solution is fairly simple, but your results may not be all that predictable; industrial-strength service integration is a bit more complex, and is concerned with everything from reusability to service level agreements. As Allen Chan of IBM put it when introducing the session: “How do we enable mere mortals to create a service integration solution with predictable results and enterprise-level reliability?”

The first presentation was by Mannie Kagan, an IBMer who is working with TD Bank on their service strategy and implementation; he walked us through a real-life example of how to integrate services into a complex technology environment that includes legacy systems as well as newer technologies. Based on this, and a large number of other engagements by IBM, they are able to discern patterns in service integration that can greatly aid in implementation. Patterns can appear at many levels of granularity, which they classify as primitive, subflow, flow, distributed flow, and connectivity topology. From there, they have created an ESB framework pattern toolkit, an Eclipse-based toolkit that allows for the creation of exemplars (templates) of service integration that can then be adapted for use in a specific instance.

He discussed two particular patterns that they’ve found especially useful: web service notification (effectively, pub-sub over web services) and SCRUD (search, create, read, update, delete); think of these as some basic building blocks for many of the types of service integrations that you might want to create. This was presented in a specific IBM technology context, as you might imagine: DataPower SOA appliances for processing XML messages and legacy message transformations, and WebSphere Services Registry and Repository (WSRR) for service governance.
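To show what the SCRUD building block amounts to, here’s a hedged sketch of a generic SCRUD contract with a trivial in-memory implementation; the interface and names are my own illustration, not artifacts from the IBM toolkit:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.UUID;

// SCRUD = search, create, read, update, delete: a basic service building block.
interface ScrudService<T> {
    List<T> search(String query);
    String create(T entity);
    Optional<T> read(String id);
    void update(String id, T entity);
    void delete(String id);
}

// Trivial in-memory implementation, just to show the shape of the contract.
public class InMemoryScrud<T> implements ScrudService<T> {
    private final Map<String, T> store = new HashMap<>();

    public List<T> search(String query) {
        List<T> hits = new ArrayList<>();
        for (T t : store.values()) if (t.toString().contains(query)) hits.add(t);
        return hits;
    }
    public String create(T entity) { String id = UUID.randomUUID().toString(); store.put(id, entity); return id; }
    public Optional<T> read(String id) { return Optional.ofNullable(store.get(id)); }
    public void update(String id, T entity) { store.put(id, entity); }
    public void delete(String id) { store.remove(id); }

    public static void main(String[] args) {
        ScrudService<String> accounts = new InMemoryScrud<>();
        String id = accounts.create("ACME Corp dispute");
        System.out.println(accounts.read(id).orElse("not found"));
        System.out.println(accounts.search("dispute"));
    }
}
```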

In his wrapup, he pointed out that not all patterns need to be created at the start, and that patterns can be created as required when there is evidence of reuse potential. Since patterns take more resources to create than a simple service integration, you need to be sure that there will be reuse before it is worth creating a template and adding it to the framework.

Next up was Hans-Arno Jacobsen of University of Toronto discussing their research in managing SLAs across services. He started with a business process example of loan application processing that included automated credit check services, and had an SLA in terms of parameters such as total service subprocess time, service roundtrip time, service cost and service uptime. They’re looking at how the SLAs can guide the efficient execution of processes, based in a large part on event processing to detect and determine the events within the process (published state transitions). He gave quite a detailed description of content-based routing and publish-subscription models, which underlie event-driven BPM, and their PADRES ESB stack that hides the intricacies of the underlying network and system events from the business process execution by creating an overlay of pub-sub brokers that filters and distributes those events. In addition to the usual efficiencies created by the event pub-sub model, this allows (for example) the correlation of network slowdowns with business process delays, so that the root cause of a delay can be understood. Real-time business analytics can also be driven from the pub-sub brokers.
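Here’s a minimal sketch of the content-based routing model that underlies this kind of pub-sub overlay: subscribers register predicates over event content rather than topic names, and the broker delivers each event only where the predicate matches. This is a generic illustration of the model, not PADRES code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Content-based pub-sub: subscriptions are predicates over event attributes.
public class ContentBasedBroker {
    record Event(Map<String, Object> attributes) {}

    private final List<Map.Entry<Predicate<Event>, String>> subscriptions = new ArrayList<>();

    void subscribe(String subscriberName, Predicate<Event> filter) {
        subscriptions.add(Map.entry(filter, subscriberName));
    }

    void publish(Event event) {
        // Deliver only to subscribers whose content filter matches this event.
        for (var sub : subscriptions)
            if (sub.getKey().test(event))
                System.out.println("deliver to " + sub.getValue() + ": " + event.attributes());
    }

    public static void main(String[] args) {
        ContentBasedBroker broker = new ContentBasedBroker();
        // A monitoring subscriber interested only in loan-process steps slower than the SLA.
        broker.subscribe("sla-monitor", e ->
                "loanApproval".equals(e.attributes().get("process"))
                        && ((Number) e.attributes().get("durationMs")).longValue() > 5000);
        broker.publish(new Event(Map.<String, Object>of(
                "process", "loanApproval", "step", "creditCheck", "durationMs", 7200)));
        broker.publish(new Event(Map.<String, Object>of(
                "process", "loanApproval", "step", "creditCheck", "durationMs", 800)));
    }
}
```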

He finished by discussing how business processes can actually be guided by SLAs, that is, runtime use of SLAs rather than just using them for monitoring processes. If the process can be allocated to multiple resources in a fine-grained manner, then the ESB broker can dynamically determine the assignment of process parts to resources based on how well those resources are meeting their SLAs, or on expected performance based on other factors such as location of data or minimization of traffic. He gave an example of optimization based on minimizing traffic by measuring message hops, which takes into account both the rate of message hops and the distance between execution engines. This requires that the distributed execution engines include engine profiling capabilities that allow an engine to determine not only its own load and capacity, but that of other engines with which it communicates, in order to minimize cost over the entire distributed process. To fine-tune this sort of model, process steps that have a high probability of occurring in sequence can be dynamically bound to the same execution engine. In this situation, they’ve seen a 47% reduction in traffic and a 50% reduction in cost relative to the static deployment model.
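A rough sketch of the SLA-guided allocation idea follows; the cost function and weights below are invented for illustration (the research model is far more sophisticated), but it shows the shape of the decision: each candidate engine exposes a simple profile and the broker binds the next step to the engine with the lowest estimated cost.

```java
import java.util.Comparator;
import java.util.List;

// Each candidate execution engine exposes a profile used to estimate the cost of
// binding the next process step to it.
class EngineProfile {
    final String name;
    final double load;          // 0..1 utilization
    final int hopsFromPrevStep; // message hops from the engine that ran the previous step

    EngineProfile(String name, double load, int hopsFromPrevStep) {
        this.name = name;
        this.load = load;
        this.hopsFromPrevStep = hopsFromPrevStep;
    }

    double estimatedCost() {
        // Weight traffic (hops) more heavily than load, per the traffic-minimization example.
        return 2.0 * hopsFromPrevStep + 1.0 * load;
    }
}

public class SlaGuidedAllocator {
    static EngineProfile chooseEngine(List<EngineProfile> candidates) {
        return candidates.stream()
                .min(Comparator.comparingDouble(EngineProfile::estimatedCost))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<EngineProfile> engines = List.of(
                new EngineProfile("engine-A", 0.7, 0),  // co-located with the previous step
                new EngineProfile("engine-B", 0.2, 3),
                new EngineProfile("engine-C", 0.4, 1));
        System.out.println("bind next step to " + chooseEngine(engines).name);
    }
}
```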

After a brief break, Ignacio Silva-Lepe from IBM Research presented on federated SOA. SOA today is mostly used in a single domain within an organization, that is, it is fairly siloed in spite of the potential for services to be reused across domains. Whereas a single domain will typically have its own registry and repository, a federated SOA can’t assume that is the case, and must be able to discover and invoke services across multiple registries. This requires a federation manager to establish bridges across domains in order to make the service group shareable, and inject any cross-domain proxies required to invoke services across domains.

It’s not always appropriate to have a designated centralized federation manager, so there is also the need for domain autonomy, where each domain can decide what services to share and specify the services that it wants to reuse. The resulting cross-domain service management approach allows for this domain autonomy, while preserving location transparency, dynamic selection and other properties expected from federated SOA. In order to enable domain autonomy, the domain registry must not only have normal service registry functionality, but also references to required services that may be in other domains (possibly in multiple locations). The registries then need to be able to do a bilateral dissemination and matching of interest and availability information: it’s like internet dating for services.
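The “internet dating for services” matching can be sketched very simply: each domain registry advertises what it offers and what it requires, and a bilateral match pairs requirements with offers across domains. The structures and names below are my own illustration, not the actual IBM Research design:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// A domain registry that advertises both offered services and required services.
class DomainRegistry {
    final String domain;
    final Set<String> offered = new HashSet<>();   // services this domain shares
    final Set<String> required = new HashSet<>();  // services this domain wants to reuse

    DomainRegistry(String domain) { this.domain = domain; }
}

public class FederationMatcher {
    // For each required service, find the other domains that offer it.
    static Map<String, List<String>> match(List<DomainRegistry> registries) {
        Map<String, List<String>> bindings = new LinkedHashMap<>();
        for (DomainRegistry consumer : registries)
            for (String need : consumer.required)
                for (DomainRegistry provider : registries)
                    if (provider != consumer && provider.offered.contains(need))
                        bindings.computeIfAbsent(consumer.domain + " needs " + need, k -> new ArrayList<>())
                                .add(provider.domain);
        return bindings;
    }

    public static void main(String[] args) {
        DomainRegistry claims = new DomainRegistry("claims");
        claims.offered.add("DocumentLookup");
        claims.required.add("CreditCheck");
        DomainRegistry lending = new DomainRegistry("lending");
        lending.offered.add("CreditCheck");
        lending.required.add("DocumentLookup");
        System.out.println(match(List.of(claims, lending)));
    }
}
```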

They have quite a bit of work planned for the future, beyond the fairly simple matching of interest to availability: allowing domains to restrict visibility of service specifications to authorized parties without using a centralized authority, for example.

Marsha Chechik, also from University of Toronto, gave a presentation on automated integration determination; like Jacobsen, she collaborates with IBM Research on middleware and SOA; unlike Jacobsen, however, she is presenting research that is at a much earlier stage. She started with a general description of integration, where a producer and a consumer share some interface characteristics. She went on to discuss interface characteristics (what already exists) and service exposition characteristics (what we want): the as-is and to-be states of service interfaces. For example, there may be a requirement for idempotence, where multiple “submit” events over an unreliable communications medium would result in only a single result. In order to resolve the differences in characteristics between the as-is and to-be, we can consider typical service interface patterns, such as data aggregation, mapping or choreography, to describe the resolution of any conflicts. The problem, however, is that there are too many patterns, too many choices and too many dependencies; the goal of their research is to identify essential integration characteristics and make a language out of them, identify a methodology for describing aspects of integration, identify the order in which patterns can be determined, identify decision trees for integration pattern determination, and determine cases where integration is impossible.

Their first insight was to separate pattern-related concerns between physical and logical characteristics; every service has elements of both. They have a series of questions that begin to form a language for describing the service characteristics, and a classification for the results from those questions. The methodology contains a number of steps:

  1. Determine principal data flow
  2. Determine data integrity data flow, e.g., stateful versus stateless
  3. Determine reliability flow, e.g., mean time between failure
  4. Determine efficiency, e.g., response time
  5. Determine maintainability

Each of these steps determines characteristics and mapping to integration patterns; once a step is completed and decisions made, revisiting it should be minimized while performing later steps.

It’s not always possible to provide a specific characteristic for any particular service; their research is working on generating decision trees for determining if a service requirement can be fulfilled. This results in a pattern decision tree based on types of interactions; this provides a logical view but not any information on how to actually implement them. From there, however, patterns can be mapped to implementation alternatives. They are starting to see the potential for automated determination of integration patterns based on the initial language-constrained questions, but aren’t seeing any hard results yet. It will be interesting to see this research a year from now to see how it progresses, especially if they’re able to bring in some targeted domain knowledge.
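Since the research isn’t producing hard results yet, here’s only a toy sketch of what an automated determination step might eventually look like: a decision function from answered characteristic questions to a candidate integration pattern, with characteristics and pattern names that are placeholders of my own rather than anything from the actual work:

```java
import java.util.Map;

// Toy decision function: answered characteristic questions -> candidate pattern.
public class PatternDecision {
    static String suggestPattern(Map<String, String> characteristics) {
        boolean multipleProducers = "many".equals(characteristics.get("producers"));
        boolean needsIdempotence  = "yes".equals(characteristics.get("idempotent"));
        boolean orderedSteps      = "yes".equals(characteristics.get("orderedSteps"));

        // Placeholder decision tree; a real one would be derived from the methodology steps.
        if (multipleProducers && !orderedSteps) return "data aggregation";
        if (orderedSteps)                       return "choreography";
        if (needsIdempotence)                   return "request/reply with duplicate detection";
        return "simple mapping";
    }

    public static void main(String[] args) {
        System.out.println(suggestPattern(Map.of(
                "producers", "many", "idempotent", "yes", "orderedSteps", "no")));
    }
}
```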

Last up in the workshop was Vadim Berestetsky of IBM’s ESB tools development group, presenting on support for patterns in IBM integration offerings. He started with a very brief description of an ESB, and WebSphere Message Broker as an example of an ESB that routes messages from anywhere to anywhere, doing transformations and mapping along the way. He basically walked through the usage of the product for creating and using patterns, and gave a demo (where I could see vestiges of the MQ naming conventions). A pattern specification typically includes some descriptive text and solution diagrams, and provides the ability to create a new instance from this pattern. The result is a service integration/orchestration map with many of the properties already filled in; obviously, if this is close to what you need, it can save you a lot of time, like any other template approach.

In addition to demonstrating pattern usage (instantiation), he also showed pattern creation by specifying the exposed properties, artifacts, points of variability, and (developer) user interface. Looks good, but nothing earth-shattering relative to other service and message broker application development environments.

There was an interesting question that goes to the heart of SOA application development: is there any control over what patterns are created and published to ensure that they are useful as well as unique? The answer, not surprisingly, is no: that sort of governance isn’t enforced in the tool since architects and developers who guide the purchase of this tool don’t want that sort of control over what they do. However, IBM may see very similar patterns being created by multiple customer organizations, and choose to include a general version of that pattern in the product in future. A discussion about using social collaboration to create and approve patterns followed, with Berestetsky hinting that something like that might be in the works.

That’s it for the workshop; we’re off to lunch. Overall, a great review of the research being done in the area of service integration.

This afternoon, there’s the keynote and a panel that I’ll be attending. Tomorrow, I’ll likely pop in for a couple of the technical papers and to view the technology showcase exhibits, then I’m back Wednesday morning for the workshop on practical ontologies, and the women in technology lunch panel. Did I mention that this is a great conference? And it’s free?

IOD Keynote: Computational Mathematics and Freakonomics

I attended the keynote this morning, on the theme of looking forward: first we heard from Mike Rhodin, an exec in the IBM Software group, then Brenda Dietrich, a mathematician (and VP – finally, a female IBM exec on stage) from the analytics group in IBM Research. IBM Research has nine labs around the world, including a new one just launched in Brazil, and a number of collaborative research facilities, or “collaboratories”, where they work with universities, government agencies and private industries on research that can be leveraged into the market more quickly. I’ve met a few of the BPM researchers from the Zurich lab at the annual academic BPM conference, but the range of the research across the IBM labs is pretty vast: from nanotechnology, to the cloud, to all of the event generation that leads to the “smarter planet” that IBM has been promoting. She’s here from the analytics group because analytics is at the top of this pyramid of research areas, especially in the context of the smarter planet: all of our devices are generating a flood of events and data, and some pretty smart analytics have to be in place to be able to make sense of all this.

The future of analytics is moving from today’s static model of collect-analyze-present results, to more predictive analytics that can create models of the future based on what’s happened in the past, and use that flood of data (such as Twitter) as input to these analytical models.

I have a lot of respect for IBM for trying out their own ideas on systems on themselves as one big guinea pig, and this analytics research is no exception. They’re using data from all sorts of internal systems, from manufacturing plants to software development processes to human resources, to feed into this research, and benefit from the results. When this starts to hit the outside market, it has impacts on a much wider variety of industries, such as telco and oil field development. Not surprisingly, this ties in with master data management, since you need to deal with common data models if you’re going to perform complex analytics and queries across all of this data, and their research on using the data stream to actually generate the queries is pretty cool.

She showed a short video clip on Watson, an AI “question answering system” that they’ve built, and showed it playing Jeopardy, interpreting the natural language questions – including colloquialisms – and responding to them quickly, beating out some top human Jeopardy players. She closed with a great quote that is inspirational in so many ways, especially to girls in mathematics: “It’s a great time to be a computational mathematician”.

The high-profile speakers of the keynote were up next: Steven Levitt and Stephen Dubner, authors of Freakonomics and Superfreakonomics, with some interesting anecdotes about how they started working together (Levitt’s the genius economist, and Dubner’s the writer who collaborated with him on the books). They talked about turning data into ideas, tying in with the analytics theme; they had lots of interesting and humorous stories on an economic theme, such as teaching monkeys about money as a token to be exchanged for goods and (ahem) services, and what that teaches us about risk and loss aversion in people.

I have a noon flight home to Toronto, so this ends my time at IOD 2010. This is my first IOD: I used to attend FileNet’s UserNet conference before the acquisition, but have never been to IOD or Impact until this year. With over 10,000 people registered, this is a massive conference that covers a pretty wide range of information management technologies, including the FileNet ECM, BPM and now Case Manager software that is my main focus here. I’ve had a look at the new IBM Case Manager, as you’ve read in my posts from yesterday, and give it a bit of a mixed review, although it’s still not even released. I’m hoping for an in-depth demo sometime in the coming weeks, and will be watching to see how IBM launches itself into the case management space.

Customizing the IBM Case Manager UI

Dave Perman and Lauren Mayes had the unenviable position of presenting at the end of the day, and at the same time as the expo reception was starting (a.k.a. “open bar”), but I wanted to round out my view of the new Case Manager product by looking at how the user interfaces are built. This is all about the Mashup Center and the Case Manager widgets; I’ve played around with the ECM widgets in the past, which provide an easy way to build a composite application that includes FileNet ECM capabilities.

Perman walked through the Case Manager Builder briefly to show how everything hangs together – or at least, the parts that are integrated into the Builder environment, which are the content and process parts, but not rules or analytics – then described the mashup environment. The composite application development (mashup) environment is pretty standard functionality in BPM and ACM these days, but Case Manager comes with a pre-configured set of pages that make it easy to build case application UIs. A business analyst can easily customize the standard Case Manager pages, selecting which widgets are included and their placement on the page, including external (non-Case Manager) widgets.

The designer can also override the standard case view pages either for all users or for specific roles; this requires creating the page in the mashup environment and registering it for use in Case Manager, then using the Case Manager Builder to assign that page to the specific actions associated with a case. In other words, the UI design is not integrated into the Case Builder environment, although the end result is linked within that environment.

Mayes then went through the process of building and integrating 3rd party widgets; there’s a lot of material on the IBM website now on how to build widgets, and this was just a high-level view of that process and the architecture of integrating between the Mashup Center and the ACM widgets, themes and ECM services on the application server. This uses lightweight REST services that return JSON, which is easier to deal with in the browser, including CMIS REST services for content access, PE REST services for process access, and some custom case-specific REST services. Since there are widgets for Sametime presence and chat functionality, they link through to a Sametime proxy server on the application server. For you FileNet developer geeks, know that you also have to have an instance of Workplace XT running on the application server as well. I’m not going to repeat all the gory details, but basically once you have your custom widget built, you can deploy it so that it appears on the Mashup Center palette, and it can be used like any other pre-existing widget. There’s also a command widget that retrieves all the case information so that it’s not loaded multiple times by all of the other widgets; it also acts as a controller for moving between list and detail pages.
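As an example of the kind of lightweight REST/JSON call involved, here’s a hedged sketch of a widget-side fetch of case data; the host and endpoint path are hypothetical placeholders, not the actual Case Manager or CMIS REST URLs:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of a single fetch of case data over REST/JSON, of the sort a controller
// widget could cache and hand to the other widgets on the page.
public class CaseDataFetcher {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://acm-server.example.com/rest/cases/12345")) // placeholder URL
                .header("Accept", "application/json")
                .GET()
                .build();
        // The JSON body would then be shared with the list/detail widgets rather than
        // each widget loading the case information separately.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```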

This is a bit more information than I was counting on absorbing this late in the day, and I ducked out early when the IBM partner started presenting what they’ve done with custom widgets.

That’s it for today; tomorrow will be a short day since I fly home mid-day, but I’ll likely be at one or two sessions in the morning.

IBM FileNet BPM Product Update

With all this news this week about Case Manager, my old friend BPM seems to have been left on the sidelines, although it’s partially hidden within the new Case Manager offering. However, we do have one session by Mike Fannon, BPM product manager, giving us the update on what’s happening with BPM.

The first thing is new OS platform support for Linux and zLinux; although this is important for many customers who have standardized on Linux – or want to integrate BPM with CM8 on their mainframes – you can imagine this is not the most exciting announcement to me. Yes, I have customers who will love this. Now move on. 🙂

Next is the port of the Process Engine to a standalone Java app (not J2EE), from its original C++ beginnings. Although this seems on the surface to be not much more exciting than the Linux support, it’s pretty significant, and not just for the performance boost that they’re seeing. It means reduced complexity in the APIs and database interfaces, better standardization, and it also brings PE in line architecturally with the Content Engine, even allowing PE and CE to share the same database. In the future, they’re considering moving it to a J2EE container, which provides a lot more flexibility for things like server farming.

They’re also supporting multi-tenancy, allowing multiple PEs to run on the same virtual server with a separate application environment, user space, backup and restore for each tenant. These PE Stores (analogous to CE Object Stores) seem to be replacing the old isolated regions paradigm, and there are procedures for moving isolated regions to separate PE Stores. If you’re an old BPM hack, then all your old VW-prefixed admin commands will be replaced as the vestiges of Visual WorkFlo are finally purged. As the owner of a small systems integration firm, I designed and wrote one of the first VW apps back in 1994, so this does bring a small tear to my eye, although it has obviously been too long coming.

From an upgrade standpoint, there are supported upgrade paths (some staged, some direct) from eProcess 5.2 (can’t believe that’s still out there) as well as BPM 3.53 and later, including migration tools for in-flight process instances. There are changes to the data model of the underlying database tables, so if you’ve built any applications such as advanced analytics that directly hit the operational database, you’re going to have some refactoring to do.

Process Analyzer has been extended to add capabilities for Case Manager, such as aggregation based on case properties. In addition to using Excel pivot tables, which is how it has always been done in the past for PA, you can now use Cognos BI instead. Of course, since PA is based on a set of cubes in an MS SQL Server/MS Analysis Services engine that is trickle-fed from the PE database, this has always been possible, but I assume that it’s just better integrated now. Unsurprisingly, they do have a direction to eliminate Microsoft technology dependencies, so at some point in the future, I expect that you’ll see the PA data store ported off the SQL Server/Analysis Services platform. The Process Monitor dashboard has also been updated to handle Case Manager data, and better integrated with Cognos.

There were a number of enhancements to the ECM Widgets in March and June, such as support for Business Space instead of the Mashup Center, and some new widgets for process history and get next work item (finally). It looks like they’re building out the widget functionality to the point where it’s actually usable for real applications; without the get next work item, you couldn’t use it to build any sort of heads-down processing functionality.

There are really few functionality improvements to BPM; most of this is refactoring and platform porting. I think that a lot of the BPM creative juices are going towards Case Manager, and if you look at the direction of the 100% Java PE port and ability to share databases with CE, it’s possible that we’ll see some sort of merging of ECM, BPM and Case Manager into a single engine in the future. IBM, of course, did not say that.

IBM Case Manager Technical Roundtable

Bill Lobig, Mike Marin, Peggy (didn’t catch her last name) and Lauren Mayes hosted a freeform roundtable for any technical questions about the new Case Manager product.

I had a chat with Mike prior to the talk, and he reinforced this during the session, about the genesis of Case Manager: although there were a lot of ideas that came from the old BPF product, Mike and his team spent months interviewing the people who had used BPF to find out what worked and what didn’t work, then built something new that incorporated the features most needed by customers. The object model for the case is now part of the basic server classes rather than being a higher-level (and therefore less efficient) custom object, there are new process classes to map properties between case folders and processes, and a number of other significant architectural changes and upgrades to make this happen. I see TIBCO going through this same pain right now with the lack of upgrade path from iProcess to AMX BPM, and to the guy in the audience who said that it’s not fair that IBM gives you a crappy product, you use it and provide feedback on how to improve it, then they charge you for the new product: well, that’s just how software works sometimes, and vendors will never have true innovation if they always have to be supporting their (and your) entire legacy. There does need to be some sort of migration path at least for the completed case folder objects from BPF to Case Manager native case objects, although that hasn’t been announced, since these are long-term corporate assets that have to be managed the same as any other content; however, I would not expect any migration of the BPF apps themselves.

More process functionality is being built right into the content engine; this is significant in that you’ve always required both ECM and BPM to do any process management, but it sounds like some of that functionality is being drawn into the content engine. Does this mean that the content and process engines will eventually be merged into a single platform and a single product? That would drive further down the road of repositioning FileNet BPM as content-centric – originally done at the time of the FileNet acquisition, I believe, to avoid competition with WebSphere BPM – since if it’s truly content-centric, then why not just converge the engines, including the ACM capabilities? That would certainly make for a more seamless and consistent development environment, especially around issues like object modeling and security.

One consistent message that’s coming across in all the Case Manager sessions is accelerating the development time by allowing a business analyst to create a large part of a case application without involving IT; this is part of what BPF was trying to provide, and even BPM prior to that. I was FileNet’s evangelist for the launch of the eProcess product, which was the first version of the current generation of BPM, and we put forward the idea back in 2000 that a non-technical (or semi-technical) analyst could do some amount of the model-driven application development.

There are obviously still some rough edges in Case Manager, since version 1.0 isn’t even out yet. In a previous session, we saw some of the kludges for content analytics, dashboarding and business rules, and it sounds like role-based security and e-forms aren’t really fully integrated either. The implications of these latter two are tied up with the ease with which you can migrate a case application from one environment to another, such as from development to test to production: apparently, not completely seamless, although they are able to bundle part of a case application/template and move it between environments in a single operation. Every vendor needs to deal with this issue, and those that have a more tightly integrated set of objects making up an application have a much easier time with it, especially if they also offer a cloud version of their software and need to migrate easily between on-premise and cloud environments, such as TIBCO, Fujitsu and Appian. IBM is definitely playing catchup in the area of moving defined applications between environments, as well as in their overall integration strategy within Case Manager.

IOD ECM Keynote

Ron Ercanbrack, VP at IBM (my old boss from my brief tenure at FileNet in 2000-1, who once introduced me at a FileNet sales kickoff conference as the “Queen of BPM”), gave a brief ECM-focused keynote this morning. He covered quite a bit of the information that I was briefed on last week, including Case Manager, Content Analytics, improved content integration including CMIS, the Datacap and PSS acquisitions, enhancements to Content Collector, and more. He positioned Case Manager as a product “running on top of BPM”, which is a bit different than the ECM-centric message that I’ve heard so far, but likely also accurate: there are definitely significant components of each in there.

He was followed by Carl Kessler, VP of Development, to give a Case Manager demo; this covered the end-user case management environment (pretty much what we’ve seen in previous sessions, only live), plus Content Analytics for text mining which is not really integrated with Case Manager: it’s a separate app with a different look and feel. I missed the launch point, so I don’t know whether he launched this from a property value in Case Manager or had to start from scratch using the terms relevant to that case. It has some very nice text mining capabilities for searching through the content repository for correlation of terms, including some pretty graphs, but it’s a separate app.

We then went off to the Cognos Real-time Monitoring Dashboard, which is yet another non-integrated app with a different look and feel. He showed a dashboard that had a graph for average age of cases and allowed drill-down on different parameters such as industry type and dispute type, but that’s not really the same as a fully integrated product suite. Although all of the component applications are functional, this needs a lot more integration at the end-user level.

I did get a closer look at some of the Case Builder functionality than I’ve seen already: in the tasks definition section, there are required tasks, optional tasks and user-created tasks, although it’s not clear what user-created tasks are since this is design-time, not runtime.

Ercanbrack came back to the stage for a brief panel with three customers – Bank of America, State of North Dakota, and BlueCross BlueShield of Tennessee – talking about their ECM journeys. This was not specific to case management at all, but using records/retention management to reduce storage costs and risks in financial services, using e-discovery as part of a legal action in healthcare, and content management with a case management approach for allowing multiple state government agencies to share documents more effectively.

Advanced Case Management Empowering The Business Analyst

We’re still a couple of hours away from the official announcement about the release of IBM Case Manager, and I’m at a session on how business analysts will work with Case Manager to build solutions based on templates.

Like the other ACM sessions, this one starts with an overview of IBM’s case management vision as well as the components that make up the Case Manager product: ECM underlying it all, with Lotus Sametime for real-time presence and chat, ILOG JRules for business rules, Cognos Real Time Monitor for dashboards, IBM Content Analytics for unstructured content analysis, IBM (Lotus) Mashup Center for user interface and some new case management task and workflow functionality that uses P8 BPM under the covers. Outside the core of Case Manager, WebSphere Process Server can be invoked for integration/SOA applications, although it appears that this is done by calling it from P8 BPM, which was existing functionality. On top of this, there are pre-built solutions and solution templates, as well as a vast array of services from IBM GBS and partners.

[Image: IBM Case Management Vision]

The focus in this session is on the tools for the business analyst in the design-time environment, either based on a template or from scratch, including the user interface creation in the Mashup Center environment, analytics for both real-time and historical views of cases, and business rules. This allows a business analyst to capture requirements from the business users and create a working prototype that will form the shell of the final case application, if not the full executing application. The Case Builder environment that a business analyst works in to design case solutions also allows for testing and deploying the solution, although in most cases you won’t have your BAs deploying directly to a production environment.

Defining a case solution starts with the top-level case solution creation, including name, description and properties, then completing the following (a rough code-style sketch of what such a definition covers appears after the list):

  • Define case types
  • Specify roles
    • Define role inbasket
  • Define personal inbasket
  • Define document types
  • Associate runtime UI pages
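As promised above, here’s a rough sketch of the kind of information those steps capture – solution properties, case types with tasks, roles with inbaskets, document types and runtime pages. This is purely my own illustration of the shape of a case solution definition, not how Case Builder actually stores it:

```java
import java.util.List;
import java.util.Map;

// Illustrative data structures for a case solution definition; names and fields
// are hypothetical.
public class CaseSolutionSketch {
    record Role(String name, String inbasketPage) {}
    record CaseType(String name, List<String> requiredTasks, List<String> optionalTasks) {}
    record CaseSolution(String name, String description,
                        Map<String, String> properties,
                        List<CaseType> caseTypes,
                        List<Role> roles,
                        List<String> documentTypes,
                        Map<String, String> runtimePages) {}

    public static void main(String[] args) {
        CaseSolution disputeSolution = new CaseSolution(
                "CreditCardDispute", "Handle customer dispute cases",
                Map.of("region", "NA"),
                List.of(new CaseType("Dispute",
                        List.of("Verify transaction", "Contact merchant"),
                        List.of("Escalate to fraud team"))),
                List.of(new Role("CaseWorker", "caseworker-inbasket-page"),
                        new Role("Supervisor", "supervisor-inbasket-page")),
                List.of("DisputeForm", "Correspondence"),
                Map.of("caseDetail", "dispute-detail-page"));
        System.out.println(disputeSolution.name() + " with "
                + disputeSolution.caseTypes().size() + " case type(s)");
    }
}
```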

We didn’t see the ILOG JRules integration, and for good reason: in the Q&A, they admitted that this first version of Case Manager didn’t quite have that up to scratch, so I imagine that you have to work in both design environments, then call JRules from a BPM step or something of that nature.

The more that I see of Case Manager, the more I see the case management functionality that was starting to migrate into the FileNet ECM/BPM product from the Business Process Framework (BPF); I predicted that BPF would become part of the core product when I reviewed P8 BPM v4.5 a year and a half ago, and while this is being released as a separate product rather than part of the core ECM product, BPF is definitely being pushed to the side and IBM won’t be encouraging the creation of any new applications based on BPF. There’s no direct migration path from BPF to ACM; BPF technology is a bit old, and the time has come for it to be abandoned in favor of a more modern architecture, even if some of the functionality is replicated in the new system.

The step editor used to define the tasks associated with cases provides swimlanes for roles or workgroups (for underlying queue assignment, I assume), then allows the designer to add steps into the lanes and assign properties to the steps. The step properties are a simplified version of a step definition in P8 BPM, so I assume that this is actually a shared model (as opposed to export/import) that can be opened directly by the more technical BPM Process Designer. In P8 BPM 4.5, they introduced a “diagram mode” for business analysts in the Process Designer; this appears to be an even simpler process diagramming environment. It’s not BPMN compliant, which I think is a huge mistake: since it’s a workflow-style model with lanes in which activities and splits/merges are supported, this would have been a great opportunity to use the standard BPMN shapes and start getting BAs used to them.

I still have my notes from last week’s analyst briefing and my meeting with Ken Bisconti from yesterday which I will publish; these are more aligned with the “official” announcement that will be coming out today in conjunction with the press release.