IBM IOD Keynote Day 3: New Possibilities (When They’re Not Blacked Out)

So there I was, in my hotel room – where the wifi actually works – watching the IOD keynote online, when the video went offline during the Michael Lewis/Billy Beane talk.

I understand (now) that there are copyright issues around broadcasting Michael Lewis and Billy Beane talking about how analytics are used in baseball, but it would have been great to know that in advance: I might have made the long walk down to the crowded, noisy, wifi-challenged events center to watch it in person. Instead, I’m hanging out, hoping for a speedy return of the video feed, and not really knowing if it’s coming back at all. Kind of like a scheduled system outage that your sys admin forgot to tell everyone about.

I’m headed for the airport shortly, so this was my last (and somewhat unsatisfying) session from IOD 2011. Regardless, there is definitely good content at IOD, a great conference for customers, partners and industry watchers alike. I also had the chance to meet up with many of my old FileNet colleagues (where I worked in 2000-2001 as the eProcess evangelist, in what I usually refer to as the longest 16 months of my life), some of whom are still at IBM following the 2006 acquisition, and some of whom are now at IBM business partners.

My major disappointment, this morning’s keynote blackout aside, was the cancellation of the 1:1 interviews that had been scheduled with ECM executives. I think that being here under the blogger program (which designates me as “press”) rather than the analyst program (which is how I attend the IBM Impact conference, and most other vendor conferences) somehow has me seen as less influential, although obviously my output and take-aways for my clients are identical either way.

IBM FileNet BPM Product Update

Last session of the day, and Mike Fannon and Dave Yockelson are giving an update on FileNet BPM, particularly the 5.x release. The highlights:

  • The Process Engine (PE) was ported completely to a standard Java application, with some dramatic performance increases: 60% improvement in response time through the Java API, 70% (or more) reduction in CPU utilization, near-linear growth in CPU utilization for vertical scaling (i.e., more processes on a single server), and constant CPU utilization on horizontal scaling (e.g., twice as many processes on twice as many servers).
  • Linux and zLinux support.
  • Multi-tenancy, allowing multiple PE instances to run on the same virtual server, so that different isolated regions can be tied to separate PE database stores. If you have multiple isolated regions in a single store now, there will be a procedure for migrating them for better multi-tenancy.
  • Simplified installation, configuration and operation.
  • Deployment/upgrade paths directly from pretty much any currently supported FileNet BPM environment to 5.x, going all the way back to eProcess (there was one person in the audience who admitted to still using it), as well as v3.53, 4.03, 4.50 and 4.51.
  • Process Analyzer is now Case Analyzer, having been extended to add capabilities for Case Manager. Case Analyzer reporting is now supported through Cognos BI in addition to the old-school Excel pivot tables.
  • Process Monitor is now Case Monitor (I seem to be seeing a trend here), with Cognos Real-time Monitoring 10.1 (previously called Cognos Now) bundled in as an interactive dashboard solution.
  • Integration of IBM Forms (as we saw in the Case Manager product update) to be used in the same way as FileNet eForms are used in FileNet BPM today, namely, for a richer UI replacement that provides functionality such as digital signatures.

We moved on to yet another presentation on Case Manager; I could probably have skipped the previous session and just come to this one, but there was no indication in the conference materials that that would be a good idea.

Time for a quick sprint through the vendor expo, then off to the evening networking event, which promises displays highlighting 100 years of the history of IBM and the computing industry. We’ll also have a concert by Train, which is the third Train concert at the three large vendor conferences that I’ve attended in the last six weeks: Progress, TIBCO and now IBM. Not sure if the corporate gig is a new market strategy for Train; maybe I’ll actually make it to tonight’s concert after missing the previous two.

IBM Case Manager Product Update

The nice thing about IBM Case Manager (shortened to ICM in some of their material, and ACM in others) being so new is that you can show up late to the technical product briefing and not miss anything, since the product managers spend the first 10-15 minutes re-explaining what case management and ICM are to the crowd of legacy FileNet customers. (Yes, it’s been a long day.)

This session with Dave Yockelson and Jake Lavirne discussed some of the customers that they have gained since last year’s initial product release, including banking, insurance, government and energy industry examples. They listed the integrated/bundled products that make up ICM (CM, BPM, ILOG, etc.) plus those things created specifically for ICM (case object model, task object model, case analytics) and the ease with which it is used as a framework for solution construction.

The upcoming release, v5.1, will be available within the next month or so, and includes a number of new features based on feedback from the early customers:

  • Enhanced case design, including improved data integration, enhanced widget customization, solution templates, and separate solution project areas. Specifically, the data integration framework allows data from a third-party system of record to be used directly in the ICM UI or as case metadata (see the sketch after this list).
  • Direct IBM CM8 integration, with the CM8 documents staying in CM8 without requiring repository federation. This means that CM8 content can initiate cases and launch tasks, as well as being used natively in tasks, completely transparently to the case worker.
  • Improved case worker user experience, including integration of IBM Forms (in addition to the existing support for FileNet eForms) in the ICM UI for adding cases, adding tasks, or viewing task details. This provides a relatively easy way to replace the standard UI with a richer forms-based interface for the case worker. There will also be a simplified UI layout, resizing and custom theming, and the ability to email and share direct links to a case. A case can also be split into multiple cases.
  • Improved support for IBM BPM, including tighter design-time integration, universal inbox, and support for Business Space.
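The data integration framework itself wasn’t shown at the code level, so purely as an illustration of the general pattern (pulling a record from an external system of record and mapping it onto case metadata), here’s a minimal, hypothetical Java sketch. The JDBC connection details, table names and case property names are all invented, and this is not the actual ICM API.

```java
import java.sql.*;
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration only: fetch a customer record from an external
// system of record and map it onto case property values. The actual ICM data
// integration framework was not shown in code at this session.
public class CaseDataLookup {

    public static Map<String, Object> lookupCustomer(String customerId) throws SQLException {
        Map<String, Object> caseProperties = new HashMap<>();
        String url = "jdbc:db2://crm.example.com:50000/CRMDB";  // made-up connection details
        try (Connection conn = DriverManager.getConnection(url, "caseuser", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT NAME, POLICY_NO, REGION FROM CUSTOMERS WHERE CUSTOMER_ID = ?")) {
            stmt.setString(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                if (rs.next()) {
                    // Map external columns onto (hypothetical) case metadata fields
                    caseProperties.put("CustomerName", rs.getString("NAME"));
                    caseProperties.put("PolicyNumber", rs.getString("POLICY_NO"));
                    caseProperties.put("Region", rs.getString("REGION"));
                }
            }
        }
        return caseProperties;
    }
}
```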

The session wrapped up with a review of some of the vertical applications built on ICM by partners or GBS. There are a number of IBM partners working on ICM applications; I’m sure that a lot of partners weren’t thrilled to find out that IBM had essentially made much of their custom work obsolete, but this does provide an opportunity for partners to build vertical solutions much more quickly based on the ICM framework.

What’s New in IBM ECM Products

Feri Clayton gave an update on the ECM product portfolio and roadmap, in a bit more depth than yesterday’s Bisconti/Murphy ECM product strategy session. She reinforced the message that the products are made up of suites of capabilities and components, so that you’re not using different software silos. I’m not sure I completely buy into IBM’s implementation of this message as long as there are still quite different design environments for many of these tools, although they are making strides in consolidating the end user experience.

She showed the roadmap for what has been released in 2011, plus the remainder of this year and 2012: on the BPM side, there will be a 5.1 release of both BPM and Case Manager in Q4, which I’ll be hearing more about in separate BPM and Case Manager product sessions this afternoon. The new Nexus UI will preview in Q4, and be released in Q2 of 2012. There’s another Case Manager release projected for Q4 2012.

There was a question about why BPM didn’t appear in the ECM portfolio diagram, and Clayton stated that “BPM is now considered part of Case Manager”. Unlike the BPM vendors who think of ACM as a part of BPM, I think that she’s right: BPM (that is, structured process management that you would do with IBM FileNet BPM) is a functionality within ACM, not the other way around.

She went through the individual products in the portfolio, and some of the updates:

  • Production Imaging and Capture now includes remote capture, which is nice for organizations that don’t want to centralize their scanning/capture. It’s not clear how much of this is the Datacap platform versus the heritage FileNet Capture, but I imagine that the Datacap technology is going to be driving the capture direction from here on. They’ve integrated the IBM Classification Module for auto recognition and classification of documents.
  • Content Manager OnDemand (CMOD) for report storage and presentment will see a number of enhancements including CMIS integration.
  • Social Content Management uses an integration of IBM Connections with ECM to allow an ECM library to access and manage content from within Connections, display ECM content within a Connections Community, and enable a few other cross-product integrations. There are a couple of product announcements about this, but they seem to be in the area of integration between Connections and ECM as opposed to adding any native social content management to ECM.
  • FileNet P8, the core content management product, had a recent release (August) with such enhancements as bidirectional replication between P8 and Image Services, content encryption, and a new IBM-created search engine (replacing Verity).
  • IBM Content Manager (a.k.a., the product that used to compete with P8) has a laundry list of enhancements, although it still lags far behind P8 in most areas.

We had another short demo of Nexus, pretty much the same as I saw yesterday: the three-pane UI dominated by an activity stream with content-related events, plus panes for favorites and repositories. They highlighted the customizability of Nexus, including lookups and rules applied to metadata field entry during document import, plus some nice enhancements to the content viewer. The new UI also includes a work inbasket for case management tasks; not sure if this also includes other types of tasks such as BPM or even legacy Content Manager content lifecycle tasks (if those are still supported).

Nexus will replace all of the current end-user clients for both content and image servers, providing a rich and flexible user experience that is highly customizable and extensible. They will also be adding more social features to this; it will be interesting to see how this develops as they expand from a simple activity stream to more social capabilities.

Clayton then moved on to talk about ACM and the Case Manager product, which is now coming up to its second release (called v5.1, naturally). Given that much of the audience probably hasn’t seen it before, she went through some of the use cases for Case Manager across a variety of industries. Even more than the base content management, Case Manager is a combination of a broad portfolio of IBM products within a common framework. She listed some of the new features, but I expect to see these in more detail in this afternoon’s dedicated Case Manager session so will wait to cover them then.

She discussed FileNet P8 BPM version 5.x: now Java-based for significant performance and capacity improvements (also due to a great deal of refactoring to remove old code sludge, as I have heard). As I wrote about last month, it provides Linux and zLinux support, and also allows for multi-tenancy.

With only a few minutes to go, she whipped through information lifecycle governance (records and retention management), including integration of the PSS Atlas product; IBM Content Collector; and search and content analytics. Given the huge focus on analytics in the morning keynote, it’s kind of funny that it gets about 30 seconds at the end of this session.

IBM IOD Day 2 Opening Keynote: Transformation in the Era of Big Data and Analytics

Today’s morning keynote kicked off with Steve Mills talking about big data – “as if data weren’t big before”, he joked – and highlighted that the real challenge is not necessarily the volume of data, but what we need to do in order to make use of that data. A huge application for this is customer service and sentiment analysis: figuring out what your customers are saying to you (and about you), and using that to figure out how to deliver better service. Another significant application area is that of the smarter planet: sensing and responding to events triggered by instrumentation and physical devices. He discussed a number of customer examples, pointing out that no two situations are the same and that a variety of technologies are required, but there are reusable patterns across industries.

Doug Hunt was up next to talk about content analytics – another type of big data – and the impact on transforming business processes. He introduced Randy Sumrall, CIO of Education Service Center Region 10 (State of Texas), to talk about the impact of technology on education and the “no child left behind” policy. New technology can be overwhelming for teachers, who are often required to select what technologies are to be used without sufficient information or skills to do so; there need to be better ways to empower the educator directly rather than just having information available at the administrative level. For example, they’ve developed an “early dropout warning” tool to be used by teachers, analyzing a variety of factors in order to alert the teachers about students who are at risk of dropping out of school. The idea is to create tools for completely customized learning for each student, covering assessment, design and delivery; this is more classical BI than big data. Some interesting solutions, but as some people pointed out on the Twitter stream, there’s a whole political and cultural element to education as well. Just as some doctors will resist diagnostic assistance from analytics, so too will some teachers resist student assessments based on analytics rather than their own judgment.

Next was Frank Kern to talk about organizations’ urgency to transform their businesses, for competitive differentiation but also for basic survival in today’s fast-moving, social, data-driven world. According to a recent MIT Sloan study, 60% of organizations are differentiating based on analytics, and outperform their competitors by 220%. It’s all about speed, risk and customers; much of the success is based on making decisions and taking actions in an automated fashion, based on the right analysis of the right data.

Some of IBM’s future of big data analytics is Watson, and Manoj Saxena presented on how Watson is being applied to healthcare – being demonstrated at IOD – as well as future applications in financial services and other industries. In healthcare, consider that medical information is doubling every five years, and about 20% of diagnoses in the US have some sort of preventable error. Using Watson as a diagnostic tool puts all healthcare information into the mix, not just what your doctor has learned (and remembers). Watson understands human speech, including puns, metaphors and other colloquial speech; it generates hypotheses based on the information that it absorbs; then it understands and learns from how the system is used. A medical diagnosis, then, can include information about symptoms and diseases, patient healthcare and treatment history, family healthcare history, and even patient lifestyle and travel choices to detect those nasty tropical bugs that your North American doctor is unlikely to know about. Watson’s not going to replace your doctor, but will provide decision support during diagnosis and treatment.

Dr. Carolyn McGregor of UOIT was interviewed about big data for capturing health informatics, particularly the flood of information generated by the instrumentation hooked up to premature babies in NICU: some medical devices generating several thousand readings per second. Most of these devices may have a couple of days of memory to store the measurements; after that, the data is lost if not captured into some external system. Being able to analyze patterns over several days’ data can detect problems as they are forming, allowing for early preventative measures to be taken: saving lives and reducing costs by reducing the time that the baby spends in NICU. A pilot is being done at Toronto’s world-class Hospital for Sick Children, providing analysis of 90 million data points each day. This isn’t just for premature babies, but is easily applicable to any ICU instrumentation where the patients require careful monitoring for changing conditions. This can even be extended to any sort of medical monitoring, such as home monitoring of blood glucose levels. Once this level of monitoring is commonplace, the potential for detecting early warning signals for a wide variety of conditions becomes available.
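None of the actual Artemis/Streams logic was shown, but the underlying idea of watching a rolling window of readings for a drift away from the recent baseline is easy to sketch. Here’s a minimal, hypothetical Java example; the window size and threshold are arbitrary and bear no relation to the real clinical models.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of windowed early-warning detection over a stream of
// vital-sign readings; not the actual Artemis/InfoSphere Streams logic.
public class EarlyWarningMonitor {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double deviationThreshold;

    public EarlyWarningMonitor(int windowSize, double deviationThreshold) {
        this.windowSize = windowSize;
        this.deviationThreshold = deviationThreshold;
    }

    /** Feed one reading; returns true if it drifts too far from the rolling baseline. */
    public boolean addReading(double value) {
        boolean alert = false;
        if (window.size() == windowSize) {
            double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(value);
            double variance = window.stream()
                    .mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0.0);
            double stdDev = Math.sqrt(variance);
            // Flag readings more than N standard deviations away from the recent baseline
            alert = stdDev > 0 && Math.abs(value - mean) > deviationThreshold * stdDev;
            window.removeFirst();  // slide the window forward
        }
        window.addLast(value);
        return alert;
    }

    public static void main(String[] args) {
        // Tiny window just for the demo; real device streams would use much more history
        EarlyWarningMonitor monitor = new EarlyWarningMonitor(3, 4.0);
        for (double reading : new double[]{120, 121, 119, 180}) {
            if (monitor.addReading(reading)) {
                System.out.println("Early warning: reading " + reading + " is outside the baseline");
            }
        }
    }
}
```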

Interesting themes for day 2 of IOD. However, as much as they are pushing that this is about big data and analytics, it’s also about the decision management and process management required to take action based on that analysis.

IBM ECM Product Strategy

I finished the first day of IOD in the ECM product strategy session with Ken Bisconti and John Murphy. I was supposed to have a 1:1 interview with Bisconti at this same time, so now I know why that was cancelled – the room is literally standing room only, and the same session (or, at least, a session with the identical name) is scheduled for tomorrow morning so there’s obviously a great deal of interest in what’s coming up in ECM.

They started with a summary of their 2011-2012 priorities:

  • Intelligent, distributed capture based on the DataCap acquisition
  • Customer self-service and web presentment of reports and statements
  • Rich user experiences and mobile device support
  • Whole solutions through better product integration and packaging as well as vertical applications and templates

The key deliverables in this time frame:

  • IBM Production Imaging Edition
  • DataCap Taskmaster expansion
  • CM8, FileNet CM updates
  • Project “Nexus”, due in 2012, which is the next generation of web-based user experience across the IBM software portfolio

They stressed that customers’ investments in their repositories are maintained, so the focus is on new ways to capture, integrate and access that data, such as bidirectional replication (including annotations and metadata) between older Image Services repositories and P8 Content Manager, and content repository federation.

Nexus is intended to address the classic problems with FileNet UI components: either they were easy to maintain or easy to customize, but never both. As someone who spent a lot of time in the 90s customizing UIs with the early versions of those components, I’d have to agree wholeheartedly with that statement. We saw a demo of the under-development version of Nexus, which shows three panes: a filterable activity stream for content and related processes, a favorites list, and a list of repositories. Searching in this environment can be restricted to a subset of the repositories, or across repositories: including non-IBM repositories such as SharePoint. Navigating to a repository provides a fairly standard folder-based view of the repository – cool for demos but often useless for very large repositories – with drag-and-drop capabilities for adding documents to the repository. The property dialog that appears for a new document can access external data sources in order to restrict the input to specific metadata fields.

This also provides access to teamspaces, which are sort of like a restricted version of an object store/library, where a user can create a teamspace (optionally based on a template), specify the folder structure, metadata and predefined searches, then add other users who can collaborate within this space. When a teamspace is opened, it looks pretty much like a regular library, except that it’s a user-created space rather than something that a system admin needs to set up.

Because of the underlying technology, Nexus can be surfaced in a number of different ways, including different types of widgets as well as on mobile devices. This style of user experience is a bit behind the curve of some other vendors, but is at least moving in the right direction. I look forward to seeing how this rolls out next year.

They moved on to discuss social content management, which covers a variety of social use cases:

  • Accessing and sharing content in the context of communities
  • Finding and navigating social content and social networks
  • Managing and governing social business information
  • Delivering social content business solutions

This obviously covers a lot of ground, and they’re really going to have to leverage the skills and lessons learned over in the Lotus group to jumpstart some of the social areas.

Next was Case Manager; I’m looking forward to a more in-depth product briefing on this alone, rather than just five minutes as part of the entire ECM strategy, but their content-centric view of case management seems to be resonating with their customers. That’s not to say that this is the only way to do case management, as we see from a number of other ACM vendors, but rather that IBM customers with big FileNet content repositories can really see additional value in the functionality that Case Manager provides on top of these repositories.

The newly announced Case Manager v5.1 aims to make it simpler to create and deliver case-based solutions, and includes a number of new integration capabilities including BPM (as we saw this morning) and data integration. They are also focusing on vertical industry case-based accelerators, and we saw a demo of a healthcare claims case management application that brings together case management, content and analytics to help a case worker detect fraud. Like most case management scenarios, this is not focused on the actual automated detection of fraud, but on surfacing information to the user that will allow them to make that determination. Doing this in the context of content repositories and content analytics provides a rich view of the situation, allowing the case worker to make better decisions in much less time.

The case worker can create and assign tasks to others, including field workers who use a native iPad app to perform their field review (in the demo, this was a fraud investigator visiting a healthcare practitioner) including capturing new content using the iPad’s camera. Although the version that they demonstrated requires a live connection, they do expect to be delivering apps for disconnected remote devices as well, which is truly critical for supporting remote workers who may wander far beyond the range of their data network.

Moving on to information lifecycle management and governance, some of which is based on last year’s acquisition of PSS Systems, the portfolio includes smart archive (e.g., archiving SAP and other structured data), legal eDiscovery, records management and retention, and disposal and governance management. They’re now providing smart archive as a cloud offering, as well as on premise. The buzz-phrase of this entire area is “defensible disposition”, which sounds a bit like something that happens on The Sopranos, but is really about having an overall information governance plan for how data of all types are maintained, retained and destroyed.

They finished with a bit about IBM Watson for integrating search with predictive analytics, and the industry solutions emerging from this such as IBM Content and Predictive Analytics for Healthcare which is being shown here at IOD this week. We heard a bit about what this combination of technologies can do in the Seton Healthcare presentation earlier this afternoon, and we’ll see a demo of the actual packaged solution in the Wednesday morning keynote.

Streaming Video from IBM IOD

You can watch live streaming video of the IOD keynotes, such as this afternoon’s EDM keynote at 2:15PT/5:15ET, plus interviews from the show floor (and hopefully, soon, a replay of this morning’s keynote with Jeff Jonas) here:

Watch live streaming video from ibmsoftware at livestream.com

 

The good news is that I can now watch the keynotes from the comfort of my hotel room, if I want, where the wifi doesn’t suck.

Better Together: IBM Case Manager, IBM Content Manager and IBM BPM

Dave Yockelson from ECM product marketing and Amy Dickson from IBM BPM product management talked about something that I’m sure is on the minds of all FileNet customers who are doing anything with process: how do the (FileNet-based) Case Manager and Content Manager fit together with the WebSphere BPM products?

They started with a description of the IBM BPM portfolio – nothing new here – and how ACM requires an integrated approach that addresses repeatable patterns. Hmmmm, not completely sure I agree with that. Yockelson went through the three Forrester divisions of case management from their report on the ACM space, then went through a bit more detail on IBM Case Manager (ICM) and how it knits together functionality from the entire IBM software portfolio: content, collaboration, workflow, rules, events, integration, and monitoring and analytics. He positioned it as a rapid application development environment for case-based solutions, which is probably a good description. Dickson then went through IBM BPM (the amalgam of Lombardi and WebSphere Process Server that I covered at Impact), which she promised would finish up the “background” part and allow them to move on to the “better together” part.

So, in the aforementioned better together area:

  • Extend IBM BPM processes with content, using document and list widgets that can be integrated in a BPM application. This does not include content event processes, e.g., spawning a specific process when a document event such as check-in occurs, so is no different than integrating FileNet content into any BPMS.
  • Extend IBM BPM Advanced (i.e., WPS) processes with content through a WebSphere CMIS adapter into the content repository. Ditto for any BPMS (or other system) that supports CMIS being able to integrate with FileNet content; a generic CMIS client sketch follows this list.
  • Invoke an IBM BPM Advanced process from an ICM case task. Assuming that this is via a web service call (since WPS allows processes to be exposed as web services), not specifically an IBM-to-IBM integration.
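The CMIS point is worth making concrete, since CMIS is an open standard rather than IBM-specific plumbing: any client or BPMS that speaks CMIS can query and retrieve FileNet content. Here’s a minimal Apache Chemistry OpenCMIS sketch; the AtomPub URL, credentials and repository ID are placeholders, and this is a generic CMIS client rather than the WebSphere CMIS adapter itself.

```java
import org.apache.chemistry.opencmis.client.api.*;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

import java.util.HashMap;
import java.util.Map;

// Generic CMIS client sketch (Apache Chemistry OpenCMIS); endpoint and
// repository details are placeholders, not an actual FileNet configuration.
public class CmisContentLookup {
    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put(SessionParameter.ATOMPUB_URL, "http://ecm.example.com/cmis/resources/Service"); // placeholder
        params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
        params.put(SessionParameter.USER, "caseworker");
        params.put(SessionParameter.PASSWORD, "secret");
        params.put(SessionParameter.REPOSITORY_ID, "TARGET_OBJECT_STORE"); // placeholder

        Session session = SessionFactoryImpl.newInstance().createSession(params);

        // CMIS query: find claim documents by name, regardless of which vendor's repository holds them
        ItemIterable<QueryResult> results = session.query(
                "SELECT cmis:objectId, cmis:name FROM cmis:document WHERE cmis:name LIKE 'Claim%'",
                false);
        for (QueryResult hit : results) {
            System.out.println(hit.getPropertyValueByQueryName("cmis:name").toString());
        }
    }
}
```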

Coming up, we’ll see some additional integration points:

  • Invoke an IBM BPM Express/Standard process from an ICM case task. This, interestingly, implies that you can’t expose a BPM Express/Standard process as a web service; otherwise, this could have been done without any additional integration (a minimal client-side sketch of the web-service style of invocation follows this list). The selection of the process and mapping of case to process variables is built right into the ICM Builder, which is definitely a nice piece of integration to make it relatively seamless to integrate ICM and BPM.
  • Provide a federated inbox for ICM and BPM (there was already an integrated inbox for the different types of BPM processes) so that you see all of your tasks in a single list, based on the Business Space Human Tasks widget. When you click on a task in the list, the appropriate widgets are spawned to handle that type of work.
  • Interact with ICM cases directly from a BPM process through an integration service that allows cases to be created, retrieved and updated (metadata only, it appears) as part of a BPM process.
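On the web services point above: from the caller’s side, a BPEL process exposed as a web service is just another SOAP endpoint. Here’s a minimal JAX-WS dispatch sketch of that style of invocation; the WSDL location, service and port names, and the request payload are invented for illustration and don’t correspond to any real deployment.

```java
import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import java.io.StringReader;
import java.net.URL;

// Hypothetical JAX-WS client: invoking a process that has been exposed as a
// web service. Service/port names and the payload are invented for illustration.
public class ProcessInvoker {
    public static void main(String[] args) throws Exception {
        QName serviceName = new QName("http://example.com/claims", "ClaimReviewProcessService");
        QName portName = new QName("http://example.com/claims", "ClaimReviewProcessPort");

        Service service = Service.create(
                new URL("http://bpm.example.com/ClaimReviewProcess?wsdl"), serviceName);
        Dispatch<Source> dispatch = service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

        // Build the request payload that the process's WSDL expects (invented here)
        String request =
                "<startClaimReview xmlns=\"http://example.com/claims\">" +
                "  <claimId>CLM-1234</claimId>" +
                "</startClaimReview>";

        Source response = dispatch.invoke(new StreamSource(new StringReader(request)));
        System.out.println("Process invoked, response received: " + (response != null));
    }
}
```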

This definitely fits IBM’s usual modus operandi of integrating rather than combining products with similar functionality; this has a lot of advantages in terms of reducing the time to release something that looks (sort of) like a single product, but has some disadvantages in the underlying software complexity, as I discussed in my IBM BPM review from Impact. A question from the audience asked about consolidation of the design environment; as expected, the answer is “yes, over time”, which is similar to the answer I received at Impact about consolidation of the process engines. I expect that we’ll see a unified design environment at some point for ICM and both flavors of BPM by pulling ICM design into the Process Center, but there might still be three engines under the covers for the foreseeable future. Given the multi-product mix that makes up ICM, there will also be separate engines (and likely design environments) for non-process functions such as rules, events and analytics, too; the separate engines are inevitable in that case, but there could definitely be some better integration on the design side.

IBM IOD Keynote: Turn Insight Into Action

This is a big conference. We’re in the Mandalay Bay Events Center, which is a stadium that would probably hold a hockey rink, and although all the seats are not full, it’s a pretty big turnout. This is IBM’s centennial, which is a theme throughout the conference, and the opening session started with some key points in the history of IBM’s products. IBM might seem like a massive, slow-moving ship at times, but there is no doubt that they’ve been an innovator through the entire age of modern computing. I just hope to be seeing some of that innovation in their ECM and ACM products this week.

The keynote session was hosted by Katty Kay, a BBC news journalist in the Washington bureau, who added a lot of interesting business and social context to the presentations.

Jeff Jonas spoke about analytics, pointing out that with the massive amounts of data available to enterprises, enterprises are actually getting dumber because they’re not analyzing and correlating that data in context. He used a jigsaw puzzle metaphor: you don’t know what any particular piece means until you see it in relation to the others with which it fits. You also don’t need all of the pieces in the puzzle to understand the big picture: context accumulates with each new observation, and at some point, confidence improves while computational effort decreases.

He looked at two sides of analytics – sense and respond, and explore and reflect – and how they fit into the activity of achieving insight. If the keynotes are available online, definitely watch Jonas’ presentation: he’s funny and insightful in equal measure, and has a great example of a test he ran with jigsaw puzzles and human cognition. He went much too fast for me to keep up in these notes, and I’ll be watching it again if I can find it. The only problem was that his presentation ruined me for the rest of the keynotes, which seemed dull in comparison. 🙂

Sarah Diamond was up next to talk about the challenges facing financial institutions, and how analytics can support the transformation of these organizations by helping them to manage risk more effectively. She introduced a speaker from SunTrust, an IBM customer, who spoke about their risk management practices based around shared data warehousing and reporting services. Another SunTrust speaker then talked about how they use analytics in the context of other activities, such as workflow. A good solid case study, but not sure that this was worth such a big chunk of the main keynote.

Mike Rhodin spoke about how innovation across industries is opening new possibilities for business optimization, particularly where analytics create a competitive advantage. Analytics are no longer a nice-to-have, but an imperative for even staying in business: the performance gap between the winners and losers in business is growing, and is fueled in part by the expedient use of analytics to generate insights that allow for business optimization. Interestingly, marketing and finance are the big users of analytics; only 25% of HR leaders are using analytics to help them with hiring an effective workforce.

Robert LeBlanc discussed how the current state of information from everywhere, radical flexibility and extreme scalability impacts organizations’ information strategy, and challenged the audience to consider if their information strategy is bold enough to live in this new environment. Given that 30% of organizations surveyed reported that they don’t even know what to do with analytics, it’s probably safe to say that there are some decidedly meek information strategies out there. Information – both data and unstructured content – can come from anywhere, both inside and outside your organization, meaning that the single-repository dream is really just a fantasy: repositories need to be federated and integrated so that analytics can be applied on all of the sources where they live, allowing you to exploit information from everywhere. He pointed out the importance of leveraging your unstructured information as part of this.

The keynote finished with Arvind Krishna – who will be giving another full keynote later today – encouraging the audience to take the lead on turning insight into action. He summarized this week’s product announcements: DB2 Analytics Accelerator, leveraging Netezza; IMS 12; IBM Content and Predictive Analytics for Healthcare; IBM Case Manager v5.1, bringing together BPM and case management; InfoSphere MDM 10; InfoSphere Information Server 8.7; InfoSphere Optim Test Data Management Self Service Center; Cognos native iPad support; Cognos BI v10.1.1. He also announced that they closed the Algorithmics acquisition last week, and that they will be acquiring Q1 Labs for security intelligence and risk management. He spoke about their new products, InfoSphere BigInsights and InfoSphere Streams, which we’ll be hearing about more in tomorrow’s keynote.

IBM BPM: Merging the Paths

“Is there any point to which you would wish to draw my attention?” “To the curious incident of the dog in the night-time.” “The dog did nothing in the night-time.” “That was the curious incident,” remarked Sherlock Holmes.

Silver Blaze, Sir Arthur Conan Doyle

And so the fact of me (and others) not yet blogging about the IBM BPM release has itself become a point of discussion. 😉

To recount the history, I was briefed on the new IBM BPM strategy and product offerings a few weeks before the Impact conference, with a strict embargo until the first day of the conference when the announcements would be made. Then, the week before Impact, IBM updated their online product pages and the sharp-eyed Scott Francis noticed this and jumped to the obvious – and correct – conclusion: IBM was about to integrate their WebSphere BPM offerings. That prerelease of information certainly defused the urgency of writing about the release at the moment of announcement, and gave many of us the chance to sit back and think about it a bit more. I only had a brief day and a half at Impact before making my way back east for another conference where I was giving a workshop, and here I am a week later finally finishing up my thoughts on IBM BPM.

Others who were there have already written about it: Clay Richardson and his now-infamous “fresh coat of paint” post, which I’m sure did not make him any friends in some IBM circles, Neil Ward-Dutton with his counterpoint to Clay’s opinion, some quick notes from Scott Francis in the context of his keynote blogging (which also links to the video of Phil Gilbert making the announcement), and Tony Baer as part of his post on a week of BPM announcements.

It’s important to look at how the IBM organization has realigned to allow for the new product release: Phil Gilbert, former president and CTO of Lombardi, now has overall responsibility for all of WebSphere BPM – including both the former Lombardi and WebSphere BPM products – plus ILOG rules management. Neil Ward-Dutton referred to this as the reverse takeover of IBM by Lombardi; when I had a chance for a 1:1 with Phil at Impact, I told him that we’d all bet that he would be gone from IBM after a year. He admitted that he originally thought so too, until they gave him the opportunity to do exactly what he knew needed to be done: bring together all of the IBM BPM offerings into a unified offering. This new product announcement is the beginning of that unification, but they still have a ways to go.

Let’s take a look at the product offering, then. They’ve taken pretty much everything in the WebSphere BPM portfolio (Lombardi Edition, Dynamic Process Edition, Process Server, Integration Developer, Business Modeler, Business Compass, Business Fabric) and mostly rolled it into IBM BPM or replaced its functionality with something similar; there are a few exceptions, such as Business Compass, that have just disappeared. This reduces the entire IBM BPM portfolio to the following:

  • IBM Business Process Manager (which I’m covering here)
  • IBM Case Manager (the rebranding of some specialized functionality built on the IBM FileNet BPM platform, which is separate from the above IBM BPM offering)
  • IBM Blueworks Live
  • IBM Business Monitor
  • IBM BPM Industry Packs

Combining most of the WebSphere BPM components into IBM BPM V7.5, the new product offering has both a BPMN Process Designer and a BPEL Integration Designer, a common repository, and a process server that includes both the BPMN and BPEL engines. Now you can see where Clay Richardson is coming from with the “fresh coat of paint” characterization: the issue of one versus two process “servers” seemed to occupy an inordinate amount of time in discussions with IBM representatives, who stoically recited the party line that it’s one server. For those of us who actually used to write code like this for a living, it’s clear that it’s two engines: one BPMN and one BPEL. However, from the customer/user standpoint, it’s wrapped into a single Process Server, so if IBM ever gets around to refactoring into a single engine, that could be made fairly transparent to their customers, and would likely have the benefit of reducing IBM’s internal engineering costs around maintaining one versus two engines. Personally, I believe that there is enough commonality between process design and service orchestration that both the designers and the engines could be combined into something that offers the full spectrum of functionality while reducing the underlying product complexity.

In addition to the core process functionality, the ILOG rules engine is also present, plus monitoring tools and user interface options with both the process portal and the Business Space composite application environment.

I don’t want to understate their achievements in this product offering: the (Lombardi-flavored) Process Center with its shared repository and process governance is significant, allowing users to reuse artifacts from the two different sides of the BPM house: you can add a BPEL process orchestration created in Integration Designer to your BPMN process created in Process Designer, or you can include a business object created in Process Designer as a data definition in your BPEL service orchestration in Integration Designer, or call a BPMN process for human task handling. The fact remains, however, that this is still a slightly uneasy combination of the two major BPM platforms, and it will likely take another version or two to work out the bumps.

Since this is IBM, they can’t just have one product configuration, but offer three:

  • The Express edition, offered at a price point that is probably less than your last car, is for starter BPM projects: full functionality of the Process Designer to build and run BPMN processes, but only one server with no clustering, so unlikely to be used for any mission-critical applications. If you’re just getting started and are doing human-centric BPM, then this is for you.
  • The Standard edition, which is pretty much the same human BPM and lightweight integration functionality as the former Lombardi Edition BPMS. Existing Lombardi Edition customers will be able to upgrade to this version seamlessly.
  • The Advanced edition, which adds the Integration Designer and its ability to create a SOA layer of BPEL service/process orchestrations that can then be called from the BPMN processes or run independently.

In the product architecture diagram above, the Advanced edition is the whole thing, whereas the Standard and Express editions are missing the Integration Designer; to complicate that further, current WebSphere Process Server/Integration Designer customers will be transitioned to the Advanced edition but with the Process Designer disabled, a fourth shadow configuration that will not be available for new customers but is offered only as an upgrade. Both engines are still there in all editions, but it appears that without both designers, you can’t actually design anything that will run in one of the engines. For current customers, IBM has published information on migrating your existing configuration to the new BPM; there is a license migration path for all customers who currently have BPM products, but for some coming from the traditional WebSphere products, the actual migration of their applications may be a bit rocky.

The web-based Process Center is used for managing, deploying and interacting with processes of both types, although the Process Designer and Integration Designer are still applications that must be downloaded and installed locally. Within the Process Designer, there’s the familiar Lombardi “iTunes-style” view of the assets and dependencies. It’s important to point out that the Toolkits are assets that could have originated in either the Process Designer or the Integration Designer; in other words, they could be human workflows running on the BPMN engine or service orchestrations running on the BPEL engine, and can just be dragged and dropped onto BPMN processes as activities. The development environment includes versioning, shared concurrent editing so that you can see which assets other developers are editing that might impact your project, playback of previous process versions, and all versions of processes viewable for deployment in Process Center. The Process Center view is identical from either design tool, providing an initial common view between these two environments. Linking these two environments through sharing of assets in the Process Center also eases deployment: everything that a process application depends upon, regardless of its origin, can be deployed as a single package.

Not everything comes from the former Lombardi Edition, however: the user interface builder in IBM BPM is based on Business Space, IBM’s composite application development tool, instead of the old Lombardi forms and UI technology; this allows for easy reuse of widgets in portals, and there’s also a REST interface to roll your own UI. Also, the proprietary rules engine in Lombardi is being replaced with ILOG, with the rules editor built right into the design environments; the ILOG engine is included in the Process Server, but can only be called from processes, not by external applications, so as to not cannibalize the standalone ILOG BRMS business. I’m sure that they will be supporting the old UI and rules for a while, but if you’re using those, you’re going to be encouraged to start migrating at some point.
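I haven’t dug into that REST interface yet, so treat this as a hypothetical shape only: retrieving a task list over HTTP and rendering it in your own UI would look something like the sketch below. The endpoint path, credentials and response handling are placeholders, not the documented IBM BPM API.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

// Hypothetical sketch of calling a BPM REST interface to build a custom task
// list UI; the endpoint path and response handling are placeholders, not the
// documented IBM BPM API.
public class CustomTaskListClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://bpm.example.com/rest/tasks?state=open"); // placeholder path
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        // Basic authentication with placeholder credentials
        String credentials = Base64.getEncoder().encodeToString("bpmuser:secret".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + credentials);

        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        }
        // A real UI would parse the JSON and render tasks in whatever widget
        // framework you're using; here we just dump the raw response.
        System.out.println("Status: " + conn.getResponseCode());
        System.out.println(body);
    }
}
```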

There is currently no (announced) plan for IBM BPM process execution in the cloud (except for the simple user-created workflows in Blueworks Live), which I think will impact IBM BPM at some point: I understand that many of the large IBM customers are unlikely to go off premise for a production system, but more and more organizations that I work with are considering cloud-based solutions that they can provision and decommission near-instantaneously as a platform for development and testing, at the very least. They need to rethink their strategy on this, and stop offering expensive custom hosted or private “cloud” platforms as their only cloud alternatives.

Finally, there is the red-headed stepchild in the IBM BPM portfolio: IBM FileNet BPM, which has mostly been made over as the IBM Case Manager product. Interestingly, some of the people from the FileNet product side were present at Impact (usually they would only attend the IOD conference, which covers the Information Management software portfolio in which FileNet BPM is entombed), and there was talk about how Case Manager and the rest of the BPM suite could work together. In my opinion, bringing FileNet BPM into the overall IBM BPM fold makes a lot of sense; as I blogged back in 2006 at the time of the acquisition, and in 2008 when comparing it to the Oracle acquisition, they should have done that from the start, but there seemed (at the time) to be some fundamental misunderstandings about the product capabilities, and they chose to refocus it on content-centric BPM rather than combining it with WebSphere Process Server. Of course, if they had done the latter, we likely would be seeing a very different IBM BPM product mix today.