IOD Keynote: Computational Mathematics and Freakonomics

I attended the keynote this morning, on the theme of looking forward: first we heard from Mike Rhodin, an exec in the IBM Software group, then Brenda Dietrich, a mathematician (and VP – finally, a female IBM exec on stage) from the analytics group in IBM Research. IBM Research has nine labs around the world, including a new one just launched in Brazil, and a number of collaborative research facilities, or “collaboratories”, where they work with universities, government agencies and private industries on research that can be leveraged into the market more quickly. I’ve met a few of the BPM researchers from the Zurich lab at the annual academic BPM conference, but the range of the research across the IBM labs is pretty vast: from nanotechnology, to the cloud, to all of the event generation that leads to the “smarter planet” that IBM has been promoting. She’s here from the analytics group because analytics is at the top of this pyramid of research areas, especially in the context of the smarter planet: all of our devices are generating a flood of events and data, and some pretty smart analytics have to be in place to be able to make sense of all this.

The future of analytics is moving from today’s static model of collect-analyze-present results, to more predictive analytics that can create models of the future based on what’s happened in the past, and use that flood of data (such as Twitter) as input to these analytical models.

I have a lot of respect for IBM for trying out their own ideas and systems on themselves as one big guinea pig, and this analytics research is no exception. They’re using data from all sorts of internal systems, from manufacturing plants to software development processes to human resources, to feed into this research, and benefit from the results. When this starts to hit the outside market, it will have an impact on a much wider variety of industries, such as telco and oil field development. Not surprisingly, this ties in with master data management, since you need to deal with common data models if you’re going to perform complex analytics and queries across all of this data, and their research on using the data stream to actually generate the queries is pretty cool.

She showed a short video clip on Watson, an AI “question answering system” that they’ve built, and showed it playing Jeopardy, interpreting the natural language questions – including colloquialisms – and responding to them quickly, beating out some top human Jeopardy players. She closed with a great quote that is inspirational in so many ways, especially to girls in mathematics: “It’s a great time to be a computational mathematician”.

The high-profile speakers of the keynote were up next: Steven Levitt and Stephen Dubner, authors of Freakonomics and Superfreakonomics, with some interesting anecdotes about how they started working together (Levitt’s the genius economist, and Dubner’s the writer who collaborated with him on the books). They talked about turning data into ideas, tying in with the analytics theme; they had lots of interesting and humorous stories on an economic theme, such as teaching monkeys about money as a token to be exchanged for goods and (ahem) services, and what that teaches us about risk and loss aversion in people.

I have a noon flight home to Toronto, so this ends my time at IOD 2010. This is my first IOD: I used to attend FileNet’s UserNet conference before the acquisition, but hadn’t been to IOD or Impact until this year. With over 10,000 people registered, this is a massive conference that covers a pretty wide range of information management technologies, including the FileNet ECM, BPM and now Case Manager software that is my main focus here. I’ve had a look at the new IBM Case Manager, as you’ve read in my posts from yesterday, and gave it a bit of a mixed review, although it’s still not even released. I’m hoping for an in-depth demo sometime in the coming weeks, and will be watching to see how IBM launches itself into the case management space.

Customizing the IBM Case Manager UI

Dave Perman and Lauren Mayes had the unenviable position of presenting at the end of the day, and at the same time as the expo reception was starting (a.k.a. “open bar”), but I wanted to round out my view of the new Case Manager product by looking at how the user interfaces are built. This is all about the Mashup Center and the Case Manager widgets; I’ve played around with the ECM widgets in the past, which provide an easy way to build a composite application that includes FileNet ECM capabilities.

Perman walked through the Case Manager Builder briefly to show how everything hangs together – or at least, the parts that are integrated into the Builder environment, which are the content and process parts, but not rules or analytics – then described the mashup environment. The composite application development (mashup) environment is pretty standard functionality in BPM and ACM these days, but Case Manager comes with a pre-configured set of pages that make it easy to build case application UIs. A business analyst can easily customize the standard Case Manager pages, selecting which widgets are included and their placement on the page, including external (non-Case Manager) widgets.

The designer can also override the standard case view pages either for all users or for specific roles; this requires creating the page in the mashup environment and registering it for use in Case Manager, then using the Case Manager Builder to assign that page to the specific actions associated with a case. In other words, the UI design is not integrated into the Case Builder environment, although the end result is linked within that environment.

Mayes then went through the process of building and integrating 3rd party widgets; there’s a lot of material on the IBM website now on how to build widgets, and this was just a high-level view of that process and the architecture of integrating between the Mashup Center and the ACM widgets, themes and ECM services on the application server. This uses lightweight REST services that return JSON, which makes them easier to deal with in the browser: CMIS REST services for content access, PE REST services for process access, and some custom case-specific REST services. Since there are widgets for Sametime presence and chat functionality, they link through to a Sametime proxy server on the application server. For you FileNet developer geeks, know that you also need an instance of Workplace XT running on the application server. I’m not going to repeat all the gory details, but basically once you have your custom widget built, you can deploy it so that it appears on the Mashup Center palette, and can be used like any other pre-existing widget. There’s also a command widget that retrieves all the case information so that it’s not loaded multiple times by all of the other widgets; it’s also a controller for moving between list and detail pages.
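To make the JSON-over-REST point a bit more concrete, here’s a minimal sketch of the kind of call a custom widget (or any other client) might make against the case REST services. The base URL, resource path and response fields are hypothetical placeholders; the actual paths depend on how the services are deployed on your application server.

```python
import requests

# Hypothetical base URL and resource path, for illustration only; the real paths
# are defined by the REST services deployed on the application server.
BASE_URL = "https://appserver.example.com/CaseManager/rest/v1"

def get_case(case_id: str, user: str, password: str) -> dict:
    """Fetch case metadata as JSON, roughly what a case widget does under the covers."""
    response = requests.get(
        f"{BASE_URL}/cases/{case_id}",
        auth=(user, password),
        headers={"Accept": "application/json"},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    case = get_case("CASE-000123", "caseworker", "secret")
    # The JSON payload is just as easy to consume in browser-side widget code.
    print(case.get("properties", {}))
```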

This is a bit more information than I was counting on absorbing this late in the day, and I ducked out early when the IBM partner started presenting about what they’ve done with custom widgets.

That’s it for today; tomorrow will be a short day since I fly home mid-day, but I’ll likely be at one or two sessions in the morning.

IBM FileNet BPM Product Update

With all the news this week about Case Manager, my old friend BPM seems to have been left on the sidelines, although it’s partially hidden within the new Case Manager offering. However, we did have one session by Mike Fannon, BPM product manager, giving us the update on what’s happening with BPM.

The first thing is new OS platform support for Linux and zLinux; although this is important for many customers who have standardized on Linux – or want to integrate BPM with CM8 on their mainframes – you can imagine this is not the most exciting announcement to me. Yes, I have customers who will love this. Now move on. 🙂

Next is the port of the Process Engine to a standalone Java app (not J2EE), from its original C++ beginnings. Although this seems on the surface to be not a lot more exciting than the Linux support, this is pretty significant, and not just for the performance boost that they’re seeing. This means less complex APIs and database interfaces, better standardization, and it also brings PE in line architecturally with the Content Engine, even allowing PE and CE to share the same database. In the future, they’re considering moving it to a J2EE container, which provides a lot more flexibility for things like server farming.

They’re also supporting multi-tenancy, allowing multiple PEs to run on the same virtual server with a separate application environment, user space, backup and restore for each tenant. These PE Stores (analogous to CE Object Stores) seem to be replacing the old isolated regions paradigm, and there are procedures for moving isolated regions to separate PE Stores. If you’re an old BPM hack, then all your old VW-prefixed admin commands will be replaced as the vestiges of Visual WorkFlo are finally purged. As the owner of a small systems integration firm, I designed and wrote one of the first VW apps back in 1994, so this does bring a small tear to my eye, although this has obviously been a long time coming.

From an upgrade standpoint, there are supported upgrade paths (some staged, some direct) from eProcess 5.2 (can’t believe that’s still out there) as well as BPM 3.53 and later, including migration tools for in-flight process instances. There are changes to the data model of the underlying database tables, so if you’ve built any applications such as advanced analytics that directly hit the operational database, you’re going to have some refactoring to do.

Process Analyzer has been extended to add capabilities for Case Manager, such as aggregation based on case properties. In addition to using Excel pivot tables, which is how PA reporting has always been done in the past, you can now use Cognos BI instead. Of course, since PA is based on a set of cubes in a MS SQL Server/MS Analysis Services engine that is trickle-fed from the PE database, this has always been possible, but I assume that it’s just better integrated now. Unsurprisingly, they do have a direction to eliminate Microsoft technology dependencies, so at some point in the future, I expect that you’ll see the PA data store ported off the SQL Server/Analysis Services platform. The Process Monitor dashboard has also been updated to handle Case Manager data, and is better integrated with Cognos.

There were a number of enhancements to the ECM Widgets in March and June, such as support for Business Space instead of the Mashup Center, and some new widgets for process history and get next work item (finally). It looks like they’re building out the widget functionality to the point where it’s actually usable for real applications; without the get next work item, you couldn’t use it to build any sort of heads-down processing functionality.

There are really few functionality improvements to BPM; most of this is refactoring and platform porting. I think that a lot of the BPM creative juices are going towards Case Manager, and if you look at the direction of the 100% Java PE port and ability to share databases with CE, it’s possible that we’ll see some sort of merging of ECM, BPM and Case Manager into a single engine in the future. IBM, of course, did not say that.

IBM Case Manager Technical Roundtable

Bill Lobig, Mike Marin, Peggy (didn’t catch her last name) and Lauren Mayes hosted a freeform roundtable for any technical questions about the new Case Manager product.

I had a chat with Mike prior to the talk about the genesis of Case Manager, and he reinforced this during the session: although there were a lot of ideas that came from the old BPF product, Mike and his team spent months interviewing the people who had used BPF to find out what worked and what didn’t work, then built something new that incorporated the features most needed by customers. The object model for the case is now part of the basic server classes rather than being a higher-level (and therefore less efficient) custom object; there are new process classes to map properties between case folders and processes, and a number of other significant architectural changes and upgrades to make this happen. I see TIBCO going through this same pain right now with the lack of an upgrade path from iProcess to AMX BPM. To the guy in the audience who said that it’s not fair that IBM gives you a crappy product, you use it and provide feedback on how to improve it, and then they charge you for the new product: well, that’s just how software works sometimes, and vendors will never have true innovation if they always have to support their (and your) entire legacy. There does need to be some sort of migration path at least for the completed case folder objects from BPF to Case Manager native case objects, although that hasn’t been announced, since these are long-term corporate assets that have to be managed the same as any other content; however, I would not expect any migration of the BPF apps themselves.

More process functionality is being built right into the content engine; this is significant in that you’ve always required both ECM and BPM to do any process management, but it sounds like some of that functionality is being drawn into the content engine. Does this mean that the content and process engines will eventually be merged into a single platform and a single product? That would drive further down the road of repositioning FileNet BPM as content-centric – originally done at the time of the FileNet acquisition, I believe, to avoid competition with WebSphere BPM – since if it’s truly content-centric, then why not just converge the engines, including the ACM capabilities? That would certainly make for a more seamless and consistent development environment, especially around issues like object modeling and security.

One consistent message that’s coming across in all the Case Manager sessions is accelerating the development time by allowing a business analyst to create a large part of a case application without involving IT; this is part of what BPF was trying to provide, and even BPM prior to that. I was FileNet’s evangelist for the launch of the eProcess product, which was the first version of the current generation of BPM, and we put forward the idea back in 2000 that a non-technical (or semi-technical) analyst could do some amount of the model-driven application development.

There are obviously still some rough edges in Case Manager, since version 1.0 isn’t even out yet. In a previous session, we saw some of the kludges for content analytics, dashboarding and business rules, and it sounds like role-based security and e-forms aren’t really fully integrated either. The implications of these latter two are tied up with the ease with which you can migrate a case application from one environment to another, such as from development to test to production: apparently, it’s not completely seamless, although they are able to bundle part of a case application/template and move it between environments in a single operation. Every vendor needs to deal with this issue, and those that have a more tightly integrated set of objects making up an application have a much easier time with this, especially if they also offer a cloud version of their software and need to migrate easily between on-premise and cloud environments, such as TIBCO, Fujitsu and Appian. IBM is definitely playing catchup in the area of moving defined applications between environments, as well as in their overall integration strategy within Case Manager.

IOD ECM Keynote

Ron Ercanbrack, VP at IBM (my old boss from my brief tenure at FileNet in 2000-1, who once introduced me at a FileNet sales kickoff conference as the “Queen of BPM”), gave a brief ECM-focused keynote this morning. He covered quite a bit of the information that I was briefed on last week, including Case Manager, Content Analytics, improved content integration including CMIS, the Datacap and PSS acquisitions, enhancements to Content Collector, and more. He positioned Case Manager as a product “running on top of BPM”, which is a bit different than the ECM-centric message that I’ve heard so far, but likely also accurate: there are definitely significant components of each in there.

He was followed by Carl Kessler, VP of Development, to give a Case Manager demo; this covered the end-user case management environment (pretty much what we’ve seen in previous sessions, only live), plus Content Analytics for text mining which is not really integrated with Case Manager: it’s a separate app with a different look and feel. I missed the launch point, so I don’t know whether he launched this from a property value in Case Manager or had to start from scratch using the terms relevant to that case. It has some very nice text mining capabilities for searching through the content repository for correlation of terms, including some pretty graphs, but it’s a separate app.

We then went off to the Cognos Real-time Monitoring Dashboard, which is yet another non-integrated app with a different look and feel. He showed a dashboard that had a graph for average age of cases and allowed drill-down on different parameters such as industry type and dispute type, but that’s not really the same as a fully integrated product suite. Although all of the component applications are functional, this needs a lot more integration at the end-user level.

I did get a closer look at some of the Case Builder functionality than I’ve seen already: in the tasks definition section, there are required tasks, optional tasks and user-created tasks, although it’s not clear what user-created tasks are since this is design-time, not runtime.

Ercanbrack came back to the stage for a brief panel with three customers – Bank of America, State of North Dakota, and BlueCross BlueShield of Tennessee – talking about their ECM journeys. This was not specific to case management at all, but using records/retention management to reduce storage costs and risks in financial services, using e-discovery as part of a legal action in healthcare, and content management with a case management approach for allowing multiple state government agencies to share documents more effectively.

IBM Announcements: Case Manager, CMIS and More

I had a pre-IOD analyst briefing last week from IBM with updates to their ECM portfolio, given by Ken Bisconti, Dave Caldera and Craig Rhinehart. IOD – Information on Demand – is IBM’s conference covering business analytics and information management, the latter of which includes data management and content management. The former FileNet products fall into their content management portfolio (including FileNet BPM, which was repositioned as document-centric BPM following the acquisition so as to not compete with the WebSphere BPM products), and includes case management capabilities in their Business Process Framework (BPF). I also had a one-to-one session with Bisconti while at IOD to get into a bit more detail.

The big announcement, at least to me, was the new Case Manager product, to ship in Q4 (probably November, although IBM won’t commit to that). IBM has been talking about an advanced case management strategy for several months now, and priming the pump about what “should” be in a case management product, but this is the first time that we’ve seen a real product as part of that strategy; I’m sure that the other ACM vendors with products already released are ROFL over IBM’s statement in the press release that this is the “industry’s first advanced case management product”. With FileNet Content Manager at the core for managing the case file and the associated content, they’ve drawn on a variety of offerings across different software groups and brands to create this product: ILOG rules, Cognos realtime monitoring, Lotus collaboration and social networking, and WebSphere Process Server to facilitate integration to multiple systems. This is one of their “industry solutions” that spans multiple software groups, and I can just imagine the internal political wrangling that went on to make this happen. As excited as they sounded about bringing all these assets together in a new product, they’ll need to demonstrate a seamless integration and common user experience so that this doesn’t end up looking like some weird FrankenECM. Judging from the comments at the previous session that I attended, it sounds like the ILOG integration, at the very least, is a bit shaky in the first release.

They’re providing analytics – both via the updated Content Analytics offering (discussed below) and Cognos – to allow views of individual case progression as well as analysis of persistent case information to detect patterns in case workload. It sounds like they’re using Cognos for analyzing the case metadata, and Content Analytics for analyzing the unstructured information, e.g., documents and emails, associated with the case.

A key capability of any case management system, and this is no exception, is the ability to handle unstructured work, allowing a case worker to use their own experience to determine the next steps to progress the case towards its outcome. Workers can create tasks and activities that use the infrastructure of queues and inboxes; this infrastructure is apparently new as part of this offering, and not based on FileNet BPM. Once a case is complete, it remains in the underlying Content Manager repository, where it is subject to retention policies like any other content. They’ve made the case object and its tasks native content types, so like any other content class in FileNet Content Manager, you can trigger workflows (in BPM) based on the native event types of the content class, such as when the object is created or updated. The old Business Process Framework (BPF), which was the only prior IBM offering in the case management arena, isn’t being discontinued, but customers will definitely be encouraged to create any new case management applications on Case Manager rather than BPF, and eventually to rewrite their BPF applications to take advantage of new features.

As we’re seeing in many other BPM and case management products, they’ve created the ability to deploy reusable templates for vertical solutions in order to reduce the time required to deploy a solution from months down to days. IBM’s focus will initially be on the horizontal platform, and they’re relying on partners and customers to build the industry-specific templates. Partners in the early adoption program are already providing templates for claims, wealth management and other solutions. The templates are designed for use by business analysts, so that a BA can use a pre-defined template to create and deploy a case management solution with minimal IT involvement.

For user experience, they’re providing three distinct interfaces:

  • A workbench for BAs to create case solutions, based on the afore-mentioned templates, using a wizard-based interface. This includes building the end user portal environment with the IBM iWidget component (mashup) environment.
  • A role-based portal for end users, created by the BAs in the workbench, with personalization options for the case worker.
  • Analytics/reporting dashboards reporting on case infrastructure for managers and case workers, leveraging Cognos and Content Analytics.

They did have some other news aside from the Case Manager announcement; another major content-related announcement is support for the CMIS standard, allowing IBM content repositories (FileNet CM, IBM CM8 and CMOD) to integrate more easily with non-IBM systems. This is in a technology preview only at this point, but since IBM co-authored the standard, you can expect full support for it in the future. I had a recent discussion with Pega indicating that they were supporting CMIS in their case management/BPM environment, and we’re seeing the same from other vendors, meaning that you’ll be able to integrate an industrial strength repository like FileNet CM into the BPM or ACM platform of your choice.
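As a concrete illustration of why CMIS support matters, here’s a minimal sketch of a standards-based query using Apache Chemistry’s Python client (cmislib). The service URL and credentials are placeholders, since the actual endpoint will depend on how the technology preview is deployed against FileNet CM, CM8 or CMOD.

```python
from cmislib import CmisClient  # Apache Chemistry's Python CMIS client

# Placeholder service URL and credentials; the real CMIS endpoint depends on
# how the repository's CMIS support is deployed.
client = CmisClient("https://ecmserver.example.com/cmis/resources/Service",
                    "cmisuser", "secret")
repo = client.defaultRepository

# The same CMIS query would work against any compliant repository, IBM or otherwise.
results = repo.query(
    "SELECT cmis:name, cmis:objectId FROM cmis:document WHERE cmis:name LIKE 'Claim%'"
)
for doc in results:
    props = doc.properties
    print(props["cmis:name"], props["cmis:objectId"])
```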

They had a few other announcements and points to discuss on the call:

  • IBM recently acquired Datacap, a document capture (scanning) product company, which refreshes their high-performance document scanning and automated recognition capabilities. This integrates with FileNet CM, but also with the older IBM CM8 Content Manager and (soon) CMOD, plus other non-IBM content repositories. Datacap uses a rules-based capability for better content capture, recognition and classification.
  • There are improvements to Office Document Services; this is one of the areas where CMIS will help as well, allowing IBM to hold its nose and improve their integration with SharePoint and Exchange. There’s a big focus on content governance, such as managing retention lifecycles, including content federation across multiple heterogeneous repositories.
  • There are updates to the information lifecycle governance (ILG) portfolio, including Content Collector and eDiscovery. Content Collector has better content collection, analysis and management capabilities for office documents, email and SAP data. eDiscovery now provides better support for legal discovery cases, with enhanced security roles for granular content access, redaction APIs and better keyword identification. This ties back into governance, content lifecycle management and retention management: disposal of information at the appropriate times is key to reducing legal discovery costs, since you’re not having to retrieve, distribute and review a lot of content that is no longer legally required.
  • IBM’s recent acquisition of PSS Systems complements the existing records management and eDiscovery capabilities with retention-related analytics and policy solutions.
  • The relatively new IBM Content Analytics (ICA) product has been updated, providing analytics on content retention management (i.e., find what you need to decommission) as well as more general “BI for content” for advanced analytics on what’s in your content repositories and related contextual data from other sources. This integrates out of the box with Cognos (which begs the question, why isn’t this actually just Cognos) as well as the new Case Manager product to provide analytics for the manager dashboard views. The interesting thing is that “content” in this situation is more than just IBM content repositories, it’s also competitive content repositories and even things like Twitter feeds via IBM’s new BigInsights offering. They have a number of ICA technology demos here at IOD, including the BigInsights/Twitter analysis, and ICA running on Hadoop infrastructure for scalability.
  • The only announcement for FileNet BPM seemed to be expanding to some new Linux platforms, and I’ve heard that they’re refactoring the process engine to improve performance and maintenance but no whiff of new functionality aside from the Case Manager announcement. I plan to attend the BPM technical briefing this afternoon, and should have some more updates after that.

I still find the IBM ECM portfolio – much like their BPM and other portfolios – to contain too many products: clearly, some of these should be consolidated, although IBM’s strategy seems to be to never sunset a product if they have a couple of others that do almost the same thing and there’s a chance that they can sell you all of them.

Advanced Case Management Empowering The Business Analyst

We’re still a couple of hours away from the official announcement about the release of IBM Case Manager, and I’m at a session on how business analysts will work with Case Manager to build solutions based on templates.

Like the other ACM sessions, this one starts with an overview of IBM’s case management vision as well as the components that make up the Case Manager product: ECM underlying it all, with Lotus Sametime for real-time presence and chat, ILOG JRules for business rules, Cognos Real Time Monitor for dashboards, IBM Content Analytics for unstructured content analysis, IBM (Lotus) Mashup Center for user interface and some new case management task and workflow functionality that uses P8 BPM under the covers. Outside the core of Case Manager, WebSphere Process Server can be invoked for integration/SOA applications, although it appears that this is done by calling it from P8 BPM, which was existing functionality. On top of this, there are pre-built solutions and solution templates, as well as a vast array of services from IBM GBS and partners.

IBM Case Management Vision

The focus in this session is on the tools for the business analyst in the design-time environment, either based on a template or from scratch, including the user interface creation in the Mashup Center environment, analytics for both real-time and historical views of cases, and business rules. This allows a business analyst to capture requirements from the business users and create a working prototype that will form the shell of the final case application, if not the full executing application. The Case Builder environment that a business analyst works in to design case solutions also allows for testing and deploying the solution, although in most cases you won’t have your BAs deploying directly to a production environment.

Defining a case solution starts with the top-level case solution creation, including name, description and properties, then completing the following:

  • Define case types
  • Specify roles
    • Define role inbasket
  • Define personal inbasket
  • Define document types
  • Associate runtime UI pages

We didn’t see the ILOG JRules integration, and for good reason: in the Q&A, they admitted that this first version of Case Manager didn’t quite have that up to scratch, so I imagine that you have to work in both design environments, then call JRules from a BPM step or something of that nature.

The more that I see of Case Manager, the more I see the case management functionality that was starting to migrate into the FileNet ECM/BPM product from the Business Process Framework (BPF); I predicted that BPF would become part of the core product when I reviewed P8 BPM v4.5 a year and a half ago, and while this is being released as a separate product rather than part of the core ECM product, BPF is definitely being pushed to the side and IBM won’t be encouraging the creation of any new applications based on BPF. There’s no direct migration path from BPF to ACM; BPF technology is a bit old, and the time has come for it to be abandoned in favor of a more modern architecture, even if some of the functionality is replicated in the new system.

The step editor used to define the tasks associated with cases provides swimlanes for roles or workgroups (for underlying queue assignment, I assume), then allows the designer to add steps into the lanes and assign properties to the steps. The step properties are a simplified version of a step definition in P8 BPM, so I assume that this is actually a shared model (as opposed to export/import) that can be opened directly by the more technical BPM Process Designer. In P8 BPM 4.5, they introduced a “diagram mode” for business analysts in the Process Designer; this appears to be an even simpler process diagramming environment. It’s not BPMN compliant, which I think is a huge mistake; since it’s a workflow-style model in which lanes, activities and split/merge are supported, this would have been a great opportunity to use the standard BPMN shapes to start getting BAs used to them.

I still have my notes from last week’s analyst briefing and from yesterday’s meeting with Ken Bisconti, which I will publish; these are more aligned with the “official” announcement that will be coming out today in conjunction with the press release.

IBM’s New Case Manager Product Overview

The day before the official announcement of IBM’s Case Manager product, Jake Levirne, Senior Product Manager, walked us through the capabilities. He started by defining case management, and discussing how it is about providing context to enable better outcomes rather than prescribing the exact method for achieving that outcome. For those of you who have been following ACM for a while, this wasn’t anything new, although I’m imagining that it is for some of the audience here at IOD.

Case Manager is an extension of the core (FileNet) ECM product through the integration of functionality from several other software products across multiple IBM software groups, specifically analytics, rules and collaboration. There is a new design tool targeted at business analysts, and a user interface environment that is the next generation of the old ECM widgets. There’s a new case object model in the repository, allowing the case construct to exist purely in the content repository, and be managed using the full range of content management capabilities including records management. Case tasks can be triggered by a number of different event types: user actions, new content, or updates to the case metadata. By having tasks as objects within the case, each task can then correspond to a structured subprocess in FileNet BPM, or just be part of a checklist of actions to be completed by the case worker (further discussion left it unclear whether even the simple checklist tasks were implemented as a single-step BPM workflow). A task can also call a WebSphere Process Server task; in fact, from what I recall of how the Content Manager objects work, you can call pretty much anything if you want to write a Java wrapper around it, or possibly this is done by triggering a BPM process that in turn calls a web service. The case context – a collection of all related metadata, tasks, content, comments, participants and other information associated with the case – is available to any case worker, giving them a complete view of the history and the current state of the case. Some collaboration features are built in to the runtime, including presence and synchronous chat, as well as simple asynchronous commenting; these collaborations are captured as part of the case context.
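To picture the case object model described above, here’s a rough sketch of the kind of structure involved. This is my own illustrative model in Python (the class and field names are invented), not IBM’s actual content classes, but it captures the idea of tasks as first-class objects within a case and the case context aggregating everything a case worker sees.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class TaskTrigger(Enum):
    # the event types mentioned above that can trigger a case task
    USER_ACTION = "user action"
    NEW_CONTENT = "new content added to the case"
    METADATA_UPDATE = "case metadata updated"

@dataclass
class CaseTask:
    name: str
    trigger: TaskTrigger
    bpm_process: Optional[str] = None  # structured FileNet BPM subprocess, if any
    completed: bool = False            # simple checklist items may have no real process behind them

@dataclass
class CaseContext:
    """Everything a case worker sees: metadata, tasks, content, collaboration, history."""
    case_id: str
    properties: dict = field(default_factory=dict)    # case metadata
    documents: list = field(default_factory=list)     # content associated with the case
    tasks: list = field(default_factory=list)         # CaseTask objects
    comments: list = field(default_factory=list)      # asynchronous collaboration, kept with the case
    participants: list = field(default_factory=list)
    history: list = field(default_factory=list)       # (timestamp, event description) pairs
```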

As you would expect, cases are dynamic and allow case workers to add new tasks for the case at any time. Business rules, although they may not even be visible to the end user, can be defined during design time in order to set properties and trigger events in the case. Rules can be changed at runtime, although we didn’t see an example of how that would be done or why it might be necessary.

There are two perspectives in the Case Manager Builder design environment: a simplified view for the business analysts to define the high level view of the case, and a more detailed view for the technologists to build in more complex integrations and complex decision logic. This environment allows for either start-from-scratch or template-based case solution definitions, and is targeted at the business analyst with a wizard-based interface. Creating a case solution includes defining the following from the business analyst’s view:

  • case properties (metadata)
  • roles that will work on this case, which will be bound to users at runtime
  • case types that can exist within the same case solution
  • document types that can be included in the case or may even trigger the case
  • case data and search views
  • which case views that each role will see
  • default folders to be included in the case
  • tasks that can be added to this case, each of which is a process (even if only a one-step process), and any triggering events for the tasks
  • the process behind each of the tasks, which is a simple step editor directly in Case Builder; a system lane in this editor can represent the calling of a web service or a WPS process

All of these can be defined on an ad hoc basis, or stubbed out initially using a wizard interface that walks the business analyst through and prompts for which of these things needs to be included in the case solution. Comments can be added to design-time objects such as tasks, allowing for collaboration between designers.
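For a sense of what the business analyst’s definition covers, here’s a hypothetical, much-simplified representation of the elements listed above for an imaginary credit card dispute solution. The real artifact is stored as object classes in Content Manager and BPM rather than anything like this, and all of the names and values are invented for illustration.

```python
# Invented example of the elements a business analyst defines for a case solution;
# names and values are purely illustrative.
credit_dispute_solution = {
    "name": "Credit Card Dispute",
    "properties": ["disputeType", "amount", "customerId"],        # case metadata
    "roles": ["Intake Clerk", "Dispute Analyst", "Supervisor"],   # bound to actual users at runtime
    "case_types": ["Billing Dispute", "Fraud Claim"],
    "document_types": ["Dispute Form", "Statement", "Correspondence"],
    "views_by_role": {"Dispute Analyst": "analyst_workspace_page"},  # runtime UI pages per role
    "default_folders": ["Correspondence", "Evidence"],
    "tasks": [
        # each task maps to a process, even if it's only a one-step process
        {"name": "Verify customer", "trigger": "case created", "process": "VerifyCustomer"},
        {"name": "Request statement", "trigger": "user action", "process": "OneStepChecklist"},
    ],
}
```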

As was made clear in an audience question, the design that a business analyst is doing will actually create object classes in both Content Manager and BPM; this is not a requirements definition that then needs to be coded by a developer. From that standpoint, you’ll need to be sure that you don’t let them do this in your production environment since you may want to have someone ensure that the object definitions aren’t going to cause performance problems (that seemed screamingly obvious to me, but maybe wasn’t to the person asking the question).

From what Levirne said, it sounds as if the simple step editor view of the task process can then be opened in the BPM Process Designer by someone more technical to add other information, implying that every task does have a BPM process behind it. It’s not clear if this is an import/export to Process Designer, or just two perspectives on the same model, or if a task always generates a BPM process or if it can exist without one, e.g., as a simple checklist item. There were a lot of questions during the session and he didn’t have time to take them all, but I’m hoping for a more in-depth demo/briefing in the weeks to come.

Case analytics, including both dashboards (Cognos BAM) and reports (Excel and Cognos BI reports) based on case metadata, and more complex analytics based on the actual content (Content Analytics), are provided to allow you to review operational performance and determine root causes of inefficiencies. From a licensing standpoint, you would need a Cognos BI license to use that for reporting, and a limited-license Content Analytics version is included out of the box that can only be used for analyzing cases, not all your content. He didn’t cover much about the analytics in this session, which was primarily focused on the design time and runtime of the case management itself.

The end-user experience for Case Manager is in the IBM Mashup Center, a mashup/widget environment that allows the inclusion of both IBM’s widgets and any others that support the iWidget standard and expose their properties via REST APIs. IBM has had the FileNet ECM widgets available for a while to provide some standard ECM and BPM capabilities; the new version provides much more functionality to include more of the case context, including metadata and tasks. A standard case widget provides access to the summary, documents, activities and history views of the case, and can link to a case data widget, a document viewer widget for any given document related to the case, and e-forms for creating more complex user interfaces for presenting and entering data as part of the case.

Someone I know who has worked with FileNet for years commented that Case Manager looks a lot like the integrated demos that they’ve been building for a couple of years now; although there’s some new functionality here and the whole thing is presented as a neat package, it’s likely that you could have done most of this on your own already if you were proficient with FileNet ECM and some of the other products involved.

We also heard from Brian Benoit of Pyramid Solutions, a long-time FileNet partner who has been an early adopter of Case Manager and responsible for building some of the early templates that will be available when the product is released. He demonstrated a financial account management template, including account opening, account maintenance, financial transaction requests and correspondence handling. In spite of IBM’s claim that there is no migration path from Business Process Framework (BPF), there is a very BPF-like nature to this application; clearly, the case management experience that they gained from BPF usage has shaped the creation of Case Manager, or possibly Pyramid was so familiar with BPF that they built something similar to what they knew already. Benoit said that the same functionality could be built out of the box with Case Manager, but that what they have provided is an accelerator for this sort of application.

Levirne assured me that everything in his presentation could be published immediately, although I’ve had analyst briefings on Case Manager that are under embargo until the official announcement tomorrow, so I’ll fill in any missing details then.

IBM IOD Opening Session: ACM and Analytics

I’m at IBM’s Information On Demand (IOD) conference this week, attending the opening session. There are 10,000 attendees here (including, I assume, IBM employees) for a conference that covers information management of all sorts: databases, analytics and content management. As at other large vendor conferences, they feel obligated to assault our senses in the morning with loud performance art: today, it’s Japanese drummers (quite talented, and thankfully short). From a logistics standpoint, the wifi fell to its knees before the opening session even started (what, like you weren’t expecting this many people??); IBM could learn a few lessons about supporting social media attendees from SAP, which provided a social media section with tables, power and wired internet to ensure that our messages got out in a timely fashion.

Getting back to the session, it was hosted by Mark Jeffries, who provided some interesting and amusing commentary between sessions, told us the results of the daily poll, and moderated some of the Q&A sessions; I’ve seen him at other conferences and he does a great job. First up from IBM is Robert LeBlanc (I would Google his title, but did I mention that there’s no wifi in here as I type?), talking about how the volume of information is exploding, and yet people are starved for the right information at the right time: most business people say that it’s easier to get information on the internet than out of their own internal systems. Traditional information management – database and ECM – is becoming tightly tied with analytics, since you need analytics to make decisions based on all that information, and gain insights that help to optimize business.

They ran some customer testimonial videos, and the term “advanced case management” came up early and often: I sense that this is going to be a theme for this conference, along with the theme of being analytics-driven to anticipate and shape business outcomes.

LeBlanc was then joined on stage by two customers: Mike Dreyer of Visa and Steve Pratt of CenterPoint Energy. In both cases, these organizations are leveraging information in order to do business better; for example, Visa used analytics to determine that “swipe-and-go” for low-value neighborhood transactions, such as those at Starbucks, were so low risk that they didn’t need immediate verification, speeding each transaction and therefore getting your morning latte to you faster. CenterPoint, an energy distributor, uses advanced metering and analytics not only for end-customer metering, but to monitor the health of the delivery systems so as to avoid downtimes and optimize delivery costs. They provided insights into how to plan and implement an information management strategy, from collecting the right data to analyzing and acting on that information.

We then heard from Arvind Krishna, IBM’s GM of Information Management, discussing the cycle of information management and predictive analytics, including using analytics and event processing to optimize real-time decisions and improve enterprise visibility. He was then joined on a panel by Rob Ashe, Fred Balboni and Craig Hayman, moderated by Mark Jeffries; this started to become more of the same message about the importance of information management and analytics. I think that they put the bloggers in the VIP section right in front of the stage so that we don’t bail out when it starts to get repetitive. I’m looking forward to attending some of the more in-depth sessions to hear about the new product releases and what customers are doing with them.

Since the FileNet products are showcased at IOD, this is giving me a chance to catch up with a few of my ex-FileNet friends from when I worked there in 2000-1: last night’s reception was like old home week with lots of familiar faces, and I’m looking forward to meeting up with more of them over the next three days. Looking at the all-male group of IBM executives speaking at the keynote, however, reminded me why I’m not there any more.

Disclosure: In addition to providing me with a free pass to the conference, IBM paid my travel expenses to be here this week. I flew Air Canada coach and am staying at the somewhat tired Luxor, so that’s really not a big perq.