Kofax Capture Product Portfolio

I finished the first day at Kofax Transform with a briefing on the Capture product portfolio from Bruce Orcutt. A key trend in content capture is that content can really come from anywhere, in any format: paper, web, data, fax, etc.; Kofax Capture is attempting to be that universal gateway for all content capture, not just the scanning that they’re best known for.

Some Kofax Capture feature updates:

  • Thin client indexing and validation, so that the work that would normally be done by an indexing/validation operator on a desktop client can now be done in a thin client, at a lower TCO.
  • Access to KTM capabilities, including structured forms processing for extraction and other document and page processing. There are still some functions that require a full KTM implementation, such as content-based classification and separation, but a big chunk of KTM functionality is now right in KC.
  • Import connector is now a separate product, handling import from fax, email, files, SMS and other sources. This isn’t just a simple import; VRS can be applied to enhance images before downstream recognition. No more printing of faxes and emails so that they can be scanned!
  • Kofax Front Office Server (KFS) allows KC to be extended to the front panel of an MFP, so that KC processes can be initiated there. I covered this in more detail in my post about the MFP session earlier today, although I missed noting that only Ricoh MFPs support card swipes for authentication.
  • Centralized configuration of VRS, which is then pushed out to the individual scanning stations running KC and VRS.
  • Detection and reporting of device problems based on image quality degradation from within VRS, e.g., stretched images may indicate ADF transport roller wear, allowing maintenance to be performed before catastrophic equipment failure occurs.

This was more of an incremental update than a review of the entire portfolio, but worthwhile nonetheless.

Kofax and MFPs

Lots of interesting content this afternoon; I had my eye on integrating BPM and Kofax Capture, but ended up at the session on using MFPs (multi-function printers, aka MFDs or multi-function devices) for point-of-origination document capture with Kofax Front Office Server (KFS). Rather than collecting documents at remote offices or branches and shipping them to central locations, KFS puts scanning capabilities on the MFP that already exists in many offices to get the documents captured hours or days earlier, and eliminate some of the paper movement and handling. This isn’t just about scanning, however: it’s necessary to extract metadata from the documents in order to make them actionable.

They presented several scenarios, starting with a simple touchless pattern:

  • Branch user authenticates at MFP using login, which can be synchronized with Active Directory
  • Branch user scans batch of documents using a button on the panel corresponding to the workflow that these documents belong to; these buttons are configured on Kofax Front Office Server to correspond to specific scan batch classes
  • VRS and KTM process the documents, doing image correction and auto-indexing if possible
  • The documents are committed to a content repository
  • The user can receive a confirmation email when the batch is created, or a link to the document in the repository after completion

Different buttons/options can be presented on the MFP panel for different users, depending on which shortcuts are set up for them during KFS configuration; this means that the MFP panel doesn’t have to be filled up with a bunch of buttons that are used by only a few users, but is effectively tailored for each user role. There are also global shortcuts that can be seen on the MFP panel before login, and are available to all logged-in users.

A more complex scenario had them scan at the MFD, then return to their computer and use a web client to do any validation and correction required before committal to the repository; this is the thin client version of KTM validation rather than a function of KFS, I believe. This has the advantage of not requiring any software to be installed on the desktop clients, but it’s still fundamentally a data entry function, not something that you want a lot of branch users to be doing regardless of how slick the interface is.

The speaker stated that KFS is designed for ad hoc capture, not batch capture, so there are definite limitations on the volume passing through a single KFS server. In particular, it does not appear to be suitable (or optimized) for large batches; it’s really for scanning a small set of documents quickly, such as a handful of documents related to a particular customer file. Also, the images need to pass to KFS in color or greyscale mode for processing, then are thresholded by VRS to pure black and white before passing on to KTM, so it may be better to locate KFS at the branches where there are multiple MFPs in order to reduce bandwidth requirements. Fundamentally, KFS is a glorified scanning module; it doesn’t do any of the recognition or auto-indexing, although you can use it to capture manual index values at the MFD.

It’s also possible to do some controlled ad hoc scanning: instead of uncontrolled scan to email (which many MFPs support natively, but which ends up being turned off by organizations that are nervous about it), you can scan to email through KFS, with the advantage that KFS can convert the document to searchable PDF rather than just an image format. However, it’s not clear that you can restrict the recipients – only the originators, since the user has to have this function in their profile – so organizations that don’t currently allow scan to email (if that function exists on the MFP) may not enable this either.

There is also a KFS web client for managing documents after MFP scanning, before they go to Capture and KTM: pages can be reviewed, (permanently) redacted and reordered, documents can be split and merged, text notes can be burned into the document, and so on. Since this allows for document editing – changing the actual images before committal to the content management system – you couldn’t enable this functionality in scenarios that are concerned with legal admissibility of the documents. The web client has some additional functions, such as generating a cover page that you pre-fill (on the screen) with the batch index fields, then print and use as a batch cover page that will be recognized by KTM. It also supports thin client scanning with a desktop scanner, which is pretty cool – as long as Windows recognizes the scanner (TWAIN), the web client can control it.

As he pointed out, all of the documentation is available online without having to register – you can find the latest KFS documentation here and support notes here. I wish all vendors did this.

They finished up with some configuration information; there appear to be two different configuration options that correspond to some of their scenarios:

  • The “Capture to Process” scenario, where you’re using the MFP as just another scanner for your existing capture process, has KFS, VRS and KC on the KFS server. KTM, KTM add-ons and Kofax Monitor are on the centralized server, into which presumably dedicated KC workstations also feed.
  • The “Touchless Processing” scenario moves KTM from the centralized server to the KFS server, so that the images are completely processed by the time that they leave that server. I think.

I need to get a bit more clarity on configuration alternatives, but one key point for distributed capture via MFP is that documents scanned in greyscale/color at the MFP move to KFS at that full bit depth (hence much larger images); the VRS module that is co-located with KFS does the image enhancement and thresholding to a binary image. That means that you want to ensure fast/cheap connectivity between the MFP and the KFS server, but the bandwidth can be considerably lower for the link from KFS to KTM; the back-of-envelope sketch below shows why.
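To put rough numbers on that bandwidth difference, here’s a back-of-envelope calculation for a single US-letter page at 300 dpi; the compression ratios are my own ballpark assumptions, not Kofax figures.

```java
// Back-of-envelope sizes for one US-letter page scanned at 300 dpi.
// The JPEG and G4 compression ratios are rough assumptions, not measurements.
public class ScanSizeEstimate {
    public static void main(String[] args) {
        int width  = (int) (8.5 * 300);     // 2550 pixels
        int height = 11 * 300;              // 3300 pixels
        long pixels = (long) width * height;

        long rawColor   = pixels * 3;       // 24-bit RGB, uncompressed
        long jpegColor  = rawColor / 15;    // assume ~15:1 JPEG
        long rawBilevel = pixels / 8;       // 1 bit/pixel after thresholding
        long g4Bilevel  = rawBilevel / 20;  // assume ~20:1 CCITT G4

        System.out.printf("MFP -> KFS (color, JPEG): ~%,d KB%n", jpegColor / 1024);
        System.out.printf("KFS -> KTM (bilevel, G4): ~%,d KB%n", g4Bilevel / 1024);
        // Roughly a 30x difference per page, which is why the MFP-to-KFS link
        // needs the bandwidth, not the KFS-to-KTM link.
    }
}
```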

Kofax Capture Technical Session

It’s been a long time since I looked at much of the Kofax technology, so I took the opportunity of their Transform conference to attend a two-hour advanced technical session with Bret Hassler, previously the Capture product manager but now responsible for BPM, and Bruce Orcutt from product marketing. They started by asking the attendees about areas of interest so that they could tailor the session, and thereby rescue us from the PowerPoint deck that would otherwise have been the default. This session contained a lot more technical detail than I will ever use (such as the actual code used to perform some of the functions), but that part went by fairly quickly and overall it was a useful session for me. I captured some of the capabilities and highlights below.

Document routing allows large scan batches to be broken up into sub-batches that can be tracked and routed independently, with documents and pages moved between the child batches. This makes sense both for splitting work into manageable sizes for human processing, and so that there doesn’t need to be as much presorting of documents prior to scanning. For my customers who are considering scanning at the point of origination, this can make a lot of sense where, for example, a batch loaded on an MFD in a regional office may contain multiple types of transactions that go to different types of users in the back office. Child batch classes can be changed independently of the main batch, so that the properties and rules applied are based on the child batch class rather than the original class. A reference batch ID, which can be exported to an ECM repository as metadata on the resulting documents, can be used to recreate the original batch and the child batch that a document belonged to during capture.

Batch splitting, and the ability to change routing and permissions on the child batch, makes particular sense for processing that is done in the Capture workflow, so that the child batches follow a specific processing path and are available to specific roles. This will also feed well into their plans to integrate TotalAgility (the Singularity product that they acquired last year) for full process management, as described in this morning’s keynote. As Hassler pointed out, integrating TotalAgility for capture workflow will also bring in a graphical process modeler; currently, this is all done in code.

Disaster recovery allows remote capture sites connected to a centralized server to fail over to a DR site with no administrative intervention. In addition to supporting ongoing operations, batches in flight are replicated between the central sites (using, in part, third-party replication software) and held at remote capture locations until replication is confirmed, so that batches can be resumed on the DR server. The primary site manages batch classes and designates/manages the alternate sites. There’s some manual cleanup to do after a failure, but that’s to be expected.

Kofax has just released a Pega connector; like other custom connectors, they ship it with source code so that you can make changes to it (that, of course, is not necessarily a good idea, since it might compromise your upgradability). The Kofax Export Connector for PRPC does not send the images to Pega, since Pega is not a content repository; instead, it exports the document to an IBM FileNet, EMC Documentum or SharePoint repository, gets the CMIS ID back again, then creates a Pega work object that has that document ID as an attachment. Within Pega, a user can then open the document directly from that link attachment. You have to configure Pega to create a web service method that allows a work object to be created for a specific work class (which will be invoked from Kofax), and create the attribute that will hold the CMIS document ID (which will be specified in the invocation method parameters). There are some technicalities around the data transformation and mapping, but it looks fairly straightforward; a rough sketch of the flow follows. The advantage of doing this rather than pushing documents into Pega directly as embedded attachments is that the chain of custody of documents is preserved and the documents are immediately available to other users of the ECM system.
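Here’s a minimal sketch of that flow, assuming Apache Chemistry OpenCMIS on the repository side; the endpoint URLs, folder path, work class name and the createWorkObject() helper are hypothetical placeholders for whatever the actual connector and your Pega configuration expose.

```java
// Sketch only: commit a document via CMIS, then hand the CMIS ID to Pega.
// Endpoint URLs, paths and the Pega work class are hypothetical.
import org.apache.chemistry.opencmis.client.api.Document;
import org.apache.chemistry.opencmis.client.api.Folder;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.PropertyIds;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;
import org.apache.chemistry.opencmis.commons.enums.VersioningState;

import java.util.HashMap;
import java.util.Map;

public class CaptureToPegaSketch {
    public static void main(String[] args) {
        // 1. Connect to the CMIS-capable repository (FileNet, Documentum or SharePoint).
        Map<String, String> parms = new HashMap<>();
        parms.put(SessionParameter.ATOMPUB_URL, "https://ecm.example.com/cmis/atom"); // hypothetical
        parms.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
        parms.put(SessionParameter.USER, "captureService");
        parms.put(SessionParameter.PASSWORD, "secret");
        Session session = SessionFactoryImpl.newInstance().createSession(parms);

        // 2. Commit the document and keep the CMIS object ID
        //    (content stream omitted here for brevity).
        Map<String, Object> props = new HashMap<>();
        props.put(PropertyIds.OBJECT_TYPE_ID, "cmis:document");
        props.put(PropertyIds.NAME, "claim-12345.tif");
        Folder folder = (Folder) session.getObjectByPath("/Claims/Inbound"); // hypothetical path
        Document doc = folder.createDocument(props, null, VersioningState.MAJOR);
        String cmisId = doc.getId();

        // 3. Create the Pega work object, passing the CMIS ID so that the work
        //    object links to the document rather than embedding it.
        createWorkObject("MyCo-Claims-Work", cmisId);
    }

    static void createWorkObject(String workClass, String cmisDocumentId) {
        // Placeholder for the web service method configured in PRPC.
        System.out.printf("create %s with document attachment %s%n", workClass, cmisDocumentId);
    }
}
```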

Good updates, although I admit to doing some extracurricular reading during the parts with too much detail.

Kofax Transform Keynote: Craig Le Clair

Craig Le Clair from Forrester gave a keynote on the role of capture and dynamic case management. He co-authored the Forrester Wave for Dynamic Case Management published in January 2011, in which Singularity (acquired by Kofax last year) placed in the leaders section. If I had wifi right now, I’d look up and link to his Forrester profile, but I recall that he also covers CRM and enterprise software of various sorts.

I have little respect for middle-aged people (many younger than me) who just don’t make the effort to get plugged into this century, and tell cute anecdotes about “digital natives” – usually children under 10 who do something clever with an iPad – as an introduction to talking about social media and mobile applications in business environments. After that initial misstep, however, Le Clair laid out how the shift of consumer power to mobile devices will drive functions such as mobile capture, which Kofax provides by allowing the Atalasoft portal and mobile devices to become the point of origin for captured content.

He continued on to talk about managing untamed business processes, the topic of presentations that I’ve seen him do on webinars, and how case management can help knowledge workers deal with unstructured processes within an information-rich context. This was a bit of an introduction to case management, which is probably appropriate for most of the audience, who come from the Kofax customer/partner side, including the three main use cases that Forrester is predicting for case management: investigations, service requests, and incidents.

He then went completely off on a tangent to talk about SharePoint and content frameworks, recommending that targeting SharePoint in your organization requires a clear view of its strengths and weaknesses. Duh, yeah. This appeared to be some sort of weak lead-in to a division between SharePoint targets and capture-driven process targets, but it didn’t really make sense; possibly there just wasn’t sufficient time to develop the idea. Not sure why the discussion of content ecosystems was even in this presentation.

He finished with a comparison between “Process 2011” (meaning today, so the slide should be updated to “Process 2012”) and “Process 2020”: in today’s world, processes are dictated by the business, not the customers, and mobile is just a pretty face on a traditional process that keeps peeking out at the most inopportune moments. There is a shift happening that puts customers in control in business processes, and enterprise software needs to adapt to accommodate that.

Kofax Transform Keynote: Reynolds Bish

I’m in San Diego today and tomorrow for Kofax’s annual user and partner conference, Transform. It’s been a while since I’ve had to complain about no conference wifi, so I’ll just get that out of the way now – seriously, it’s 2012, wifi should be ubiquitous at conferences. The session moderator just pointed out all of the countries from which international attendees have traveled, and how many of them do you think have US data coverage for their smartphones? I can’t even pick up the Hilton room wifi (which I paid for) or the lobby wifi (which is free) in the main conference area. Grrr. Also very little social media promotion: although there is a Foursquare venue, the conference guide doesn’t mention a hashtag, Twitter account to follow or any other social media links, plus no app or mobile-friendly website.

This is a bit of a FileNet reunion here, since the west coast location attracted a lot of people from FileNet who didn’t stay on with IBM after the acquisition. Many people that I know from my short period working at FileNet in 2000-1 (plus the work that I’ve done with their customers over the years) are Kofax employees or partners, and it seems to be a good fit for them especially since the acquisition of Singularity to give more weight to their “capture-enabled BPM” message. I’ve always thought of Kofax as the “gold standard” for document capture from the time that I first met them over 20 years ago, but they have fallen off my radar recently and it’s good to get caught up.

Back to the keynotes: Reynolds Bish, Kofax CEO, was up to talk about their timeline and future. He headed Captiva before its acquisition by EMC, and came on to Kofax about five years ago when it was in a bit of a slump in terms of vision. There’s been quite a bit of transformation since then – the Atalasoft and Singularity acquisitions, divestiture of their hardware business, trimming of the underperforming partners – and their financial results are starting to show it, with increased revenues, no debt and cash in the bank. Their Europe numbers are down, as I expect many enterprise software vendors’ are, and Bish talked quite openly about the global economic issues causing that and what they are expecting for the coming months. They’re closing a number of deals over $1M, showing that they’ve grown far beyond their document scanning origins. They own 35% of the batch image capture market (the largest position), and hold significant market share in batch and ad hoc content capture, primarily in the enterprise market.

Their Atalasoft acquisition adds the capability to use internet portals as a point of origin for their capture platform, allowing consumers to capture their own documents and submit them through a secure portal. The Singularity acquisition, adding BPM and dynamic case management, will allow them to extend their capture workflow into full downstream process and case management. He stated that this allows them to double their addressable market, and showed statistics comparing the capture and BPM market sizes; he implied that they could achieve a similar market share in BPM as they have in capture, which is clearly not going to happen, but combining capture and BPM/case management does provide some compelling capabilities. Although other vendors, such as IBM (with the Datacap acquisition), have capture and BPM products, Kofax is pushing to have a single unified product as a competitive differentiator in both spaces, through both on-premise and cloud licensing. He stated that Kofax Capture and KTM are strong products that are continuing to develop, but I assume that this new combined product will eventually offer an alternative platform for capture that extends into full BPM functionality.

This is going to be very interesting to watch, since Kofax can potentially define and own this market, but also risks splitting their customer and partner base between the existing and new platforms. I also think that the existing Kofax partner base may not be the best channel for BPM, much as FileNet found in 2000 when they tried to push their new eProcess product through the existing document imaging partners (and their own sales teams) and found the results less than satisfactory.

They plan to continue with organic growth but also make some additional strategic acquisitions, and eventually augment their London Stock Exchange listing with a NASDAQ listing. I might just buy some of those shares if they can keep to their vision and work their way through some of the challenges.

What’s New in IBM ECM Products

Feri Clayton gave an update on the ECM product portfolio and roadmap, in a bit more depth than yesterday’s Bisconti/Murphy ECM product strategy session. She reinforced the message that the products are made up of suites of capabilities and components, so that you’re not using different software silos. I’m not sure I completely buy into IBM’s implementation of this message as long as there are still quite different design environments for many of these tools, although they are making strides in consolidating the end user experience.

She showed the roadmap for what has been released so far in 2011, plus the remainder of this year and 2012: on the BPM side, there will be a 5.1 release of both BPM and Case Manager in Q4, which I’ll be hearing more about in separate BPM and Case Manager product sessions this afternoon. The new Nexus UI will preview in Q4, and be released in Q2 of 2012. There’s another Case Manager release projected for Q4 2012.

There was a question about why BPM didn’t appear in the ECM portfolio diagram, and Clayton stated that “BPM is now considered part of Case Manager”. Unlike the BPM vendors who think of ACM as a part of BPM, I think that she’s right: BPM (that is, structured process management that you would do with IBM FileNet BPM) is a functionality within ACM, not the other way around.

She went through the individual products in the portfolio, and some of the updates:

  • Production Imaging and Capture now includes remote capture, which is nice for organizations that don’t want to centralize their scanning/capture. It’s not clear how much of this is the Datacap platform versus the heritage FileNet Capture, but I imagine that the Datacap technology is going to be driving the capture direction from here on. They’ve also integrated the IBM Classification Module for automatic recognition and classification of documents.
  • Content Manager OnDemand (CMOD) for report storage and presentment will see a number of enhancements including CMIS integration.
  • Social Content Management uses an integration of IBM Connections with ECM to allow an ECM library to access and manage content from within Connections, display ECM content within a Connections Community and a few other cross-product integrations. There are a couple of product announcements about this, but they seem to be in the area of integration between Connections and ECM as opposed to adding any native social content management to ECM.
  • FileNet P8, the core content management product, had a recent release (August) with such enhancements as bidirectional replication between P8 and Image Services, content encryption, and a new IBM-created search engine (replacing Verity).
  • IBM Content Manager (a.k.a., the product that used to compete with P8) has a laundry list of enhancements, although it still lags far behind P8 in most areas.

We had another short demo of Nexus, pretty much the same as I saw yesterday: the three-pane UI dominated by an activity stream with content-related events, plus panes for favorites and repositories. They highlighted the customizability of Nexus, including lookups and rules applied to metadata field entry during document import, plus some nice enhancements to the content viewer. The new UI also includes a work inbasket for case management tasks; not sure if this also includes other types of tasks such as BPM or even legacy Content Manager content lifecycle tasks (if those are still supported).

Nexus will replace all of the current end-user clients for both content and image servers, providing a rich and flexible user experience that is highly customizable and extensible. They will also be adding more social features to this; it will be interesting to see how this develops as they expand from a simple activity stream to more social capabilities.

Clayton then moved on to talk about ACM and the Case Manager product, which is now coming up to its second release (called v5.1, naturally). Given that much of the audience probably hasn’t seen it before, she went through some of the use cases for Case Manager across a variety of industries. Even more than the base content management, Case Manager is a combination of a broad portfolio of IBM products within a common framework. She listed some of the new features, but I expect to see these in more detail in this afternoon’s dedicated Case Manager session, so will wait to cover them then.

She discussed FileNet P8 BPM version 5.x: now Java-based for significant performance and capacity improvements (also due to a great deal of refactoring to remove old code sludge, as I have heard). As I wrote about last month, it provides Linux and zLinux support, and also allows for multi-tenancy.

With only a few minutes to go, she whipped through information lifecycle governance (records and retention management), including integration of the PSS Atlas product; IBM Content Collector; and search and content analytics. Given the huge focus on analytics in the morning keynote, it’s kind of funny that it gets about 30 seconds at the end of this session.

IBM ECM Product Strategy

I finished the first day of IOD in the ECM product strategy session with Ken Bisconti and John Murphy. I was supposed to have a 1:1 interview with Bisconti at this same time, so now I know why that was cancelled – the room is literally standing room only, and the same session (or, at least, a session with the identical name) is scheduled for tomorrow morning, so there’s obviously a great deal of interest in what’s coming up in ECM.

They started with a summary of their 2011-2012 priorities:

  • Intelligent, distributed capture based on the Datacap acquisition
  • Customer self-service and web presentment of reports and statements
  • Rich user experiences and mobile device support
  • Whole solutions through better product integration and packaging as well as vertical applications and templates

The key deliverables in this time frame:

  • IBM Production Imaging Edition
  • Datacap Taskmaster expansion
  • CM8, FileNet CM updates
  • Project “Nexus”, due in 2012, which is the next generation of web-based user experience across the IBM software portfolio

They stressed that customers’ investments in their repositories are maintained, so the focus is on new ways to capture, integrate and access that data, such as bidirectional replication (including annotations and metadata) between older Image Services repositories and P8 Content Manager, and content repository federation.

Nexus is intended to address the classic problems with FileNet UI components: either they were easy to maintain or easy to customize, but never both. As someone who spent a lot of time in the 90s customizing UIs with the early versions of those components, I’d have to agree wholeheartedly with that statement. We saw a demo of the under-development version of Nexus, which shows three panes: a filterable activity stream for content and related processes, a favorites list, and a list of repositories. Searching in this environment can be restricted to a subset of the repositories, or across repositories: including non-IBM repositories such as SharePoint. Navigating to a repository provides a fairly standard folder-based view of the repository – cool for demos but often useless for very large repositories – with drag-and-drop capabilities for adding documents to the repository. The property dialog that appears for a new document can access external data sources in order to restrict the input to specific metadata fields.

This also provides access to teamspaces, which are sort of like a restricted version of an object store/library, where a user can create a teamspace (optionally based on a template), specify the folder structure, metadata and predefined searches, then add other users who can collaborate within this space. When a teamspace is opened, it looks pretty much like a regular library, except that it’s a user-created space rather than something that a system admin needs to set up.

Because of the underlying technology, Nexus can be surfaced in a number of different ways, including different types of widgets as well as on mobile devices. This style of user experience is a bit behind the curve of some other vendors, but is at least moving in the right direction. I look forward to seeing how this rolls out next year.

They moved on to discuss social content management, which covers a variety of social use cases:

  • Accessing and sharing content in the context of communities
  • Finding and navigating social content and social networks
  • Managing and governing social business information
  • Delivering social content business solutions

This obviously covers a lot of ground, and they’re really going to have to leverage the skills and lessons learned over in the Lotus group to jumpstart some of the social areas.

Next was Case Manager; I’m looking forward to a more in-depth product briefing on this alone, rather than just five minutes as part of the entire ECM strategy, but their content-centric view of case management seems to be resonating with their customers. That’s not to say that this is the only way to do case management, as we see from a number of other ACM vendors, but rather that IBM customers with big FileNet content repositories can really see additional value in the functionality that Case Manager provides on top of those repositories.

The newly announced Case Manager v5.1 aims to make it simpler to create and deliver case-based solutions, and includes a number of new integration capabilities including BPM (as we saw this morning) and data integration. They are also focusing on vertical industry case-based accelerators, and we saw a demo of a healthcare claims case management application that brings together case management, content and analytics to help a case worker detect fraud. Like most case management scenarios, this is not focused on the actual automated detection of fraud, but on surfacing information to the user that will allow them to make that determination. Doing this in the context of content repositories and content analytics provides a rich view of the situation, allowing the case worker to make better decisions in much less time.

The case worker can create and assign tasks to others, including field workers who use a native iPad app to perform their field review (in the demo, this was a fraud investigator visiting a healthcare practitioner) including capturing new content using the iPad’s camera. Although the version that they demonstrated requires a live connection, they do expect to be delivering apps for disconnected remote devices as well, which is truly critical for supporting remote workers who may wander far beyond the range of their data network.

Moving on to information lifecycle management and governance, some of which is based on last year’s acquisition of PSS Systems, the portfolio includes smart archive (e.g., archiving SAP and other structured data), legal eDiscovery, records management and retention, and disposal and governance management. They’re now providing smart archive as a cloud offering, as well as on premise. The buzz-phrase of this entire area is “defensible disposition”, which sounds a bit like something that happens on The Sopranos, but is really about having an overall information governance plan for how data of all types is maintained, retained and destroyed.

They finished with a bit about IBM Watson for integrating search with predictive analytics, and the industry solutions emerging from this such as IBM Content and Predictive Analytics for Healthcare which is being shown here at IOD this week. We heard a bit about what this combination of technologies can do in the Seton Healthcare presentation earlier this afternoon, and we’ll see a demo of the actual packaged solution in the Wednesday morning keynote.

IBM IOD ECM Keynote: Content In Motion

Content at rest = cost

Content in motion = value

That was the message that kicked off the ECM keynote, then Kevin Painter took the stage to introduce the winners of the four ECM customer innovation awards – Novartis, Tejon Ranch, US Nuclear Regulatory Commission and Wells Fargo – before turning things over to Doug Hunt.

IBM defines unstructured data, or content, as pretty much everything that doesn’t fit in a database table. Traditionally, this type of information is seen as inaccessible, cumbersome, expensive, unmanageable and risky by business, IT, records managers and legal. However, with the right management of that content, including putting it into motion to augment systems of record, it can become accessible and relevant, providing a competitive advantage.

We heard from Wells Fargo about their ECM implementation, where they are moving from having scanned documents as merely an archival practice to having those documents be an active part of the business transactions. [This sounds just like moving from post-processing scanning to pre-processing scanning and workflow, which we’ve been doing for 30+ years, but maybe it’s more complex than that.] For them, ECM is a fundamental part of their mortgage processing architecture and business transaction enabling, supporting multiple lines of business and processes. This, I think, is meant to represent the “Capture” slice of the pie.

Novartis was on stage next to talk about their records management (the “Govern” slice), particularly around retention management of their records to reduce legal risk across their multi-national organization.

Next, Hunt addressed “Analyze” with content analytics, joined by someone from Seton Healthcare to discuss how they’re using Watson analytics to proactively identify certain high-risk patients with congestive heart failure to allow early treatment that can reduce the rate of hospital readmissions. With 80% of their information being unstructured, they need something beyond standard analytics to address this.

Case management was highlighted as addressing the “Activate” slice, and Hunt was joined by someone from SEB, a Nordic bank, to discuss how they are using IBM Case Manager as an exception handling platform (i.e., for those processes that kick out of the standard straight-through process), replacing their existing workflow engine.

Hunt did briefly address the “Socialize” slice, but he was so clued out about anything to do with social content that it was a bit embarrassing. Seriously, I don’t want to hear the executive in charge of IBM’s ECM strategy talk about social as something that his wife and kids do but he doesn’t.

He finished up talking about the strength of the IBM ECM industry accelerators and business partners, both of which help to get systems up and running at their customers’ sites as quickly as possible.

Better Together: IBM Case Manager, IBM Content Manager and IBM BPM

Dave Yockelson from ECM product marketing and Amy Dickson from IBM BPM product management talked about something that I’m sure is on the minds of all FileNet customers who are doing anything with process: how do the (FileNet-based) Case Manager and Content Manager fit together with the WebSphere BPM products?

They started with a description of the IBM BPM portfolio – nothing new here – and how ACM requires an integrated approach that addresses repeatable patterns. Hmmmm, not completely sure I agree with that. Yockelson went through the three Forrester divisions of case management from their report on the ACM space, then went through a bit more detail on IBM Case Manager (ICM) and how it knits together functionality from the entire IBM software portfolio: content, collaboration, workflow, rules, events, integration, and monitoring and analytics. He positioned it as a rapid application development environment for case-based solutions, which is probably a good description. Dickson then went through IBM BPM (the amalgam of Lombardi and WebSphere Process Server that I covered at Impact), which she promised would finish up the “background” part and allow them to move on to the “better together” part.

So, in the aforementioned better together area:

  • Extend IBM BPM processes with content, using document and list widgets that can be integrated in a BPM application. This does not include content event processing, e.g., spawning a specific process when a document event such as check-in occurs, so it is no different from integrating FileNet content into any BPMS.
  • Extend IBM BPM Advanced (i.e., WPS) processes with content through a WebSphere CMIS adapter into the content repository. Ditto re: any BPMS (or other system) that supports CMIS being able to integrate with FileNet content.
  • Invoke an IBM BPM Advanced process from an ICM case task. I assume that this is via a web service call (since WPS allows processes to be exposed as web services), not specifically an IBM-to-IBM integration – something like the sketch below.
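If that assumption holds, the call from the case task would look something like this plain JAX-WS sketch; the WSDL location, namespaces and payload shape are all hypothetical, since they depend entirely on how the process interface was defined when it was exposed.

```java
// Minimal sketch of invoking a process exposed as a web service via JAX-WS
// Dispatch; every URL, QName and payload element here is a placeholder.
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPMessage;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import java.net.URL;

public class InvokeProcessSketch {
    public static void main(String[] args) throws Exception {
        URL wsdl = new URL("https://wps.example.com/ClaimProcess?wsdl"); // hypothetical
        QName serviceName = new QName("http://example.com/claims", "ClaimProcessService");
        QName portName    = new QName("http://example.com/claims", "ClaimProcessPort");

        Service service = Service.create(wsdl, serviceName);
        Dispatch<SOAPMessage> dispatch =
                service.createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE);

        // Build a request that starts the process; the payload shape depends
        // entirely on how the process interface was defined.
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        request.getSOAPBody()
               .addChildElement("startClaimProcess", "c", "http://example.com/claims")
               .addTextNode("case-4711"); // hypothetical case reference
        request.saveChanges();

        SOAPMessage response = dispatch.invoke(request);
        response.writeTo(System.out);
    }
}
```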

Coming up, we’ll see some additional integration points:

  • Invoke an IBM BPM Express/Standard process from an ICM case task. This, interestingly, implies that you can’t expose a BPM Express/Standard process as a web service, or it could have been done without additional integration, couldn’t it? The selection of the process and the mapping of case to process variables are built right into the ICM Builder, which is definitely a nice piece of integration that makes it relatively seamless to integrate ICM and BPM.
  • Provide a federated inbox for ICM and BPM (there was already an integrated inbox for the different types of BPM processes) so that you see all of your tasks in a single list, based on the Business Space Human Tasks widget. When you click on a task in the list, the appropriate widgets are spawned to handle that type of work.
  • Interact with ICM cases directly from a BPM process through an integration service that allows cases to be created, retrieved and updated (metadata only, it appears) as part of a BPM process.

This definitely fits IBM’s usual modus operandi of integrating rather than combining products with similar functionality; this has a lot of advantages in terms of reducing the time to release something that looks (sort of) like a single product, but has some disadvantages in terms of underlying software complexity, as I discussed in my IBM BPM review from Impact. A question from the audience asked about consolidation of the design environment; as expected, the answer is “yes, over time”, which is similar to the answer I received at Impact about consolidation of the process engines. I expect that we’ll see a unified design environment at some point for ICM and both flavors of BPM by pulling ICM design into the Process Center, but there might still be three engines under the covers for the foreseeable future. Given the multi-product mix that makes up ICM, there will also be separate engines (and likely design environments) for non-process functions such as rules, events and analytics, too; the separate engines are inevitable in that case, but there could definitely be some better integration on the design side.

IBM Case Manager In Depth

I had a chance to see IBM’s new Case Manager product at IOD last month, and last week Jake Levirne, the product manager, gave me a more complete demo. If you haven’t read my earlier product overview from IOD as well as the pre-IOD briefing on Case Manager and related products, the business analyst view, a quick bit on customizing the UI and the technical roundtable, you may want to do so now since I’ll try not to repeat too much of what’s there already.

Runtime

[Screenshot: IBM Case Manager Runtime – CSR role view in portal]

We started by going through the end-user view of an application for insurance claims. There’s a role-based portal interface, and this user role (CSR) sees a list of cases, can search for a case based on any of the properties, or add a new case – fairly standard functionality. In most cases, as we’ll see later, cases are created automatically on the receipt of a specific document type, but there needs to be the flexibility to have users create their own as well. Opening a case, the case detail view shows case data (metadata) and case information, which comprises documents, tasks and history that are contained within the case. There’s also a document viewer, reminding us that case management is content-centric; the entire view is a bit reminiscent of the previous Business Process Framework (BPF) case management add-on, which has definitely contributed to Case Manager in a philosophical sense if not any of the actual underlying technology.

For those FileNet geeks in the crowd: a case is now a native content type in the FileNet content repository, rather than a custom object type as was used in the BPF; logically, you can think of this as a case folder that contains everything related to the case (see the code sketch below). The Documents tab is pretty straightforward – a list of documents attached to the case – and the History tab shows a list of events on the case, including documents being added and tasks started/completed.

The interesting part, as you might have guessed, is the Tasks tab, which shows the tasks (small structured processes, in reality) assigned to this case, either as required or optional tasks. Tasks can be added to a case at design time or runtime; when added at runtime, these are predefined processes, although there may be customizable parameters that the user can modify – the end user can’t change the definition of a task. This gives some flexibility to the user – they can choose whether or not to execute the optional tasks, they can execute tasks in any order, and they can add new tasks to a case – but doesn’t allow the user to create new tasks: they are always selecting from a predefined list. Depending on the task definition, tasks for their case may end up assigned to them, to someone else, or to a shared queue corresponding to a role. This results in the two lists that we saw back in the first portal view: one is a list of cases based on search criteria, and the other is a list of tasks assigned to this user or to a shared queue on which they are working.
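For those same FileNet geeks, here’s a rough sketch of what “case as folder” means at the API level, using the classic Content Engine Java API; the connection URI, object store name and folder path are hypothetical, authentication is omitted, and the real case classes carry more behaviour than shown.

```java
// Sketch: a case is logically a folder in the content repository.
// Connection URI, object store name and case folder path are hypothetical;
// authentication (UserContext/JAAS subject) is omitted for brevity.
import com.filenet.api.collection.DocumentSet;
import com.filenet.api.core.Connection;
import com.filenet.api.core.Document;
import com.filenet.api.core.Domain;
import com.filenet.api.core.Factory;
import com.filenet.api.core.Folder;
import com.filenet.api.core.ObjectStore;

import java.util.Iterator;

public class CaseFolderSketch {
    public static void main(String[] args) {
        Connection conn = Factory.Connection.getConnection("http://ce.example.com/wsi/FNCEWS40MTOM");
        Domain domain = Factory.Domain.fetchInstance(conn, null, null);
        ObjectStore os = Factory.ObjectStore.fetchInstance(domain, "ClaimsOS", null);

        // Fetch the case "folder" and walk the documents filed in it.
        Folder caseFolder = Factory.Folder.fetchInstance(os, "/Claims/Case-4711", null);
        DocumentSet docs = caseFolder.get_ContainedDocuments();
        for (Iterator<?> it = docs.iterator(); it.hasNext(); ) {
            Document d = (Document) it.next();
            System.out.println(d.get_Name());
        }
    }
}
```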

[Screenshot: IBM Case Manager Runtime – case task view]

Creating a new case is fairly simple for the user: they click to add a case, and are presented with a list of instructions for filling out the initial case data, such as the date of loss and policy number in our insurance claim example. The data that can be entered using the standard metadata widget is pretty limited and the form isn’t customizable, however, so often there is an e-form included in the case that is used to capture more information. In this situation, there is a First Notice of Loss e-form that the user fills out to gather the claim data; this e-form is contained as a document in the case, but also synchronizes some of its fields with the case metadata. This ability to combine capabilities of documents, e-forms and folders has been in FileNet for quite a while, so it’s no surprise that they’re leveraging it here. It is important to note, however, that this e-form would have to be designed in the Lotus forms designer, not in the Case Manager design tools: a reminder that the IBM Case Manager solution is a combination of multiple tools, not a single monolithic system. Whether this is a good or bad thing is a bit of a philosophical discussion: in the case of e-forms, for example, you may want to use the same form in other applications besides Case Manager, so it may make sense that it is defined independently, but it will require additional design skills.

Once the case is created, it will follow any initial process flows that are assigned to it, and can kick off manual tasks. For example, there could be automated activities that update a claims system with the data captured on the FNOL form, and manual tasks created and assigned to a CSR to call the third party’s insurance carrier. The underlying FileNet content engine has a lot of content-centric event handling baked right into it, so capabilities such as triggering processes or other actions based on content or metadata updates have been there all along, and are being used for any changes to a case or its contents; the sketch below shows the rough shape of such an event handler.
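To give a flavour of that event model, here’s the rough shape of a Content Engine event action handler; the handler body is illustrative only, and this is a sketch of the general CE mechanism rather than of how Case Manager itself is wired.

```java
// Sketch of a Content Engine event action handler; a subscription on the
// case (or document) class binds a handler like this to create/update
// events. What the handler does here is illustrative only.
import com.filenet.api.engine.EventActionHandler;
import com.filenet.api.events.ObjectChangeEvent;
import com.filenet.api.exception.EngineRuntimeException;
import com.filenet.api.util.Id;

public class CaseUpdateHandler implements EventActionHandler {
    @Override
    public void onEvent(ObjectChangeEvent event, Id subscriptionId)
            throws EngineRuntimeException {
        // Inspect the changed object and kick off downstream work, e.g.,
        // launch a task process or update the claims system.
        System.out.println("changed object: " + event.get_SourceObjectId());
    }
}
```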

Design Time

We moved over to the Case Manager Builder to look at how designers – business analysts, in IBM’s view – define new case types. At the highest level, you first define a “solution”, which can include multiple case types. Although the example that we went through used one case type per solution, we discussed some situations where you might want to have multiple case types in a single solution: for example, a solution for a customer service desktop, where there was a different case type defined for each type of request. Since case types within a single solution can share user interface designs, document types and properties, this can reduce the amount of design work if you plan ahead a bit.

[Screenshot: IBM Case Manager Builder – define solution properties]

For each solution, you define the following:

  • Properties (metadata)
  • Roles and the in-baskets (shared work queues) to which they have access
  • Document types
  • In-baskets associated with this solution
  • Case types that make up this solution.

Then, for each case type within a solution, you define the following:

  • The document type that will be used to trigger the creation of a case of this type, if any. Cases can be added manually, as we saw in the runtime example, or can be triggered by other events, but the heavily content-centric focus of Case Manager assumes that you might usually want to kick off a case automatically when a certain document type is added to the content repository.
  • The default Add Case page, which is a link to a previously-defined page in the IBM Mashup Center that will be used as the user interface on selecting the Add Case button.
  • The default Case Details page, which is a link to the Mashup Center page for displaying a case.
  • Optionally, overrides for the case details page for each role, which allows different roles to see different views of the case details.
  • Properties for this case type, which can be manually inherited from the solution level or defined just at this level. Solution properties are not automatically inherited by each case type, since it was felt that this would make things unnecessarily confusing, but any of the solution properties can be selected for exposure at the case level.
  • The property views (subsets) that are displayed in the case summary, case details and case search views. If more than about a dozen properties are used, then IBM recommends using an e-form instead of the standard views, which are pretty limited in terms of display customization. A view can include a group of properties for visual grouping.
  • Case folders to organize the content within a case.
  • Tasks associated with the case, grouped by required and optional tasks. Unlike the user interfaces, document types and properties, task definitions are not shared across case types within a solution, which means that similar or identical tasks must be redefined for each case type. This is definitely an area that they can improve in the future; if their claim of loosely-coupled cases and processes is to be fully realized, then task/process definitions should be reusable at least across case types within a solution, if not across solutions.

[Screenshot: IBM Case Manager Builder – Step Editor]

Although part of the case type definition, I’ll separate out the task definition for clarity. For each task within a case type, you define:

  • As noted above, whether it is required or optional for this case type.
  • Whether the task starts automatically or manually, or if the user optionally adds the task to the case at runtime.
  • Inclusion of the task in a set. Sets provide visual grouping of tasks within a case, but also control execution: a set can be specified as all-inclusive (all tasks execute if any of the tasks execute) or mutually exclusive (only one of the tasks in the set can be executed). The mutually exclusive situation could be used to create a manner of case subtypes, instead of using multiple case types within a solution, where the differences between the subtypes are minimal.
  • Preconditions for the task to execute, that is, the task triggers. In many cases, this will be the case start, but it could also be when a document of a specific type is added to the case, or when a case property value is updated to meet certain conditions, including combinations of property values (the sketch after this list models these trigger types).
  • Design comments, which could be used simply as documentation, but are primarily intended for use by a non-technical business analyst who has created the case type definition up to this point but wants to pass off the creation of the actual process flow to someone more technical.
  • The process flow associated with this task, using the visual Step Editor. This allows the roles defined for the solution to be added as swimlanes, and the human-facing steps to be plotted out. This supports branching as well as sequential flow, but no automated steps; however, any automated steps that are added via the full Process Designer will appear in the uneditable grey lanes at the top of the Step Editor map. If you’ve used the Process Designer before, the step properties at the left of the Step Editor will appear familiar: they’re a subset of the step properties that you would see in the full Process Designer, such as step deadlines and allowing reassignment of the step to another user.

Being long acquainted with FileNet BPM, a number of my questions were around the connection between the Step Editor and the full BPM Process Designer; Levirne handled some of these, and I also had a few technical discussions at IOD that shed light on this. In short, the Step Editor creates a full XPDL process definition and stores it in the content repository, which is the same as what happens for any process definition created in the Process Designer. However, if you open this process definition with the Process Designer, it recognizes that it was created using the Case Manager Step Editor and performs some special handling. From the Process Designer, a more technical designer can add any system steps required (which will appear, but not be editable, in the Step Editor): in other words, they’ve implemented a fully shared model used by two different tools: the Case Builder Step Editor for a less technical business analyst, and the BPM Process Designer for a developer.

[Screenshot: IBM Case Manager Builder – deploy solution]

As with any process definition, the Case Manager task process definitions must be transferred to the process engine before they can be used to instantiate new processes: this is done automatically when the solution is deployed.

Deploying a solution to a test environment is a one-click operation from the Case Manager Builder main screen, although moving that to another environment isn’t quite as easy: the new release of the P8 platform allows a Case Manager solution to be packaged in order to move it between servers, but there’s still some manual work involved.

We wrapped up with a discussion of the other IBM products that integrate with Case Manager, some easier than others:

  • Case Manager includes a limited license of ILOG JRules, but it’s not integrated in the Case Manager Builder environment: it must be called as a web service from the Process Designer. There are already plans for better integration here, which is essential.
  • Content Analytics for data mining and analytics on the case metadata and case content, including the content of attached documents.
  • Case Analyzer, which is a version of the old BPM Process Analyzer, with enhancements to show analytics at the case level and the inclusion of custom case properties to provide a business view as well as an operational view in dashboards and reports.

They’re working on better integration between Case Manager and the WebSphere product line, including both WebSphere Process Server and Lombardi; this will be necessary to combat competitors who have a single solution that covers the full range of BPM functionality from structured processes to completely dynamic case management.

Built on one of the best industrial-strength enterprise content management products around, IBM Case Manager will definitely see some adoption in the existing IBM/FileNet client base: adding this capability onto an existing FileNet Content Manager repository could provide a lot of value with a minimal amount of work for the customer, assuming that they actually allow their business analysts to do the work that IBM intends them to. In spite of that power, however, there is a lack of flexibility in the runtime task definition that may make it less competitive in the open market.

[Video: IBM Case Manager demo]