Client-side Service Composition Using Generic Service Representative

I’m back for a second day at CASCON, attending the technical papers session focused on service oriented systems.

First of the three papers was Client-side Service Composition Using Generic Service Representative by Mehran Najafi and Kamran Sartipi of McMaster University; the approach is intended to prevent the privacy and bandwidth problems that can occur when data is passed to a server-side composition. It relies on a client-side stateless task service and a generic service representative, doing whatever processing can be done locally before resorting to a remote web service call. Using the task services approach rather than a Javascript or RIA-based approach provides more flexibility in terms of local service composition. McMaster has a big medical school, and the example that he discussed was based on clinical data, where privacy is a big concern; keeping the patient data only on the client, rather than having it flow through a server-side composition, reduces the privacy concerns and improves performance.
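To make the idea concrete, here's a minimal sketch of the pattern as I understood it: a stateless task service runs on the client, and only a de-identified, pre-processed result is ever handed to the remote web service. All of the names and the risk-score logic here are my own illustrative assumptions, not from the paper.

```python
# Hypothetical sketch: raw clinical data stays on the client; only a
# privacy-safe aggregate goes over the wire to the remote service.

def local_task_service(patient_records):
    """Runs entirely on the client: raw patient data never leaves it."""
    total = len(patient_records)
    flagged = sum(1 for r in patient_records if r["risk_score"] > 0.8)
    return {"cohort_size": total, "high_risk_count": flagged}

def remote_service_call(payload):
    """Stand-in for the actual remote web service invocation."""
    return {"recommendation": "review" if payload["high_risk_count"] else "none"}

def compose(patient_records):
    # The service representative orchestrates: local processing first,
    # remote call only with the minimal, de-identified payload.
    summary = local_task_service(patient_records)
    return remote_service_call(summary)

result = compose([{"risk_score": 0.9}, {"risk_score": 0.2}])
print(result)  # {'recommendation': 'review'}
```

The point isn't the toy logic; it's that the composition decision (local versus remote) lives in the client-side representative rather than in a server-side orchestration.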

I’ve seen this paradigm in use in a couple of different BPM systems that provide client-side screen flow (usually in Javascript at the client or in the web tier) within a single process activity, including TIBCO AMX/BPM’s Page Flow, Salesforce’s Visual Process Manager and Outsystems. Obviously, the service composition presented in the paper today is a more flexible approach, and is true client-side rather than on the web tier, but the ideas of local process management are already appearing in some BPM products.

There were some interesting questions about support for this approach on mobile platforms (possible as the mobile OSs become more capable) and a discussion on what we’re giving up in terms of loose coupling by having a particular orchestration of multiple services bound to a single client-side activity.

CASCON Keynote: 20th Anniversary, Big Data and a Smarter Planet

With the morning workshop (and lunch) behind us, the first part of the afternoon is the opening keynote, starting with Judy Huber, who oversees the 5,000 people at the IBM Canada software labs, which includes the Centre for Advanced Studies (CAS) technology incubation lab that spawned this conference. This is the 20th year of CASCON, and some of the attendees have been here since the beginning, but there are a lot of younger faces who were barely born when CASCON started.

To recognize the achievements over the years, Joanna Ng, head of research at CAS, presented awards for the high-impact papers from the first decade of CASCON, one each for 1991 to 2000 inclusive. Many of the authors of those papers were present to receive the award. Ng also presented an award to Hausi Müller from University of Victoria for driving this review and selection process. The theme of this year’s conference is smarter technology for a smarter planet – I’ve seen that theme at all three IBM conferences that I’ve attended this year – and Ng challenged the audience to step up to making the smarter planet vision into reality. Echoing the words of Brenda Dietrich that I heard last week, she stated that it’s a great time to be in this type of research because of the exciting things that are happening, and the benefits that are accruing.

Following the awards, Rod Smith, VP of IBM emerging internet technologies and an IBM fellow, gave the keynote address. His research group, although it hasn’t been around as long as CAS, has a 15-year history of looking at emerging technology, with a current focus on “big data” analytics, mobile, and browser application environments. Since they’re not a product group, they’re able to take their ideas out to customers 12-18 months in advance of marketplace adoption to test the waters and fine-tune the products that will result from this.

They see big data analytics as a new class of application on the horizon, since they’re hearing customers ask for the ability to search, filter, remix and analyze vast quantities of data from disparate sources: something that the customers thought of as Google’s domain. Part of IBM’s BigInsights project (which I heard about a bit last week at IOD) is BigSheets, an insight engine for enabling ad hoc discovery for business users, on a web scale. It’s like a spreadsheet view on the web, which is a metaphor easily understood by most business users. They’re using the Hadoop open source project to power all of the BigInsights projects.
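For readers who haven't run into Hadoop: the model underneath it is MapReduce, which is what makes "search, filter, remix" over huge data sets parallelizable. This tiny pure-Python word-count sketch shows the shape of the model only; it's my own illustration, not anything from BigSheets or BigInsights.

```python
from collections import Counter
from itertools import chain

# Toy MapReduce: map emits (key, 1) pairs from each record, reduce sums
# the counts per key. Hadoop distributes these two phases across a cluster.

def map_phase(records):
    return chain.from_iterable(((word, 1) for word in r.split()) for r in records)

def reduce_phase(pairs):
    counts = Counter()
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

records = ["smarter planet", "big data", "big insights"]
print(reduce_phase(map_phase(records)))  # {'smarter': 1, 'planet': 1, 'big': 2, 'data': 1, 'insights': 1}
```

In the real system, the map and reduce functions run on many nodes against file splits in HDFS; the spreadsheet metaphor of BigSheets sits on top of that machinery.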

It wouldn’t be a technical conference in 2010 if someone didn’t mention Twitter, and this is no exception: Smith discussed using BigSheets to analyze and visualize Twitter streams related to specific products or companies. They also used IBM Content Analytics to create the analysis model, particularly to find tweets related to mobile phones with a “buy signal” in the message. They’ve also done work on a UK web archive for the British Library, automating the web page classification and making 128 TB of data available to researchers. In fact, any organization that has a lot of data, mostly unstructured, and wants to open it up for research and analysis is a target for this sort of big data solution. It stands to reason that the more often you can generate business insights from the massive quantity of data constantly being generated, the greater the business value.

Next up was Christian Couturier, co-chair of the conference and Director General of the Institute of Information Technology at Canada’s National Research Council. NRC provides some of the funding to IBM Canada CAS Research, driven by the government’s digital economy strategy, which includes not just improving business productivity but creating high-paying jobs within Canada. He mentioned that Canadian businesses lag behind other countries in adoption of certain technologies, and I’m biting my tongue so that I don’t repeat my questions of two years ago at IT360, where I challenged the Director General of Industry Canada on what they were doing about the excessively high price of broadband and complete lack of net neutrality in Canada.

The program co-chairs presented the award for best paper at this show, on Testing Sequence Diagram to Colored Petri Nets Transformation, and the best student paper, on Integrating MapReduce and RDBMSs; I’ll check these out in the proceedings as well as a number of other interesting looking papers, even if I don’t get to the presentations.

Oh yeah, and in addition to being a great, free conference, there’s birthday cake to celebrate 20 years!

CASCON Workshop: Accelerate Service Integration In Your BPM and SOA Applications

I’m attending a workshop at the first morning of CASCON, the conference on software research hosted by IBM Canada. There’s quite a bit of good work done at the IBM Toronto software lab, and this annual conference gives them a chance to engage the academic and corporate community to present this research.

The focus of this workshop is service integration, including enabling new services from existing applications and creating new services by composing from existing services. Hacking together a few services into a solution is fairly simple, but your results may not be all that predictable; industrial-strength service integration is a bit more complex, and is concerned with everything from reusability to service level agreements. As Allen Chan of IBM put it when introducing the session: “How do we enable mere mortals to create a service integration solution with predictable results and enterprise-level reliability?”

The first presentation was by Mannie Kagan, an IBMer who is working with TD Bank on their service strategy and implementation; he walked us through a real-life example of how to integrate services into a complex technology environment that includes legacy systems as well as newer technologies. Based on this, and a large number of other engagements by IBM, they are able to discern patterns in service integration that can greatly aid in implementation. Patterns can appear at many levels of granularity, which they classify as primitive, subflow, flow, distributed flow, and connectivity topology. From there, they have created an ESB framework pattern toolkit, an Eclipse-based toolkit that allows for the creation of exemplars (templates) of service integration that can then be adapted for use in a specific instance.

He discussed two patterns that they’ve found to be particularly useful: web service notification (effectively, pub-sub over web services), and SCRUD (search, create, read, update, delete); think of these as some basic building blocks of many of the types of service integrations that you might want to create. This was presented in a specific IBM technology context, as you might imagine: DataPower SOA appliances for processing XML messages and legacy message transformations, and WebSphere Services Registry and Repository (WSRR) for service governance.
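SCRUD is just CRUD with search bolted on, but it's worth seeing why it works as a reusable building block: the five operations form a complete, predictable contract for any entity service. Here's a minimal in-memory sketch; the class and store are my own stand-ins, not anything from the IBM toolkit.

```python
# SCRUD (search, create, read, update, delete) as a minimal service
# interface sketch, backed by an in-memory dict for illustration.

class ScrudService:
    def __init__(self):
        self._store = {}
        self._next_id = 1

    def create(self, record):
        rid = self._next_id
        self._store[rid] = dict(record)
        self._next_id += 1
        return rid

    def read(self, rid):
        return self._store.get(rid)

    def update(self, rid, fields):
        self._store[rid].update(fields)

    def delete(self, rid):
        self._store.pop(rid, None)

    def search(self, **criteria):
        # Return ids of records matching all given field values.
        return [rid for rid, rec in self._store.items()
                if all(rec.get(k) == v for k, v in criteria.items())]
```

A pattern toolkit can stamp out this shape for any entity, with only the schema and the backing system varying per instance, which is exactly the exemplar/template idea described above.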

In his wrapup, he pointed out that not all patterns need to be created at the start, and that patterns can be created as required when there is evidence of reuse potential. Since patterns take more resources to create than a simple service integration, you need to be sure that there will be reuse before it is worth creating a template and adding it to the framework.

Next up was Hans-Arno Jacobsen of University of Toronto discussing their research in managing SLAs across services. He started with a business process example of loan application processing that included automated credit check services, and had an SLA in terms of parameters such as total service subprocess time, service roundtrip time, service cost and service uptime. They’re looking at how the SLAs can guide the efficient execution of processes, based in large part on event processing to detect and determine the events within the process (published state transitions). He gave quite a detailed description of content-based routing and publish-subscribe models, which underlie event-driven BPM, and their PADRES ESB stack that hides the intricacies of the underlying network and system events from the business process execution by creating an overlay of pub-sub brokers that filters and distributes those events. In addition to the usual efficiencies created by the event pub-sub model, this allows (for example) the correlation of network slowdowns with business process delays, so that the root cause of a delay can be understood. Real-time business analytics can also be driven from the pub-sub brokers.
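Content-based routing is the mechanism underneath that broker overlay: subscribers register predicates over message content, and a broker delivers only the matching events. This is a deliberately tiny sketch of the routing idea, not PADRES itself.

```python
# Minimal content-based pub-sub broker: subscriptions are predicates
# over the event's content, not topic names.

class Broker:
    def __init__(self):
        self._subs = []  # list of (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        self._subs.append((predicate, callback))

    def publish(self, event):
        for predicate, callback in self._subs:
            if predicate(event):
                callback(event)

delays = []
broker = Broker()
# Subscribe only to process-delay events over a 100 ms threshold.
broker.subscribe(lambda e: e["type"] == "delay" and e["ms"] > 100, delays.append)
broker.publish({"type": "delay", "ms": 250})
broker.publish({"type": "delay", "ms": 50})
broker.publish({"type": "state", "ms": 0})
print(len(delays))  # 1
```

In the real system the predicates are distributed across an overlay of brokers that filter close to the event source, which is what lets things like network slowdowns be correlated with process delays without flooding every consumer.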

He finished by discussing how business processes can actually be guided by SLAs, that is, runtime use of SLAs rather than just for monitoring processes. If the process can be allocated to multiple resources in a fine-grained manner, then the ESB broker can dynamically determine the assignment of process parts to resources based on how well those resources are meeting their SLAs, or expected performance based on other factors such as location of data or minimization of traffic. He gave an example of optimization based on minimizing traffic by measuring message hops, which takes into account both rate of message hops and distance between execution engines. This requires that the distributed execution engines include engine profiling capabilities that allow an engine to determine not only its own load and capacity, but that of other engines with which it communicates, in order to minimize cost over the entire distributed process. To fine-tune this sort of model, process steps that have a high probability of occurring in sequence can be dynamically bound to the same execution engine. In this situation, they’ve seen a 47% reduction in traffic, and a 50% reduction in cost relative to the static deployment model.
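The dynamic assignment idea can be sketched as a simple cost minimization: pick the execution engine with the lowest combined cost of current load and message-hop distance from the step's data. The cost function and its weights are my own illustrative assumption, not the actual model from the talk.

```python
# Sketch of SLA-guided step assignment: choose the engine minimizing a
# weighted sum of engine load and message hops to the data location.

def assign_step(engines, data_location, load_weight=1.0, hop_weight=2.0):
    def cost(engine):
        hops = engine["hops_from"][data_location]
        return load_weight * engine["load"] + hop_weight * hops
    return min(engines, key=cost)["name"]

engines = [
    {"name": "engine-a", "load": 0.9, "hops_from": {"db1": 1}},  # busy but near the data
    {"name": "engine-b", "load": 0.2, "hops_from": {"db1": 3}},  # idle but far away
]
print(assign_step(engines, "db1"))  # engine-a: fewer hops outweigh the higher load
```

The engine-profiling requirement in the talk is what makes the `load` and `hops_from` inputs available at runtime; binding likely-sequential steps to the same engine is the special case where the hop distance between them drops to zero.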

After a brief break, Ignacio Silva-Lepe from IBM Research presented on federated SOA. SOA today is mostly used in a single domain within an organization, that is, it is fairly siloed in spite of the potential for services to be reused across domains. Whereas a single domain will typically have its own registry and repository, a federated SOA can’t assume that is the case, and must be able to discover and invoke services across multiple registries. This requires a federation manager to establish bridges across domains in order to make the service group shareable, and inject any cross-domain proxies required to invoke services across domains.

It’s not always appropriate to have a designated centralized federation manager, so there is also the need for domain autonomy, where each domain can decide what services to share and specify the services that it wants to reuse. The resulting cross-domain service management approach allows for this domain autonomy, while preserving location transparency, dynamic selection and other properties expected from federated SOA. In order to enable domain autonomy, the domain registry must not only have normal service registry functionality, but also references to required services that may be in other domains (possibly in multiple locations). The registries then need to be able to do a bilateral dissemination and matching of interest and availability information: it’s like internet dating for services.
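That "internet dating for services" line captures the algorithm nicely: each domain registry advertises what it offers and what it needs, and matches are the cross-domain pairs where one domain's need meets another's offer. The data structures below are purely my own illustration of the bilateral matching idea.

```python
# Sketch of bilateral interest/availability matching across autonomous
# domain registries: each domain lists services it offers and needs.

def match_domains(domains):
    matches = []
    for consumer in domains:
        for needed in consumer["needs"]:
            for provider in domains:
                if provider is not consumer and needed in provider["offers"]:
                    # (who needs it, which service, who provides it)
                    matches.append((consumer["name"], needed, provider["name"]))
    return matches

domains = [
    {"name": "claims", "offers": {"fraud-check"}, "needs": {"credit-score"}},
    {"name": "lending", "offers": {"credit-score"}, "needs": set()},
]
print(match_domains(domains))  # [('claims', 'credit-score', 'lending')]
```

The hard parts the research addresses are everything this sketch leaves out: doing the matching without a central authority, injecting the cross-domain proxies once a match is made, and restricting which domains may even see a service specification.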

They have quite a bit of work planned for the future, beyond the fairly simple matching of interest to availability: allowing domains to restrict visibility of service specifications to authorized parties without using a centralized authority, for example.

Marsha Chechik, also from University of Toronto, gave a presentation on automated integration determination; like Jacobsen, she collaborates with IBM Research on middleware and SOA research; unlike Jacobsen, however, she is presenting research that is at a much earlier stage. She started with a general description of integration, where a producer and a consumer share some interface characteristics. She went on to discuss interface characteristics (what already exists) and service exposition characteristics (what we want): the as-is and to-be state of service interfaces. For example, there may be a requirement for idempotence, where multiple “submit” events over an unreliable communications medium would result in only a single result. In order to resolve the differences in characteristics between the as-is and to-be, we can consider typical service interface patterns, such as data aggregation, mapping or choreography, to describe the resolution of any conflicts. The problem, however, is that there are too many patterns, too many choices and too many dependencies. The goal of their research is to identify essential integration characteristics and make a language out of them; identify a methodology for describing aspects of integration; identify the order in which patterns can be determined; identify decision trees for integration pattern determination; and determine cases where integration is impossible.
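The idempotence requirement she mentioned is worth a concrete sketch: retransmitted "submit" requests carry a request id, and the service applies each id at most once, so retries over an unreliable link produce only a single result. The class and request-id scheme here are my own illustration of the property, not anything from the talk.

```python
# Idempotent submit: duplicate requests (same request id) return the
# original result and cause no second side effect.

class IdempotentSubmit:
    def __init__(self):
        self._seen = {}  # request_id -> first result

    def submit(self, request_id, payload):
        if request_id in self._seen:
            return self._seen[request_id]  # replay: no new effect
        result = {"order": payload, "status": "accepted"}
        self._seen[request_id] = result
        return result

svc = IdempotentSubmit()
first = svc.submit("req-1", {"item": "widget"})
retry = svc.submit("req-1", {"item": "widget"})  # client retried after a timeout
print(first is retry)  # True: the duplicate produced no second order
```

If the existing (as-is) interface lacks this property and the to-be characteristics demand it, that gap is exactly the kind of conflict their pattern-determination language is meant to surface and resolve.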

Their first insight was to separate pattern-related concerns between physical and logical characteristics; every service has elements of both. They have a series of questions that begin to form a language for describing the service characteristics, and a classification for the results from those questions. The methodology contains a number of steps:

  1. Determine principle data flow
  2. Determine data integrity data flow, e.g., stateful versus stateless
  3. Determine reliability flow, e.g., mean time between failure
  4. Determine efficiency, e.g., response time
  5. Determine maintainability

Each of these steps determines characteristics and mapping to integration patterns; once a step is completed and decisions made, revisiting it should be minimized while performing later steps.

It’s not always possible to provide a specific characteristic for any particular service; their research is working on generating decision trees for determining if a service requirement can be fulfilled. This results in a pattern decision tree based on types of interactions; this provides a logical view but not any information on how to actually implement them. From there, however, patterns can be mapped to implementation alternatives. They are starting to see the potential for automated determination of integration patterns based on the initial language-constrained questions, but aren’t seeing any hard results yet. It will be interesting to see this research a year from now to see how it progresses, especially if they’re able to bring in some targeted domain knowledge.

Last up in the workshop was Vadim Berestetsky of IBM’s ESB tools development group, presenting on support for patterns in IBM integration offerings. He started with a very brief description of an ESB, and WebSphere Message Broker as an example of an ESB that routes messages from anywhere to anywhere, doing transformations and mapping along the way. He basically walked through the usage of the product for creating and using patterns, and gave a demo (where I could see vestiges of the MQ naming conventions). A pattern specification typically includes some descriptive text and solution diagrams, and provides the ability to create a new instance from this pattern. The result is a service integration/orchestration map with many of the properties already filled in; obviously, if this is close to what you need, it can save you a lot of time, like any other template approach.

In addition to demonstrating pattern usage (instantiation), he also showed pattern creation by specifying the exposed properties, artifacts, points of variability, and (developer) user interface. Looks good, but nothing earth-shattering relative to other service and message broker application development environments.

There was an interesting question that goes to the heart of SOA application development: is there any control over what patterns are created and published to ensure that they are useful as well as unique? The answer, not surprisingly, is no: that sort of governance isn’t enforced in the tool since architects and developers who guide the purchase of this tool don’t want that sort of control over what they do. However, IBM may see very similar patterns being created by multiple customer organizations, and choose to include a general version of that pattern in the product in future. A discussion about using social collaboration to create and approve patterns followed, with Berestetsky hinting that something like that might be in the works.

That’s it for the workshop; we’re off to lunch. Overall, a great review of the research being done in the area of service integration.

This afternoon, there’s the keynote and a panel that I’ll be attending. Tomorrow, I’ll likely pop in for a couple of the technical papers and to view the technology showcase exhibits, then I’m back Wednesday morning for the workshop on practical ontologies, and the women in technology lunch panel. Did I mention that this is a great conference? And it’s free?

IOD Keynote: Computational Mathematics and Freakonomics

I attended the keynote this morning, on the theme of looking forward: first we heard from Mike Rhodin, an exec in the IBM Software group, then Brenda Dietrich, a mathematician (and VP – finally, a female IBM exec on stage) from the analytics group in IBM Research. IBM Research has nine labs around the world, including a new one just launched in Brazil, and a number of collaborative research facilities, or “collaboratories”, where they work with universities, government agencies and private industries on research that can be leveraged into the market more quickly. I’ve met a few of the BPM researchers from the Zurich lab at the annual academic BPM conference, but the range of the research across the IBM labs is pretty vast: from nanotechnology, to the cloud, to all of the event generation that leads to the “smarter planet” that IBM has been promoting. She’s here from the analytics group because analytics is at the top of this pyramid of research areas, especially in the context of the smarter planet: all of our devices are generating a flood of events and data, and some pretty smart analytics have to be in place to be able to make sense of all this.

The future of analytics is moving from today’s static model of collect-analyze-present results, to more predictive analytics that can create models of the future based on what’s happened in the past, and use that flood of data (such as Twitter) as input to these analytical models.

I have a lot of respect for IBM for trying out their own ideas on systems on themselves as one big guinea pig, and this analytics research is no exception. They’re using data from all sorts of internal systems, from manufacturing plants to software development processes to human resources, to feed into this research, and benefit from the results. When this starts to hit the outside market, it has impacts on a much wider variety of industries, such as telco and oil field development. Not surprisingly, this ties in with master data management, since you need to deal with common data models if you’re going to perform complex analytics and queries across all of this data, and their research on using the data stream to actually generate the queries is pretty cool.

She showed a short video clip on Watson, an AI “question answering system” that they’ve built, and showed it playing Jeopardy, interpreting the natural language questions – including colloquialisms – and responding to them quickly, beating out some top human Jeopardy players. She closed with a great quote that is inspirational in so many ways, especially to girls in mathematics: “It’s a great time to be a computational mathematician”.

The high-profile speakers of the keynote were up next: Steven Levitt and Stephen Dubner, authors of Freakonomics and Superfreakonomics, with some interesting anecdotes about how they started working together (Levitt’s the genius economist, and Dubner’s the writer who collaborated with him on the books). They talked about turning data into ideas, tying in with the analytics theme; they had lots of interesting and humorous stories on an economic theme, such as teaching monkeys about money as a token to be exchanged for goods and (ahem) services, and what that teaches us about risk and loss aversion in people.

I have a noon flight home to Toronto, so this ends my time at IOD 2010. This is my first IOD: I used to attend FileNet’s UserNet conference before the acquisition, but have never been to IOD or Impact until this year. With over 10,000 people registered, this is a massive conference that covers a pretty wide range of information management technologies, including the FileNet ECM, BPM and now Case Manager software that is my main focus here. I’ve had a look at the new IBM Case Manager, as you’ve read in my posts from yesterday, and give it a bit of a mixed review, although it’s still not even released. I’m hoping for an in-depth demo sometime in the coming weeks, and will be watching to see how IBM launches itself into the case management space.

Customizing the IBM Case Manager UI

Dave Perman and Lauren Mayes had the unenviable position of presenting at the end of the day, and at the same time as the expo reception was starting (a.k.a. “open bar”), but I wanted to round out my view of the new Case Manager product by looking at how the user interfaces are built. This is all about the Mashup Center and the Case Manager widgets; I’ve played around with the ECM widgets in the past, which provide an easy way to build a composite application that includes FileNet ECM capabilities.

Perman walked through the Case Manager Builder briefly to show how everything hangs together – or at least, the parts that are integrated into the Builder environment, which are the content and process parts, but not rules or analytics – then described the mashup environment. The composite application development (mashup) environment is pretty standard functionality in BPM and ACM these days, but Case Manager comes with a pre-configured set of pages that make it easy to build case application UIs. A business analyst can easily customize the standard Case Manager pages, selecting which widgets are included and their placement on the page, including external (non-Case Manager) widgets.

The designer can also override the standard case view pages either for all users or for specific roles; this requires creating the page in the mashup environment and registering it for use in Case Manager, then using the Case Manager Builder to assign that page to the specific actions associated with a case. In other words, the UI design is not integrated into the Case Builder environment, although the end result is linked within that environment.

Mayes then went through the process of building and integrating 3rd party widgets; there’s a lot of material on the IBM website now on how to build widgets, and this was just a high-level view of that process and the architecture of integrating between the Mashup Center and the ACM widgets, themes and ECM services on the application server. This uses lightweight REST services that return JSON, hence easier to deal with in the browser, including CMIS REST services for content access, PE REST services for process access, and some custom case-specific REST services. Since there are widgets for Sametime presence and chat functionality, they link through to a Sametime proxy server on the application server. For you FileNet developer geeks, know that you also have to have an instance of Workplace XT running on the application server as well. I’m not going to repeat all the gory details, but basically once you have your custom widget built, you can deploy it so that it appears on the Mashup Center palette, and can be used like any other pre-existing widget. There’s also a command widget that retrieves all the case information so that it’s not loaded multiple times by all of the other widgets; it’s also a controller for moving between list and detail pages.
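The command widget's role (fetch the case data once, let the other widgets share it) is a classic cache-and-controller pattern, sketched below. The endpoint path and fetch function are hypothetical stand-ins; the actual Case Manager REST URLs aren't something I noted from the session.

```python
import json

# Sketch of the command-widget idea: one REST GET per case, cached,
# so sibling widgets don't each reload the same case data.

class CaseCommandWidget:
    def __init__(self, fetch):
        self._fetch = fetch   # callable that performs the actual REST GET
        self._cache = {}

    def get_case(self, case_id):
        if case_id not in self._cache:
            # e.g. GET <app-server>/cases/{id} returning a JSON body
            self._cache[case_id] = json.loads(self._fetch(case_id))
        return self._cache[case_id]

calls = []
def fake_fetch(case_id):
    calls.append(case_id)
    return json.dumps({"id": case_id, "state": "open"})

widget = CaseCommandWidget(fake_fetch)
widget.get_case("C-100")
widget.get_case("C-100")  # second read served from cache
print(len(calls))  # 1: only one round trip to the server
```

Returning JSON rather than SOAP/XML is what makes this cheap to do in the browser: the payload parses directly into the objects the widgets consume.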

This is a bit more information than I was counting on absorbing this late in the day, and I ducked out early when the IBM partner started presenting what they’ve done with custom widgets.

That’s it for today; tomorrow will be a short day since I fly home mid-day, but I’ll likely be at one or two sessions in the morning.

IBM FileNet BPM Product Update

With all this news this week about Case Manager, my old friend BPM seems like it’s been left on the sidelines, although it’s partially hidden within the new Case Manager offering. However, we have one session by Mike Fannon, BPM product manager, giving us the update on what’s happening with BPM.

The first thing is new OS platform support for Linux and zLinux; although this is important for many customers who have standardized on Linux – or want to integrate BPM with CM8 on their mainframes – you can imagine this is not the most exciting announcement to me. Yes, I have customers who will love this. Now move on. 🙂

Next is the port of the Process Engine to a standalone Java app (not J2EE), from its original C++ beginnings. Although this seems on the surface to be not a lot more exciting than the Linux support, this is pretty significant, and not just for the performance boost that they’re seeing. This means improvements to the complexity of the APIs and database interfaces, better standardization, and also brings PE in line architecturally with the Content Engine and even allows PE and CE to share the same database. In the future, they’re considering moving it to a J2EE container, which provides a lot more flexibility for things like server farming.

They’re also supporting multi-tenancy, allowing multiple PEs to run on the same virtual server with separate application environment, user space, backup and restore for each tenant. These PE Stores (analogous to CE Object Stores) seem to be replacing the old isolated regions paradigm, and there are procedures for moving isolated regions to separate PE Stores. If you’re an old BPM hack, then all your old VW-prefixed admin commands will be replaced as the vestiges of Visual WorkFlo are finally purged. As the owner of a small systems integration firm, I designed and wrote one of the first VW apps back in 1994, so this does bring a small tear to my eye, although it has obviously been a long time coming.

From an upgrade standpoint, there are supported upgrade paths (some staged, some direct) from eProcess 5.2 (can’t believe that’s still out there) as well as BPM 3.53 and later, including migration tools for in-flight process instances. There are changes to the data model of the underlying database tables, so if you’ve built any applications such as advanced analytics that directly hit the operational database, you’re going to have some refactoring to do.

Process Analyzer has been extended to add capabilities for Case Manager, such as aggregation based on case properties. In addition to using Excel pivot tables, which has always been done in the past for PA, you can use Cognos BI instead. Of course, since PA is based on a set of cubes in a MS SQL Server/MS Analysis Services engine that is trickle-fed from the PE database, this has always been possible, but I assume that it’s just better integrated now. Unsurprisingly, they do have a direction to eliminate Microsoft technology dependencies, so at some point in the future, I expect that you’ll see the PA data store ported off the SQL Server/Analysis Services platform. The Process Monitor dashboard has also been updated to handle Case Manager data, and better integrated with Cognos.

There were a number of enhancements to the ECM Widgets in March and June, such as support for Business Space instead of the Mashup Center, and some new widgets for process history and get next work item (finally). It looks like they’re building out the widget functionality to the point where it’s actually usable for real applications; without the get next work item, you couldn’t use it to build any sort of heads-down processing functionality.

There are really few functionality improvements to BPM; most of this is refactoring and platform porting. I think that a lot of the BPM creative juices are going towards Case Manager, and if you look at the direction of the 100% Java PE port and ability to share databases with CE, it’s possible that we’ll see some sort of merging of ECM, BPM and Case Manager into a single engine in the future. IBM, of course, did not say that.

IBM Case Manager Technical Roundtable

Bill Lobig, Mike Marin, Peggy (didn’t catch her last name) and Lauren Mayes hosted a freeform roundtable for any technical questions about the new Case Manager product.

I had a chat with Mike prior to the talk, and he reinforced this during the session, about the genesis of Case Manager: although a lot of ideas came from the old BPF product, Mike and his team spent months interviewing the people who had used BPF to find out what worked and what didn’t, then built something new that incorporated the features most needed by customers. The object model for the case is now part of the basic server classes rather than being a higher-level (and therefore less efficient) custom object, there are new process classes to map properties between case folders and processes, and a number of other significant architectural changes and upgrades to make this happen. I see TIBCO going through this same pain right now with the lack of an upgrade path from iProcess to AMX BPM. To the guy in the audience who said that it’s not fair that IBM gives you a crappy product, has you use it and provide feedback on how to improve it, then charges you for the new product: well, that’s just how software works sometimes, and vendors will never achieve true innovation if they always have to support their (and your) entire legacy. There does need to be some sort of migration path at least for the completed case folder objects from BPF to Case Manager native case objects, although that hasn’t been announced, since these are long-term corporate assets that have to be managed the same as any other content; however, I would not expect any migration of the BPF apps themselves.

More process functionality is being built right into the content engine; this is significant in that you’ve always required both ECM and BPM to do any process management, but it sounds like some of that functionality is being drawn into the content engine itself. Does this mean that the content and process engines will eventually be merged into a single platform and a single product? That would drive further down the road of repositioning FileNet BPM as content-centric – originally done at the time of the FileNet acquisition, I believe, to avoid competition with WebSphere BPM – since if it’s truly content-centric, then why not just converge the engines, including the ACM capabilities? That would certainly make for a more seamless and consistent development environment, especially around issues like object modeling and security.

One consistent message that’s coming across in all the Case Manager sessions is accelerating the development time by allowing a business analyst to create a large part of a case application without involving IT; this is part of what BPF was trying to provide, and even BPM prior to that. I was FileNet’s evangelist for the launch of the eProcess product, which was the first version of the current generation of BPM, and we put forward the idea back in 2000 that a non-technical (or semi-technical) analyst could do some amount of the model-driven application development.

There are obviously still some rough edges in Case Manager, since version 1.0 isn’t even out yet. In a previous session, we saw some of the kludges for content analytics, dashboarding and business rules, and it sounds like role-based security and e-forms aren’t really fully integrated either. The implications of these latter two are tied up with the ease with which you can migrate a case application from one environment to another, such as from development to test to production: apparently, not completely seamlessly, although they are able to bundle part of a case application/template and move it between environments in a single operation. Every vendor needs to deal with this issue, and those that have a more tightly integrated set of objects making up an application have a much easier time with it, especially if they also offer a cloud version of their software and need to migrate easily between on-premise and cloud environments, such as TIBCO, Fujitsu and Appian. IBM is definitely playing catchup in the area of moving defined applications between environments, as well as in their overall integration strategy within Case Manager.

IOD ECM Keynote

Ron Ercanbrack, VP at IBM (my old boss from my brief tenure at FileNet in 2000-1, who once introduced me at a FileNet sales kickoff conference as the “Queen of BPM”), gave a brief ECM-focused keynote this morning. He covered quite a bit of the information that I was briefed on last week, including Case Manager, Content Analytics, improved content integration including CMIS, the Datacap and PSS acquisitions, enhancements to Content Collector, and more. He positioned Case Manager as a product “running on top of BPM”, which is a bit different than the ECM-centric message that I’ve heard so far, but likely also accurate: there are definitely significant components of each in there.

He was followed by Carl Kessler, VP of Development, to give a Case Manager demo; this covered the end-user case management environment (pretty much what we’ve seen in previous sessions, only live), plus Content Analytics for text mining which is not really integrated with Case Manager: it’s a separate app with a different look and feel. I missed the launch point, so I don’t know whether he launched this from a property value in Case Manager or had to start from scratch using the terms relevant to that case. It has some very nice text mining capabilities for searching through the content repository for correlation of terms, including some pretty graphs, but it’s a separate app.

We then went off to the Cognos Real-time Monitoring Dashboard, which is yet another non-integrated app with a different look and feel. He showed a dashboard that had a graph for average age of cases and allowed drill-down on different parameters such as industry type and dispute type, but that’s not really the same as a fully integrated product suite. Although all of the component applications are functional, this needs a lot more integration at the end-user level.

I did get a closer look at some of the Case Builder functionality than I’ve seen already: in the tasks definition section, there are required tasks, optional tasks and user-created tasks, although it’s not clear what user-created tasks are since this is design-time, not runtime.

Ercanbrack came back to the stage for a brief panel with three customers – Bank of America, State of North Dakota, and BlueCross BlueShield of Tennessee – talking about their ECM journeys. This was not specific to case management at all, but using records/retention management to reduce storage costs and risks in financial services, using e-discovery as part of a legal action in healthcare, and content management with a case management approach for allowing multiple state government agencies to share documents more effectively.

Advanced Case Management Empowering The Business Analyst

We’re still a couple of hours away from the official announcement about the release of IBM Case Manager, and I’m at a session on how business analysts will work with Case Manager to build solutions based on templates.

Like the other ACM sessions, this one starts with an overview of IBM’s case management vision as well as the components that make up the Case Manager product: ECM underlying it all, with Lotus Sametime for real-time presence and chat, ILOG JRules for business rules, Cognos Real Time Monitor for dashboards, IBM Content Analytics for unstructured content analysis, IBM (Lotus) Mashup Center for user interface and some new case management task and workflow functionality that uses P8 BPM under the covers. Outside the core of Case Manager, WebSphere Process Server can be invoked for integration/SOA applications, although it appears that this is done by calling it from P8 BPM, which was existing functionality. On top of this, there are pre-built solutions and solution templates, as well as a vast array of services from IBM GBS and partners.

IBM Case Management Vision

The focus in this session is on the tools for the business analyst in the design-time environment, either based on a template or from scratch, including the user interface creation in the Mashup Center environment, analytics for both real-time and historical views of cases, and business rules. This allows a business analyst to capture requirements from the business users and create a working prototype that will form the shell of the final case application, if not the full executing application. The Case Builder environment that a business analyst works in to design case solutions also allows for testing and deploying the solution, although in most cases you won’t have your BAs deploying directly to a production environment.

Defining a case solution starts with the top-level case solution creation, including name, description and properties, then completing the following:

  • Define case types
  • Specify roles
    • Define role inbasket
  • Define personal inbasket
  • Define document types
  • Associate runtime UI pages

We didn’t see the ILOG JRules integration, and for good reason: in the Q&A, they admitted that this first version of Case Manager didn’t quite have that up to scratch, so I imagine that you have to work in both design environments, then call JRules from a BPM step or something of that nature.

The more that I see of Case Manager, the more I see the case management functionality that was starting to migrate into the FileNet ECM/BPM product from the Business Process Framework (BPF); I predicted that BPF would become part of the core product when I reviewed P8 BPM v4.5 a year and a half ago, and while this is being released as a separate product rather than part of the core ECM product, BPF is definitely being pushed to the side and IBM won’t be encouraging the creation of any new applications based on BPF. There’s no direct migration path from BPF to ACM; BPF technology is a bit old, and the time has come for it to be abandoned in favor of a more modern architecture, even if some of the functionality is replicated in the new system.

The step editor used to define the tasks associated with cases provides swimlanes for roles or workgroups (for underlying queue assignment, I assume), then allows the designer to add steps into the lanes and assign properties to the steps. The step properties are a simplified version of a step definition in P8 BPM, so I assume that this is actually a shared model (as opposed to export/import) that can be opened directly by the more technical BPM Process Designer. In P8 BPM 4.5, they introduced a “diagram mode” for business analysts in the Process Designer; this appears to be an even simpler process diagramming environment. It’s not BPMN compliant, which I think is a huge mistake: since it’s a workflow-style model with lanes, activities and split/merge support, this would have been a great opportunity to use the standard BPMN shapes to start getting BAs used to them.

I still have my notes from last week’s analyst briefing and from yesterday’s meeting with Ken Bisconti, which I will publish; these are more aligned with the “official” announcement that will be coming out today in conjunction with the press release.

IBM’s New Case Manager Product Overview

The day before the official announcement of IBM’s Case Manager product, Jake Levirne, Senior Product Manager, walked us through the capabilities. He started by defining case management, and discussing how it is about providing context to enable better outcomes rather than prescribing the exact method for achieving that outcome. For those of you who have been following ACM for a while, this wasn’t anything new, although I’m imagining that it is for some of the audience here at IOD.

Case Manager is an extension of the core (FileNet) ECM product through the integration of functionality from several other software products across multiple IBM software groups, specifically analytics, rules and collaboration. There is a new design tool targeted at business analysts, and a user interface environment that is the next generation of the old ECM widgets. There’s a new case object model in the repository, allowing the case construct to exist purely in the content repository and be managed using the full range of content management capabilities, including records management. Case tasks can be triggered by a number of different event types: user actions, new content, or updates to the case metadata. By having tasks as objects within the case, each task can then correspond to a structured subprocess in FileNet BPM, or just be part of a checklist of actions to be completed by the case worker (further discussion left it unclear whether even the simple checklist tasks were implemented as a single-step BPM workflow). A task can also call a WebSphere Process Server task; in fact, from what I recall of how the Content Manager objects work, you can call pretty much anything if you want to write a Java wrapper around it, or possibly this is done by triggering a BPM process that in turn calls a web service. The case context – a collection of all related metadata, tasks, content, comments, participants and other information associated with the case – is available to any case worker, giving them a complete view of the history and the current state of the case. Some collaboration features are built into the runtime, including presence and synchronous chat, as well as simple asynchronous commenting; these collaborations are captured as part of the case context.
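To make the event-triggered task pattern concrete, here’s a minimal sketch of how a case object with tasks triggered by new content or metadata updates might behave. This is purely illustrative – the class and field names are my own invention, not IBM’s actual object model or API – and it treats tasks as plain objects rather than as the BPM processes that may sit behind them in the real product:

```python
from dataclasses import dataclass, field
from enum import Enum

class TriggerType(Enum):
    # The three trigger event types mentioned in the session
    USER_ACTION = "user_action"
    NEW_CONTENT = "new_content"
    METADATA_UPDATE = "metadata_update"

@dataclass
class Task:
    name: str
    trigger: TriggerType
    started: bool = False

@dataclass
class Case:
    case_type: str
    metadata: dict = field(default_factory=dict)
    documents: list = field(default_factory=list)
    tasks: list = field(default_factory=list)
    history: list = field(default_factory=list)  # part of the "case context"

    def _fire(self, event: TriggerType) -> None:
        # Start any not-yet-started task whose trigger matches this event
        for task in self.tasks:
            if task.trigger is event and not task.started:
                task.started = True
                self.history.append(f"task started: {task.name}")

    def add_document(self, doc_name: str) -> None:
        self.documents.append(doc_name)
        self.history.append(f"document added: {doc_name}")
        self._fire(TriggerType.NEW_CONTENT)

    def update_metadata(self, key: str, value) -> None:
        self.metadata[key] = value
        self.history.append(f"metadata updated: {key}={value!r}")
        self._fire(TriggerType.METADATA_UPDATE)
```

For example, adding a document to a case with a `NEW_CONTENT`-triggered task would start that task and record both events in the case history, which is the kind of complete audit trail the case context is meant to provide.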

As you would expect, cases are dynamic and allow case workers to add new tasks for the case at any time. Business rules, although they may not even be visible to the end user, can be defined during design time in order to set properties and trigger events in the case. Rules can be changed at runtime, although we didn’t see an example of how that would be done or why it might be necessary.

There are two perspectives in the Case Manager Builder design environment: a simplified view for business analysts to define the high-level view of the case, and a more detailed view for technologists to build in more complex integrations and decision logic. This environment allows for either start-from-scratch or template-based case solution definitions, and is targeted at the business analyst with a wizard-based interface. Creating a case solution includes defining the following from the business analyst’s view:

  • case properties (metadata)
  • roles that will work on this case, which will be bound to users at runtime
  • case types that can exist within the same case solution
  • document types that can be included in the case or may even trigger the case
  • case data and search views
  • which case views each role will see
  • default folders to be included in the case
  • tasks that can be added to this case, each of which is a process (even if only a one-step process), and any triggering events for the tasks
  • the process behind each of the tasks, which is a simple step editor directly in Case Builder; a system lane in this editor can represent the calling of a web service or a WPS process

All of these can be defined on an ad hoc basis, or stubbed out initially using a wizard interface that walks the business analyst through and prompts for which of these things needs to be included in the case solution. Comments can be added on the objects during design time, such as tasks, allowing for collaboration between designers.

As was made clear in an audience question, the design that a business analyst is doing will actually create object classes in both Content Manager and BPM; this is not a requirements definition that then needs to be coded by a developer. From that standpoint, you’ll need to be sure that you don’t let them do this in your production environment since you may want to have someone ensure that the object definitions aren’t going to cause performance problems (that seemed screamingly obvious to me, but maybe wasn’t to the person asking the question).

From what Levirne said, it sounds as if the simple step editor view of the task process can then be opened in the BPM Process Designer by someone more technical to add other information, implying that every task does have a BPM process behind it. It’s not clear if this is an import/export to Process Designer, or just two perspectives on the same model, or if a task always generates a BPM process or if it can exist without one, e.g., as a simple checklist item. There were a lot of questions during the session and he didn’t have time to take them all, but I’m hoping for a more in-depth demo/briefing in the weeks to come.

Case analytics, including both dashboards (Cognos BAM) and reports (Excel and Cognos BI reports) based on case metadata, and more complex analytics based on the actual content (Content Analytics), are provided to allow you to review operational performance and determine root causes of inefficiencies. From a licensing standpoint, you would need a Cognos BI license to use that for reporting, and a limited-license Content Analytics version is included out of the box that can only be used for analyzing cases, not all your content. He didn’t cover much about the analytics in this session; it was primarily focused on the design time and runtime of the case management itself.

The end-user experience for Case Manager is in the IBM Mashup Center, a mashup/widget environment that allows the inclusion of both IBM’s widgets and any others that support the iWidget standard and expose their properties via REST APIs. IBM has had the FileNet ECM widgets available for a while to provide some standard ECM and BPM capabilities; the new version provides much more functionality to include more of the case context, including metadata and tasks. A standard case widget provides access to the summary, documents, activities and history views of the case, and can link to a case data widget, a document viewer widget for any given document related to the case, and e-forms for creating more complex user interfaces for presenting and entering data as part of the case.

Someone I know who has worked with FileNet for years commented that Case Manager looks a lot like the integrated demos that they’ve been building for a couple of years now; although there’s some new functionality here and the whole thing is presented as a neat package, it’s likely that you could have done most of this on your own already if you were proficient with FileNet ECM and some of the other products involved.

We also heard from Brian Benoit of Pyramid Solutions, a long-time FileNet partner who has been an early adopter of Case Manager and responsible for building some of the early templates that will be available when the product is released. He demonstrated a financial account management template, including account opening, account maintenance, financial transaction requests and correspondence handling. In spite of IBM’s claim that there is no migration path from Business Process Framework (BPF), there is a very BPF-like nature to this application; clearly, the case management experience that they gained from BPF usage has shaped the creation of Case Manager, or possibly Pyramid was so familiar with BPF that they built something similar to what they knew already. Benoit said that the same functionality could be built out of the box with Case Manager, but that what they have provided is an accelerator for this sort of application.

Levirne assured me that everything in his presentation could be published immediately; I’ve also had analyst briefings on Case Manager that are under embargo until the official announcement tomorrow, so I’ll fill in any of the missing details then.