SAP Analytics Update

A group of bloggers had an update today from Steve Lucas, GM of the SAP business analytics group, covering what happened in 2010 and some outlook and strategy for 2011.

No surprise, they saw an explosion in growth in 2010: analytics has been identified as a key competitive differentiator for a couple of years now due to the huge growth in the amount of information and events being generated by every business; every organization is at least looking at business analytics, if not actually implementing it. SAP has approached analytics across several categories: analytic applications, performance management, business intelligence, information management, data warehousing, and governance/risk/compliance. In other words, it’s not just about the pretty data visualizations, but about all the data gathering, integration, cleanup, validation and storage that needs to go along with it. They’ve also released an analytics appliance, HANA, for sub-second data analysis and visualization on a massive scale. Add it all up, and you’ve got the right data, instantly available.

SAP Analytics products

New features in the recent product releases include an event processing/management component that allows for real-time event insight into high-volume transactional systems: seems like a perfect match for monitoring events from, for example, an SAP ERP system. There has also been some deep integration with their ERP suite using the Business Intelligence Consumer Services (BICS) connector, although all of the new functionality in their analytics suite is also relevant to Business Objects customers who are not SAP ERP customers; interestingly, he refers to customers who have an SAP analytics product but not their ERP suite as “non-SAP customers” – some things never change.
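Just to make the idea concrete, here’s a toy sketch of the sort of sliding-window check that real-time event monitoring does under the covers; this is entirely my own illustration of the pattern, nothing to do with SAP’s actual implementation:

```python
from collections import deque

class SlidingWindowMonitor:
    """Tracks event timestamps in a fixed time window and flags rate spikes.
    A toy illustration of real-time event insight, not a vendor's code."""

    def __init__(self, window_seconds, threshold):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()  # timestamps, oldest first

    def record(self, timestamp):
        """Add an event; return True if the rate now exceeds the threshold."""
        self.events.append(timestamp)
        # Drop events that have fallen out of the window
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        return len(self.events) > self.threshold

# More than three events in any 60-second window raises an alert
monitor = SlidingWindowMonitor(window_seconds=60, threshold=3)
alerts = [monitor.record(t) for t in [0, 10, 20, 30, 90]]
# The fourth event (t=30) trips the threshold; by t=90 the window has emptied
```

The real systems obviously do far more (correlation, pattern matching, persistence), but the core idea of evaluating a stream against a moving window is the same.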

In a move that will be cheered by every SAP analytics user, they’ve finally standardized the user interface so that all of their analytics products share a common (or similar, it wasn’t clear) user experience – this is a bit of catch-up on their part, since they’ve brought together a number of different analytics acquisitions to form their analytics suites.

They’ve been addressing the mobile market as well as the desktop market, and are committing to all mainstream mobile platforms, including RIM’s Playbook. They’re developing their own apps, which will hurt partners such as Roambi who have made use of the open APIs to build apps that access SAP analytics data; there will be more information about the SAP apps in some product announcements coming up on the 23rd. Mobile information consumption is good, and possibly sufficient for some users, but I still think that most people need the ability to take action on the analytics, not just view them. That tied into a question about social BI; Lucas responded that there would be more announcements on the 23rd, but also pointed us towards their StreamWork product, which provides more of the sort of event streaming and collaboration environment that I wrote about earlier in Appian’s Tempo. In other words, maybe the main app on a mobile device will be StreamWork, so that actions and collaboration can be done, rather than the analytics apps directly. It will be interesting to see how well they integrate analytics with StreamWork so that a user doesn’t have to hop around from app to app in order to view and take action on information.

Appian Tempo

I had a chance for an advance briefing on Appian’s Tempo release last week; this is a new part of the Appian product suite that focuses on the mobility, cloud and social aspects of BPM for social collaboration. This isn’t a standalone social collaboration platform, but includes deep links into the Appian BPM platform through events, alerts, tasks and more. They’ve included Twitter-like status updates and RSS feeds so that you can publish and consume the information in a variety of other forms, offering a fresh alternative to the usual sort of process monitoring that we see in a BPMS. The free app for the iPhone and iPad requires an account on Appian Forum (the Appian user community site) or access to an Appian BPM installation (not sure if this covers both the on-premise system and the cloud-based offering) in order to do anything, so I wasn’t really able to check it out, but saw it on an emulator in the demo.

Appian Tempo in a browser

Their goal for Tempo is to provide a zero-training interface that allows people to track and participate in processes either from a browser or from a mobile device. You can think of it as a new user interface for their BPM and information management systems: in some cases to provide an alternative view to the portal for occasional/less-skilled users, and in some cases as the only interface for more collaborative or monitoring applications. It doesn’t replace the information- and feature-rich portal interface used by heads-down workers, but provides a simpler view for executives or mobile workers to interact with processes. Users can interact with tasks that are assigned to them in a structured process within BPM, such as approving a purchase request, but can also create a new case from any event, whether that original event was related to BPM or not. For example, I had a view of the internal Appian instance of Tempo (I’ve redacted everything from this screenshot except this event, since some of the other events included internal information) where a “Marketing” stream included RSS feeds from a number of BPM news and blog sources. Swiping on any given event in the iPhone/iPad app – say, an external blog post – allowed a new case to be opened that linked to that external event. At this point, the case opening functionality is pretty rudimentary, allowing only a single process type to be created, which must then be manually reassigned to a specific person or subprocess; however, you can see the future value of this once the case/process type can be selected.

Appian Tempo in browser

As this scenario highlights, Tempo can include process and information from a variety of other sources, internal and external, that may have nothing to do with Appian BPM, in addition to providing visibility into core business processes. Anything with an RSS feed can be added; I saw Salesforce.com notifications, although not sure if they were just from an RSS feed or if there is some sort of more direct integration. Considering the wide adoption of RSS as a publication method for events, this is likely not an issue, but there are also some more direct system connections: an SAP event appearing in Tempo can be expanded to retrieve data directly from the corresponding SAP item, such as invoice details. This turns Tempo into a sort of generalized business dashboard for monitoring tasks and events from many different business sources: collaboration within a business information context.
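To show how low the bar for RSS-based integration really is, here’s a rough sketch of turning feed items into generic stream events, using only Python’s standard library; the event shape here is my own invention, not Appian’s:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document, standing in for an external feed
RSS_SAMPLE = """<rss version="2.0"><channel>
  <title>Marketing</title>
  <item><title>BPM trends for 2011</title><link>http://example.com/1</link></item>
  <item><title>New BPMS release</title><link>http://example.com/2</link></item>
</channel></rss>"""

def feed_to_events(rss_xml):
    """Parse an RSS 2.0 document into generic event-stream entries."""
    channel = ET.fromstring(rss_xml).find("channel")
    source = channel.findtext("title")  # the feed title becomes the stream name
    return [
        {"source": source,
         "title": item.findtext("title"),
         "link": item.findtext("link")}
        for item in channel.findall("item")
    ]

events = feed_to_events(RSS_SAMPLE)
```

Once external items are normalized into the same shape as internal process events, anything downstream – commenting, opening a case, filtering – works uniformly, which is exactly why RSS makes such a cheap integration point.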

The browser interface will be familiar if you’ve ever used Facebook: a big panel in the center for events, event filters in a panel on the left, and event actions in a panel on the right. Users can subscribe to specific event types, which automatically creates filters, or can explicitly filter by logical business groupings such as departments. Individual events can be “starred” for easy retrieval, as you would with messages in Gmail. The user’s BPM inbox is exposed in the filter panel as “My Tasks”, so that their interaction with structured business processes is seen in the same context as the other information and events with which they interact. The action panel on the right allows the user to initiate new processes, depending on their role; this is more comprehensive than the “Open a case” functionality that we saw on the iPad: a full BPM process instantiation based on a user input form, such as creating a new IT change request. The actions available to a user are based on their role and permissions.
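The subscribe/filter/star model is simple enough to sketch; this toy version is purely my own illustration of the interface pattern, not Appian’s code:

```python
class EventStream:
    """Toy model of subscription-driven filtering and starring,
    illustrating the interface pattern described in the post."""

    def __init__(self):
        self.events = []            # (event_id, event_type, text)
        self.subscriptions = set()  # event types the user follows
        self.starred = set()        # event ids flagged for easy retrieval

    def publish(self, event_id, event_type, text):
        self.events.append((event_id, event_type, text))

    def subscribe(self, event_type):
        """Subscribing to a type implicitly creates a filter for it."""
        self.subscriptions.add(event_type)

    def star(self, event_id):
        self.starred.add(event_id)

    def filtered(self):
        """Events matching the user's subscriptions."""
        return [e for e in self.events if e[1] in self.subscriptions]

stream = EventStream()
stream.publish(1, "task", "Approve purchase request")
stream.publish(2, "news", "External blog post")
stream.subscribe("task")   # auto-creates the "task" filter
stream.star(2)             # flag the news item for later
```

The point of the pattern is that filters fall out of subscriptions automatically, rather than being a separate configuration chore – one reason the interface can claim to be zero-training.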

Appian Tempo iPhone app

Access to certain event classes can be restricted based on user and role permissions, but a user can comment on any event that they can see in their event stream. This form of collaboration is very similar to the Facebook model: you comment on an item that someone else posts, then are notified when anyone else adds a comment to the same event.

There’s been some nice optimization for the iPhone and iPad apps, such as one-click approvals without having to open the item, and rendering of Appian forms natively in the app. Although I’ve seen many iPad demos in the past year – it seems impossible to visit a vendor or go to a conference without seeing at least one – this offers significant value because of the deep integration to business processes and information. It’s easy to envision a mobile worker, for example, using the app to update information while at their client site, rather than filling out paper forms that need to be transcribed later. The app can directly access documents from the Appian content management system, or link to anything in a browser via a URL. It also allows for multiple user logins from the same device, which makes it good for testing but also useful in cases where a mobile device might be passed from worker to worker, such as for healthcare workers where a single device would support rotating shifts of users.

This certainly isn’t the first mobile app for BPM – you can see a few more listed at David Moser’s blog post on process apps – and the expected demand for mobile BPM will continue to drive more into this marketplace. This is, however, a very competent offering by a mainstream BPM vendor, which helps to validate the mobile BPM market in general.

This also isn’t the first BPM vendor to come out with a social media-style collaborative event stream interface (for lack of a better term), but this is a good indication of what we can expect to see as standard BPM functionality in the future.

Appian Tempo 2011

BPM and Application Composition Webinar This Week

I’m presenting a webinar tomorrow together with Sanjay Shah of Skelta – makers of one of the few Microsoft-centric BPM suites available – on Tuesday at noon Eastern time. The topic is BPM and application composition, an area that I’ve been following closely since I asked the question five years ago: who in the BPM space will jump on the enterprise mashup bandwagon first? Since then, I’ve attended some of the first Mashup Camps (1, 2 and 4) and watched the emerging space of composite applications collide with the world of BPM and SOA, to the point where both Gartner and Forrester consider this important, if not core, functionality in a BPM suite.

I’ll be talking about the current state of composite application development/assembly as it exists in BPM environments, the benefits you can expect, and where I see it going. You can register to attend the webinar here; there will be a white paper published following the webinar.

Reprise of the Four Myths of BPM Projects

Back in June of 2009, I gave a webinar with Active Endpoints called “IT-Business Collaboration on BPM” that included some myths about BPM projects, particularly the level of involvement that can be expected from business users during the design cycle. Don’t get me wrong – there are a lot of great process discovery tools out there, and many cases where business people (really business analysts rather than end users) can design their own processes, but I’m typically involved in the sort of heavy-lifting complex business processes that just aren’t, in practice, designed by non-technical business people, and vendors aren’t really helping by insisting that IT just doesn’t have to be involved in any sort of BPM project. This webinar, dubbed the “Four Myths” webinar, became one of the most popular ones that I did with Active Endpoints.

We’re updating and re-presenting this webinar tomorrow, covering the myths and some practical solutions, plus the usual live Q&A. You can sign up for tomorrow’s webinar here, or catch the replay (no registration required) on their VOSibilities blog or their iTunes podcast channel. There will also be a white paper that summarizes the topic, although I don’t think that Active Endpoints has it online yet.

HandySoft Process Intelligence and User Experience

Wow, has it really been a month since I last blogged? A couple of weeks vacation, general year-end busyness and a few non-work side projects have kept me quiet, but it’s time to get back at it. I have a few partially-finished product briefings sitting around, and thought it best to get them out before the vendors come out with their next versions and completely obsolesce these posts. 🙂

I had a chat with Garth Knudson of HandySoft in late November about the latest version of their BizFlow product, specifically around the new reporting capabilities and their WebMaker RIA development environment. Although these don’t show off the core BPM capabilities in their product suite (which I reviewed in late 2009), these are two well-integrated tools that allow for easy building of reports and applications within a BizFlow BPM environment. I always enjoy talking with Garth because he says good things about his competitors’ products, which means that not only does he have good manners, but he takes enough care to learn something about the competition rather than just tarring them all with the same brush.

We first looked at their user-driven reporting – available from the My AdHoc Reports option on the BizFlow menu – which is driven by OEM versions of the Jaspersoft open source BI server components; by next year, they’ll have the entire Jaspersoft suite integrated for more complete process analytics capabilities. Although you can already monitor the current processes from the core BizFlow capability, the ad hoc reporting add-on allows users (or more likely, business analysts) to define their own reports, which can then be run on demand or on a schedule.

HandySoft BizFlow Advanced Reporting - select data domain

If you’ve seen Jaspersoft (or most other ad hoc reporting tools) at work, there isn’t much new here: you can select the data domain from the list of data marts set up by an administrator, then select the type of report/graph, the fields, filtering criteria and layout. It’s a bit too techie for the average user to actually create a new report definition, since it provides a little too much close contact with the database, such as displaying the actual SQL field names instead of aliases, but once the definition is created, it’s easy enough to run from the BizFlow interface. Regular report runs can be scheduled to output to a specific folder in a specific format (PDF, Excel, etc.), based on the underlying Jaspersoft functionality.

The key integration points with BizFlow BPM, then, are the ability of an administrator to include process instance data in the data marts as well as any other corporate data, allowing for composite reporting across sources; and access to the report definitions in the My AdHoc Reports tab.

The second part of the demo was on their WebMaker application development environment. Most BPM suites these days have some sort of RIA development tool, allowing you to build user forms, screens, portals and dashboards without using a third-party tool. This is driven in part by the former lack of good tools for doing this, and in part by the major analyst reports that state that a BPMS has to have some sort of application development built into it. Personally, I’m torn on that: most BPMS vendors are not necessarily experts at creating application development tools, and making the BPMS capabilities available for consumption by more generic application development environments through standard component wrappers fits better with the best-of-breed approach that I tend to favor. However, many organizations that buy a BPMS don’t have modern application development tools at all, so the inclusion of at least an adequate one is usually a help.

HandySoft BizFlow WebMaker - specify field visibility

HandySoft’s WebMaker is loosely coupled with BizFlow, so it can be used for any web application development, not just BPM-related applications. It integrates natively with BizFlow, but can also connect with any web service or JDBC-compliant database (as you would expect), and uses the Model-View-Controller (MVC) paradigm. For a process-based application, you define the process map first, then create a new WebMaker project, define a page (form), and connect the page to the process definition. Once that’s done, you can drag the process variables directly onto the form to create the user interface objects. There’s a full array of on-form objects available, including AJAX partial pages, maps, charts, etc., as well as the usual data entry fields, drop-downs and buttons. Since the process parameters are all available to the form, the form can change its appearance and behavior depending on the process variables: for example, a partial page group can be enabled or disabled based on the specific step in the process or the value of the process instance variables at that step. This allows a single form to be used for multiple steps in a process that require a similar but not identical look and feel, such as a data entry screen and a QA screen; alternatively, multiple forms can be defined and assigned to different steps in the same process.
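To make the step-driven form behavior concrete, here’s a minimal sketch of the decision logic; the step names, field groups and rules are invented for illustration, since WebMaker expresses this through its own form model rather than code like this:

```python
def field_states(step, variables):
    """Decide which form field groups are visible at a given process step.
    Hypothetical steps and fields, purely to illustrate the pattern of a
    single form serving multiple similar-but-not-identical process steps."""
    states = {"entry_fields": False, "qa_fields": False, "escalation_note": False}
    if step == "data_entry":
        states["entry_fields"] = True
    elif step == "qa_review":
        states["qa_fields"] = True
        # Expose an extra group only when an instance variable demands it
        if variables.get("priority") == "high":
            states["escalation_note"] = True
    return states

entry_view = field_states("data_entry", {"priority": "low"})
qa_view = field_states("qa_review", {"priority": "high"})
```

One form plus a set of visibility rules is usually less maintenance than one form per step, which is the trade-off being described here.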

To be clear, WebMaker is not a tool for non-technical people: although a trained business analyst could probably get through the initial screen designs, there is far too much technical detail exposed if you want to do anything except very vanilla static forms; the fact that you can easily expose the MVC execution stack is a clue that this is really a developer tool. It is, however, well-integrated with BizFlow BPM, allowing the process instance variables to be used in WebMaker, and the WebMaker forms to be assigned to each activity using the Process Studio.

HandySoft is one of the small players in the BPMS market, and has focused on ad hoc and dynamic processes from the start. Now that all of the BPMS vendors have jumped into the dynamic BPM fray, it will be interesting to see if these new BizFlow tools round out their suite sufficiently to compete with the bigger players.

RAVEN Cloud General Release: Generate Process Maps From Natural Language Text

Back in May, I wrote about a cool new cloud-based service called RAVEN Cloud, which translated natural language text into process maps. As I wrote then:

You start out either with one of the standard text examples or by entering your own text to describe the process; you can use some basic text formatting to help clarify, such as lists, indenting and fonts. Then, you click the big red button, wait a few seconds, and voilà: you have a process map. Seriously.

They’re releasing RAVEN Cloud for general availability today (the beta sticker is still on the site as of the time of this writing), and I had an update demo with Dave Ruiz a couple of days ago. There are two major updates: UI enhancements, particularly the Business Process Explorer for process organization and categorization, and exporting to something other than JPEG.

RAVEN Cloud - Context menu in Business Process Explorer, and process attributes pane

The Business Process Explorer, in the left sidebar, looks like a set of folders containing processes, although the “folders” are actually categories/tags, like in Google Docs: a process can be in more than one of these folders simultaneously if it relates to multiple categories, and the categories become metadata on the processes “contained” within them. This becomes more obvious when you look at the attributes for a process, where the Process Category drop-down list allows multiple selections. There is context menu support in the explorer to take actions on a selected process (open, rename, delete, move, save as), and the Process Explorer can be collapsed to provide more screen real estate for the process itself.
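The tag-style categories are easy to model; a quick sketch of the pattern (my own illustration with made-up process names, not RAVEN Cloud’s data model):

```python
class ProcessExplorer:
    """Categories as tags: a process may appear under several 'folders'
    simultaneously, and the tags double as metadata on the process."""

    def __init__(self):
        self.categories = {}  # process name -> set of category tags

    def categorize(self, process, *tags):
        self.categories.setdefault(process, set()).update(tags)

    def folder(self, tag):
        """Every process 'contained' in a given category folder."""
        return sorted(p for p, tags in self.categories.items() if tag in tags)

explorer = ProcessExplorer()
# One process filed under two categories at once
explorer.categorize("Claims intake", "Insurance", "Customer Service")
explorer.categorize("Policy renewal", "Insurance")
```

Unlike real folders, no copy or move is needed for a process to show up in several places – which is exactly the Google Docs behavior being compared to.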

The Process Explorer contains a few standard categories, including process examples and tutorials; there is a separate administration panel for managing the process examples, which can then be used by any user as templates for creating a new process. The tutorials highlight topics such as writing nested conditionals, and can be used in conjunction with the writing guide and their YouTube videos. I liked the one on correcting errors; I saw a bit of this in the demo when Dave accidentally misspelled a role name, resulting in an unwanted break in the flow, and didn’t specify the “else” clause of an “if” statement, resulting in an incomplete conditional.

Another feature that I saw in this version, which also brings them closer to BPMN compliance, is the inclusion of explicit start and end nodes in a process model. There can be multiple end nodes, but not multiple start nodes.

In addition to exporting as a JPEG image – useful for documentation but not for importing to another tool for analysis or execution – RAVEN Cloud now supports export to Visio or a choice of three XML formats: XMI 2.1, XPDL 2.0 and XPDL 2.1. The process model imported to Visio looked great, and the metadata at the process and shape level were preserved. Importing the XPDL into the BizAgi Process Modeler didn’t create quite as pretty a result: the process model was topologically correct, but the formatting needed some manual cleanup. In either case, this demonstrates the ability to have a business analyst without process modeling skills create a first version of a model, which can then be imported into another tool for further analysis and/or execution.
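For a sense of what an XPDL-style export looks like structurally, here’s a heavily simplified sketch; the element names follow the XPDL vocabulary loosely, but a real export must conform to the full XPDL 2.x schema, namespaces and graphics extensions:

```python
import xml.etree.ElementTree as ET

def to_xpdl_like(process_name, activities, transitions):
    """Serialize a trivial process model as XPDL-flavoured XML.
    A sketch only: real XPDL 2.x has packages, pools, lanes, coordinates
    and namespaces that are omitted here."""
    pkg = ET.Element("Package")
    proc = ET.SubElement(pkg, "WorkflowProcess", {"Name": process_name})
    acts = ET.SubElement(proc, "Activities")
    for act_id, label in activities:
        ET.SubElement(acts, "Activity", {"Id": act_id, "Name": label})
    trans = ET.SubElement(proc, "Transitions")
    for src, dst in transitions:
        ET.SubElement(trans, "Transition", {"From": src, "To": dst})
    return ET.tostring(pkg, encoding="unicode")

xml_out = to_xpdl_like("Claim review",
                       [("a1", "Receive claim"), ("a2", "Assess claim")],
                       [("a1", "a2")])
```

The split between activities and transitions is why a BizAgi import can be topologically correct while the layout still needs cleanup: the XML carries the graph, but layout hints are a separate (and optional) part of the format.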

RAVEN Cloud - Error correction #4: role name fixed, process map regenerated

This still creates only simple process models: it supports unadorned activities, simple start and end events, sequence flows, OR gateways and swimlanes. It also isn’t BPMN compliant, although it’s close. They’re planning to add link events (off-page connectors) and AND gateways, although it’s not clear what natural language constructs would support those, and they may have to use keywords instead, which weakens the natural language argument.

There will still be a free version, which does not support user-created categories or Visio/XPDL exports; the paid version will be available by subscription for $25/user/month, with volume discounts plus a 10% discount for an annual versus monthly subscription. An account can be either single-user or multi-user; by default, all models created within an account are visible for read-only access to all other users in that account, although access can be restricted further if required. A future version will include process model versioning and more access control options, since you can’t really have multi-user editing of a single process model unless you’re keeping some past versions. I think there’s also an opportunity for hybrid pricing similar to Blueworks Live, where a lower-cost user could have read-only permissions on models that were created by others, possibly with some commenting capabilities for feedback. It’s all self-provisioned: you just set up your account, enter your credit card details if you’re going for the paid version, and add users by their name and email address; they’ll receive an email invitation to create their account login and profile. I didn’t ask if one RAVEN Cloud login/profile can be shared across multiple accounts; that would be interesting for people like me, who work with multiple organizations on their process models. I’ve seen something like this in FreshBooks, an online time tracking and invoicing application, where a single login (authentication) can have access to multiple accounts (authorization) so that FreshBooks customers can easily interact.
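The pricing arithmetic is simple enough to spell out; this assumes the quoted $25/user/month and the 10% annual discount, and ignores the volume discounts since they weren’t specified:

```python
def annual_cost(users, monthly_rate=25.0, annual_discount=0.10):
    """Yearly subscription cost at $25/user/month, with the 10% discount
    for paying annually. Volume discounts are mentioned but unspecified,
    so they're left out of this sketch."""
    monthly_total = users * monthly_rate
    return monthly_total * 12 * (1 - annual_discount)

five_user_annual = annual_cost(5)  # 5 * 25 * 12 * 0.9 → 1350.0
```

So a five-user team paying annually comes in at $1,350/year versus $1,500 paying monthly – a modest but real incentive toward the annual commitment.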

They’re also working on hosting RAVEN Cloud in a private cloud environment, so keep watching for that.

My verdict: still cool, but they need to re-work their subscription model a bit, and bring their notation in line with BPMN. They also have some challenges ahead in defining the language for new element types, but I’m sure that they’re up to it.

BPM Meets Goldilocks: Picking a First Process That’s Just Right

The key to picking the right process for your first BPM implementation is a bit like Goldilocks: you don’t want one too big, or too small, but just right. We’ve covered this topic in this week’s article in the series that I’m writing with Global 360’s Steve Russell, published over on bpm.com.

This is the first of a six-part series, with an introductory post published a couple of weeks ago to talk about the entire series. Coming up next: gaining business buy-in for project success.

Process Knowledge Initiative Technical Team

When I got involved in the Process Knowledge Initiative to help create an open-source body of knowledge, I knew that the first part, with all the forming of committees and methodology and such, would be a bit excruciating for me. I was not wrong. However, it has been thankfully short due to the contributions of many people with more competence and patience in that area than I, and I’m pleased to announce that we’ve put together an initial structure and will soon be starting on the fun stuff (in my opinion): working with the global BPM community to create the actual BoK content.

From our announcement earlier this week:

The month of November was a busy one for the Process Knowledge Initiative. In execution of our startup activities, we defined the PKBoK governance process and technical team structure, recruited our first round of technical experts, and secured preliminary funding via our Catalyst Program.

On the PKBoK development side, the team is actively researching and defining the candidate knowledge (or focus) areas in preparation for a January community review release.

With the knowledge area release, the development of the PKBoK becomes a full community activity, from content contributions, working group collaboration, and public commentary to content improvement and annotation.

It’s impossible to do something like this without some sort of infrastructure to get things kicked off, although we expect most of the actual content to be created by the community, not a committee. To that end, we’ve put forward an initial team structure as follows:

  • Technical Integration Team is responsible for establishing the PKBoK blueprint (scope, knowledge areas, ontology, content templates), recruiting working group leaders, and coordinating content development, publication and review.
  • Methodology Advisory Board provides guidance and support on PKBoK development and review processes. The Methodology Advisory Board does not participate in content creation or review; rather it provides rigor to ensure the final content represents the community perspective.
  • Technical Advisory Board provides expert input to, and review of, deliverables from the Technical Integration Team and Working Groups. Technical Advisors may lead, or contribute content to working groups within their area of specialization.
  • Working Groups develop PKBoK content for a particular knowledge area, task or set of tasks. Working groups will form via public calls for participation. The first call is planned for April 2011.
  • BPM Community reviews, contributes to, and consumes the PKBoK. All BPM community members are welcome to participate in the development of the PKBoK or utilize the delivered content in their individual BPM practices.

You can see the people who are participating in the first three of these in the announcement – including academia, industry analysts, standards associations, vendors and end-user organizations – and we’re looking for more people to join these groups as we move along.

Most of the content creation will be done by the working groups and the global BPM community; the other groups are there to provide support and guidance as required. We’ll soon be putting forward the proposed knowledge areas for discussion, which will kick off the process of forming the working groups and creating content.

I’m also starting to look at wiki platforms that we can use for this, since this really needs to be an inclusive community effort that embraces multiple points of view, not a moderated walled garden. This open model for content creation, as well as a liberal Creative Commons license for distribution, is intended to gain maximum participation both from contributors and consumers of the BoK.

IBM Case Manager In Depth

I had a chance to see IBM’s new Case Manager product at IOD last month, and last week Jake Levirne, the product manager, gave me a more complete demo. If you haven’t read my earlier product overview from IOD as well as the pre-IOD briefing on Case Manager and related products, the business analyst view, a quick bit on customizing the UI and the technical roundtable, you may want to do so now since I’ll try not to repeat too much of what’s there already.

Runtime

IBM Case Manager Runtime - CSR role view in portal

We started by going through the end-user view of an application for insurance claims. There’s a role-based portal interface, and this user role (CSR) sees a list of cases, can search for a case based on any of the properties, or add a new case – fairly standard functionality. In most cases, as we’ll see later, cases are created automatically on the receipt of a specific document type, but there needs to be the flexibility to have users create their own as well. Opening a case, the case detail view shows case data (metadata) and case information, which comprises documents, tasks and history that are contained within the case. There’s also a document viewer, reminding us that case management is content-centric; the entire view is a bit reminiscent of the previous Business Process Framework (BPF) case management add-on, which has definitely contributed to Case Manager in a philosophical sense if not any of the actual underlying technology.

For those FileNet geeks in the crowd, a case is now a native content type in the FileNet content repository, rather than a custom object type as was used in the BPF; logically, you can think of this as a case folder that contains everything related to the case. The Documents tab is pretty straightforward – a list of documents attached to the case – and the History tab shows a list of events on the case, including documents being added and tasks started/completed. The interesting part, as you might have guessed, is in the Tasks tab, which shows the tasks (small structured processes, in reality) assigned to this case, either as required or optional tasks. Tasks can be added to a case at design time or runtime; when added at runtime, these are predefined processes, although there may be customizable parameters that the user can modify, but the end user can’t change the definition of a task. This gives some flexibility to the user – they can choose whether or not to execute the optional tasks, they can execute tasks in any order, and they can add new tasks to a case – but doesn’t allow the user to create new tasks: they are always selecting from a predefined list of tasks. Depending on the task definition, tasks for their case may end up assigned to them or to someone else, or to a shared queue corresponding to a role. This results in the two lists that we saw back in the first portal view: one is a list of cases based on search criteria, and the other is a list of tasks assigned to this user or a shared queue on which they are working.
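The required/optional task model boils down to choosing from a predefined catalog; here’s a toy sketch of that constraint, with task names invented for illustration:

```python
# Hypothetical predefined task catalog for an insurance claim case type;
# in Case Manager these would be defined at design time, not in code.
CATALOG = {
    "verify_policy": {"label": "Verify policy", "required": True},
    "call_carrier": {"label": "Call other carrier", "required": False},
    "site_inspection": {"label": "Schedule inspection", "required": False},
}

class Case:
    """Tasks come only from the predefined catalog; users choose which
    optional ones to run and in what order, but can't invent new ones."""

    def __init__(self, design_time_tasks):
        self.tasks = list(design_time_tasks)  # attached at design time

    def add_task(self, task_id):
        if task_id not in CATALOG:
            raise ValueError("not a predefined task: " + task_id)
        self.tasks.append(task_id)

    def required_tasks(self):
        return [t for t in self.tasks if CATALOG[t]["required"]]

case = Case(["verify_policy"])
case.add_task("call_carrier")        # fine: picked from the catalog
try:
    case.add_task("improvised_task")  # rejected: not predefined
    rejected = False
except ValueError:
    rejected = True
```

That single constraint – runtime selection from a design-time catalog – is what distinguishes this flavor of case management from fully ad hoc work.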

IBM Case Manager Runtime - case task view

Creating a new case is fairly simple for the user: they click to add a case, and are presented with a list of instructions for filling out the initial case data, such as the date of loss and policy number in our insurance claim example. The data that can be entered using the standard metadata widget is pretty limited, however, and the form isn’t customizable, so often there is an e-form included in the case that is used to capture more information. In this situation, there is a First Notice of Loss e-form that the user fills out to gather the claim data; this e-form is contained as a document in the case, but also synchronizes some of its fields with the case metadata. This ability to combine the capabilities of documents, e-forms and folders has been in FileNet for quite a while, so it’s no surprise that they’re leveraging it here. It is important to note, however, that this e-form would have to be designed in the Lotus forms designer, not in the Case Manager design tools: a reminder that the IBM Case Manager solution is a combination of multiple tools, not a single monolithic system. Whether this is a good or bad thing is a bit of a philosophical discussion: in the case of e-forms, for example, you may want to use the same form in other applications besides Case Manager, so it may make sense that it is defined independently, but it will require additional design skills.

Once the case is created, it will follow any initial process flows that are assigned to it, and can kick off manual tasks. For example, there could be automated activities that update a claims system with the data captured on the FNOL form, and manual tasks created and assigned to a CSR to call the third party’s insurance carrier. The underlying FileNet content engine has a lot of content-centric event handling baked right into it, so the ability to trigger processes or other actions based on content or metadata updates has been there all along, and is being used for any changes to a case or its contents.
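The content-centric event handling described above amounts to a publish/subscribe pattern: handlers fire when a document is added or a property changes. A minimal sketch of that idea, assuming an invented event bus rather than the actual Content Engine subscription API:

```python
# Hypothetical event-subscription sketch (not the FileNet Content Engine
# API): an action fires when a document of a given class is added to a case.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CaseEventBus:
    handlers: dict = field(default_factory=dict)  # event name -> handler list

    def subscribe(self, event: str, handler: Callable):
        self.handlers.setdefault(event, []).append(handler)

    def publish(self, event: str, payload: dict):
        for handler in self.handlers.get(event, []):
            handler(payload)

bus = CaseEventBus()
started = []  # processes launched by events

# e.g. launch the "update claims system" flow when an FNOL form arrives
def on_document_added(payload):
    if payload.get("doc_class") == "FNOL":
        started.append("update_claims_system")

bus.subscribe("document_added", on_document_added)
bus.publish("document_added", {"doc_class": "FNOL", "case_id": "C-123"})
```

In the real product, these subscriptions live on the content repository itself, which is why case-triggered automation comes essentially for free.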

Design Time

We moved over to the Case Manager Builder to look at how designers – business analysts, in IBM’s view – define new case types. At the highest level, you first define a “solution”, which can include multiple case types. Although the example that we went through used one case type per solution, we discussed some situations where you might want to have multiple case types in a single solution: for example, a solution for a customer service desktop, where there was a different case type defined for each type of request. Since case types within a single solution can share user interface designs, document types and properties, this can reduce the amount of design work if you plan ahead a bit.

IBM Case Manager Builder - define solution properties

For each solution, you define the following:

  • Properties (metadata)
  • Roles and the in-baskets (shared work queues) to which they have access
  • Document types
  • In-baskets associated with this solution
  • Case types that make up this solution.

Then, for each case type within a solution, you define the following:

  • The document type that will be used to trigger the creation of a case of this type, if any. Cases can be added manually, as we saw in the runtime example, or can be triggered by other events, but the heavily content-centric focus of Case Manager assumes that you will often want to kick off a case automatically when a certain document type is added to the content repository.
  • The default Add Case page, which is a link to a previously-defined page in the IBM Mashup Center that will be used as the user interface on selecting the Add Case button.
  • The default Case Details page, which is a link to the Mashup Center page for displaying a case.
  • Optionally, overrides for the case details page for each role, which allows different roles to see different views of the case details.
  • Properties for this case type, which can be manually inherited from the solution level or defined just at this level. Solution properties are not automatically inherited by each case type, since it was felt that this would be unnecessarily confusing, but any of the solution properties can be selected for exposure at the case level.
  • The property views (subsets) that are displayed in the case summary, case details and case search views. If more than about a dozen properties are used, then IBM recommends using an e-form instead of the standard views, which are pretty limited in terms of display customization. A view can include a group of properties for visual grouping.
  • Case folders to organize the content within a case.
  • Tasks associated with the case, grouped by required and optional tasks. Unlike the user interfaces, document types and properties, task definitions are not shared across case types within a solution, which means that similar or identical tasks must be redefined for each case type. This is definitely an area that they can improve in the future; if their claim of loosely-coupled cases and processes is to be fully realized, then task/process definitions should be reusable at least across case types within a solution, if not across solutions.
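The relationship between solutions and case types – shared property definitions, with each case type exposing only a chosen subset – can be illustrated with a small data-structure sketch. All names here are invented for illustration; this is not the Case Manager Builder’s actual storage format.

```python
# Hypothetical sketch: a solution holds the master property/role definitions,
# and each case type opts in to a subset of the solution's properties
# rather than inheriting them all automatically.
solution = {
    "name": "AutoClaims",
    "properties": {"date_of_loss": "date", "policy_number": "string",
                   "adjuster": "string"},
    "roles": ["CSR", "Adjuster"],
    "case_types": {},
}

def add_case_type(sol, name, exposed_properties, trigger_doc_type=None):
    # a case type may only expose properties already defined at solution level
    unknown = set(exposed_properties) - set(sol["properties"])
    if unknown:
        raise ValueError(f"not defined at solution level: {unknown}")
    sol["case_types"][name] = {
        "properties": {p: sol["properties"][p] for p in exposed_properties},
        "trigger_doc_type": trigger_doc_type,  # doc type that auto-creates a case
    }

# a Claim case type exposing two of the three solution properties,
# auto-created when an FNOL document is added
add_case_type(solution, "Claim", ["date_of_loss", "policy_number"],
              trigger_doc_type="FNOL")
```

This mirrors the planning advice above: defining shared properties once at the solution level pays off when multiple case types reuse them.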

IBM Case Manager Builder - Step Editor

Although part of the case type definition, I’ll separate out the task definition for clarity. For each task within a case type, you define:

  • As noted above, whether it is required or optional for this case type.
  • Whether the task starts automatically or manually, or if the user optionally adds the task to the case at runtime.
  • Inclusion of the task in a set. Sets provide visual grouping of tasks within a case, but also control execution: a set can be specified as all-inclusive (all tasks execute if any of the tasks execute) or mutually exclusive (only one of the tasks in the set can be executed). The mutually exclusive option could be used to create a kind of case subtype, instead of using multiple case types within a solution, where the differences between the subtypes are minimal.
  • Preconditions for the task to execute, that is, the task triggers. In many cases, this will be the case start, but could also be when a document of a specific type is added to the case, or a case property value is updated to meet certain conditions, including combinations of property values.
  • Design comments, which could be used simply as documentation, but are primarily intended for use by a non-technical business analyst who created the case type definition up to this point but wants to pass off the creation of the actual process flow to someone more technical.
  • The process flow associated with this task, using the visual Step Editor. This allows the roles defined for the solution to be added as swimlanes, and the human-facing steps to be plotted out. This supports branching as well as sequential flow, but no automated steps; however, any automated steps that are added via the full Process Designer will appear in the uneditable grey lanes at the top of the Step Editor map. If you’ve used the Process Designer before, the step properties at the left of the Step Editor will appear familiar: they’re a subset of the step properties that you would see in the full Process Designer, such as step deadlines and allowing reassignment of the step to another user.
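The precondition and set semantics above can be sketched in a few lines: a task becomes eligible when its property-based precondition holds, and starting one task in a mutually exclusive set blocks its siblings. This is an invented model of the behavior described, not IBM’s implementation.

```python
# Hedged sketch of task eligibility: preconditions are predicates over
# case properties, and mutually exclusive sets allow only one member to run.
def eligible_tasks(tasks, case_props, started):
    # sets that have already been "used" by a started exclusive task
    exclusive_used = {t["set"] for t in tasks
                      if t.get("exclusive") and t["name"] in started}
    out = []
    for t in tasks:
        if t["name"] in started:
            continue
        if t.get("exclusive") and t["set"] in exclusive_used:
            continue  # a sibling in the mutually exclusive set already ran
        if t["precondition"](case_props):
            out.append(t["name"])
    return out

tasks = [
    {"name": "fast_track", "set": "settle", "exclusive": True,
     "precondition": lambda p: p["amount"] < 1000},
    {"name": "full_review", "set": "settle", "exclusive": True,
     "precondition": lambda p: True},
]
```

For a small claim, both tasks start out eligible, but once `fast_track` is started, `full_review` drops out of the list because the set is mutually exclusive.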

Being long acquainted with FileNet BPM, I had a number of questions about the connection between the Step Editor and the full BPM Process Designer; Levirne handled some of these, and I also had a few technical discussions at IOD that shed light on this. In short, the Step Editor creates a full XPDL process definition and stores it in the content repository, which is the same as what happens for any process definition created in the Process Designer. However, if you open this process definition with the Process Designer, it recognizes that it was created using the Case Manager Step Editor and performs some special handling. From the Process Designer, a more technical designer can add any system steps required (which will appear, but not be editable, in the Step Editor): in other words, they’ve implemented a fully shared model used by two different tools, the Case Builder Step Editor for a less technical business analyst and the BPM Process Designer for a developer.

IBM Case Manager Builder - deploy solution

As with any process definition, the Case Manager task process definitions must be transferred to the process engine before they can be used to instantiate new processes: this is done automatically when the solution is deployed.

Deploying a solution to a test environment is a one-click operation from the Case Manager Builder main screen, although moving that to another environment isn’t quite as easy: the new release of the P8 platform allows a Case Manager solution to be packaged in order to move it between servers, but there’s still some manual work involved.

We wrapped up with a discussion of the other IBM products that integrate with Case Manager, some easier than others:

  • Case Manager includes a limited license of ILOG JRules, but it’s not integrated in the Case Manager Builder environment: it must be called as a web service from the Process Designer. There are already plans for better integration here, which is essential.
  • Content Analytics for data mining and analytics on the case metadata and case content, including the content of attached documents.
  • Case Analyzer, which is a version of the old BPM Process Analyzer, with enhancements to show analytics at the case level and the inclusion of custom case properties to provide a business view as well as an operational view in dashboards and reports.

They’re working on better integration between Case Manager and the WebSphere product line, including both WebSphere Process Server and Lombardi; this will be necessary to combat competitors who offer a single solution covering the full range of BPM functionality, from structured processes to completely dynamic case management.

Built on one of the best industrial-strength enterprise content management products around, IBM Case Manager will definitely see some adoption in the existing IBM/FileNet client base: adding this capability onto an existing FileNet Content Manager repository could provide a lot of value with a minimal amount of work for the customer, assuming that they actually allow their business analysts to do the work that IBM intends them to. In spite of the power, however, there is a lack of flexibility in the runtime task definition that may make it less competitive in the open market.

IBM Case Manager demo

Friday Diversion: The Kemsley Wartime Journals

For those of you who follow me on Twitter or Facebook, you may have already seen Frank Kemsley’s Journal: the blog of my grandfather’s WWI daily diary from the time that he spent in the Canadian army. I launched it last week on Remembrance Day, and started the regular blogging on November 16th, corresponding to his first journal entry on November 16th, 1916. I’m publishing the scanned pages of the journal along with the posts, embedded in the first post corresponding to that page, and today was the first full journal page. His journals (there are 3 of them) run until he returns home in 1919, and I will do my best to keep up my transcription for as long as it took him to write them.

I’ve had some tremendous feedback so far on this, including retweets by the mayor of Toronto and Tony Baer’s tweet that this is obviously where I get my blogging gene 😉, plus an interesting comment from Martin Cleaver that these paper journals may outlive all of our online journaling.

When I discovered my grandfather’s journals, I also found a WWII journal of my father’s from 1944, which I will be starting to blog on January 1st. It only covers the period from January-September 1944, but he was in the Canadian Navy in the Atlantic, so there’s some interesting stuff right around June of that year.