What’s New in IBM ECM Products

Feri Clayton gave an update on the ECM product portfolio and roadmap, in a bit more depth than yesterday’s Bisconti/Murphy ECM product strategy session. She reinforced the message that the products are made up of suites of capabilities and components, so that you’re not using different software silos. I’m not sure I completely buy into IBM’s implementation of this message as long as there are still quite different design environments for many of these tools, although they are making strides in consolidating the end user experience.

She showed the roadmap for what has been released in 2011, plus the remainder of this year and 2012: on the BPM side, there will be a 5.1 release of both BPM and Case Manager in Q4, which I’ll be hearing more about in separate BPM and Case Manager product sessions this afternoon. The new Nexus UI will preview in Q4, and be released in Q2 of 2012. There’s another Case Manager release projected for Q4 2012.

There was a question about why BPM didn’t appear in the ECM portfolio diagram, and Clayton stated that “BPM is now considered part of Case Manager”. Unlike the BPM vendors who think of ACM as a part of BPM, I think that she’s right: BPM (that is, structured process management that you would do with IBM FileNet BPM) is a functionality within ACM, not the other way around.

She went through the individual products in the portfolio, and some of the updates:

  • Production Imaging and Capture now includes remote capture, which is nice for organizations that don’t want to centralize their scanning/capture. It’s not clear how much of this is the Datacap platform versus the heritage FileNet Capture, but I imagine that the Datacap technology is going to be driving the capture direction from here on. They’ve integrated the IBM Classification Module for auto recognition and classification of documents.
  • Content Manager OnDemand (CMOD) for report storage and presentment will see a number of enhancements including CMIS integration.
  • Social Content Management uses an integration of IBM Connections with ECM to allow an ECM library to access and manage content from within Connections, display ECM content within a Connections Community and a few other cross-product integrations. There are a couple of product announcements about this, but they seem to be in the area of integration between Connections and ECM as opposed to adding any native social content management to ECM.
  • FileNet P8, the core content management product, had a recent release (August) with such enhancements as bidirectional replication between P8 and Image Services, content encryption, and a new IBM-created search engine (replacing Verity).
  • IBM Content Manager (a.k.a., the product that used to compete with P8) has a laundry list of enhancements, although it still lags far behind P8 in most areas.

We had another short demo of Nexus, pretty much the same as I saw yesterday: the three-pane UI dominated by an activity stream with content-related events, plus panes for favorites and repositories. They highlighted the customizability of Nexus, including lookups and rules applied to metadata field entry during document import, plus some nice enhancements to the content viewer. The new UI also includes a work inbasket for case management tasks; not sure if this also includes other types of tasks such as BPM or even legacy Content Manager content lifecycle tasks (if those are still supported).

Nexus will replace all of the current end-user clients for both content and image servers, providing a rich and flexible user experience that is highly customizable and extensible. They will also be adding more social features to this; it will be interesting to see how this develops as they expand from a simple activity stream to more social capabilities.

Clayton then moved on to talk about ACM and the Case Manager product, which is now coming up to its second release (called v5.1, naturally). Given that much of the audience probably hasn’t seen it before, she went through some of the use cases for Case Manager across a variety of industries. Even more than the base content management, Case Manager is a combination of a broad portfolio of IBM products within a common framework. She listed some of the new features, but I expect to see these in more detail in this afternoon’s dedicated Case Manager session so will wait to cover them then.

She discussed FileNet P8 BPM version 5.x: now Java-based for significant performance and capacity improvements (also due to a great deal of refactoring to remove old code sludge, as I have heard). As I wrote about last month, it provides Linux and zLinux support, and also allows for multi-tenancy.

With only a few minutes to go, she whipped through information lifecycle governance (records and retention management), including integration of the PSS Atlas product; IBM Content Collector; and search and content analytics. Given the huge focus on analytics in the morning keynote, it’s kind of funny that it gets about 30 seconds at the end of this session.

IBM IOD Day 2 Opening Keynote: Transformation in the Era of Big Data and Analytics

Today’s morning keynote kicked off with Steve Mills talking about big data – “as if data weren’t big before”, he joked – and highlighted that the real challenge is not necessarily the volume of data, but what we need to do in order to make use of that data. A huge application for this is customer service and sentiment analysis: figuring out what your customers are saying to you (and about you), and using that to figure out how to deliver better service. Another significant application area is that of the smarter planet: sensing and responding to events triggered by instrumentation and physical devices. He discussed a number of customer examples, pointing out that no two situations are the same and that a variety of technologies are required, but there are reusable patterns across industries.

Doug Hunt was up next to talk about content analytics – another type of big data – and the impact on transforming business processes. He introduced Randy Sumrall, CIO of Education Service Center Region 10 (State of Texas), to talk about the impact of technology on education and the “no child left behind” policy. New technology can be overwhelming for teachers, who are often required to select what technologies are to be used without sufficient information or skills to do so; there need to be better ways to empower the educator directly rather than just having information available at the administrative level. For example, they’ve developed an “early dropout warning” tool to be used by teachers, analyzing a variety of factors in order to alert the teachers about students who are at risk of dropping out of school. The idea is to create tools for completely customized learning for each student, covering assessment, design and delivery; this is more classical BI than big data. Some interesting solutions, but as some people pointed out on the Twitter stream, there’s a whole political and cultural element to education as well. Just as some doctors will resist diagnostic assistance from analytics, so too will some teachers resist student assessments based on analytics rather than their own judgment.

Next was Frank Kern to talk about organizations’ urgency to transform their businesses, for competitive differentiation but also for basic survival in today’s fast-moving, social, data-driven world. According to a recent MIT Sloan study, 60% of organizations are differentiating based on analytics, and outperform their competitors by 220%. It’s all about speed, risk and customers; much of the success is based on making decisions and taking actions in an automated fashion, based on the right analysis of the right data.

Some of IBM’s future of big data analytics is Watson, and Manoj Saxena presented on how Watson is being applied to healthcare – being demonstrated at IOD – as well as future applications in financial services and other industries. In healthcare, consider that medical information is doubling every five years, and about 20% of diagnoses in the US have some sort of preventable error. Using Watson as a diagnostic tool puts all healthcare information into the mix, not just what your doctor has learned (and remembers). Watson understands human speech, including puns, metaphors and other colloquial speech; it generates hypotheses based on the information that it absorbs; then it understands and learns from how the system is used. A medical diagnosis, then, can include information about symptoms and diseases, patient healthcare and treatment history, family healthcare history, and even patient lifestyle and travel choices to detect those nasty tropical bugs that your North American doctor is unlikely to know about. Watson’s not going to replace your doctor, but provide decision support during diagnosis and treatment.

Dr. Carolyn McGregor of UOIT was interviewed about big data for capturing health informatics, particularly the flood of information generated by the instrumentation hooked up to premature babies in NICU: some medical devices generating several thousand readings per second. Most of these devices may have a couple of days of memory to store the measurements; after that, the data is lost if not captured into some external system. Being able to analyze patterns over several days’ data can detect problems as they are forming, allowing for early preventative measures to be taken: saving lives and reducing costs by reducing the time that the baby spends in NICU. A pilot is being done at Toronto’s world-class Hospital for Sick Children, providing analysis of 90 million data points each day. This isn’t just for premature babies, but is easily applicable to any ICU instrumentation where the patients require careful monitoring for changing conditions. This can even be extended to any sort of medical monitoring, such as home monitoring of blood glucose levels. Once this level of monitoring is commonplace, the potential for detecting early warning signals for a wide variety of conditions becomes available.
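The scale quoted here holds up to a quick check: even a single device emitting on the order of a thousand readings a second (a conservative floor for the “several thousand” mentioned) produces tens of millions of data points per day, which is consistent with the 90 million figure from the Sick Kids pilot:

```python
# Back-of-envelope check on the NICU data volumes quoted above.
# 1,000 readings/second is used as a conservative floor for "several thousand".
readings_per_second = 1_000
seconds_per_day = 60 * 60 * 24
per_device_per_day = readings_per_second * seconds_per_day
print(per_device_per_day)  # 86400000 -- ~86M readings/day from a single device
```

So a handful of monitored patients easily accounts for the 90 million data points per day being analyzed, and explains why a couple of days of on-device memory is nowhere near enough.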

Interesting themes for day 2 of IOD. However, as much as they are pushing that this is about big data and analytics, it’s also about the decision management and process management required to take action based on that analysis.

IBM ECM Product Strategy

I finished the first day of IOD in the ECM product strategy session with Ken Bisconti and John Murphy. I was supposed to have a 1:1 interview with Bisconti at this same time, so now I know why that was cancelled – the room is literally standing room only, and the same session (or, at least, a session with the identical name) is scheduled for tomorrow morning so there’s obviously a great deal of interest in what’s coming up in ECM.

They started with a summary of their 2011-2012 priorities:

  • Intelligent, distributed capture based on the Datacap acquisition
  • Customer self-service and web presentment of reports and statements
  • Rich user experiences and mobile device support
  • Whole solutions through better product integration and packaging as well as vertical applications and templates

The key deliverables in this time frame:

  • IBM Production Imaging Edition
  • Datacap Taskmaster expansion
  • CM8, FileNet CM updates
  • Project “Nexus”, due in 2012, which is the next generation of web-based user experience across the IBM software portfolio

They stressed that customers’ investments in their repositories are maintained, so the focus is on new ways to capture, integrate and access that data, such as bidirectional replication (including annotations and metadata) between older Image Services repositories and P8 Content Manager, and content repository federation.

Nexus is intended to address the classic problems with FileNet UI components: either they were easy to maintain or easy to customize, but never both. As someone who spent a lot of time in the 90s customizing UIs with the early versions of those components, I’d have to agree wholeheartedly with that statement. We saw a demo of the under-development version of Nexus, which shows three panes: a filterable activity stream for content and related processes, a favorites list, and a list of repositories. Searching in this environment can be restricted to a subset of the repositories, or across repositories: including non-IBM repositories such as SharePoint. Navigating to a repository provides a fairly standard folder-based view of the repository – cool for demos but often useless for very large repositories – with drag-and-drop capabilities for adding documents to the repository. The property dialog that appears for a new document can access external data sources in order to restrict the input to specific metadata fields.
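As a rough illustration of the lookup-constrained metadata entry described above, the property dialog’s behavior amounts to validating each field against an external reference list before the document is committed. This is a toy sketch, not the actual Nexus API; the data source and field names are invented:

```python
# Toy sketch of lookup-constrained metadata entry, as described for the
# Nexus property dialog. In the product, the allowed values would come
# from an external data source; here it's an in-memory lookup table.
DEPARTMENT_LOOKUP = {"CLM": "Claims", "UW": "Underwriting", "LGL": "Legal"}

def validate_properties(props, lookups):
    """Reject any property whose value is not in its lookup list."""
    errors = []
    for field, allowed in lookups.items():
        if field in props and props[field] not in allowed:
            errors.append(f"{field}: '{props[field]}' not in {sorted(allowed)}")
    return errors

doc_props = {"department": "CLM", "doc_type": "invoice"}
errors = validate_properties(doc_props, {"department": DEPARTMENT_LOOKUP})
print(errors)  # [] -- 'CLM' is a valid department code, so the import proceeds
```

The point of wiring this into the import dialog rather than the repository is that the user gets immediate feedback, instead of a failed check-in after the fact.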

This also provides access to teamspaces, which are sort of like a restricted version of an object store/library, where a user can create a teamspace (optionally based on a template), specify the folder structure, metadata and predefined searches, then add other users who can collaborate within this space. When a teamspace is opened, it looks pretty much like a regular library, except that it’s a user-created space rather than something that a system admin needs to set up.

Because of the underlying technology, Nexus can be surfaced in a number of different ways, including different types of widgets as well as on mobile devices. This style of user experience is a bit behind the curve of some other vendors, but is at least moving in the right direction. I look forward to seeing how this rolls out next year.

They moved on to discuss social content management, which covers a variety of social use cases:

  • Accessing and sharing content in the context of communities
  • Finding and navigating social content and social networks
  • Managing and governing social business information
  • Delivering social content business solutions

This obviously covers a lot of ground, and they’re really going to have to leverage the skills and lessons learned over in the Lotus group to jumpstart some of the social areas.

Next was Case Manager; I’m looking forward to a more in-depth product briefing on this alone, rather than just five minutes as part of the entire ECM strategy, but their content-centric view of case management seems to be resonating with their customers. That’s not to say that this is the only way to do case management, as we see from a number of other ACM vendors, but rather that IBM customers with big FileNet content repositories can really see additional value in the functionality that Case Manager provides on top of these repositories.

The newly announced Case Manager v5.1 aims to make it simpler to create and deliver case-based solutions, and includes a number of new integration capabilities including BPM (as we saw this morning) and data integration. They are also focusing on vertical industry case-based accelerators, and we saw a demo of a healthcare claims case management application that brings together case management, content and analytics to help a case worker to detect fraud. Like most case management scenarios, this is not focused on the actual automated detection of fraud, but on surfacing information to the user that will allow them to make that determination. Doing this in the context of content repositories and content analytics provides a rich view of the situation, allowing the case worker to make better decisions in much less time.

The case worker can create and assign tasks to others, including field workers who use a native iPad app to perform their field review (in the demo, this was a fraud investigator visiting a healthcare practitioner) including capturing new content using the iPad’s camera. Although the version that they demonstrated requires a live connection, they do expect to be delivering apps for disconnected remote devices as well, which is truly critical for supporting remote workers who may wander far beyond the range of their data network.

Moving on to information lifecycle management and governance, some of which is based on last year’s acquisition of PSS Systems, the portfolio includes smart archive (e.g., archiving SAP and other structured data), legal eDiscovery, records management and retention, and disposal and governance management. They’re now providing smart archive as a cloud offering, as well as on premise. The buzz-phrase of this entire area is “defensible disposition”, which sounds a bit like something that happens on The Sopranos, but is really about having an overall information governance plan for how data of all types are maintained, retained and destroyed.

They finished with a bit about IBM Watson for integrating search with predictive analytics, and the industry solutions emerging from this such as IBM Content and Predictive Analytics for Healthcare which is being shown here at IOD this week. We heard a bit about what this combination of technologies can do in the Seton Healthcare presentation earlier this afternoon, and we’ll see a demo of the actual packaged solution in the Wednesday morning keynote.

IBM IOD ECM Keynote: Content In Motion

Content at rest = cost

Content in motion = value

That was the message that kicked off the ECM keynote, then Kevin Painter took the stage to introduce the winners of the four ECM customer innovation awards – Novartis, Tejon Ranch, US Nuclear Regulatory Commission and Wells Fargo – before turning things over to Doug Hunt.

IBM defines unstructured data, or content, as pretty much everything that doesn’t fit in a database table. Traditionally, this type of information is seen as inaccessible, cumbersome, expensive, unmanageable and risky by business, IT, records managers and legal. However, with the right management of that content, including putting it into motion to augment systems of record, it can become accessible and relevant, providing a competitive advantage.

We heard from Wells Fargo about their ECM implementation, where they are moving from having scanned documents as merely an archival practice to having those documents be an active part of the business transactions. [This sounds just like moving from post-processing scanning to pre-processing scanning and workflow, which we’ve been doing for 30+ years, but maybe it’s more complex than that.] For them, ECM is a fundamental part of their mortgage processing architecture and business transaction enabling, supporting multiple lines of business and processes. This, I think, is meant to represent the “Capture” slice of the pie.

Novartis was on stage next to talk about their records management (the “Govern” slice), particularly around retention management of their records to reduce legal risk across their multi-national organization.

Next, Hunt addressed “Analyze” with content analytics, joined by someone from Seton Healthcare to discuss how they’re using Watson analytics to proactively identify certain high-risk patients with congestive heart failure to allow early treatment that can reduce the rate of hospital readmissions. With 80% of their information being unstructured, they need something beyond standard analytics to address this.

Case management was highlighted as addressing the “Activate” slice, and Hunt was joined by someone from SEB, a Nordic bank, to discuss how they are using IBM Case Manager as an exception handling platform (i.e., for those processes that kick out of the standard straight-through process), replacing their existing workflow engine.

Hunt did briefly address the “Socialize” slice, but he was so clued out about anything to do with social content, it was a bit embarrassing. Seriously, I don’t want to hear the executive in charge of IBM’s ECM strategy talk about social as something that his wife and kids do, but he doesn’t.

He finished up talking about the strength of the IBM ECM industry accelerators and business partners, both of which help to get systems up and running at their customers’ sites as quickly as possible.

Streaming Video from IBM IOD

You can watch live streaming video of the IOD keynotes, such as this afternoon’s EDM keynote at 2:15PT/5:15ET, plus interviews from the show floor (and hopefully, soon, a replay of this morning’s keynote with Jeff Jonas) here:

Watch live streaming video from ibmsoftware at livestream.com


The good news is that I can now watch the keynotes from the comfort of my hotel room, if I want, where the wifi doesn’t suck.

Better Together: IBM Case Manager, IBM Content Manager and IBM BPM

Dave Yockelson from ECM product marketing and Amy Dickson from IBM BPM product management talked about something that I’m sure is on the minds of all FileNet customers who are doing anything with process: how do the (FileNet-based) Case Manager and Content Manager fit together with the WebSphere BPM products?

They started with a description of the IBM BPM portfolio – nothing new here – and how ACM requires an integrated approach that addresses repeatable patterns. Hmmmm, not completely sure I agree with that. Yockelson went through the three Forrester divisions of case management from their report on the ACM space, then went through a bit more detail on IBM Case Manager (ICM) and how it knits together functionality from the entire IBM software portfolio: content, collaboration, workflow, rules, events, integration, and monitoring and analytics. He positioned it as a rapid application development environment for case-based solutions, which is probably a good description. Dickson then went through IBM BPM (the amalgam of Lombardi and WebSphere Process Server that I covered at Impact), which she promised would finish up the “background” part and allow them to move on to the “better together” part.

So, in the aforementioned better together area:

  • Extend IBM BPM processes with content, using document and list widgets that can be integrated in a BPM application. This does not include content event processes, e.g., spawning a specific process when a document event such as check-in occurs, so is no different than integrating FileNet content into any BPMS.
  • Extend IBM BPM Advanced (i.e., WPS) processes with content through a WebSphere CMIS adapter into the content repository. Ditto re: any BPMS (or other system) that supports CMIS being able to integrate with FileNet content.
  • Invoke an IBM BPM Advanced process from an ICM case task. Assuming that this is via a web service call (since WPS allows processes to be exposed as web services), not specifically an IBM-to-IBM integration.

Coming up, we’ll see some additional integration points:

  • Invoke an IBM BPM Express/Standard process from an ICM case task. This, interestingly, implies that you can’t expose a BPM Express/Standard process as a web service – otherwise, this could have been done without any additional integration. The selection of the process and mapping of case to process variables is built right into the ICM Builder, which is definitely a nice piece of integration to make it relatively seamless to integrate ICM and BPM.
  • Provide a federated inbox for ICM and BPM (there was already an integrated inbox for the different types of BPM processes) so that you see all of your tasks in a single list, based on the Business Space Human Tasks widget. When you click on a task in the list, the appropriate widgets are spawned to handle that type of work.
  • Interact with ICM cases directly from a BPM process through an integration service that allows cases to be created, retrieved and updated (metadata only, it appears) as part of a BPM process.
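The federated inbox point above is essentially a normalization problem: each engine has its own task shape, and the widget has to map them onto a common one before presenting a single sorted list. A minimal sketch, with task fields and source names invented for illustration rather than taken from the actual products:

```python
# Minimal sketch of a federated inbox: tasks from a case engine and a BPM
# engine, each with its own field names, are normalized into one shape and
# sorted by due date. The field names here are illustrative only.
from datetime import date

case_tasks = [{"id": "C-101", "due": date(2011, 11, 2), "subject": "Review claim"}]
bpm_tasks = [{"taskId": "B-7", "deadline": date(2011, 10, 28), "name": "Approve loan"}]

def federated_inbox(case_tasks, bpm_tasks):
    """Map each source's task shape onto a common one, then sort by due date."""
    inbox = [
        {"source": "ICM", "id": t["id"], "due": t["due"], "title": t["subject"]}
        for t in case_tasks
    ] + [
        {"source": "BPM", "id": t["taskId"], "due": t["deadline"], "title": t["name"]}
        for t in bpm_tasks
    ]
    return sorted(inbox, key=lambda t: t["due"])

inbox = federated_inbox(case_tasks, bpm_tasks)
print([t["id"] for t in inbox])  # ['B-7', 'C-101'] -- earliest due date first
```

The retained “source” field is what lets the widget spawn the right handler when a task is clicked, which matches the behavior described in the session.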

This definitely fits IBM’s usual modus operandi of integrating rather than combining products with similar functionality; this has a lot of advantages in terms of reducing the time to releasing something that looks (sort of) like a single product, but has some disadvantages in the underlying software complexity as I discussed in my IBM BPM review from Impact. A question from the audience asked about consolidation of the design environment; as expected, the answer is “yes, over time”, which is similar to the answer I received at Impact about consolidation of the process engines. I expect that we’ll see a unified design environment at some point for ICM and both flavors of BPM by pulling ICM design into the Process Center, but there might still be three engines under the covers for the foreseeable future. Given the multi-product mix that makes up ICM, there will also be separate engines (and likely design environments) for non-process functions such as rules, events and analytics, too; the separate engines are inevitable in that case, but there could definitely be some better integration on the design side.

IBM IOD Keynote: Turn Insight Into Action

This is a big conference. We’re in the Mandalay Bay Events Center, which is a stadium that would probably hold a hockey rink, and although not all the seats are full, it’s a pretty big turnout. This is IBM’s centennial, which is a theme throughout the conference, and the opening session started with some key points in the history of IBM’s products. IBM might seem like a massive, slow-moving ship at times, but there is no doubt that they’ve been an innovator through the entire age of modern computing. I just hope to be seeing some of that innovation in their ECM and ACM products this week.

The keynote session was hosted by Katty Kay, a BBC news journalist in the Washington bureau, who added a lot of interesting business and social context to the presentations.

Jeff Jonas spoke about analytics, pointing out that with the massive amounts of data available to enterprises, enterprises are actually getting dumber because they’re not analyzing and correlating that data in context. He used a jigsaw puzzle metaphor: you don’t know what any particular piece means until you see it in relation to the others with which it fits. You also don’t need all of the pieces in the puzzle to understand the big picture: context accumulates with each new observation, and at some point, confidence improves while computational effort decreases.

He looked at two sides of analytics – sense and respond, and explore and reflect – and how they fit into the activity of achieving insight. If the keynotes are available online, definitely watch Jonas’ presentation: he’s funny and insightful in equal measure, and has a great example of a test he ran with jigsaw puzzles and human cognition. He went much too fast for me to keep up in these notes, and I’ll be watching it again if I can find it. The only problem was that his presentation ruined me for the rest of the keynotes, which seemed dull in comparison. 🙂

Sarah Diamond was up next to talk about the challenges facing financial institutions, and how analytics can support the transformation of these organizations by helping them to manage risk more effectively. She introduced a speaker from SunTrust, an IBM customer, who spoke about their risk management practices based around shared data warehousing and reporting services. Another SunTrust speaker then talked about how they use analytics in the context of other activities, such as workflow. A good solid case study, but not sure that this was worth such a big chunk of the main keynote.

Mike Rhodin spoke about how innovation across industries is opening new possibilities for business optimization, particularly where analytics create a competitive advantage. Analytics are no longer a nice-to-have, but an imperative for even staying in business: the performance gap between the winners and losers in business is growing, and is fueled in part by the expedient use of analytics to generate insights that allow for business optimization. Interestingly, marketing and finance are the big users of analytics; only 25% of HR leaders are using analytics to help them with hiring an effective workforce.

Robert LeBlanc discussed how the current state of information from everywhere, radical flexibility and extreme scalability impacts organizations’ information strategy, and challenged the audience to consider if their information strategy is bold enough to live in this new environment. Given that 30% of organizations surveyed reported that they don’t even know what to do with analytics, it’s probably safe to say that there are some decidedly meek information strategies out there. Information – both data and unstructured content – can come from anywhere, both inside and outside your organization, meaning that the single-repository dream is really just a fantasy: repositories need to be federated and integrated so that analytics can be applied on all of the sources where they live, allowing you to exploit information from everywhere. He pointed out the importance of leveraging your unstructured information as part of this.

The keynote finished with Arvind Krishna – who will be giving another full keynote later today – encouraging the audience to take the lead on turning insight into action. He summarized this week’s product announcements: DB2 Analytics Accelerator, leveraging Netezza; IMS 12; IBM Content and Predictive Analytics for Healthcare; IBM Case Manager v5.1, bringing together BPM and case management; InfoSphere MDM 10; InfoSphere Information Server 8.7; InfoSphere Optim Test Data Management Self Service Center; Cognos native iPad support; Cognos BI v10.1.1. He also announced that they closed the Algorithmics acquisition last week, and that they will be acquiring Q1 Labs for security intelligence and risk management. He spoke about their new products, InfoSphere BigInsights and InfoSphere Streams, which we’ll be hearing about more in tomorrow’s keynote.

Elmer Sotto of Facebook Canada at DemoCamp Toronto 30

Unbelievably, the 30th edition of DemoCamp happened in Toronto a couple of weeks ago, and I was there to hear the keynote from Elmer Sotto of Facebook Canada, as well as see the short, live demos from four local startups. I’ll post my notes on the demos in a subsequent post, but I’ve been thinking about Sotto’s exploration of the question of what is social: although he was focused on the consumer market, I saw a lot of parallels with social business. He saw three basic drivers for a social environment:

  • You are proud of what you do and want to share it
  • Others want to see what you have to share
  • You specifically share with your social network

He spoke about having a social platform that is optimized for telling stories, where those stories are for the purpose of building identity, sparking conversation or deepening relationships. Or, as we might say in the social enterprise world: stories for reputation, collaboration or building our social graph.

To be truly social, a platform must be social by design, not just have share/like buttons tacked on after completion. Software that has social in its very DNA must be shared to be fully functional; can you imagine Facebook if you were the only one on it? It must also mimic real social norms in order to be successful: amplifying existing social or cultural activities, not trying to create new ones, and extending an existing social graph rather than creating a new one.

It’s interesting that Facebook is taking on the challenge of replacing the mostly unstructured data of notes with more structured semantic data to allow the surfacing of that data to parts of your social graph: instead of just “liking” something, they are allowing applications to create the structure of user/action/object for users to interact with that application.
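The user/action/object structure he described is really just a typed triple that an application publishes into the social graph. A toy representation, with invented field names (this is not the actual Open Graph API):

```python
# Toy representation of the user/action/object triples behind Open Graph
# actions: instead of a generic "like", an application publishes a typed
# action against a typed object. Field names are illustrative only.
def publish_action(user, action, obj):
    """Return the story triple an app would surface to the user's social graph."""
    return {"actor": user, "action": action, "object": obj}

story = publish_action("alice", "listened_to", {"type": "song", "title": "Hey Jude"})
print(f"{story['actor']} {story['action']} {story['object']['title']}")
# alice listened_to Hey Jude
```

The enterprise parallel is obvious: the same triple structure underlies the content-event activity streams in products like Nexus (“user checked in document”), which is part of why the consumer-side discussion translated so well.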

The latter part of his presentation turned into a bit of a Facebook ad, including video from the F8 conference about the new Timeline feature, but I found some of his points were surprisingly useful in an enterprise context.

Enabling Agile Processes With IBM BPM For z/OS

Dave Marquard, Janet Wall and Eric Herness from the IBM BPM team gave an analyst briefing today on BPM on the z/OS platform. At Impact earlier this year, we saw a merging of the Lombardi acquisition and WebSphere Process Server into a unified IBM BPM product, and this month, they released BPM on z/OS. This is intended to unify across the historic divide between z (mainframe) and non-z assets, and allow the benefits of BPM for agility and visibility to be combined more easily with the z/OS applications and data.

The presentation highlighted a typical process problem in a System z environment: account opening in a financial institution, where paper-based manual processes at the front end are combined with multiple repositories of customer information, a variety of systems for risk assessment and customer care, and legacy account management systems. In their new vision, this can be replaced with explicit process management and better orchestration between the components; this, of course, is not unique to this platform, but a general benefit of BPM. Deploying BPM on z/OS, however, leverages co-location for better performance and access to data, as well as the scalability that you would expect from this larger platform.

From an IBM BPM architecture standpoint, the Process Server components can now be hosted on z/OS, while the Process Center and its repository stay on Windows, AIX or Linux. Process Server Advanced for z/OS is more than a simple port: it leverages native z/OS data structures, supports languages such as COBOL, provides local adapters to other z/OS applications, and allows reusable services to be created more easily. Since the processes and services both run on z/OS, WebSphere z/OS optimizes cross-memory local communications to improve performance and resource utilization; this provides the most benefit when processes frequently interact with DB2, CICS and IMS on the same platform, and also allows seamless integration with other facilities such as RACF.

This plugs into Business Monitor for z/OS, which monitors the processes, other z/OS applications and events, and provides user-customizable dashboards for overall monitoring as well as some KPI-based predictive analytics. Other process-related offerings already on z/OS include business rules, ESB and message broker, so the migration of BPM to this platform provides a fairly robust set of tools for companies that rely on z/OS for their primary operations. The result is a much more model-driven, process-oriented platform, allowing the underlying DB2 and CICS applications to be abstracted and orchestrated more easily.

They talked about a couple of case studies (without naming the clients), highlighting scalability, performance and resilience as the key differentiators of running BPM on z/OS for existing z/OS clients.

A few additional references were provided in the briefing notes.

Since most of my customers are in financial services and insurance, many of them are on IBM mainframe platforms. Although not all will choose to deploy BPM on z/OS, this does provide an option for those who want to more fully integrate their mission-critical processes with their existing z/OS applications.

Colonial Life at TUCON

I’m wrapping up my TUCON trip with the Colonial Life presentation on their TIBCO iProcess and BusinessWorks implementation in their back office. I work a lot with insurance companies, and find that they can be very conservative in terms of technology implementations: many are just implementing document imaging and workflow, and haven’t really looked at full BPM functionality that includes orchestration of different systems as well as work management. I had a chance to talk with the two presenters, Bijit Das from the business side and Phil Johnston from IT, in a private briefing yesterday; I heard about their business goals to do better work management and improve efficiencies by removing paper from the process, as well as their technical goal to build an agile solution that could be used across multiple process flows. They have done their first implementation in their policy administration area, where they receive about 180k pages of inbound documentation per year, resulting in about 10k work items per month.
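As a quick sanity check on those volumes, 180k pages per year against 10k work items per month works out to roughly 1.5 pages per work item:

```python
# Back-of-the-envelope check on the stated document volumes.
pages_per_year = 180_000
work_items_per_month = 10_000

pages_per_month = pages_per_year / 12                     # 15,000 pages/month
pages_per_work_item = pages_per_month / work_items_per_month

print(round(pages_per_work_item, 1))  # 1.5
```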

They ended up using iProcess primarily as a state management and queuing engine, embedding most of the process flow rules in external database tables, and having just simple process flows in iProcess that routed work based on the table values rather than logic within the process model itself. Once a piece of work ended up in the right queue (or in a user-filtered view of a common work queue), the user could complete it, route it elsewhere, or put it on hold while they performed some activity outside of BPM. A huge part of their improvements came from using BW to create reusable services, where these services could be called from the processes, but they also turned that around and have some cases where iProcess is called as a service from BW for queue and state management, using services that had been developed by Unum (their parent company) for their implementation.

They wrote their own custom user interface portal, allowing users to select the queue that they want to work, filter the queue manually, and select the work item that they want to work on. This is a bit unusual for back-office transactional systems, which typically push the next piece of work to a user rather than allowing them to select it, but it’s a lot harder to build those rules when you’re effectively writing all the work management in database tables rather than leveraging work management capabilities in a BPMS.
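The table-driven routing approach they describe can be sketched as follows. This is a minimal illustration under my own assumptions: the queue names and work item attributes are hypothetical, and in their actual implementation the rules live in database tables rather than in-memory structures:

```python
# Hedged sketch of table-driven work routing: the process model only calls
# route(), while the actual routing logic lives in a lookup table that can
# be changed without modifying or redeploying the process model itself.
# Queue names and work item attributes are hypothetical illustrations.
ROUTING_TABLE = {
    # (document_type, region) -> target queue
    ("new_policy", "east"): "policy_admin_east",
    ("new_policy", "west"): "policy_admin_west",
    ("endorsement", "east"): "endorsements",
    ("endorsement", "west"): "endorsements",
}
DEFAULT_QUEUE = "manual_review"  # fallback when no rule matches

def route(work_item: dict) -> str:
    """Return the target queue for a work item based on table values,
    not on branching logic embedded in the process model."""
    key = (work_item.get("document_type"), work_item.get("region"))
    return ROUTING_TABLE.get(key, DEFAULT_QUEUE)

print(route({"document_type": "new_policy", "region": "west"}))  # policy_admin_west
print(route({"document_type": "claim", "region": "east"}))       # manual_review
```

The trade-off is the one noted above: routing rules become easy to change in the table, but richer work management behaviors, such as pushing the next best work item to a user, have to be hand-built rather than inherited from the BPMS.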

They transitioned from a very waterfall development model to a much more agile methodology over the course of their first project lifecycle, which meant that the business area was seeing the code as it was being developed and tested, allowing for much smoother iterations. Although their first production release took about nine months (after an additional six months to implement the infrastructure), they did their next release in two months. They still do a lot of swivel-chair integration with their legacy policy administration system, and need to build better integration to further improve efficiencies and start to do some straight-through processing.

They’ve seen some impressive improvements:

  • The discovery and modeling that happened prior to the implementation forced them to look at their processes critically, and reorganize their teams so that similar work was processed by the same team
  • Minimized handoffs have improved SLAs by 4%
  • Increased visibility into processes
  • Removed 180k pieces of paper per year from the operations area
  • 20% efficiency improvement
  • Standardized solution for future implementations in other areas

They also learned some lessons and best practices, such as establishing scope, selecting tools for process discovery, and bringing in the right resources at the right time. When I met with them yesterday, I mentioned Nimbus, which they had not yet looked at; they obviously found time to check it out since then, because Bijit called it out during the presentation, saying that it could have helped them with process discovery. Their next steps are to do more system integration to further improve efficiencies by automating where possible, add input channels, and integrate smart forms to drive processes.

Although they have seen a huge amount of improvement in their processes, this still feels a bit like an old-school document workflow implementation, with table-driven simple process flows. Undoubtedly, the service layer is more modern, but I’m left feeling like they could see a lot more benefit to their business if they were to take advantage of newer BPM capabilities. However, this was probably a necessary first step for such a paper-bound organization that was cautiously dipping its toe into the BPM waters.