BPM 2010 in North America For The First Time

For the past couple of years, I’ve been attending the academic/research BPM conference – BPM 2008 in Milan, BPM 2009 in Ulm – where BPM researchers from corporate research facilities and universities present papers and findings all about BPM. This is BPM of the future, and if you’re interested in where BPM is going, you should be there, too. This year, for the first time, it’s in North America, hosted by the Stevens Institute in Hoboken, NJ, which gives those of us on this side of the pond with little travel budget an opportunity to participate. Before you look at my coverage from previous years and cringe in horror at the descriptions of papers rife with statistical analysis, keep in mind that this year there will also be an industry track in addition to the research paper track, showcasing some of the more practical aspects.

If you’re a BPM vendor, you should be sending along people from your architecture and development labs who are thinking about future generations of your product: they will definitely come away with valuable ideas and contacts. You might even find yourself a smart young Ph.D. candidate with research that specifically matches your interests. If you have your own research that you’d like to share, there’s still time to submit a paper for one of the pre-conference workshops.

Vendors, you should also consider sponsoring the conference: this is a prestigious place to have your name associated with BPM, and is likely to have more lasting benefits than sponsoring your standard BPM dog-and-pony show. You can find out more about sponsorship opportunities here. Tell Michael that I sent you. 🙂

Impact Keynote: Agility in an Era of Change

Today’s keynote was focused on customers and how they are improving their processes in order to become more agile, reduce costs and become more competitive in the marketplace. After a talk and intro by Carrie Lee, business news correspondent and WSJ columnist, Beth Smith and Shanker Ramamurthy of IBM hosted Richard Ward of Blue Cross Blue Shield of Michigan, Rick Goldgar of the Texas Education Agency and Justin Snoxall of Visa Europe.

The message from yesterday continued: process is king, and is at the heart of any business improvement. This isn’t just traditional structured process management, but social and contextual capabilities, ad hoc and dynamic tasks, and interactions across the business network. As they pointed out, dynamic processes don’t lead to chaos: they deliver consistent outcomes in goal-oriented knowledge work. First of all, there are usually structured portions of any process, whether that forms the overarching framework from which collaborations are launched, or whether structured subprocesses are spawned from an unstructured dynamic process. Secondly, monitoring and controls still exist, like guardrails around your dynamic process to keep it from running off the road.

The Lombardi products are getting top billing again here today, with Blueprint (now IBM BPM Blueprint, which is a bit of a mouthful) positioned as a key collaborative process discovery and modeling tool. There’s not much new in Blueprint since the Lombardi days except for a bit of branding; in other words, it remains a solid and innovative way for geographically (and temporally) separated participants to collaborate on process discovery. Blueprint has far better capabilities than other online process discovery tools, but they are going to need to address the overlap – whether real or perceived – with the free process discovery tools including IBM BlueWorks, ARISalign, InterstageBPM and others.

Smith gave a brief demo of Blueprint, which is probably a first view for many of the people in the audience based on the tweets that I’m seeing. Ramamurthy stepped in to point out that processes are part of your larger business network: that’s the beauty of tools like Blueprint, which allow people in different companies to collaborate on a hosted web application. And since Lombardi has been touting their support of BPMN 2.0 since last September, it’s no surprise that they can exchange process models between Blueprint and process execution engines – not the full advantages of a completely model-driven environment with a shared repository, but a reasonable bridge between a hosted modeling tool and an on-premise execution tool.

For demanding transaction processing applications, however, Smith discussed WebSphere Process Server as their industrial-strength offering for handling high volumes of transactions. What’s unclear is where the Lombardi Edition (formerly TeamWorks) will fit as WPS builds out its human-centric capabilities, creating more of an overlap between these process execution environments. A year ago, I would have said that TeamWorks and WPS fit together with a minimum of overlap; now, there is a more significant overlap, and based on the WPS direction, there will be more in the future. IBM is no longer applying the “departmental” label to Lombardi, but I’m not sure that they really understand how to make these two process execution engines either work together with a minimum of overlap, or merge into a single system. Or maybe they’re just not telling.

It’s not just about process, however: there’s also predictive analytics and the use of real-time information to monitor and adjust processes, leveraging business rules and process optimization to improve processes. They talked about infusing processes with points of agility through the use and integration of rules, collaboration, content and more. As great as this sounds, this isn’t just one product, or a seamlessly-integrated suite: we’re back to the issue that I discussed with Angel Diaz yesterday, where IBM’s checklist for customers to decide which BPM products they need will inevitably end up with multiple selections.

The session ended with the IBM execs and all three customers being interviewed by Carrie Lee; she’s a skilled interviewer who has obviously done her homework, and the discussion had a good flow with a reasonable degree of interaction between the panelists. The need for business-controlled rules was stressed as a way to provide more dynamic control of processes to the business; in general, a more agile approach was seen as a way to reduce implementation time and make the systems more flexible in the face of changing business needs. Ward (from BCBS) said that they had to focus on keeping BPM as a key process improvement methodology, rather than just using TeamWorks as another application development tool, and recommended not going live with a BPMS without metrics in place to understand the benefits. That sounds like good advice for any organization finding itself going down the rabbit hole of BPMS application development when it really needs to focus on its processes.

Using Business Space for Human Workflows

Back to the breakouts for the rest of the afternoon, I attended a presentation and demo by Michael Friess of IBM’s Böblingen R&D lab on using Business Space to build user interfaces for human-centric processes.

Business Space is what I would call a mashup environment, although I think that IBM is avoiding that term because it just isn’t taken seriously in business; in other words, a portal-like composite application development environment where pre-built widgets from disparate sources can quickly be assembled into an application, with a great deal more interaction between the widgets than you would find in a simple portal. Business Space is, in fact, built on the Lotus Mashup Center infrastructure; I think they just prettied it up and gave it a more corporate-sounding name, since it bears a resemblance to the Lotus Mashup Center version that I played with a while back with the FileNet ECM widgets. It’s browser-based and is fairly clean-looking, with easy placement, resizing and configuration of widgets.

Friess considered both “traditional” (predefined structured) and dynamic human BPM, where the dynamic side includes collaboration, allowing the user to organize their own environment, and adaptive case management. Structured BPM typically has fixed user interfaces that have a specific mode of task assignment (get next, personal task list, team task list, or team-based allocation). Business Space, on the other hand, provides a semi-structured framework for BPM user interfaces where the BPM widgets can be assembled under the toolbar-like links to other spaces and pages; the widgets use REST interfaces to back-end IBM services such as WPS, Business Compass, Business Monitor, Business Fabric and ESB, as well as any other services available internally or externally via REST. Templates can be used to create pages with standard functionality, such as a vanilla BPM interface, which can then be customized to suit the specific application.
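
To make the REST-driven widget idea a bit more concrete, here’s a minimal TypeScript sketch of what a task list widget might do behind the scenes; the endpoint path, query parameters and payload shape are my own illustrative assumptions, not IBM’s actual Business Space or WPS REST API.

```typescript
// Hypothetical task-list fetch: the endpoint, parameters and payload shape are
// illustrative assumptions, not the actual Business Space/WPS REST interfaces.
interface TaskSummary {
  id: string;
  name: string;
  priority: number;
  owner?: string;
  dueDate?: string;
}

async function fetchTaskList(
  baseUrl: string,
  options: { assignedTo?: string; sortBy?: keyof TaskSummary } = {}
): Promise<TaskSummary[]> {
  const params = new URLSearchParams();
  if (options.assignedTo) params.set("assignedTo", options.assignedTo);
  if (options.sortBy) params.set("sortBy", options.sortBy); // server-side sorting
  const response = await fetch(`${baseUrl}/tasks?${params.toString()}`, {
    headers: { Accept: "application/json" },
  });
  if (!response.ok) throw new Error(`Task query failed: ${response.status}`);
  return (await response.json()) as TaskSummary[];
}
```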

Each widget can be configured for the content (which tasks and properties are visible to and editable by the user), the actions available to the user, and the display modes such as list or table view. Even if a specific user isn’t allowed to choose the widgets that appear on the page, they typically will have the ability to customize the view somewhat through built-in (server-side) filtering and sorting.

Once widgets are placed on a page and configured, they are wired together in order to create interactions between the widgets: for example, a task list widget will be wired to a task details widget so that the item selected in the list will be displayed in the details view.
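
As a rough sketch of that wiring pattern (the generic publish/subscribe idea, not Business Space’s actual wiring mechanism), a task list widget might publish a selection event that a task details widget subscribes to:

```typescript
// Generic publish/subscribe wiring between two widgets; this illustrates the
// pattern only, not how Business Space implements it internally.
type SelectionHandler = (taskId: string) => void;

class WidgetEventBus {
  private handlers: SelectionHandler[] = [];
  subscribe(handler: SelectionHandler): void {
    this.handlers.push(handler);
  }
  publish(taskId: string): void {
    this.handlers.forEach((handler) => handler(taskId));
  }
}

const taskSelected = new WidgetEventBus();

// The task details widget subscribes to the selection event...
taskSelected.subscribe((taskId) => {
  console.log(`Details widget: loading task ${taskId}`);
});

// ...and the task list widget publishes when the user clicks a row.
taskSelected.publish("task-1234");
```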

There are a number of BPM widgets available, including task list, task details, escalations list, human workflow diagram (from the process model, which will change to indicate any new collaboration tasks) and even free-form forms; these in turn allow any sort of BPM functionality such as spawning a collaboration task. Care must be taken in constructing the queries that underlie the list-type widgets, although that would be true in any user interface development that presents a list to a user; the only specific consideration here is that the mashup may not be constructed by a developer, but rather by a business analyst, which may require a developer to predefine some views or queries for use by the widgets.
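
One way to handle that handoff is for a developer to predefine named views that a business analyst can simply pick when configuring a list widget; this is just a sketch of the idea under assumed names, not the product’s actual configuration format.

```typescript
// Hypothetical developer-defined views that a business analyst references by
// name when configuring a list widget; names and structure are assumptions.
interface TaskView {
  displayName: string;
  filter: { status?: string; assignedGroup?: string };
  maxRows: number; // keep list queries bounded
  sortBy: string;
}

const predefinedViews: Record<string, TaskView> = {
  myTeamOpenTasks: {
    displayName: "My team's open tasks",
    filter: { status: "open", assignedGroup: "underwriting" },
    maxRows: 50,
    sortBy: "dueDate",
  },
  escalations: {
    displayName: "Escalated tasks",
    filter: { status: "escalated" },
    maxRows: 25,
    sortBy: "priority",
  },
};

// A list widget configured with a view name resolves it at runtime:
const view = predefinedViews["myTeamOpenTasks"];
```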

If you’ve seen any mashup environment, this is all going to look pretty familiar, but I consider that a good thing: the ability to build composite applications like this is critical in many situations where full application development can’t be justified, especially for prototype and situational applications, but also to replace the end user computing applications that your business analysts have previously built in Excel or Access. Unfortunately, I think that some professional services types feel that mashup environments and widgets are toys rather than real application development tools; that’s an unfortunate misconception, since these can be every bit as functional and scalable as writing custom Java code, and a lot more agile. You’re probably not going to use mashups and widgets for every user interface in BPM, but it should be a part of your application development toolkit.

WebSphere BPM Analyst Briefing

The second of the analyst roundtables that I attended was with Angel Diaz, VP of BPM, and Rod Favaron, who used to head up Lombardi and is now part of the WebSphere team. My biggest question for them was what’s happening (if anything) with some consolidation of the BPM portfolio; after much gnashing of teeth and avoiding of the subject, my interpretation of their answer is that there will be no consolidation, but that customers will just buy it all.

IBM has a list of 10 questions that they use with their customers to determine which BPM product(s) they need; my guess is that most customers will somehow end up requiring several products, even if the case could have been made in the past that a single one would do. Angel and Rod talked about the overlap between the products, highlighting that WPS and Lombardi have been targeted at very different applications; although that has been true in the past, the new functionality that we’re seeing in WPS for human-centric processes is creating a much greater overlap, although I would guess that Lombardi is far superior (and will remain so for some time) for that functionality, just as WPS provides greater scalability for pure orchestration processes. There’s also overlap on the modeling side between the free BlueWorks site and the on-demand Blueprint: both offer some discovery and mapping of processes, and as more functionality is added to BlueWorks, it may be difficult to justify the move to a paid service if the customer needs are minimal.

They were more comfortable talking about what was being done in order to move Lombardi fully into the WebSphere family: a single install for Lombardi and WAS; leveraging WebSphere infrastructure such as ESB; and the integration of other IBM rules, content and analytics products to provide an alternative to the previously existing third-party product interfaces used in Lombardi TeamWorks. They also discussed how the small Lombardi sales team has been integrated into the 800-strong WebSphere sales team, and is being used to train that team on how to position and sell the Lombardi products.

We had a very enjoyable session: I like both Rod and Angel, and they were pretty relaxed (except for the points when I asked if they considered FileNet to be their competitor, and mentioned that Blueprint should be Lotus rather than WebSphere), even having a competition where whichever of them said “TeamWorks” (instead of IBM WebSphere Lombardi Edition) had to throw a dollar into the pot, presumably for the beer fund. At the end of it, however, I was left with the thought – and hope – that this story will continue to evolve, and that we’ll see something a bit more consolidated, and a bit more cohesive, out of the WebSphere BPM team.

WebSphere Business Performance and Service Optimization

I sat in on a roundtable with Doug Hunt, VP of Business Performance and Service Optimization (which appears to be a fancy name for industry accelerators) and Alan Godfrey of Lombardi. Basically, BP&SO is a team within the software group (as opposed to services) that works with GBS (the services part of IBM) to build out industry vertical accelerators based on actual customer experience. In other words, these are licensed software packs that would typically be bundled with services. A BP&SO center of excellence within GBS has been launched in order to link the efforts between the two areas.

I heard a bit about the accelerators in the BPM portfolio update this morning; they’re focused on making implementation faster by providing a set of templates, adapters, event interfaces and content for a specific industry process, which can then be built out into a complete solution by GBS or a partner. In particular, the accelerators look at how collaboration, monitoring, analytics, rules and content can be used specifically in the context of the vertical use case. They’re not really focused on the execution layer, since that tends to be where the ISVs play, but rather on more prescriptive layers, such as the control layer for real-time monitoring across multiple business silos.

Interestingly, Hunt described the recently-revealed advanced case management (ACM) as a use case around which an accelerator could be developed; I’m not sure that everyone would agree with this characterization, although it may be technically closer to the truth than trying to pass off the ACM “strategy” as a product.

This trend for vertical accelerators has been around in the BPM market for a while with many other vendors, and the large analysts typically look at this as a measure of the BPMS vendor’s maturity in BPM. The WebSphere accelerators are less than a packaged application, but more than a sales tool; maybe not much more, since they were described as being suitable for an “advanced conference room pilot”. In any case, they’re being driven in part by the customers’ need to be more agile than is permitted with a structured packaged application. There’s no doubt that some highly regulated processes, such as in healthcare, may still be more suited for a packaged application, but the more flexible accelerators widen the market beyond that of the packaged applications.

WebSphere BPM Analyst Update

There was a lunchtime update for the analysts on all the new WebSphere offerings; this was, in part, a higher-level (and more business oriented) view of what I saw in the technical update session earlier.

We also saw a demo of using Cast Iron (which was just acquired by IBM this morning) to integrate an on-premise SAP system with Salesforce.com; this sort of integration across the firewall is essential if cloud platforms are going to be used effectively, since most large enterprises will have a blend of cloud and on-premise.

There’s a ton of great traffic going on at #ibmimpact on Twitter and the IBM Impact social site, and you can catch the keynotes and press conference on streaming video. Maybe a bit too much traffic, since the wifi is a bit of a disaster.

WebSphere BPM Product Portfolio Technical Update

The keynote sessions this morning were typical “big conference”: too much loud music, comedians and irrelevant speakers for my taste, although the brief addresses by Steve Mills and Craig Hayman as well as this morning’s press release showed that process is definitely high on IBM’s mind. The breakout session that I attended following that, however, contained more of the specifics about what’s happening with IBM WebSphere BPM. This is a portfolio of products – in some cases, not yet really integrated – including Process Server and Lombardi.

Some of the new features:

  • A whole bunch of infrastructure stuff such as clustering for simple/POC environments
  • WS CloudBurst Appliance supports Process Server Hypervisor Edition for fast, repeatable deployments
  • Database configuration tools to help simplify creation and configuration of databases, rather than requiring back and forth with a DBA as was required with previous versions
  • Business Space has some enhancements, and is being positioned as the “Web 2.0 interface into BPM” (a message that they should probably pass on to GBS)
  • A number of new and updated widgets for Business Space and Lotus Mashups
  • UI integration between Business Space and WS Portal
  • Webform Server removes the need for a client form viewer on each desktop in order to interact with Lotus Forms – this is huge in cases where forms are used as a UI for BPM participant tasks
  • Version migration tools
  • BPMN 2.0 support, using different levels/subclasses of the language in different tools
  • Enhancements to WS Business Modeler (including the BPMN 2.0 support), including team support, and new constructs including case and compensation
  • Parallel routing tasks in WPS (amazing that they existed this long without that, but an artifact of the BPEL base)
  • Improved monitoring support in WS Business Monitor for ad hoc human tasks
  • Work baskets for human workflow in WPS, allowing for runtime reallocation of tasks – I’m definitely interested in more details on this
  • The ability to add business categories to tasks in WPS to allow for easier searching and sorting of human tasks; these can be assigned at design time or runtime
  • Instance migration to move long-running process instances to a new process schema
  • A lot of technical implementation enhancements, such as new WESB primitives and improvements to the developer environment, that likely meant a lot to the WebSphere experts in the room (which I’m not)
  • Allowing Business Monitor to better monitor BPEL processes
  • Industry accelerators (previously known as industry content packs) that include capability models, process flows, service interfaces, business vocabulary, data models, dashboards and solution templates – note that these are across seven different products, not some sort of all-in-one solution
  • WAS and BPM performance enhancements enabling scalability
  • WS Lombardi Edition: not sure what’s really new here except for the bluewashing

I’m still fighting with the attendee site to get a copy of the presentation, so I’m sure that I’ve missed things here, but I have some roundtable and one-on-one sessions later today and tomorrow that should clarify things further. Looking at the breakout sessions for the rest of the day, I’m definitely going to have to clone myself in order to attend everything that looks interesting.

In terms of the WPS enhancements, many of the things that we saw in this session seem to be starting to bring WebSphere BPM level with other full BPM suites: it’s definitely expanding beyond being just a BPEL-based orchestration tool to include full support for human tasks and long-running processes. The question lurking in my mind, of course, is what happens to FileNet P8 BPM and WS Lombardi (formerly TeamWorks) as mainstream BPM engines if WPS can do it all in the future? Given that my recommendation at the time of the FileNet acquisition was to rip out BPM and move it over to the WebSphere portfolio, and the spirited response that I had recently to a post about customers not wanting 3 BPMSs, I definitely believe that more BPM product consolidation is required in this portfolio.

PegaWORLD: Managing Aircraft at Heathrow Airport

Eamonn Cheverton of BAA discussed the recent event-driven implementation of Pega at Heathrow airport for managing aircraft from touchdown to wheels-up at that busiest of airports. In spite of the recent interruption caused by the volcanic eruption in Iceland, Heathrow sees millions of passengers each year, yet had little operational support or information sharing between all of the areas that handle aircraft, resulting in a depressingly low (for those of us who fly through Heathrow occasionally) on-time departure rate of 68%. A Europe-wide initiative to allow for a three-fold increase in capacity while improving safety and reducing environmental effects drove a new business architecture, and had them look at more generic solutions such as BPM rather than expensive airport-specific software.

We’ll be looking more at their operations tomorrow morning in the case management workshop, but in short, they are managing aircraft air-to-air: all activities from the point that an aircraft lands until it takes off again, including fuel, crew, water, cleaning, catering, passengers and baggage handling. Interestingly, the airport has no visibility into the inbound flights until about 10 minutes before they land, which doesn’t provide the ability to plan and manage the on-ground activities very well; the new pan-European initiative will at least allow them to know when planes enter European airspace. For North Americans, this is a bit strange, since the systems across Canada and the US are sufficiently integrated that a short-haul flight doesn’t take off until it has a landing slot already assigned at the destination airport.

Managing the events that might cause a flight departure to be delayed allows for much better management of airline and airport resources, such as reducing fuel spent due to excessive taxi times. By mapping the business processes and doing some capability mapping at the business architecture level, BAA is able to understand the interaction between the activities and events, and therefore understand the impact of a delay in one area on all the others. As part of this, they documented the enterprise objects (such as flights) and their characteristics. Their entire business architecture and set of reference models are created independent of Pega (or any other implementation tool) as an enterprise architecture initiative; to the business and process architects, Pega is a black box that manages the events, rules and processes.
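
To illustrate the kind of event-to-impact reasoning described here (a much simplified model of my own, not BAA’s actual object definitions or the Pega implementation), imagine each turnaround as a set of ground activities, where a delay event on any one of them pushes out the estimated departure:

```typescript
// Simplified sketch of propagating a ground-activity delay to the estimated
// departure time; not BAA's real model or the Pega implementation.
interface GroundActivity {
  name: string;          // e.g. fuelling, catering, cleaning, boarding
  plannedMinutes: number;
  delayMinutes: number;  // updated as delay events arrive
}

interface Turnaround {
  flight: string;
  scheduledDeparture: Date;
  activities: GroundActivity[];
}

// A delay event on any activity updates that activity...
function applyDelayEvent(turnaround: Turnaround, activity: string, minutes: number): void {
  const target = turnaround.activities.find((a) => a.name === activity);
  if (target) target.delayMinutes += minutes;
}

// ...and the estimated departure slips by the worst single delay, assuming the
// activities run in parallel (a gross simplification of the real dependencies).
function estimatedDeparture(turnaround: Turnaround): Date {
  const worstDelay = Math.max(0, ...turnaround.activities.map((a) => a.delayMinutes));
  return new Date(turnaround.scheduledDeparture.getTime() + worstDelay * 60_000);
}
```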

Due in part to this initiative, Heathrow has moved from being considered the world’s worst airport to the 4th best, with the infamous “disastrous” terminal 5 now voted best in the world. They’re saving 90 liters of fuel per flight, have raised their on-time departure rate to 83%, and now involve all stakeholders in the processes as well as sharing information. In the near future, they’re planning for real-time demand capacity balancing through better integration, including coordinating aircraft movement across Europe and not just within Heathrow’s airspace. They’re also looking at physical improvements that will improve movement between terminals, such as underground baggage transport links that allow passengers to check in baggage at any terminal. Their current airport plan is based around plans for each stand, gate, person, vehicle, baggage and check-in resource; in the future, they will have a single integrated plan for the airport based on flights. They’re also adopting ideas from other industries: providing a timed entry ticket to security at the time that you check in, for example, similar to the fast-track system in theme parks. Also (which will raise some security hackles), they’re looking at tracking you on public transit on your way to the airport so that your flight can be rescheduled if your subway is delayed. With some luck, they’ll be able to solve some of the airport turnaround problems such as those I experienced in Frankfurt recently.

The tracking and management system, created using Pega, was built in about 180 days: this shows the status of arrivals, departures, turnarounds (the end-to-end process) and a real-time feed of aircraft locations on the airport property, plus historical and predictive reports on departures, arrivals and holdings. Really fascinating case study of using BPM in a non-traditional industry.

PegaWORLD: SunTrust Account Opening

Trace Fooshee, VP and Business Process Lead at SunTrust, discussed how they are using Pega to improve account opening in their wealth management area. He’s in a group that acts as internal management consultants for process transformation and related technology implementation: this covers issues ranging from lack of integration to inconsistent front-back office communications to inefficient manual work management. Some of the challenges within their wealth management account opening process were increasing costs due to inefficient and time-consuming manual processes, inconsistent processes, and poor operational and management control.

In order to address the challenges, they set a series of goals: reducing account opening time from 15 to 4 days, improving staff productivity, eliminating client setup inconsistencies, streamlining approvals, and converting 40% of maintenance requests to STP. In addition to these specific targets, they also wanted to develop enterprise account opening services and assets that could be used beyond wealth management. They approached all of this with online intent-driven forms, imaging and automated work management, online reporting and auditing, backend system integration, and standardized case management to share front and back office information.

Having some previous experience with Pega, they looked at how BPM could be applied to these issues, and found advantages in terms of flexibility, costs and other factors compared to both in-house builds and purchase of an account opening application. In considering their architecture, they classified some parts as enterprise assets, such as scanned versions of the executed trust documents that went into their FileNet enterprise content repository, versus line-of-business and business unit assets, such as specific process flows for account setup.

Using an iterative waterfall approach, they took a year to put their first pilot into production: it seems like they could have benefited from a more agile approach that would have seen incremental releases sooner, although this was seen as being fast compared to their standard SDLC. Considering that the system just went into production a couple of weeks ago, they don’t really know how many more iterations will be required – or how long each will take – to optimize this for the business. They were unable to use Pega’s Directly Capture Objectives (DCO) methodology for requirements since it conflicted with their current standard for requirements; as he discussed their SDLC and standard approaches, it appears that they’re caught in the position of many companies, where they know that they should go agile, but just can’t align their current approach to that. The trick is, of course, that they have to get rid of their current approach, but I imagine that they’re still in denial about that.

Some of the lessons that they learned:

  • Break down complex processes and implement iteratively.
  • Strong business leadership accelerates implementation.
  • Track and manage deferred requirements, and have a protocol for making decisions on which to implement and which to defer.
  • Get something working and in the hands of the users as soon as possible.

The year that they took for their first release was 3-4 months longer than originally planned: although that doesn’t sound like much, consider it as a 30-40% schedule overrun. Combining that with the lesson that they learned about putting something into the users’ hands early, moving from an iterative waterfall to an agile approach could likely help them significantly.

Their next steps include returning to their deferred requirements (I really hope that they re-validate them relative to the actual user experience with the first iteration, rather than just implementing them), expanding into other account opening areas in the bank, and leveraging more of the functionality in their enterprise content management system.

PegaWORLD: Zurich’s Operational Transformation Through BPM

Nancy Mueller, EVP of Operations at Zurich Insurance North America, gave a keynote today on their operational transformation. Zurich has 60,000 employees, 9,500 of them in North America, and serves customers in 170 countries. Due to growth by acquisition, they ended up with five separate legal entities within the US, only one of which was branded as Zurich; this tended to inhibit enterprise-wide transformation. Their business in North America is purely commercial, which tends to result in much more complex policies that require highly-skilled and knowledgeable underwriters.

She admitted that the insurance industry isn’t exactly at the forefront of technology adoption and progressive change, but that they are recognizing that change is necessary: to stay competitive, to adapt to changing economic environments, and to meet customer demands. Their focus for change is the vision of a target operating model with specific success criteria around efficiency, effectiveness and customer satisfaction. They started with process, a significant new idea for Zurich, and managing the metrics around the business processes: getting the right skills doing the right parts of the process. For example, in underwriting, there were a lot of steps being done by the highly-skilled underwriters because it was just easier than handing things off (something that I’ve seen in practice with my insurance clients), although it could be much more effective to have underwriter support people involved who can take on the tasks that don’t need to be done by an underwriter. One of the challenges was managing a paperless process: trying to get people to stop printing out emails and sending them around to drive processes – something that I still see in many financial and insurance organizations.

As they looked into their processes, they found that there were many ways to do the same process, when it should be much more structured, and ended up standardizing their processes using Lean methods in order to reduce waste and streamline processes. The result of just looking at the process was a focus on the things that their current systems didn’t do: many of the process aberrations were due to workarounds for lack of system functionality. Also, they saw the need for electronic underwriting files in order to allow collaboration during the underwriting process: as simple as that sounds, many insurance companies are just not there yet. Moving to an electronic file in turn drives the need for BPM: you need something to move that electronic file from one desk to another in order to complete that standardized underwriting process.

Once those two components of technology are in place – electronic underwriting files and BPM – portions of the process can be done by people other than underwriters without decreasing efficiency. They’re just starting to roll this out, but expect to deploy it across the organization later this year. This also provides a base for looking at other ways to be more agile and flexible in their business, not just incremental improvements in their existing processes.

So far, they are seeing improvements in quality that are being noticed by their brokers and customers: policies are being issued right the first time, requiring less return and rework, which is critical for their commercial customer base. They’ve also improved their policy renewal and delivery timeframe, required to meet commercial insurance regulations. They’re looking forward to their full roll-out later this year, and to seeing how this can help them further improve on their major performance metric, customer satisfaction.