WebSphere BPM Analyst Briefing

The second of the analyst roundtables that I attended was with Angel Diaz, VP of BPM, and Rod Favaron, who used to head up Lombardi and is now part of the WebSphere team. My biggest question for them was what’s happening (if anything) with some consolidation of the BPM portfolio; after much gnashing of teeth and avoiding of the subject, my interpretation of their answer is that there will be no consolidation, but that customers will just buy it all.

IBM has a list of 10 questions that they use with their customers to determine which BPM product(s) they need; my guess is that most customers will somehow end up requiring several products, even if the case could have been made in the past that a single one would do. Angel and Rod talked about the overlap between the products, highlighting that WPS and Lombardi have been targeted at very different applications; although that has been true in the past, the new functionality that we’re seeing in WPS for human-centric processes is creating a much greater overlap, although I would guess that Lombardi is far superior (and will remain so for some time) for that functionality, just as WPS provides greater scalability for pure orchestration processes. There’s also overlap on the modeling side between the free BlueWorks site and the on-demand Blueprint: both offer some discovery and mapping of processes, and as more functionality is added to BlueWorks, it may be difficult to justify the move to a paid service if the customer’s needs are minimal.

They were more comfortable talking about what was being done in order to move Lombardi fully into the WebSphere family: a single install for Lombardi and WAS; leveraging WebSphere infrastructure such as ESB; and the integration of other IBM rules, content and analytic products to provide an alternative to the previously existing third-party product interfaces used in Lombardi TeamWorks. They also discussed how the small Lombardi sales team has been integrated into the 800-strong WebSphere sales team, and used to train that team on how to position and sell the Lombardi products.

We had a very enjoyable session: I like both Rod and Angel, and they were pretty relaxed (except for the points when I asked if they considered FileNet to be their competitor, and mentioned that Blueprint should be Lotus rather than WebSphere), even having a competition where whichever of them said “TeamWorks” (instead of IBM WebSphere Lombardi Edition) had to throw a dollar into the pot, presumably for the beer fund. At the end of it, however, I was left with the thought – and hope – that this story will continue to evolve, and that we’ll see something a bit more consolidated, and a bit more cohesive, out of the WebSphere BPM team.

WebSphere Business Performance and Service Optimization

I sat in on a roundtable with Doug Hunt, VP of Business Performance and Service Optimization (which appears to be a fancy name for industry accelerators) and Alan Godfrey of Lombardi. Basically, BP&SO is a team within the software group (as opposed to services) that works with GBS (the services part of IBM) to build out industry vertical accelerators based on actual customer experience. In other words, these are licensed software packs that would typically be bundled with services. A BP&SO center of excellence within GBS has been launched in order to link the efforts between the two areas.

I heard a bit about the accelerators in the BPM portfolio update this morning; they’re focused on making implementation faster by providing a set of templates, adapters, event interfaces and content for a specific industry process, which can then be built out into a complete solution by GBS or a partner. In particular, the accelerators look at how collaboration, monitoring, analytics, rules and content can be used specifically in the context of the vertical use case. They’re not really focused on the execution layer, since that tends to be where the ISVs play, but rather more prescriptive, such as the control layer for real-time monitoring across multiple business silos.

Interestingly, Hunt described the recently-revealed advanced case management (ACM) as a use case around which an accelerator could be developed; I’m not sure that everyone would agree with this characterization, although it may be technically closer to the truth than trying to pass off the ACM “strategy” as a product.

This trend for vertical accelerators has been around in the BPM market for a while with many other vendors, and the large analysts typically look at this as a measure of the BPMS vendor’s maturity in BPM. The WebSphere accelerators are less than a packaged application, but more than a sales tool; maybe not much more, since they were described as being suitable for an “advanced conference room pilot”. In any case, they’re being driven in part by the customers’ need to be more agile than is permitted with a structured packaged application. There’s no doubt that some highly regulated processes, such as in healthcare, may still be more suited for a packaged application, but the more flexible accelerators widen the market beyond that of the packaged applications.

WebSphere BPM Analyst Update

There was a lunchtime update for the analysts on all the new WebSphere offerings; this was, in part, a higher-level (and more business oriented) view of what I saw in the technical update session earlier.

We also saw a demo of using Cast Iron (which was just acquired by IBM this morning) to integrate an on-premise SAP system with Salesforce.com; this sort of integration across the firewall is essential if cloud platforms are going to be used effectively, since most large enterprises will have a blend of cloud and on-premise.

There’s a ton of great traffic going on at #ibmimpact on Twitter and the IBM Impact social site, and you can catch the keynotes and press conference on streaming video. Maybe a bit too much traffic, since the wifi is a bit of a disaster.

WebSphere BPM Product Portfolio Technical Update

The keynote sessions this morning were typical “big conference”: too much loud music, comedians and irrelevant speakers for my taste, although the brief addresses by Steve Mills and Craig Hayman as well as this morning’s press release showed that process is definitely high on IBM’s mind. The breakout session that I attended following that, however, contained more of the specifics about what’s happening with IBM WebSphere BPM. This is a portfolio of products – in some cases, not yet really integrated – including Process Server and Lombardi.

Some of the new features:

  • A whole bunch of infrastructure stuff such as clustering for simple/POC environments
  • WS CloudBurst Appliance supports Process Server Hypervisor Edition for fast, repeatable deployments
  • Database configuration tools to help simplify creation and configuration of databases, rather than requiring back and forth with a DBA as was required with previous versions
  • Business Space has some enhancements, and is being positioned as the “Web 2.0 interface into BPM” (a message that they should probably pass on to GBS)
  • A number of new and updated widgets for Business Space and Lotus Mashups
  • UI integration between Business Space and WS Portal
  • Webform Server removes the need for a client form viewer on each desktop in order to interact with Lotus Forms – this is huge in cases where forms are used as a UI for BPM participant tasks
  • Version migration tools
  • BPMN 2.0 support, using different levels/subclasses of the language in different tools
  • Enhancements to WS Business Modeler, including the BPMN 2.0 support, team support, and new constructs such as case and compensation
  • Parallel routing tasks in WPS (amazing that they existed this long without that, but an artifact of the BPEL base)
  • Improved monitoring support in WS Business Monitor for ad hoc human tasks
  • Work baskets for human workflow in WPS, allowing for runtime reallocation of tasks – I’m definitely interested in more details on this
  • The ability to add business categories to tasks in WPS to allow for easier searching and sorting of human tasks; these can be assigned at design time or runtime
  • Instance migration to move long-running process instances to a new process schema
  • A lot of technical implementation enhancements, such as new WESB primitives and improvements to the developer environment, that likely meant a lot to the WebSphere experts in the room (which I’m not)
  • Allowing Business Monitor to better monitor BPEL processes
  • Industry accelerators (previously known as industry content packs) that include capability models, process flows, service interfaces, business vocabulary, data models, dashboards and solution templates – note that these are across seven different products, not some sort of all-in-one solution
  • WAS and BPM performance enhancements enabling scalability
  • WS Lombardi Edition: not sure what’s really new here except for the bluewashing

I’m still fighting with the attendee site to get a copy of the presentation, so I’m sure that I’ve missed things here, but I have some roundtable and one-on-one sessions later today and tomorrow that should clarify things further. Looking at the breakout sessions for the rest of the day, I’m definitely going to have to clone myself in order to attend everything that looks interesting.

In terms of the WPS enhancements, many of the things that we saw in this session seem to be starting to bring WebSphere BPM level with other full BPM suites: it’s definitely expanding beyond being just a BPEL-based orchestration tool to include full support for human tasks and long-running processes. The question lurking in my mind, of course, is what happens to FileNet P8 BPM and WS Lombardi (formerly TeamWorks) as mainstream BPM engines if WPS can do it all in the future? Given that my recommendation at the time of the FileNet acquisition was to rip out BPM and move it over to the WebSphere portfolio, and the spirited response that I had recently to a post about customers not wanting 3 BPMSs, I definitely believe that more BPM product consolidation is required in this portfolio.

PegaWORLD: Managing Aircraft at Heathrow Airport

Eamonn Cheverton of BAA discussed the recent event-driven implementation of Pega at Heathrow airport for managing aircraft from touchdown to wheels-up at that busiest of airports. In spite of the recent interruption caused by the volcanic eruption in Iceland, Heathrow sees millions of passengers each year, yet had little operational support or information sharing between all of the areas that handle aircraft, resulting in a depressingly low (for those of us who fly through Heathrow occasionally) on-time departure rate of 68%. A Europe-wide initiative to allow for a three-fold increase in capacity while improving safety and reducing environmental effects drove a new business architecture, and led them to look at more generic solutions such as BPM rather than expensive airport-specific software.

We’ll be looking more at their operations tomorrow morning in the case management workshop, but in short, they are managing aircraft air-to-air: all activities from the point that an aircraft lands until it takes off again, including fuel, crew, water, cleaning, catering, passengers and baggage handling. Interestingly, the airport has no visibility into the inbound flights until about 10 minutes before they land, which doesn’t provide the ability to plan and manage the on-ground activities very well; the new pan-European initiative will at least allow them to know when planes enter European airspace. For North Americans, this is a bit strange, since the systems across Canada and the US are sufficiently integrated that a short-haul flight doesn’t take off until it has a landing slot already assigned at the destination airport.

Managing the events that might cause a flight departure to be delayed allows for much better management of airline and airport resources, such as reducing fuel spent due to excessive taxi times. By mapping the business processes and doing some capability mapping at the business architecture level, BAA is able to understand the interaction between the activities and events, and therefore understand the impact of a delay in one area on all the others. As part of this, they documented the enterprise objects (such as flights) and their characteristics. Their entire business architecture and set of reference models are created independent of Pega (or any other implementation tool) as an enterprise architecture initiative; to the business and process architects, Pega is a black box that manages the events, rules and processes.
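The core idea described above – mapping the dependencies between turnaround activities so that the impact of a delay event in one area on all the others can be understood – can be sketched in a few lines of Python. To be clear, this is purely illustrative: the activity names, durations and dependencies below are invented, and BAA’s actual model lives in their business architecture and inside Pega, not in anything like this code.

```python
# Hypothetical sketch of delay-impact analysis across aircraft turnaround
# activities. Activities, durations and dependencies are invented for
# illustration only; they do not come from BAA's actual model.

from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# activity -> (duration in minutes, set of prerequisite activities)
TURNAROUND = {
    "land":     (0,  set()),
    "taxi_in":  (10, {"land"}),
    "deboard":  (15, {"taxi_in"}),
    "clean":    (20, {"deboard"}),
    "fuel":     (25, {"taxi_in"}),
    "catering": (20, {"deboard"}),
    "board":    (25, {"clean", "catering"}),
    "pushback": (5,  {"board", "fuel"}),
}

def finish_times(delays=None):
    """Earliest finish time (minutes after touchdown) for each activity,
    given optional per-activity delay events."""
    delays = delays or {}
    finish = {}
    deps_graph = {a: deps for a, (_, deps) in TURNAROUND.items()}
    for act in TopologicalSorter(deps_graph).static_order():
        duration, deps = TURNAROUND[act]
        start = max((finish[d] for d in deps), default=0)
        finish[act] = start + duration + delays.get(act, 0)
    return finish

baseline = finish_times()
delayed = finish_times({"catering": 15})  # a 15-minute catering delay event
impact = delayed["pushback"] - baseline["pushback"]
print(f"pushback slips by {impact} minutes")  # → pushback slips by 15 minutes
```

Because catering is on the critical path to boarding in this toy model, the full 15-minute delay flows through to pushback; a delay on a non-critical activity (fuel, say) would be partly or wholly absorbed. That kind of reasoning is what lets an operator decide which delay events actually threaten the departure slot.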

Due in part to this initiative, Heathrow has moved from being considered the world’s worst airport to the 4th best, with the infamous “disastrous” terminal 5 now voted best in the world. They’re saving 90 liters of fuel per flight, have raised their on-time departure rate to 83%, and now involve all stakeholders in the processes as well as sharing information. In the near future, they’re planning for real-time demand capacity balancing through better integration, including coordinating aircraft movement across Europe and not just within Heathrow’s airspace. They’re also looking at physical improvements that will improve movement between terminals, such as underground baggage transport links that allow passengers to check in baggage at any terminal. Their current airport plan is based around plans for each stand, gate, person, vehicle, baggage and check-in resource; in the future, they will have a single integrated plan for the airport based on flights. They’re also adopting ideas from other industries: providing a timed entry ticket to security at the time that you check in, for example, similar to the fast-track system in theme parks. Also (which will raise some security hackles), tracking you on public transit on your way to the airport so that your flight can be rescheduled if your subway is delayed. With some luck, they’ll be able to solve some of the airport turnaround problems such as the ones I experienced in Frankfurt recently.

The tracking and management system, created using Pega, was built in about 180 days: this shows the status of arrivals, departures, turnarounds (the end-to-end process) and a real-time feed of aircraft locations on the airport property, plus historical and predictive reports on departures, arrivals and holdings. Really fascinating case study of using BPM in a non-traditional industry.

PegaWORLD: SunTrust Account Opening

Trace Fooshee, VP and Business Process Lead at SunTrust, discussed how they are using Pega to improve account opening in their wealth management area. He’s in a group that acts as internal management consultants for process transformation and related technology implementation: this covers issues ranging from lack of integration to inconsistent front-back office communications to inefficient manual work management. Some of the challenges within their wealth management account opening process were increasing costs due to inefficient and time-consuming manual processes, inconsistent processes, and poor operational and management control.

In order to address the challenges, they set a series of goals: reducing account opening time from 15 to 4 days, improving staff productivity, eliminating client setup inconsistencies, streamlining approvals, and converting 40% of maintenance requests to STP. In addition to these specific targets, they also wanted to develop enterprise account opening services and assets that could be used beyond wealth management. They approached all of this with online intent-driven forms, imaging and automated work management, online reporting and auditing, backend system integration, and standardized case management to share front and back office information.

Having some previous experience with Pega, they looked at how BPM could be applied to these issues, and found advantages in terms of flexibility, costs and other factors compared to both in-house builds and purchase of an account opening application. In considering their architecture, they classified some parts as enterprise assets, such as scanned versions of the executed trust documents that went into their FileNet enterprise content repository, versus line-of-business and business unit assets, such as specific process flows for account setup.

Using an iterative waterfall approach, they took a year to put their first pilot into production: it seems like they could have benefited from a more agile approach that would have seen incremental releases sooner, although this was seen as being fast compared to their standard SDLC. Considering that the system just went into production a couple of weeks ago, they don’t really know how many more iterations will be required – or how long each will take – to optimize this for the business. They were unable to use Pega’s Directly Capture Objectives (DCO) methodology for requirements since it conflicted with their current standard for requirements; as he discussed their SDLC and standard approaches, it appears that they’re caught in the position of many companies, where they know that they should go agile, but just can’t align their current approach to that. The trick is, of course, that they have to get rid of their current approach, but I imagine that they’re still in denial about that.

Some of the lessons that they learned:

  • Break down complex processes and implement iteratively.
  • Strong business leadership accelerates implementation.
  • Track and manage deferred requirements, and have a protocol for making decisions on which to implement and which to defer.
  • Get something working and in the hands of the users as soon as possible.

The year that they took for their first release was 3-4 months longer than originally planned: although that doesn’t sound like much, consider it as a 30-40% schedule overrun. Combining that with the lesson that they learned about putting something into the users’ hands early, moving from an iterative waterfall to agile approach could likely help them significantly.

Their next steps include returning to their deferred requirements (I really hope that they re-validate them relative to the actual user experience with the first iteration, rather than just implementing them), expanding into other account opening areas in the bank, and leveraging more of the functionality in their enterprise content management system.

PegaWORLD: Zurich’s Operational Transformation Through BPM

Nancy Mueller, EVP of Operations at Zurich Insurance North America, gave a keynote today on their operational transformation. Zurich has 60,000 employees, 9,500 of them in North America, and serves customers in 170 countries. Due to growth by acquisition, they ended up with five separate legal entities within the US, only one of which was branded as Zurich; this tended to inhibit enterprise-wide transformation. Their business in North America is purely commercial, which tends to result in much more complex policies that require highly-skilled and knowledgeable underwriters.

She admitted that the insurance industry isn’t exactly at the forefront of technology adoption and progressive change, but that they are recognizing that change is necessary: to stay competitive, to adapt to changing economic environments, and to meet customer demands. Their focus for change is the vision of a target operating model with specific success criteria around efficiency, effectiveness and customer satisfaction. They started with process, a significant new idea for Zurich, and managing the metrics around the business processes: getting the right skills doing the right parts of the process. For example, in underwriting, there were a lot of steps being done by the highly-skilled underwriters because it was just easier than handing things off (something that I’ve seen in practice with my insurance clients), although it could be much more effective to have underwriter support people involved who can take on the tasks that don’t need to be done by an underwriter. One of the challenges was managing a paperless process: trying to get people to stop printing out emails and sending them around to drive processes – something that I still see in many financial and insurance organizations.

As they looked into their processes, they found that there were many ways to do the same process, when it should be much more structured, and ended up standardizing their processes using Lean methods in order to reduce waste and streamline processes. The result of just looking at the process was a focus on the things that their current systems didn’t do: many of the process aberrations were due to workarounds for lack of system functionality. Also, they saw the need for electronic underwriting files in order to allow collaboration during the underwriting process: as simple as that sounds, many insurance companies are just not there yet. Moving to an electronic file in turn drives the need for BPM: you need something to move that electronic file from one desk to another in order to complete that standardized underwriting process.

Once those two components of technology are in place – electronic underwriting files and BPM – portions of the process can be done by people other than underwriters without decreasing efficiency. They’re just starting to roll this out, but expect to deploy it across the organization later this year. This also provides a base for looking at other ways to be more agile and flexible in their business, not just incremental improvements in their existing processes.

So far, they are seeing improvements in quality that are being noticed by their brokers and customers: policies are being issued right the first time, requiring less return and rework, which is critical for their commercial customer base. They’ve also improved their policy renewal and delivery timeframe, required to meet commercial insurance regulations. They’re looking forward to their full roll-out later this year, and how this can help them to further improve on their major performance metric, customer satisfaction.

PegaWORLD: SmartBPM at TD Bank Financial Group

Adrian Hopkins from TD’s Visa Systems and Technology group talked about their experiences with Pega, through various major upgrades over the years and now with SmartBPM for multi-channel customer management in their call centers. TD is one of Canada’s “big five” banks, but is also the 6th-largest bank in North America due to its diverse holdings in the US as well as Canada, serving 17 million customers. They’ve been a Pega customer for quite a while; I first wrote about it back in 2006.

8-10% of TD’s workforce – 7,000 employees – are in call centers spread across 23 locations in North America and India, handling 47 million calls per year, hence their need to commoditize the agent and provide the ability to route any call to any center and have the customer’s questions answered satisfactorily. The key here is service leveling: providing the same level of service regardless of from where the call is serviced, through training, access to technology and information, and scripting. The goals were to improve service levels, increase capacity, and provide opportunities for up-selling by the agents while they have the customer on the call. TD is using BPM automation to achieve some of this – automating fulfillment and integrating disparate systems – plus providing more intuitive processes that require little training. In one example, they’ve consolidated the content and functionality of 12 different mainframe green screens into a single screen that can be used to handle a majority of the inbound calls; another allows them to process a credit card fraud claim in a single screen and a small number of steps, replacing an overly-complex manual process of 95 steps that involved managing the claim, handling the fraud and replacing the card. In the latter case, they’ve moved to a completely paperless process for handling a fraud claim, and reduced the case handling time from 7 hours down to minutes. Interestingly, they didn’t take away the old green screen methods of doing things when they deployed the new system, since some of the call center old-timers insisted that it was faster for them; however, they gradually removed the access since the new interfaces enforced rules and procedures that were not built into the green screens, generally improving quality of service.

They’re looking at savings and benefits in several areas:

  • Reduce training time: at a 10% attrition rate, saving one week of training per new employee means a savings of $750,000/year
  • Reduce callback rates, which increases customer satisfaction as well as increasing agent capacity
  • Improve compliance, and reduce the cost of achieving and proving compliance
  • Increase customer-facing time due to less follow-up paperwork, increasing agent capacity
  • Reduce secondary training requirements by guiding agents through complex inquiries, allowing less-skilled agents to handle a wider variety of calls, and reducing handoffs
  • Increase ability to drive incremental sales from every contact, or “would you like fries with that?”, although they’re not yet actively doing this in their implementation
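The training-savings figure in the first bullet is easy to sanity-check against the headcount numbers quoted earlier in the talk. The arithmetic below uses the quoted 7,000 agents, 10% attrition and $750,000 annual saving; the per-week loaded cost is derived from those figures, not something TD stated:

```python
# Sanity check on TD's quoted training saving. Headcount, attrition and the
# annual saving are from the talk; the per-week cost is derived, not quoted.

agents = 7_000        # call center employees (quoted)
attrition = 0.10      # 10% annual attrition (quoted)
annual_saving = 750_000  # saving from one less week of training (quoted)

new_hires = int(agents * attrition)          # ~700 new agents per year
cost_per_week = annual_saving / new_hires    # implied loaded weekly cost
print(new_hires, round(cost_per_week))       # 700 agents, ~$1071/week
```

An implied fully-loaded cost of roughly $1,100 per agent-week is plausible for a bank call center, so the quoted numbers hang together.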

They’re still using SmartBPM 5.3, but are looking forward to some of the new capabilities in version 6, which should reduce the amount of code that they’re writing and allow them to put more control of the business process rules in the hands of the users.

Based on the screen snapshots that we saw, however, they’re still building fairly large desktop applications; this must be impacting their agility, in spite of their Agile approach, even though it is providing a huge benefit over using the green screens directly.

PegaWORLD: Medco’s Business Transformation

Kenny Klepper, president of Medco Health, gave a keynote on their business transformation, and how Pega has played a part in that. Personalized medicine is at the core of their strategy; that concept, plus their rapid change and growth through acquisition, makes business agility a critical competency. They are looking at how to make medicine smarter through delivering pharmaceutical care as well as ongoing genetic research, particularly for patients with chronic and/or complex conditions. Since poor management of chronic and complex diseases leads to $350B in excess healthcare costs each year in the US, getting a better handle on this is important for the patients, for healthcare providers and ultimately (given the new healthcare bill in the US) for everyone in the country via the tax burden. This isn’t about reducing care: it’s about making sure that the right people are getting the right care in order to prevent future problems.

Shifting focus to agile enterprises, he talked about the necessity of agility in order to address changing requirements, and to reduce operating expenses. He sees high operational expenses as an indicator of lack of agility in an enterprise, and hence agility as a primary way to drive down operating expenses. An agile enterprise needs to have a strong underlying technology platform to allow new applications to be built and deployed with a minimum of new investment: their architecture includes a data warehouse, a data management layer, data “fabric”, a service bus, and application frameworks on top of all of that. That provides wide access to information – no one needs to reinvent data access protocols, for example – and allows business applications to be built easily and quickly (using Pega) on top of the stack. As their business becomes more complex, expanding across lines of business and new markets, they are focusing on the common core processes across those lines, and aligning centers of excellence with those horizontal business processes, not just on lines of business or technologies. This shifts the “hot spot” issues from those processes into the CoEs (e.g., an order processing CoE), providing a single source for resolving problems in similar processes across the entire organization. This, plus the fact that they allocate budgets to the joint control of business and IT (redeploying many IT people to the business-focused CoEs), enables real business-IT alignment.

Klepper noted that the more legacy-bound your organization is, the harder it is for individual operating units to achieve their goals, and the more conflict there is within IT for competing resources. Their focus, however, is on productivity: he considers that they get Agile methods for free by focusing on the ROI of reducing operational costs through the business applications that they create. To achieve this, however, a key focus for any change agent is on sustaining sponsorship: you can’t keep these initiatives going unless you can successfully sell them internally.

This wasn’t at all a technical talk on how Medco uses Pega; rather, it was a call to arms for the agile enterprise. I hope that some of the attendees, mired in legacy-bound conservative industries, got the message.

PegaWORLD: Alan Trefler keynote

Weather and Air Canada conspired against me getting to Philadelphia yesterday, but here I am at the opening keynote as Alan Trefler gives us the high-level view of Pega’s progress (including the Chordiant acquisition) and what’s coming up. Pega is one of the longest-standing BPM players, now 27 years old – although not all of that strictly in BPM, I think – which gives them a good perspective of how the industry is changing. My links post this morning was a collection of posts about adaptive case management, dynamic BPM and social BPM, and Pega is part of this trend. In fact, Trefler claimed that many of the other vendors are hopping on the agility bandwagon late (even the BPM bandwagon), or in words only.

He pointed out how many of the pure play vendors have been acquired recently, and sees this as a play by the acquiring companies to reduce choice in the market, and artificially bolster the "x% of companies run our software" claims. In his usual style, he used a giant photo of a shark on the screen behind him to illustrate this point. He made direct hits on Oracle, SAP and IBM with his comments, claiming that if consolidation results in the lowest common denominator – a common level of mediocrity – then the customers will lose out. The acquiring stack vendors end up offering a Frankenstack of products that do not integrate properly (if at all), and that so much custom code is required in order to deploy these that they become the new legacy systems, unchangeable and not able to meet the customer needs, since they require that you change your business in order to fit the software rather than the other way around.

He discussed their approach to case management, stating that a case is a metaphor for whatever you need it to be in your business, not a construct that is pre-defined by a vendor. Like the comments that I saw about the recent Process.gov conference, I think that this is also going to be a conference about adaptive case management (ACM) as well as BPM.

Pega is about to announce a set of managed services; they are already pretty cloud-friendly (the recent demo that I had from them was done on an EC2 instance, for example) since they allow for complete configuration and administration via a browser. They’ve been talking about platform as a service for over a year now, so this isn’t a big surprise, but it’s good to see something concrete rolling out.

He finished up by stating that Pega intends to be the dominant player in the space. They announced on Friday that they’ve added 114 people in the first quarter of 2010, and have just announced 11 consecutive quarters of record revenues. They will continue to invest in R&D in order to achieve and maintain this position.

Judging by the tweets that I’m already seeing, several key BPM analysts are here with me, so expect a lot of good coverage of the conference.