Ultimus: Me on the Future of BPM

Here’s the presentation that I just delivered at the Ultimus user conference:

This was the first time that I’ve given this presentation in this format, but since it’s a combination of so much that I’m already writing and talking about, it flowed pretty well. I’m writing a paper for publication right now on Enterprise 2.0 and BPM, which will expand on some of these ideas.

Ultimus: Product Road Map

Chris Adams from Ultimus product marketing gave us a brief view of the road map and product vision for the Ultimus Adaptive BPM Suite. Not surprisingly, their product manages the entire process lifecycle, and is focused on continuous process optimization. They’re a strong Microsoft shop — you’ll find them near the top of the Microsoft version of the Forrester Wave report for human-centric BPM — with SharePoint integration as well as the underlying Microsoft infrastructure support. They have a very Microsoft Office 2007 look and feel, e.g., the use of ribbon bars.

Their last major version was 8, in October 2007, but they’re still supporting V7 (90% of customers are still on that platform) and some V6. They have a migration strategy that allows you to run two servers simultaneously, gradually migrating process instances from one to another, even directly from V6 to V8.

The improvements in 8.1 were around collaboration and efficiency features; they need to spend a bit more time on some of the BPM standards, where they’re far behind, but they’re planning to implement BPMN in V8.2 in December, and BPEL and XPDL in V8.3 in spring 2009. Also coming in 8.2 are interactive process history and auditing, plus a Visual Studio plug-in for better integration into the Microsoft development environment.

V8.3 will also see the entire suite (except server components) moved to zero-footprint applications: no desktop applications, even for process modeling. They’ll be open-sourcing some of their components, as well as including some social software concepts such as presence awareness, collaboration on tasks, sharing tasks, and collecting feedback during transactional processes.

A nice segue to my talk, in which I position social computing as one of the key components of the future of BPM.

Over and above the enhancements to the core suite, they’re evolving from one product to multiple products. They see the need for a lighter suite (currently labeled as “Workflow Suite”, which likely won’t be the final name) for global markets: a low-cost (about 20% of the BPM Suite price) BPM solution with the same core engine, but some features turned off, and exposed APIs for regional partners to build applications. They’re also keeping an eye on SaaS directions, but have no announcements in that area; however, with the move to a zero-footprint suite, they’re positioning themselves well for that eventuality.

They’ll be releasing a number of templates as a starting point for new processes, including verticals such as finance and healthcare/pharmaceuticals, plus horizontal processes like human resources and IT.

They’re pushing a lot of online training for their product, which makes sense considering that they’re a relatively small company covering a large geography.

With most of the people in the room still on V7, this is a bit of a sales job to get them to move over to V8: more than 500 new features, more out-of-the-box functionality, reusable features and functions, and connectors to web services and many other data sources.

Chris will be doing a more in-depth view of this tomorrow, but this gave us a quick overview.

PegaWorld: Mashups with IAC

I thought that I should attend one technical/product breakout, since I’ve been covering customer case studies so far, and I wanted to get a closer look at Pega’s composite application development environment for internet applications, Internet Application Composer (IAC). I had a briefing on this a few months back, so I have some notes and screenshots from that time that I’ll incorporate here as well.

IAC embeds PRPC application gadgets — like a worklist — on existing web pages, allowing SmartBPM functionality to be added directly and securely. This is like having PRPC on the web: actually exposing Pega functionality as Javascript gadgets, rather than just using PRPC as a back-end system that supports a website. This is not a limited set of pre-made gadgets, but the ability to turn a UI created in PRPC into a gadget.

[Screenshot: Pega Internet Application Composer]

IAC has three main components:

  • Composite Gadget Manager, a Javascript file that you load on your web page to implement your Pega gadgets. It controls configuration settings, plus the attributes, actions and events for the gadgets.
  • An existing PRPC application, running on a web node (behind the firewall); a web node is a separate PRPC node designated specifically for handling IAC requests, where all functionality is disabled by default unless explicitly enabled for the web applications. This makes it possible to globally restrict certain types of access and functions on that node, rather than having to build that into the user interface.
  • IAC Composite Gateway, which is a servlet (in the DMZ) that manages the HTTP traffic between the Pega gadgets and the SmartBPM application on the web node.

As with most web applications that interact with behind-the-firewall systems, it’s a best practice to have the web application authenticate the user, then pass the trusted session to the Composite Gateway, which in turn passes it to the behind-the-firewall web node.
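In plain Javascript, the pattern looks something like the following sketch (the real Composite Gateway is a Java servlet, and the header, host and function names here are all hypothetical): the gateway only forwards requests that carry a session already authenticated by the web application, attaching a trusted identity for the web node.

    // Pattern sketch only: the real IAC Composite Gateway is a Java servlet
    // in the DMZ; header names, hosts and session handling here are made up.
    const http = require("http");

    function validateSessionCookie(cookie) {
      // Hypothetical: look the session up in the web application's session store
      return cookie && cookie.includes("session=") ? "alice" : null;
    }

    http.createServer((req, res) => {
      const user = validateSessionCookie(req.headers.cookie); // already authenticated by the web app
      if (!user) {
        res.writeHead(401);
        return res.end();
      }
      // Forward to the behind-the-firewall web node, passing a trusted identity
      const proxied = http.request(
        {
          host: "webnode.internal",
          path: req.url,
          method: req.method,
          headers: { ...req.headers, "x-trusted-user": user },
        },
        (upstream) => {
          res.writeHead(upstream.statusCode, upstream.headers);
          upstream.pipe(res);
        }
      );
      req.pipe(proxied);
    }).listen(8080);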

Not only can PRPC gadgets be created and exposed, but legacy applications can be wrapped in SmartBPM to expose them to the web more easily. Since the PRPC application controls the view through the gadget as well as the internal view, any changes to the PRPC UI will be reflected both internally and in the gadget, without changing the web page itself: build once, deploy everywhere.

There’s tighter security than in many consumer mashup architectures: parameters are encrypted and obfuscated within URLs, for example.

The interface within the gadgets is rich, e.g., if one parameter is changed, a related parameter may update without a screen refresh. Gadgets on a page can interoperate, so a change within one gadget may cause another gadget’s data to update, such as showing the details for a selected item: this is enabled with some advanced actions and events, and uses JSON.
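As a rough sketch of how that inter-gadget wiring might behave (my own illustration in plain Javascript; the actual IAC action/event API and its names will differ), one gadget publishes a selection event and another subscribes to it to refresh its data, with the payload passed as JSON:

    // Hypothetical publish/subscribe sketch of inter-gadget events;
    // the real IAC action/event API names will differ.
    const gadgetBus = {
      handlers: {},
      subscribe(event, fn) { (this.handlers[event] ||= []).push(fn); },
      publish(event, payload) { (this.handlers[event] || []).forEach((fn) => fn(payload)); },
    };

    // The detail gadget refreshes itself when the list gadget announces a selection.
    gadgetBus.subscribe("itemSelected", (payload) => {
      console.log("detail gadget loads item", JSON.parse(payload).id);
    });

    // The list gadget publishes the selection without a full page refresh.
    gadgetBus.publish("itemSelected", JSON.stringify({ id: 42 }));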

We looked at the actual HTML required to add a gadget to a web page: there’s a pretty small block of Javascript up front to set the configuration, then a 10-line block in a <div> structure that actually embeds the gadget.
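It looked something like the following sketch (from memory, with hypothetical script, attribute and parameter names; the literal IAC markup differs): a script include for the Composite Gadget Manager, a small configuration block, then the div that hosts the gadget.

    <!-- Hypothetical sketch of embedding an IAC gadget on a web page;
         the real script, attribute and parameter names will differ. -->
    <script src="http://gateway.example.com/prgateway/CompositeGadgetManager.js"></script>
    <script>
      // Small configuration block: where the Composite Gateway lives,
      // and which SmartBPM application to talk to on the web node.
      var pegaGadgetConfig = {
        gatewayURL: "http://gateway.example.com/prgateway",
        application: "MyWorklistApp"
      };
    </script>
    <!-- The short div block that actually embeds the gadget -->
    <div id="PegaWorklistGadget"
         data-action="getWorklist"
         data-user="currentUser">
    </div>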

IAC was released earlier this year, but a 2.0 version is available as of the end of September that includes HTTPS support, support for load-balanced web nodes and gateways, tracing tools for debugging, samples and more. If you’re a Pega customer, you can access a number of in-depth technical articles about IAC on the Pega Developer Network.

PegaWorld: Meryl Stewart and Kelly Karlen on Business-IT Collaboration at BlueCross BlueShield

Last session of the day, and Meryl Stewart and Kelly Karlen of BlueCross BlueShield of Minnesota talked about maximizing BPM value through business and IT collaboration. They established a shared business-IT objective of enabling the business to manage their frequently-changing business rules to provide agility, while still maintaining environmental stability by following the necessary change management procedures.

They’ve wrapped some procedures around their projects to explicitly call this out, as well as explicit governance layers for processes and rules. Some of this — a big part — is about well-defined roles and responsibilities, such as a business rules steward. They categorize these procedures and methods by collection, execution and optimization stages, and walked us through each of the roles in each of the stages.

In the collection stage, they have a pretty structured way to create business rules and store them in an enterprise repository; this is independent of their BPM technology, since not all processes end up being automated.

They wanted to make execution more efficient, so they combined their RUP methodology with Pega’s RUP-like methodology and lightened it up to create a more agile “RUP Lite” (although as they walked through it, it didn’t feel all that light, but it does have fairly frequent code releases). Within that methodology, they have a number of additional roles to handle the business-to-technology transformation of the execution phase, and definite rules about who can make which types of changes and who does the associated testing. There’s a level of semi-technical analyst who can do a lot of the non-coding changes.

The optimization stage is where business agility happens, but this was addressed pretty quickly and seemed to be some sort of standard change management procedure.

This definitely shows some good software development practices, but there’s nothing particularly innovative here that can’t be replicated elsewhere as long as you can get the collaboration part to work. Collaboration is primarily a function of finding people on both sides of the business-IT divide who can see over the wall to the other side, and maybe even straddle the divide somewhat with their skills.

They’ve applied the methodology to a couple of projects and have seen positive ROI, with very few coding changes since most of the process tuning can be done by business users or the semi-technical analysts. In one process, they’ve had 11 rule changes in 4 months, with resultant savings of $820k in the improved processes; if IT had been involved in these changes, only $126k of the savings could have been realized in the same timeframe due to IT project schedules — a good measure of the value of the agility provided by allowing the business to change business rules. Fundamentally, they changed an 8-week IT build cycle to 10 days or less by allowing the business to change the rules, while still following a test and deploy cycle that keeps IT happy.
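To see why cycle time dominates how much of that value gets realized, here’s a toy calculation (entirely my own illustration with made-up numbers, not BCBS of Minnesota’s data): assume each rule change delivers steady savings from its deployment date to the end of a measurement window, and compare a 10-day release cycle against an 8-week one.

    // Toy model of agility value: savings accrue only once a change is live,
    // so a shorter release cycle captures more of each change's value.
    // All numbers are made up for illustration.
    function realizedSavings(changes, cycleDays, windowDays) {
      let total = 0;
      for (const c of changes) {
        const deployDay = c.requestDay + cycleDays; // live one release cycle after it's requested
        const liveDays = Math.max(0, windowDays - deployDay);
        total += c.savingsPerDay * liveDays;
      }
      return total;
    }

    // Eleven rule changes spread over a 4-month (120-day) window
    const changes = Array.from({ length: 11 }, (_, i) => ({
      requestDay: i * 10,
      savingsPerDay: 1000,
    }));

    console.log(realizedSavings(changes, 10, 120)); // 10-day cycle: 660000
    console.log(realizedSavings(changes, 56, 120)); // 8-week (56-day) cycle: 238000

The exact ratio depends on the assumptions, but the direction matches their experience: most of the value of a rule change evaporates while it sits in a release queue.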

That’s it for today; there’s a reception, then dinner and a cruise on the Potomac to view the monuments by night. The esteemed Dr. Michael zur Muehlen will not be joining us in spite of being right across the river in Arlington; when I invited him, he gave some lame excuse about just getting back from Seoul. 😉

PegaWorld: Rod Dunlap on BlueCross BlueShield Claims

The second breakout this afternoon was on improving claims throughput, presented by Rod Dunlap, director at BlueCross BlueShield of North Carolina. Their main driver was to reduce the percentage of claims requiring manual intervention from an initial rate of 17% to a mere 2% by the end of 2009; that didn’t quite turn out to be realistic, but they are headed for 4.5% by the end of 2009. This nearly four-fold performance improvement will result in $20M in annual savings, which buys a lot of Pega licenses; more importantly, this is seen as transformational change that will reshape the corporate culture.

The obstacles are pretty typical of what I’ve seen: middle management in this conservative company was risk-averse and threatened by change, and some of the key business managers were predisposed to expect failure. There was a history of trying to use any new technology as a silver bullet, which typically ends in some sort of disaster. Furthermore, they had a traditional waterfall development methodology governed by an enterprise project office that tended to get in the way of innovation.

To address this, they developed a number of innovations:

  • Business people writing business rules: about 180 rules so far. As with any model-driven design, this eliminates translation errors, increases accuracy and reduces time to delivery. The problem, of course, is that business people don’t understand programming basics, and Dunlap admitted that Pega isn’t quite as easy to use as the brochures claim. 🙂
  • To resolve this, they built their own claims optimization framework. Claims can arrive in any format, and rules (written by the business) cause each claim to be routed to the appropriate area for processing, or rejected to the old process if it can’t yet be handled by the framework application; see the sketch following this list. The framework appears to be a set of common services, such as logging and reporting, plus a configurable application layer. They’re using IPD as their workflow engine currently, with Pega being used primarily as a rules engine, but in some cases the claims don’t have to hit the workflow engine at all, since the rules can determine the disposition of the claim. They plan to replace IPD with Pega’s BPM in the future.
  • Implementing Agile/Scrum concepts in an iterative development methodology, with 30-day sprints that result in a code release every 30 days — no schedule slippage allowed, although features may slip into the next version. Features within a sprint are prioritized based on business payback, effort and dependencies. They’ve moved away from use cases and into user stories (scenarios and personas), which is a best practice for modern user interaction design. The combination of prioritizing features by payback and the use of personas allows some edge-case features to be dropped from the schedule completely.
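As a rough illustration of the business-authored routing rules described in the second bullet above (my sketch; the rule names, conditions and structure are hypothetical, not their actual framework), each rule examines claim attributes and either dispatches the claim for automated processing or falls back to the legacy process:

    // Hypothetical sketch of business-maintained claim routing rules;
    // names and conditions are illustrative, not BCBSNC's framework.
    const routingRules = [
      { when: (c) => c.isDuplicate, route: "auto-resolve-duplicate" },
      { when: (c) => c.type === "corrected" && c.hasOriginalOnFile, route: "auto-apply-correction" },
      { when: (c) => c.amount > 100000, route: "high-value-review" },
    ];

    function routeClaim(claim) {
      for (const rule of routingRules) {
        if (rule.when(claim)) return rule.route; // first matching rule wins
      }
      return "legacy-process"; // not yet handled by the framework application
    }

    console.log(routeClaim({ isDuplicate: true }));         // "auto-resolve-duplicate"
    console.log(routeClaim({ type: "paper", amount: 50 })); // "legacy-process"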

They co-located the business and IT people on the project — key to success in Agile projects — and kept the teams small. They kicked the project off in December 2007, and completed their first sprint in February, with the framework delivered in July 2008.

With what they’ve accomplished so far, 86% of duplicate claims are handled automatically, saving 20 FTE; 48% of corrected claims do not need manual intervention, saving another 18 FTE. They’re still working on the goal of 4.5% manual touches on claims by adding automated claims adjustments and building new types of claim repairs. Then, they’ll expand beyond claims to bring the rules technology to membership, billing and finance, and start implementing BPM.

PegaWorld: Martin Venema on RBC Customer Service

This afternoon is all breakout sessions, and I started with Martin Venema of Royal Bank of Canada (the bank is a customer of mine, although it’s so huge that I’ve never dealt with his group) as he discussed improving customer service through rules-driven case management. Unfortunately, the wifi doesn’t extend to the breakout rooms so posting may be delayed, but at least I found a power outlet in this room. Also, he was introduced by someone who knew how to pronounce “Mississauga”, for extra bonus points.

RBC has implemented 3 different Pega applications, although Venema focused on only one of them, for handling client requests.

Their case for change came from problems dealing with client requests: someone in a branch can’t answer a client question, and needs to pass that question along to someone in a back office fulfillment group. However, there were 21 different ways to get to those back office groups — email, e-forms, fax, phone — and since there was no way to track the requests once they’d been sent off, there was a 20% duplication rate. A new branch employee had to figure out which group to send a request to, how to send it and which form to use: a daunting training task.

Their goals were to improve their clients’ experience, but also to make it easy for the front-line staff to manage and route client requests. For the first goal, they wanted to be able to provide clients with a service commitment that allows them to track their request and anticipate the future steps and parameters in the request process. For the second goal, they wanted a tool that could just figure out where the request should go, and provide notifications and tracking of the requests.

What they ended up with was a system that provides a single point of entry for client requests, and then uses business rules to automatically route each request to the correct group for resolution. In the fulfillment groups, it provides standardization of processes (woo hoo! someone else who says “proh-cess” instead of “praw-cess”), workload balancing across geographies, and skill-based routing.
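To make that routing concrete, here’s a miniature sketch of rules-driven, skill-based routing with workload balancing (my own illustration with hypothetical names, not RBC’s actual implementation): the request type selects a fulfillment group, and within the group, work goes to the qualified person with the lightest queue.

    // Hypothetical sketch: rules-driven routing with skill matching and
    // workload balancing; all names are illustrative, not RBC's system.
    const fulfillmentGroups = {
      "bill-payment-investigation": [
        { name: "agentA", skills: ["bill-payment-investigation"], queueLength: 3 },
        { name: "agentB", skills: ["bill-payment-investigation", "wire-transfer"], queueLength: 1 },
      ],
    };

    function routeRequest(request) {
      // Business rule: the request type determines the fulfillment group.
      const group = fulfillmentGroups[request.type] || [];
      // Skill-based routing plus workload balancing: among qualified staff,
      // pick whoever currently has the shortest queue.
      const qualified = group.filter((p) => p.skills.includes(request.type));
      qualified.sort((a, b) => a.queueLength - b.queueLength);
      return qualified.length ? qualified[0].name : "manual-triage";
    }

    console.log(routeRequest({ type: "bill-payment-investigation" })); // "agentB"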

Cost reduction, cost avoidance, increased revenue and intangible factors all contributed to their business case; interestingly, the cost factors weren’t enough, and they needed to bring in the increased revenue factors, such as enhanced customer loyalty, in order to justify the initiative.

Under cost reduction, they considered:

  • Reducing staff costs, by reducing manual interventions as well as internal tracking phone calls and emails
  • Increasing productivity and improving efficiency by reducing rerouting (sending requests to the right place the first time, based on business rules), reducing duplicate requests, and integrating back-end systems to auto-populate information into the request form
  • Reducing error handling by removing paper processes, adding validation and coaching tips (to help front-line staff to use the tools and potentially to resolve the problem on the spot), and standardizing the processes across geographic regions

Bill payment investigations, for example, eliminated 4 FTEs; other processes saw similar gains due to cost reduction factors. Due to the high turnover rates in the fulfillment groups, they didn’t need to do any mass layoffs for the first phase, although the subsequent phases may cause deeper cuts.

RBC’s marketing analytics group has some pretty good measures of how customer loyalty translates to increased revenue: a top score on loyalty (willingness to recommend) leads to a 6% increase in client profitability. Furthermore, a top score on problem resolution generates more profit, since those customers are less likely to move financial institutions, and may even increase the number of products that they hold with RBC. That means that even if a client has problems, if they’re resolved to the client’s satisfaction, then loyalty (and therefore profitability) will increase; if not, the client is much more likely to change banks.

He went through some pointers on how they engaged the business stakeholders, then discussed their incremental implementation approach. Their first implementation of CART (Client Action and Request Tool) was a single e-form that could initiate 18 different processes for personal deposit account problems. This took about eight months to develop, and when it was rolled out in May of this year, it went to 30,000 users in the branches and call centers and impacted two fulfillment groups. The next phase will ramp up to about 500 different processes. He admitted that they had a typical waterfall development methodology in the past, and had a bit of trouble moving to a completely iterative and agile approach right away; he believes that they need more of a hybrid approach to do this successfully, which matches what I see in many organizations that have a long history of waterfall-style development.

Their early results:

  • 20,000 new work objects created each month with no performance issues (I would be incredibly surprised if there were performance issues at this relatively low volume)
  • Early adoption rate of >60% of staff with no formal training
  • 20% reduction in number of requests reaching the fulfillment centers through elimination of duplicates and some requests being handled by the front-line staff member

They did learn that an iterative approach does work best in terms of selecting small, manageable deliverables. They’ve seen an increased demand for other BPM solutions now that they have something successful in production, and since the infrastructure costs are front-end loaded, subsequent implementations will be lower in cost than the first phase. Venema believes that they waited too long to involve Pega and their partner professional services, and could have accelerated the project further with earlier involvement. He also sees that they were somewhat unprepared for the second phase, and could have started some of that project work during the rollout of the first phase. They’ve created a center of excellence to ensure consistent practices and standards, including user interface standards, to allow for greater reusability.

Eventually, they’ll add in requests for other product lines, and expect to handle 3M work objects per year; meanwhile, there are other Pega projects happening within RBC. They are training up some of their own staff for Pega development, and are using Pega professional services as well as a Pega partner. Their biggest staffing challenge, which they’ve had to address by supplementing with external resources, is finding or training Pega PRPC developers; they’re also finding gathering business requirements to be very time-consuming, and want to move to the Pega methodology of directly capturing objectives instead of a more waterfall-style requirements process. He sees a big challenge going forward on the business side in terms of optimizing the processes, so that they’re not just paving the cowpaths as they automate them.

The Future of BPM

I’m doing the afternoon keynote at the Ultimus user conference on Wednesday (yes, I have to get from DC to San Antonio) on the topic of the future of BPM. If there are any particular things that you think I should include in my talk, add them as comments to this post or email/Twitter/Skype/Facebook me: in the spirit of social networking, I’ll give full credit to my crowd-sourcers.

PegaWorld: SmartBPM Vision

The morning finished with a session on Pega’s SmartBPM product vision by Kerim Akgonul and Russell Keziere. Due to overruns in the earlier sessions, they had to try to cram a 30-minute presentation into about 8 minutes.

They covered off some of the keys to BPM success within organizations:

  • Create a vision for a transparent path to success: enablers, methodology, communications, culture, etc.
  • Deliver a dynamic and compelling user experience, a common theme with many BPM vendors right now
  • Share and collaborate across the enterprise
  • Give BPM to anyone who needs it, anywhere

They then mapped some of the new functionality of SmartBPM against these requirements:

  • Platform as a service, allowing for new BPM deployments to be rolled out with much less infrastructure effort through a multi-tenanted architecture that allows a new instance — complete with pre-configured processes, rules and best practices — to be created in a few minutes
  • Distributed business edition, including:
    • Integrated Work Manager, a composite application portal environment for accessing multiple SmartBPM instances in a single environment, including a consistent UI and cross-functional prioritization of work
    • Virtual Enterprise Repository, a library of reusable processes, rules, services and other software assets, to encourage reuse and enable governance
    • Business Intelligence Exchange, allowing SmartBPM data to be consolidated into existing BI environments for analytics and reporting
    • Multi-cluster Autonomic Event Services, which provides performance monitoring and optimization across all of the SmartBPM clusters within an organization, based on advanced diagnostics and SLAs
  • Flexible layouts and UI enhancements to enrich the SmartBPM user experience with some AJAX-y goodness. Some of this is as simple as automatic alignment when components from different sources are combined on the same screen, plus a number of new rich controls such as tree structures. There’s also support for mashups, although they didn’t elaborate on this.
  • Using BPM as part of the BPM development methodology to create an accelerated, software-enabled methodology that shortens the implementation cycle and optimizes BPM resources, using:
    • Application Profiler to directly capture objectives
    • Project Management Framework for organizing, assigning and monitoring tasks as well as tracking changes and providing an audit trail

Pega continues to innovate and create tools for deploying BPM, but some of the complexity around the methodology and the system itself can give potential customers pause when they’re comparing it with systems that (appear to) have a shorter learning curve.

They ended up taking most of the intended 30 minutes anyway, but with a long lunch break up next, it should all shake out by afternoon.

Forrester Integration-Centric BPM report available

Forrester has released the 2008 version of their Wave report on integration-centric BPM suites; you can find it on Vitria’s site here (registration required).

I won’t reproduce the chart here since that always seems to get me in trouble, but suffice it to say that Software AG (the former webMethods product), IBM (various WebSphere bits), Vitria, TIBCO (ActiveMatrix), Oracle (SOA Suite; the BEA products were not evaluated due to timing relative to the acquisition), SAP (NetWeaver) and Cordys should all be very happy.

Oracle-BEA Strategy

Oracle’s been taking a bit of a beating lately, with Gartner stating that their middleware suites are “assemblies of convenience”, and that they are unlikely to offer any surprising innovations in the short term as they’re attempting to resolve the overlap and incompatibilities between the Oracle and BEA product lines. Gartner’s saying “watch this space”, but some of Oracle’s competitors are interpreting that as “they’ve got a big bunch of SOA stuff they have to integrate, and you know it’s going to hurt, so delay the pain”.

I discussed Oracle’s Borg-like acquisition of BEA back in June, and Bruce Silver recently agreed that Oracle knows how to do acquisitions right, and discussed the Oracle middleware product strategy outlined at Open World last month.

I did a review of the product strategy in the early days shortly after the acquisition, then more recently had a chance to attend an in-person briefing at a BEA “customer welcome day” in Toronto along with about 120 attendees, with Mason Ng of Oracle as the main speaker. This followed the same lines as the web briefing that I’ve already written about, with the products marked in red for “strategic” (immediate adoption, minor redesign), blue for “continue and converge” (limited development, converge/merge with another product, 9-year support cycle), and white for “maintenance” (end of life). The AquaLogic brand is being discontinued, but not (necessarily) the products; other brands, such as WebLogic, are being maintained for their marketing value.

There were some misleading comments from Ng: he stated that BPMN is for human-centric workflow BPM and BPEL is for system-centric BPM, which certainly planted the wrong message about BPMN (a graphical notation) and BPEL (a serialization/execution language) in the minds of anyone in the audience who didn’t already have an opinion on this. I’m not sure whether he doesn’t get it, or whether he wanted to create a reason for why multiple BPM products are required, but he positioned BPMN and BPEL as competing standards in his presentation; I think that he’s really talking about XPDL, since AL-BPM natively executes XPDL, which is a serialization of BPMN models.

Some mysteries remain, such as how Oracle ESB and AquaLogic Service Bus can both be considered strategic, when they are being merged into a single product, Oracle Service Bus. Realistically, both original products will be modified significantly to create OSB, but it was stated that AL-SB will be the core, with features from O-ESB rolled in. Good news for AL-SB customers, not so much for O-ESB customers. Ditto with Oracle BPM, which will be a merging of AL-BPM and Oracle BPEL Process Manager: both of the constituent products are considered “strategic” (which is supposed to mean “minor redesign” only), but they stated that the core will be AL-BPM with BPEL capabilities rolled into a single engine, which will mean major changes to Oracle BPEL Process Manager and, most likely, AL-BPM in order to create the merged product.

The attendees at this event were primarily BEA customers, which means fairly deep inside the IT organization, and not necessarily innovators. I saw a lot more old Blackberries than new iPhones. And in this conservative development environment, there’s a big perception problem as well: Oracle positions the 9 years of support for the blue products as being incredibly long, but organizations starting out on 5-year development projects (as was one audience member) see that as being just around the corner, and likely to be biting them just as they’re rolling out the last bits of the project. There’s the bigger question of why anyone is planning a 5-year portal development project, but when the guy beside me admitted that 50% of their desktops are still on Windows 2000, I started to see the gap between the starry-eyed vendors and the reality of the slow pace of enterprise development.

At the end, we had the obligatory appearance of the regional sales team — 5 white guys in suits — stating that “nothing has changed” and “it will be a good thing in the end”. In other words, resistance is futile.