Innovation World: ChoicePoint external customer solutions with BPM, BAM and ESB

I took some time out from sessions this afternoon to meet with Software AG’s deputy CTOs, Bjoern Brauel and Miko Matsumura, but I’m back for the last session of the day with Cory Kirspel, VP of identity risk management at ChoicePoint (a LexisNexis company), on how they have created externally-facing solutions using BPM, BAM and ESB. ChoicePoint screens and authenticates people for employment screening, insurance services and other identity-related purposes, plus does court document retrieval. There’s a fine line to walk here: companies need to protect the privacy of individuals while minimizing identity fraud.

Even though they only really do two things — credential and investigate people and businesses — they had 43+ separate applications on 12 platforms with various technologies in order to do this. Not only did that make it hard to do what they needed internally, but customers also wanted to integrate ChoicePoint’s systems directly into their own (with an implementation time of only 3-4 months) and to have visibility into the processes.

They were already a Software AG customer with the legacy modernization products, so took a look at their BPM, BAM and ESB. The result was better visibility, and they could leverage the tools to build solutions much faster since they weren’t building everything from the ground up. He walked us through some of the application screens that they developed for use in their customers’ call centers: allow a CSR to enter some data about a caller, select a matching identity by address, verify the identity (e.g., does the SSN match the name), authenticate the caller with questions that only they could answer, then provide a pass/fail result. The overall flow and the parameters of every screen can be controlled by the customer organization, and the whole flow is driven by a process model in the BPMS which allows them to assign and track KPIs on each step in the process.
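
To make that a bit more concrete, here’s a rough sketch of my own (not ChoicePoint’s or Software AG’s code; the step names, the hypothetical lookup objects and the thresholds are all invented) of a pass/fail verification flow where each step is timed so that a KPI can be attached to it:

```python
import time
from dataclasses import dataclass

@dataclass
class StepResult:
    name: str
    passed: bool
    duration_s: float

def verify_caller(caller, identity_store, question_bank):
    """Hypothetical pass/fail flow: select an identity, verify it, then
    authenticate with knowledge-based questions. Each step is timed so a
    KPI (e.g., handle time per step) could be reported on it in a BAM tool."""
    results = []

    def run_step(name, fn):
        start = time.monotonic()
        ok = fn()
        results.append(StepResult(name, ok, time.monotonic() - start))
        return ok

    steps = [
        ("select_identity", lambda: identity_store.match_by_address(caller) is not None),
        ("verify_identity", lambda: identity_store.ssn_matches_name(caller)),
        ("authenticate", lambda: question_bank.score(caller) >= 0.8),
    ]
    passed = all(run_step(name, fn) for name, fn in steps)  # stops at the first failing step
    return passed, results
```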

They’re also moving their own executives from the old way of keeping an eye on business — looking at historical reports — to the new way with near real-time dashboards. As well as having visibility into transaction volumes, they are also able to detect unusual situations that might indicate fraud or other situations of increased risk, and alert their customers. They found that BAM and BI were misunderstood, poorly managed and under-leveraged; these technologies could be used on legacy systems to start getting benefits even before BPM was added into the mix.

All of this allowed them to reduce the cost of ownership, which protects them in a business that competes on price, as well as offering a level of innovation and integration with their customers’ systems that their competitors are unable to achieve.

They used Software AG’s professional services, and paired each external person with an internal one in order to achieve knowledge transfer.

Software AG partner frameworks

I had lunch today at Innovation World with a Software AG partner that will be releasing one of the industry vertical frameworks (I didn’t ask if I could use their name, so am withholding it for now). They see the frameworks as a necessity to even demonstrate BPM to a vertical business, as well as providing a base on which to create a custom solution. A few bits about the partner frameworks:

  • The partner that I spoke with does not plan to productize the framework; rather, it will be used as the starting point for a custom solution for a client. This is how I see most companies implementing frameworks on top of any BPMS, with a mixed degree of success: although it’s difficult to turn some or all of a framework into a product rather than a service, especially for the services companies who normally create them, a productized framework can have advantages when it comes to maintenance and support. Customers need to be aware that a non-productized framework is really just another piece of custom code, and the long-term maintenance costs will reflect that.
  • This partner plans to retain the intellectual property of the framework and any custom code that they build on it, allowing them to roll the code developed for any customer back into the framework for resale. This is great for the industry in general, and future customers in particular, but customers would need to ensure that they specify any processes to which they do not want to give up IP rights.
  • Software AG does not provide guidelines or rules for what should or should not be in a framework, or how to create one. In their online partner forum, however, they describe the existing frameworks so that partners can get an idea of what should be in one.
  • Software AG is not certifying the partner frameworks, so customers need to do their own due diligence on the strength of the solution. Some sort of certification program would likely improve customer confidence in the third-party frameworks.

Vertical industry frameworks are definitely the new black in BPM these days: in addition to Software AG’s program of mixed internal and third-party frameworks, Savvion announced a fairly ambitious framework program with one tier of components built by Savvion and one by their partners, and TIBCO provides some vertical frameworks as a vertical marketing tool.

I’m all for providing a leg up for customers to start working with a BPMS in their industry, but we need to be clear about whether something is a true framework or a set of unsupported templates, understand the value that a framework can bring, and know the pitfalls of a framework approach. I’ve seen some pretty big BPMS implementations that went totally off track because of the use of a non-productized framework: the framework became brittle legacy custom code before it was even in production, was seriously impacted by a minor upgrade to the underlying BPMS platform, and did not allow access to recent modeling and optimization functions provided in the BPMS since it was designed and built for a previous version.

In general, I think that most “frameworks” that overlay a BPMS are actually templates, providing marketing and sales support for the underlying product in that vertical, but not providing a lot of value in terms of a code base. Those that do have a significant code base are usually not productized, hence need to be evaluated as a big chunk of custom code: although the initial purchase price is likely lower than having all that code written for you, you have to consider the ongoing maintenance costs.

Innovation World: Media and Analyst Forum

I’m spending the morning at the media and analyst forum at Software AG’s user conference, Innovation World, in Miami. The first half of the morning covered mainframe modernization, plus a presentation by Miko Matsumura (who I met last week at the Business Rules Forum), Deputy CTO, on the state of SOA adoption. He’s just published a book — more of a booklet at 86 pages — on SOA Adoption for Dummies, continuing Software AG’s trend of using the Dummies brand to push out lengthy white papers in a lighthearted format. For example, chapter 10 is SOA Rocket Science, which covers three principles of SOA:

  1. Keep the pointy end up (instrumentation)
  2. Maintain upward momentum (organization)
  3. Don’t stop until you’re in orbit (automation)

He finished up with a discourse on SOA as IT postmodernism, casting postmodernism as an architectural pattern language: given a breakdown in the dominant metanarrative and the push towards deconstructionism, a paradigm of composition emerges…

After the break, Ian Walsh from webMethods product marketing gave us an overview of the webMethods suite:

  • webMethods BPM, including process management, rules management and simulation
  • CAF (composite application framework), for codeless application design and web-based composite applications
  • BAM, including process monitoring and alerting, and predictive management

He stated that the “pure-play” BPMS vendors (mentioning Lombardi, Savvion and Pega) are having problems because they sold on the ability to allow the business to create their own processes quickly, but that doesn’t work in reality when you have to integrate complex systems. He also said that the platform vendors (Microsoft, IBM, Oracle) have confusing offerings that are not well integrated, hence take too long to implement. He mentioned TIBCO as a special case, neither pure-play nor platform, but sees their weakness as being very focused on events: good for their CEP strategy, but not good for their broader BPM/SOA strategy.

Walsh sees their strengths in both BPM and SOA as their differentiator: customers are buying both their BPM and SOA products together, not individually.

Bruce Williams was up next, speaking on BPM as the killer application for SOA. He’s a Six Sigma guy, so spent some time talking about BPM in the context of quality management initiatives: if we can manage processes well, we can achieve our business goals; in order to manage processes, we need some systems and infrastructure. He defines the killer app as being flexible and dynamic, not a fixed state or a system with unchangeable functionality. He sees BPM as being the language that can be spoken and understood by both business and IT: not the Tower of Babel created by technology-speak, but how process ties to business strategy.

Logistics are not great: they’ve billeted me in the down-market Marriott Courtyard next door rather than at the Hyatt where the conference is being held (I had to change rooms due to no hot water, can’t run the a/c at night because of the noise, and I have a view — complete with sound effects — of the I95 onramp), and there’s no wifi or power in the lecture hall. There’s supposed to be wifi, but it’s a hidden, protected network that only some people seem to be able to connect to (yes, I added it manually to my wireless network settings). They’ve promised us power at the desks and some assistance with the wifi after lunch.

In case my policy about vendor conferences isn’t crystal clear from previous posts, Software AG is paying my travel expenses to be here, although they are not compensating me for my time nor do they have any editorial control over what I write.

Business Rules Forum: Kathy Long on Process and Rules

Kathy Long, who (like me) is more from the process side than the rules side, gave a breakout session on how process and rules can be combined, and particularly how to find the rules within processes. She stated that most of the improvements in business processes don’t come from improving the flow (the inputs and outputs), but in the policies, procedures, knowledge, experience and bureaucracy (the guides and enablers): about 85% of the improvement comes from the latter category. She uses an analysis technique that looks at these four types of components:

  • Input: something that is consumed or transformed by a process
  • Guide: something that determines how, why or when a process occurs, but is not consumed
  • Output: something that is produced by or results from a process
  • Enabler: something used to perform a process

There’s quite a bit of material similar to her talk last year (including the core case study); I assume that this is the methodology that she uses with clients, hence it doesn’t change often. Rules fall into the “guides” category, that is, the policies and procedures that dictate how, why and when a process occurs. I’m not sure that I get the distinction that she’s making between the “how” in her description of guides and the “how” that’s embedded within process flows; I typically think of policies as business rules, and procedures as business processes, rather than both policies and procedures being rules. Her interpretation is that policies aren’t actionable, but need to be converted to procedures, which are actionable; since rules are, by their nature, actionable, that’s what gets converted to rules. However, the examples of rules that she provided (“customer bill cannot exceed preset limit”) seem to be more policies than procedures to me.

In finding the rules in the process, she believes that we need to start at the top, not at the lowest atomic level: in other words, you don’t want to go right to the step level and try to figure out what rules to create to guide that step; you want to start at the top of the process and figure out if you’re even doing the right higher-level subprocesses and tasks, given that you’re implementing rules to automate some of the decisions in the process.

The SBVR (Semantics of Business Vocabulary and Business Rules) standard defines the difference between rules and advice, and breaks down rules into business rules and structural rules. From there, we end up with structural business rules — which are criteria for making decisions, and can’t be violated — and operative business rules — which are guides for conduct or action, but can be violated (potentially with a penalty, e.g., an SLA). Structural rules might be more what you think of as business rules, that is, they are the underpinning for automated decisions, or are a specific computation. On the other hand, operative business rules may be dictated by company policy or external regulation, but may be overridden; or they may represent a threshold at which an alert will be raised or a process escalated.
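
To make the distinction concrete, here’s a toy illustration of my own (not from the session, and not tied to any BRMS product): a structural rule expressed as a computation that simply defines its result, versus an operative rule expressed as a threshold check that can be violated and escalated:

```python
def credit_limit(income: float, score: int) -> float:
    """Structural business rule (illustrative): a decision criterion or
    computation that can't be 'violated', it simply defines the result."""
    return min(income * 0.2, 10_000 if score >= 700 else 2_500)

def check_bill(bill_total: float, preset_limit: float) -> list[str]:
    """Operative business rule (illustrative): conduct that *can* be violated,
    so the outcome is an alert or escalation rather than a hard computation."""
    alerts = []
    if bill_total > preset_limit:
        alerts.append(f"bill {bill_total} exceeds preset limit {preset_limit}: escalate")
    return alerts
```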

She recommends documenting rules outside the process, since the alternative is to build a decision tree into your process flow, which gets really ugly. I joked during my presentation on Tuesday that the process bigots would include all rules as explicit decision trees within the BPMS; the rules bigots would have a single step in the entire process in the BPMS, and that step would call the BRMS. Obviously, you have to find the right balance between what’s in the process map and what’s in the rules/decision service, especially when you’re creating them in separate environments.
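
As a rough sketch of what that balance can look like (again my own illustration; the decision service URL, payload and outcome fields are hypothetical, not any particular BRMS API), the process keeps a simple flow and a single step delegates the decision to an externally-managed service:

```python
import json
from urllib import request

def decide(case_data: dict, service_url: str = "https://rules.example.com/decide") -> dict:
    """Single 'decision' step in the process: ship the case data to the
    rules/decision service and hand back the returned outcome."""
    req = request.Request(
        service_url,
        data=json.dumps(case_data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

def handle_claim(case_data: dict) -> str:
    outcome = decide(case_data)   # the rules live in the BRMS, not in the flow
    if outcome.get("approved"):
        return "pay_claim"        # next step in the process map
    return "manual_review"        # the escalation path stays in the process map
```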

The biggest drawback of the presentation was that Long used a case study scenario to show the value of separating rules from process, but described it in large blocks of text on her slides, which she just read aloud to us. She added a lot of information as she went along, but any guideline on giving a presentation tells you not to put a ton of text on your slides and just read it, for very good reasons: the audience tends to read the slides instead of listening to you. She might want to consider the guides that are inherent in the process of taking a case study and turning it into a presentation.

A brilliant recommendation that she ended with is to create appropriate and consistent rules across the enterprise, then let the business design their own process. Funny how some of us who are practitioners in BPM (whether at the management consulting or implementation end of things) are the biggest critics of BPM, or specifically, we see the value of using rules for agility because process often doesn’t deliver on its promises. I’ve made the statement in two presentations within the last week that BPMS implementations are becoming the new legacy systems — not (purely) because of the capability of the products, but because of how organizations are deploying them.

Business Rules Forum: James Taylor and Neil Raden keynote

Opening the second conference day, James Taylor and Neil Raden gave a keynote about competing on decisions. First up was James, who started with a definition of what a decision is (and isn’t), speaking particularly about operational decisions that we often see in the context of automated business processes. He made a good point that your customers react to your business decisions as if they were deliberate and personal to them, when often they’re not; James’ premise is that you should be making these decisions deliberate and personal, providing the level of micro-targeting that’s appropriate to your business (without getting too creepy about it), but that there’s a mismatch between what customers want and what most organizations provide.

Decisions have to be built into the processes and systems that manage your business, so although business may drive change, IT gets to manage it. James used the term “orthogonal” when talking about the crossover between process and rules; I used the same expression in a discussion with him yesterday about how processes and decisions should not be dependent upon each other: if a decision and a process are interdependent, then you’re likely dealing with a process decision that should be embedded within the process, rather than a business decision.

A decision-centric organization is focused on the effectiveness of its decisions rather than aggregated, after-the-fact metrics; decision-making is seen as a specific competency, and resources are dedicated to making those decisions better.

Enterprise decision management, as James and Neil now define it, is an approach for managing and improving the decisions that drive your business:

  • Making the decisions explicit
  • Tracking the effectiveness of the decisions in order to improve them
  • Learning from the past to increase the precision of the decisions
  • Defining and managing these decisions for consistency
  • Ensuring that they can be changed as needed for maximum agility
  • Knowing how fast the decisions must be made in order to match the speed of the business context
  • Minimizing the cost of decisions

Using an airline pilot analogy, he discussed how business executives need a number of decision-related tools to do their job effectively:

  • Simulators (what-if analysis), to learn what impact an action might have
  • Auto-pilot, so that their business can (sometimes) work effectively without them
  • Heads-up display, so they can see what’s happening now, what’s coming up, and the available options
  • Controls, simple to use but able to control complex outcomes
  • Time, to be able to take a more strategic look at their business

Continuing on the pilot analogy, he pointed out that the term dashboard is used in business to really mean an instrument cluster: display, but no control. A true dashboard must include not just a display of what’s happening, but controls that can impact what’s happening in the business. I saw a great example of that last week at the Ultimus conference: their dashboard includes a type of interactive dial that can be used to temporarily change thresholds that control the process.

James turned the floor over to Neil, who dug further into the agility imperative: rethinking BI for processes. He sees that today’s BI tools are insufficient for monitoring and analyzing business processes, because of the agile and interconnected nature of these processes. This comes through in the results of a survey that they did about how often people are using related tools: the average hours per week that a marketing analyst spends using their BI tool was 1.2, versus 17.4 for Excel, 4.2 for Access and 6.2 for other data administration tools. I see Excel everywhere in most businesses, whereas BI tools are typically only used by specialists, so this result does not come as a big surprise.

The analytical needs of processes are inherently complex, requiring an understanding of the resources involved and process instance data, as well as the actual process flow. Processes are complex causal systems: much more than just that simple BPMN diagram that you see. A business process may span multiple automated (monitored) processes, and may be created or modified frequently. Stakeholders require different views of those processes; simple tactical needs can be served by BAM-type dashboards, but strategic needs — particularly predictive analysis — are not well-served by this technology. This is beyond BI: it’s process intelligence, where there must be understanding of other factors affecting a process, not just measuring the aggregated outcomes. He sees process intelligence as a distinct product type, not the same as BI; unfortunately, the market is being served (or not really served) by traditional query-based approaches against a relatively static data model, or what Neil refers to as a “tortured OLAP cube-based approach”.

What process intelligence really needs is the ability to analyze the timing of the traffic flow within a process model in order to provide more accurate flow predictions, while allowing for more agile process views that are generated automatically from the BPMN process models. The analytics of process intelligence are based on the process logs, not pre-determined KPIs.
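
As a minimal, hypothetical illustration of what “analytics from the process logs” might mean in practice (the event field names are my own assumption, not any vendor’s log schema), per-step timing can be derived directly from raw instance events rather than from pre-defined KPIs:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

def step_durations(events: list[dict]) -> dict[str, float]:
    """events: [{'instance': 'A-1', 'step': 'verify', 'start': iso, 'end': iso}, ...]
    Returns the average duration (seconds) per step across all instances,
    computed straight from the process log rather than from fixed KPIs."""
    per_step = defaultdict(list)
    for e in events:
        start = datetime.fromisoformat(e["start"])
        end = datetime.fromisoformat(e["end"])
        per_step[e["step"]].append((end - start).total_seconds())
    return {step: mean(times) for step, times in per_step.items()}
```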

Neil ended by tying this back to decisions: basically, you can’t make good decisions if you don’t understand how your processes work in the first place.

Interesting that James and Neil deal with two very important aspects of business processes: James covers decisions, and Neil covers analytics. I’ve done presentations in the past on the crossover between BPM, BRM and BI; but they’ve dug into these concepts in much more detail. If you haven’t read their book, Smart Enough Systems, there’s a lot of great material in there on this same theme; if you’re here at the forum, you can pick up a copy at their table at the expo this afternoon.

Business Rules Forum: Mixing Rules and Process

I had fun with my presentation on mixing rules and process, and it was a good tweetup (meeting arranged via Twitter) opportunity: Mike Kavis sat in on the session, Miko Matsumura of Software AG caught up with me afterwards, and James Taylor even admitted to stepping in for the last few minutes.

I’ve removed most of the screen snapshots from the presentation since they don’t make any sense without the discussion; the text itself is pretty straightforward and, in the end, not all that representative of what I talked about. I guess you just had to be there.

Ultimus: V8 technical demo

I got wrapped up in a discussion at the break that had me arrive late to the last session of the day; Steve Jones of Ultimus is going through many of the technical underpinnings of V8 for designers and developers, particularly those that are relevant to the people in the audience who will be upgrading from those old V7 systems soon.

There’s a nice way to integrate with web services, where the WSDL can be interrogated and a data structure matching the interface parameters created directly from it; most other systems that I’ve seen require that you define the process parameters explicitly, then map from one to the other. Of course, there are lots of cases where you don’t want a full representation of the web services interface, or where you want to filter or combine parameters at the interface, but this gives you the option of setting up a lot of web services really quickly.
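
This isn’t Ultimus code, but as a rough illustration of the same idea, the Python zeep library can interrogate a WSDL and expose its operations and message types without you hand-defining the parameter structures first; the endpoint and operation names below are placeholders:

```python
from zeep import Client

# Hypothetical endpoint: zeep reads the WSDL and builds the types and operations from it.
client = Client("https://example.com/IdentityService?wsdl")
client.wsdl.dump()  # prints the operations and message types discovered from the WSDL

# Once the types are known, calls go against the generated proxy;
# the operation and parameter names here are invented for illustration.
# result = client.service.VerifyIdentity(FirstName="Jane", LastName="Doe")
```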

The integrated rules editor allows you to drag and drop process variables — including recipients — onto a graphical decision tree; you don’t have the full power of a business rules system, but this may be enough for a lot of human-centric processes where most of the complex decisions in the process are made by people rather than the system.

For interfacing with any of the external components, such as the email connector or a form, it’s possible to drag and drop data fields from the process instance schema or org chart/ActiveDirectory directly to assign variables for that component, which is a pretty intuitive way to make the link between the data sources and the external calls. They’ve also eliminated some of the coding required for things like getting the current user’s supervisor’s email address, which used to require a bit of code in V7.

Ultimus provides a virtual machine with the software pre-installed as part of their training offerings, which is a great way to learn how to work with all of this; I don’t understand why more vendors don’t provide this to their customers.

I looked back at some old notes from early 2007 when I had a demo of Ultimus V7; my impression at that time was that it was very code-like, with very little functionality that was appropriate for business analysts; V8 looks like a significant improvement over this. They’re still behind the curve relative to many of their competitors, but that’s not completely surprising considering their management upheavals over the past year. If you’re a pure Microsoft shop, however, you’ll likely be willing to overlook some of those issues; Forrester placed Ultimus in the leaders sector (in an admittedly small field) in their report on human-centric BPM on Microsoft platforms. In the broader market of all BPM vendors, Gartner placed them in the visionaries quadrant: good completeness of vision, but not quite enough ability to execute to make it into the leaders quadrant, although this latter assessment seemed to be based on the performance of the previous management team.

Steve spent a bit of time showing the V8 end-user interface: reconfigurable columns in task lists, including queries and filters; shared views to allow a personal view to be shared with another user (and allow that other user to complete work on your behalf); and the ability to run reports directly out of the standard user environment, not a separate interface.

They’ve also done some performance improvements, such as moving completed process instances to a separate set of tables (or even archived out to another database) for historical reporting without impacting the performance of work in progress.

That’s it for me for the conference (and the week); tonight, we’ll be down by the Riverwalk drinking margaritas while listening to a Mariachi band. Tomorrow is an Ultimus partner day and I’ll be on an early morning flight home. Next week, I’ll be at the Business Rules Forum in Orlando, where I’m giving a presentation on mixing rules and process. The following week, I’m headed to Miami for the Software AG analyst/blogger roundtable and a day at their user conference, a late addition to my schedule.

Ultimus: Process optimization

Chris Adams is back to talk to us about process optimization, both as a concept and in the context of the Ultimus tools available to assist with this. I’m a bit surprised with the tone/content of this presentation, in which Chris is explaining why you need to optimize processes; I would have thought that anyone who has bought a BPMS probably gets the need for process optimization.

The strategies that they support:

  • Classic: updating your process and republishing it without changing work in progress
  • Iterative: focused and more specific changes updating live process instances
  • Situational/temporary: managers changing the runtime logic (really, the thresholds applied using rules) in live processes, such as changing an approval threshold during a month-end volume increase; see the sketch after this list
  • Round-trip optimization: comparing live data against modeling result sets in simulation
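
Here’s the sketch I promised for the situational/temporary strategy: a toy illustration of my own (not Ultimus Director) where live process instances read a threshold that a manager can change at runtime, rather than a value baked into the process definition:

```python
# Shared, mutable configuration that a manager (or a dashboard control) can change.
runtime_config = {"approval_threshold": 1_000.0}

def route_invoice(amount: float) -> str:
    """Each in-flight instance consults the current threshold when it reaches this step."""
    if amount > runtime_config["approval_threshold"]:
        return "manager_approval"
    return "auto_approve"

# Month-end spike: the manager temporarily raises the threshold...
runtime_config["approval_threshold"] = 5_000.0
# ...and instances evaluated afterwards follow the new routing without republishing the process.
print(route_invoice(3_200.0))  # -> auto_approve
```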

There’s a number of tools for optimizing and updating processes:

  • Ultimus Director, allowing a business manager to change the rules in active processes
  • Studio Client, the main process design environment, which allows for versioning each artifact of a process; it also allows changes to be published back to update work in progress
  • iBAM, providing visibility into work in progress; it’s a generic dashboarding tool that can also be used for visualization of other data sets, not just Ultimus BPM instance data

He finished up with some best practices:

  • Make small optimizations to the process and update often, particularly because Ultimus allows for the easy upgrade of existing process instances
  • Use Ultimus Director to get notifications of
  • Use Ultimus iBAM interactive dials to allow executives to make temporary changes to rule thresholds that impact process flow

There was a great question from the audience about the use of engineering systems methodology in process optimization, such as the theory of constraints; I don’t think that most of the vendors are addressing this explicitly, although the ideas are creeping into some of the more sophisticated simulation products.

Ultimus: V8 Technical Deep Dive

Chris Adams is back for a somewhat longer session — I think that he zipped through the previous overview session in about 5 minutes to make up time on the schedule — to give us a lot more detail on the V8 product features. Some of this will only be of interest to Ultimus customers, but I find that it gives some good insight into how the product works and the directions that they’re taking.

First, he discussed what’s already in the released 8.x product:

  • Flobot connectors are now reusable. “Flobots” are the Ultimus connectors to other systems, with about 10 types available out of the box including web services calls (and I now have a very cool Flobot USB key); previously, you had to reconfigure each connector for every use. For example, for the email connector, you had to set up all of its parameters (ports, authentication, etc.) in each place it was used in the process, and change it whenever there was a change to, for example, the recipient. Now, they allow for a reusable connector that has some or all of the parameters predefined, so that it can be dropped into a process more easily (see the sketch after this list).
  • XML data storage replaces the V7 spreadsheet data structure that was previously used (which previously limited each data element to 255 characters, a limit that I sense from the audience was a sore point). My first reaction was “you used to keep your process instance data in a spreadsheet?”; sometimes you only find out about weirdnesses in a product when you hear about their upgrade out of that state.
  • A new Ultimus rules engine replaces event conditions, with a graphical representation of the rules. Rules actions can be related to steps in the process, or call .Net code or web services. Previously, the event conditions were kept in the spreadsheet data structure, and you had to reference the spreadsheet cell address rather than a schema variable name within rules. Now, you can add rules to processes directly in-line using the process parameters in the rule definitions.
  • Native ActiveDirectory support, so that you can (for example) assign a step to a group that exists in AD. You can still use their org chart functionality to create groups directly in Ultimus.
  • Attachments to process instances have been moved off the BPM server, and into SharePoint. You can use another content repository, but they do SharePoint out of the box and feel that it’s the best integrated solution.
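
And here’s the sketch mentioned in the Flobot item above: a loose analogy of my own (not Ultimus code) for the reusable-connector idea, where the invariant parameters are configured once and the connector is then reused at each step with only the step-specific values:

```python
from functools import partial

def send_email(host: str, port: int, use_tls: bool, to: str, subject: str, body: str):
    """Stand-in for an email connector; a real one would open an SMTP session."""
    print(f"[{host}:{port} tls={use_tls}] -> {to}: {subject}")

# Reusable connector: the server settings are predefined once, as if stored in a repository...
corporate_mail = partial(send_email, "mail.example.com", 587, True)

# ...and the connector is reused at each process step with only the per-step parameters.
corporate_mail(to="approver@example.com", subject="Approval needed", body="Case 123")
```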

Coming up in 8.2 in December:

  • BPMN support, although you can still convert back and forth to the Ultimus shapes if you’re more familiar with them. He showed a screenshot that looked pretty rudimentary, but it’s not released yet so I’ll reserve judgement until I see the final version.
  • Increased visibility into process incident history, to be able to step through exactly what happened in any particular process instance, including which rules fired. You can actually play back
  • Enhanced development environment by adding Ultimus awareness to Microsoft Visual Studio for a single environment.
  • Fully exposed APIs, that is, access to the same APIs that the out of the box system is built on to allow you to build the same functionality into your own custom applications, with any function that you see in a pop-up menu also available through an API.

He showed us some architecture diagrams showing their new open architecture, including the client services for building custom client applications, BI services for custom reporting applications, and Flobots for external connectors.

Ultimus: V8 Introduction

Wow, I could have had a V8!

Okay, now that that’s out of my system, Chris Adams (VP Product Management) was up to give us an overview of the V8 release, which will be an intro to the deep dive session that’s coming up next. V8 isn’t brand new — 8.0 was released in October 2007, 8.1 in July 2008, and 8.2 is coming in December — but most of their customers haven’t moved to it yet.

Key differences:

  • Moved from the Tidestone spreadsheet data model to XML
  • Providing reusable connectors in a repository for linking to other systems rather than having to retrain Flobots
  • ActiveX controls changed to .Net
  • Changed event conditions to their rules engine
  • Attachments do not need to be kept on the BPM server, but can be stored in SharePoint
  • Native use of ActiveDirectory rather than having to build an org chart first

As a Microsoft partner, they have a strong focus on MS-Office/MOSS 2007, including MS-Office Flobots included directly in the processes: that means that an Excel spreadsheet (for example) can be used as the UI at a step in the process instead of using a custom web form.

I’m going to stick around for the deep dive session next, so more detail to come.