Innovation World: ChoicePoint external customer solutions with BPM, BAM and ESB

I took some time out from sessions this afternoon to meet with Software AG’s deputy CTOs, Bjoern Brauel and Miko Matsumura, but I’m back for the last session of the day with Cory Kirspel, VP of identity risk management at ChoicePoint (a LexisNexis company), on how they have created externally-facing solutions using BPM, BAM and ESB. ChoicePoint screens and authenticates people for employment screening, insurance services and other identity-related purposes, plus does court document retrieval. There’s a fine line to walk here: companies need to protect the privacy of individuals while minimizing identity fraud.

Even though they only really do two things — credential and investigate people and businesses — they had 43+ separate applications on 12 platforms with various technologies in order to do this. Not only did that make it hard to do what they needed internally, but customers also wanted to integrate ChoicePoint’s systems directly into their own, with an implementation time of only 3-4 months, and to have visibility into the processes.

They were already a Software AG customer through the legacy modernization products, so they took a look at Software AG’s BPM, BAM and ESB. The result is that they had better visibility, and could leverage the tools to build solutions much faster since they weren’t building everything from the ground up. He walked us through some of the application screens that they developed for use in their customers’ call centers: allow a CSR to enter some data about a caller, select a matching identity by address, verify the identity (e.g., does the SSN match the name), authenticate the caller with questions that only they could answer, then provide a pass/fail result. The overall flow and the parameters of every screen can be controlled by the customer organization, and the whole flow is driven by a process model in the BPMS, which allows them to assign and track KPIs on each step in the process.
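
To make the shape of that flow concrete, here’s a minimal sketch in Python; the step names, checks and KPI structure are hypothetical illustrations, not ChoicePoint’s actual implementation (which is driven by a BPMS process model rather than code):

    import time

    def verify_caller(caller_data, steps):
        """Run an ordered list of verification steps, timing each one so that
        per-step KPIs (e.g. average duration, failure rate) can be tracked,
        and return an overall pass/fail result."""
        kpi_log = []
        for name, step in steps:
            start = time.monotonic()
            passed = step(caller_data)
            kpi_log.append({"step": name,
                            "duration_s": time.monotonic() - start,
                            "passed": passed})
            if not passed:                      # fail fast on any failed step
                return "fail", kpi_log
        return "pass", kpi_log

    # Illustrative steps only; real checks would call identity services.
    steps = [
        ("select_identity", lambda d: bool(d.get("address"))),
        ("verify_identity", lambda d: d.get("ssn_matches_name", False)),
        ("authenticate",    lambda d: d.get("kba_score", 0) >= 3),
    ]

    result, kpis = verify_caller(
        {"address": "123 Main St", "ssn_matches_name": True, "kba_score": 4},
        steps)
    print(result, kpis)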

They’re also moving their own executives from the old way of keeping an eye on business — looking at historical reports — to the new way with near real-time dashboards. As well as having visibility into transaction volumes, they are also able to detect unusual situations that might indicate fraud or other situations of increased risk, and alert their customers. They found that BAM and BI were misunderstood, poorly managed and under-leveraged; these technologies could be used on legacy systems to start getting benefits even before BPM was added into the mix.

All of this allowed them to reduce the cost of ownership, which protects them in a business that competes on price, as well as offering a level of innovation and integration with their customers’ systems that their competitors are unable to achieve.

They used Software AG’s professional services, and paired each external person with an internal one in order to achieve knowledge transfer.

Business Rules Forum: James Taylor and Neil Raden keynote

Opening the second conference day, James Taylor and Neil Raden gave a keynote about competing on decisions. First up was James, who started with a definition of what a decision is (and isn’t), speaking particularly about operational decisions that we often see in the context of automated business processes. He made a good point that your customers react to your business decisions as if they were deliberate and personal to them, when often they’re not; James’ premise is that you should be making these decisions deliberate and personal, providing the level of micro-targeting that’s appropriate to your business (without getting too creepy about it), but that there’s a mismatch between what customers want and what most organizations provide.

Decisions have to be built into the processes and systems that manage your business, so although business may drive change, IT gets to manage it. James used the term “orthogonal” when talking about the crossover between process and rules; I used this same expression in a discussion with him yesterday about how processes and decisions should not be dependent upon each other: if a decision and a process are interdependent, then you’re likely dealing with a process decision that should be embedded within the process, rather than a business decision.

A decision-centric organization is focused on the effectiveness of its decisions rather than aggregated, after-the-fact metrics; decision-making is seen as a specific competency, and resources are dedicated to making those decisions better.

Enterprise decision management, as James and Neil now define it, is an approach for managing and improving the decisions that drive your business:

  • Making the decisions explicit
  • Tracking the effectiveness of the decisions in order to improve them
  • Learning from the past to increase the precision of the decisions
  • Defining and managing these decisions for consistency
  • Ensuring that they can be changed as needed for maximum agility
  • Knowing how fast the decisions must be made in order to match the speed of the business context
  • Minimizing the cost of decisions

Using an airline pilot analogy, he discussed how business executives need a number of decision-related tools to do their job effectively:

  • Simulators (what-if analysis), to learn what impact an action might have
  • Auto-pilot, so that their business can (sometimes) work effectively without them
  • Heads-up display, so they can see what’s happening now, what’s coming up, and the available options
  • Controls, simple to use but able to control complex outcomes
  • Time, to be able to take a more strategic look at their business

Continuing on the pilot analogy, he pointed out that the term dashboard is used in business to really mean an instrument cluster: display, but no control. A true dashboard must include not just a display of what’s happening, but controls that can impact what’s happening in the business. I saw a great example of that last week at the Ultimus conference: their dashboard includes a type of interactive dial that can be used to temporarily change thresholds that control the process.

James turned the floor over to Neil, who dug further into the agility imperative: rethinking BI for processes. He sees that today’s BI tools are insufficient for monitoring and analyzing business processes, because of the agile and interconnected nature of these processes. This comes through in the results of a survey that they did about how often people are using related tools: the average hours per week that a marketing analyst spends using their BI tool was 1.2, versus 17.4 for Excel, 4.2 for Access and 6.2 for other data administration tools. I see Excel everywhere in most businesses, whereas BI tools are typically only used by specialists, so this result does not come as a big surprise.

The analytical needs of processes are inherently complex, requiring an understanding of the resources involved and process instance data, as well as the actual process flow. Processes are complex causal systems: much more than just that simple BPMN diagram that you see. A business process may span multiple automated (monitored) processes, and may be created or modified frequently. Stakeholders require different views of those processes; simple tactical needs can be served by BAM-type dashboards, but strategic needs — particularly predictive analysis — are not well-served by this technology. This is beyond BI: it’s process intelligence, where there must be understanding of other factors affecting a process, not just measuring the aggregated outcomes. He sees process intelligence as a distinct product type, not the same as BI; unfortunately, the market is being served (or not really served) by traditional query-based approaches against a relatively static data model, or what Neil refers to as a “tortured OLAP cube-based approach”.

What process intelligence really needs is the ability to analyze the timing of the traffic flow within a process model in order to provide more accurate flow predictions, while allowing for more agile process views that are generated automatically from the BPMN process models. The analytics of process intelligence are based on the process logs, not pre-determined KPIs.
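
To make that concrete, here’s a minimal sketch in Python that derives transition timings directly from a process log; the three-column log format (case id, activity, timestamp) is an assumed simplification of what a BPMS actually writes:

    from collections import defaultdict
    from datetime import datetime
    from statistics import mean

    # Assumed log format: (case_id, activity, ISO timestamp).
    log = [
        ("c1", "Receive", "2008-10-27T09:00:00"),
        ("c1", "Review",  "2008-10-27T09:20:00"),
        ("c1", "Approve", "2008-10-27T11:00:00"),
        ("c2", "Receive", "2008-10-27T09:05:00"),
        ("c2", "Review",  "2008-10-27T10:45:00"),
    ]

    # Group events by case, in time order.
    cases = defaultdict(list)
    for case_id, activity, ts in log:
        cases[case_id].append((datetime.fromisoformat(ts), activity))
    for events in cases.values():
        events.sort()

    # Derive transition frequencies and timings from the log itself,
    # rather than from pre-determined KPIs.
    durations = defaultdict(list)
    for events in cases.values():
        for (t1, a1), (t2, a2) in zip(events, events[1:]):
            durations[(a1, a2)].append((t2 - t1).total_seconds() / 60)

    for (src, dst), mins in durations.items():
        print(f"{src} -> {dst}: {len(mins)} cases, avg {mean(mins):.0f} min")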

Neil ended up by tying this back to decisions: basically, you can’t make good decisions if you don’t understand how your processes work in the first place.

Interesting that James and Neil deal with two very important aspects of business processes: James covers decisions, and Neil covers analytics. I’ve done presentations in the past on the crossover between BPM, BRM and BI; but they’ve dug into these concepts in much more detail. If you haven’t read their book, Smart Enough Systems, there’s a lot of great material in there on this same theme; if you’re here at the forum, you can pick up a copy at their table at the expo this afternoon.

Ultimus: Process optimization

Chris Adams is back to talk to us about process optimization, both as a concept and in the context of the Ultimus tools available to assist with this. I’m a bit surprised with the tone/content of this presentation, in which Chris is explaining why you need to optimize processes; I would have thought that anyone who has bought a BPMS probably gets the need for process optimization.

The strategies that they support:

  • Classic: updating your process and republishing it without changing work in progress
  • Iterative: focused and more specific changes updating live process instances
  • Situational/temporary: managers changing the runtime logic (really, the thresholds applied using rules) in live processes, such as changing an approval threshold during a month-end volume increase (see the sketch after this list)
  • Round-trip optimization: comparing live data against modeling result sets in simulation
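
To illustrate the situational/temporary strategy, here’s a minimal sketch of a rule threshold externalized from the process logic; the threshold name and values are invented, and in the product this change would be made through Ultimus Director rather than in code:

    # Hypothetical externalized rule-threshold store.
    thresholds = {"auto_approve_limit": 5_000}

    def requires_manager_approval(invoice_amount, thresholds):
        """Runtime decision driven by an externalized threshold, so the
        process definition itself doesn't change when the limit does."""
        return invoice_amount > thresholds["auto_approve_limit"]

    print(requires_manager_approval(7_500, thresholds))   # True

    # Month-end volume spike: temporarily raise the limit for live processes...
    thresholds["auto_approve_limit"] = 10_000
    print(requires_manager_approval(7_500, thresholds))   # False now

    # ...and restore it afterwards.
    thresholds["auto_approve_limit"] = 5_000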

There’s a number of tools for optimizing and updating processes:

  • Ultimus Director, allowing a business manager to change the rules in active processes
  • Studio Client, the main process design environment, which allows for versioning each artifact of a process; it also allows changes to be published back to update work in progress
  • iBAM, providing visibility into work in progress; it’s a generic dashboarding tool that can also be used for visualization of other data sets, not just Ultimus BPM instance data

He finished up with some best practices:

  • Make small optimizations to the process and update often, particularly because Ultimus allows for the easy upgrade of existing process instances
  • Use Ultimus Director to get notifications of
  • Use Ultimus iBAM interactive dials to allow executives to make temporary changes to rule thresholds that impact process flow

There was a great question from the audience about the use of engineering systems methodologies in process optimization, such as the theory of constraints; I don’t think that most of the vendors are addressing this explicitly, although the ideas are creeping into some of the more sophisticated simulation products.

Ultimus: Reports and Dashboards

Chris Adams is probably now thinking that I’m stalking him: not only do I attend his first two technical sessions, but when he switches to the business track for this presentation, I follow him. However, I wanted to hear about their reporting and analytics capabilities, and he covered off reporting, dashboards, BAM, alerts and using third-party analytics.


He started out with the underlying premise that you need to have governance over your business data, or your processes won’t be effective and efficient; in order to do that, you need to identify the key performance indicators (KPIs) that will be used to measure the health of your processes. This means both real-time monitoring and historical analytics.

Ultimus iBAM provides a real-time dashboard that works with both V7 and V8; in V8 only, there are also email alerts when specific KPI thresholds are reached.

For offline reporting, they have three types:

  • Process reports, automatically created for process instance analytics
  • User reports, also automatically created for workload and user productivity
  • Custom reports, which allow the historical data to be filtered by other business data

Reports can be viewed as charts as well as tabular reports; there is a third-party report generation tool invisibly built in (Infologistics?); Chris noted that this is the only third-party OEM component in Ultimus.

If you’re using Crystal Reports or Cognos, Ultimus has now opened up and created connectors to allow for reporting on the Ultimus history data directly from those platforms; by the end of the year, they’ll add support for SQL Server Reporting Services as well.

There will be a more technical session on the reporting and analytics later today.

ProcessWorld 2008: Maureen Fleming, IDC

Maureen Fleming of IDC spoke in the Process Intelligence and Performance Management track on process measurement, and how it’s used to support decisions about a process as well as having an application context. She defines strategic measurement as guiding decisions about where to focus across processes, providing information on where to improve a process, and supporting fact-based dispute arbitration.

She showed a chart of timeliness of measurement versus complexity:

  • Simple and timely: measure and spot-check performance within a process
  • Simple and time critical: need for continuous measurement and problem identification within homogeneous processes
  • Complex and timely: regular reporting to check performance across heterogeneous process islands
  • Complex and time-critical: need for continuous measurement and problem identification across heterogeneous process islands

Leading enterprises are moving towards more complex measurement. I’m not sure I agree with her definition of “timely”, which seems to be used to mean “historical” in this context.

She breaks down measurement tools by the intention of the measurement system:

  • What happened: process intelligence and reporting
  • What will happen: analytics, complex event processing
  • What is happening: BAM
  • Why it is happening: root cause analysis
  • How we should respond: intelligent process automation

She went through IDC’s categorization of BPMS — decision-centric automation (human-centric), sensing automation (integration-centric and complex event processing), and transaction-centric automation (integration-centric) — and discussed the problem of each BPMS vendor’s individual BAM creating islands of process measurement. Process metrics from all process automation systems need to feed into a consolidated process measurement infrastructure: likely an enterprise process warehouse with analytics/BAM tied to that more comprehensive view, such as ARIS PPM.

She discussed KPIs and how the goals for those KPIs need to consider both business objectives and past performance: you can’t understand performance variations that might occur in the present without looking at when and why they occurred in the past.
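
As a minimal sketch of that idea (with invented numbers), a KPI goal can combine the business objective with a variation band derived from past performance:

    from statistics import mean, stdev

    # Illustrative only: 12 weeks of historical cycle times (hours) for a step.
    history = [26, 30, 28, 35, 27, 29, 31, 40, 28, 30, 27, 29]

    business_target = 24          # hours, set by the business objective
    baseline = mean(history)      # what the process has actually done in the past
    band = 2 * stdev(history)     # "normal" variation derived from past performance

    def assess(current):
        """Compare a current measurement against both the business goal
        and historically normal variation."""
        if current > baseline + band:
            return "abnormal variation, investigate root cause"
        if current > business_target:
            return "within normal variation, but missing the business target"
        return "on target"

    print(f"baseline {baseline:.1f}h +/- {band:.1f}h, target {business_target}h")
    print(assess(45), "|", assess(31), "|", assess(22))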

Although her presentation mostly focussed on process measurement, the Q&A was much more about sense and respond: how to have specific measurements/events trigger something back in the process side in order to respond to an event.

Agent Logic’s RulePoint and RTAM

This post has been a long time coming: I missed talking to Agent Logic at the Gartner BPM event in Orlando in September since I didn’t stick around for the CEP part of the week, but they persisted and we had both an intro phone call and a longer demo session in the weeks following. Then I had a crazy period of travel, came home to a backlog of client work and a major laptop upgrade, and seemed to lose my blogging mojo for a month.

If you’re not yet familiar with the relatively new field of CEP (complex event processing), there are many references online, including a recent ebizQ white paper based on their event processing survey which determined that a majority of the survey respondents believe that event-driven architecture comprises all three of the following:

  • Real-time event notification – A business event occurs and those individuals or systems who are interested in that event are notified, and potentially act on the event.
  • Event stream processing – Many instances of an event occur, such as a stock trade, and a process filters the event stream and notifies individuals or systems only about the occurrences of interest, such as a stock price reaching a certain level.
  • Complex event processing – Different types of events, from unrelated transactions, are correlated to identify opportunities, trends, anomalies or threats.

And although the survey shows that the CEP market is dominated by IBM, BEA and TIBCO, there are a number of other significant smaller players, including Agent Logic.

In my discussions with Agent Logic, I had the chance to speak with Mike Appelbaum (CEO), Chris Bradley (EVP of Marketing) and Chris Carlson (Director of Product Management). My initial interest was to gain a better understanding of how BPM and CEP come together, as well as how their product works; I was more than a bit amused when they referred to BPM as an “event generator”. I was somewhat mollified when they also pointed out that business rules engines are event generators too: both types of systems (and many others) generate thousands of events to their history logs as they operate, most of which are of no importance whatsoever; CEP helps to find the few unique combinations of events from multiple data feeds that are actually meaningful to the business, such as detecting credit card fraud based on geographic data, spending patterns and historical account information.
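
As a rough sketch of that kind of correlation, here’s a toy example in Python; the feeds, fields and thresholds are invented for illustration and have nothing to do with Agent Logic’s actual engine:

    from datetime import datetime, timedelta

    # Illustrative event feeds only; in a CEP engine these would be live streams.
    transactions = [
        {"card": "1234", "amount": 40,   "country": "CA", "ts": datetime(2008, 10, 27, 9, 0)},
        {"card": "1234", "amount": 2500, "country": "RO", "ts": datetime(2008, 10, 27, 9, 20)},
    ]
    account_profile = {"1234": {"home_country": "CA", "avg_txn": 85}}

    def correlate(txns, profiles, window=timedelta(hours=1)):
        """Flag a card when two individually unremarkable events (a foreign
        transaction and an unusually large one) occur close together in time."""
        alerts = []
        for a in txns:
            for b in txns:
                if a is b or a["card"] != b["card"] or b["ts"] - a["ts"] > window:
                    continue
                prof = profiles[a["card"]]
                foreign = b["country"] != prof["home_country"]
                outsized = b["amount"] > 10 * prof["avg_txn"]
                if foreign and outsized:
                    alerts.append((a["card"], "possible fraud", b["ts"]))
        return alerts

    print(correlate(transactions, account_profile))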


Agent Logic has been around since 1999, and employs about 50 people. Although they initially targeted the defence and intelligence industries, they’re now working with financial services and manufacturing as well. Their focus is on providing a CEP tool that lets non-technical users, rather than developers, write rules — something that distinguishes them from the big three players in the market. After taking a look at the product, I think that they got their definition of “non-technical user” from the same place as the BPM vendors: the prime target audience for their product would be a technically-minded business analyst. Even so, this pushes the control and enforcement of policies and procedures closer to the business user.

They also seem to be more focused on allowing people to respond to events in real-time (rather than, for example, spawning automated processes to react to events, although the product is certainly capable of that). As with other CEP tools, they allow multiple data feeds to be combined and analyzed, and rules set for alerts and actions to fire based on specific business events corresponding to combinations of events in the data feeds.

Agent Logic has two separate user environments (both browser-based): RulePoint, where the rules are built that will trigger alerts, and RTAM, where the alerts are monitored.

RulePoint is structured to allow more technical users to work together with less technical users. Not only can users share rules, but a more technical user can create “topics”, which are aggregated, filtered data sources, then expose them to less technical users as input for their rules. Rules can be further combined to create higher-level rules.

RulePoint has three modes for creating rules: templates, wizards and advanced. In all cases, you’re applying conditions to a data source (topic) and creating a response, but they vary widely in terms of ease of use and flexibility.

  • Templates can be used by non-technical users, who can only set parameter values for controlling filtering and responses, and save their newly-created rule for immediate use.
  • The wizard creation tool allows for much more complex conditions and responses to be created. As I mentioned previously, this is not really end-user friendly — more like business analyst friendly — but not bad.
  • The advanced creation mode allows you to write DRQL (detect and response query language) directly, for example: when 1 "Stock Quote" s with s.symbol = "MSFT" and s.price > 90 then "Instant Message" with to="[email protected]", body="MSFT is at ${s.price}". Not for everyone, but the interesting thing is that by using template variables within the DRQL statements, you can convert rules created in advanced mode into templates for use by non-technical users: another example of how different levels of users can work together; a rough sketch of this idea follows below.
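
Here’s a rough sketch of that templating idea using Python’s string.Template, based on the DRQL example above; the placeholder syntax and the example address are illustrative only, not Agent Logic’s actual template mechanism:

    from string import Template

    # A hand-written rule (advanced mode), modelled on the DRQL example above.
    advanced_rule = ('when 1 "Stock Quote" s with s.symbol = "MSFT" and s.price > 90 '
                     'then "Instant Message" with to="trader@example.com", '
                     'body="MSFT is at ${s.price}"')

    # A more technical user turns it into a template, exposing only a few parameters.
    rule_template = Template('when 1 "Stock Quote" s with s.symbol = "$symbol" '
                             'and s.price > $threshold '
                             'then "Instant Message" with to="$recipient", '
                             'body="$symbol is at $${s.price}"')

    # A non-technical user just fills in the blanks.
    print(rule_template.substitute(symbol="IBM", threshold=120,
                                   recipient="trader@example.com"))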

Watchlists are lists that can be used as parameter sets, such as a list of approved airlines for rules related to travel expenses, which then become drop-down selection lists when used in templates. Watchlists can be dynamically updated by rules, such as adding a company to a list of high-risk companies if a SWIFT message is received that references both that company and a high-risk country.

RulePoint includes a large number of predefined services that can be used as data sources or responders, including SQL, web services and RSS feeds. You can also create your own services. By providing access to web services both as a data source and as a method of responding to an alert, Agent Logic can do things like kick off a new fraud review process in a BPMS when a set of events occurs across a range of systems indicating a potential for fraud.

Lastly, in terms of rule creation, there are both standard and custom responses that can be attached to a rule, ranging from sending an alert to a specific user in RTAM to sending an email message to writing a database record.

Although most of the power of Agent Logic shows up in RulePoint, we spent a bit of time looking at RTAM, the browser-based real-time alert manager. Some Agent Logic customers don’t use RTAM at all, or only for high-priority alerts, preferring to use RulePoint to send responses to other systems. However, compared to a typical BAM environment, RTAM provides pretty rich functionality: it can link to underlying data sources, for example, by linking to an external web site with criminal record data on receiving an alert that a job candidate has a record, and allows for mashups with external services such as Google maps.

It’s also more of an alert management system than just monitoring: you can filter alerts by the various rules that trigger them, and perform other actions such as acknowledging the alert or forwarding it to another user.

Admittedly, I haven’t seen a lot of other CEP products to this depth to provide any fair comparison, but there were a couple of things that I really liked about Agent Logic. First of all, RulePoint provides a high degree of functionality with three different levels of interfaces for three different skill levels, allowing more technical users to create aggregated, easier-to-use data sources and services for less technical users to include in their rules. Rule creation ranges from dead simple (but inflexible) with templates to roll-your-own in advanced mode.

Secondly, the separation of RulePoint and RTAM allows the use of any BI/BAM tool instead of RTAM, or just feeding the alerts out as RSS feeds or to a portal such as Google Gadgets or Pageflakes. I saw a case study of how Bank of America is using RSS for company-wide alerts at the Enterprise 2.0 conference earlier this year, and see a natural fit between CEP and this sort of RSS usage.

Update: Agent Logic contacted me and requested that I remove a few of the screenshots that they don’t want published. Given that I always ask vendors during a demo if there is anything that I can’t blog about, I’m not sure how that misunderstanding occurred, but I’ve complied with their request.

Integration World Day 1: Peter Kurpick

Peter Kurpick, CPO (Chief Product Officer) of webMethods Business Division, gave an overview of the technology direction. He talked about the paradigm for SOA governance, with the layers of technical services, business services and policies being consumed by business processes: the addition of the policy layer (which is the SOA governance part) sets this apart from many of the visions of SOA that you see.

He brought along Susan Ganeshan, the SVP of Product Management and Product Marketing, to give a (canned) demo similar to one that we saw yesterday at the end of the analyst sessions. She showed the process map as modelled in their BPM layer, where the appropriate services and other webMethods integration points were called; then we saw the custom portal-type interfaces for customers, suppliers and internal workers. They have Fair Isaac’s Blaze Advisor integrated with the BPMS, which allows them to change rules for in-flight processes, and their own monitoring and analytics as well as some new Cognos analytics integration. She also showed us the CentraSite integration, where information about services and their policies is stored; CentraSite can be used to dynamically select from multiple equivalent services based on policies, such as selecting from one of several suppliers. The idea of the demo is to show how all of the pieces can come together — people, web services, B2B services, legacy services, and policy governance — all using the webMethods suite.

The original core functionality provided by webMethods is the ESB (originally from the EAI space), but now that’s surrounded by BPM, composite applications, B2B integration and legacy modernization tools (from the Software AG side). Around that is BAM, which is being elevated from a mere adjunct to BPM into an event-related technology in its own right. Around all of this is SOA governance, which is what CentraSite provides.

The next release, due sometime in 2008, will be a fully-integrated suite of the Software AG and webMethods products, although Kurpick didn’t provide a lot of information.

IQPC BPM Summit: Kirk Gould

Kirk Gould, a performance consultant with Pinnacle West Capital, talked about business processes and metrics. I like his definition of a metric: “A tool created to tie the performance of the organization to the business objectives”, and he had lots of great advice about how to — and how not to — develop metrics that work for your company.

I came right off of my own presentation before this one, so I’m a bit too juiced up to focus on his presentation as well as it deserves. However, his slides are great and I’ll be reviewing them later. He also has a good handout that takes us through the 10 steps of metric development:

  1. Plan
  2. Perform
  3. Capture
  4. Analyze
  5. Display
  6. Level
  7. Automate
  8. Adjust
  9. Manage
  10. Achieve

He has a great deal more detail for each of these steps, both on the handout and in his presentation. He discussed critical success factors and performance indicators, and how they fit into a metrics framework, but the best parts were when he described the ways in which you can screw up your metrics programs: there were a lot of sheepish chuckles and head-shaking around the room, so I know that many of these hit home.

He went through the stages of metrics maturity, which I’ll definitely have to check out later since he flew through the too-dense slides pretty quickly. He quotes the oft-used (and very true) line that “what gets measured, gets managed”, a concept that is at the heart of metrics.

BAM technical session

This seemed to be a morning for networking, and I’m arriving late for a technical session on FileNet’s BAM. I missed the hands-on session this morning so wanted to get a closer look at this before it’s released sometime in the next couple of months.

The key functional things in the product are dashboards, rules and alerts. The dashboard part is pretty standard BI presentation-layer stuff: pick a data source, pick a display/graph type, and position it on the dashboard. Rules are where the smarts come in: pick a data source, configure the condition for firing an alert, then set the content and recipient of the alert. Alerts can be displayed on the recipient’s dashboard, or sent as an email or SMS, or even launch other processes or services to handle an exception condition automatically.
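
A minimal sketch of that rule-and-alert pattern might look like this; the metric names and rule structure are invented for illustration, not the actual FileNet BAM configuration model:

    def evaluate_rules(metrics, rules):
        """Check each rule's condition against the latest metric values and
        return the alerts to be dispatched (dashboard, email, SMS, or
        launching an exception-handling process)."""
        alerts = []
        for rule in rules:
            value = metrics.get(rule["metric"])
            if value is not None and rule["condition"](value):
                alerts.append({"recipient": rule["recipient"],
                               "channel": rule["channel"],
                               "message": rule["message"].format(value=value)})
        return alerts

    rules = [
        {"metric": "queue_depth", "condition": lambda v: v > 500,
         "recipient": "ops-manager", "channel": "email",
         "message": "Work queue backlog at {value} items"},
        {"metric": "avg_handle_time_min", "condition": lambda v: v > 15,
         "recipient": "team-lead", "channel": "dashboard",
         "message": "Average handle time is {value} minutes"},
    ]

    print(evaluate_rules({"queue_depth": 720, "avg_handle_time_min": 12}, rules))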

There’s a nice interface for configuring the dimensions (aggregations) in the underlying OLAP cubes, and for configuring buckets for running statistics. The data kept on the BAM server is cycled out pretty quickly: it’s really for tracking work in progress with just enough historical data to do some statistical smoothing.

Because they’re using a third-party OEM product for BAM, it’s open to other data sources being plugged into the server, used in the OLAP cubes, combined on the dashboards or used in the rules. However, this model adds yet another server: it pulls pre-processed work-in-progress data from the Process Analyzer (so PA is still required), and its memory requirement is hefty enough (it maintains the cubes in memory) that it’s probably not a good idea to co-locate it on a shared application server. I suppose that this demotes PA to a data mart for historical data as well as a pre-processor, which is not a completely bad thing, but I imagine that a full replacement for PA might be better received by customers.