Innovation World: Analyst/Blogger Technology Roundtable

We left the analysts and press who focus on financial stuff upstairs in the main lecture hall, and a smaller group of us congregated for a roundtable session with Peter Kurpick and a few other high-level technical people from Software AG. These notes are really disjointed since I was just trying to capture ideas as we went along; I have to say that some of these people really like to hear themselves talk, and most of them like to wag their finger admonishingly while they do so.

Kurpick started out the session by talking about how Software AG has a product set that has SOA built in from the philosophy up, whereas many other vendors (like SAP) add it on for marketing purposes; I would say that the other vendors are doing it as much for connectivity as for marketing purposes, but it’s true that it’s not baked into the product in most cases. He didn’t use the phrase “lipstick on a pig”, but he sees the challenge to the application vendors as this: tacking on a services layer just serves to expose the weaknesses of their internal architecture when the customer wants to change a fundamental business process.

Frank Kenney of Gartner disagreed, saying that adding services to the applications gives those vendors a good way to sell more into their existing customers, and that most organizations don’t change the basic business processes provided by the application vendors anyway. (Of course, maybe they don’t change them because it’s too difficult to do so.) Most customers just want to buy something that’s pretty close to what they want already, not orchestrate their own ERP and CRM processes from scratch, especially for processes that don’t provide competitive differentiation.

Kurpick responded that many organizations no longer own their entire supply chain, and that having an entire canned process isn’t even an option: you have to be able to orchestrate your business process by picking and choosing functional components.

Ian Finley of AMR Research discussed the impact of SaaS applications and services on this equation: if you can get something as a service at a much lower capital cost, even if it’s only a portion of your entire business process, that has value. Other opinions around the table (from people who didn’t bother to set up their name tent and made some rash assumptions about their own fame) agreed that SaaS is an important part of this shift as customers modularize their business processes and look for ways to reuse and outsource components. SaaS and open source both provide a way to have lower up-front costs — important to many organizations that are cutting their budgets — while offering a growth path. Someone from Forrester said that they are seeing four disruptive trends: SaaS, offshore, open source, and SOA. All of these mean taking focus away from the monolithic packaged applications and providing alternative ways to compose your business processes from a number of disparate sources.

What does it mean, however, for customers to move to a distributed architecture that is served by a variety of services, both inside and outside the firewall? Some organizations believe that this means giving up some control over their destiny, but Kenney feels that control is an illusion anyway. I tend to agree: how much control do you have over a large ERP vendor’s product direction, and what choices do you have (except stopping upgrades or switching platforms, both painful alternatives) if they go somewhere that you don’t want to go? I think that the illusion of control comes from operational control — you own the iron that it runs on — rather than strategic control, but ultimately that’s not really worth much.

Ron from ZapThink and Beth Gold-Bernstein of ebizQ started a discussion on federation of ESBs, repositories and registries, and how that’s critical for bringing all of this together in a governable environment. Susan Ganeshan (Software AG product management) discussed how their new version of CentraSite addresses some of those issues by providing a common governance platform. Distributed systems are inherently complex because they work on different platforms that may not have been intended to be integrated, and may not have any sort of federated policies around SLAs, for example. Large organizations are drowning in complexity, and the vendors need to help them to reduce complexity or at least shield them from the complexity so that they can get on with their business. SOA can serve that need by helping organizations to change the paradigm by thinking about applications in a completely different way, as a composition of services.

We shifted topics slightly to talk about how companies compete on processes: identifying the processes (or parts thereof) that provide a competitive differentiation. Neil Macehiter of MWD pointed out how critical it is for companies to use common commercial alternatives (whether packaged, SaaS or outsourced) for processes that are the same for everyone in order to drive down costs, and spend the money on the processes that make them unique. However, in this era of mergers and acquisitions, none of this is forever: business processes will change radically when two organizations are brought together, and systems will either be made to work together, or abandoned. Kenney told a story that would have you believe that no one gets fired for waiting to buy IBM or SAP or any of the big players; I think that you might not get fired, but your company might get eaten up by the competition while you’re waiting.

There’s a general impression that SOA is failing; not the general architectural principles, but the way that the market is being sold and serviced now. Large, high-profile projects will not attain their promised ROI; the customers will blame the vendors for selling them products that don’t work; the vendors will blame the customers for being dysfunctional and doing the implementation wrong; the SOA market will crash and burn; some new integration/service-oriented phoenix will rise from the ashes.

Innovation World: Media and Analyst Forum

I’m spending the morning at the media and analyst forum at Software AG’s user conference, Innovation World, in Miami. The first half of the morning covered mainframe modernization, plus a presentation by Miko Matsumura (who I met last week at the Business Rules Forum), Deputy CTO, on the state of SOA adoption. He’s just published a book — more of a booklet at 86 pages — on SOA Adoption for Dummies, continuing Software AG’s trend of using the Dummies brand to push out lengthy white papers in a lighthearted format. For example, chapter 10 is SOA Rocket Science, which covers three principles of SOA:

  1. Keep the pointy end up (instrumentation)
  2. Maintain upward momentum (organization)
  3. Don’t stop until you’re in orbit (automation)

He finished up with a discourse on SOA as IT postmodernism, casting postmodernism as an architectural pattern language: given a breakdown in the dominant metanarrative and the push towards deconstructionism, a paradigm of composition emerges…

After the break, Ian Walsh from webMethods product marketing gave us an overview of the webMethods suite:

  • webMethods BPM, including process management, rules management and simulation
  • CAF (composite application framework), for codeless application design and web-based composite applications
  • BAM, including process monitoring and alerting, and predictive management

He stated that the “pure-play” BPMS vendors (mentioning Lombardi, Savvion and Pega) are having problems because they sold on the ability to allow the business to create their own processes quickly, but that doesn’t work in reality when you have to integrate complex systems. He also said that the platform vendors (Microsoft, IBM, Oracle) have confusing offerings that are not well integrated, hence take too long to implement. He mentioned TIBCO as a special case, neither pure-play nor platform, but sees their weakness as being very focused on events: good for their CEP strategy, but not good for their broader BPM/SOA strategy.

Walsh sees their strengths in both BPM and SOA as their differentiator: customers are buying both their BPM and SOA products together, not individually.

Bruce Williams was up next, speaking on BPM as the killer application for SOA. He’s a Six Sigma guy, so he spent some time talking about BPM in the context of quality management initiatives: if we can manage processes well, we can achieve our business goals; in order to manage processes, we need some systems and infrastructure. He defines the killer app as being flexible and dynamic, not a fixed state or a system with unchangeable functionality. He sees BPM as being the language that can be spoken and understood by both business and IT: not the Tower of Babel created by technology-speak, but how process ties to business strategy.

Logistics are not great: they’ve billeted me in the down-market Marriott Courtyard next door rather than at the Hyatt where the conference is being held (I had to change rooms due to no hot water, can’t run the a/c at night because of the noise, and I have a view — complete with sound effects — of the I-95 onramp), and there’s no wifi or power in the lecture hall. There’s supposed to be wifi, but it’s a hidden, protected network that only some people seem to be able to connect to (yes, I added it manually to my wireless network settings). They’ve promised us power at the desks and some assistance with the wifi after lunch.

In case my policy about vendor conferences isn’t crystal clear from previous posts, Software AG is paying my travel expenses to be here, although they are not compensating me for my time nor do they have any editorial control over what I write.

Business Rules Forum: Kevin Chase of ING

I’m squeezing in one last session before flying out: Kevin Chase, SVP of Implementation Services at ING, discussing how to use rules in a multi-client environment, specifically on the issues of reuse and reliability. I’ve done quite a bit of work implementing processes in multi-client environments — such as a mutual funds back-office outsourcing firm — and the different rules for each client can make for some challenges. In most cases, these companies are all governed by the same regulations, but have their own way that they want things done, even if they’re not the ones doing it.

In ING’s case, they’re doing benefits plan administration, such as retirement (401k) plans, for large clients, and have been using rules for about six years. They originally did a pilot project with one client, then rolled it out to all their clients, but didn’t see the benefits that they expected; that caused them to create a center of excellence, and now they’re refining their processes and expanding the use of rules to other areas.

They’re using rules for some complex pension calculations, replacing a previous proprietary system that offered no reuse for adding new clients, and didn’t have the scalability, flexibility and performance that they required to stay competitive. The pension calculator is a key component of pension administration, and calculating pensions (not processing transactions) represented a big part of their costs, which makes it a competitive differentiator. With limited budget and resources, they selected ILOG rules technology to replace their pension calculator, creating a fairly standalone calculator with specific interfaces to other systems. This lightly-integrated approach worked well for them, and he recommended that if you have a complex calculator as part of your main business (think underwriting as another example), consider implementing rules to create a standalone or lightly-integrated calculator.

In their first implementation phase, they rewrote 50+ functions from their old calculator in Java, then used the rules engine to call the right function at the right time to create the first version of the new calculator. The calculations matched their old system (phew!) and they improved their performance and maintainability. They also improved the transparency of the calculations: it was now possible to see how a particular result was reached. The rules were written directly by their business users, although those users are actuaries with heavy math backgrounds, so likely don’t represent the skill level of a typical business user in other industries. They focused on keeping it simple and not overbuilding, and used the IT staff to build tools, not create custom applications. This is a nice echo of Kathy Long’s presentation earlier today, which said to create the rules and let the business users create their own business processes. In fact, ING uses their own people for writing rules, and uses ILOG’s professional services only for strategic advice, but never for writing code.
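
As a concrete illustration of that pattern, here’s a minimal Java sketch of my own (not ING’s code, and not ILOG’s API) where the calculation functions live in a registry and a trivial rule, standing in for the rules engine, picks which one to run:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of rule-driven dispatch over a library of calculation
// functions; the class, function names and formulas are all invented.
public class PensionCalculatorSketch {

    // Minimal stand-in for the data a calculation function might need.
    record PlanFacts(String planType, int yearsOfService, double finalSalary) {}

    private final Map<String, Function<PlanFacts, Double>> functions = new HashMap<>();

    public PensionCalculatorSketch() {
        // Two of the "50+ functions" reimplemented in Java (formulas invented).
        functions.put("finalAverageSalary", f -> f.finalSalary() * 0.02 * f.yearsOfService());
        functions.put("flatBenefit", f -> 450.0 * f.yearsOfService());
    }

    // In the real system the rules engine selects the function to call;
    // a trivial hard-coded rule stands in for that decision here.
    public double calculate(PlanFacts facts) {
        String selected = "DB".equals(facts.planType()) ? "finalAverageSalary" : "flatBenefit";
        return functions.get(selected).apply(facts);
    }

    public static void main(String[] args) {
        PensionCalculatorSketch calc = new PensionCalculatorSketch();
        System.out.println(calc.calculate(new PlanFacts("DB", 25, 80_000))); // 40000.0
    }
}
```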

After the initial implementation, they rolled it out to the remainder of their client base (six more organizations), representing more than 200,000 plan participants. Since they weren’t achieving the benefits that they expected, they went back to analyze where they could improve it:

  • Each new client was still being implemented by separate teams, so there was little standardization and reuse, and some significant maintenance and quality problems. It took them a while to convince management that the problem was the process of creating and maintaining rules, not the rules technology itself; eventually they created a center of excellence that isn’t just a mentoring/training group, but a group of rules experts who actually write and maintain all rules. This allows them to enforce standards, and the use of peer reviews within the CoE improves quality. They grow and shrink this team (around 12-15 people) as the workload requires, and this centralized team handles all clients to provide greater reuse and knowledge transfer.
  • They weren’t keeping up with ILOG product upgrades, mostly because it just wasn’t a priority to them, and were missing out on several major improvements as well as owning a product that was about to go out of maintenance. Since then, they’ve done some upgrades and although they’re not at the current release, they’re getting closer and have eliminated a lot of their custom code since those features are now included in the base product. The newer version also gives them better performance. I see this problem a lot with BPMS implementations as well, especially if a lot of custom code has been written that is specific to a current product version.
  • They had high infrastructure costs since each new client resulted in additional hardware and the associated CPU licensing. They’ve moved to a Linux platform (from Sun Solaris), moved from WebLogic to JBoss, and created a farm of shared rules servers.
  • Since they reduced the time and expense of building the calculator, they’ve now exposed other areas of pension administration (such as correspondence) that take much longer to implement: the pension calculator used to be the bottleneck in rolling out new products, but now other areas are on the critical path. That’s a nice thing for the calculator group, but it made them start to recognize the problems in other areas and systems, pushing them to expand their rules capability into areas such as regulatory updates that span clients.

This last point has led to their current state, which is one of expansion and maturity. One major challenge is the cleanliness and integrity of data: data errors can lead to the inability to make calculations (e.g., missing birthdate) or incorrect calculation of benefits. They’re now using rules to check data and identify issues prior to executing the calculation rules, checking the input data for 30+ inconsistencies that could cause a failure in the calculator, and alerting operations staff if there needs to be some sort of manual correction or followup with the client. After the calculations are done, more data cleansing rules check for another 20+ inconsistencies, and might result in holding up final outbound correspondence to the participant until the problem is resolved.
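
Here’s a hedged Java sketch of what that kind of pre-calculation data checking might look like; the record fields and checks are invented, but the pattern is the same: run every named check, collect the failures, and route them to a person rather than letting the calculator fail:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical pre-calculation data checks: each check is a named predicate
// over a participant record, and failures are collected for manual follow-up
// instead of letting the calculator fail mid-run. Fields and checks invented.
public class DataCheckSketch {

    record Participant(String id, LocalDate birthDate, LocalDate hireDate, Double salary) {}

    record Check(String description, Predicate<Participant> failsWhen) {}

    static final List<Check> PRE_CALCULATION_CHECKS = List.of(
            new Check("missing birth date", p -> p.birthDate() == null),
            new Check("missing or non-positive salary",
                    p -> p.salary() == null || p.salary() <= 0),
            new Check("hired before birth",
                    p -> p.birthDate() != null && p.hireDate() != null
                            && p.hireDate().isBefore(p.birthDate())));

    static List<String> validate(Participant p) {
        List<String> issues = new ArrayList<>();
        for (Check c : PRE_CALCULATION_CHECKS) {
            if (c.failsWhen().test(p)) {
                issues.add(c.description());
            }
        }
        return issues; // a non-empty list would be routed to operations staff
    }

    public static void main(String[] args) {
        Participant p = new Participant("42", null, LocalDate.of(2001, 3, 1), 55_000.0);
        System.out.println(validate(p)); // [missing birth date]
    }
}
```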

He wrapped up with their key lessons learned:

  • A strong champion at the senior executive level is required, since this is a departure from the usual way of doing things.
  • A center of excellence yields great benefits in terms of quality and reuse.
  • Leverage the vendors’ expertise strategically, not to do the bulk of your implementation; use your own staff or consultants who understand your business to do the tactical work.
  • Use an iterative and phased approach for implementation.
  • Do regular assessments of where you are, and don’t be afraid to admit that mistakes were made and improvements can be made.
  • Keep up with the technology, especially in fast-moving technologies like rules, although it’s not necessary to be right on the leading edge.

Great presentation with lots of practical tips, even if you’re not in the pension administration business.

Business Rules Forum: Kathy Long on Process and Rules

Kathy Long, who (like me) is more from the process side than the rules side, gave a breakout session on how process and rules can be combined, and particularly how to find the rules within processes. She stated that most of the improvements in business processes don’t come from improving the flow (the inputs and outputs), but from the policies, procedures, knowledge, experience and bureaucracy (the guides and enablers): about 85% of the improvement comes from the latter category. She uses an analysis technique that looks at these four types of components:

  • Input: something that is consumed or transformed by a process
  • Guide: something that determines how, why or when a process occurs, but is not consumed
  • Output: something that is produced by or results from a process
  • Enabler: something used to perform a process

There’s quite a bit of material similar to her talk last year (including the core case study); I assume that this is the methodology that she uses with clients, hence it doesn’t change often. Rules fall into the “guides” category, that is, the policies and procedures that dictate how, why and when a process occurs. I’m not sure that I get the distinction that she’s making between the “how” in her description of guides, and the “how” that is embedded within process flows; I typically think of policies as business rules, and procedures as business processes, rather than both policies and procedures as being rules. Her interpretation is that policies aren’t actionable, but need to be converted to procedures, which are actionable; since rules are, by their nature, actionable, that’s what gets converted to rules. However, the examples of rules that she provided (“customer bill cannot exceed preset limit”) seem to be more policies than procedures to me.

In finding the rules in the process, she believes that we need to start at the top, not at the lowest atomic level: in other words, you don’t want to go right to the step level and try to figure out what rules to create to guide that step; you want to start at the top of the process and figure out if you’re even doing the right higher-level subprocesses and tasks, given that you’ve implemented rules to automate some of the decisions in the process.

The SBVR (Semantics of Business Vocabulary and Business Rules) standard defines the difference between rules and advice, and breaks down rules into business rules and structural rules. From there, we end up with structural business rules — which are criteria for making decisions, and can’t be violated — and operative business rules — which are guides for conduct or action, but can be violated (potentially with a penalty, e.g., an SLA). Structural rules might be more what you think of as business rules, that is, they are the underpinning for automated decisions, or are a specific computation. On the other hand, operative business rules may be dictated by company policy or external regulation, but may be overridden; or represent a threshold at which an alert will be raised or a process escalated.
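
To make the distinction concrete, here’s a small sketch of my own, with invented rules and numbers:

```java
// Invented examples of the two rule kinds: a structural rule is enforced
// unconditionally (here, a defining computation that cannot produce a
// violating value), while an operative rule can be violated, in which case
// the violation is detected and escalated rather than prevented.
public class RuleKindsSketch {

    // Structural rule: the premium *is defined as* base rate times risk factor.
    static double premium(double baseRate, double riskFactor) {
        return baseRate * riskFactor;
    }

    // Operative rule: settle a claim within 30 days (an SLA). It can be
    // violated; the violation raises an alert with a potential penalty.
    static void checkSettlementSla(int daysOpen) {
        final int SLA_DAYS = 30;
        if (daysOpen > SLA_DAYS) {
            System.out.println("SLA violated by " + (daysOpen - SLA_DAYS)
                    + " days: escalate to a supervisor");
        }
    }

    public static void main(String[] args) {
        System.out.println(premium(500.0, 1.2)); // 600.0
        checkSettlementSla(42);                  // triggers the escalation path
    }
}
```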

She recommends documenting rules outside the process, since the alternative is to build a decision tree into your process flow, which gets really ugly. I joked during my presentation on Tuesday that the process bigots would include all rules as explicit decision trees within the BPMS; the rules bigots would have a single step in the entire process in the BPMS, and that step would call the BRMS. Obviously, you have to find the right balance between what’s in the process map and what’s in the rules/decision service, especially when you’re creating them in separate environments.

The biggest drawback of the presentation was that Long used a case study scenario to show the value of separating rules from process, but described it in large blocks of text on her slides, which she just read aloud to us. She added a lot of information as she went along, but any guideline on giving a presentation tells you not to put a ton of text on your slides and just read it, for very good reasons: the audience tends to be reading the slides instead of listening to you. She might want to consider the guides that are inherent in the process of taking a case study and turning it into a presentation.

A brilliant recommendation that she ended with is to create appropriate and consistent rules across the enterprise, then let the business design their own process. Funny how some of us who are practitioners in BPM (whether at the management consulting or implementation end of things) are the biggest critics of BPM, or specifically, we see the value of using rules for agility because process often doesn’t deliver on its promises. I’ve made the statement in two presentations within the last week that BPMS implementations are becoming the new legacy systems — not (purely) because of the capability of the products, but because of how organizations are deploying them.

Business Rules Forum: Pedram Abrari on MDA, SOA and rules

Pedram Abrari, founder and CTO of Corticon, did a breakout session on model-driven architecture, SOA, and the role that rules play in all of this. I’m also in the only room in the conference center that’s close enough to the lobby to pick up the hotel wifi, and I found an electrical outlet, so I’m in blogger heaven.

It’s a day for analogies, and Abrari uses the analogy of a car for a business application: the driver representing business, and the mechanic representing IT. A driver needs to have control over where he’s going and how he gets there, but doesn’t need to understand the details of how the car works. The mechanic, on the other hand, doesn’t need to understand where the driver is going, but keeps the car and its controls in good working order. Think of the shift from procedural to declarative development concepts, where we’ve moved from stating how to do something, to what needs to be done. A simple example: the difference between writing code to sum a series of numbers, and just selecting a range of cells in Excel and selecting the SUM function.
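
His Excel example translates directly into code; here’s a minimal Java version of both styles:

```java
import java.util.List;

// The sum example in Java: the procedural version spells out *how* to
// accumulate the total, the declarative version states *what* we want.
public class SumStyles {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(3, 1, 4, 1, 5, 9);

        // Procedural: explicit control flow and mutable state.
        int total = 0;
        for (int n : numbers) {
            total += n;
        }

        // Declarative: the code equivalent of selecting a range in Excel
        // and choosing the SUM function.
        int declarativeTotal = numbers.stream().mapToInt(Integer::intValue).sum();

        System.out.println(total + " == " + declarativeTotal); // 23 == 23
    }
}
```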

The utopia of model-driven architecture (MDA) is that business applications are modeled, not programmed; they’re abstract yet comprehensive, directly executable (or at least deployable to an execution environment without programming), the monitoring and analytics are tied directly to the model, and optimization is done directly on the model. The lack of programming required for creating an executable model is critical for keeping the development in the model, and not having it get sucked down into the morass of coding that often happens in environments that are round-trippable in theory, but end up with too much IT tweaking in the execution environment to ever return to the modeling environment.

He then moved on to define SOA: the concept of reusable software components that can be loosely coupled, and use a standard interface to allow for platform neutrality and design by contract. Compound/complex services can be built by assembling lower-level services in an orchestration, usually with BPM.

The key message here is that MDA and SOA fit together perfectly, as most of us are aware: those services that you create as part of your SOA initiative can be assembled directly by your modeling environment, since there is a standard interface for doing so, and services provide functionality without having to know how (or even where) that function is executed. When your MDA environment is a BPMS, this is a crystal-clear connection: every BPMS provides easy ways to interrogate and integrate web services directly into a process as a process step.

From all of this, it’s a simple step to see that a BRMS can provide rules/decisioning services directly to a process; essentially the same message that I discussed yesterday in my presentation, where decision services are no different than any other type of web services that you would call from a BPMS. Abrari stated, however, that the focus should not be on the rules themselves, but on the decision service that’s provided, where a decision is made up of a complete and consistent set of rules that addresses a specific business decision, within a reasonable timeframe, and with a full audit log of the rules fired to reach a specific decision in order to show the decision justification. The underlying rule set must be declarative to make it accessible to business people.
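
Here’s a hedged sketch of what such a decision service might return to a calling process; the rule names and logic are invented, but the shape (one decision plus the list of rules that fired) follows what Abrari described:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical shape of a decision service: the caller (e.g., a process step
// in a BPMS) gets back a single decision plus the list of rules that fired,
// so the justification can be audited. Rule names and logic are invented.
public class DecisionServiceSketch {

    record Applicant(int age, int accidentsLastThreeYears) {}

    record Decision(boolean eligible, List<String> firedRules) {}

    // Stands in for the web service operation a BRMS would expose.
    static Decision decideEligibility(Applicant a) {
        List<String> fired = new ArrayList<>();
        boolean eligible = true;
        if (a.age() < 18) {
            fired.add("MinimumAgeRule");
            eligible = false;
        }
        if (a.accidentsLastThreeYears() > 2) {
            fired.add("AccidentHistoryRule");
            eligible = false;
        }
        if (fired.isEmpty()) {
            fired.add("DefaultEligibleRule");
        }
        return new Decision(eligible, fired);
    }

    public static void main(String[] args) {
        System.out.println(decideEligibility(new Applicant(17, 3)));
        // Decision[eligible=false, firedRules=[MinimumAgeRule, AccidentHistoryRule]]
    }
}
```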

He ended up with a discussion of the necessity to extract rules out of your legacy systems and put them into a central rules repository, and a summary of the model-driven service-oriented world:

  • Applications are modeled rather than coded
  • Legacy applications are also available as web services
  • Business systems are agile and transparent
  • Enterprise knowledge assets (data, decisions, processes) are stored in a central repository
  • Management has full visibility into the past, present and future of the business
  • Enterprises are no longer held hostage by the inability of their systems to keep up with the business

Although the bits on MDA and SOA might have been new to some of the attendees, some of the rules content may have been a bit too basic for this audience, and/or already covered in the general keynotes. However, Abrari is trying to make that strong connection between MDA and rules for model-driven rules development, which is the approach that Corticon takes with their product.

Business Rules Forum: Gladys Lam on Rule Harvesting

For the first breakout this morning, I attended Gladys Lam’s session on organizing a business rule harvesting project, specifically on how to split up the tasks amongst team members. Gladys does a lot of this sort of work directly with customers, so she has a wealth of practical experience to back up her presentation.

Process rules and decisioning rules

She first looked at the difference between business process rules and decisioning rules, and had an interesting diagram showing how specific business process rules are mapped into decisioning rules: in a BPMS, that’s the point where we would (should) be making a call to a BRMS rather than handling the logic directly in the process model.

The business processes typically drive the rule harvesting efforts, since rule harvesting is really about extracting and externalizing rules from the processes. That means that one or more analysts need to comb through the business processes and determine the rules inherent in those processes. As processes get large and complex, then the work needs to be divided up amongst an analyst team. Her recommendations:

  • If you have limited resources and there are less than 20 rules/decisions per task, divide it up by workflow
  • If there are more than 20 rules per task, divide by task

My problem here is that she doesn’t fully define task, workflow and process in this context; I think that “task” is really a “subprocess”, and “workflow” is a top-level process. Moving on:

  • If there are more than 50 rules per task, divide by decision point; e.g., a decision about eligibility for auto insurance could be broken down into decision points based on proof of insurance, driving history, insurance risk score and other factors

She later also discussed dividing by value chain function and level of composition, but didn’t specify when you would use those techniques.
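
To illustrate the decision-point split from the auto insurance example above, here’s an invented Java sketch:

```java
// Invented sketch of the "divide by decision point" idea: each decision point
// is a separate predicate that a different analyst could harvest and own,
// composed into the top-level eligibility decision. Thresholds are made up.
public class DecisionPointsSketch {

    record Driver(boolean hasProofOfInsurance, int violations, int riskScore) {}

    static boolean proofOfInsuranceOk(Driver d) { return d.hasProofOfInsurance(); }

    static boolean drivingHistoryOk(Driver d) { return d.violations() <= 2; }

    static boolean riskScoreOk(Driver d) { return d.riskScore() >= 600; }

    // The overall decision is just the composition of its decision points.
    static boolean eligibleForAutoInsurance(Driver d) {
        return proofOfInsuranceOk(d) && drivingHistoryOk(d) && riskScoreOk(d);
    }

    public static void main(String[] args) {
        System.out.println(eligibleForAutoInsurance(new Driver(true, 1, 640))); // true
    }
}
```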

The key is to look at the product value chain inherent in your process — from raw materials through production, tracking, sales and support — and what decisions are key to supporting that value chain. In health insurance, for example, you might see a value chain as follows:

  1. Develop insurance product components
  2. Create insurance products
  3. Sell insurance products to clients
  4. Sign-up clients (finalize plans)
  5. Enroll members and dependents
  6. Take claims and dispense benefits
  7. Retire products

Now, consider the rules related to each of those steps in the value chain (numbers correspond to above list):

  1. Product component rules, e.g., a scheduled payout method must have a frequency and a duration
  2. Product composition rules, e.g., the product “basic life” must include a maximum
  3. Product templating rules, e.g., the “basic life” minimum dollar amount must not be less than $1000
  4. Product component decision choice rules, e.g., a client may have a plan with the “optional life” product only if the client has a plan with a “basic life” product
  5. Membership rules, e.g., a spouse of a primary plan member must not select an option that a plan member has not selected for “basic life” product
  6. Pay-out rules, e.g., total amount paid for hospital stay must be calculated as sum of each hospital payment made for claimant within claimant’s entire coverage period
  7. Product discontinuation rules, e.g., a product that is over 5 years old and that is not a sold product must be discontinued
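
Rule 7 is atomic enough to show what one of these harvested rules might look like once it’s made executable; this is my own sketch, not anything that Gladys showed:

```java
import java.time.LocalDate;
import java.time.Period;

// Rule 7 from the list above, made executable as a single predicate:
// "a product that is over 5 years old and that is not a sold product must be
// discontinued". The Product type is invented for illustration.
public class DiscontinuationRuleSketch {

    record Product(String name, LocalDate introduced, boolean sold) {}

    static boolean mustBeDiscontinued(Product p, LocalDate today) {
        int ageInYears = Period.between(p.introduced(), today).getYears();
        return ageInYears > 5 && !p.sold();
    }

    public static void main(String[] args) {
        Product stale = new Product("basic life v1", LocalDate.of(2001, 1, 1), false);
        System.out.println(mustBeDiscontinued(stale, LocalDate.of(2008, 10, 29))); // true
    }
}
```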

These rules should not be specific to being applied at specific points in the process — my earlier comment on the opening keynote on the independence of rules and process — but represent the policies that govern your business.

Drilling down into how to actually define the rules, she had a number of ways that you might consider splitting up the rules to allow them to be fully defined. Keeping with the health insurance example, you would need to define product rules, e.g., coverage, and client rules, e.g., age, geographical location, marital status, and relationship to member. Then, you need to consider how those rules interact and combine to ensure that you cover all possible scenarios, a process that is served well by tools such as decision tables to compare, for example, product by geographic region.
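
A decision table like the product-by-region example can be as simple as a two-key lookup; here’s an invented sketch:

```java
import java.util.Map;

// A decision table for the product-by-region example, expressed as a simple
// two-key lookup; products, regions and values are invented.
public class DecisionTableSketch {

    static final Map<String, Map<String, Boolean>> COVERED_BY_REGION = Map.of(
            "basic life", Map.of("Northeast", true, "Southwest", true),
            "optional life", Map.of("Northeast", true, "Southwest", false));

    static boolean covered(String product, String region) {
        return COVERED_BY_REGION
                .getOrDefault(product, Map.of())
                .getOrDefault(region, false);
    }

    public static void main(String[] args) {
        System.out.println(covered("optional life", "Southwest")); // false
    }
}
```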

This is going to lead to a broad set of rules covering the different business scenarios, and the constraints that those rules place on different parts of your business processes: in the health insurance scenario, that includes rules that impact how you sell the product, sign up members, and process claims.

You have to understand the scope before getting started with rule harvesting, or you risk having a rule harvesting project that balloons out of control and defines rules that may never be used. You may trade off going wide (across the value chain) versus going deep (drill down on one component of the value chain), or some combination of both, in order to address the current pain points or support a process automation initiative in one area. There are very low-level atomic rules, such as the maximum age for a dependent child, which also need to be captured: these are the sorts of rules that are often coded into multiple systems because of the mistaken belief that they will never change, which causes a lot of headaches when they do. You also need to look for patterns in rules, to allow for faster definition of the rules that follow a common pattern.

Proving that she knows a lot more than insurance, Gladys showed us some other examples of value chains and the associated rules in retailing and human resources.

Fact model example

Underlying all of the rule definitions, you also need to have a common fact model that you use as the basis for all rules: this defines the atomic elements and concepts of your business, the relationships between them, and the terminology.

In addition to a sort of entity-relationship diagram, you also need a concepts catalog that defines each term and any synonyms that might be used. This fact model and the associated terms will then provide a dictionary and framework for the rule harvesting/definition efforts.

All of this sounds a bit overwhelming and complex on the surface, but her key point is around the types of organization and structure that you need to put in place in your rules harvesting projects in order to achieve success. If you want to be really successful, I’d recommend calling Gladys. 🙂

Business Rules Forum: James Taylor and Neil Raden keynote

Opening the second conference day, James Taylor and Neil Raden gave a keynote about competing on decisions. First up was James, who started with a definition of what a decision is (and isn’t), speaking particularly about operational decisions that we often see in the context of automated business processes. He made a good point that your customers react to your business decisions as if they were deliberate and personal to them, when often they’re not; James’ premise is that you should be making these deliberate and personal, providing the level of micro-targeting that’s appropriate to your business (without getting too creepy about it), but that there’s a mismatch between what customers want and what most organizations provide.

Decisions have to be built into processes and systems that manage your business, so although business may drive change, IT gets to manage it. James used the term “orthogonal” when talking about the crossover between process and rules; I used the same expression in a discussion with him yesterday about how processes and decisions should not be dependent upon each other: if a decision and a process are interdependent, then you’re likely dealing with a process decision that should be embedded within the process, rather than a business decision.

A decision-centric organization is focused on the effectiveness of its decisions rather than aggregated, after-the-fact metrics; decision-making is seen as a specific competency, and resources are dedicated to making those decisions better.

Enterprise decision management, as James and Neil now define it, is an approach for managing and improving the decisions that drive your business:

  • Making the decisions explicit
  • Tracking the effectiveness of the decisions in order to improve them
  • Learning from the past to increase the precision of the decisions
  • Defining and managing these decisions for consistency
  • Ensuring that they can be changed as needed for maximum agility
  • Knowing how fast the decisions must be made in order to match the speed of the business context
  • Minimizing the cost of decisions

Using an airline pilot analogy, he discussed how business executives need a number of decision-related tools to do their job effectively:

  • Simulators (what-if analysis), to learn what impact an action might have
  • Auto-pilot, so that their business can (sometimes) work effectively without them
  • Heads-up display, so they can see what’s happening now, what’s coming up, and the available options
  • Controls, simple to use but able to control complex outcomes
  • Time, to be able to take a more strategic look at their business

Continuing on the pilot analogy, he pointed out that the term dashboard is used in business to really mean an instrument cluster: display, but no control. A true dashboard must include not just a display of what’s happening, but controls that can impact what’s happening in the business. I saw a great example of that last week at the Ultimus conference: their dashboard includes a type of interactive dial that can be used to temporarily change thresholds that control the process.

James turned the floor over to Neil, who dug further into the agility imperative: rethinking BI for processes. He sees that today’s BI tools are insufficient for monitoring and analyzing business processes, because of the agile and interconnected nature of these processes. This comes through in the results of a survey that they did about how often people use related tools: the average marketing analyst spends 1.2 hours per week using their BI tool, versus 17.4 with Excel, 4.2 with Access and 6.2 with other data administration tools. I see Excel everywhere in most businesses, whereas BI tools are typically only used by specialists, so this result does not come as a big surprise.

The analytical needs of processes are inherently complex, requiring an understanding of the resources involved and process instance data, as well as the actual process flow. Processes are complex causal systems: much more than just that simple BPMN diagram that you see. A business process may span multiple automated (monitored) processes, and may be created or modified frequently. Stakeholders require different views of those processes; simple tactical needs can be served by BAM-type dashboards, but strategic needs — particularly predictive analysis — are not well-served by this technology. This is beyond BI: it’s process intelligence, where there must be understanding of other factors affecting a process, not just measuring the aggregated outcomes. He sees process intelligence as a distinct product type, not the same as BI; unfortunately, the market is being served (or not really served) by traditional query-based approaches against a relatively static data model, or what Neil refers to as a “tortured OLAP cube-based approach”.

What process intelligence really needs is the ability to analyze the timing of the traffic flow within a process model in order to provide more accurate flow predictions, while allowing for more agile process views that are generated automatically from the BPMN process models. The analytics of process intelligence are based on the process logs, not pre-determined KPIs.
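
Here’s a rough sketch, with invented event data, of the kind of log-based analysis that he’s describing: average time per process step derived from raw start/end events rather than from pre-defined KPIs:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A rough sketch of log-based process analytics: derive the average time
// spent in each process step from raw start/end events, rather than from
// pre-defined KPIs. The event shape and sample data are invented.
public class ProcessLogSketch {

    record StepEvent(String instanceId, String step, Instant start, Instant end) {}

    static Map<String, Duration> averageStepDuration(List<StepEvent> log) {
        Map<String, Duration> totals = new HashMap<>();
        Map<String, Integer> counts = new HashMap<>();
        for (StepEvent e : log) {
            totals.merge(e.step(), Duration.between(e.start(), e.end()), Duration::plus);
            counts.merge(e.step(), 1, Integer::sum);
        }
        Map<String, Duration> averages = new HashMap<>();
        totals.forEach((step, total) -> averages.put(step, total.dividedBy(counts.get(step))));
        return averages;
    }

    public static void main(String[] args) {
        Instant t0 = Instant.parse("2008-10-29T09:00:00Z");
        List<StepEvent> log = List.of(
                new StepEvent("1", "review", t0, t0.plusSeconds(600)),
                new StepEvent("2", "review", t0, t0.plusSeconds(1200)),
                new StepEvent("1", "approve", t0, t0.plusSeconds(300)));
        System.out.println(averageStepDuration(log));
        // e.g. {approve=PT5M, review=PT15M} (map order may vary)
    }
}
```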

Neil ended up by tying this back to decisions: basically, you can’t make good decisions if you don’t understand how your processes work in the first place.

Interesting that James and Neil deal with two very important aspects of business processes: James covers decisions, and Neil covers analytics. I’ve done presentations in the past on the crossover between BPM, BRM and BI; but they’ve dug into these concepts in much more detail. If you haven’t read their book, Smart Enough Systems, there’s a lot of great material in there on this same theme; if you’re here at the forum, you can pick up a copy at their table at the expo this afternoon.

Business Rules Forum: Vendor Panel

All the usual suspects joined on a panel at the end of the day to discuss the vendor view of business rules: Pegasystems, InRule, Corticon, Fair Isaac, ILOG (soon to be IBM) and Delta-R, moderated by John Rymer of Forrester.

The focus was on what’s happening to the rules market, especially in light of the big guys like SAP and IBM joining the rules fray. Most of them think that it’s a good thing to have the large vendors in there because it raises the profile of and validates rules as a technology; likely the smaller players can innovate faster so can still carve out a reasonable piece of the market. Having seen exactly this same scenario play out in the BPM space, I think that they’re right about this.

The ILOG/IBM speaker talked about the integration of business rules and BPM as a primary driver — which of course Pega agreed with — but also the integration of rules, ETL and other technologies. Other speakers discussed the importance of decision management as opposed to just rules management, especially with regards to detecting and ameliorating (if not actually avoiding) situations like the current financial crisis; the use of predictive analytics in the context of being able to change decisions in response to changing conditions; and the current state of standards in rules management. There was a discussion about the difference between rules management and decision management, which I don’t believe answered the question with any certainty for most of the audience: when a speaker says “there’s a subtle but important difference” while making hand motions but doesn’t really elaborate, you know that you’re deep in the weeds. The Delta-R speaker characterizes decision management as rules management plus predictive modeling; I think that all of the vendors agree that decision management is a superset of rules management, but there are at least three different views on what forms that superset.

As a BPM bigot, I see rules as just another part of the services layer; I think that there’s opportunity for BRM in the cloud to be deployed and used much more easily than BPM in the cloud (making a web services call from a process or app to an external rules system isn’t very different than making a web services call to an internal rules system), but I didn’t hear that from any of the vendors.

That’s it for the day; I know that the blogging was light today, but it should be back to normal tomorrow. I’m off to the vendor expo to check out some of the products.

Business Rules Forum: Mixing Rules and Process

I had fun with my presentation on mixing rules and process, and it was a good tweetup (meeting arranged via Twitter) opportunity: Mike Kavis sat in on the session, Miko Matsumura of Software AG caught up with me afterwards, and James Taylor even admitted to stepping in for the last few minutes.

I’ve removed most of the screen snapshots from the presentation since they don’t make any sense without the discussion; the text itself is pretty straightforward and, in the end, not all that representative of what I talked about. I guess you just had to be there.