Process Notations #brf

The pool at the Bellagio was a big draw, but I’ve kept on track for this afternoon’s presentations, starting with Kathy Long on process notations. She spoke about the necessity of documenting processes, as well as the levels to which processes should be documented. Documenting the current process should only be done down to a certain level; below that, it’s more likely to be an indeterminate or changeable set of tasks that aren’t even correct.

She proposes a much simpler, higher-level process model that’s a lot like IDEF0, but she uses Inputs, Guides, Outputs and Enablers (IGOE) instead; a rough sketch of how an activity might be annotated this way follows the list:

  • Input: something that is consumed by or transformed by an activity/process
  • Guide: something that determines why, how or when an activity/process occurs but is not consumed
  • Output: something that is produced by or results from an activity/process
  • Enabler: something (person, facility, system, tools, equipment, asset or other resource) utilized to perform the activity/process
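
To make those four component types concrete, here’s a rough structural sketch of how an activity might be annotated in IGOE terms. This is my own illustration in Python, not Long’s notation or any tool’s format, and the order-approval activity is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """An activity annotated with IGOE components (illustrative only)."""
    name: str
    inputs: list[str] = field(default_factory=list)    # consumed or transformed by the activity
    guides: list[str] = field(default_factory=list)    # determine how/why/when, but are not consumed
    outputs: list[str] = field(default_factory=list)   # produced by or resulting from the activity
    enablers: list[str] = field(default_factory=list)  # people, systems, equipment used to perform it

# hypothetical example: approving a customer order
approve_order = Activity(
    name="Approve customer order",
    inputs=["submitted order"],
    guides=["credit policy", "pricing rules"],
    outputs=["approved (or rejected) order"],
    enablers=["credit analyst", "order management system"],
)
```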

She looked at some of the problems with other modeling formats; for example, BPMN is easy to learn and communicate and shows cross-functional processes and roles, but multiple process involvement is difficult to model, and it’s hard to follow decision threads: they end up more as system flows than actual business process models.

She touched on a lot of points for making process models accurate and relevant, such as levels of decomposition, and not modeling events and rules as activities; these are things that tend to happen in BPMN swimlane diagrams, but not in IGOE models. A lot of this, in fact, is about making the distinction between events and activities; there’s some confusion about this in the audience, too. Most often, what is shown as an activity (box) on a swimlane diagram should actually just be a line between activities: instead of adding an activity called “send to Accounting”, you should just have a line from the previous activity to the next activity in the Accounting swimlane. Her BPMN is a bit rusty, perhaps, because an event would not be modeled as an activity, it would be modeled as an event; instead, she showed a customer example where she used a stoplight icon to indicate an event, although there is an event icon available in BPMN.

Regardless of the notation, however, there are things that you need to consider:

  • Understand why you’re modeling processes: documentation, understanding, communication, process optimization.
  • Simplify the models by removing events and decisions
  • Understand the goals in order to set the focus – and determine the critical path – for the process

I’m not sure that I agree with all of what she states about modeling; much of the fault that she finds with BPMN is not about BPMN, but about bad instances of BPMN or bad tools. She has one really valid point, however: most process models created today are just wallpaper, not something that is actually useful for process documentation and optimization.

This is the third year that I’ve heard her speak at BRF, and the message hasn’t changed much from last year or the year before, including the core examples, so it could use a refresh. Also, I think that she needs to get a bit more updated on some of the technology that touches on process models: she sees the business doing process modeling, then handing it over to IT for implementation (which doesn’t really account for model-driven development), and speaks only fleetingly of “workflow” systems. I realize that many process models are never slated for automation, but more and more are, and the process modeling needs to account for that.

BPM, Collaboration and Social Networking

Although social software and BPM is an underlying theme in a lot of the presentations that I give, today at the Business Rules Forum is the first time in more than three years that I’ve been able to focus exclusively on that topic in a presentation. Here are the slides, and a list of the references that I used:

References:

There are many other references in this field; feel free to add your favorites in the comments section.

The Decision Dilemma #brf

The Business Rules Forum has started here in Las Vegas, and I’m here all week giving a presentation in the BPM track, facilitating a workshop and sitting on a panel. James Taylor and Eric Charpentier are also here presenting and blogging, with a focus more purely on rules and decision management; you will want to check out their blogs as well since we’ll likely all be at different sessions. I’m really impressed with what this conference has grown into: attendance is fairly low, as it has been at every conference that I’ve attended this year due to the economy, but there is a great roster of speakers and five concurrent tracks of breakout sessions including the new BPM track. As I’ve been blogging about for a while (as has James), process and rules belong together; this conference is the opportunity to learn about both as well as their overlap.

We kicked off with a welcome from Gladys Lam, followed by a keynote from Jim Sinur on making better decisions in the face of uncertainty. One thing that’s happened during the economic meltdown is that a great deal of uncertainty has been introduced into not just financial markets, but many aspects of how we do business. The result is that business processes need to be more dynamic, and need to be able to respond to emerging patterns rather than the status quo. At the Appian conference last week, Jim spoke about some of their new research on pattern-based strategies, and that’s the heart of what he’s talking about today.

One of the effects of increased connectivity on business is that it speeds the impact of change: as soon as something changes in how business works in one part of the world, it’s everywhere. This makes instability – driven by that change – the normal state rather than an exception. Although he focused on dynamic processes at the Appian conference, here he focused more on the role of rules in dealing with uncertainty, which I think is a valid point since rules and decision management are much of what allow processes to dynamically shift to accommodate changing conditions; although perhaps it is more accurate to consider the role of complex event processing as well. I am, however, left with the impression that Gartner is spinning pattern-based strategy onto pretty much every technology and special interest group.

The discussion about pattern-based strategies was the same as last week (and the same, I take it, as at the recent Gartner IT Expo where this was introduced), covering the cycle of seek, model and adapt, as well as the four disciplines of pattern seeking, performance-driven culture, optempo advantage and transparency.

There’s lots of Twitter activity about the conference, and it’s especially interesting to see reactions from other analysts such as Mike Gualtieri of Forrester.

See me at the Business Rules Forum, Las Vegas, November 1-5

I’ve been invited to speak at the Business Rules Forum, which is taking place at the Bellagio in Las Vegas on November 1-5. I’m actually doing two speaking gigs there:

  • A presentation on BPM, collaboration and social networking, which I’ve been presenting on for a few years and has lately become a hot topic of conversation. That’s on Tuesday, November 3rd.
  • On Thursday, November 5th, I’ll be facilitating a workshop session on BPM as a service, and the challenges and issues involved in deploying BPM in the cloud. If you’re at the conference, please come by and take part in the discussion – this is intended to gather everyone’s opinions, it’s not a prepared presentation.

If you haven’t signed up for the conference yet, you can get a 10% discount by using the code “9SPSK” when you register.

Lots of great speakers lined up, including keynotes by Jim Sinur of Gartner, Stephen Hendrick of IDC, and James Taylor of Decision Management Solutions. There’s also Fun Labs, where you have the opportunity to test-drive vendor products.

Business Rules Forum: Kevin Chase of ING

I’m squeezing in one last session before flying out: Kevin Chase, SVP of Implementation Services at ING, discussing how to use rules in a multi-client environment, specifically on the issues of reuse and reliability. I’ve done quite a bit of work implementing processes in multi-client environments — such as a mutual funds back-office outsourcing firm — and the different rules for each client can make for some challenges. In most cases, these companies are all governed by the same regulations, but have their own way that they want things done, even if they’re not the ones doing it.

In ING’s case, they’re doing benefits plan administration, such as retirement (401k) plans, for large clients, and have been using rules for about six years. They originally did a pilot project with one client, then rolled it out to all their clients, but didn’t see the benefits that they expected; that caused them to create a center of excellence, and now they’re refining their processes and expanding the use of rules to other areas.

They’re using rules for some complex pension calculations, replacing a previous proprietary system that offered no reuse for adding new clients, and didn’t have the scalability, flexibility and performance that they required to stay competitive. The pension calculator is a key component of pension administration, and calculating pensions (not processing transactions) represented a big part of their costs, which makes it a competitive differentiator. With limited budget and resources, they selected ILOG rules technology to replace their pension calculator, creating a fairly standalone calculator with specific interfaces to other systems. This limited-integrated approach worked well for them, and he recommended that if you have a complex calculator as part of your main business (think underwriting as another example), consider implementing rules to create a standalone or lightly-integrated calculator.

In their first implementation phase, they rewrote 50+ functions from their old calculator in Java, then used the rules engine to call the right function at the right time to create the first version of the new calculator. The calculations matched their old system (phew!) and they improved their performance and maintainability. They also improved the transparency of the calculations: it was now possible to see how a particular result was reached. The rules were written directly by their business users, although those users are actuaries with heavy math backgrounds, so likely don’t represent the skill level of a typical business user in other industries. They focused on keeping it simple and not overbuilding, and used the IT staff to build tools, not create custom applications. This is a nice echo of Kathy Long’s presentation earlier today, which said to create the rules and let the business users create their own business processes. In fact, ING uses their own people for writing rules, and uses ILOG’s professional services only for strategic advice, but never for writing code.
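
Just to make the pattern concrete: calculation logic lives in ordinary functions, while declarative rules decide which one applies to a given participant. This is a rough sketch in Python of that dispatch idea, not ILOG syntax and not ING’s actual rules; the plan types and formulas are made up:

```python
# Calculation logic in plain functions; rules pick the right one at the right time.
# Plan types and formulas are hypothetical, not ING's actual calculations.

def final_average_salary_benefit(participant):
    return 0.015 * participant["years_of_service"] * participant["average_salary"]

def cash_balance_benefit(participant):
    return participant["account_balance"] * participant["annuity_factor"]

# each rule: a condition on the participant, and the calculation function to call
CALCULATION_RULES = [
    (lambda p: p["plan_type"] == "final_average_salary", final_average_salary_benefit),
    (lambda p: p["plan_type"] == "cash_balance", cash_balance_benefit),
]

def calculate_benefit(participant):
    for condition, calculation in CALCULATION_RULES:
        if condition(participant):
            return calculation(participant)
    raise ValueError("no calculation rule matched this participant")
```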

After the initial implementation, they rolled it out to the remainder of their client base (six more organizations), representing more than 200,000 plan participants. Since they weren’t achieving the benefits that they expected, they went back to analyze where they could improve it:

  • Each new client was still being implemented by separate teams, so there was little standardization and reuse, and some significant maintenance and quality problems. It took them a while to convince management that the problem was the process of creating and maintaining rules, not the rules technology itself; eventually they created a center of excellence that isn’t just a mentoring/training group, but a group of rules experts who actually write and maintain all rules. This allows them to enforce standards, and the use of peer reviews within the CoE improves quality. They grow and shrink this team (around 12-15 people) as the workload requires, and this centralized team handles all clients to provide greater reuse and knowledge transfer.
  • They weren’t keeping up with ILOG product upgrades, mostly because it just wasn’t a priority to them, and were missing out on several major improvements as well as owning a product that was about to go out of maintenance. Since then, they’ve done some upgrades and although they’re not at the current release, they’re getting closer and have eliminated a lot of their custom code since those features are now included in the base product. The newer version also gives them better performance. I see this problem a lot with BPMS implementations as well, especially if a lot of custom code has been written that is specific to a current product version.
  • They had high infrastructure costs since each new client resulted in additional hardware and the associated CPU licensing. They’ve moved to a Linux platform (from Sun Solaris), moved from WebLogic to JBoss, and created a farm of shared rules servers.
  • Since they reduced the time and expense of building the calculator, they’ve now exposed other areas of pension administration (such as correspondence) that are taking much longer to implement: the pension calculator used to be the bottleneck in rolling out new products, but now other areas are on the critical path. That’s a nice thing for the calculator group, but it also had them start to recognize the problems in other areas and systems, pushing them to expand their rules capability into areas such as regulatory updates that span clients.

This last point has led to their current state, which is one of expansion and maturity. One major challenge is the cleanliness and integrity of data: data errors can lead to the inability to make calculations (e.g., missing birthdate) or incorrect calculation of benefits. They’re now using rules to check data and identify issues prior to executing the calculation rules, checking the input data for 30+ inconsistencies that could cause a failure in the calculator, and alerting operations staff if there needs to be some sort of manual correction or followup with the client. After the calculations are done, more data cleansing rules check for another 20+ inconsistencies, and might result in holding up final outbound correspondence to the participant until the problem is resolved.
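
The pre- and post-calculation data checks are easy to picture as rules that run around the calculator. Here’s a minimal sketch of that idea; the specific checks, field names and outcomes are my own hypothetical examples, not ING’s 30+ and 20+ actual rules:

```python
# Hypothetical data-cleansing rules that run before and after the calculation.

PRE_CALCULATION_CHECKS = [
    ("missing birth date", lambda rec: rec.get("birth_date") is None),
    ("missing hire date", lambda rec: rec.get("hire_date") is None),
    ("missing or negative salary", lambda rec: rec.get("salary") is None or rec["salary"] < 0),
]

POST_CALCULATION_CHECKS = [
    ("benefit exceeds salary", lambda rec: rec["benefit"] > rec["salary"]),
    ("zero benefit for vested participant", lambda rec: rec.get("vested") and rec["benefit"] == 0),
]

def run_checks(record, checks):
    """Return the names of failed checks so operations staff can follow up."""
    return [name for name, failed in checks if failed(record)]

def process_participant(record, calculate):
    issues = run_checks(record, PRE_CALCULATION_CHECKS)
    if issues:
        return {"status": "needs manual correction", "issues": issues}
    record["benefit"] = calculate(record)
    issues = run_checks(record, POST_CALCULATION_CHECKS)
    if issues:
        return {"status": "hold outbound correspondence", "issues": issues}
    return {"status": "ok", "benefit": record["benefit"]}
```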

He wrapped up with their key lessons learned:

  • A strong champion at the senior executive level is required, since this is a departure from the usual way of doing things.
  • A center of excellence yields great benefits in terms of quality and reuse.
  • Leverage the vendors’ expertise strategically, not to do the bulk of your implementation; use your own staff or consultants who understand your business to do the tactical work.
  • Use an iterative and phased approach for implementation.
  • Do regular assessments of where you are, and don’t be afraid to admit that mistakes were made and improvements can be made.
  • Keep up with the technology, especially in fast-moving technologies like rules, although it’s not necessary to be right on the leading edge.

Great presentation with lots of practical tips, even if you’re not in the pension administration business.

Business Rules Forum: Kathy Long on Process and Rules

Kathy Long, who (like me) is more from the process side than the rules side, gave a breakout session on how process and rules can be combined, and particularly how to find the rules within processes. She stated that most of the improvements in business processes don’t come from improving the flow (the inputs and outputs), but in the policies, procedures, knowledge, experience and bureaucracy (the guides and enablers): about 85% of the improvement comes from the latter category. She uses an analysis technique that looks at these four types of components:

  • Input: something that is consumed or transformed by a process
  • Guide: something that determines how, why or when a process occurs, but is not consumed
  • Output: something that is produced by or results from a process
  • Enabler: something used to perform a process

There’s quite a bit of material similar to her talk last year (including the core case study); I assume that this is the methodology that she uses with clients, hence it doesn’t change often. Rules fall into the “guides” category, that is, the policies and procedures that dictate how, why and when a process occurs. I’m not sure that I get the distinction that she’s making between the “how” in her description of guides, and the “how” that is embedded within process flows; I typically think of policies as business rules, and procedures as business processes, rather than both policies and procedures as being rules. Her interpretation is that policies aren’t actionable, but need to be converted to procedures, which are actionable; since rules are, by their nature, actionable, that’s what gets converted to rules. However, the examples of rules that she provided (“customer bill cannot exceed preset limit”) seem to be more policies than procedures to me.

In finding the rules in the process, she believes that we need to start at the top, not at the lowest atomic level: in other words, you don’t want to go right to the step level and try to figure out what rules to create to guide that step; you want to start at the top of the process and figure out if you’re even doing the right higher-level subprocesses and tasks, given that you’re implementing rules to automate some of the decisions in the process.

The SBVR (Semantics of Business Vocabulary and Business Rules) standard defines the difference between rules and advice, and breaks rules down into structural rules and operative rules. From there, we end up with structural business rules — which are criteria for making decisions, and can’t be violated — and operative business rules — which are guides for conduct or action, but can be violated (potentially with a penalty, e.g., an SLA). Structural rules might be more what you think of as business rules, that is, they are the underpinning for automated decisions, or are a specific computation. Operative business rules, on the other hand, may be dictated by company policy or external regulation, but may be overridden; or represent a threshold at which an alert will be raised or a process escalated.
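
To illustrate the distinction in plain code (my own hedged example, not SBVR syntax, and the thresholds are invented): a structural rule behaves like a definition or computation that simply cannot be violated, while an operative rule is a guide for conduct whose violation triggers something like an escalation.

```python
def total_premium(base_premium, rider_premiums):
    """Structural rule (illustrative): the total premium is defined as base plus riders;
    there is no way to 'violate' this, it just is the computation."""
    return base_premium + sum(rider_premiums)

def check_acknowledgement_sla(hours_elapsed, sla_hours=48):
    """Operative rule (illustrative): claims should be acknowledged within the SLA,
    but this can be violated; a breach raises an escalation rather than being impossible."""
    if hours_elapsed > sla_hours:
        return "escalate: SLA breached"
    return "within SLA"
```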

She recommends documenting rules outside the process, since the alternative is to build a decision tree into your process flow, which gets really ugly. I joked during my presentation on Tuesday that the process bigots would include all rules as explicit decision trees within the BPMS; the rules bigots would have a single step in the entire process in the BPMS, and that step would call the BRMS. Obviously, you have to find the right balance between what’s in the process map and what’s in the rules/decision service, especially when you’re creating them in separate environments.

The biggest detractor from the presentation was that Long used a case study scenario to show the value of separating rules from process, but described it in large blocks of text on her slides, which she just read aloud to us. She added a lot of information as she went along, but any guideline on giving a presentation tells you not to put a ton of text on your slides and just read it, for very good reasons: the audience tends to read the slides instead of listening to you. She might want to consider the guides that are inherent in the process of taking a case study and turning it into a presentation.

A brilliant recommendation that she ended with is to create appropriate and consistent rules across the enterprise, then let the business design their own process. Funny how some of us who are practitioners in BPM (whether at the management consulting or implementation end of things) are the biggest critics of BPM, or specifically, we see the value of using rules for agility because process often doesn’t deliver on its promises. I’ve made the statement in two presentations within the last week that BPMS implementations are becoming the new legacy systems — not (purely) because of the capability of the products, but because of how organizations are deploying them.

Business Rules Forum: Pedram Abrari on MDA, SOA and rules

Pedram Abrari, founder and CTO of Corticon, did a breakout session on model-driven architecture, SOA, and the role that rules play in all of this. I’m also in the only room in the conference center that’s close enough to the lobby to pick up the hotel wifi, and I found an electrical outlet, so I’m in blogger heaven.

It’s a day for analogies, and Abrari uses the analogy of a car for a business application: the driver represents the business, and the mechanic represents IT. A driver needs to have control over where he’s going and how he gets there, but doesn’t need to understand the details of how the car works. The mechanic, on the other hand, doesn’t need to understand where the driver is going, but keeps the car and its controls in good working order. Think of the shift from procedural to declarative development concepts, where we’ve moved from stating how to do something to stating what needs to be done. A simple example: the difference between writing code to sum a series of numbers, and just selecting a range of cells in Excel and applying the SUM function.
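
In code, the same contrast looks something like this (my example, carrying on his SUM illustration):

```python
numbers = [120, 75, 310, 42]

# procedural: spell out how to compute the total
total = 0
for n in numbers:
    total += n

# declarative: state what you want, like selecting a range and applying SUM in Excel
total = sum(numbers)
```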

The utopia of model-driven architecture (MDA) is that business applications are modeled, not programmed; they’re abstract yet comprehensive, directly executable (or at least deployable to an execution environment without programming), the monitoring and analytics are tied directly to the model, and optimization is done directly on the model. The lack of programming required for creating an executable model is critical for keeping the development in the model, and not having it get sucked down into the morass of coding that often happens in environments that are round-trippable in theory, but end up with too much IT tweaking in the execution environment to ever return to the modeling environment.

He then moved on to define SOA: the concept of reusable software components that can be loosely coupled, and use a standard interface to allow for platform neutrality and design by contract. Compound/complex services can be built by assembling lower-level services in an orchestration, usually with BPM.

The key message here is that MDA and SOA fit together perfectly, as most of us are aware: those services that you create as part of your SOA initiative can be assembled directly by your modeling environment, since there is a standard interface for doing so, and services provide functionality without having to know how (or even where) that function is executed. When your MDA environment is a BPMS, this is a crystal-clear connection: every BPMS provides easy ways to interrogate and integrate web services directly into a process as a process step.

From all of this, it’s a simple step to see that a BRMS can provide rules/decisioning services directly to a process; essentially the same message that I discussed yesterday in my presentation, where decision services are no different than any other type of web services that you would call from a BPMS. Abrari stated, however, that the focus should not be on the rules themselves, but on the decision service that’s provided, where a decision is made up of a complete and consistent set of rules that addresses a specific business decision, within a reasonable timeframe, and with a full audit log of the rules fired to reach a specific decision in order to show the decision justification. The underlying rule set must be declarative to make it accessible to business people.
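
A minimal sketch of what that looks like from the calling process’s point of view: the decision service returns a complete decision plus the rules that fired, so the justification can be audited. This is a hypothetical illustration, not Corticon’s API or any particular BRMS:

```python
def credit_limit_decision(customer):
    """Hypothetical decision service: returns the decision and the rules that fired."""
    fired = []
    limit = 5000
    if customer.get("years_as_customer", 0) >= 5:
        limit += 5000
        fired.append("loyalty-increase")
    if customer.get("late_payments", 0) > 2:
        limit = min(limit, 2000)
        fired.append("late-payment-cap")
    return {"decision": {"credit_limit": limit}, "rules_fired": fired}

# a process step calls the decision service and branches on the result,
# rather than embedding the rule logic in the process model itself
result = credit_limit_decision({"years_as_customer": 7, "late_payments": 0})
```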

He ended up with a discussion of the necessity to extract rules out of your legacy systems and put them into a central rules repository, and a summary of the model-driven service-oriented world:

  • Applications are modeled rather than coded
  • Legacy applications are also available as web services
  • Business systems are agile and transparent
  • Enterprise knowledge assets (data, decisions, processes) are stored in a central repository
  • Management has full visibility into the past, present and future of the business
  • Enterprises are no longer held hostage by the inability of their systems to keep up with the business

Although the bits on MDA and SOA might have been new to some of the attendees, some of the rules content may have been a bit too basic for this audience, and/or already covered in the general keynotes. However, Abrari is trying to make that strong connection between MDA and rules for model-driven rules development, which is the approach that Corticon takes with their product.

Business Rules Forum: Gladys Lam on Rule Harvesting

For the first breakout this morning, I attended Gladys Lam’s session on organizing a business rule harvesting project, specifically on how to split up the tasks amongst team members. Gladys does a lot of this sort of work directly with customers, so she has a wealth of practical experience to back up her presentation.

Process rules and decisioning rules

She first looked at the difference between business process rules and decisioning rules, and had an interesting diagram showing how specific business process rules are mapped into decisioning rules: in a BPMS, that’s the point where we would (should) be making a call to a BRMS rather than handling the logic directly in the process model.

The business processes typically drive the rule harvesting efforts, since rule harvesting is really about extracting and externalizing rules from the processes. That means that one or more analysts need to comb through the business processes and determine the rules inherent in those processes. As processes get large and complex, the work needs to be divided up amongst an analyst team. Her recommendations:

  • If you have limited resources and there are less than 20 rules/decisions per task, divide it up by workflow
  • If there are more than 20 rules per task, divide by task

My problem here is that she doesn’t fully define task, workflow and process in this context; I think that “task” is really a “subprocess”, and “workflow” is a top-level process. Moving on:

  • If there are more than 50 rules per task, divide by decision point; e.g., a decision about eligibility for auto insurance could be broken down into decision points based on proof of insurance, driving history, insurance risk score and other factors

She later also discussed dividing by value chain function and level of composition, but didn’t specify when you would use those techniques.
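
Taken together, her thresholds amount to a simple decision about how to partition the harvesting work; the cut-offs below are hers, the encoding is mine:

```python
def partitioning_strategy(rules_per_task: int) -> str:
    """Pick how to divide up rule harvesting work based on rule counts (illustrative)."""
    if rules_per_task < 20:
        return "divide the work by workflow"
    if rules_per_task <= 50:
        return "divide the work by task"
    return "divide the work by decision point"
```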

The key is to look at the product value chain inherent in your process — from raw materials through production, tracking, sales and support — and what decisions are key to supporting that value chain. In health insurance, for example, you might see a value chain as follows:

  1. Develop insurance product components
  2. Create insurance products
  3. Sell insurance products to clients
  4. Sign-up clients (finalize plans)
  5. Enroll members and dependents
  6. Take claims and dispense benefits
  7. Retire products

Now, consider the rules related to each of those steps in the value chain (numbers correspond to above list); a couple of them are sketched in code after the list:

  1. Product component rules, e.g., a scheduled payout method must have a frequency and a duration
  2. Product composition rules, e.g., the product “basic life” must include a maximum
  3. Product templating rules, e.g., the “basic life” minimum dollar amount must not be less than $1000
  4. Product component decision choice rules, e.g., a client may have a plan with the “optional life” product only if the client has a plan with a “basic life” product
  5. Membership rules, e.g., a spouse of a primary plan member must not select an option that a plan member has not selected for “basic life” product
  6. Pay-out rules, e.g., total amount paid for hospital stay must be calculated as sum of each hospital payment made for claimant within claimant’s entire coverage period
  7. Product discontinuation rules, e.g., a product that is over 5 years old and that is not a sold product must be discontinued
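
A couple of these read naturally as executable constraints. Here’s a hedged sketch of how the dependency rule from step 4 and the discontinuation rule from step 7 might be expressed; the data structures are invented for illustration:

```python
def optional_life_allowed(client_products: set[str]) -> bool:
    """Rule 4 (illustrative): 'optional life' is only allowed if the client also has 'basic life'."""
    return "optional life" not in client_products or "basic life" in client_products

def should_discontinue(product_age_years: float, is_sold_product: bool) -> bool:
    """Rule 7 (illustrative): a product over 5 years old that is not a sold product must be discontinued."""
    return product_age_years > 5 and not is_sold_product
```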

These rules should not be specific to being applied at specific points in the process — my earlier comment on the opening keynote on the independence of rules and process — but represent the policies that govern your business.

Drilling down into how to actually define the rules, she had a number of ways that you could consider splitting up the rules to allow them to be fully defined. Keeping with the health insurance example, you would need to define product rules, e.g., coverage, and client rules, e.g., age, geographical location, marital status, and relationship to member. Then, you need to consider how those rules interact and combine to ensure that you cover all possible scenarios, a process that is served well by tools such as decision tables to compare, for example, product by geographic region.
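
A toy version of that kind of decision table, comparing product coverage by region; the entries are invented, but the point is that missing combinations are exactly the scenarios the table forces you to think about:

```python
coverage_by_region = {
    ("basic life", "northeast"): "covered",
    ("basic life", "southwest"): "covered",
    ("optional life", "northeast"): "covered",
    ("optional life", "southwest"): "not offered",
}

def lookup_coverage(product: str, region: str) -> str:
    # a gap in the table is a scenario that still needs a rule
    return coverage_by_region.get((product, region), "undefined - needs a rule")
```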

This is going to lead to a broad set of rules covering the different business scenarios, and the constraints that those rules apply to different parts of your business processes: in the health insurance scenario that includes rules that impact how you sell the product, sign up members, and process claims.

You have to understand the scope before getting started with rule harvesting, or you risk having a rule harvesting project that balloons out of control and defines rules that may never be used. You may trade off going wide (across the value chain) versus going deep (drill down on one component of the value chain), or some combination of both, in order to address the current pain points or support a process automation initiative in one area. There are very low-level atomic rules, such as the maximum age for a dependent child, which also need to be captured: these are the sorts of rules that are often coded into multiple systems because of the mistaken belief that they will never change, which causes a lot of headaches when they do. You also need to look for patterns in rules, to allow for faster definition of the rules that follow a common pattern.

Proving that she knows a lot more than insurance, Gladys showed us some other examples of value chains and the associated rules in retailing and human resources.

Fact model example

Underlying all of the rule definitions, you also need to have a common fact model that you use as the basis for all rules: this defines the atomic elements and concepts of your business, the relationships between them, and the terminology.

In addition to a sort of entity-relationship diagram, you also need a concepts catalog that defines each term and any synonyms that might be used. This fact model and the associated terms will then provide a dictionary and framework for the rule harvesting/definition efforts.
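
An illustrative fragment of what that fact model and concepts catalog might hold: business terms with their definitions and synonyms, plus the relationships (facts) between them, all stated in the catalog’s own vocabulary. The terms here are examples, not from her material:

```python
concepts = {
    "plan member": {"definition": "a person enrolled in a benefits plan",
                    "synonyms": ["member", "participant"]},
    "dependent":   {"definition": "a spouse or child covered under a member's plan",
                    "synonyms": []},
    "product":     {"definition": "an insurance offering such as 'basic life'",
                    "synonyms": ["insurance product"]},
}

# facts: relationships between concepts, expressed using terms from the catalog
facts = [
    ("plan member", "is enrolled in", "product"),
    ("dependent", "is covered under", "plan member"),
]
```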

All of this sounds a bit overwhelming and complex on the surface, but her key point is around the types of organization and structure that you need to put in place in your rules harvesting projects in order to achieve success. If you want to be really successful, I’d recommend calling Gladys. 🙂

Business Rules Forum: James Taylor and Neil Raden keynote

Opening the second conference day, James Taylor and Neil Raden gave a keynote about competing on decisions. First up was James, who started with a definition of what a decision is (and isn’t), speaking particularly about operational decisions that we often see in the context of automated business processes. He made a good point that your customers react to your business decisions as if they were deliberate and personal to them, when often they’re not; James’ premise is that you should be making these deliberate and personal, providing the level of micro-targeting that’s appropriate to your business (without getting too creepy about it), but that there’s a mismatch between what customers want and what most organizations provide.

Decisions have to be built into the processes and systems that manage your business, so although business may drive change, IT gets to manage it. James used the term “orthogonal” when talking about the crossover between process and rules; I used this same expression in a discussion with him yesterday about how processes and decisions should not be dependent upon each other: if a decision and a process are interdependent, then you’re likely dealing with a process decision that should be embedded within the process, rather than a business decision.

A decision-centric organization is focused on the effectiveness of its decisions rather than aggregated, after-the-fact metrics; decision-making is seen as a specific competency, and resources are dedicated to making those decisions better.

Enterprise decision management, as James and Neil now define it, is an approach for managing and improving the decisions that drive your business:

  • Making the decisions explicit
  • Tracking the effectiveness of the decisions in order to improve them
  • Learning from the past to increase the precision of the decisions
  • Defining and managing these decisions for consistency
  • Ensuring that they can be changed as needed for maximum agility
  • Knowing how fast the decisions must be made in order to match the speed of the business context
  • Minimizing the cost of decisions

Using an airline pilot analogy, he discussed how business executives need a number of decision-related tools to do their job effectively:

  • Simulators (what-if analysis), to learn what impact an action might have
  • Auto-pilot, so that their business can (sometimes) work effectively without them
  • Heads-up display, so they can see what’s happening now, what’s coming up, and the available options
  • Controls, simple to use but able to control complex outcomes
  • Time, to be able to take a more strategic look at their business

Continuing on the pilot analogy, he pointed out that the term dashboard is used in business to really mean an instrument cluster: display, but no control. A true dashboard must include not just a display of what’s happening, but controls that can impact what’s happening in the business. I saw a great example of that last week at the Ultimus conference: their dashboard includes a type of interactive dial that can be used to temporarily change thresholds that control the process.

James turned the floor over to Neil, who dug further into the agility imperative: rethinking BI for processes. He sees that today’s BI tools are insufficient for monitoring and analyzing business processes, because of the agile and interconnected nature of these processes. This comes through in the results of a survey that they did about how often people use related tools: the average hours per week that a marketing analyst spends using their BI tool is 1.2, versus 17.4 for Excel, 4.2 for Access and 6.2 for other data administration tools. I see Excel everywhere in most businesses, whereas BI tools are typically only used by specialists, so this result does not come as a big surprise.

The analytical needs of processes are inherently complex, requiring an understanding of the resources involved and process instance data, as well as the actual process flow. Processes are complex causal systems: much more than just that simple BPMN diagram that you see. A business process may span multiple automated (monitored) processes, and may be created or modified frequently. Stakeholders require different views of those processes; simple tactical needs can be served by BAM-type dashboards, but strategic needs — particularly predictive analysis — are not well-served by this technology. This is beyond BI: it’s process intelligence, where there must be understanding of other factors affecting a process, not just measuring the aggregated outcomes. He sees process intelligence as a distinct product type, not the same as BI; unfortunately, the market is being served (or not really served) by traditional query-based approaches against a relatively static data model, or what Neil refers to as a “tortured OLAP cube-based approach”.

What process intelligence really needs is the ability to analyze the timing of the traffic flow within a process model in order to provide more accurate flow predictions, while allowing for more agile process views that are generated automatically from the BPMN process models. The analytics of process intelligence are based on the process logs, not pre-determined KPIs.
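
A minimal sketch of what log-based flow-timing analysis could look like: compute the average time spent between activities directly from process instance event logs, rather than from pre-defined KPIs. The log format and numbers here are hypothetical:

```python
from collections import defaultdict
from statistics import mean

# each event: (process instance id, activity name, timestamp in seconds)
events = [
    (1, "Receive claim", 0), (1, "Assess claim", 3600), (1, "Pay claim", 90000),
    (2, "Receive claim", 0), (2, "Assess claim", 7200), (2, "Pay claim", 50000),
]

# group events by process instance, in time order
by_instance = defaultdict(list)
for instance, activity, ts in sorted(events, key=lambda e: (e[0], e[2])):
    by_instance[instance].append((activity, ts))

# measure the elapsed time for each activity-to-activity transition
durations = defaultdict(list)
for steps in by_instance.values():
    for (a1, t1), (a2, t2) in zip(steps, steps[1:]):
        durations[(a1, a2)].append(t2 - t1)

for transition, times in durations.items():
    print(transition, "average seconds:", mean(times))
```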

Neil ended up by tying this back to decisions: basically, you can’t make good decisions if you don’t understand how your processes work in the first place.

Interesting that James and Neil deal with two very important aspects of business processes: James covers decisions, and Neil covers analytics. I’ve done presentations in the past on the crossover between BPM, BRM and BI; but they’ve dug into these concepts in much more detail. If you haven’t read their book, Smart Enough Systems, there’s a lot of great material in there on this same theme; if you’re here at the forum, you can pick up a copy at their table at the expo this afternoon.

Business Rules Forum: Vendor Panel

All the usual suspects joined on a panel at the end of the day to discuss the vendor view of business rules: Pegasystems, InRule, Corticon, Fair Isaac, ILOG (soon to be IBM) and Delta-R, moderated by John Rymer of Forrester.

The focus was on what’s happening to the rules market, especially in light of the big guys like SAP and IBM joining the rules fray. Most of them think that it’s a good thing to have the large vendors in there, because it raises the profile of and validates rules as a technology; likely the smaller players can innovate faster, so they can still carve out a reasonable piece of the market. Having seen exactly this same scenario play out in the BPM space, I think that they’re right about this.

The ILOG/IBM speaker talked about the integration of business rules and BPM as a primary driver — which of course Pega agreed with — but also the integration of rules, ETL and other technologies. Other speakers discussed the importance of decision management as opposed to just rules management, especially with regards to detecting and ameliorating (if not actually avoiding) situations like the current financial crisis; the use of predictive analytics in the context of being able to change decisions in response to changing conditions; and the current state of standards in rules management. There was a discussion about the difference between rules management and decision management, which I don’t believe answered the question with any certainty for most of the audience: when a speaker says “there’s a subtle but important difference” while making hand motions but doesn’t really elaborate, you know that you’re deep in the weeds. The Delta-R speaker characterizes decision management as rules management plus predictive modeling; I think that all of the vendors agree that decision management is a superset of rules management, but there are at least three different views on what forms that superset.

As a BPM bigot, I see rules as just another part of the services layer; I think that there’s opportunity for BRM in the cloud to be deployed and used much more easily than BPM in the cloud (making a web services call from a process or app to an external rules system isn’t very different than making a web services call to an internal rules system), but I didn’t hear that from any of the vendors.

That’s it for the day; I know that the blogging was light today, but it should be back to normal tomorrow. I’m off to the vendor expo to check out some of the products.