Using a center of excellence to deliver BPM #GartnerBPM

Michelle Lagna of JPMorgan Chase, a Pegasystems customer, gave a presentation on their CoE as one of the solution provider sessions. Their CoE focuses on the use of BPM tools (primarily Pegasystems) to support their 30+ active systems. It was instrumental in allowing them to break down the departmental silos within the organization, establishing standard governance models, standardizing training and contributing to reusable assets.

The CoE supports all lines of business in planning and implementing BPM initiatives:

  • Creating and maintaining architectural standards
  • Centralizing and formalizing the housing and reuse of business-configurable assets
  • Promoting standard methodologies, tools and education

They use the Agile development methodology (and promote and educate on Agile across the organization), and believe that it has been instrumental to their success, reducing time to market and aligning business and IT. They’ve made a gradual transition from waterfall to Agile in order to ease into the new methodology.

They’ve developed a standard engagement model (unfortunately depicted on the presentation slide in micro-print and low contrast colors):

  • Operational walkthrough and end-to-end review, including identification of process improvements and ROI
  • Impact analysis review, identifying execution gaps and automated solutions, plus IT and business sizing
  • Project initiation training, including both BPM and Agile training
  • Application profile, high level use case requirements and reusable asset review
  • Project setup and design review, including identifying assets leveraged from other projects, functionality specifications and a design compliance review
  • Environment build-out, including generating a base framework
  • Bootstrap session, which equips the project team to complete use cases on their own
  • Direct capture of objectives to elaborate use cases, design specifications and traceability matrix; this is specifically assisted by the Pega product
  • Identification of reusable assets, then harvesting those assets and making them available for reuse by other projects

The CoE is heavily involved in the early phases, but by the time they are halfway through the project, the project team is running on its own and the CoE is just checking in occasionally to make sure that things are proceeding as planned, and to help resolve any issues. They had to make some organizational changes to ensure that the CoE is engaged at the right time and that siloed solutions are avoided.

She presented some of the key benefits provided by the CoE:

  • Common class structure for reusability
  • Library of reusable assets with tools to track usage
  • Standardized engagement model, including a “Perfect Start” training and certification stage
  • Monthly educational webcast
  • Improved release planning process (which I’ve seen listed as a key benefit of a CoE at other customers that use other BPM products)
  • Allowing for faster changes to improve business agility

The CoE has been backed by senior executive sponsors within JPMC, which has been key to its acceptance. They are run (and funded) as a shared service, so there are normally no direct chargebacks to the projects unless the CoE team is required to be onsite for an extended period of time due to a rush or urgent situation. Interestingly, the CoE is not all co-located: there are five offshore development resources that handle harvesting the reusable assets, although they are managed by an onshore resource.

Great case study, and a lot of material that is of use regardless of which BPM product you’re using.

SAP NetWeaver BPM

This post is both long, and long overdue. It’s based on several online sessions with Donka Dimitrova and Jie Deng of the SAP NetWeaver BPM product management team, then an update with Wolfgang Hilpert and Thomas Volmering at SAPPHIRE in May when the product entered unrestricted release. In the past few weeks, there’s been a series of “Introduction to SAP NetWeaver BPM” posts by Arafat Farooqui of Wipro on the SAP SDN site (part 1, part 2, part 3 and part 4, which are really about how to hook up a Web Dynpro UI to a human task in BPM, then invoke a process instance using web services from a portal), and I’m inspired to finally gather up all my notes and impressions.

The driver for BPM with SAP is pretty obvious: Business Workflow within the SAP ERP suite just isn’t agile or functional enough to compete with what’s been happening in BPM now, and SAP customers have been bringing in other BPM suites for years to complement their SAP systems. I had to laugh at one of Dimitrova’s comments on the justification for BPM during our discussion – "process changes in an ERP are difficult and require many hours from developers" – oh, the irony of this coming from an SAP employee!

The Eclipse-based Process Composer is part of the NetWeaver Developers’ Studio, and is used to create processes in the context of other development tools, such as the Yasu rules engine (which they bought) and user interfaces. Like most modern BPMS’, what you draw in the Process Composer (in BPMN) is directly executed, although user interfaces must be created in other development tools such as Web Dynpro or Adobe Interactive Forms, then linked to the process steps. There are future plans to generate a UI from the process context data or provide some sort of graphical forms designer in place, but that’s not there yet.

As with most Eclipse-based process modelers that I’ve seen, Process Composer has multiple perspectives for different types of process design participants, with a shared process model. Initially, there is only a process architect (technical) perspective in the modeler, and the business analyst view will be released this year. Future releases will include a line-of-business manager view to see task sequences and parallelism, but no details of gateways; and an executive view of major phases with analytics and KPI dashboards.

There is no link between ARIS-based modeling (SAP Enterprise Modeling Applications by IDS Scheer) and NetWeaver BPM in this version; integration is planned for a later version, although it will be interesting to see how that plays out now that IDS Scheer has been purchased by Software AG, which competes with SAP in (at least) the BPM arena.

Although all you can do now is create your BPM processes in this environment, in the future, there are plans to have a common modeler and composition environment provide visibility into ERP processes, too, which will be a huge selling point for existing SAP customers who need more agility in their ERP processes. This common process layer will provide not just a unified design experience, but common runtime services, such as monitoring and performance management.

One huge issue from an orchestration standpoint is the lack of support for asynchronous web services calls, meaning that you have to use the existing NetWeaver Process Integrator (PI) environment to create system-centric processes, then invoke those (as synchronous web services) from NetWeaver BPM as required. I didn’t get a clear answer on future plans to merge the two process management platforms; keeping them separate will definitely cause some customer pushback, since most organizations don’t want to buy two different products to manage system-centric and human-centric processes, as they are encouraged to do by stack vendors such as IBM and Oracle.

Taking a look at the Process Composer environment, this is a fairly standard Eclipse-based BPMN process modeling environment: you create a process, add pools, add steps and link them together. For human-facing tasks, you use the properties of the step to connect it to a UI for that step, which must already be built by a developer using something like Web Dynpro. As I mentioned previously, the first version only has the “process architect” perspective, and is targeted at creating human-centric processes without full orchestration capabilities, since that’s what SAP’s customers who were involved in the product development cycle said that they most wanted. The environment is fairly technical, and I wouldn’t put it in front of any but the most technical of business analysts.

Roles can be set by lanes and overridden by task role assignment, which allows using the lanes for a department (for example) and overriding manager-specific tasks without moving them to another lane. Also, expressions can be used to assign roles, such as the manager of the user who started the process. User IDs, roles and groups are pulled from the NetWeaver user management engine (UME).
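
As a concrete illustration of that last idea, here’s a minimal sketch of expression-based role assignment; the interface and names below are invented for illustration only and are not SAP’s UME API.

```java
// Hypothetical sketch of expression-based role assignment, e.g. routing an
// approval task to "the manager of the user who started the process".
// UserDirectory is an invented interface, not SAP's user management engine API.
interface UserDirectory {
    String managerOf(String userId);
}

class ApprovalTaskAssignment {
    // Resolved at runtime from the process initiator, rather than hard-coding
    // a specific user or lane role into the process model.
    static String resolveApprover(UserDirectory directory, String processInitiator) {
        return directory.managerOf(processInitiator);
    }
}
```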

Each step can have other properties, including deadlines (and the events that occur when they are exceeded) and user texts that appear for this step in the user worklist, which can include parameters from the process instance. These are all maintained (I think) in a task object, which is then associated with a step on the process map; that allows the same task to be easily reused within the same process or across processes.

There are a number of things that I like about Process Composer:

  • Some nice UI pop-ups on the process map to make specifying the next step easier.
  • An explicit process data model, called the process context, driven by master data management concepts; this is used for expressions and conditions in gateways, and to map to the inputs and outputs of the UI of human steps or the service interface of automated steps. It can be imported as an XSD file if you already have the schema elsewhere.
  • The visuals used to map and transform from the process context to a human or web service step make it obvious what’s getting mapped where, while allowing for sophisticated transformations as part of the mapping. Furthermore, a mapping – including transformation functions – can be saved and reused in other processes that have the same process context parameters. There’s a rough sketch of the context-and-mapping idea just after this list.
  • Lots of fairly obvious drag-and-drop functionality: drag a task to create a step on a process map, drag a role to assign to a pool, or drag a WSDL service definition to create a system task.
  • Nice integration of the Yasu rules engine, which can be purely within the context of the process with rules appearing as functions available when selecting gateway conditions, or as a more loosely-coupled full rules engine.
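
To make the process context and mapping ideas a little more concrete, here’s a rough sketch in plain Java; the class names and fields are invented for illustration, and SAP’s actual generated artifacts (from the BPMN model or an imported XSD) will look different.

```java
// Hypothetical illustration of a "process context" and a step-input mapping.
// The names and shapes here are invented for illustration only.
public class ProcessContextSketch {

    // The process context: one explicit data structure shared by all steps.
    static class LoanProcessContext {
        String customerName;
        double requestedAmount;
        int riskScore;
    }

    // The input expected by a (hypothetical) human approval task's UI.
    static class ApprovalTaskInput {
        String applicantDisplayName;
        String formattedAmount;
        boolean requiresSeniorApproval;
    }

    // The mapping/transformation from context to task input -- the kind of
    // thing Process Composer lets you define visually and reuse across processes.
    static ApprovalTaskInput mapToApprovalTask(LoanProcessContext ctx) {
        ApprovalTaskInput in = new ApprovalTaskInput();
        in.applicantDisplayName = ctx.customerName.toUpperCase();
        in.formattedAmount = String.format("$%,.2f", ctx.requestedAmount);
        in.requiresSeniorApproval = ctx.requestedAmount > 100_000 || ctx.riskScore > 700;
        return in;
    }
}
```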

Process Composer is just one tab within the whole NetWeaver Project Explorer environment: you can open other tabs for UI design, rules and other types of components. This allows the process to be visible while rules are being modeled, for example: handy for those of us with a short attention span. Rules are created using decision tables, or by writing in a Java-based rules language; Dimitrova referred to the latter as being “a bit complicated for business people”, which is a bit of an understatement, although decision tables are readily usable by business analysts. Future releases will have a business perspective in the rules modeler.

The Rules Composer is a full rules modeling environment, including debugging for incomplete or over-constrained rules in a decision table, and rules versioning. Parameters from a process context can be passed in to rules. Rules can be exposed as web services and called just like any other web service; in fact, although there is tight integration between the rules and process environment allowing for easy creation of a rule directly from within the Process Composer perspective, the rules management system is a separate entity and can be used independently of BPM: really the best of both worlds.

Having spent about 3 sessions going through the design environments, we moved on to process execution. Processes can be initiated using a web services call, from an Adobe form, or manually by an administrator. Since process models are versioned, all of the versions available on the server can be seen and instantiated.

Human tasks can be seen in the SAP Universal Worklist (UWL) through a connector that I heard about at SAPPHIRE, appearing along with any other tasks that are surfaced there, including SAP ERP tasks or tasks from other systems that have developed a connector into the UWL. I like the unified inbox approach that they’re presenting: other BPM systems could, in fact, add their own human tasks in here, and it provides a common inbox that is focused on human workflow. Although an email inbox could be used for the same purpose, it doesn’t provide adequate management of tasks from a BPMS. The UWL is fairly independent of NetWeaver BPM; this is just one way, provided by SAP in a portal environment, to surface a worklist of BPM tasks, but it doesn’t have to be done that way.

Once a task is selected and opened, there is a frame across the top with standard task information that will be common across all tasks: information such as start date, deadline and status; common task functions of Close, Delegate and Revoke; and notes and attachments to the task. Below that is the Web Dynpro UI form that was connected to that task in the Process Composer, which contains the instance data that is specific to the process context for this process. The user can interact with that form in whatever manner specified by the Web Dynpro developer, which might involve accessing data from databases or ERP systems; that part is completely independent of NetWeaver BPM.

The user can also click through to a process view showing where they are in the context of the entire process map, plus runtime task parameters such as priority and start date.

Monitoring and management of work in progress, those all-important areas, are a bit weak in the first version. In the next version, there will be a dashboard showing process status and cycle time, with drill-down to process instances, combining exported BI data and realtime work in progress statistics. There is no way to apply process design changes to work in progress; there are actually only a few BPMS that do this very well, and most either don’t do it at all or require manual modification of each instance. Wherever possible, things that might change should be put into business rules, so that the correct rule is invoked at the point in time that it is required, not when the process instance was created.
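
A minimal sketch of that last point, with an invented rule-service interface rather than any real SAP or Yasu API: because the threshold is fetched from the rule service at the moment the step executes, a long-running instance automatically picks up rule changes made after it was started.

```java
// Hypothetical sketch: evaluate a business rule at the moment a step runs,
// instead of copying the value into the instance when the process starts.
// RuleService is an invented interface, not an SAP or Yasu API.
interface RuleService {
    double currentApprovalThreshold();   // maintained by the business in the BRMS
}

class PaymentApprovalStep {
    private final RuleService rules;

    PaymentApprovalStep(RuleService rules) {
        this.rules = rules;
    }

    // Called when the step actually executes -- possibly weeks after the
    // instance was created -- so it always sees the latest rule value.
    boolean needsManagerApproval(double paymentAmount) {
        return paymentAmount > rules.currentApprovalThreshold();
    }
}
```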

At the end of all the demos, I was impressed with what SAP has released for a version 1.0, especially some of the nice handling of data and rules, yet aware of the many things that are still missing:

  • task UI generation
  • simulation
  • KPI measurement
  • asynchronous web services calls
  • links to/from ARIS
  • common process composition environment across BPM and ERP processes
  • BPEL translation
  • business analyst perspective in process and rules modelers
  • BPMN 2.0 support
  • strategy for merging or coexisting with NetWeaver process orchestration platform

In the briefing at SAPPHIRE, I did see a bit of the roadmap for what’s coming in the next year or two. In 2009, the focus will be on releasing the common process layer to allow for discovery, design and management of processes that include core (ERP) processes, human tasks in BPM, and service orchestration. This, in my opinion, is the make-or-break feature for NetWeaver BPM: if they can’t show much deeper integration with their ERP suite than any other BPMS vendor can offer, then they’re just another behind-the-curve BPMS struggling for market share. If they do this right, they will be positioned to win deals against other BPMS vendors that target SAP customers, as well as having a pre-existing relationship with SAP customers who may not yet have considered BPM.

Also in 2009, expect to see convergence of their BPM and BI, which is badly needed in order to provide dashboard/monitoring capabilities for BPM.

Further out, they’re planning to introduce a UI generator that will create a simple forms-based UI for tasks based on the process context (data model), as well as reports generated from the process definition and point-and-click integration of analytics at process steps. There will be more robust event provisioning tied to the existing event structure in the ERP layer, allowing events to be propagated to external applications such as BPM, and intermediate message events integrated with Business Suite. As mentioned previously, there will be new perspectives in the Process Composer, initially a business analyst perspective with a different focus than the existing technical perspective, not just a dumbed-down version as I’ve seen in other tools, and eventually they’ll use the Eclipse rich client platform (RCP) for an even lighter weight (and less geeky) Eclipse interface. There are plans for allowing ad hoc collaboration at a process step – necessary for case management functionality – as well as allowing operations managers to have control over interactive rule thresholds, providing greater business control over processes once they are in operation.

There’s a lot still missing in this first version: simulation, KPIs and asynchronous web services calls, just to name a few. That doesn’t mean, however, that it’s not usable – I know many customers using BPMS’ that do support those functions, but the customers never use them: great demo and sales tools, but not always used in reality.

NetWeaver BPM is not the best BPMS on the market. However, they don’t need to be the best BPMS on the market: they need to be the best BPMS for SAP customers. They’re not quite there yet, but it’s an achievable goal.

SAP BPM 2008: Business Rules Management

I was up bright and early today to hear Soum Chatterjee from SAP Labs give an introduction to their business rules product, the recently-acquired Yasu (which Chatterjee claims stands for Yet Another Start-Up). I’ve had a bit of a look at it in the context of the NetWeaver BPM demos that I’ve had, but wanted to hear about their roadmap for the product.

He started with some very fundamental information on business rules, and made an interesting comment (considering who writes his paycheck): maybe embedding rules in the code of systems like SAP’s ERP was not a great idea. Of course, neither was having rules embedded in database triggers or non-automated methods such as documenting them in procedures guides or just having them in people’s heads. In these cases, we might see lack of flexibility, lack of visibility and lack of enforcement/standardization, as well as having the business rules scattered around the organization where they can’t be properly managed. The solution, of course, is SAP NetWeaver BRM 🙂  Considering that the audience is mostly SAP customers who are very used to the idea of business rules embedded within their ERP code, some of these ideas are pretty radical, but he did a good job of laying out the value proposition of business rules, not just a product overview. He put it in the context of BPM, where the ability to change the rules within processes provides maximum agility.

From a rules product standpoint, they have a suite including:

  • A composer for modeling rules, in an Eclipse-based environment that can be used by a business analyst. It uses a natural language-like representation of the rules, and provides conflict resolution and other up-front analysis of the rules being modeled. Rules can be represented as a decision table, classic if-then-else code, or as a graphical rule flow (which is a sort of decision tree). I’ve also seen this integrated into the process modeling environment in their BPM product.
  • A rules manager for deploying and managing rules.
  • A rules engine to execute the rules. Rules can be consumed as web services (and therefore by their BPM or any other composite application modeling environment) and by ABAP business applications.
  • A repository for storing the rules assets.
  • A rules analyzer for optimization (not released yet).

They’ve focused on fast methods of testing and refining rules, particularly by a business analyst. They also have a lot of change management and governance built in.

He covered how BRM and BPM will work together:

  • Complex rule-based decisions (pricing, credit decisions, etc.)
  • Responsibility determination (rule-based task assignment)
  • Recognition of business events
  • Routing rules
  • Parameter thresholds and tolerance (constraints)

Rules can be modeled in the rules composer or in the process composer. He showed us a (canned) demo of the rules composer that would have been a lot more compelling if he had narrated it in a bit more detail: I was sitting at the front of the room so could see the screen, but I’m sure that those at the back of the room couldn’t read it, and there wasn’t enough narration to follow along with what was happening in the screen playback. Eight minutes into the video (only halfway!), we move from code-based rules to decision tables, which is a bit more interesting from a demo standpoint, but I really doubt if anyone who didn’t already know something about rules modeling would have gained a lot of information from watching this. It also made the composer look a lot more difficult than it actually is, as evidenced by an audience question about whether they expected business users to use this (in a disbelieving voice).

He finished up with the product roadmap:

  • This year, they’ve delivered the business rules composition and execution environment, available for invocation from the various SAP product lines, and integrated with the BPM composition environment.
  • In 2009, there will be more complex decision sequences, integrated support for rule refinement and validation, end-to-end change management, and improved business user participation and collaboration in rules authoring and change management.
  • In 2010, the plan (which of course can change) is to have real-time rule-based responses to business events, advanced rules analysis capabilities with alignment to business goals, and better modeling capabilities for business analysts.

Business Rules Forum: Kevin Chase of ING

I’m squeezing in one last session before flying out: Kevin Chase, SVP of Implementation Services at ING, discussing how to use rules in a multi-client environment, specifically on the issues of reuse and reliability. I’ve done quite a bit of work implementing processes in multi-client environments — such as a mutual funds back-office outsourcing firm — and the different rules for each client can make for some challenges. In most cases, these companies are all governed by the same regulations, but have their own way that they want things done, even if they’re not the ones doing it.

In ING’s case, they’re doing benefits plan administration, such as retirement (401k) plans, for large clients, and have been using rules for about six years. They originally did a pilot project with one client, then rolled it out to all their clients, but didn’t see the benefits that they expected; that caused them to create a center of excellence, and now they’re refining their processes and expanding the use of rules to other areas.

They’re using rules for some complex pension calculations, replacing a previous proprietary system that offered no reuse for adding new clients, and didn’t have the scalability, flexibility and performance that they required to stay competitive. The pension calculator is a key component of pension administration, and calculating pensions (not processing transactions) represented a big part of their costs, which makes it a competitive differentiator. With limited budget and resources, they selected ILOG rules technology to replace their pension calculator, creating a fairly standalone calculator with specific interfaces to other systems. This limited-integration approach worked well for them, and he recommended that if you have a complex calculator as part of your main business (think underwriting as another example), you consider implementing rules to create a standalone or lightly-integrated calculator.

In their first implementation phase, they rewrote 50+ functions from their old calculator in Java, then used the rules engine to call the right function at the right time to create the first version of the new calculator. The calculations matched their old system (phew!) and they improved their performance and maintainability. They also improved the transparency of the calculations: it was now possible to see how a particular result was reached. The rules were written directly by their business users, although those users are actuaries with heavy math backgrounds, so likely don’t represent the skill level of a typical business user in other industries. They focused on keeping it simple and not overbuilding, and used the IT staff to build tools, not create custom applications. This is a nice echo of Kathy Long’s presentation earlier today, which said to create the rules and let the business users create their own business processes. In fact, ING uses their own people for writing rules, and uses ILOG’s professional services only for strategic advice, but never for writing code.

After the initial implementation, they rolled it out to the remainder of their client base (six more organizations), representing more than 200,000 plan participants. Since they weren’t achieving the benefits that they expected, they went back to analyze where they could improve it:

  • Each new client was still being implemented by separate teams, so there was little standardization and reuse, and some significant maintenance and quality problems. It took them a while to convince management that the problem was the process of creating and maintaining rules, not the rules technology itself; eventually they created a center of excellence that isn’t just a mentoring/training group, but a group of rules experts who actually write and maintain all rules. This allows them to enforce standards, and the use of peer reviews within the CoE improves quality. They grow and shrink this team (around 12-15 people) as the workload requires, and this centralized team handles all clients to provide greater reuse and knowledge transfer.
  • They weren’t keeping up with ILOG product upgrades, mostly because it just wasn’t a priority to them, and were missing out on several major improvements as well as owning a product that was about to go out of maintenance. Since then, they’ve done some upgrades and although they’re not at the current release, they’re getting closer and have eliminated a lot of their custom code since those features are now included in the base product. The newer version also gives them better performance. I see this problem a lot with BPMS implementations as well, especially if a lot of custom code has been written that is specific to a current product version.
  • They had high infrastructure costs since each new client resulted in additional hardware and the associated CPU licensing. They’ve moved to a Linux platform (from SUN Solaris), moved from WebLogic to JBOSS, and created a farm of shared rules servers.
  • Since they reduced the time and expense of building the calculator, they’ve now exposed other areas of pension administration (such as correspondence) that are taking much longer to implement: the pension calculator used to be the bottleneck in rolling out new products, but now other areas were on the critical path. That’s a nice thing for the calculator group, but it made them start to recognize the problems in other areas and systems, pushing them to expand their rules capability into areas such as regulatory updates that span clients.

This last point has led to their current state, which is one of expansion and maturity. One major challenge is the cleanliness and integrity of data: data errors can lead to the inability to make calculations (e.g., missing birthdate) or incorrect calculation of benefits. They’re now using rules to check data and identify issues prior to executing the calculation rules, checking the input data for 30+ inconsistencies that could cause a failure in the calculator, and alerting operations staff if there needs to be some sort of manual correction or followup with the client. After the calculations are done, more data cleansing rules check for another 20+ inconsistencies, and might result in holding up final outbound correspondence to the participant until the problem is resolved.
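
The data-checking pattern is straightforward to illustrate; this is a generic sketch of pre-calculation validation, not ING’s actual implementation or ILOG rule syntax.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Generic illustration of "validate the data before running the calculator" --
// not ING's implementation and not ILOG rule syntax, just the shape of the idea.
class ParticipantRecord {
    LocalDate birthDate;        // may be missing in dirty data
    LocalDate hireDate;
    Double annualSalary;
}

class PreCalculationChecks {
    // Each check mirrors one of the "30+ inconsistencies" that would make
    // the pension calculation fail or produce a wrong benefit.
    static List<String> validate(ParticipantRecord p) {
        List<String> issues = new ArrayList<>();
        if (p.birthDate == null)
            issues.add("Missing birth date: cannot determine retirement eligibility");
        if (p.hireDate != null && p.birthDate != null && p.hireDate.isBefore(p.birthDate))
            issues.add("Hire date precedes birth date");
        if (p.annualSalary == null || p.annualSalary <= 0)
            issues.add("Missing or non-positive salary");
        return issues;   // non-empty list -> route to operations staff, hold the calculation
    }
}
```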

He wrapped up with their key lessons learned:

  • A strong champion at the senior executive level is required, since this is a departure from the usual way of doing things.
  • A center of excellence yields great benefits in terms of quality and reuse.
  • Leverage the vendors’ expertise strategically, not to do the bulk of your implementation; use your own staff or consultants who understand your business to do the tactical work.
  • Use an iterative and phased approach for implementation.
  • Do regular assessments of where you are, and don’t be afraid to admit that mistakes were made and improvements can be made.
  • Keep up with the technology, especially in fast-moving technologies like rules, although it’s not necessary to be right on the leading edge.

Great presentation with lots of practical tips, even if you’re not in the pension administration business.

Business Rules Forum: Kathy Long on Process and Rules

Kathy Long, who (like me) is more from the process side than the rules side, gave a breakout session on how process and rules can be combined, and particularly how to find the rules within processes. She stated that most of the improvements in business processes don’t come from improving the flow (the inputs and outputs), but from the policies, procedures, knowledge, experience and bureaucracy (the guides and enablers): about 85% of the improvement comes from the latter category. She uses an analysis technique that looks at these four types of components:

  • Input: something that is consumed or transformed by a process
  • Guide: something that determines how, why or when a process occurs, but is not consumed
  • Output: something that is produced by or results from a process
  • Enabler: something used to perform a process

There’s quite a bit of material similar to her talk last year (including the core case study); I assume that this is the methodology that she uses with clients, hence it doesn’t change often. Rules fall into the “guides” category, that is, the policies and procedures that dictate how, why and when a process occurs. I’m not sure that I get the distinction that she’s making between the “how” in her description of guides, and the “how” that is embedded within process flows; I typically think of policies as business rules, and procedures as business processes, rather than both policies and procedures as being rules. Her interpretation is that policies aren’t actionable, but need to be converted to procedures, which are actionable; since rules are, by their nature, actionable, that’s what gets converted to rules. However, the examples of rules that she provided (“customer bill cannot exceed preset limit”) seem to be more policies than procedures to me.

In finding the rules in the process, she believes that we need to start at the top, not at the lowest atomic level: in other words, you don’t want to go right to the step level and try to figure out what rules to create to guide that step; you want to start at the top of the process and figure out if you’re even doing the right higher-level subprocesses and tasks, given that you’ve implemented rules to automate some of the decisions in the process.

The SBVR (Semantics of Business Vocabulary and Business Rules) standard defines the difference between rules and advice, and breaks down rules into business rules and structural rules. From there, we end up with structural business rules — which are criteria for making decisions, and can’t be violated — and operative business rules — which are guides for conduct or action, but can be violated (potentially with a penalty, e.g., an SLA). Structural rules might be more what you think of as business rules, that is, they are the underpinning for automated decisions, or are a specific computation. On the other hand, operative business rules may be dictated by company policy or external regulation, but may be overridden; or represent a threshold at which an alert will be raised or a process escalated.

She recommends documenting rules outside the process, since the alternative is to build a decision tree into your process flow, which gets really ugly. I joked during my presentation on Tuesday that the process bigots would include all rules as explicit decision trees within the BPMS; the rules bigots would have a single step in the entire process in the BPMS, and that step would call the BRMS. Obviously, you have to find the right balance between what’s in the process map and what’s in the rules/decision service, especially when you’re creating them in separate environments.

The biggest detractor from the presentation is that Long used a case study scenario to show the value of separating rules from process, but described it in large blocks of text on her slides, which she just read aloud to us. She added a lot of information as she went along, but any guideline on giving a presentation tells you not to put a ton of text on your slides and just read it, for very good reasons: the audience tends to be reading the slides instead of listening to you. She might want to consider the guides that are inherent in the process of taking a case study and turning it into a presentation.

A brilliant recommendation that she ended with is to create appropriate and consistent rules across the enterprise, then let the business design their own process. Funny how some of us who are practitioners in BPM (whether at the management consulting or implementation end of things) are the biggest critics of BPM, or specifically, we see the value of using rules for agility because process often doesn’t deliver on its promises. I’ve made the statement in two presentations within the last week that BPMS implementations are becoming the new legacy systems — not (purely) because of the capability of the products, but because of how organizations are deploying them.

Business Rules Forum: Pedram Abrari on MDA, SOA and rules

Pedram Abrari, founder and CTO of Corticon, did a breakout session on model-driven architecture, SOA, and the role that rules play in all of this. I’m also in the only room in the conference center that’s close enough to the lobby to pick up the hotel wifi, and I found an electrical outlet, so I’m in blogger heaven.

It’s a day for analogies, and Abrari uses the analogy of a car for a business application: the driver representing business, and the mechanic representing IT. A driver needs to have control over where he’s going and how he gets there, but doesn’t need to understand the details of how the car works. The mechanic, on the other hand, doesn’t need to understand where the driver is going, but keeps the car and its controls in good working order. Think of the shift from procedural to declarative development concepts, where we’ve moved from stating how to do something, to what needs to be done. A simple example: the difference between writing code to sum a series of numbers, and just selecting a range of cells in Excel and selecting the SUM function.
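
In code terms (my example, not Abrari’s), the procedural versus declarative difference looks something like this:

```java
import java.util.List;

class SumExample {
    // Procedural: spell out *how* to accumulate the total.
    static double proceduralSum(List<Double> values) {
        double total = 0;
        for (double v : values) {
            total += v;
        }
        return total;
    }

    // Declarative: state *what* you want -- the sum -- and let the
    // library decide how to compute it (the code equivalent of Excel's SUM).
    static double declarativeSum(List<Double> values) {
        return values.stream().mapToDouble(Double::doubleValue).sum();
    }
}
```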

The utopia of model-driven architecture (MDA) is that  business applications are modeled, not programmed; they’re abstract yet comprehensive, directly executable (or at least deployable to an execution environment without programming), the monitoring and analytics are tied directly to the model, and optimization is done directly on the model. The lack of programming required for creating an executable model is critical for keeping the development in the model, and not having it get sucked down into the morass of coding that often happens in environments that are round-trippable in theory, but end up with too much IT tweaking in the execution environment to ever return to the modeling environment.

He then moved on to define SOA: the concept of reusable software components that can be loosely coupled, and use a standard interface to allow for platform neutrality and design by contract. Compound/complex services can be built by assembling lower-level services in an orchestration, usually with BPM.

The key message here is that MDA and SOA fit together perfectly, as most of us are aware: those services that you create as part of your SOA initiative can be assembled directly by your modeling environment, since there is a standard interface for doing so, and services provide functionality without having to know how (or even where) that function is executed. When your MDA environment is a BPMS, this is a crystal-clear connection: every BPMS provides easy ways to interrogate and integrate web services directly into a process as a process step.

From all of this, it’s a simple step to see that a BRMS can provide rules/decisioning services directly to a process; essentially the same message that I discussed yesterday in my presentation, where decision services are no different than any other type of web services that you would call from a BPMS. Abrari stated, however, that the focus should not be on the rules themselves, but on the decision service that’s provided, where a decision is made up of a complete and consistent set of rules that addresses a specific business decision, within a reasonable timeframe, and with a full audit log of the rules fired to reach a specific decision in order to show the decision justification. The underlying rule set must be declarative to make it accessible to business people.
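
To make that distinction concrete, here’s a rough sketch of a decision-service contract; the interface is invented for illustration and is not Corticon’s API. The caller asks a business question and gets back an answer plus its justification, without ever seeing the individual rules.

```java
import java.util.List;

// Invented sketch of a decision-service contract -- not Corticon's API.
// The process (or any other caller) sees a business question and an answer
// with its justification; the individual rules stay inside the service.
class CreditDecisionRequest {
    String applicantId;
    double requestedAmount;
    int creditScore;
}

class CreditDecisionResult {
    boolean approved;
    List<String> rulesFired;   // audit trail showing how the decision was reached
}

interface CreditDecisionService {
    CreditDecisionResult decide(CreditDecisionRequest request);
}
```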

He ended up with a discussion of the necessity to extract rules out of your legacy systems and put them into a central rules repository, and a summary of the model-driven service-oriented world:

  • Applications are modeled rather than coded
  • Legacy applications are also available as web services
  • Business systems are agile and transparent
  • Enterprise knowledge assets (data, decisions, processes) are stored in a central repository
  • Management has full visibility into the past, present and future of the business
  • Enterprises are no longer held hostage by the inability of their systems to keep up with the business

Although the bits on MDA and SOA might have been new to some of the attendees, some of the rules content may have been a bit too basic for this audience, and/or already covered in the general keynotes. However, Abrari is trying to make that strong connection between MDA and rules for model-driven rules development, which is the approach that Corticon takes with their product.

Business Rules Forum: Gladys Lam on Rule Harvesting

For the first breakout this morning, I attended Gladys Lam’s session on organizing a business rule harvesting project, specifically on how to split up the tasks amongst team members. Gladys does a lot of this sort of work directly with customers, so she has a wealth of practical experience to back up her presentation.

She first looked at the difference between business process rules and decisioning rules, and had an interesting diagram showing how specific business process rules are mapped into decisioning rules: in a BPMS, that’s the point where we would (should) be making a call to a BRMS rather than handling the logic directly in the process model.

The business processes typically drive the rule harvesting efforts, since rule harvesting is really about extracting and externalizing rules from the processes. That means that one or more analysts need to comb through the business processes and determine the rules inherent in those processes. As processes get large and complex, then the work needs to be divided up amongst an analyst team. Her recommendations:

  • If you have limited resources and there are less than 20 rules/decisions per task, divide it up by workflow
  • If there are more than 20 rules per task, divide by task

My problem here is that she doesn’t fully define task, workflow and process in this context; I think that “task” is really a “subprocess”, and “workflow” is a top-level process. Moving on:

  • If there are more than 50 rules per task, divide by decision point; e.g., a decision about eligibility for auto insurance could be broken down into decision points based on proof of insurance, driving history, insurance risk score and other factors

She later also discussed dividing by value chain function and level of composition, but didn’t specify when you would use those techniques.

The key is to look at the product value chain inherent in your process — from raw materials through production, tracking, sales and support — and what decisions are key to supporting that value chain. In health insurance, for example, you might see a value chain as follows:

  1. Develop insurance product components
  2. Create insurance products
  3. Sell insurance products to clients
  4. Sign-up clients (finalize plans)
  5. Enroll members and dependents
  6. Take claims and dispense benefits
  7. Retire products

Now, consider the rules related to each of those steps in the value chain (numbers correspond to above list):

  1. Product component rules, e.g., a scheduled payout method must have a frequency and a duration
  2. Product composition rules, e.g., the product “basic life” must include a maximum
  3. Product templating rules, e.g., the “basic life” minimum dollar amount must not be less than $1000
  4. Product component decision choice rules, e.g., a client may have a plan with the “optional life” product only if the client has a plan with a “basic life” product
  5. Membership rules, e.g., a spouse of a primary plan member must not select an option that a plan member has not selected for “basic life” product
  6. Pay-out rules, e.g., total amount paid for hospital stay must be calculated as sum of each hospital payment made for claimant within claimant’s entire coverage period
  7. Product discontinuation rules, e.g., a product that is over 5 years old and that is not a sold product must be discontinued

These rules should not be specific to being applied at specific points in the process — my earlier comment on the opening keynote on the independence of rules and process — but represent the policies that govern your business.
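
Taking the product discontinuation rule above as an example, here’s a rough sketch of how such an atomic rule might look once it has been harvested out of the process; this is generic Java for illustration, not any particular BRMS syntax.

```java
import java.time.LocalDate;
import java.time.Period;

// Generic illustration of rule 7 above ("a product that is over 5 years old
// and that is not a sold product must be discontinued"), expressed as a single
// harvested rule rather than logic buried at one point in a process.
class Product {
    LocalDate introducedOn;
    boolean currentlySold;
}

class ProductDiscontinuationRule {
    static boolean mustBeDiscontinued(Product product, LocalDate today) {
        boolean overFiveYearsOld =
                Period.between(product.introducedOn, today).getYears() > 5;
        return overFiveYearsOld && !product.currentlySold;
    }
}
```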

Drilling down into how to actually define the rules, she covered a number of ways to consider splitting up the rules so that they can be fully defined. Keeping with the health insurance example, you would need to define product rules, e.g., coverage, and client rules, e.g., age, geographical location, marital status, and relationship to member. Then, you need to consider how those rules interact and combine to ensure that you cover all possible scenarios, a process that is served well by tools such as decision tables to compare, for example, product by geographic region.
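
A decision table for the product-by-region example can be as simple as a lookup keyed on both dimensions; this is a generic sketch of the idea, not any specific tool’s decision table format.

```java
import java.util.HashMap;
import java.util.Map;

// Generic sketch of a two-dimensional decision table (product x region -> covered?),
// the kind of tool that helps confirm every combination has been considered.
class CoverageDecisionTable {
    private final Map<String, Boolean> table = new HashMap<>();

    void addRow(String product, String region, boolean covered) {
        table.put(product + "|" + region, covered);
    }

    boolean isCovered(String product, String region) {
        return table.getOrDefault(product + "|" + region, false);
    }
}
```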

This is going to lead to a broad set of rules covering the different business scenarios, and the constraints that those rules impose on different parts of your business processes: in the health insurance scenario, that includes rules that impact how you sell the product, sign up members, and process claims.

You have to understand the scope before getting started with rule harvesting, or you risk having a rule harvesting project that balloons out of control and defines rules that may never be used. You may trade off going wide (across the value chain) versus going deep (drill down on one component of the value chain), or some combination of both, in order to address the current pain points or support a process automation initiative in one area. There are very low-level atomic rules, such as the maximum age for a dependent child, which also need to be captured: these are the sorts of rules that are often coded into multiple systems because of the mistaken belief that they will never change, which causes a lot of headaches when they do. You also need to look for patterns in rules, to allow for faster definition of the rules that follow a common pattern.

Proving that she knows a lot more than insurance, Gladys showed us some other examples of value chains and the associated rules in retailing and human resources.

Underlying all of the rule definitions, you also need to have a common fact model that you use as the basis for all rules: this defines the atomic elements and concepts of your business, the relationships between them, and the terminology.

In addition to a sort of entity-relationship diagram, you also need a concepts catalog that defines each term and any synonyms that might be used. This fact model and the associated terms will then provide a dictionary and framework for the rule harvesting/definition efforts.

All of this sounds a bit overwhelming and complex on the surface, but her key point is around the types of organization and structure that you need to put in place in your rules harvesting projects in order to achieve success. If you want to be really successful, I’d recommend calling Gladys. 🙂

Business Rules Forum: James Taylor and Neil Raden keynote

Opening the second conference day, James Taylor and Neil Raden gave a keynote about competing on decisions. First up was James, who started with a definition of what a decision is (and isn’t), speaking particularly about operational decisions that we often see in the context of automated business processes. He made a good point that your customers react to your business decisions as if they were deliberate and personal to them, when often they’re not; James’ premise is that you should be making these deliberate and personal, providing the level of micro-targeting that’s appropriate to your business (without getting too creepy about it), but that there’s a mismatch between what customers want and what most organizations provide.

Decisions have to be built into processes and systems that manage your business, so although business may drive change, IT gets to manage it. James used the term “orthogonal” when talking about the crossover between process and rules; I used this same expression in a discussion with him yesterday about how processes and decisions should not be dependent upon each other: if a decision and a process are interdependent, then you’re likely dealing with a process decision that should be embedded within the process, rather than a business decision.

A decision-centric organization is focused on the effectiveness of its decisions rather than aggregated, after-the-fact metrics; decision-making is seen as a specific competency, and resources are dedicated to making those decisions better.

Enterprise decision management, as James and Neil now define it, is an approach for managing and improving the decisions that drive your business:

  • Making the decisions explicit
  • Tracking the effectiveness of the decisions in order to improve them
  • Learning from the past to increase the precision of the decisions
  • Defining and managing these decisions for consistency
  • Ensuring that they can be changed as needed for maximum agility
  • Knowing how fast the decisions must be made in order to match the speed of the business context
  • Minimizing the cost of decisions

Using an airline pilot analogy, he discussed how business executives need a number of decision-related tools to do their job effectively:

  • Simulators (what-if analysis), to learn what impact an action might have
  • Auto-pilot, so that their business can (sometimes) work effectively without them
  • Heads-up display, so they can see what’s happening now, what’s coming up, and the available options
  • Controls, simple to use but able to control complex outcomes
  • Time, to be able to take a more strategic look at their business

Continuing on the pilot analogy, he pointed out that the term dashboard is used in business to really mean an instrument cluster: display, but no control. A true dashboard must include not just a display of what’s happening, but controls that can impact what’s happening in the business. I saw a great example of that last week at the Ultimus conference: their dashboard includes a type of interactive dial that can be used to temporarily change thresholds that control the process.

James turned the floor over to Neil, who dug further into the agility imperative: rethinking BI for processes. He sees that today’s BI tools are insufficient for monitoring and analyzing business processes, because of the agile and interconnected nature of these processes. This comes through in the results of a survey that they did about how often people are using related tools: the average hours per week that a marketing analyst spends using their BI tool was 1.2, versus 17.4 for Excel, 4.2 for Access and 6.2 for other data administration tools. I see Excel everywhere in most businesses, whereas BI tools are typically only used by specialists, so this result does not come as a big surprise.

The analytical needs of processes are inherently complex, requiring an understanding of the resources involved and process instance data, as well as the actual process flow. Processes are complex causal systems: much more than just that simple BPMN diagram that you see. A business process may span multiple automated (monitored) processes, and may be created or modified frequently. Stakeholders require different views of those processes; simple tactical needs can be served by BAM-type dashboards, but strategic needs — particularly predictive analysis — are not well-served by this technology. This is beyond BI: it’s process intelligence, where there must be understanding of other factors affecting a process, not just measuring the aggregated outcomes. He sees process intelligence as a distinct product type, not the same as BI; unfortunately, the market is being served (or not really served) by traditional query-based approaches against a relatively static data model, or what Neil refers to as a “tortured OLAP cube-based approach”.

What process intelligence really needs is the ability to analyze the timing of the traffic flow within a process model in order to provide more accurate flow predictions, while allowing for more agile process views that are generated automatically from the BPMN process models. The analytics of process intelligence are based on the process logs, not pre-determined KPIs.
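
As a rough sketch of what analytics based on the process logs (rather than pre-determined KPIs) might look like, here’s a generic illustration of deriving step timings directly from logged events; this is my own example, not a description of any product.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Rough illustration (not any vendor's product): derive step timings directly
// from the process event log rather than from pre-defined, aggregated KPIs.
class ProcessEvent {
    String instanceId;
    String stepName;
    Instant started;
    Instant completed;
}

class StepTimingAnalysis {
    // Average duration per step across all logged instances.
    static Map<String, Duration> averageDurations(List<ProcessEvent> log) {
        Map<String, Duration> totals = new HashMap<>();
        Map<String, Integer> counts = new HashMap<>();
        for (ProcessEvent e : log) {
            Duration d = Duration.between(e.started, e.completed);
            totals.merge(e.stepName, d, Duration::plus);
            counts.merge(e.stepName, 1, Integer::sum);
        }
        Map<String, Duration> averages = new HashMap<>();
        totals.forEach((step, total) ->
                averages.put(step, total.dividedBy(counts.get(step))));
        return averages;
    }
}
```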

Neil ended up by tying this back to decisions: basically, you can’t make good decisions if you don’t understand how your processes work in the first place.

Interesting that James and Neil deal with two very important aspects of business processes: James covers decisions, and Neil covers analytics. I’ve done presentations in the past on the crossover between BPM, BRM and BI; but they’ve dug into these concepts in much more detail. If you haven’t read their book, Smart Enough Systems, there’s a lot of great material in there on this same theme; if you’re here at the forum, you can pick up a copy at their table at the expo this afternoon.

Business Rules Forum: Vendor Panel

All the usual suspects joined on a panel at the end of the day to discuss the vendor view of business rules: Pegasystems, InRule, Corticon, Fair Isaac, ILOG (soon to be IBM) and Delta-R, moderated by John Rymer of Forrester.

The focus was on what’s happening to the rules market, especially in light of the big guys like SAP and IBM joining the rules fray. Most of them think that it’s a good thing to have the large vendors in there because it raises the profile of and validates rules as a technology; likely the smaller players can innovate faster, so they can still carve out a reasonable piece of the market. Having seen exactly this same scenario play out in the BPM space, I think that they’re right about this.

The ILOG/IBM speaker talked about the integration of business rules and BPM as a primary driver — which of course Pega agreed with — but also the integration of rules, ETL and other technologies. Other speakers discussed the importance of decision management as opposed to just rules management, especially with regards to detecting and ameliorating (if not actually avoiding) situations like the current financial crisis; the use of predictive analytics in the context of being able to change decisions in response to changing conditions; and the current state of standards in rules management. There was a discussion about the difference between rules management and decision management, which I don’t believe answered the question with any certainty for most of the audience: when a speaker says “there’s a subtle but important difference” while making hand motions but doesn’t really elaborate, you know that you’re deep in the weeds. The Delta-R speaker characterizes decision management as rules management plus predictive modeling; I think that all of the vendors agree that decision management is a superset of rules management, but there are at least three different views on what forms that superset.

As a BPM bigot, I see rules as just another part of the services layer; I think that there’s opportunity for BRM in the cloud to be deployed and used much more easily than BPM in the cloud (making a web services call from a process or app to an external rules system isn’t very different than making a web services call to an internal rules system), but I didn’t hear that from any of the vendors.

That’s it for the day; I know that the blogging was light today, but it should be back to normal tomorrow. I’m off to the vendor expo to check out some of the products.

Business Rules Forum: Mixing Rules and Process

I had fun with my presentation on mixing rules and process, and it was a good tweetup (meeting arranged via Twitter) opportunity: Mike Kavis sat in on the session, Miko Matsumura of Software AG caught up with me afterwards, and James Taylor even admitted to stepping in for the last few minutes.
 

I’ve removed most of the screen snapshots from the presentation since they don’t make any sense without the discussion; the text itself is pretty straightforward and, in the end, not all that representative of what I talked about. I guess you just had to be there.