Innovation World: Analyst/Blogger Technology Roundtable

We left the analysts and press who focus on financial stuff upstairs in the main lecture hall, and a smaller group of us congregated for a roundtable session with Peter Kurpick and a few other high-level technical people from Software AG. These notes are really disjointed since I was just trying to capture ideas as we went along; I have to say that some of these people really like to hear themselves talk, and most of them like to wag their finger admonishingly while they do so.

Kurpick started out the session by talking about how Software AG has a product set with SOA built in from the philosophy up, whereas many other vendors (like SAP) add it on for marketing purposes; I would say that the other vendors are doing it as much for connectivity as for marketing, but it’s true that it’s not baked into the product in most cases. He didn’t use the phrase “lipstick on a pig”, but he sees the challenge for the application vendors this way: tacking on a services layer just serves to expose the weaknesses of their internal architecture when the customer wants to change a fundamental business process.

Frank Kenney of Gartner disagreed, saying that adding services to the applications gives those vendors a good way to sell more into their existing customers, and that most organizations don’t change the basic business processes provided by the application vendors anyway. (Of course, maybe they don’t change them because it’s too difficult to do so.) Most customers just want to buy something that’s pretty close to what they want already, not orchestrate their own ERP and CRM processes from scratch, especially for processes that don’t provide competitive differentiation.

Kurpick responded that many organizations no longer own their entire supply chain, and that having an entire canned process isn’t even an option: you have to be able to orchestrate your business process by picking and choosing functional components.

Ian Finley of AMR Research discussed the impact of SaaS applications and services on this equation: if you can get something as a service at a much lower capital cost, even if it’s only a portion of your entire business process, that has value. Other opinions around the table (from people who didn’t bother to set up their name tent and made some rash assumptions about their own fame) agreed that SaaS was an important part as customers modularize their business processes and look for ways to reuse and outsource components. SaaS and open source both provide a way to have lower up-front costs — important to many organizations that are cutting their budgets — while offering a growth path. Someone from Forrester said that they are seeing four disruptive trends: SaaS, offshore, open source, and SOA. All of these mean taking focus away from the monolithic packaged applications and providing alternative ways to compose your business processes from a number of disparate sources.

What does it mean, however, for customers to move to a distributed architecture that is served by a variety of services, both inside and outside the firewall? Some organizations believe that this means giving up some control of destiny, but Kenney feels that control is an illusion anyway. I tend to agree: how much control do you have over a large ERP vendor’s product direction, and what choices do you have (except stopping upgrades or switching platforms, both painful alternatives) if they go somewhere that you don’t want to go? I think that the illusion of control comes from operational control — you own the iron that it runs on — rather than strategic control, but ultimately that’s not really worth much.

Ron from ZapThink and Beth Gold-Bernstein of ebizQ started a discussion on federation of ESBs, repositories and registries, and how that’s critical for bringing all of this together in a governable environment. Susan Ganeshan (Software AG product management) discussed how their new version of CentraSite addresses some of those issues by providing a common governance platform. Distributed systems are inherently complex because they work on different platforms that may not have been intended to be integrated, and may not have any sort of federated policies around SLAs, for example. Large organizations are drowning in complexity, and the vendors need to help them to reduce complexity or at least shield them from the complexity so that they can get on with their business. SOA can serve that need by helping organizations to change the paradigm by thinking about applications in a completely different way, as a composition of services.

We shifted topics slightly to talk about how companies compete on processes: identifying the processes (or parts thereof) that provide a competitive differentiation. Neil Macehiter of MWD pointed out how critical it is for companies to use common commercial alternatives (whether packaged, SaaS or outsourced) for processes that are the same for everyone in order to drive down costs, and spend the money on the processes that make them unique. However, in this era of mergers and acquisitions, none of this is forever: business processes will change radically when two organizations are brought together, and systems will either be made to work together, or abandoned. Kenney told a story that would have you believe that no one gets fired for waiting to buy IBM or SAP or any of the big players; I think that you might not get fired, but your company might get eaten up by the competition while you’re waiting.

There’s a general impression that SOA is failing; not the general architectural principles, but the way that the market is being sold and serviced now. Large, high-profile projects will not attain their promised ROI; the customers will blame the vendors for selling them products that don’t work; the vendors will blame the customers for being dysfunctional and doing the implementation wrong; the SOA market will crash and burn; some new integration/service-oriented phoenix will rise from the ashes.

Innovation World: Peter Kürpick

Dr. Peter Kürpick, Chief Product Officer, was up next to give us more of the technology strategy and vision. He covered some generic views of ESB, BPM and BAM and how they fit together, then showed the roadmap for the webMethods suite with respect to their releases throughout 2008: building the functionality, then adding performance, stability and other non-functional considerations.

They really like to show the recent Forrester wave report that puts them in the lead, but he neglects to point out that this is for integration-centric BPM; they don’t fare nearly as well in the other BPM waves that cover human-centric (where they’re just barely in the leader category) and document-centric (where they’re not even in the running). That comes through in the Gartner magic quadrant, which combines all BPM vendors and puts them fairly low in the leader quadrant. That being said, they are still in the leader category with both major analysts, which is definitely a good place to be.

He also talked briefly about CentraSite Active SOA, their SOA governance platform that builds on the multi-vendor CentraSite platform.

Kürpick fielded questions about open source and cloud computing, and brushed them both off as not relevant to what Software AG is doing: they’re not seeing open source ESB vendors as competition in their large strategic accounts, and they see cloud computing as something for applications, not middleware. Miko Matsumura stepped in to give a bit of support for the vision of providing services to the cloud from a Software AG platform, but it’s not much of a story yet. They will be releasing an AppExchange connector for integrating Salesforce.com with their ESB, and therefore with other internal applications; they are themselves a Salesforce customer so that’s coming first, but it does sound like they want to do more connectors to common SaaS applications. Streibich was back up to reinforce that they are strictly a middleware vendor (a description that fits as long as you can accept BAM and BPM as middleware), targeting large customer organizations. Interestingly, Kürpick described the use of BPM in some of their customer organizations as effectively a graphical software development environment, as the developers move off the ADABAS/Natural platforms; I’m seeing this use of a BPMS as one of the big philosophical dividing lines in BPM today as we struggle with whether it’s a development tool or a business modeling tool.

This was pretty light on content about the products themselves for a technology strategy session, and I’m hoping to hear a lot more detail over the next day. The Q&A was definitely the best part of the afternoon, as you would expect with a room full of very keen analysts and journalists.

I have to admit that something nice about conferences run by German companies (I’ve noticed this at SAP and IDS Scheer conferences, too) is that they start and finish each session on time. The only exception was Bruce Sterling, who caricatured American business style by actually showing up late for his own session this morning.

Innovation World: Karl-Heinz Streibich

First up after lunch at the Media and Analyst Forum was Karl-Heinz Streibich, CEO of Software AG. This is better attended than this morning’s session (which was marked “optional” on my schedule, and likely on that of many others), and we’re hearing from the more senior people in this session.

It’s impossible for any business leader not to talk about the financial crisis in a presentation these days, and Streibich’s focus is on Software AG’s path to SOA leadership beyond the current crisis. 20% of Software AG employees are in the US, which means a potential big hit in licensing and services revenue for them, but he’s bullish on their strategy. He said “the United States is the most important market in the world to Software AG” (I find it amazing that he would single it out like this, since there are critical emerging markets in China and India, as well as the collective EU market), and pointed out that Software AG is now the 3rd largest SOA/BPM vendor after IBM and Oracle. He quoted the projected growth numbers — I think that they’re Forrester’s — that show the current market of $1.4B growing at around 14% per year to $2.3B by 2012: optimistic figures from last year that I think no longer hold true in the current market as the purse strings tighten around the globe, especially in that “most important market”.

However, Streibich believes that Software AG is relatively immune to the turmoil because of their ADABAS/Natural product line and the strength of the webMethods product suite, and that they will improve their market share. Interestingly, although the ADABAS/Natural products aren’t BPM/SOA, they do provide a steady flow of revenue, since mainframe modernization projects do well during an economic downturn (when it’s too expensive to rip and replace). If anything, he thinks that the financial crisis will add urgency to BPM/SOA adoption; I agree with this, but don’t think that it will necessarily lead to a lot of new sales for the vendors. Software AG has a range of tools, however, to help organizations move from siloed IT to SOA.

Business Rules Forum: Kevin Chase of ING

I’m squeezing in one last session before flying out: Kevin Chase, SVP of Implementation Services at ING, discussing how to use rules in a multi-client environment, specifically the issues of reuse and reliability. I’ve done quite a bit of work implementing processes in multi-client environments — such as at a mutual funds back-office outsourcing firm — and the different rules for each client can make for some challenges. In most cases, these companies are all governed by the same regulations, but have their own way that they want things done, even if they’re not the ones doing it.

In ING’s case, they’re doing benefits plan administration, such as retirement (401k) plans, for large clients, and have been using rules for about six years. They originally did a pilot project with one client, then rolled it out to all their clients, but didn’t see the benefits that they expected; that caused them to create a center of excellence, and now they’re refining their processes and expanding the use of rules to other areas.

They’re using rules for some complex pension calculations, replacing a previous proprietary system that offered no reuse for adding new clients, and didn’t have the scalability, flexibility and performance that they required to stay competitive. The pension calculator is a key component of pension administration, and calculating pensions (not processing transactions) represented a big part of their costs, which makes it a competitive differentiator. With limited budget and resources, they selected ILOG rules technology to replace their pension calculator, creating a fairly standalone calculator with specific interfaces to other systems. This lightly-integrated approach worked well for them, and he recommended that if you have a complex calculator as part of your main business (think underwriting as another example), consider implementing rules to create a standalone or lightly-integrated calculator.

In their first implementation phase, they rewrote 50+ functions from their old calculator in Java, then used the rules engine to call the right function at the right time to create the first version of the new calculator. The calculations matched their old system (phew!) and they improved their performance and maintainability. They also improved the transparency of the calculations: it was now possible to see how a particular result was reached. The rules were written directly by their business users, although those users are actuaries with heavy math backgrounds, so likely don’t represent the skill level of a typical business user in other industries. They focused on keeping it simple and not overbuilding, and used the IT staff to build tools, not create custom applications. This is a nice echo of Kathy Long’s presentation earlier today, which said to create the rules and let the business users create their own business processes. In fact, ING uses their own people for writing rules, and uses ILOG’s professional services only for strategic advice, but never for writing code.
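The pattern he described — discrete calculation functions selected by a rule layer — can be sketched roughly like this. This is a hypothetical illustration in Python rather than ING’s actual Java/ILOG implementation, and all of the plan types, field names and formulas are invented:

```python
# Sketch of the pattern: calculation steps rewritten as discrete,
# testable functions, with a rule layer deciding which function to
# invoke for a given participant. Everything here is illustrative.

def final_average_salary(p):
    # e.g., benefit based on the average of the last three years of salary
    return sum(p["salary_history"][-3:]) / 3

def flat_benefit(p):
    # e.g., a simple fixed-amount benefit
    return p["flat_amount"]

# Each "rule" pairs a condition with the calculation function to call.
RULES = [
    (lambda p: p["plan_type"] == "final_average", final_average_salary),
    (lambda p: p["plan_type"] == "flat", flat_benefit),
]

def calculate_benefit(participant):
    """Fire the first matching rule and run its calculation function."""
    for condition, func in RULES:
        if condition(participant):
            return func(participant)
    raise ValueError("no rule matched participant")
```

The appeal of the approach is that the selection logic lives in the rule layer, where it can be inspected and changed, while the arithmetic stays in well-tested functions.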

After the initial implementation, they rolled it out to the remainder of their client base (six more organizations), representing more than 200,000 plan participants. Since they weren’t achieving the benefits that they expected, they went back to analyze where they could improve it:

  • Each new client was still being implemented by separate teams, so there was little standardization and reuse, and some significant maintenance and quality problems. It took them a while to convince management that the problem was the process of creating and maintaining rules, not the rules technology itself; eventually they created a center of excellence that isn’t just a mentoring/training group, but a group of rules experts who actually write and maintain all rules. This allows them to enforce standards, and the use of peer reviews within the CoE improves quality. They grow and shrink this team (around 12-15 people) as the workload requires, and this centralized team handles all clients to provide greater reuse and knowledge transfer.
  • They weren’t keeping up with ILOG product upgrades, mostly because it just wasn’t a priority to them, and were missing out on several major improvements as well as owning a product that was about to go out of maintenance. Since then, they’ve done some upgrades and although they’re not at the current release, they’re getting closer and have eliminated a lot of their custom code since those features are now included in the base product. The newer version also gives them better performance. I see this problem a lot with BPMS implementations as well, especially if a lot of custom code has been written that is specific to a current product version.
  • They had high infrastructure costs since each new client resulted in additional hardware and the associated CPU licensing. They’ve moved to a Linux platform (from SUN Solaris), moved from WebLogic to JBOSS, and created a farm of shared rules servers.
  • Since they reduced the time and expense of building the calculator, they’ve now exposed other areas of pension administration (such as correspondence) that take much longer to implement: the pension calculator used to be the bottleneck in rolling out new products, but now other areas are on the critical path. That’s a nice thing for the calculator group, but it also made them start to recognize the problems in other areas and systems, pushing them to expand their rules capability into areas such as regulatory updates that span clients.

This last point has led to their current state, which is one of expansion and maturity. One major challenge is the cleanliness and integrity of data: data errors can lead to the inability to make calculations (e.g., missing birthdate) or incorrect calculation of benefits. They’re now using rules to check data and identify issues prior to executing the calculation rules, checking the input data for 30+ inconsistencies that could cause a failure in the calculator, and alerting operations staff if there needs to be some sort of manual correction or followup with the client. After the calculations are done, more data cleansing rules check for another 20+ inconsistencies, and might result in holding up final outbound correspondence to the participant until the problem is resolved.
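A minimal sketch of that pre-calculation data-checking idea, with invented checks (ING’s actual 30+ rules are obviously more involved, and implemented in their rules engine rather than hand-rolled like this):

```python
# Illustrative pre-calculation validation: each rule pairs an issue
# description with a predicate that flags input data that would break
# or corrupt the downstream benefit calculation.

from datetime import date

VALIDATION_RULES = [
    ("missing birthdate",
     lambda p: p.get("birthdate") is None),
    ("hire date before birthdate",
     lambda p: p.get("birthdate") and p.get("hire_date")
               and p["hire_date"] < p["birthdate"]),
    ("negative salary",
     lambda p: any(s < 0 for s in p.get("salary_history", []))),
]

def check_participant(p):
    """Return the list of issues found; an empty list means it's safe
    to run the calculation rules. A non-empty list would be routed to
    operations staff for manual correction or client follow-up."""
    return [name for name, broken in VALIDATION_RULES if broken(p)]
```

Expressing the checks as rules, rather than burying them in the calculator, is what lets the same approach be reused for the post-calculation cleansing pass he mentioned.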

He wrapped up with their key lessons learned:

  • A strong champion at the senior executive level is required, since this is a departure from the usual way of doing things.
  • A center of excellence yields great benefits in terms of quality and reuse.
  • Leverage the vendors’ expertise strategically, not to do the bulk of your implementation; use your own staff or consultants who understand your business to do the tactical work.
  • Use an iterative and phased approach for implementation.
  • Do regular assessments of where you are, and don’t be afraid to admit that mistakes were made and improvements can be made.
  • Keep up with the technology, especially in fast-moving technologies like rules, although it’s not necessary to be right on the leading edge.

Great presentation with lots of practical tips, even if you’re not in the pension administration business.

Business Rules Forum: Pedram Abrari on MDA, SOA and rules

Pedram Abrari, founder and CTO of Corticon, did a breakout session on model-driven architecture, SOA, and the role that rules play in all of this. I’m also in the only room in the conference center that’s close enough to the lobby to pick up the hotel wifi, and I found an electrical outlet, so I’m in blogger heaven.

It’s a day for analogies, and Abrari uses the analogy of a car for a business application: the driver represents the business, and the mechanic represents IT. A driver needs to have control over where he’s going and how he gets there, but doesn’t need to understand the details of how the car works. The mechanic, on the other hand, doesn’t need to understand where the driver is going, but keeps the car and its controls in good working order. Think of the shift from procedural to declarative development concepts, where we’ve moved from stating how to do something to stating what needs to be done. A simple example: the difference between writing code to sum a series of numbers, and just selecting a range of cells in Excel and applying the SUM function.
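That procedural/declarative shift is easy to see in a few lines of Python (my example, not Abrari’s):

```python
# Procedural: spell out *how* to accumulate the total, step by step.
def total_procedural(numbers):
    total = 0
    for n in numbers:
        total += n
    return total

# Declarative: state *what* is wanted, like Excel's SUM over a range.
def total_declarative(numbers):
    return sum(numbers)
```

Same answer either way; the difference is who has to understand the mechanics — the author, or the underlying engine.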

The utopia of model-driven architecture (MDA) is that business applications are modeled, not programmed; they’re abstract yet comprehensive, directly executable (or at least deployable to an execution environment without programming), the monitoring and analytics are tied directly to the model, and optimization is done directly on the model. The lack of programming required for creating an executable model is critical for keeping the development in the model, and not having it get sucked down into the morass of coding that often happens in environments that are round-trippable in theory, but end up with too much IT tweaking in the execution environment to ever return to the modeling environment.

He then moved on to define SOA: the concept of reusable software components that can be loosely coupled, and use a standard interface to allow for platform neutrality and design by contract. Compound/complex services can be built by assembling lower-level services in an orchestration, usually with BPM.

The key message here is that MDA and SOA fit together perfectly, as most of us are aware: those services that you create as part of your SOA initiative can be assembled directly by your modeling environment, since there is a standard interface for doing so, and services provide functionality without having to know how (or even where) that function is executed. When your MDA environment is a BPMS, this is a crystal-clear connection: every BPMS provides easy ways to interrogate and integrate web services directly into a process as a process step.

From all of this, it’s a simple step to see that a BRMS can provide rules/decisioning services directly to a process; essentially the same message that I discussed yesterday in my presentation, where decision services are no different than any other type of web services that you would call from a BPMS. Abrari stated, however, that the focus should not be on the rules themselves, but on the decision service that’s provided, where a decision is made up of a complete and consistent set of rules that addresses a specific business decision, within a reasonable timeframe, and with a full audit log of the rules fired to reach a specific decision in order to show the decision justification. The underlying rule set must be declarative to make it accessible to business people.
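A toy decision service along those lines might look like this — hypothetical rules and thresholds, just to show the decision-plus-justification shape Abrari described, where the service returns both the outcome and the rules that fired:

```python
# Illustrative decision service: a complete set of rules addressing one
# business decision, returning the outcome plus an audit trail of fired
# rules so the decision can be justified later. Rules are invented.

def credit_decision(applicant):
    fired = []          # audit log: which rules contributed to the result
    approved = True
    if applicant["age"] < 18:
        fired.append("reject: under minimum age")
        approved = False
    if applicant["score"] < 600:
        fired.append("reject: score below threshold")
        approved = False
    if approved:
        fired.append("approve: all checks passed")
    return {"approved": approved, "fired_rules": fired}
```

Wrapped behind a web service interface, a function like this is indistinguishable from any other service a BPMS might call — which is exactly the point.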

He ended up with a discussion of the necessity to extract rules out of your legacy systems and put them into a central rules repository, and a summary of the model-driven service-oriented world:

  • Applications are modeled rather than coded
  • Legacy applications are also available as web services
  • Business systems are agile and transparent
  • Enterprise knowledge assets (data, decisions, processes) are stored in a central repository
  • Management has full visibility into the past, present and future of the business
  • Enterprises are no longer held hostage by the inability of their systems to keep up with the business

Although the bits on MDA and SOA might have been new to some of the attendees, some of the rules content may have been a bit too basic for this audience, and/or already covered in the general keynotes. However, Abrari is trying to make that strong connection between MDA and rules for model-driven rules development, which is the approach that Corticon takes with their product.

Business Rules Forum: Gladys Lam on Rule Harvesting

For the first breakout this morning, I attended Gladys Lam’s session on organizing a business rule harvesting project, specifically on how to split up the tasks amongst team members. Gladys does a lot of this sort of work directly with customers, so she has a wealth of practical experience to back up her presentation.

Process rules and decisioning rules

She first looked at the difference between business process rules and decisioning rules, and had an interesting diagram showing how specific business process rules are mapped into decisioning rules: in a BPMS, that’s the point where we would (should) be making a call to a BRMS rather than handling the logic directly in the process model.

The business processes typically drive the rule harvesting efforts, since rule harvesting is really about extracting and externalizing rules from the processes. That means that one or more analysts need to comb through the business processes and determine the rules inherent in those processes. As processes get large and complex, the work needs to be divided up amongst an analyst team. Her recommendations:

  • If you have limited resources and there are fewer than 20 rules/decisions per task, divide it up by workflow
  • If there are more than 20 rules per task, divide by task

My problem here is that she doesn’t fully define task, workflow and process in this context; I think that “task” is really a “subprocess”, and “workflow” is a top-level process. Moving on:

  • If there are more than 50 rules per task, divide by decision point; e.g., a decision about eligibility for auto insurance could be broken down into decision points based on proof of insurance, driving history, insurance risk score and other factors

She later also discussed dividing by value chain function and level of composition, but didn’t specify when you would use those techniques.

The key is to look at the product value chain inherent in your process — from raw materials through production, tracking, sales and support — and what decisions are key to supporting that value chain. In health insurance, for example, you might see a value chain as follows:

  1. Develop insurance product components
  2. Create insurance products
  3. Sell insurance products to clients
  4. Sign-up clients (finalize plans)
  5. Enroll members and dependents
  6. Take claims and dispense benefits
  7. Retire products

Now, consider the rules related to each of those steps in the value chain (numbers correspond to above list):

  1. Product component rules, e.g., a scheduled payout method must have a frequency and a duration
  2. Product composition rules, e.g., the product “basic life” must include a maximum
  3. Product templating rules, e.g., the “basic life” minimum dollar amount must not be less than $1000
  4. Product component decision choice rules, e.g., a client may have a plan with the “optional life” product only if the client has a plan with a “basic life” product
  5. Membership rules, e.g., a spouse of a primary plan member must not select an option that a plan member has not selected for “basic life” product
  6. Pay-out rules, e.g., total amount paid for hospital stay must be calculated as sum of each hospital payment made for claimant within claimant’s entire coverage period
  7. Product discontinuation rules, e.g., a product that is over 5 years old and that is not a sold product must be discontinued

These rules should not be tied to specific points in the process — my earlier comment on the opening keynote about the independence of rules and process — but should represent the policies that govern your business.

Drilling down into how to actually define the rules, she offered a number of ways to consider splitting up the rules so that they can be fully defined. Keeping with the health insurance example, you would need to define product rules, e.g., coverage, and client rules, e.g., age, geographical location, marital status, and relationship to member. Then, you need to consider how those rules interact and combine to ensure that you cover all possible scenarios, a process that is served well by tools such as decision tables to compare, for example, product by geographic region.
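A decision table like the one she described is easy to sketch: a lookup keyed by the interacting factors, so gaps in scenario coverage become visible at a glance. This is my own illustrative example, not from her presentation:

```python
# Illustrative decision table: a coverage decision keyed by
# (product, region). Products and regions are invented; the point is
# that every combination is an explicit cell that can be reviewed.

DECISION_TABLE = {
    ("basic life", "east"): "covered",
    ("basic life", "west"): "covered",
    ("optional life", "east"): "covered",
    ("optional life", "west"): "not offered",
}

def coverage(product, region):
    # a missing cell surfaces as an explicit gap rather than a silent default
    return DECISION_TABLE.get((product, region), "undefined: review rules")
```

Laid out as a grid (products as rows, regions as columns), the same structure is what makes it easy to confirm that all scenarios are handled.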

This is going to lead to a broad set of rules covering the different business scenarios, and the constraints that those rules impose on different parts of your business processes: in the health insurance scenario, that includes rules that impact how you sell the product, sign up members, and process claims.

You have to understand the scope before getting started with rule harvesting, or you risk having a rule harvesting project that balloons out of control and defines rules that may never be used. You may trade off going wide (across the value chain) versus going deep (drill down on one component of the value chain), or some combination of both, in order to address the current pain points or support a process automation initiative in one area. There are very low-level atomic rules, such as the maximum age for a dependent child, which also need to be captured: these are the sorts of rules that are often coded into multiple systems because of the mistaken belief that they will never change, which causes a lot of headaches when they do. You also need to look for patterns in rules, to allow for faster definition of the rules that follow a common pattern.

Proving that she knows a lot more than insurance, Gladys showed us some other examples of value chains and the associated rules in retailing and human resources.

Fact model example

Underlying all of the rule definitions, you also need to have a common fact model that you use as the basis for all rules: this defines the atomic elements and concepts of your business, the relationships between them, and the terminology.

In addition to a sort of entity-relationship diagram, you also need a concepts catalog that defines each term and any synonyms that might be used. This fact model and the associated terms will then provide a dictionary and framework for the rule harvesting/definition efforts.
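A concepts catalog can be as simple as a structure like this — an invented fragment, not from her presentation — which gives rule authors a way to resolve synonyms to canonical terms:

```python
# Illustrative fact model fragment: each business concept with its
# definition, synonyms, and relationships to other concepts, forming
# the shared vocabulary for rule authoring. All entries are invented.

FACT_MODEL = {
    "plan member": {
        "definition": "person enrolled in an insurance plan",
        "synonyms": ["member", "participant"],
        "related": [("has", "dependent"), ("enrolled in", "plan")],
    },
    "dependent": {
        "definition": "spouse or child covered under a member's plan",
        "synonyms": ["covered dependent"],
        "related": [("covered by", "plan member")],
    },
}

def canonical_term(word):
    """Resolve a term or synonym to its canonical concept name."""
    for term, entry in FACT_MODEL.items():
        if word == term or word in entry["synonyms"]:
            return term
    return None   # not in the catalog: a candidate for harvesting
```

Forcing every rule to use the canonical terms is what keeps a large harvesting effort from producing three subtly different rules about "members", "participants" and "enrollees".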

All of this sounds a bit overwhelming and complex on the surface, but her key point is around the types of organization and structure that you need to put in place in your rules harvesting projects in order to achieve success. If you want to be really successful, I’d recommend calling Gladys. 🙂

Business Rules Forum: Vendor Panel

All the usual suspects joined on a panel at the end of the day to discuss the vendor view of business rules: Pegasystems, InRule, Corticon, Fair Isaac, ILOG (soon to be IBM) and Delta-R, moderated by John Rymer of Forrester.

The focus was on what’s happening in the rules market, especially in light of the big guys like SAP and IBM joining the rules fray. Most of them think that it’s a good thing to have the large vendors in there because it raises the profile of rules as a technology and validates it; the smaller players can likely innovate faster, so they can still carve out a reasonable piece of the market. Having seen exactly this same scenario play out in the BPM space, I think that they’re right about this.

The ILOG/IBM speaker talked about the integration of business rules and BPM as a primary driver — which of course Pega agreed with — but also the integration of rules, ETL and other technologies. Other speakers discussed the importance of decision management as opposed to just rules management, especially with regards to detecting and ameliorating (if not actually avoiding) situations like the current financial crisis; the use of predictive analytics in the context of being able to change decisions in response to changing conditions; and the current state of standards in rules management. There was a discussion about the difference between rules management and decision management, which I don’t believe answered the question with any certainty for most of the audience: when a speaker says “there’s a subtle but important difference” while making hand motions but doesn’t really elaborate, you know that you’re deep in the weeds. The Delta-R speaker characterizes decision management as rules management plus predictive modeling; I think that all of the vendors agree that decision management is a superset of rules management, but there are at least three different views on what forms that superset.

As a BPM bigot, I see rules as just another part of the services layer; I think that there’s opportunity for BRM in the cloud to be deployed and used much more easily than BPM in the cloud (making a web services call from a process or app to an external rules system isn’t very different than making a web services call to an internal rules system), but I didn’t hear that from any of the vendors.

That’s it for the day; I know that the blogging was light today, but it should be back to normal tomorrow. I’m off to the vendor expo to check out some of the products.

Business Rules Forum: Ron Ross keynote

The good news is that it’s a lovely sunny, breezy and cool day: perfect fall weather for Toronto. The bad news is that I’m in Orlando, and was hoping to wear shorts more than sweaters this week. However, I’m here to attend — and speak at — the Business Rules Forum, not sit by the pool.

Ron Ross started the conference with a keynote called From Here to Agility; agility, of course, is one of the key reasons that you consider implementing business rules, whether in the context of BPM or other applications. It’s pretty well attended — probably 200 people here at the opening keynote, and likely a lot of vendors off setting up their booths for later today.

He started with a couple of case studies: one of a company that could really use rules due to the lack of agility in its legacy systems, and one of a company that successfully implemented rules and achieved its ROI on the first project. He then looked at what might be motivating people to attend this conference and what they can expect; a bit of an unnecessary sales pitch, considering that these people are already here.

He talked about the importance of decisioning, and how it’s a much better opportunity for business improvement than process; I’d have to agree that it’s a much greater contributor to agility, but not necessarily a better opportunity for improvement overall. I’ll have to think that through before my presentation this afternoon on mixing rules and process. He did have some convincing quotes from Tom Davenport’s “Competing on Analytics”, such as Davenport’s conclusion that automated decisioning will be the next competitive battleground for organizations.

The goals for creating business agility:

  • No artificial constraints in the representation of business products and your capacity to deliver them to customers — this is primarily a cultural issue, including a vocabulary to define your business practices, not a technical issue.
  • All operational business practices represented as rules.
  • All rules in a form such that they can be readily found, analyzed, modified and redeployed by qualified business people and product specialists.

Examples of operational business decisions:

  • How do we price our product for this transaction?
  • What credit do we give to this customer at this point in time?
  • What resource do we assign to this task right now?
  • Do we suspect fraud on this particular transaction?
  • What product configuration do we recommend for this request?
  • Can we confirm this reservation?

Note that these really are low-level, moderate-complexity operational decisions, not strategic decisions: thousands or even millions of these decisions may be made every day in your business processes, and having agility in this type of decision can provide significant competitive differentiation.
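The first decision in that list — pricing a transaction — can be sketched as externalized rules. Everything here (the rule conditions, the tiers, the discount numbers) is invented for the example; the point is only that the decision logic lives in one declarative structure rather than being scattered through application code.

```python
# A toy operational pricing decision expressed as rules rather than as
# code buried in an application. First matching rule wins; all numbers
# and field names are invented for illustration.
PRICING_RULES = [
    (lambda o: o["quantity"] >= 1000, 0.15),          # bulk order
    (lambda o: o["customer_tier"] == "gold", 0.10),   # preferred customer
    (lambda o: o["quantity"] >= 100, 0.05),           # volume discount
]

def price_for(order: dict, list_price: float) -> float:
    """Decide the transaction price by applying the first matching rule."""
    for condition, discount in PRICING_RULES:
        if condition(order):
            return round(list_price * (1 - discount), 2)
    return list_price  # no rule fired: charge list price
```

Because the rules are one externalized structure, a qualified business person could (in principle, with proper tooling and governance) change a discount without touching the calling process — which is exactly the agility Ross is describing.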

James Taylor and Neil Raden will be here later to talk about enterprise decision management (EDM), but Ron gave us some of the basics: closed-loop decisioning that captures data about decisions, analyzes that data, then uses those results to make changes in a timely manner to the operational decisions. The “in a timely manner” part of that is where business rules come in, of course. That round-trip from analysis to deployment to execution to capture is key: we talk about it in BPM, but the analysis and deployment parts often require a great deal of an analyst’s time in order to determine the necessary improvements.
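The closed loop above can be sketched in a few lines. This is my own minimal illustration, not anything from Ross’s talk: a credit-style approval threshold is the deployed rule parameter, decisions and their eventual outcomes are captured, and the analysis step redeploys an updated parameter.

```python
# Closed-loop decisioning sketch: execute -> capture -> analyze -> redeploy.
# The threshold rule and the default/repaid outcome model are invented.

threshold = 600   # deployed rule parameter: approve if score >= threshold
captured = []     # capture phase: (score, defaulted) records

def decide(score: int) -> bool:
    """Execution: apply the currently deployed operational rule."""
    return score >= threshold

def capture(score: int, defaulted: bool) -> None:
    """Capture: record the decision input and its eventual outcome."""
    captured.append((score, defaulted))

def analyze_and_redeploy() -> int:
    """Analysis + deployment: raise the threshold past observed bad scores."""
    global threshold
    bad_scores = [s for s, defaulted in captured if defaulted and s >= threshold]
    if bad_scores:
        threshold = max(bad_scores) + 1  # exclude the scores that defaulted
    return threshold
```

The business-rules contribution is that the last step is a parameter change in a repository, not a code release — that is what makes “in a timely manner” plausible.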

He went on to talk in more detail about why a focus on “business process” isn’t enough, since it doesn’t make the business adaptive, create consistent and reusable rules, or address a number of other factors that are better served by business rules. To achieve business agility, then, he feels that you need:

  • Business-level rule management: having the business make changes to rules
  • Business-level change deployment: having the business in charge of the governance process for making and rolling out rule changes
  • Business-level organizational function to support the previous two activities

To find the problem decisions in existing legacy systems, look for redundant, overlapping and conflicting rules; these can manifest as data quality problems, frequent change requests, or customer service problems. In many cases, the conflicting rules run on different platforms and address different channels. The key is to externalize these rules from the legacy systems into a decision service: a business rules management system that maintains the rules repository and is available to any application via a standard web services interface. This allows a gradual transition from having the rules embedded within the legacy systems to centralizing them in a common repository that ensures consistent results regardless of channel or application. The result is consistency across channels, selective customer treatment and competitive time-to-market, as well as relatively painless compliance, since your policies are embedded within the rules themselves and the rules management system can track which rules are executed at any given point in time.

Now, think of your BPMS as your legacy system in the context of the above paragraph…

Logistics: no wifi (there is wifi in the conference area but BRF didn’t spring for the password), requiring a trip to the lobby or my room in order to post — obviously, that will delay things somewhat. No power at the tables, which is not a big deal since I don’t use a lot of power with the wifi off. My blogging will be a bit light today until after my presentation this afternoon.

HD antenna

For those of you in the conversation at last week’s after-conference drinks about HD digital over-the-air (OTA) antennae, and how my husband built one out of a salad spoon and tin foil, here are the details (on his blog).

And yes, for those of you who read his text, he really did make a working antenna out of a tea strainer and a metal tape measure, but I was laughing too hard to take the picture.