Launching #BPMcamp

Almost four years ago, I wrote a post about how we needed a BPM unconference. Today, Scott Francis of BP3 announced that they’re organizing one, although it’s focused on Lombardi customers and products. As I said in my comment on his post:

I believe that there is a place for a vendor-independent BPM camp, but using a single vendor’s clients to kick things off is a promising start to test the format. The biggest challenges, I believe, will be encouraging people who are accustomed to being spoon-fed at typical conferences to create and facilitate their own sessions, as well as getting the corporate approval necessary for attending an unconference.

I’ve attended a lot of unconferences over the past few years, and the format can really work well if the right framework is in place and attendees are willing to participate [note that by “unconference”, I mean the self-organizing type that uses something like Open Space as an organizational framework, not the fake unconferences that are actually pre-scheduled webinars].

I’m very excited to see what happens with this; the time could be right for unconferences to make an impact on the enterprise.

Smarter Systems for Uncertain Times #brf

I facilitated a breakfast session this morning discussing BPM in the cloud, which was a lot of fun, and now I’m in the keynote listening to James Taylor on the role of decision management in agile, smarter systems. Much of this is based on his book, Smart (Enough) Systems, which I reviewed shortly after its release.

Our systems need to be smarter because we live in a time of constant, rapid change – regulations change; competition changes due to globalization; business models and methods change – and businesses need to respond to this change or risk losing their competitive edge. It’s not enough just to be a smarter organization, however: you have to have smarter systems because of the volume and complexity of the events that drive businesses today, the need to respond in real time, and the complex network of systems through which products and services are delivered to customers.

Smarter systems have four characteristics:

  • They’re action-oriented, making decisions and taking action on your behalf instead of just presenting information and waiting for you to decide what to do.
  • They’re flexible, allowing corrections and changes to be made by business people in a short period of time.
  • They’re forward-looking, being able to use historic events and data to predict likely events in the future, and respond to them proactively.
  • They learn, based on testing combinations of business decisions and actions in order to detect patterns and determine the optimal parameters (for example, testing pricing models to maximize revenue).

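The fourth characteristic, systems that learn by testing decision alternatives, can be made concrete with a tiny sketch. This is my own illustration in Python, not anything from James’s book: an epsilon-greedy price test that mostly charges the best-performing price while occasionally trying alternatives.

```python
import random

class PriceTester:
    """Epsilon-greedy price test: mostly exploit the best-performing
    price, occasionally explore an alternative. Purely illustrative."""

    def __init__(self, prices, epsilon=0.1):
        self.prices = prices
        self.epsilon = epsilon
        self.revenue = {p: 0.0 for p in prices}  # revenue earned per price
        self.trials = {p: 0 for p in prices}     # times each price was offered

    def choose_price(self):
        # Explore with probability epsilon, or if we have no data yet.
        if random.random() < self.epsilon or not any(self.trials.values()):
            return random.choice(self.prices)
        # Otherwise exploit: pick the price with the best revenue per trial.
        return max(self.prices,
                   key=lambda p: self.revenue[p] / max(self.trials[p], 1))

    def record_sale(self, price, sold):
        self.trials[price] += 1
        if sold:
            self.revenue[price] += price

tester = PriceTester([49.99, 59.99, 69.99])
offered = tester.choose_price()
tester.record_sale(offered, sold=True)
```

Over many sales, the tester converges on the revenue-maximizing price while still gathering enough data on the alternatives to notice when conditions change – the essence of the “learning” characteristic.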
Decision management is an approach – not a technology stack – that allows you to add decisioning to your current systems in order to make them smarter. You also need to consider the management discipline around this, which will allow systems not just to become smarter, but to begin making decisions and taking actions without human intervention.

James had a number of great examples of smarter systems in practice, and wrapped up with the key to smarter systems: have a management focus on decisions, find the decisions that make a difference to your business, externalize those decisions from other systems, and put the processes in place to automate those decisions and their actions.

Business Rules and Business Events: Where CEP Helps Decisions #brf

To finish off the second day of Business Rules Forum, Paul Vincent of TIBCO spoke about events and event-driven architecture as a useful way of dealing with business rules. TIBCO is best known for their messaging bus (although some of us know it more for the BPM side), and events are obviously one of the things that can be carried by the bus, or generated from other messages on the bus. The three major analysts who presented here this week – Jim Sinur of Gartner, Mike Gualtieri of Forrester, and Steve Hendrick of IDC – all stressed the importance of events and CEP; in fact, Gualtieri stated that CEP is the future of business rules in his breakfast roundtable this morning.

Going back to the basics of business rules, rules can be restrictions, guidelines, computations, inferences, timings and triggers; the last two are where events start to come into play. Rules are defined through terms and facts; some facts may be events, and rules enforced as events occur. Business rules drive process definitions and the decisions made within business processes, and mapping between rules, processes and decisions is easiest done from an event perspective. Events are key to business rule evaluation and enforcement, where events are triggers for both processes and the rules that determine the decisions within those processes: an event triggers a process, which in turn calls a decision service; or an event triggers a change to a rule, which in turn changes the decisions returned to a process. In fact, there’s a fine line between business processes and event processing if you consider how an event might impact an in-flight event-triggered process, and Paul declared that BPM is really just a constrained case of CEP.
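
To make the event-triggers-process-calls-decision chain concrete, here’s a hypothetical sketch; the claim-handling scenario, field names and thresholds are all invented for illustration, not from Paul’s talk.

```python
def decision_service(claim):
    """Externalized business rule: auto-approve small claims."""
    return "approve" if claim["amount"] <= 1000 else "refer"

def handle_event(event, started_processes):
    """An inbound event triggers a process, which in turn calls the
    decision service; the returned decision drives the rest of the process."""
    if event["type"] == "claim_received":
        decision = decision_service(event["payload"])
        started_processes.append({"claim": event["payload"],
                                  "decision": decision})

started = []
handle_event({"type": "claim_received", "payload": {"amount": 400}}, started)
# started[0]["decision"] is "approve"
```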

Having taken over the world of BPM, he moved on to BRM, and showed how CEP systems are better for managing automated rules (when all you have is a CEP system, everything looks like an event, I suppose 🙂 ) since all decisions are event-driven, and CEP systems monitor simple events and decisions to identify patterns in real time by combining rules, events and real-time data in the same system to allow organizations to react intelligently to business events. He walked through an example architecture for real-time event processing (which happens to be TIBCO’s CEP architecture): a distributed CEP framework including an event bus and data grid, plus rule maintenance and execution, and real-time analytics. This allows historic patterns to be detected in real time (which sounds like a contradiction), while providing the decision management interfaces, rule agents and real-time dashboards. Rather than having a listener application feeding a rules engine, events are fed directly to the event processing engine in an event-driven architecture. He walked through other aspects, such as rule definition and decision services, showing how EDA provides a simpler and more powerful environment than standard BRMS and SOA.
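
The real-time pattern detection he described can be illustrated with a toy sliding-window rule; this is my own sketch, not TIBCO’s API: flag a pattern when enough matching events arrive within a time window.

```python
from collections import deque

class PatternDetector:
    """Toy CEP-style rule: report a pattern when `threshold` matching
    events arrive within `window` seconds of each other."""

    def __init__(self, threshold=3, window=60):
        self.threshold = threshold
        self.window = window
        self.times = deque()

    def on_event(self, timestamp):
        self.times.append(timestamp)
        # Slide the window: drop events older than `window` seconds.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) >= self.threshold

detector = PatternDetector(threshold=3, window=60)
hits = [detector.on_event(t) for t in (0, 10, 20, 200)]
# hits is [False, False, True, False]: the first three events fall in one
# window; the fourth arrives long after and starts a new window.
```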

Business rules are used in sense and respond, track and trace, and situation awareness CEP applications; business users (or at least business analysts) need to be able to understand and model events independent of any particular infrastructure. I completely agree with this, since I find that business analysts focused on process are woefully unaware of how to identify and model events and how those events impact their business processes.

Comprehensive Decision Management: Leveraging Business Process, Case Management and CEP #brf

Steve Zisk of Pegasystems discussed decision management with a perspective on the combination of BPMS and BRMS (as you might expect from someone from Pega, whose product is a BPMS built on a BRMS): considering where change occurs and has to be managed, how rules and process interact, and what types of rules belong in which system.

In many cases, rules are integrated with process through loose integration: a process makes a call to a rules engine and passes over the information required to make a decision, and the rules engine sends back the decision or any resulting error conditions. This loose coupling makes for good separation of rules and process, and treats the rules engine as a black box from the point of view of the process, but doesn’t allow you to see how rules may interact. It also makes it difficult when the parameters that drive rules change: there may be a new parameter that has to be collected at the UI level, held within process instance parameters and passed through to the rules engine, not just a change to the rules. Pega believes that you have to have rules infused into the process in order to make process and rules work together, and to be completely agile.
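
Here’s a minimal sketch of that loose-coupling pattern, with made-up rule parameters: the process hands a payload to a black-box rules engine and gets back either a decision or an error; a rule that starts requiring a parameter the process never collected surfaces as an error rather than a decision.

```python
def rules_engine(payload):
    """Black-box decision: the process doesn't know what's inside."""
    try:
        if payload["credit_score"] >= 650 and payload["income"] >= 40000:
            return {"decision": "approve"}
        return {"decision": "decline"}
    except KeyError as missing:
        # A rule parameter the process never collected comes back as an
        # error, not a decision: the coupling problem described above.
        return {"error": "missing parameter: %s" % missing}

def process_step(applicant):
    # The process just passes data over and acts on what comes back.
    return rules_engine(applicant)

ok = process_step({"credit_score": 700, "income": 50000})
# ok is {"decision": "approve"}
broken = process_step({"credit_score": 700})  # income was never collected
# broken contains "error" instead of "decision"
```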

Looking at an event-driven architecture, you can consider three classes of events: business, external and systems. We’re concerned primarily with business events here, since those are the ones that have to be displayed to a user, used to derive recommendations to display to a user, or used by users in order to make business decisions. Systems that need to involve human decisions across many complex events need tighter integration between events, processes and rules.

Case management is about more than just collections of information: a case is the coordination of multiple units of work towards a concrete objective to meet a business need of an external customer, an internal customer, a partner or another agency. Cases respond to and generate events, both human events (such as phone calls) and automated events (such as followup reminders or fraud detection alerts).

Steve covered a number of case studies and use cases discussing the interaction between rules, processes and events, highlighting the need for close integration between these, as well as the need for rules versioning.

Business Rules Governance and Management #brf

Eric Charpentier, an independent rules consultant who has also been blogging from BRF, gave a presentation on business rules governance and management. He makes a distinction between governance and management, although that is primarily one of level: governance deals with the higher-level issues of establishing leadership, organizational structures, communication and processes, whereas management is tied to the operational work of creating rules and running them day to day. He proposes a number of ingredients for rules management and governance:

  • Leadership and stakeholders, including identifying stakeholders, classifying them by attitude, power and interest, and identifying roles, responsibilities and skills
  • Communication plans for each stakeholder type
  • Identifying types of rules, particularly around complexity and dependencies
  • Understanding the lifecycle of rules within your organization, which shows the process of creating, reviewing, testing, deploying and retiring rules, and the roles associated with each step in that process
  • Rule management processes, with details on rule discovery, authoring and other management tasks, right down to rule retirement
  • Execution monitoring and failure management
  • Change management, including security and access control over different types of changes
  • Building a rules center of excellence; interestingly, Jim Sinur recommended a joint BRM-BPM CoE in his presentation this morning, although I’m not sure that I completely agree with that since the efforts are often quite disjoint within organizations (or maybe Jim’s point is to force them closer together)

Eric obviously has a huge amount of knowledge about organizing rules projects, and he also demonstrated his practical experience by discussing two case studies where he is involved as a facilitator, rule author, rule administrator and developer: one a Canadian government project, and the other with Accovia, a travel technology provider.

The government project is a multi-year renewal project in which legacy AS/400 systems are being converted to a service-oriented architecture, and rules were identified as a key technology. In order to gain an early win, they extracted rules from the legacy system, put them in a BRMS, exposed them as web services, and then called them from the legacy system. In the future, they’ll be able to build new systems that call those same rules, now that they’re properly externalized from the applications. They’re using a fairly simple rule lifecycle (develop, test, deploy) that combines authoring and change management because they have a small team, but have specific timelines for some rules that change on an annual basis. They have processes (or procedures) for deployment, execution monitoring, failure management, testing and validation, and simulation, although they have no rule retirement process because of the current nature of their rules. The simulation, a new process, takes the data from the previous year and runs it through potential new rules in order to understand the impact on their costs; this then allows assessment of the new rules, and of the appropriate policy set that in turn selects the rules for production.
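
Their simulation approach – replaying last year’s data through candidate rules to see the cost impact – can be sketched in a few lines; the rules and case data below are invented for illustration.

```python
def current_rule(case):
    """Existing benefit rule (invented numbers)."""
    return 100 if case["severity"] > 5 else 50

def candidate_rule(case):
    """Proposed replacement rule to be assessed before deployment."""
    return 120 if case["severity"] > 7 else 50

def simulate(rule, historical_cases):
    """Replay historical cases through a rule set and total the cost."""
    return sum(rule(case) for case in historical_cases)

last_year = [{"severity": s} for s in (2, 6, 8, 3)]
baseline = simulate(current_rule, last_year)    # 50 + 100 + 100 + 50 = 300
proposed = simulate(candidate_rule, last_year)  # 50 + 50 + 120 + 50 = 270
```

Comparing `baseline` and `proposed` quantifies the cost impact of the candidate rules before anyone commits them to production.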

The Accovia project is focused on their off-the-shelf software product, where they are embedding rules as a key component of the software. They have some rules that are internal to the software, and others where they allow the client to customize the rules; this means that the typical rules project roles are split between Accovia and their clients. The clients won’t be able to change the basic rules models, so the challenges are around creating the environment that allows the clients to make changes that are meaningful to them, but are also resilient to upgrades in the core product. They haven’t solved all of these problems yet, but have identified six possible rule lifecycles that need to be managed.

Some key lessons learned about rules governance and management as a wrapup: this takes time, and needs good stakeholder analysis. You may also need to do some research and consult your technical team in order to understand all of the issues that might be involved. Very comprehensive presentation.

BRMS at a Crossroads #brf

Steve Hendrick of IDC gave the morning keynote with predictions for the next five years of business rules management systems. He sees BRMS as being at a crossroads, currently being used as passive decisioning systems with limited scope, but with changes coming in visibility, open source, decision management platforms and cloud services.

He took a look at the state of the BRMS market: in 2008, license and maintenance revenue (but not services) was $285M, representing 10.5% growth; significant in its own right, but just a rounding error in the general software market that is about 1000 times that size. He plotted out the growth rate versus total revenue for the top 10 BRMS vendors; no one is in the top right, IBM (ILOG), CA and FICO are in the lower right with high revenues but low growth, Pegasystems, SAP (Yasu rebranded as NetWeaver BRM), Object Connections, Oracle, ESI and Corticon are in the top left with lower revenues but higher growth, and IDS Scheer (?) is in the bottom left.

Forecasting growth of the BRMS market is based on a number of factors: drivers include market momentum, business awareness driven mostly by business process focus, changes in worldwide IT spending, and GRC initiatives, whereas the shift to open source and cloud services have a neutral impact on the growth. He put forward their forecast scenarios for now through 2013: the expected growth will rise from 5% in 2009 to just over 10% by 2013. The downside growth scenario is closer to 6% in 2013, while the upside is 15%.

Many of the large ISVs such as IBM and SAP have just started in the BRMS space through recent acquisitions, but IBM tops his list of leading BRMS vendors because of the ILOG acquisition; FICO, Oracle and Red Hat are also on that list. He also lists the BPMS vendors with a strong legacy in BRMS, including CA (?), Pegasystems and TIBCO. Open source vendors such as Red Hat are starting to gain a foothold – 5-10% of installed base – and are a significant threat to some of the large players (with equally large price tags) in the pure play BRMS space. Decision services and decision tables, which are the focus of much of the BRMS market today, can be easily commoditized, making it easier for many organizations to consider open source alternatives, although there are differences in the authoring and validation/verification functionality.

He spoke about moving towards a decision management platform, which includes looking at the relationships between rules and analytics: data can be used to inform rules definitions, as well as for decision refinement. CEP is an integral part of a decision management platform, with some overlapping functionality with a BRMS, but some complementary functions as well. He puts all of this together – data preparation, BRMS, decision refinement, CEP and state – into a core state machine, with links to an event server for receiving inbound events and a process server for initiating actions based on decisions, blending together an event, decision and process architecture. The benefits of this type of architecture:

  • Sense and respond provides granular support for all activities
  • Feedback allows immediate corrections to reduce risk
  • Decision models become complex but processes become simple
  • Model-driven with implicit impact analysis and change management
  • GRC and consistency are derivative benefits
  • Scalable and cloud-ready

This last point led to a discussion about decision management platforms as cloud services to handle scalability issues as well as reduce costs; aside from the usual fears about data security, this seems like a good fit.

His recommendations to vendors over the next five years:

  • Add analytics to complement the BRMS functionality
  • Bring BRMS, CEP, BPM, analytics and EDA together into a decision management platform
  • Watch out for the penetration of open source solutions
  • Offer cloud DMP services to handle unpredictable resource requirements and scalability needs

Recommendations for user organizations:

  • Understand the value that analytics brings to BRMS, and ensure that you have the right people to manage this
  • Commit to leveraging real-time events and activities as part of your business processes and decisions
  • Watch the BRMS, CEP, BPM and analytics markets over the next 12 months to understand the changing landscape of products and capabilities

A good look forward, and some sound recommendations.

Business Rules Management: The Misunderstood Partner to Process #brf

The title of Jim Sinur’s breakfast session this morning is based on the “lack of respect” that rules have as a key component in business processes: as he pointed out, it’s very difficult to explain to a business executive what business rules do and their value (something to which I can personally attest). I’ve been talking about the value of externalizing rules from processes for a number of years, and Jim and I are definitely on the same page here. He has some numbers to back this up: a rules management system can expect to show an IRR of 15%, and in some industries that are very rules-intensive, it can be much higher.

Rules are everywhere: embodied in multiple systems, as well as in manual procedures and within employees’ heads; it goes without saying that there can be inconsistent versions of what should be the same rule in different places, leading to inconsistent business processes and outcomes. Extracting the rules out of the systems – and with more difficulty, from people’s heads – and managing them in a common rules system allows those rules to become more transparent (everyone can see what the rules are) and agile (a rule change can be made in one place but may impact multiple business processes and scenarios). Or as he said, rules are much easier to manage when they are managed. 🙂

Not all rules, however, are business rules, and therefore not all are a fit for externalization: the best fit are those that truly have a business focus and have some degree of volatility, such as regulatory compliance rules or the rules that you use for underwriting; those with a poor fit for BRMS are system rules that might be better left in code. Once the business rules have been identified, the next challenge is to figure out which of these should actually be managed directly by the business. IT will tell you that allowing the business to change any rule without a full regression test is dangerous; they’re wrong, of course, since your initial testing of rules should test the envelope within which the business can make rule changes. However, Jim’s suggestion is to have business and IT each state which rules they want to manage, and deal only with those that are claimed by both, by examining the impact of changing rules within that area of overlap. Basically, if a change to a rule can’t result in any system meltdown or violation of business practices, there’s usually not a good reason not to allow the business to manage it directly.

As with the Gartner definition of BPM, BRM is defined as both a management discipline as well as the tools and technology: just as we have to get organizations to think in a process-centric manner in order to implement effective BPM systems, organizations have to think about rules management as a first-class member of their analysis and management tools. Compared to BPM, however, BRM is further back in the hype cycle: just approaching the peak of inflated expectations, whereas BPA is far out in the plateau of productivity and BPMS is dipping into the trough of disillusionment. Jim predicts that BRM will become important (especially in the context of BPM) in 5-10 years, unless some catastrophic event or legislation causes this to accelerate; this is expected to show high benefit, although not necessarily transformational as BPM is expected to be.

There’s been a lot of acquisition in the rules space, and the number of significant players has dropped from 40+ to about 15 in the past few years. There’s still quite a bit of variability in BRM offerings, however, ranging from the simple routing and branching available within BPMS, to inference engines where rules are fired based on terms and facts either through forward-chaining or backward-chaining, to event-based engines that fire based on the correlation of business and system events. Really, however, the first case is a BPMS, the second is a typical BRMS, and the third is complex event processing, but these boundaries are starting to shift. Rules technology is being seen in BPMS and CEP, but also within application development environments and packaged applications.
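
A forward-chaining engine, in miniature, looks something like this (my own toy sketch of the inference style, with made-up facts and rules, not any vendor’s engine): rules fire when their conditions match known facts, asserting new facts until nothing more can be derived.

```python
def forward_chain(facts, rules):
    """Fire any rule whose conditions are all known facts, assert its
    conclusion as a new fact, and repeat until nothing more fires."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("high_value", "new_customer"), "manual_review"),
    (("manual_review",), "assign_underwriter"),
]
derived = forward_chain({"high_value", "new_customer"}, rules)
# derived now also contains "manual_review" and "assign_underwriter"
```

Backward chaining runs the same rules in the other direction, starting from a goal and working back to the facts needed to prove it; production engines add efficient matching (e.g., Rete) rather than this brute-force loop.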

He did an overview of BRMS technology, starting with business rule representation: there’s a whole spectrum of rule representation, ranging from proto-natural languages through to the (as yet non-existent) natural language rules. In order to be considered a BRMS (as opposed to just a BRE), a product needs to include a rules repository, modeling and simulation, monitoring and analysis, management and administration, templates, and an integrated development environment, all centered around the rule execution engine.

Combining rules and process is really the sweet spot for both sides: allowing business rules to be externalized from processes so that they can be reused across processes (and other applications), and changed as required by the business, even for in-flight processes. Rules can be used as constraints for unstructured processes, where you don’t need to know ahead of time exactly what the process will look like, but the goals must be achieved – and validated by the rules – before the process instance is completed. The simple routing rules that exist within some BPMS just aren’t sufficient for this, and most BPM vendors are starting to realize that they either need to build their own BRMS or learn to integrate well with one of the full-featured BRMS products.
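
The idea of rules as completion constraints on an unstructured process can be sketched simply; the goal rules below are hypothetical. The case can take any path, but it can only complete once all of the goal rules validate.

```python
# Goal rules: every one must hold before the case may complete.
GOAL_RULES = [
    lambda case: case.get("documents_verified", False),
    lambda case: case.get("payment_cleared", False),
]

def can_complete(case):
    """The case can take any path; completion is gated on the rules."""
    return all(rule(case) for rule in GOAL_RULES)

case = {"documents_verified": True}
# can_complete(case) is False: payment hasn't cleared yet
case["payment_cleared"] = True
# can_complete(case) is now True, so the instance may complete
```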

He wrapped up with some key takeaways and recommendations: focus on real business rules; learn how BRM can become part of your management practices as well as technology portfolio; marry BPM and BRM, potentially within the same CoE; and see rules and processes as metadata-driven assets.

The Decision Dilemma #brf

The Business Rules Forum has started here in Las Vegas, and I’m here all week giving a presentation in the BPM track, facilitating a workshop and sitting on a panel. James Taylor and Eric Charpentier are also here presenting and blogging, with a focus more purely on rules and decision management; you will want to check out their blogs as well since we’ll likely all be at different sessions. I’m really impressed with what this conference has grown into: attendance is fairly low, as it has been at every conference that I’ve attended this year due to the economy, but there is a great roster of speakers and five concurrent tracks of breakout sessions including the new BPM track. As I’ve been blogging about for a while (as has James), process and rules belong together; this conference is the opportunity to learn about both as well as their overlap.

We kicked off with a welcome from Gladys Lam, followed by a keynote from Jim Sinur on making better decisions in the face of uncertainty. One thing that’s happened during the economic meltdown is that a great deal of uncertainty has been introduced into not just financial markets, but many aspects of how we do business. The result is that business processes need to be more dynamic, and need to be able to respond to emerging patterns rather than the status quo. At the Appian conference last week, Jim spoke about some of their new research on pattern-based strategies, and that’s the heart of what he’s talking about today.

One of the effects of increased connectivity on business is that it speeds the impact of change: as soon as something changes in how business works in one part of the world, it’s everywhere. This makes instability – driven by that change – the normal state rather than an exception. Although he focused on dynamic processes at the Appian conference, here he focused more on the role of rules in dealing with uncertainty, which I think is a valid point since rules and decision management are much of what allow processes to dynamically shift to accommodate changing conditions; although perhaps it is more accurate to consider the role of complex event processing as well. I am, however, left with the impression that Gartner is spinning pattern-based strategy onto pretty much every technology and special interest group.

The discussion about pattern-based strategies was the same as last week (and the same, I take it, as at the recent Gartner IT Expo where this was introduced), covering the cycle of seek, model and adapt, as well as the four disciplines of pattern seeking, performance-driven culture, optempo advantage and transparency.

There’s lots of Twitter activity about the conference, and it’s especially interesting to see reactions from other analysts such as Mike Gualtieri of Forrester.

Lean Process Improvement Revisited #appianforum

As with Jim Sinur, my schedule is overlapping with that of Clay Richardson of Forrester several times this month. This morning, I heard some new material from Jim, but Clay had much the same presentation that I saw him give at the Forrester Business Technology Forum a couple of weeks ago so I don’t have a lot to add here, although it’s worth reviewing the original post since he had a good presentation on the implications of Lean principles on BPM.

He did have a new bloated-lean-anemic case study about the Territory Insurance Office based on Tom Higgins’ presentation at the BTF; hopefully we’ll see a paper from Clay on this soon.

Don’t Underestimate the Impact of BPM #appianforum

It’s the third time this month that I’ve been at a conference with Jim Sinur of Gartner, and he’s giving the opening keynote here at Appian’s user conference. Although a lot of the local people are held up due to weather and traffic today, they’re expecting over 300 people here: a huge success given the poor attendance and even cancellations that we’ve seen with other BPM events this year.

He started out with some stats on the companies that submitted their achievements for Gartner’s BPM excellence awards: some outstanding examples of executive support and ROI, although you have to keep in mind that these are self-selected as “excellent”. There were, however, some unexpected results and out-of-the-box thinking, where benefits from one organization were used to help those who were less fortunate, or unstructured processes were used to gain process improvement.

Unstructured processes used to handle exceptions within a more structured process are no longer considered unusual, but are a standard part of many processes that need to adapt to shifting conditions: they need to be considered an integral part of a business process rather than something to be avoided. Today’s agile processes allow businesses to deal with known exceptions, by allowing rules or processes to be changed on the fly, but future-thinking organizations have to be looking for unknown exceptions, and allowing their processes to be adapted for any scenario that might arise. There’s a huge amount of information that drives these scenarios and their early detection, including events from multiple disparate systems: the key is to look for patterns and understand the impact that they will have on your organization.

He outlined four disciplines of pattern-based strategy:

  • Pattern seeking, to seek and exploit signals that apply to you, particularly through collaborative knowledge
  • Optempo (operational tempo) advantage, to dynamically match organizational pace to changing conditions, requiring a harmonized and synchronized view of patterns across the organization
  • Performance-driven culture, to adapt to changing patterns in order to achieve target results
  • Transparency, enabling pattern-based strategy by exposing signals earlier

BPM is one of the technologies that helps organizations to adapt to the patterns, once they have been discovered and modeled in a seek-model-adapt cycle. We’re moving from managing processes to managing chaos, and pattern-based strategies are part of that.