Smarter Systems for Uncertain Times #brf

I facilitated a breakfast session this morning discussing BPM in the cloud, which was a lot of fun, and now I’m in the keynote listening to James Taylor on the role of decision management in agile, smarter systems. Much of this is based on his book, Smart (Enough) Systems, which I reviewed shortly after its release.

Our systems need to be smarter because we live in a time of constant, rapid change – regulations change; competition changes due to globalization; business models and methods change – and businesses need to respond to this change or risk losing their competitive edge. It’s not enough, however, just to be a smarter organization: you also need smarter systems, because of the volume and complexity of the events that drive businesses today, the need to respond in real time, and the complex network of delivery systems by which products and services are delivered to customers.

Smarter systems have four characteristics:

  • They’re action-oriented, making decisions and taking action on your behalf instead of just presenting information and waiting for you to decide what to do.
  • They’re flexible, allowing corrections and changes to be made by business people in a short period of time.
  • They’re forward-looking, being able to use historic events and data to predict likely events in the future, and respond to them proactively.
  • They learn, based on testing combinations of business decisions and actions in order to detect patterns and determine the optimal parameters – for example, testing pricing models to maximize revenue, as sketched below.
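The learning characteristic is the easiest to make concrete in code. Here’s a minimal sketch, assuming a simple epsilon-greedy test over candidate price points; the prices, exploration rate and simulated conversion model are all my own illustration, not from James’ talk:

```python
import random

PRICES = [9.99, 12.99, 14.99]  # hypothetical candidate price points
EPSILON = 0.1                  # fraction of traffic used to explore

revenue = {p: 0.0 for p in PRICES}
trials = {p: 0 for p in PRICES}

def choose_price():
    """Mostly exploit the best-known price; occasionally explore others."""
    if random.random() < EPSILON or not any(trials.values()):
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: revenue[p] / max(trials[p], 1))

def record_outcome(price, purchased):
    """Feed each observed outcome back into the running estimates."""
    trials[price] += 1
    if purchased:
        revenue[price] += price

# Simulated customers: higher prices convert less often (a made-up model)
for _ in range(10000):
    price = choose_price()
    record_outcome(price, purchased=random.random() < (20.0 - price) / 20.0)

best = max(PRICES, key=lambda p: revenue[p] / max(trials[p], 1))
print(f"best price so far: {best}")
```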

Decision management is an approach – not a technology stack – that allows you to add decisioning to your current systems in order to make them smarter. You also need to consider the management discipline around this, which will allow systems not just to become smarter, but to begin making decisions and taking actions without human intervention.

James had a number of great examples of smarter systems in practice, and wrapped up with the key to smarter systems: have a management focus on decisions, find the decisions that make a difference to your business, externalize those decisions from other systems, and put the processes in place to automate those decisions and their actions.

Comprehensive Decision Management: Leveraging Business Process, Case Management and CEP #brf

Steve Zisk of Pegasystems discussed decision management with a perspective on the combination of BPMS and BRMS (as you might expect from someone from Pega, whose product is a BPMS built on a BRMS): considering where change occurs and has to be managed, how rules and process interact, and what types of rules belong in which system.

In many cases, rules are integrated with process through loose integration: a process makes a call to a rules engine and passes over the information required to make a decision, and the rules engine sends back the decision or any resulting error conditions. This loose coupling makes for good separation of rules and process, and treats the rules engine as a black box from the point of view of the process, but doesn’t allow you to see how rules may interact. It also makes it difficult when the parameters that drive rules change: there may be a new parameter that has to be collected at the UI level, held within process instance parameters and passed through to the rules engine, not just a change to the rules. Pega believes that you have to have rules infused into the process in order to make process and rules work together, and to be completely agile.
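To make the loosely-coupled pattern concrete, here’s a minimal sketch of a process step calling out to a rules service; the endpoint, payload fields and response shape are all hypothetical, not any vendor’s actual API:

```python
import json
from urllib import request

# Hypothetical decision service endpoint -- not any vendor's real API
DECISION_SERVICE_URL = "https://rules.example.com/decide/credit-limit"

def decide_credit_limit(instance):
    """A process step that delegates a decision to an external rules engine.

    The process only knows which parameters to pass and what decision comes
    back; the rules themselves are a black box. If the rules later need a new
    input parameter, the UI that collects it, the process instance data, and
    this payload all have to change -- the weakness noted above.
    """
    payload = json.dumps({
        "customerId": instance["customer_id"],
        "income": instance["income"],
        "existingDebt": instance["existing_debt"],
    }).encode("utf-8")
    req = request.Request(DECISION_SERVICE_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        decision = json.load(resp)
    # The process routes on the result without knowing how it was reached
    return decision.get("approvedLimit"), decision.get("errors", [])
```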

Looking at an event-driven architecture, you can consider three classes of events: business, external and systems. We’re concerned primarily with business events here, since those are the ones that have to be displayed to a user, used to derive recommendations to display to a user, or used by users in order to make business decisions. Systems that need to involve human decisions with many complex events require tighter integration between events, processes and rules.

Case management is about more than just collections of information: a case is the coordination of multiple units of work towards a concrete objective to meet a business need of an external customer, an internal customer, a partner or another agency. Cases respond to and generate events, both human events (such as phone calls) and automated events (such as followup reminders or fraud detection alerts).

Steve covered a number of case studies and use cases discussing the interaction between rules, processes and events, highlighting the need for close integration between these, as well as the need for rules versioning.

Business Rules Governance and Management #brf

Eric Charpentier, an independent rules consultant who has also been blogging from BRF, gave a presentation on business rules governance and management. He makes a distinction between governance and management, although the distinction is primarily one of level: governance deals with the higher-level issues of establishing leadership, organizational structures, communication and processes, whereas management is tied to the day-to-day operational issues of creating and maintaining rules. He proposes a number of ingredients for rules management and governance:

  • Leadership and stakeholders, including identifying stakeholders, classifying them by attitude, power and interest, and identifying roles, responsibilities and skills
  • Communication plans for each stakeholder type
  • Identifying types of rules, particularly around complexity and dependencies
  • Understanding the lifecycle of rules within your organization, which shows the process of creating, reviewing, testing, deploying and retiring rules, and the roles associated with each step in that process
  • Rule management processes, with details on rule discovery, authoring and other management tasks, right down to rule retirement
  • Execution monitoring and failure management
  • Change management, including security and access control over different types of changes
  • Building a rules center of excellence; interestingly, Jim Sinur recommended a joint BRM-BPM CoE in his presentation this morning, although I’m not sure that I completely agree with that since the efforts are often quite disjoint within organizations (or maybe Jim’s point is to force them closer together)

Eric obviously has a huge amount of knowledge on organizing rules projects, and he also demonstrated his practical experience by discussing two case studies where he is involved as a facilitator, rule author, rule administrator and developer: one in a Canadian government project, and the other with Accovia, a travel technology provider.

The government project is a multi-year renewal project where legacy AS/400 systems are being converted to service-oriented architecture, and rules were identified as a key technology. In order to gain an early win, they extracted rules from the legacy system, put them in a BRMS, exposed them as web services, and then called them from the legacy system. In the future, they’ll be able to build new systems that call those same rules, now that they’re properly externalized from the applications. They’re using a fairly simple rule lifecycle (develop, test, deploy) that combines authoring and change management because they have a small team, but have specific timelines for some rules that change on an annual basis. They have processes (or procedures) for deployment, execution monitoring, failure management, testing and validation, and simulation, although they have no rule retirement process because of the current nature of their rules. The simulation, a new process, takes the data from the previous year and runs it through potential new rules in order to understand the impact on their costs; this allows assessment of the new rules and selection of the appropriate policy set, which in turn determines the rules that go into production.
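Their simulation process maps naturally to code. A minimal sketch, with hypothetical rule sets and case data standing in for their actual rules: replay last year’s cases through each candidate rule set and compare the total cost.

```python
# Hypothetical: each candidate rule set is a function from a case to a cost
def current_rules(case):
    rate = 0.80 if case["category"] == "standard" else 1.00
    return case["claimed"] * rate

def draft_rules(case):
    rate = 0.75 if case["category"] == "standard" else 0.95
    return case["claimed"] * rate

last_year_cases = [
    {"claimed": 1200.0, "category": "standard"},
    {"claimed": 5000.0, "category": "special"},
    # ... in practice, the full previous year's data set
]

def simulate(rule_set, cases):
    """Replay historical cases through a candidate rule set; total the cost."""
    return sum(rule_set(case) for case in cases)

for name, rule_set in [("current", current_rules), ("draft", draft_rules)]:
    print(f"{name}: total cost {simulate(rule_set, last_year_cases):,.2f}")
```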

The Accovia project is focused on their off-the-shelf software product, where they are embedding rules as a key component of the software. They have some rules that are internal to the software, and others where they allow the client to customize the rules; this means that the typical rules project roles are split between Accovia and their clients. The clients won’t be able to change the basic rules models, so the challenges are around creating the environment that allows the clients to make changes that are meaningful to them, but are also resilient to upgrades in the core product. They haven’t solved all of these problems yet, but have identified six possible rule lifecycles that need to be managed.

As a wrapup, some key lessons learned about rules governance and management: this takes time, and needs good stakeholder analysis. You may also need to do some research and consult your technical team in order to understand all of the issues that might be involved. A very comprehensive presentation.

BRMS at a Crossroads #brf

Steve Hendrick of IDC gave the morning keynote with predictions for the next five years of business rules management systems. He sees BRMS as being at a crossroads, currently being used as passive decisioning systems with limited scope, but with changes coming in visibility, open source, decision management platforms and cloud services.

He took a look at the state of the BRMS market: in 2008, license and maintenance revenue (but not services) was $285M, representing 10.5% growth; significant in its own right, but just a rounding error in the general software market, which is about 1000 times that size. He plotted out the growth rate versus total revenue for the top 10 BRMS vendors: no one is in the top right; IBM (ILOG), CA and FICO are in the lower right with high revenues but low growth; Pegasystems, SAP (Yasu rebranded as NetWeaver BRM), Object Connections, Oracle, ESI and Corticon are in the top left with lower revenues but higher growth; and IDS Scheer (?) is in the bottom left.

Forecasting growth of the BRMS market is based on a number of factors: drivers include market momentum, business awareness driven mostly by business process focus, changes in worldwide IT spending, and GRC initiatives, whereas the shift to open source and cloud services has a neutral impact on growth. He put forward their forecast scenarios for now through 2013: expected growth will rise from 5% in 2009 to just over 10% by 2013. The downside growth scenario is closer to 6% in 2013, while the upside is 15%.

Many of the large ISVs such as IBM and SAP have just started in the BRMS space through recent acquisitions, but IBM tops his list of leading BRMS vendors because of the ILOG acquisition; FICO, Oracle and Red Hat are also on that list. He also lists the BPMS vendors with a strong legacy in BRMS, including CA (?), Pegasystems and TIBCO. Open source vendors such as Red Hat are starting to gain a foothold – 5-10% of installed base – and are a significant threat to some of the large players (with equally large price tags) in the pure play BRMS space. Decision services and decision tables, which are the focus of much of the BRMS market today, can be easily commoditized, making it easier for many organizations to consider open source alternatives, although there are differences in authoring and validation/verification functionality.

He spoke about moving towards a decision management platform, which includes looking at the relationships between rules and analytics: data can be used to inform rules definitions, as well as for decision refinement. CEP is an integral part of a decision management platform, with some overlapping functionality with a BRMS, but some complementary functions as well. He puts all of this together – data preparation, BRMS, decision refinement, CEP and state – into a core state machine, with links to an event server for receiving inbound events and a process server for initiating actions based on decisions, blending together an event, decision and process architecture. The benefits of this type of architecture:

  • Sense and respond provides granular support for all activities
  • Feedback allows immediate corrections to reduce risk
  • Decision models become complex but processes become simple
  • Model-driven, with implicit impact analysis and change management
  • GRC and consistency are derivative benefits
  • Scalable and cloud-ready

This last point led to a discussion about decision management platforms as cloud services to handle scalability issues as well as reduce costs; aside from the usual fears about data security, this seems like a good fit.
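To make the blended event-decision-process architecture above a bit more tangible, here’s a toy sketch; the event queue, decision logic and process hand-off are all hypothetical stand-ins for the event server, BRMS core and process server he described:

```python
from collections import defaultdict, deque

event_queue = deque()               # stands in for the inbound event server
customer_totals = defaultdict(int)  # state held by the decisioning core

def decide(event):
    """BRMS-style decision over the event plus accumulated state."""
    customer_totals[event["customer"]] += event["amount"]
    if customer_totals[event["customer"]] > 10000:
        return {"action": "offer_upgrade", "customer": event["customer"]}
    return None

def start_process(action):
    """Stands in for handing a resulting action to the process server."""
    print(f"starting process: {action}")

event_queue.extend([
    {"customer": "acme", "amount": 6000},
    {"customer": "acme", "amount": 7000},
])

while event_queue:
    decision = decide(event_queue.popleft())
    if decision:
        start_process(decision)
```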

His recommendations to vendors over the next five years:

  • Add analytics to complement the BRMS functionality
  • Bring BRMS, CEP, BPM, analytics and EDA together into a decision management platform
  • Watch out for the penetration of open source solutions
  • Offer cloud DMP services to cater to unpredictable resource requirements and scalability needs

Recommendations for user organizations:

  • Understand the value that analytics brings to BRMS, and ensure that you have the right people to manage this
  • Commit to leveraging real-time events and activities as part of your business processes and decisions
  • Watch the BRMS, CEP, BPM and analytics markets over the next 12 months to understand the changing landscape of products and capabilities

A good look forward, and some sound recommendations.

Business Rules Management: The Misunderstood Partner to Process #brf

The title of Jim Sinur’s breakfast session this morning is based on the “lack of respect” that rules have as a key component in business processes: as he pointed out, it’s very difficult to explain to a business executive what business rules do and their value (something to which I can personally attest). I’ve been talking about the value of externalizing rules from processes for a number of years, and Jim and I are definitely on the same page here. He has some numbers to back this up: a rules management system can expect to show an IRR of 15%, and in some industries that are very rules-intensive, it can be much higher.

Rules are everywhere: embodied in multiple systems, as well as in manual procedures and within employees’ heads; it goes without saying that there can be inconsistent versions of what should be the same rule in different places, leading to inconsistent business processes and outcomes. Extracting the rules out of the systems – and, with more difficulty, from people’s heads – and managing them in a common rules system allows those rules to become more transparent (everyone can see what the rules are) and agile (a rule change can be made in one place but may impact multiple business processes and scenarios). Or as he said, rules are much easier to manage when they are managed. 🙂

Not all rules, however, are business rules that are a fit for externalization: the best fit are those that truly have a business focus and have some degree of volatility, such as regulatory compliance rules or the rules that you use for underwriting; a poor fit for a BRMS are system rules that might be better left in code. Once the business rules have been identified, the next challenge is to figure out which of these should actually be managed directly by the business. IT will tell you that allowing the business to change any rule without full regression testing is dangerous; they’re wrong, of course, since your initial testing of rules should test the envelope within which the business can make rule changes. However, Jim’s suggestion is to have business and IT each state which rules they want to manage, and just deal with those that are claimed by both, by examining the impact of changing rules within that area of overlap. Basically, if a change to a rule can’t result in any system meltdown or violation of business practices, there’s usually not a good reason not to allow the business to manage it directly.
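Jim’s suggestion reduces to a simple set exercise; a sketch with hypothetical rule names, just to illustrate the mechanics:

```python
# Hypothetical rule inventories: which rules each side wants to manage
business_claimed = {"underwriting_limits", "discount_thresholds", "renewal_pricing"}
it_claimed = {"discount_thresholds", "retry_policy", "renewal_pricing"}

# Uncontested rules can be assigned to an owner immediately
business_only = business_claimed - it_claimed
it_only = it_claimed - business_claimed

# Only the overlap needs an impact analysis before assigning ownership
contested = business_claimed & it_claimed
print(f"needs impact analysis: {sorted(contested)}")
```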

As with the Gartner definition of BPM, BRM is defined as both a management discipline and the tools and technology: just as we have to get organizations to think in a process-centric manner in order to implement effective BPM systems, organizations have to think about rules management as a first-class member of their analysis and management tools. Compared to BPM, however, BRM is further back in the hype cycle: just approaching the peak of inflated expectations, whereas BPA is far out in the plateau of productivity and BPMS is dipping into the trough of disillusionment. Jim predicts that BRM will become important (especially in the context of BPM) in 5-10 years, unless some catastrophic event or legislation causes this to accelerate; it is expected to show high benefit, although not necessarily transformational, as BPM is expected to be.

There’s been a lot of acquisition in the rules space, and the number of significant players has dropped from 40+ to about 15 in the past few years. There’s still quite a bit of variability in BRM offerings, however, ranging from the simple routing and branching available within BPMS, to inference engines where rules are fired based on terms and facts either through forward-chaining or backward-chaining, to event-based engines that fire based on the correlation of business and system events. Really, however, the first case is a BPMS, the second is a typical BRMS, and the third is complex event processing, but these boundaries are starting to shift. Rules technology is being seen in BPMS and CEP, but also within application development environments and packaged applications.
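For readers unfamiliar with the inference side, here’s a toy illustration of forward chaining – repeatedly applying rules to a fact base until no new facts fire; the facts and rules are purely hypothetical:

```python
# Toy forward chaining: apply rules to the fact base until nothing new fires
facts = {"customer", "order_over_1000"}  # hypothetical starting facts

# Each rule: if all condition facts are present, assert the conclusion
rules = [
    ({"customer", "order_over_1000"}, "large_order"),
    ({"large_order"}, "manager_review"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes the derived facts large_order and manager_review
```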

He gave an overview of BRMS technology, starting with business rule representation: there’s a whole spectrum of rule representation, ranging from proto-natural languages through to (as yet non-existent) natural language rules. In order to be considered a BRMS (as opposed to just a BRE), a product needs to include a rules repository, modeling and simulation, monitoring and analysis, management and administration, templates, and an integrated development environment, all centered around the rule execution engine.

Combining rules and process is really the sweet spot for both sides: allowing business rules to be externalized from processes so that they can be reused across processes (and other applications), and changed as required by the business, even for in-flight processes. Rules can be used as constraints for unstructured processes, where you don’t need to know ahead of time exactly what the process will look like, but the goals must be achieved – and validated by the rules – before the process instance is completed. The simple routing rules that exist within some BPMSs just aren’t sufficient for this, and most BPM vendors are starting to realize that they either need to build their own BRMS or learn to integrate well with a full-featured BRMS.

He wrapped up with some key takeaways and recommendations: focus on real business rules; learn how BRM can become part of your management practices as well as technology portfolio; marry BPM and BRM, potentially within the same CoE; and see rules and processes as metadata-driven assets.

The Decision Dilemma #brf

The Business Rules Forum has started here in Las Vegas, and I’m here all week giving a presentation in the BPM track, facilitating a workshop and sitting on a panel. James Taylor and Eric Charpentier are also here presenting and blogging, with a focus more purely on rules and decision management; you will want to check out their blogs as well since we’ll likely all be at different sessions. I’m really impressed with what this conference has grown into: attendance is fairly low, as it has been at every conference that I’ve attended this year due to the economy, but there is a great roster of speakers and five concurrent tracks of breakout sessions including the new BPM track. As I’ve been blogging about for a while (as has James), process and rules belong together; this conference is the opportunity to learn about both as well as their overlap.

We kicked off with a welcome from Gladys Lam, followed by a keynote from Jim Sinur on making better decisions in the face of uncertainty. One thing that’s happened during the economic meltdown is that a great deal of uncertainty has been introduced into not just financial markets, but many aspects of how we do business. The result is that business processes need to be more dynamic, and need to be able to respond to emerging patterns rather than the status quo. At the Appian conference last week, Jim spoke about some of their new research on pattern-based strategies, and that’s the heart of what he’s talking about today.

One of the effects of increased connectivity on business is that it speeds the impact of change: as soon as something changes in how business works in one part of the world, it’s everywhere. This makes instability – driven by that change – the normal state rather than an exception. Although he focused on dynamic processes at the Appian conference, here he focused more on the role of rules in dealing with uncertainty, which I think is a valid point since rules and decision management are much of what allow processes to dynamically shift to accommodate changing conditions; although perhaps it is more accurate to consider the role of complex event processing as well. I am, however, left with the impression that Gartner is spinning pattern-based strategy onto pretty much every technology and special interest group.

The discussion about pattern-based strategies was the same as last week (and the same, I take it, as at the recent Gartner IT Expo where this was introduced), covering the cycle of seek, model and adapt, as well as the four disciplines of pattern seeking, performance-driven culture, optempo advantage and transparency.

There’s lots of Twitter activity about the conference, and it’s especially interesting to see reactions from other analysts such as Mike Gualtieri of Forrester.

Process Design Slam 2009 – The Final Judgement #SAPTechEd09 #BPXslam09

To wrap up the proceedings from last night, I was asked to critique the efforts of the groups and pick a winner: as it turned out, I was the only judge. Each of the groups did great work, and I want to call out some of the specific efforts:

  • The Business Use Case group had a great written story, including a lot of cultural and social background for our fictional city in order to provide context for the implementation.
  • The BPM Methodologies group had excellent documentation on the wiki, including graphics and charts to make it clear how the methodologies fit with the other groups.
  • The Business Rules group were stars at collaboration with the other groups, in part because everyone quickly realized the importance of business rules to data, UI and process, and solicited their input.
  • The UI and Dashboards group created mockups of monitoring dashboards that provide a starting point for future design slam work.
  • The Collaborative Modeling group led at international collaboration, using Gravity (process modeling within Google Wave) interactively with team members in Europe during the session, and produced a business process model.
  • The Service Implementation group also kicked off implementation, creating a service orchestration process model as a starting point.

In general, everyone seemed to have a good understanding of the importance of data, rules and process, but there could have been better cross-pollination between the groups; in future design slams, that could be helped by requiring some group members to move partway through the evening in order to ensure that there is a better understanding on both sides, something that is fairly common in real-life businesses where people are seconded from one department to another for part of a project. Although a certain amount of collaboration did occur, that was one area that required more work. I saw one tweet that referred to the design slam as crowdsourced rather than collaborative, although I’m not sure that I would say that: crowdsourcing usually has more of a flavor of individuals contributing in order to achieve their own goals, whereas this was a collaboration with common goals. However, those goals were a bit fragmented by group.

Another issue that I had was the lack of an architectural view of process design: although all of the groups are contributing to a common process (or set of processes), there is little thought around the transformations required to move the process list developed by the Business Use Case group to the process model developed by the Collaborative Modeling group to the process design developed by the Service Implementation group. In enterprise architecture terms, this is a case of transforming models from one layer to another within the process column of the architecture (column 2 if you’re a Zachman fan); understanding these transformations is key so that you don’t reinvent the process at each layer. One of the goals of model-driven design is that you don’t do a business-level process model, then redraw it in another tool; instead, the business-level process model can be augmented with service-level information to become an executable process without recreating the model in another tool. In reality, that often doesn’t happen, and the business analyst draws a process in one tool (such as Visio, or in the case of the design slam, Gravity), then IT redraws it in a tool that will create an executable process (NetWeaver in this case). I have a couple of suggestions here:

  • Combine the Business Use Case and Collaborative Modeling groups into a single group, since they are both doing high-level business analysis. This would allow the process list to be directly modeled in the same group without hand-off of information.
  • Reconsider the use of tools. Although I have a great deal of appreciation for Gravity (I am, after all, a geek), the fact that it does not share a model with the execution environment is problematic since the two groups creating process models were really off doing their own thing using different tools. Consider using NetWeaver 7.2, which has a business analyst perspective in the process composer, and having the business use case/collaborative modeling group create their initial non-technical models in that environment, then allow the service implementation team to add the technical underpinnings. The cool Wave collaboration won’t be there, or maybe only as an initial sketching tool, but the link will be made between the business process models and the executable models.

When it came down to a decision, my choice of the winner was more a product of the early state of the design slam rather than the efforts or skills of the group: I suspect that my view would change if I were judging in Vienna or Bangalore when the process is further along. I selected the Business Use Case group as the winner at this point based on the four judging criteria: although they failed to include alternative media, their story was clear and well-written, it fit well with the other groups’ efforts, and they used good social and collaborative methods within their group for driving out the initial solutions.

The winning team was made up of Greg Chase, Ulrich Scholl and Claus von Riegen, all of SAP, with input from a few others as subject-matter experts on public utilities and electricity production; the team started the discussions on pricing plans that ended up driving much of the Business Rules group’s work. Ulrich also has solar cells on his house that connect to the grid, so he has in-depth knowledge of the issues involved with micro-generation, and was very helpful at determining the roles involved and how people could take on multiple roles. They leveraged a lot of the content that was already on the wiki, especially references to communities with experience in micro-generation and virtual power plants. Besides this initial leg up on their work, they were forced to work fast to produce the initial use cases and processes, since those provided necessary input for the other groups to get started with their work; this left them with more of the evening to write a great story around the use case (but, apparently, not enough time to add any graphics or multimedia).

There was a huge amount of effort put into the design slam, both in the preceding weeks through conference calls and content added to the wiki, and at the session last night in Phoenix. I believe that a huge amount of groundwork has been laid for the design slams upcoming in Vienna and Bangalore, including a process model, service orchestration diagrams, business rules decision tables, and monitoring dashboard mockups.

I had a great time last night, and would happily participate in a future process design slam.

Process Design Slam 2009 #SAPTechEd09 #BPXslam09

8pm

We’re just getting started with the Process Design Slam: one of the face-to-face sessions that make up the collaborative design process that started a couple of months ago on the Design Slam wiki. Marilyn Pratt has identified the six groups that will each work on their part of the design, collaborating between groups (a.k.a. poaching talent) as required, and even bringing in people from the Hacker Night and Business Objects events going on in the same area.

  • Business Use Case, led by Greg Chase
  • Collaborative Modeling, led by David Herrema
  • Business Rules, led by James Taylor
  • Service Implementation, led by John Harrikey
  • BPM Methodologies, led by Ann Rosenberg
  • UI and Dashboards, led by Michelle Crapo

Right now, everyone has formed into initial groups based on their interests, and is having some initial discussions before the food and beer arrives at 8:30. Since there was an initial story and process model developed by the online community, everyone is starting at something close to a common point. Participants within a group (and even the leaders) could change throughout the evening.

By the end of the night, each team will have created a story about their work and will give a 5-minute presentation on it. The story must include additional media such as video and images, and in addition to the presentation, it must be documented on the wiki. Each story must also be related to the output of the other teams – requiring some amount of collaboration throughout the evening – and include pointers on what worked and didn’t work about their process, and what they would do differently in the future.

At that point, the judging panel, which includes me plus Marc Rosson, Uli Scholl, Ann Rosenberg and Dick Hirsch, will render our judgment on the creations of the groups based on the following criteria:

  • Clarity and completeness of the story on the wiki, particularly if it could be understood without the presentation.
  • Creative use of media.
  • How well this story ties into the overall storyline of the night.
  • The social process that was used to create the story.

I’m floating around between groups to listen in on what they’re doing and some of their initial thoughts.

8:30pm

Beer o’clock. The Business Rules team is still deep in conversation, however, and Business Use Case comes over to them to ask for help in bringing the business rules and business use case together. Business Use Case outlines the actors and the high-level business processes that they have identified, in addition to the initial business process of bringing new consumer-producers online.

9pm

BPM Methodologies has a much wider view than just this project: developing methodologies that can be used across (SAP) BPM projects, including assessing the business process maturity of an organization in order to determine where they need to start, and identifying the design roles. In the context of the design slam, they will be helping to coordinate movement of people between the teams in order to achieve the overall goals.

9:30pm

Service Implementation – viewed by groups such as Business Use Case as “the implementers” – have revised the original process map from a service standpoint; looking at the services that were required led to a process redesign. They are using the Composite Designer to model the service orchestration, including the interfaces to the services that they need and external services such as FirstLook, a wind assessment service based on location data. In their service orchestration process, they assume that the process is initiated with the data gathered from a user interface form, and they focus primarily on the automated process steps. Ginger Gatling doesn’t let me leave the table until I tell them what they have to do to win; I advise them to update the wiki with their story.

9:50pm

The Collaborative Modeling group is modeling the business process using Gravity, online with a couple of participants in Europe. This is a process model from a business standpoint, not an executable model; there is no concept of the linkage between this and what is being done by the Service Implementation team. I suggest that they should head over there to compare processes, since these should (at some level) just be different perspectives on the same process.

10pm

Business Use Case is identifying the necessary processes based on their earlier collaboration with Business Rules: this has given them a good understanding of business case, goals and incentives. They’re considering both human and automated usages, and have fed their results to the UI, Business Rules and Collaborative Modeling teams.

10:10pm

Business Rules states that they’ve had to gather information from numerous sources, and the challenge is to sequence it properly: data is captured by the UI, but is driven by the Business Use Case. They didn’t work with the Collaborative Modeling group directly, but there are links between what they do and what’s happening in the process. They’re also interested in using historical usage data to determine when to switch consumers between usage plans.

10:20pm

UI and Dashboards managed to recruit a developer who is actually coding some of their interfaces; they were visited by many of the other groups to discuss the UI aspects, since the data gathered by the UI drives the rest of the process and rules, and the data generated by the process drives the dashboard interfaces. They feel that they had the best job since they could just be consumers and visualize the solutions that they would like to have.

10:35pm

Presentations start. Marilyn Pratt is being the MC, and Greg Chase is wrangling the wiki to show what has been documented by each of the groups. Half of the Service Implementation team just bailed out. I have to start paying attention now. Checking out the wiki pages and summarizing the presentations:

  • Business Use Case worked with the UI, Collaborative Modeling and Business Rules teams, since those teams required the business use cases in order to start their work. They developed a good written story including cultural/social background about the fictional city where the power generation plan would go into effect. They defined the roles that would be involved (where one person could take on more than one role, such as a consumer that is also a producer), and the processes that are required in order to handle all of the use cases. They did not use any presentation/documentation media besides plain text.
  • BPM Methodologies had excellent documentation with the use of graphics and tables to illustrate their points, but this was a quite general methodology, not just specific to this evening’s activities. They worked briefly with the other groups and created a chart of the activities that each of these groups would do relative to the different phases in the methodology. I found the methodology a bit too waterfall-like, and not necessarily a good fit with the more agile collaborative methods needed in today’s BPM.
  • Business Rules focused on the rules related to signing up a new user with the correct pricing plan, documenting the data that must be collected and an initial decision table used to select a plan, although with no graphics or other non-text media (a hypothetical sketch of such a table follows this list). They worked with the Business Use Case team and the UI team to drive the underlying business use cases and data collection.
  • UI and Dashboards created the initial mockups that can be used as a starting point for the design slam in Vienna in a couple of weeks. They worked with Business Rules and Business Use Case in order to nail down the required user data inputs and what is required for monitoring purposes, and included some great graphics of the monitoring dashboards (although not the data collection form).
  • Collaborative Modeling used Gravity (process modeling in Google Wave) not just for modeling with the group around the table, but also with participants in Germany and the Netherlands. They included photos of the team as well as screen snaps of the Gravity Wave that they created, although the text of the story documented on the wiki isn’t really understandable on its own. I’m not sure that they spent enough time with other groups, especially the Service Implementation group.
  • Service Implementation talked to the Business Rules and UI teams to discuss rules and data, but felt that they were running blind since there wasn’t enough of the up-front work done for them to do any substantial work. They used placeholders for a lot of the things that they didn’t know yet, and modeled the service orchestration. The documentation in the wiki is very rudimentary, although it includes the process map that they developed; it’s not clear, however, how the process model developed in Collaborative Modeling relates to their map.
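The Business Rules group’s actual decision table wasn’t reproduced in any detail on the wiki, but a hypothetical sketch shows the shape of such a plan-selection table; the plan names and thresholds here are my own invention:

```python
import math

# Hypothetical decision table: rows are evaluated top-down, first match wins
PLAN_TABLE = [
    # (is_producer, max_monthly_kwh, plan)
    (True,  math.inf, "net-metering"),
    (False, 500,      "basic"),
    (False, math.inf, "high-usage"),
]

def select_plan(is_producer, monthly_kwh):
    """Return the plan from the first matching row, decision-table style."""
    for producer, ceiling, plan in PLAN_TABLE:
        if is_producer == producer and monthly_kwh <= ceiling:
            return plan
    raise ValueError("no matching plan")

print(select_plan(False, 320))  # basic
print(select_plan(True, 900))   # net-metering
```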

11:30pm

And now, on to the judging – I’ll write up the critique and results in a later post.

NetWeaver BPM update #SAPTechEd09

Wolfgang Hilpert and Thomas Volmering gave us an update on NetWeaver BPM, since I was last updated at SAPPHIRE when they were releasing the product to full general availability. They’re readying the next wave of BPM – NetWeaver 7.2 – with beta customers now, for ramp-up near the beginning of the year and GA in spring of 2010.

There are a number of enhancements in this version, based on increasing productivity and incorporating feedback from customers:

  • Creating user interfaces: instead of just Web DynPro for manual creation of UI using code, they can auto-generate a UI for a human-facing task step.
  • New functions in notifications.
  • Handling intermediate events for asynchronous interfaces with other systems and services.
  • More complete coverage of BPMN in terms of looping, boundary events, exception handling and other constructs.
  • Allowing a process participant to invite other people on their team to participate in a task, even if not defined in the process model (ad hoc collaboration at a step).
  • The addition of a reporting activity to the process model, which helps merge the process instance data and the process flow data to make them available for in-process analytics using a tool such as BusinessObjects: the reporting activity takes a snapshot of the process instance data into the reporting database at that point in the process, without having to call APIs (sketched after this list).
  • Deeper integration with other SAP business services, making it easier to discover and consume those services directly within the NetWeaver Process Composer even if the customer hasn’t upgraded to a version of SAP ERP that has SOA capabilities.
  • Better integration of the rules management (the former Yasu product) to match the NetWeaver UI paradigms, expose more of the functionality in the Composer and allow better use of rules flow for defining rules as well as rules testing.
  • Business analyst perspective in process modeler so that the BA can sketch out a model, then allow a developer to do more of the technical underpinnings; this uses a shared model so that the BA can return to make modifications to the process model at a later time.
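To illustrate the reporting-activity idea from the list above – this is not NetWeaver’s actual API, just a hypothetical sketch of the pattern of snapshotting instance data at a point in the flow:

```python
import copy
import datetime

reporting_db = []  # stands in for the reporting database feeding analytics

def reporting_activity(instance_data, process_id, step_name):
    """Snapshot the process instance data at this point in the flow.

    Downstream analytics see the data as it was at this step, merged with
    flow context, without the process having to call any reporting APIs.
    """
    reporting_db.append({
        "process_id": process_id,
        "step": step_name,
        "captured_at": datetime.datetime.utcnow().isoformat(),
        "data": copy.deepcopy(instance_data),
    })

# Example: a snapshot taken between two process steps
reporting_activity({"order_total": 129.95, "region": "EMEA"}, "P-1001", "after-approval")
print(reporting_db[0]["step"])
```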

I’d like to see more about the ad hoc runtime collaboration at a task (being able to invite team members to participate in a task) as well as the BA perspective in the process modeler and the auto-generation of user interfaces; I’m sure that there’s a 7.2 demo in my future sometime soon.

They also talked briefly about plans for post-7.2:

  • Gravity and similar concepts for collaborative process modeling.
  • Common process model to allow for modeling of the touchpoints of ERP processes in BPM, in order to leverage their natural advantage of direct access to SAP business applications.
  • Push further into the business through more comprehensive business-focused modeling tools.
  • Goal-driven processes where the entire structure of the process model is not defined at design time, only the goals.

In the future, there will continue to be a focus on productivity with the BPM tools, greater evolution of the common process model, and better use of BI and analytics as the BusinessObjects assets are leveraged in the context of BPM.

Advanced decisioning #GartnerBPM

I managed to get out of bed and down to the conference in time for James Taylor’s 7am presentation on advanced decisioning. If you’ve been reading here for a while, you know that I’m a big proponent of using decisioning in the context of processes, and James sums up the reasons why: it makes your processes simpler, smarter and more agile.

Simpler: If you build all of your rules and decisioning logic within your processes – essentially turning your process map into a decision tree – then your processes will very quickly become completely unreadable. Separating decisions from the process map, allowing them to become the driver for the process or available at specific points within the process, makes the process itself simpler.
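A before-and-after sketch of this point, with hypothetical claims-routing rules: in the first version the decision logic is woven into the process as branches; in the second, the process is reduced to a single decision call, with the rules behind an interface that can change independently:

```python
# Before: decision logic woven into the process flow as branches
def claims_process_entangled(claim):
    if claim["amount"] < 1000 and claim["type"] == "auto":
        step = "auto_approve"
    elif claim["amount"] < 5000 and claim["years_as_customer"] > 5:
        step = "fast_track"
    else:
        step = "standard_review"
    return step  # every rule change means redrawing the process map

# After: the same rules live behind one decision point, managed separately
def routing_decision(claim):
    """Stand-in for a call to an externalized decision service."""
    if claim["amount"] < 1000 and claim["type"] == "auto":
        return "auto_approve"
    if claim["amount"] < 5000 and claim["years_as_customer"] > 5:
        return "fast_track"
    return "standard_review"

def claims_process(claim):
    return routing_decision(claim)  # the process map is now a single call

print(claims_process({"amount": 800, "type": "auto", "years_as_customer": 2}))
```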

More agile: If you don’t put your decisioning in your processes, then you may have written it in code, either in legacy systems or in new code that you create just to support these decisions. In other words, you tried to write your own decisioning system in some format, but probably created something that’s much harder to change than if you were using a rules management system to build your decisions. Furthermore, decisions typically change more frequently than processes; consider a process like insurance underwriting, where the basic flow rarely changes, but the rules that are applied and the decisions made at each step may change frequently due to company policy or regulatory changes. Using decision management not only allows for easier modification of the rules and decisions, it also allows these to be changed without changing the processes. This is key, since many BPMSs don’t allow processes that are already in progress to be easily changed: that nice graphical process modeler that they show you will apply changes to the process model only for process instances created after that point, without impacting in-flight instances. If a decision management system is called at specific points in a process, it will use the correct version of the rules and decisions at that point in time, not the point at which the process was instantiated.
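That versioning behavior is worth a sketch: the decision service resolves whichever rule version is effective when it’s called, not when the process instance started. All dates, rules and names here are hypothetical:

```python
import datetime

# Hypothetical rule versions with effective dates; later entries tighten policy
RULE_VERSIONS = [
    (datetime.date(2009, 1, 1), lambda amount: amount < 1000),
    (datetime.date(2009, 7, 1), lambda amount: amount < 750),
]

def decide_auto_approve(amount, on_date):
    """Apply the rule version in effect on the date of the call."""
    effective_date, rule = max(
        (v for v in RULE_VERSIONS if v[0] <= on_date), key=lambda v: v[0])
    return rule(amount)

# A process instance started in June still gets the July rules when it
# reaches this decision point in August:
print(decide_auto_approve(900, datetime.date(2009, 6, 15)))  # True
print(decide_auto_approve(900, datetime.date(2009, 8, 15)))  # False
```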

Smarter: This is where analytics comes into play, with knowledge about processes fed into the decisioning in order to make better decisions in an automated fashion. Having more information about your processes increases the likelihood that you can implement straight-through processes with no human intervention. This is not just about automating decisions based on some initial data: it’s using the analytics that you continue to gather about the processes to feed into those decisions in order to constantly improve them. In other words, apply analytics to make decisions smarter and make more automated decisions.
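A minimal sketch of that feedback loop, assuming a hypothetical auto-approval threshold recalibrated from outcomes gathered by the process:

```python
approval_threshold = 0.5  # hypothetical risk-score cutoff for auto-approval

def auto_approve(risk_score):
    return risk_score < approval_threshold

def recalibrate(outcomes, target_default_rate=0.02):
    """Nudge the cutoff using defaults observed among auto-approved cases.

    outcomes: list of (risk_score, defaulted) pairs gathered from the process.
    """
    global approval_threshold
    approved = [(s, d) for s, d in outcomes if s < approval_threshold]
    if not approved:
        return
    default_rate = sum(d for _, d in approved) / len(approved)
    # Tighten the rule when defaults run hot; loosen it when they run cold
    approval_threshold *= 0.95 if default_rate > target_default_rate else 1.02

recalibrate([(0.3, False), (0.45, True), (0.2, False), (0.4, False)])
print(approval_threshold)
```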

To wrap up, James’ five core principles of decisioning:

  • Identify, separate and manage decisions
  • Use business rules to define decisions
  • Analytics to make decisions smarter
  • No answer is static
  • Decision-making is a process

He then walked through the steps to apply advanced decisioning, starting with identifying and automating the current manual decisions in the process, then applying analytics to constantly optimize those decisions.

He closed with an action plan for moving to decisioning:

  • Identify your decisions
  • Adopt decisioning technology
  • Think about decisions and processes, and how those can be managed as separate entities

Good presentation as always – well worth getting up early.