IBM Vision for BPM, ODM and SOA

Opening day at IBM Impact 2012 (there were some sessions yesterday, but today is the real start), and a good keynote focused on innovation. The wifi is appalling – if IBM can’t get this right with their messages about scalability, who can? – so not sure if I’ll have the chance to post any of this throughout the day, or if you’ll get it all when I get back to my hotel room.

This post is based on a pre-conference briefing that I had a week or two ago, a regular conference breakout session this morning, and the analyst briefing this afternoon, covering IBM’s vision for BPM, ODM (decision management) and SOA. Their customers are using technology to drive process innovation, and the IBM portfolio is working to address those needs. Cross-functional business outcomes, which in turn require cross-functional processes, are enabled by collaboration and by better technical integration across silos. And, not surprisingly, their message is moving towards Gartner’s upcoming iBPMS vision: support for structured and unstructured processes; flexible integration; and rules and analytics for repeatable, flexible decisions. Visibility, collaboration and governance are key, not just within departmental processes, but when linking together all processes in an organization into an enterprise process architecture.

The key capabilities that they offer to help clients achieve process innovation include:

  • Process discovery and design (Blueworks Live)
  • Business process management (Process Server and Process Center)
  • Operational decision management (Decision Server and Decision Center)
  • Advanced case management (Case Manager, the FileNet-based offering that is not part of this portfolio, but is integrated with it)
  • Business monitoring (Business Monitor)

Underpinning these are master data management, integration, analytics and enterprise content management, surrounded by industry expertise and solutions. IBM is using the term intelligent business operations (which was front and center at Gartner BPM last week) to describe the platform of process, events and decision, plus appropriate user interfaces for visibility and governance.

Blueworks Live is positioned not just as a front-end design tool for process automation, but as a tool for documenting processes. Many of the 300,000 processes that have been documented in Blueworks Live are never automated in IBM BPM or any other “real” BPMS; rather, it acts as a repository for discovering and documenting processes in a collaborative environment, and allows process stakeholders to track changes to processes and see how those changes impact their business. There is an expanded library of templates, plus an insurance framework and other templates/frameworks coming up.

One exciting new feature (okay, exciting to me) is that Blueworks Live now allows decision tasks to be defined in process models, including the creation of decision tables: this provides an integrated process/decision discovery environment. As with process, these decisions do not need to become automated in a decision management system; this may just document the business rules and decisions as they are applied in manual processes or other systems.
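To make the decision-table idea concrete, here is a minimal sketch (in Python, with entirely invented rules and field names) of the kind of table that might be documented for a manual decision, whether or not it is ever automated in a decision management system:

```python
# A documented decision table: each row pairs conditions on the case data with an
# outcome. The rules and field names are invented for illustration only.
DISCOUNT_DECISION = [
    (lambda c: c["segment"] == "gold" and c["order_total"] >= 1000, "15% discount"),
    (lambda c: c["segment"] == "gold",                              "10% discount"),
    (lambda c: c["order_total"] >= 5000,                            "5% discount"),
    (lambda c: True,                                                "no discount"),  # default row
]

def decide(case):
    """Return the outcome of the first matching row, as a clerk reading the table would."""
    for condition, outcome in DISCOUNT_DECISION:
        if condition(case):
            return outcome

print(decide({"segment": "gold", "order_total": 1200}))  # -> 15% discount
```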

Looking at IBM BPM v8, which is coming up soon, Ottosson took us through the main features:

  • Social collaboration to allow users to work together on tasks via real-time interactions, view activity streams, and locate experts. That manifests in the redesigned task interface, or “coach”, with a sidebar that includes task details, the activity stream for the entire process, and experts that are either recommended by the system based on past performance or identified by others through manual curation. Experts can be asked to collaborate on a task with another user – it includes presence, so that you can tell who is online at any given time – allowing the expert to view the work that the user is doing and offer assistance. Effectively, multiple people are given access to the same piece of work, and updates made by anyone are shown to all participants; this can be asynchronous or synchronous.
  • There is also a redesigned inbox UI, with a more up-to-date look and feel with lots of AJAX-y goodness, sorting and coloring by priority, plus the ability to respond to simple tasks inline directly in the inbox rather than opening a separate task view. It provides a single task inbox for a variety of sources, including IBM BPM, Blueworks workflows and Case Manager tasks.
  • Situational awareness with process monitoring and analysis in a performance data warehouse.
  • Mobile access via an iOS application that can interface with Blueworks Live and IBM BPM; if you search for “IBM BPM” in the iTunes app store (but not, unfortunately, in the Android Market), you’ll find it. It supports viewing the task list, completing tasks, attaching documents and adding comments. They are considering releasing the source code to allow developers to use it as a template, since there is likely to be demand for customized or branded versions of it. In conjunction with this, they’ve released a REST API tester similar to the sort of sandbox offered by Google, which allows developers to create REST-based applications (mobile or otherwise) without having to own the entire back-end platform; see the sketch after this list. This will certainly open up the add-on BPM application market to smaller developers, where we are likely to see more innovation.
  • Enhancements to Process Center for federation of different Process Centers, each of which implies a different server instance. This allows departmental instances to share assets, as well as draw from an internal center of excellence plus one hosted by IBM for industry standards and best practices.
  • Support for the CMIS standard to link to any standard ECM repository, as well as direct integration to FileNet ECM, to link documents directly into processes through a drag-and-drop interface in the process designer.
  • There are also some improvements to the mashup tool used for forms design using a variety of integration methods, which I saw in a pre-conference briefing last week. This uses some of the resources from IBM Mashup Centre development team, but the tool was built new within IBM BPM.
  • Cloud support through IBM SmartCloud, which appears to be more of a managed server environment if you want full IBM BPM, but does offer BPM Express as a pre-installed cloud offering. At last year’s Impact, their story was that they were not doing BPM (that is, execution, not the Blueworks-type modeling and lightweight workflow) in the cloud since their customers weren’t interested in that; at the time, I said that they needed to rethink their strategy and stop offering expensive custom hosted solutions. They’ve taken a small step by offering a pre-installed version of BPM Express, but I still think this needs to advance further.
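As a rough illustration of what a REST-based add-on might look like, here is a sketch of fetching and completing tasks over HTTP; the endpoint paths, parameters and credentials are placeholders rather than the actual IBM BPM REST resource names, which vary by version:

```python
import requests

# Hypothetical endpoint and credentials: the real IBM BPM REST resource names and
# parameters vary by version, so treat these URLs as placeholders only.
BASE = "https://bpm.example.com/rest/bpm/wle/v1"
AUTH = ("task.user", "secret")

# Fetch the caller's open tasks -- the kind of call a mobile app or custom inbox would make.
resp = requests.get(f"{BASE}/tasks", params={"status": "Received,InProgress"},
                    auth=AUTH, timeout=10)
resp.raise_for_status()
for task in resp.json().get("tasks", []):
    print(task.get("id"), task.get("subject"), task.get("dueDate"))

# Complete a simple task inline, as the redesigned inbox allows.
task_id = "12345"  # placeholder
requests.post(f"{BASE}/tasks/{task_id}/complete",
              json={"comment": "Approved from mobile"}, auth=AUTH, timeout=10)
```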

WebSphere Operational Decision Management (ODM) is an integration/bundling of WebSphere Business Event Manager and ILOG, bringing together events and rules into a single decision management platform for creating policies and deploying decision services. It has a number of new features:

  • Social interface for business people to interact with rules design: decisions are assets that are managed and modified, and the event stream/conversation shows how those assets are being managed. This interface makes it possible to subscribe to changes on specific rules.
  • Full text searching across rules, rule flows, decision tables and folders within a project, with filtering by type, status and date.
  • Improved decision table interface, making it easier to see what a specific table is doing.
  • Track rule versions through a timeline (weirdly reminiscent of Facebook’s Timeline), including snapshots that provide a view of rules at a specific point in time.
  • Any rule can emit an event to be consumed/managed by the event execution engine; conversely, events can invoke rulesets. This close integration of the two engines within ODM (rules and events) is a natural fit for agile and rapid automated decisions.
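The rules-emit-events / events-invoke-rulesets loop is easier to picture with a toy example. The following sketch is plain Python with invented logic, not the ODM APIs; it just shows a ruleset being invoked by one event and emitting another:

```python
from collections import defaultdict

subscribers = defaultdict(list)            # event type -> rulesets/handlers to invoke

def emit(event_type, payload):
    """Rules emit events; the event side routes them to any subscribed rulesets."""
    for handler in subscribers[event_type]:
        handler(payload)

def fraud_ruleset(payload):
    """A ruleset invoked when a 'large_withdrawal' event arrives (invented logic)."""
    if payload["amount"] > 10_000 and payload["country"] != payload["home_country"]:
        emit("fraud_alert", payload)       # a rule can in turn emit another event

def alert_handler(payload):
    print("ALERT: review account", payload["account"])

subscribers["large_withdrawal"].append(fraud_ruleset)
subscribers["fraud_alert"].append(alert_handler)

emit("large_withdrawal",
     {"account": "A-17", "amount": 25_000, "country": "BR", "home_country": "CA"})
```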

There’s also zOS news: IBM BPM v8 will run on zOS (not sure if that includes all server components), and the ODM support for zOS is improved, including COBOL support in rules. It would be interesting to see the cost relative to other server platforms, and the compelling reasons to deploy on zOS versus those other platforms, which I assume are mostly around integrating with other zOS applications for better runtime performance.

Since last year’s big announcement about bringing the platforms together, they appear to have been working on integration and design, putting a more consistent and seamless user interface on the portfolio as well as enhancing the capabilities. One of the other analysts (who will remain nameless unless he chooses to identify himself) pointed out that a lot of this is not all that innovative relative to market leaders – he characterized the activity stream social interface as being like Appian Tempo three years ago, and some of the functionality as just repackaged Lombardi – but I don’t think that it’s necessarily IBM’s role to be at the very forefront of technology innovation in application software. By being (fairly) fast followers, they have the effect of validating the market for the new features, such as mobile and social, and introducing their more conservative customer base to what might seem like pretty scary concepts.

NSERC BI Network at CASCON2011 (Part 2)

The second half of the workshop started with Renée Miller from University of Toronto digging into the deeper database levels of BI, and the evolving role of schema from a prescriptive role (time-invariant, used to ensure data consistency) to a descriptive role (describe/understand data, capture business knowledge). In the old world, a schema was meant to reduce redundancy (via Boyce-Codd normal form), whereas the new world schema is used to understand data, and the schema may evolve. There are a lot of reasons why data can be “dirty” – my other half, who does data warehouse/BI for a living, is often telling me about how web developers create their operational database models mostly by accident, then don’t constrain data values at the UI – but the fact remains that no matter how clean you try to make it, there are always going to be operational data stores with data that needs some sort of cleansing before effective BI. In some cases, rules can be used to maintain data consistency, especially where those rules are context-dependent. In cases where the constraints are inconsistent with the existing data (besides asking the question of how that came to be), you can either repair the data, or discover new constraints from the data and repair the constraints. Some human judgment may be involved in determining whether the data or the constraint requires repair, although statistical models can be used to understand when a constraint is likely invalid and requires repair based on data semantics. In large enterprise databases as well as web databases, this sort of schema management and discovery could be used to identify and leverage redundancy in data to discover metadata such as rules and constraints, which in turn could be used to modify the data in classic data repair scenarios, or modify the schema to adjust for a changing reality.
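A toy example of the data-versus-constraint repair decision: given a candidate functional dependency (postal code determines city), a crude violation-rate threshold can suggest whether to repair the data toward the majority value or to revisit the constraint itself. The data, threshold and decision rule below are invented for illustration, not taken from the talk:

```python
from collections import Counter, defaultdict

# Toy data with a candidate constraint: postal_code -> city (a functional dependency).
rows = [
    {"postal_code": "M5V", "city": "Toronto"},
    {"postal_code": "M5V", "city": "Toronto"},
    {"postal_code": "M5V", "city": "Toronto"},
    {"postal_code": "M5V", "city": "Torronto"},   # probably dirty data
    {"postal_code": "K1A", "city": "Ottawa"},
    {"postal_code": "K1A", "city": "Ottawa"},
    {"postal_code": "K1A", "city": "Gatineau"},
    {"postal_code": "K1A", "city": "Gatineau"},   # maybe the constraint itself is wrong here
]

by_key = defaultdict(Counter)
for r in rows:
    by_key[r["postal_code"]][r["city"]] += 1

# Crude decision rule (illustrative threshold): rare violations -> repair the data toward
# the majority value; frequent violations -> the constraint is probably wrong for this data.
for key, counts in by_key.items():
    majority, majority_n = counts.most_common(1)[0]
    violation_rate = 1 - majority_n / sum(counts.values())
    action = f"repair data -> {majority}" if violation_rate < 0.3 else "revisit constraint"
    print(key, dict(counts), f"violations {violation_rate:.0%}:", action)
```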

Sheila McIlraith from University of Toronto presented on a use-centric model of data for customizing and constraining processes. I spoke last week at Building Business Capability on some of the links between data and processes, and McIlraith characterized processes as a purposeful view of data: processes provide a view of the data, and impose policies on data relative to some metrics. Processes are also, as she pointed out, a delivery vehicle for BI – from a BPM standpoint, this is a bit of a trivial publishing process – to ensure that the right data gets to the right stakeholder. The objective of her research is to develop a business process modeling formalism that treats data and processes as first-class citizens, and supports specification of abstract (ad hoc) business processes while allowing the specification of stakeholder policies, preferences and priorities. Sounds like data+process+rules to me. The approach is to specify processes as flexible templates, with policies as further constraints; although she represents this as allowing for customizable processes, it really just appears to be a few pre-defined variations on a process model with a strong reliance on rules (in linear temporal logic) for policy enforcement, not full dynamic process definition.

Lastly, we heard from Rock Leung from SAP’s academic research center and Stephan Jou from IBM CAS on industry challenges: SAP and IBM are industry partners to the NSERC Business Intelligence Network. They listed 10 industry challenges for BI, but focused on big data, mobility, consumable analytics, and geospatial and temporal analytics.

  • Big data: Issues focus on volume of data, variety of information and sources, and velocity of decision-making. Watson has raised expectations about what can be done with big data, but there are challenges on how to model, navigate, analyze and visualize it.
  • Consumable analytics: There is a need to increase usability and offer new interactions, making the analytics consumable by everyone – not just statistical wizards – on every type of device.
  • Mobility: Since users need to be connected anywhere, there is a need to design for smaller devices (and intermittent connectivity) so that information can be represented effectively and consistently with representations on other devices. Both presenters said that there is nothing that their respective companies are doing where mobile device support is not at least a topic of conversation, if not already a reality.
  • Geospatial and temporal analytics: Geospatial data isn’t just about Google Maps mashups any more: location and time are being used as key constraints in any business analytics, especially when you want to join internal business information with external events.

They touched briefly on social in response to a question (it was on their list of 10, but not the short list), seeing it as a way to make decisions better.

For a workshop on business intelligence, I was surprised at how many of the presentations included aspects of business rules and business process, as well as the expected data and analytics. Maybe I shouldn’t have been surprised, since data, rules and process are tightly tied in most business environments. A fascinating morning, and I’m looking forward to the keynote and other presentations this afternoon.

NSERC BI Network at CASCON2011 (Part 1)

I only have one day to attend CASCON this year due to a busy schedule this week, so I am up in Markham (near the IBM Toronto software lab) to attend the NSERC Business Intelligence Network workshop this morning. CASCON is the conference run by IBM’s Centers for Advanced Studies throughout the world, including the Toronto lab (where CAS originated), as a place for IBM researchers, university researchers and industry to come together to discuss many different areas of technology. Sometimes, this includes BPM-related research, but this year the schedule is a bit light on that; however, the BI workshop promises to provide some good insights into the state of analytics research.

Eric Yu from University of Toronto started the workshop, discussing how BI can enable organizations to become more adaptive. Interestingly, after all the talk about enterprise architecture and business architecture at last week’s Building Business Capability conference, that is the focus of Yu’s presentation, namely, that BI can help enterprises to better adapt and align business architecture and IT architecture. He presented a concept for an adaptive enterprise architecture that is owned by business people, not IT, and geared at achieving measurable business success. He discussed modeling variability at different architectural layers, and the traceability between them, and how making BI an integral part of an organization – not just the IT infrastructure – can support EA adaptability. He finished by talking about maturity models, and how a closed loop deployment of BI technologies can help meet adaptive enterprise requirements. Core to this is the explicit representation of change processes and their relationship to operational processes, as well as linking strategic drivers to specific goals and metrics.

Frank Tompa from University of Waterloo followed with a discussion of mapping policies (from a business model, typically represented as high-level business rules) to constraints (in a data model) so that these can be enforced within applications. My mind immediately went to why you would be mapping these to a database model rather than a rules management system; his view seems to be that a DBMS is what monitors at a transactional level and ensures compliance with the business model (rules). His question: “how do we make the task of database programming easier?” My question: “why aren’t you doing this with a BRMS instead of a DBMS?” Accepting his premise that this should be done by a database programmer, the approach is to start with object definitions, where an object is a row (tuple) defined by a view over a fixed database schema, and represents all of the data required for policy making. Secondly, consider the states that an object can assume by considering that an object x is in state S if its attributes satisfy S(x). An object can be in multiple states at once; the states seem to be more like functions than states, but whatever. Thirdly, the business model has to be converted to an enforcement model through a sort of process model that also includes database states; really more of a state diagram that maps business “states” to database states, with constraints on states and state transitions denoted explicitly. I can see some value in the state transition constraint models in terms of representing some forms of business rules and their temporal relationships, but his representation of a business process as a constraint diagram is not something that a business analyst is ever going to read, much less create. However, the role of the business person seems to be restricted to “policy designer” listing “states of interest”, and the goal of this research is to “form a bridge between the policy manager and the database”. Their future work includes extracting workflows from database transaction logs, which is, of course, something that is well underway in the BPM data mining community. I asked (explicitly to the presenter, not just snarkily here in my blog post) about the role of rules engines: he said that one of the problems was in vocabulary definition, which is often not done in organizations at the policy and rules level; by the time things get to the database, the vocabulary is sufficiently constrained that you can ensure that you’re getting what you need. He did say that if things could be defined in a rules engine using a standardized vocabulary, then some of the rules/constraints could be applied before things reached the database; there does seem to be room for both methods as long as the business rules vocabulary (which does exist) is not well-entrenched.
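The object/state formulation is easier to see in code. Here is a minimal sketch, with invented attributes and rules, of objects as attribute rows, states as predicates S(x), and an explicit constraint on state transitions:

```python
# An "object" is a row of attributes; a state S holds for object x if S(x) is true.
# Names and rules below are invented for illustration, not taken from the talk.
states = {
    "overdrawn":  lambda acct: acct["balance"] < 0,
    "dormant":    lambda acct: acct["days_since_activity"] > 365,
    "high_value": lambda acct: acct["balance"] > 100_000,
}

def current_states(acct):
    """An object can satisfy several state predicates at once."""
    return {name for name, pred in states.items() if pred(acct)}

# Constraints on transitions, denoted explicitly as forbidden (before, after) state pairs.
forbidden = {("high_value", "overdrawn")}

def transition_allowed(before_obj, after_obj):
    before, after = current_states(before_obj), current_states(after_obj)
    return not any((b, a) in forbidden for b in before for a in after)

acct_before = {"balance": 150_000, "days_since_activity": 10}
acct_after = {"balance": -500, "days_since_activity": 10}
print(current_states(acct_before))                   # {'high_value'}
print(transition_allowed(acct_before, acct_after))   # False: this change violates a constraint
```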

Jennifer Horkoff from University of Toronto was up next discussing strategic models for BI. Her research is about moving BI from a technology practice to a decision-making process that starts with strategic concerns, generates BI queries, interprets the results relative to the business goals and decides on necessary actions. She started with the OMG Business Motivation Model (BMM) for building governance models, and extended that to a Business Intelligence Model (BIM), or business schema. The key primitives include goals, situations (can model SWOT), indicators (quantitative measures), influences (relationships) and more. This model can be used at the high-level strategic level, or at a more tactical level that links more directly to activities. There is also the idea of a strategy, which is a collection of processes and quality constraints that fulfill a root-level goal. Reasoning can be done with BIMs, such as determining whether a specific strategy can fulfill a specific goal, and influence diagrams with probabilities on each link can be used to help determine decisions. They are using BIM concepts to model a case study with Rouge Valley Health System to improve patient flow and reduce wait times; results from this will be seen in future research.

Each of these presentations could have filled a much bigger time slot, and I could only capture a flavor of their discussions. If you’re interested in more detail, you can contact the authors directly (links to each above) to get the underlying research papers; I’ve always found researchers to be thrilled that anyone outside the academic community is interested in what they’re doing, and are happy to share.

We’re just at the mid-morning break, but this is getting long so I’ll post this and continue in a second post. Lots of interesting content, and I’m looking forward to the second half.

Agile Predictive Process Platforms for Business Agility with @jameskobielus

James Kobielus of Forrester brought the concepts of predictive analytics to processes to discuss optimizing processes using the Next Best Action (NBA): using analytics and predictive models to figure out what you should do next in a process in order to optimize customer-facing processes.

As we heard in this morning’s keynote, agility is mandatory not just for competitive differentiation, but for basic business survival. This is especially true for customer-facing processes: since customer relationships are fragile and customer satisfaction is dynamic, the processes need to be highly agile. Customer happiness metrics need to be built into process design, since customer (un)happiness can be broadcast via social media in a heartbeat. According to Kobielus, if you have the right data and can analyze it appropriately, you can figure out what a customer needs to experience in order to maximize their satisfaction and maximize your profits.

Business agility is all about converging process, data, rules and analytics. Instead of static business processes, historical business intelligence and business rules silos, we need to have real-time business intelligence, dynamic processes, and advanced analytics and rules that guide and automate processes. It’s all about business processes, but processes infused with agile intelligence. This has become a huge field of study (and implementation) in customer-facing scenarios, where data mining and behavioral studies are used to create predictive models of what the next best action is for a specific customer, given their past behavior as your customer, and even social media sentiment analysis.
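As a rough sketch of how the pieces fit together – a predictive model scoring candidate actions, with business rules constraining what can be recommended – here is a toy next-best-action decision in Python; the offers, scores and rules are all invented for illustration:

```python
# A toy next-best-action decision: score each candidate action with a stand-in
# propensity model, filter by business rules, and pick the best. All offers,
# numbers and rules here are invented for illustration.
OFFERS = ["retention_discount", "upsell_premium", "no_action"]

def propensity(customer, offer):
    """Stand-in for a trained predictive model returning P(accept | customer, offer)."""
    base = {"retention_discount": 0.30, "upsell_premium": 0.15, "no_action": 0.0}[offer]
    if customer["churn_risk"] > 0.7 and offer == "retention_discount":
        base += 0.40
    if customer["sentiment"] < 0 and offer == "upsell_premium":
        base -= 0.10
    return max(base, 0.0)

def eligible(customer, offer):
    """Business rules constrain what the model is allowed to recommend."""
    if offer == "upsell_premium" and customer["months_tenure"] < 6:
        return False
    return True

def next_best_action(customer):
    candidates = [o for o in OFFERS if eligible(customer, o)]
    return max(candidates, key=lambda o: propensity(customer, o))

print(next_best_action({"churn_risk": 0.8, "sentiment": -1, "months_tenure": 3}))
# -> retention_discount
```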

He walked through a number of NBA case studies, including auto-generating offers based on a customer’s portal behavior in retail; tying together multichannel customer communications in telecom; and personalizing cross-channel customer interactions in financial services. These are based on coupling front and back-office processes with predictive analytics and rules, while automating the creation of the predictive models so that they are constantly fine-tuned without human intervention.

TIBCO Product Strategy With Matt Quinn

Matt Quinn, CTO, gave us the product strategy presentation that will be seen in the general session tomorrow. He repeated the “capture many events, store few transactions” message as well as the five key components of a 21st century platform that we heard from Murray Rode in the previous session; this is obviously a big part of the new messaging. He drilled into their four broad areas of interest from a product technology standpoint: event platform innovation, big data and analytics, social networking, and cloud enablement.

In the event platform innovation, they released BusinessEvents 5.0 in April this year, including the embedded TIBCO Datagrid technology, temporal pattern matching, stream processing and rules integration, and some performance and big data optimizations. One result is that application developers are now using BusinessEvents to build applications from the ground up, which is a change in usage patterns. For the future, they’re looking at supporting other models, such as BPMN and rule models, integrating statistical models, improving queries, improving the web design environment, and providing ActiveMatrix deployment options.

In ActiveMatrix, they’ve released a fully integrated stack of BusinessWorks, BPM and ServiceGrid with broader .Net and C++ support, optimized for large deployments and with better high-availability support and hot deployment capabilities. AXM/BPM has a number of new enhancements, mostly around the platform (such as the aforementioned HA and hot deployment), with their upcoming 1.2 release providing some functional enhancements such as customer forms and business rules based on BusinessEvents. We’ll see some Nimbus functionality integration before too much longer, although we didn’t see that roadmap; as Quinn pointed out, they need to be cautious about positioning which tools are for business users versus technical users. When asked about case management, he said that “case management brings us into areas where we haven’t yet gone as a company and aren’t sure that we want to go”. Interesting comment, given the rather wild bandwagon-leaping that has been going on in the ACM market by BPM and ECM vendors.

The MDM suite has also seen some enhancements, with ActiveSpaces integration and collaborative analytics with Spotfire, allowing MDM to become a hub for reference data from the other products. I’m very excited to see that one-click integration between MDM and AMX/BPM is on the roadmap; I think that MDM integration is going to be a huge productivity boost for overall process modeling, and when I reviewed AMX/BPM last year, I liked their process data modeling but stated that “the link between MDM and process instance data needs to be firmly established so that you don’t end up with data definitions within your BPMS that don’t match up with the other data sources in your organization”. In fact, the design-time tool for MDM is now the same as that used for the business object data models that I saw in AMX/BPM, which will make it easier for those who move across the data and process domains.

TIBCO is trying to build out vertical solutions in certain industries, particularly those where they have acquired or built expertise. This not only changes what they can package and offer as products, but changes who at the customer they can have a relationship with: it’s now a VP of loyalty, for example, rather than (or in addition to) someone in IT.

Moving on to big data and analytics technology advances, they have released FTL 2.0 (low-latency messaging) to reduce inter-host latency below 2.2 microseconds as well as provide some user interface enhancements to make it easier to set up the message exchanges. They’re introducing TIBCO Web Messaging to integrate consumer mobile devices with TIBCO messaging. They’ve also introduced a new version of ActiveSpaces in-memory data grid, providing big data handling at in-memory speeds by easing the integration with other tools such as event processing and Spotfire.

They’ve also released Spotfire 4.0 visual analytics, with a big focus on ease of use and dashboarding, plus tibbr integration for social collaboration. In fact, tibbr is being used as a cornerstone for collaboration, with many of the TIBCO products integrating with tibbr for that purpose. In the future, tibbr will include collaborative calendars and events, contextual notifications, and other functionality, plus better usability and speed. Formvine has been integrated with tibbr for forms-based routing, and Nimbus Control integrates with tibbr for lightweight processes.

Quinn finished up discussing their Silver Fabric cloud platform to be announced tomorrow (today, if you count telling a group of tweet-happy industry analysts) for public, private and hybrid cloud deployments.

Obviously, there was a lot more information here than I could possibly capture (or than he could even cover; some of the slides just flew past), and I may have to get out of bed in time for his keynote tomorrow morning since we didn’t even get to a lot of the forward-looking strategy. With a product suite as large as what TIBCO has now, we need much more than an hour to get through an analyst briefing.

Webinar on Process Intelligence and Predictive Analytics

Summer is the time when no one holds conferences, because vacation schedules make it difficult to get the attendance, so webinars tend to expand to fill the gap. I’ll be presenting on another BP Logix webinar on August 10th, discussing process intelligence and predictive analytics; you can register (and see my smiling face in a video) here.

I first presented on the combination of BPM, business rules and business intelligence at Business Rules Forum in 2007.

Near the end of the presentation, I talk about self-learning decisions in processes, where process statistics are captured with business intelligence, analyzed and fed back to business rules, which then modify the behavior of the processes. In the three years since then, technology has advanced significantly: rules are now accepted as a necessary part of BPM, and process intelligence has moved far beyond simple visualizations of process instance data. In the webinar, I’ll be discussing those trends and what they mean for process improvement.
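A toy version of that closed loop, with invented thresholds and field names, might look like the following: process statistics are analyzed (the BI step) and the result is fed back into a rule parameter that changes how future instances are routed.

```python
# A toy closed loop: process statistics are analyzed and fed back into a rule
# parameter that changes routing behavior. Thresholds and fields are invented.
routing_rule = {"auto_approve_limit": 1000}    # the business rule being tuned

def route(claim):
    return "auto" if claim["amount"] <= routing_rule["auto_approve_limit"] else "manual"

def analyze_and_adjust(completed_instances):
    """If too many auto-approved claims were later reversed, tighten the rule;
    if reversals are rare, loosen it to reduce manual work."""
    auto = [c for c in completed_instances if c["route"] == "auto"]
    if not auto:
        return
    reversal_rate = sum(c["reversed"] for c in auto) / len(auto)
    if reversal_rate > 0.05:
        routing_rule["auto_approve_limit"] *= 0.8
    elif reversal_rate < 0.01:
        routing_rule["auto_approve_limit"] *= 1.1

history = [{"route": "auto", "reversed": True}] * 4 + [{"route": "auto", "reversed": False}] * 46
analyze_and_adjust(history)
print(routing_rule)   # limit tightened, since 4/50 = 8% of auto approvals were reversed
```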

Pegasystems SmartBPM V6

I’m wrapping up my coverage of this month’s round of four back-to-back conferences with the product reviews, which typically come from multiple meetings before, during and after a vendor conference, as well as some time spent pondering my notes and screen snapshots.

I had a remote product demo of SmartBPM prior to PegaWORLD, then a briefing from Kerim Akgonul at the conference. A lot of the changes to the product over the past year and a half have been focused on making it easier to use, trying to fight the perception that it’s a great product but that the inherent complexity makes it hard to use. In fact, the two main themes that I saw for this version are that it’s easy to use, and easy to share through design and runtime collaboration.

They started to address the complexity issue and promote the business agility message in Version 5 more than a year ago with a number of new tools:

  • Application Profiler to link requirements directly to developed applications and processes, replacing written specifications; the Application Accelerator then generates an application from this profile, as well as documentation
  • Improved non-technical process mapping with a shared model between business analysts and developers, including having the BA create UI forms
  • Visual Case Manager for mapping data from other systems into a case management application via various shared keys and search terms
  • Internet Application Composer for creating mashups with their own portlets and web components, plus other internal or external web components

Version 6 continues this direction, focusing on deploying solutions faster: a number of new gadgets allow building out the user experience and providing better visibility into what’s happening within business processes, and direct feedback on changes required to processes from participants to developers/analysts puts more ability to change processes into the hands of the business.

They’ve also started to use their own agile-like methodology and tools for internal projects, since the tools provide frameworks for project management, test management and documentation. Not only has this resulted in more rapid development of their own products and better alignment with the product requirements, it has eliminated the monolithic product release cycle in favor of smaller incremental releases that deliver new functionality sooner: they’ve released Pega Chat and other new features as modules without doing a full product release. With 1,200 Pega employees and 200 new ones from their Chordiant acquisition, introducing ways to shorten their product release cycle is an encouraging sign that they’re not letting their increasing size weigh them down in product innovation.

Taking a look at the product, there’s a new Discovery Map view of a process, very similar to what you would see in the outline view of other process discovery tools. The difference from other tools, however, is that this is a directly executable process: a shared model, rather than requiring a transfer to an execution environment (and the problems that come along with round-tripping). That ties in neatly with the “easy to use” theme, along with role-based views, reduced navigation complexity and case manager functionality.

The other theme, “easy to share”, comes out in a number of ways. First of all, there’s a Facebook-style news feed of system-generated and team member alerts that shows who’s working with which processes and the comments that participants have on them, including RSS feeds of the news feed and other sharing options to make it easy for people to consume that information in the format and tool that they choose; I’ve seen this in ARISalign and I suspect it will become standard functionality in most process discovery and design tools. With some of the sharing and bookmarking options, I don’t think that Pega even knows how (or if) customers are going to use them, but realizes that you have to offer the functionality in order to start seeing the emergent usages.

The second collaboration win is direct feedback from the participant of an executing process to the process designer. This is the type of functionality that I commented on a couple of months ago when Google came out with a way to provide feedback directly on their services from within the service (my tweet at the time was “please, please, please can enterprise apps add a feature like this?”). In SmartBPM, a user within an executing process drags a pushpin icon to the location of the problem on the screen, types a note in a popup and adds a category; when they click “Send” on the note, the current state (including a screenshot) is captured, and an item is added to the process designer’s feedback list in their news feed. Hot.

We also reviewed process optimization: optimization criteria are selected from the process attributes, and calculated based on actual process execution. A decision tree/table can be directly generated from the optimization results and added as a rule to the process: effectively, this automates the discovery of business rules for currently manual steps such as assignment, allowing for more process automation.
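To illustrate the general technique (not Pega’s actual implementation), here is a sketch using scikit-learn: a decision tree learned from historical instance data, which can then be read back as an explicit assignment rule. The features, labels and data are invented:

```python
# Illustration of the general technique, not Pega's implementation: learn an
# assignment rule from historical process execution data, then read the tree back
# as an explicit rule that could be reviewed and deployed.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features from past instances: [claim_amount, customer_years]; label: who handled it.
X = [[500, 1], [800, 2], [12000, 1], [15000, 8], [300, 10], [20000, 3], [700, 5], [18000, 9]]
y = ["junior", "junior", "senior", "senior", "junior", "senior", "junior", "senior"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["claim_amount", "customer_years"]))
# Prints something like "claim_amount <= 10000 -> junior / else -> senior", which is
# the kind of rule that could replace a manual assignment step.
```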

The third collaboration-type functionality shown was the ability to spin off ad hoc subprocesses from any point in a structured process: just select “Build a Custom Process” from the action menu on a step to open up a new discovery map for creating the new subprocess, then add steps to create the flow. There’s only an outline view, not flow logic, and the step names are the people to whom each step is assigned: pretty simple, with little or no training required for everyday users.

Later, all custom subprocesses created for a given process can be consolidated and summarized into suggested changes, and directed to a process designer for review; this may result in the original structured process being reworked to include some of the common patterns, or just have them left as ad hoc if they are not frequent enough to justify the changes.

Akgonul sees BPM and CRM converging; that’s certainly the direction that Pega has been taking recently, including (but not limited to) the Chordiant acquisition, and similar opinions are popping up elsewhere. As BPM products continue to turn into application development suites meant for building full enterprise applications, the boundaries start to blur.

One thing that I liked about the remote demo that has nothing to do with the product is that it was hosted on an Amazon EC2 instance: it’s only a short step from an EC2-based demo to providing a preloaded EC2 instance for those of us who like to get our hands on the products but don’t want to handle our own installation. For technical analysts like me, that’s a game-changer for doing product reviews.


As a matter of disclosure, Pega paid my travel expenses to attend their conference, and they are a client of mine for creating webinars, but I am not compensated for writing about them here on my blog.

Impact Keynote: Agility in an Era of Change

Today’s keynote was focused on customers and how they are improving their processes in order to become more agile, reduce costs and become more competitive in the marketplace. After a talk and intro by Carrie Lee, business news correspondent and WSJ columnist, Beth Smith and Shanker Ramamurthy of IBM hosted Richard Ward of Blue Cross Blue Shield of Michigan, Rick Goldgar of the Texas Education Agency and Justin Snoxall of Visa Europe.

The message from yesterday continued: process is king, and is at the heart of any business improvement. This isn’t just traditional structured process management, but social and contextual capabilities, ad hoc and dynamic tasks, and interactions across the business network. As they pointed out, dynamic processes don’t lead to chaos: they deliver consistent outcomes in goal-oriented knowledge work. First of all, there are usually structured portions of any process, whether that forms the overarching framework from which collaborations are launched, or whether structured subprocesses are spawned from an unstructured dynamic process. Secondly, monitoring and controls still exist, like guardrails around your dynamic process to keep it from running off the road.

The Lombardi products are getting top billing again here today, with Blueprint (now IBM BPM Blueprint, which is a bit of a mouthful) positioned as a key collaborative process discovery and modeling tool. There’s not much new in Blueprint since the Lombardi days except for a bit of branding; in other words, it remains a solid and innovative way for geographically (and temporally) separated participants to collaborate on process discovery. Blueprint has far better capabilities than other online process discovery tools, but they are going to need to address the overlap – whether real or perceived – with the free process discovery tools including IBM BlueWorks, ARISalign, InterstageBPM and others.

Smith gave a brief demo of Blueprint, which is probably a first view for many of the people in the audience based on the tweets that I’m seeing. Ramamurthy stepped in to point out that processes are part of your larger business network: that’s the beauty of tools like Blueprint, which allow people in different companies to collaborate on a hosted web application. And since Lombardi has been touting their support of BPMN 2.0 since last September, it’s no surprise that they can exchange process models between Blueprint and process execution engines – not the full advantages of a completely model-driven environment with a shared repository, but a reasonable bridge between a hosted modeling tool and an on-premise execution tool.

As you get into demanding transaction processing applications, however, Smith discussed WebSphere Process Server as their industrial-strength offering for handling high volumes of transactions. What’s unclear is where the Lombardi Edition (formerly TeamWorks) will fit as WPS builds out its human-centric capabilities, creating more of an overlap between these process execution environments. A year ago, I would have said that TeamWorks and WPS fit together with a minimum of overlap; now, there is a more significant overlap, and based on the WPS direction, there will be more in the future. IBM is no longer applying the “departmental” label to Lombardi, but I’m not sure that they really understand how to make these two process execution engines either work together with a minimum of overlap, or merge into a single system. Or maybe they’re just not telling.

It’s not just about process, however: there’s also predictive analytics and using real-time information to monitor and adjust processes, leveraging business rules and process optimization to improve processes. They talked about infusing processes with points of agility through the use/integration of rules, collaboration, content and more. As great as this sounds, this isn’t just one product, or a seamlessly-integrated suite: we’re back to the issue that I discussed with Angel Diaz yesterday, where IBM’s checklist for customers to decide which BPM products that they need will inevitably end up with multiple selections.

The session ended with the IBM execs and all three customers being interviewed by Carrie Lee; she is a skilled interviewer who has obviously done her homework, and the panel had a good flow with a reasonable degree of interaction between the panelists. The need for business-controlled rules was stressed as a way to provide more dynamic control of processes to the business; in general, a more agile approach was seen as a way to reduce implementation time and make the systems more flexible in the face of changing business needs. Ward (from BCBS) said that they had to focus on keeping BPM as a key process improvement methodology, rather than just using TeamWorks as another application development tool, and recommended not going live with a BPMS without metrics in place to understand the benefits. That sounds like good advice for any organization finding themselves going down the rabbit hole of BPMS application development when they really need to focus on their processes.

Dr. Ketabchi: A Shared Vision With Progress and Savvion

Dr. K. took the stage to tell us about the planned integration between the existing Progress products and Savvion, starting with a discussion of Savvion’s event-driven human-centric beginnings, model-driven development and solution accelerators. The new Progress RPM (responsive process management) suite has Savvion’s BPM at its core, combining their BPM and BRM strengths with CEP and information management. A challenge for Progress – and any other BPM vendor – is that less than 5% of enterprises’ processes run on a BPMS, and although dramatic improvements could be made to 80% or more of enterprise processes, most enterprises find it too difficult and costly to implement a BPMS in order to make these end-to-end improvements. It’s Progress’ intention that RPM overcome some of this resistance by extending visibility of business events to business managers, and providing the ability to respond in order to control the business and ultimately increase revenues.

He was joined by Sandeep Phanasgaonkar of Reliance Capital, who have a large and successful Savvion implementation. Phanasgaonkar was responsible for the Savvion implementation at a huge outsourcing firm prior to his time at Reliance, where they automated and standardized their processes in the course of improving those processes. When he moved to Reliance during their expansion into their multiple financial products and channels, he saw the potential for process improvement with a BPMS, did a vendor comparison, and again selected Savvion for their processes. They use Savvion as the glue for orchestrating multiple legacy financial systems, Documentum content management, low-level WebSphere messaging processes and other systems into a fully integrated set of business processes and data.

Reliance has no other Progress products besides Savvion, but they see the importance of managing business events and processes as a cohesive whole, not as two separate streams of activity. This will allow them to detect degradation in processes due to seasonal or other fluctuations, and address the problems before they fully manifest.

21st Century Government with BPM and BRM #brf

Bill Craig, a consultant with Service Alberta, discussed their journey with process and rules to create agile, business-controlled automation for land titles (and, in the future, other service areas such as motor vehicle licensing) in the province of Alberta. They take an enterprise architecture approach, and like to show alignment and traceability through the different levels of business and technology architecture. They used a number of mainframe-based legacy applications, and this project was driven initially by legacy renewal – mostly rewriting the legacy code on new platforms, but still with a lot of code – but quickly turned to the use of model-driven development for both processes and rules in order to greatly reduce the amount of code (which just creates new legacy code) and to put more control in the hands of the business.

They see 21st century government as having the following characteristics:

  • customer service focus
  • business centric
  • aligned
  • agile
  • assurance
  • managed and controlled
  • architected (enterprise and solution)
  • focused on knowledge capture and retention
  • collaborative and integrative
  • managed business rules and business processes

BPM and BRM have been the two biggest technology contributors to their transformation, with BRM the leader because of the number of rules that they have dealing with land titles; they’ve also introduced SOA, BI, BAM, EA, KM and open standards.

In spite of their desire to be agile, it seems like they’re using quite a waterfall-style design; this is the government, however, so that’s probably inevitable. They ended up with Corticon for rules and Global 360 for process, fully integrated so that the rules were called from tasks in their processes (which for some reason required the purchase of an existing “Corticon Integration Task” component from Global 360 – not sure why this isn’t done with web services). He got way down in the weeds with technical details – although relevant to the project, not so much to this audience – then crammed a description of the actual business usage into two minutes.
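For what it’s worth, the plain web service integration I was expecting would look something like the sketch below: a process task posts its case data to a decision service and routes on the returned decision. The endpoint, payload and response shape are entirely hypothetical:

```python
import requests

# Hypothetical decision service endpoint, shown only to illustrate the kind of plain
# web service call that the purchased integration component presumably wraps.
DECISION_SERVICE = "https://rules.example.com/decisionservices/landtitles/transfer"

def decide_transfer(case_data):
    """Called from a process task: send case data to the rules engine, get a decision back."""
    resp = requests.post(DECISION_SERVICE, json=case_data, timeout=10)
    resp.raise_for_status()
    return resp.json()        # e.g. {"decision": "approve", "reasons": [...]}

result = decide_transfer({"title_id": "T-1001", "encumbrances": 0, "fees_paid": True})
if result.get("decision") == "approve":
    print("route to registration step")
else:
    print("route to manual review:", result.get("reasons"))
```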

One interesting point: he said that they tried doing automated rules extraction from their mainframe applications to load into Corticon, but the automated extraction found mostly navigation rules rather than business rules, so they gave up on it. It would be interesting to know what sort of systems that automated rule extraction works well on, since this would be a huge help with similar legacy modernization initiatives.