Forrester Wave for Human-Centric BPMS

Forrester just released a report on human-centric BPMS, and Lombardi is just busting with pride over their position: so much so that they’re giving the report away here (registration required). Phil Gilbert must be doing a little happy dance, especially considering that their “dot size” has increased from a tiny blip on the radar to a more respectable market presence. Forrester’s had a soft spot for Lombardi for a while: the March 2004 “Pure Play BPM” wave (that’s when we were still calling this “pure play”) had Lombardi on the cusp between Strong Performer and Leader. The relative positions of many players have stayed pretty much the same since then in the Forrester rankings (Gartner’s rankings are quite different), although TIBCO and Pegasystems have made significant gains in market presence. I’d have to say that Forrester must have been looking mostly at American market presence, since TIBCO (which was still the Staffware product at the time of the 2004 ranking) had a huge presence in the UK, European, Australian and other markets that I saw.

Excerpts from Forrester’s executive summary regarding those in the Leaders category:

Lombardi Software, Pegasystems, and Savvion lead with comprehensive suites that foster rapid, iterative process design; Appian leads with a richly featured suite for people-intensive work; and TIBCO leads with a human-centric BPMS that leverages its integration-centric product.

In fact, later in the report, they more categorically state “Lombardi Software, Pegasystems, and Savvion lead the pack — hands down”, then they proceed to break out the specific reasons for their evaluation. However, they also stated “If you could only buy one BPMS product, Fuego offers the best — bar none — product supporting human-intensive and system-intensive processes”, an assessment with which I don’t disagree, after seeing things like Fuego’s web services introspection (although I still insist that they put their swimlanes sideways). They go on to say that “Appian and FileNet innovate beyond the boundaries of human-centric BPMS” by integrating collaboration tools that allow a non-standard process to go off the rails in a somewhat controlled manner.

One thing I didn’t like is how Forrester categorizes business processes:

  • Integration-intensive
  • People-intensive
  • Decision-intensive
  • Document-intensive

I don’t really agree with this categorization, first of all because it’s not normalized: integration-intensive and people-intensive certainly sit at opposite ends of the same scale, but their definition of decision-intensive is really just people-intensive with a strong need for business rules (which I think are necessary pretty much all the time), and document-intensive is just people-intensive with a lot of scanned documents involved. Although document-intensive processes will always be people-intensive, I believe that decision-intensive processes could fall anywhere along the integration-to-people scale, since they’re primarily about the use of business rules. Although many organizations are still choosing separate products for integration-intensive and people-intensive (or human-interrupted, as one of my customers once so charmingly put it) processes, the real issue in this report should be how well any given product handles all three of what I see as the artificial divisions of people-, decision- and document-intensive functionality.

The last half of the report shows the explicit criteria rankings for each vendor, along with a detailed paragraph of strengths and weaknesses for each. Definitely worth the read.

Retro look at the impact of SOA

I recently discovered some notes that I had made back in November 2004 from a TIBCO webinar, “Enabling Real-time Business with a Service-Oriented and Event-Driven Architecture”. Randy Heffner from Forrester spoke at that webinar, and I remember that it was his words that made me realize what an impact SOA was going to have, and how strategic SOA requires a focus on enterprise architecture, particularly the application architecture and technical architecture layers, so that business and IT metrics can be tied back to defined services.

Although it seems obvious now, that webinar really crystallized the idea of services as being process steps to be orchestrated, and how this allowed you to focus on an end-to-end process across all stakeholders, not just what happens inside your organization: the Holy Grail of BPM, as it were. EA often does not include business architecture, but services force it to consider the business process architecture and business strategy/organization.
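
To make that idea a bit more concrete, here’s a minimal sketch of a process expressed as an orchestration of services, where each process step is simply a service invocation that may belong to your own organization or to an external stakeholder. The endpoints, payloads and order-processing scenario are all invented for illustration, not taken from the webinar.

```python
# Sketch: a business process as an orchestration of services, each process step
# being a service call that may live inside or outside the organization.
# All endpoints and payloads here are hypothetical.
import json
import urllib.request

def call_service(url, payload):
    # Each step is just a service invocation; the orchestration owns the
    # end-to-end sequence, not any single application.
    req = urllib.request.Request(url, data=json.dumps(payload).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def process_order(order):
    # An end-to-end process spanning internal systems and an external partner.
    credit = call_service("http://internal.example.com/credit-check", order)
    if not credit.get("approved"):
        return {"status": "declined"}
    shipment = call_service("http://partner.example.com/book-shipment", order)
    call_service("http://internal.example.com/invoice", {**order, **shipment})
    return {"status": "complete", "tracking": shipment.get("tracking")}
```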

Computer History Museum

My wrapup of Mashup Camp wouldn’t be complete without mentioning the fabulous Computer History Museum in Mountain View where the event was held. Great venue, and the part of their collection that we were able to view during the party on Monday night was very nostalgic (although I can’t quite say that I miss RSX11M). Definitely worth a visit if you’re in the Bay area.

On my return to Toronto, I had lunch with a friend who works for Alias, the day after she emailed me to say that their corporate email addresses had changed from @alias.com to @autodesk.com following the recent acquisition. The end of an era for a long-running, innovative Canadian software company. Having been there since the late 1980s, she saw many transitions, including the purchase of Alias by Silicon Graphics (and its subsequent sale). SGI was, at the time, housed in the building that now holds the Computer History Museum, and she remembers visiting there when it was SGI headquarters. An interesting footnote after spending the first part of the week there.

Picturing yourself at Mashup Camp

I’m still wrapping my brain around some of the ideas that started in my head at Mashup Camp, but I’ve been having fun browsing through all of the photo-detritus of the event. I was surprised that I made the first photo in Valleywag’s coverage of the event, and Doc Searls caught me at the XDI session on Monday (bonus tip: wear purple at popular events so that you can find yourself in the photos). There are over 900 Flickr photos tagged mashupcamp, and likely many more still languishing out there on memory cards.

Best quote from Mashup Camp

That’s the thing about mashups, almost all of them are illegal

I heard that (and unfortunately am unable to credit the source) in the “scrAPI” session at Mashup Camp, in which we discussed the delicate nature of using a site that doesn’t have APIs as part of a mashup. Adrian Holovaty of ChicagoCrime.org (my favourite mashup at camp) was leading part of the session, demonstrating what he had done with Chicago police crime data (the police, not having been informed in advance, called him for a little chat the day his site went live), Google maps, Yahoo! maps (used for geocoding after he was banned from the Google server for violating the terms of service) and the Chicago Journal.

Listening to Adrian and others talk about the ways to use third-party sites without their knowledge or permission really made me realize that most mashup developers are still like a bunch of kids playing in a sandbox, not realizing that they might be about to set their own shirts on fire. That’s not a bad thing, just a comment on the maturity of mashups in general.

The scrAPI conversation — a word, by the way, that’s a mashup between screen scraping and API — is something very near and dear to my heart, although in another incarnation: screen scraping from third-party (or even internal) applications inside the enterprise in order to create the type of application integration that I’ve been involved in for many years. In both cases, you’re dealing with a third party who probably doesn’t know that you exist, and doesn’t care to provide an API for whatever reason. In both cases, that third party may change the screens at their whim without telling you in advance. The only advantage of doing this inside the enterprise is that the third party usually doesn’t know what you’re doing, so if you are violating your terms of service, it’s your own dirty little secret. Of course, the disadvantage of doing this inside the enterprise is that you’re dealing with CICS screens or something equally unattractive, but the principles are the same: from a landing page, invoke a query or pass a command; navigate to subsequent pages as required; and extract data from the resultant pages.
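
To make those principles concrete, here’s a minimal scraping sketch in Python following that same landing-page/query/extract flow; the URL, query parameters and HTML markup are all hypothetical, and any real target site would need its own parsing logic (and its own terms-of-service reading).

```python
# Minimal site-scraping sketch: hit a landing page, pass a query as if we had
# filled in the site's search form, then extract data from the resulting HTML.
# The URL, parameter names and markup below are hypothetical.
import re
import urllib.parse
import urllib.request

BASE = "http://example.com/records"  # hypothetical site with no API

def search(query):
    # Step 1: invoke a query against the site's search page
    params = urllib.parse.urlencode({"q": query, "page": 1})
    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        html = resp.read().decode("utf-8", errors="replace")

    # Step 2: extract data from the resultant page (fragile by nature: this
    # breaks the day the site owner changes their markup without telling us)
    rows = re.findall(r'<td class="name">(.*?)</td>\s*<td class="date">(.*?)</td>', html)
    return [{"name": name, "date": date} for name, date in rows]

if __name__ == "__main__":
    for record in search("fraud"):
        print(record)
```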

There are some interesting ways to make all of this happen in mashups, such as using LiveHTTPHeaders to watch the traffic on the site that you want to scrape, and faking out forms by passing parameters that are not in their usual selection lists (Adrian did this with ChicagoCrime.org to pass a much larger radius to the crime stats site than its form drop-down allowed, in order to pull back the entire geographic area in one shot). Like many enterprise scraping applications, site scraping applications often cache some of the data in a local database for easier access or for further enrichment, aggregation, analysis or joining with other data.
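
Here’s a rough sketch of both tricks together: posting a form parameter far larger than the site’s drop-down would ever offer, then caching the raw result in a local SQLite database for later parsing, enrichment or joining. The endpoint and field names are invented, not ChicagoCrime.org’s actual implementation.

```python
# Sketch: request a radius far larger than the form's drop-down offers, then
# cache the raw response locally so we don't hammer the source site.
# The endpoint and field names are hypothetical.
import sqlite3
import urllib.parse
import urllib.request

SOURCE = "http://example.com/crime/search"  # hypothetical form handler

def fetch_area(lat, lng, radius_miles=50):
    # The site's drop-down might top out at 2 miles; nothing in the form
    # handler stops us from posting 50 and pulling the whole area in one shot.
    data = urllib.parse.urlencode({"lat": lat, "lng": lng,
                                   "radius": radius_miles}).encode("utf-8")
    with urllib.request.urlopen(SOURCE, data=data) as resp:
        return resp.read().decode("utf-8", errors="replace")

def cache(html):
    # Keep a local copy of the raw page for later parsing or aggregation.
    conn = sqlite3.connect("scrape_cache.db")
    conn.execute("CREATE TABLE IF NOT EXISTS pages "
                 "(fetched_at TEXT DEFAULT CURRENT_TIMESTAMP, body TEXT)")
    conn.execute("INSERT INTO pages (body) VALUES (?)", (html,))
    conn.commit()
    conn.close()

if __name__ == "__main__":
    cache(fetch_area(41.88, -87.63))
```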

In both web and enterprise cases, there’s a better solution: build a layer around the non-API-enabled site/application, and provide an API to allow multiple applications to access the underlying application’s data without each of them having to do site/screen scraping. Inside the enterprise, this is done by wrapping web services around legacy systems, although much of this is not happening as fast as it should be. In the mashup world, Thor Muller (of Ruby Red Labs) talked about the equivalent notion of scraping a site and providing a set of methods for other developers to use, such as Ontok’s Wikipedia API.
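
As a sketch of what that wrapper layer might look like, here’s a tiny HTTP service that puts a clean /search method in front of the scraping code, so other applications call the API rather than each re-implementing the scrape. It assumes a search() function like the earlier scraping sketch, living in a hypothetical scraper module; nothing here reflects Ontok’s actual API.

```python
# Sketch of an API layer wrapped around a scraped site: callers hit
# /search?q=... and get JSON back, without knowing (or caring) that screen
# scraping happens underneath. Reuses the hypothetical search() function
# from the earlier sketch, assumed to live in a module named scraper.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

from scraper import search  # hypothetical module containing the earlier sketch

class ScrapeAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/search":
            self.send_error(404)
            return
        query = parse_qs(url.query).get("q", [""])[0]
        body = json.dumps(search(query)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ScrapeAPI).serve_forever()
```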

We talked about the legality of site scraping, namely that there are no explicit rights to use the data, and the definition of fair use may or may not apply; this is what prompted the comment with which I opened this post.

In the discussion of strategic issues around site scraping, I certainly agree that site scraping indicates a demand for an API, but I’m not sure that I completely agree with the comment that site scraping forces service and data providers to build/open APIs: sure, some of them are likely just unaware that their data has any potential value to others, but there are going to be many more who either will be horrified that their data can be reused on another site without attribution, or just don’t get that this is a new and important way to do business.

In my opinion, we’re going to have to migrate towards a model of compensating the data/service provider for access to their content, whether it’s done through site scraping or an API, in order to gain some degree of control (or at least advance notice) of changes to the site that would break the calling/scraping applications. That compensation doesn’t necessarily have to mean money changing hands, but ultimately everyone is driven by what’s in it for them, and needs to see some form of reward.

Update: Changed “scrapePI” to “scrAPI” (thanks, Thor).

Implementing BPM

The flight home from Mashup Camp was a great opportunity to catch up on my notes from the past couple of weeks, including several ideas triggered by discussions at the TIBCO seminar last week: some because I disagreed with the speakers, but some because I agreed with them. My opinions were split on the discussions about implementing BPM systems, specifically about the role of a business process analyst, and about agile versus waterfall development.

First of all, the business process analyst role: Michael Melenovsky sees this as important for BPM initiatives, but I tend to feel the same way as I do about business rules analysts: just give me a good business analyst any day, and they’ll be able to cover rules, process, and whatever else is necessary for making that business-IT connection. Furthermore, he sees the BPA as a link between a BA and IT, as if we need yet another degree of separation between the business and those who are charged with implementing solutions to make business run better. Not.

There were some further discussions on how business and IT have to collaborate on BPM initiatives (duh) and share responsibility for a number of detailed design and deployment tasks, but this is true for any technology implementation. If you don’t already have a good degree of collaboration between business and IT, don’t expect it to magically appear for your BPM initiatives, but do take note that the need for it is at least as great as for any other technology implementation. How we’re supposed to collaborate more effectively by shoehorning a BPA between a BA and IT is beyond me, however.

Melenovsky also had some interesting “lessons learned” stats on the correlation between the time spent on process discovery and model construction, and the success of the BPM initiative: basically, do more up-front work on your analysis and business-focussed design, and your project will be more successful. Gartner found that over 40% of the project time should be spent on process discovery, another 9% on functional and technical specifications, and just 12% on implementation. In my experience, you’ll spend that 40% on process discovery either up front, or when you go back and do it over because you implemented the wrong thing due to insufficient process discovery in the first place: as usual, a case of choosing between doing it right or doing it over.

That directly speaks to the need for agile, or at least iterative, development on BPM projects. You really can’t use waterfall methods (successfully) for BPM development (or most other types of technology deployments these days), for so many reasons:

  1. When implementing new (and disruptive) technology, whatever business tells IT that they want is not accurate since the business really isn’t able to visualize the entire potential solution space until they see something working.
  2. While IT is off spending two years implementing the entire suite of required functionality in preparation for an all-singing, all-dancing big bang roll-out, the business requirements will change.
  3. During that two years, the business is losing money, productivity and/or opportunities due to the lack of whatever BPM is going to do for them, and is building stop-gap solutions using Excel, Access, paper routing slips and prayer.
  4. That all-singing, all-dancing BPM implementation is, by definition, more complex and therefore more rigid (less flexible) due to the amount of custom development involved. It makes sense that you can’t use a development methodology that isn’t agile to implement processes that are supposed to be agile.
  5. The big bang roll-out is a popular notion with the business right up to the point when it happens, and they discover that it doesn’t meet their needs (refer to points 1 and 2). Then it becomes unpopular.

Instead, get the business requirements and process models worked out up front, then engage in iterative methods of designing, implementing and deploying the systems. Deliver “good enough” processes on the first cut, then make iterative improvements. Don’t assume that the business users aren’t capable of providing valuable feedback on required functionality: just make them do their job with the first version of the system, and they’ll give you plenty of feedback. Consider the philosophy of an optional scope contract rather than a fixed price/date/scope contract, whether you’re using internal or external resources for the implementation. Where possible, put changes to the business processes and rules in the hands of the business so that they can tweak the controls themselves.

What Are The Analysts Looking At?

I hate to spend too much of my time dissing the analysts, but sometimes I really wonder where they get their information. At a seminar that I attended last week, Michael Melenovsky from Gartner characterized the current state of BPM as workflow-oriented departmental implementations that included software deals in the $60K range. That’s not what I’m seeing, although maybe my view is skewed because I’m actually helping to implement these systems rather than taking the long view from the top of an ivory tower. I’m seeing BPM projects spanning the functionality spectrum from human-facing to STP, and spanning multiple departments across the enterprise to become part of the enterprise infrastructure. Furthermore, organizations are spending a lot of money on them, and have budgeted to spend even more in the next year (I seem to recall blogging about a recent study that showed application integration as the #1 spend for CIOs this year, but can’t seem to find it).

Melenovsky also showed a graph of the “BPM stages of value creation” over time, which showed productivity as being the greatest value of BPM now, visibility having the greatest value by 2012, and innovation having the greatest value by 2017. Besides the obvious point that predicting anything to do with a still-fluid technology 11 years in the future is about as accurate as reading tea leaves, the idea that visibility won’t be the dominant value added by BPM until 2012 is just plain wrong. Process visibility kicked into gear as a dominant driver for BPM over the past few years of intense compliance scrutiny, and it’s already poised to overtake productivity as the biggest benefit provided by BPM. Although the definition of process metrics is still more of an art than a science, the various levels of process-centric business intelligence that provide process visibility (corporate performance management, business activity monitoring, complex event processing) are being mandated in the boardroom and implemented across the enterprise. Process visibility is the new black, and it’s in style now.

I also don’t see another 11 years elapsing before innovation overtakes visibility to become the primary value provided by BPM, since agility and innovation are so closely tied, but the Ouija board hasn’t yet given me enough information about when it will happen.

Separating rules and process

I’ve posted a number of times in the past about the importance of business rules engines (BRE) in conjunction with BPM in order to keep rules and process separate. To sum up my thoughts on this:

  • Most BPMS don’t have sufficient functionality in their in-process expression builders to offer true business rules capabilities. A notable exception is Pega, which is built on a rules engine. I’m not going to wax poetic on the benefits of business rules, since there’s lots of other people who can do that better than me, but take my word for it: you really need business rules.
  • Having the ability to change work in progress. If the business logic is embedded within the process map, then typically that logic is fixed for a particular item of work at the time that it is instantiated. For straight-through, short-running processes this isn’t a problem, but for long-running processes (with or without human interaction) it is, since most BPMS don’t provide for changing the business logic or process map for an item of work once it has been instantiated. If the BPMS retrieves its rules from a BRE at the time that each rules-oriented step is executed, then the rules are as current as what’s in the BRE (see the sketch after this list).
  • Using the same rules engine and rules across multiple applications, not just within the BPM. Imagine it: the same business rules about, for example, how you deal with a customer’s order in a particular situation being applied identically across your CRM, your BPMS, and any other applications that it might impact, because they all retrieve their business rules from a common business rules repository at the time of execution. This idea is just starting to creep into the consciousness of most large organizations (they’re still digesting the first two reasons), and is ultimately the most critical since it not only provides for greater business agility, but also has a huge impact on compliance.
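
Here’s a minimal sketch of the second point above: a rules-oriented step in a long-running process that calls out to an external BRE when the step actually executes, rather than freezing the decision logic into the instance when it was created. The rules-service endpoint and its request format are invented for illustration.

```python
# Sketch: a process step that fetches its decision logic from an external rules
# service at execution time, instead of baking the logic into the process
# instance at instantiation. The BRE endpoint and payload are hypothetical.
import json
import urllib.request

RULES_SERVICE = "http://rules.example.com/evaluate"  # hypothetical BRE endpoint

def evaluate_rule(ruleset, facts):
    # Ask the central BRE for a decision now; any rule change made since this
    # process instance started is automatically picked up.
    payload = json.dumps({"ruleset": ruleset, "facts": facts}).encode("utf-8")
    req = urllib.request.Request(RULES_SERVICE, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def credit_review_step(work_item):
    # A rules-oriented step in a long-running process: the routing decision
    # reflects whatever is in the BRE today, not what it said months ago.
    decision = evaluate_rule("order-credit-check",
                             {"amount": work_item["amount"],
                              "customer": work_item["customer"]})
    work_item["route_to"] = decision.get("route_to", "manual_review")
    return work_item
```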

This last point is exactly why I see Pega’s position as a disadvantage rather than an advantage: although they market themselves on the fact that they’re built on a BRE, the requirement for business rules in multiple applications across the enterprise is finally being recognized, and stand-alone BRE will become more commonplace in organizations in the next few years. And if you’re using Fair Isaac for all of your business rules across the organization, you’re not going to want to use a different, proprietary BRE inside a BPM product to re-implement some of the same rules that exist elsewhere in the organization.

I thought of this last week at the TIBCO seminar when I learned that they also have a proprietary rules engine embedded in their BPMS, although unlike Pega, their BPMS is not built on the BRE: it’s just there to provide additional functionality, and TIBCO allows for integration with popular third-party BRE including Fair Isaac and ILOG. My prediction is that as organizations start to roll out these best-of-breed BRE across the enterprise, TIBCO will abandon their own rules engine in favour of integrating only with well-known third-party BRE.