Gartner BPM summit day 1: Sinur and Melenovsky

The conference opened with the two key faces of Gartner’s BPM vision — Jim Sinur and Michael Melenovsky — giving a brief welcome talk that focussed on a BPM maturity model, or what they are calling BPM3. There was only one slide for their presentation (if you don’t count the cover slide) and it hasn’t been published for the conference attendees, so I’ll rely on my sketchy notes and somewhat imperfect memory to give an overview of the model:

  • Level 0: Acknowledge operational inefficiencies, with potential for the use of some business intelligence technology to measure and monitor business activities. I maintain that there is something lower than this, or maybe a redefinition of level 0 is required, wherein the organization is in complete denial about their operational inefficiencies. In CMM (the Capability Maturity Model for software development processes), for example, level 0 is equivalent to having no maturity around the processes; level 1 is the “initial” stage where an organization realizes that they’re really in a lot of trouble and need to do something about it.
  • Level 1: Process aware, using business process analysis techniques and tools to model and analyze business processes. Think Visio with some human intelligence behind it, or a more robust tool such as those from Proforma, iGrafx or IDS Scheer.
  • Level 2: Process control, the domain of BPMS, where process models and rules can now be executed, and some optimization can be done on the processes. They admitted that this is the level on which the conference focusses, since few organizations have moved very far beyond this point. Indeed, almost every one of my customers that uses BPM is somewhere in this range, although many of them are (foolishly) neglecting the optimization potential that this brings.
  • Level 3: Enterprise process management, where BPM moves beyond departmental systems and becomes part of the corporate infrastructure, which typically also opens up the potential for processes that include trading partners and customers. This is a concept that I’ve been discussing extensively with my customers lately, namely, the importance of having BPM (and BRE and BI) as infrastructure components, not just embedded within departmental applications, because it’s going to be nearly impossible to realize any sort of SOA vision without these basic building blocks available.
  • Level 4: Enterprise performance management, which starts to look at the bigger picture of corporate performance management (which is what Gartner used to call this — are they changing CPM to EPM??) and how processes tie into that. I think that this is a critical step that organizations have to be considering now: CPM is a great early warning indicator for performance issues, but also provides a huge leap forward in issues such as maintaining compliance. I just don’t understand why Cognos or other vendors in this space aren’t at this conference talking about this.
  • Level 5: Competitive differentiation, where the business is sufficiently agile due to control over the processes that new products and services can be easily created and deployed. Personally, I believe that competitive differentiation is a measure of how well you’re doing right from level 1 on up, rather than a separate level itself: it’s an indicator, not a goal per se.

That’s it for now, I’m off to lunch. At this rate, I’ll catch up on all the sessions by sometime next week. 🙂

Gartner BPM summit day 1 in review

I know, I should have posted all this yesterday, but I didn’t realize that there was wifi at the conference until late in the day, so I left my laptop at the hotel, and Movable Type doesn’t allow me to blog by email from my BlackBerry. Today, however, I have my trusty tablet in hand and hope to blog throughout the day when I get a chance. I’ll post all these entries under the category Gartner BPM so that you can more easily find them by clicking on the category in the right sidebar.

Some great sessions at the conference, although I think that it would be greatly improved by opening up the speaking slots a bit. Right now, it’s mostly Gartner analysts, a few select Gartner customers, a few vendors (who apparently pay for the privilege so it’s really just an extension of their booth marketing presence), and a few odd selections such as the keynote by Mikel Harry of the Six Sigma Management Institute (probably a good management consultant but not an inspiring speaker) that I just left early. Gartner has sufficient pull to be able to do a limited call for papers that would result in some really excellent presentations to complement their own analysts’ views without diluting the value of the conference. Maybe another year.

A key theme that I’ve heard in a couple of talks: agility (the ability to react to unexpected change) is becoming as important as, or more important than, innovation: you don’t need to define and implement everything up front if you have confidence in your ability to react to change. The way that I interpret that is that reacting to market forces is the new innovation. Think about it.

I’ll summarize some of the more interesting sessions in separate posts following this, but suffice it to say that my favourites yesterday were Simon Hayward, who delivered the keynote “Living in a Process-Centric World”; Janelle Hill, with “Leveraging Existing IT Assets in BPM Initiatives”; and a short but very dynamic presentation by Patrick Morrissey from Savvion on “The Seven Deadly Sins of BPM”.

Today, I’m looking forward to this afternoon’s session by Jim Sinur on “When Will the Power Vendors Offer Credible BPM Solutions?”, which promises:

The power vendors have been lagging behind some of the more technically advanced and assertive BPM vendors, but recently each has made moves in the right direction. We expect the “Giants” to try to outstride some of the more advanced and nimble BPMS players in the long run and become viable options for more than those who buy in a “best of breed” fashion. This session will outline the progress to date of the power vendors, the expected timelines, and where the best-of-breed vendors will try to widen the gap.

  • Where are the power vendors in relation to the more advanced BPMS players?
  • Who is in the best position to close the gap?
  • What differentiators are likely to keep the power vendors at bay?

I’ve been asking the same questions.

Eventful mashup hits Boing Boing

Before I went to Mashup Camp, I exchanged emails with Chris Radcliff of EVDB/Eventful, and it was great to meet him face-to-face at camp. EVDB makes an API for managing event, venue, and calendar data, and Eventful uses that API in an events/calendaring/social networking mashup of events submitted directly to Eventful plus those grabbed from other event sites.

Today, I see that Eventful was covered on Boing Boing, which should bring it a huge amount of well-deserved attention. Congrats!

Computer History Museum

My wrapup of Mashup Camp wouldn’t be complete without mentioning the fabulous Computer History Museum in Mountain View where the event was held. Great venue, and the part of their collection that we were able to view during the party on Monday night was very nostalgic (although I can’t quite say that I miss RSX-11M). Definitely worth a visit if you’re in the Bay Area.

On my return to Toronto, I had lunch with a friend who works for Alias, the day after she emailed me to say that their corporate email addresses have changed from @alias.com to @autodesk.com following the recent acquisition. The end of an era for a long-running innovative Canadian software company. Having been there since the late 1980s, she saw many transitions, including the purchase of Alias by Silicon Graphics (and its subsequent sale). SGI was, at the time, housed in the building that now holds the Computer History Museum, and she remembers visiting there when it was SGI headquarters. An interesting footnote after spending the first part of the week there.

Picturing yourself at Mashup Camp

I’m still wrapping my brain around some of the ideas that started in my head at Mashup Camp, but I’ve been having fun browsing through all of the photo-detritus of the event. I was surprised to find that I made it into the first photo in Valleywag’s coverage of the event, and Doc Searls caught me at the XDI session on Monday (bonus tip: wear purple at popular events so that you can find yourself in the photos). There are over 900 Flickr photos tagged mashupcamp, and likely many more still languishing out there on memory cards.

Best quote from Mashup Camp

That’s the thing about mashups, almost all of them are illegal.

I heard that (and unfortunately am unable to credit the source) in the “scrAPI” session at Mashup Camp, in which we discussed the delicate nature of using a site that doesn’t have APIs as part of a mashup. Adrian Holovaty of ChicagoCrime.org (my favourite mashup at camp) was leading part of the session, demonstrating what he had done with Chicago police crime data (the police, not having been informed in advance, called him for a little chat the day his site went live), Google Maps, Yahoo! Maps (used for geocoding after he was banned from the Google server for violating the terms of service) and the Chicago Journal.

Listening to Adrian and others talk about the ways to use third-party sites without their knowledge or permission really made me realize that most mashup developers are still like a bunch of kids playing in a sandbox, not realizing that they might be about to set their own shirts on fire. That’s not a bad thing, just a comment on the maturity of mashups in general.

The scrAPI conversation — a word, by the way, that’s a mashup between screen scraping and API — is something very near and dear to my heart, although in another incarnation: screen scraping from third-party (or even internal) applications inside the enterprise in order to create the type of application integration that I’ve been involved in for many years. In both cases, you’re dealing with a third party who probably doesn’t know that you exist, and doesn’t care to provide an API for whatever reason. In both cases, that third party may change the screens on their whim without telling you in advance. The only advantage of doing this inside the enterprise is that the third party usually doesn’t know what you’re doing, so if you are violating your terms of service, it’s your own dirty little secret. Of course, the disadvantage of doing this inside the enterprise is that you’re dealing with CICS screens or something equally unattractive, but the principles are the same: from a landing page, invoke a query or pass a command; navigate to subsequent pages as required; and extract data from the resultant pages.
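
For what it’s worth, here’s a minimal sketch of that three-step pattern in Python, using the requests and BeautifulSoup libraries; the site, form fields and CSS selectors are all invented for illustration:

```python
import requests
from bs4 import BeautifulSoup

BASE = "http://example.com"  # hypothetical site with no API

session = requests.Session()

# Step 1: from a landing page, invoke a query or pass a command
results = session.post(f"{BASE}/search", data={"q": "branch locations"})

# Step 2: navigate to subsequent pages as required
soup = BeautifulSoup(results.text, "html.parser")
detail_links = [a["href"] for a in soup.select("a.result")]

# Step 3: extract data from the resultant pages
records = []
for href in detail_links:
    detail = BeautifulSoup(session.get(f"{BASE}{href}").text, "html.parser")
    records.append({
        "title": detail.select_one("h1").get_text(strip=True),
        "address": detail.select_one(".address").get_text(strip=True),
    })
```

Swap the HTML parsing for terminal-emulation calls and the same skeleton describes most of the enterprise screen-scraping projects I’ve seen.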

There are some interesting ways to make all of this happen in mashups, such as using LiveHTTPHeaders to watch the traffic on the site that you want to scrape, and faking out forms by passing parameters that are not in their usual selection lists (Adrian did this with ChicagoCrime.org to pass a much larger radius to the crime stats site than its form drop-down allowed in order to pull back the entire geographic area in one shot). Like many enterprise scraping applications, site scraping applications often cache some of the data in a local database for easier access or further enrichment, aggregation, analysis or joining with other data.
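
As a sketch of both tricks together — an out-of-range form parameter plus a local cache — assuming a hypothetical endpoint whose drop-down caps the radius at a few miles, and which conveniently returns JSON (a real site would more likely hand back HTML to be parsed as above):

```python
import sqlite3
import requests

# Fake out the form: pass a radius that isn't in the drop-down's
# usual selection list, hoping the server doesn't validate it
resp = requests.get(
    "http://example.com/incidents",  # hypothetical endpoint
    params={"lat": 41.88, "lng": -87.63, "radius": 500},
)

# Cache the results locally for later enrichment, aggregation or joins
db = sqlite3.connect("scrape_cache.db")
db.execute("CREATE TABLE IF NOT EXISTS incidents (id TEXT PRIMARY KEY, raw TEXT)")
for rec in resp.json():
    db.execute(
        "INSERT OR REPLACE INTO incidents VALUES (?, ?)",
        (rec["id"], str(rec)),
    )
db.commit()
db.close()
```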

In both web and enterprise cases, there’s a better solution: build a layer around the non-API-enabled site/application, and provide an API to allow multiple applications to access the underlying application’s data without each of them having to do site/screen scraping. Inside the enterprise, this is done by wrapping web services around legacy systems, although much of this is not happening as fast as it should be. In the mashup world, Thor Muller (of Ruby Red Labs) talked about the equivalent notion of scraping a site and providing a set of methods for other developers to use, such as Ontok’s Wikipedia API.
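
A minimal sketch of what such a wrapper layer might look like, using Flask and assuming a fetch_records() function that holds the scraping logic from the earlier example (both names are mine, not anything discussed at camp):

```python
from flask import Flask, jsonify

from scraper import fetch_records  # hypothetical module containing the scraping logic

app = Flask(__name__)

@app.route("/api/records/<query>")
def records(query):
    # Downstream mashups call this clean JSON API; only this layer
    # has to change when the underlying site rearranges its screens.
    return jsonify(fetch_records(query))

if __name__ == "__main__":
    app.run(port=8080)
```

The design point is that the fragile scraping code lives in exactly one place, instead of being duplicated in every application that wants the data.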

We talked about the legality of site scraping, namely that there are no explicit rights to use the data, and the definition of fair use may or may not apply; this is what prompted the comment with which I opened this post.

In the discussion of strategic issues around site scraping, I certainly agree that site scraping indicates a demand for an API, but I’m not sure that I completely agree with the comment that site scraping forces service and data providers to build/open APIs: sure, some of them are likely just unaware that their data has any potential value to others, but there are going to be many more who either will be horrified that their data can be reused on another site without attribution, or who just don’t get that this is a new and important way to do business.

In my opinion, we’re going to have to migrate towards a model of compensating the data/service provider for access to their content, whether it’s done through site scraping or an API, in order to gain some degree of control (or at least advance notice) of changes to the site that would break the calling/scraping applications. That compensation doesn’t necessarily have to mean money changing hands, but ultimately everyone is driven by what’s in it for them, and needs to see some form of reward.

Update: Changed “scrapePI” to “scrAPI” (thanks, Thor).

Mashing up a new world (dis)order

Now that I’ve been disconnected from the fire hose of information that was Mashup Camp, I’ve had a bit of time to reflect on what I saw there.

Without doubt, this is the future of application integration both on the public internet and inside the enterprise. But — and this is a big but — it’s still very embryonic, and I can’t imagine seriously suggesting much of this to any CIO that I know at this point, since they all work for large and fairly conservative organizations. However, I will be whispering it in their ears (not literally) over the coming months to help prepare them for the new world (dis)order.

From an enterprise application integration perspective, there are two major lessons to be learned from Mashup Camp.

First, there are a lot of data sources and services out there that could be effectively combined with enterprise data for consumption both inside and outside the firewall. I saw APIs that wrap various data sources (including very business-focused ones such as Dun & Bradstreet), VOIP, MAPI and CRM as well as the better-known Google, Yahoo! and eBay APIs. The big challenge here is the NIH (not invented here) syndrome: corporate IT departments are notorious for rejecting services and especially data that they don’t own and didn’t create. Get over it, guys. There’s a much bigger world of data and services than you can ever build yourself, and you can do a much better job of building the systems that are actually a competitive differentiator for you rather than wasting your time building your own mapping system so that you can show your customers where your branches are located. Put those suckers on Google Maps, pronto. This is no different than thousands of other arguments that have occurred on this same subject over the years, such as “don’t build your own workflow system” (my personal fave), and is no different than using a web service from a trusted service provider. Okay, maybe it’s a bit different than dealing with a trusted service provider, but I’ll get to the details of that in a later post on contracts and SLAs in the world of mashups.

Second, enterprise IT departments should be looking at the mechanics of how this integration takes place. Mashup developers are not spending millions of dollars and multiple years integrating services and data. Of course, they’re a bit too cavalier for enterprise development, typically eschewing such niceties as ensuring the legality of using the data sources and enterprise-strength testing, but there are techniques to be learned that can greatly speed application integration within an organization. To be fair, many IT departments need to put themselves in the position of both the API providers and the developers that I met at Mashup Camp, since they need to both wrap some of their own ugly old systems in some nicer interfaces and consume the resulting APIs in their own internal corporate mashups. I’ve been pushing for a few years for my customers to start wrapping their legacy systems in web services APIs for easier consumption, which few have adopted beyond some rudimentary functionality, but consider that some of the mashup developers are providing a PHP interface that wraps around a web service so that you can develop using something even easier: application integration for the people, instead of just for the wizards of IT. IT development has become grossly overcomplicated, and it’s time to shed a few pounds and find some simpler and faster ways of doing things.
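
To make the “wrap your ugly old systems” half of that concrete, here’s a sketch of a thin wrapper module, assuming a hypothetical legacy gateway that speaks XML over HTTP; the URL, transaction code and fields are all invented:

```python
import xml.etree.ElementTree as ET

import requests

LEGACY_URL = "http://legacy-gw.internal/inquiry"  # hypothetical legacy gateway

def get_customer(customer_id: str) -> dict:
    """Friendly facade: callers never see the legacy transaction codes or XML."""
    resp = requests.post(LEGACY_URL, data={"txn": "CUSTINQ", "id": customer_id})
    root = ET.fromstring(resp.text)
    return {
        "id": customer_id,
        "name": root.findtext("name"),
        "status": root.findtext("status"),
    }
```

Once something like this exists, internal corporate mashups can consume the legacy system as casually as the camp crowd consumes the Google or Yahoo! APIs.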

Get on the map

Attention, all you Mashup Camp attendees: go to Attendr to see a cool mashup example from Jeff Marshall that allows you to link to other attendees you already know or would like to meet. If you’re going to be at Mashup Camp next week, be sure to look me up and say hi.

As for the rest of you, head on over and add yourself to my Frappr map. You can see where other readers of Column 2 are located, and I’ve added the capability to use coloured pins to denote whether you’re a customer, product vendor, services vendor or “other” as it relates to BPM and integration technologies.