Mashups and the corporate SOA

I listened to a podcast last week of David Linthicum interviewing Dion Hinchcliffe that really helped to coalesce my thoughts about mashups, Web 2.0, SOA, composite applications and the future of integration. I was walking along a street in downtown Toronto, listening to it on my iPod, and making enough facial expressions, hand gestures and remarks aloud that I was likely written off as one of the usual crazies: it's very exciting when someone with ideas very similar to your own states them much more clearly than you could yourself.

A couple of weeks ago, I posted about mashups and the implications for enterprise integration, which of the integration vendors is likely to jump on this bandwagon early, and noted that I'll be at Mashup Camp later this month because I really want to explore the convergence of mashups and enterprise integration. Unbeknownst to me, Dion Hinchcliffe had published an article in the SOA Web Services Journal in late December entitled Web 2.0: The Global SOA, which was the focus of the podcast, and blogged about the hundreds of services available on the "giant service ecosystem" that is the web:

An important reason why the Web is now the world’s biggest and most important computing platform is that people providing software over the Internet are starting to understand the law of unintended uses. Great web sites no longer limit themselves to just the user interface they provide. They also open up their functionality and data to anyone who wants to use their services as their own. This allows people to reuse, and re-reuse a thousand times over, another service’s functionality in their own software for whatever reasons they want, in ways that couldn’t be predicted. The future of software is going to be combining the services in the global service landscape into new, innovative applications. Writing software from scratch will continue to go away because it’s just too easy to wire things together now.

The information on this is now starting to explode: David Berlind (organizer of Mashup Camp) discusses the bazaar-like quality of the mashup ecosystem, Stephen O’Grady pushes the concept of SOA to include mashups, and even Baseline Magazine is talking about how mashups can free you from the tyranny of software vendors with a discussion about how some of the services feeding mashups could be used in an enterprise integration context.

All of this has huge implications for business processes, and the type of BPM that currently resides completely inside an organization. Most BPM vendors have enabled their products to be consumers of web services in order to more easily play an orchestration role, and some customers are even starting to take advantage of this by invoking web services that integrate other internal systems as steps in a business process (although a lot are still, unfortunately, stuck in earlier, more primitive generations of integration techniques). Imagine the next step: as corporate IT departments get over their "not invented here" fears, the BPM tools allow them to integrate not just internal web services, but external services that are part of the Web 2.0 mashup ecosystem. Use a Salesforce.com service to do a customer credit check as part of processing their insurance application. Integrate Google Maps or Yahoo Maps to determine driving directions from your service dispatch location to your customer's location in order to create service call sheets. It's like software-as-a-service, but truly on a per-service rather than per-application basis, allowing you to pick and choose which functions/services you want to invoke from any particular step in your business process.
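To make that concrete, here's a minimal sketch of the idea: a process step that delegates its work to any callable service, internal or external. The `ServiceStep` abstraction, the `directions_service` stub and the field names are all hypothetical; a real implementation would make an HTTP call to the mapping provider's actual API rather than use a stand-in function.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ServiceStep:
    """A process step whose work is delegated to a service callable."""
    name: str
    service: Callable[[dict], dict]  # could wrap an internal or external web service

    def execute(self, case: dict) -> dict:
        # Merge the service's output into the case data and pass it downstream
        case.update(self.service(case))
        return case


def directions_service(case: dict) -> dict:
    # Stand-in for a real call to a mapping API (e.g. Google or Yahoo Maps);
    # in practice this would be an HTTP request to the provider's endpoint.
    return {"directions": f"route from {case['depot']} to {case['customer']}"}


def run_process(steps: List[ServiceStep], case: dict) -> dict:
    # Trivially sequential "orchestration": execute each step in order
    for step in steps:
        case = step.execute(case)
    return case


process = [ServiceStep("get directions", directions_service)]
sheet = run_process(process, {"depot": "Toronto", "customer": "Oakville"})
print(sheet["directions"])
```

The point isn't the plumbing; it's that swapping an internal service for an external one is just a matter of pointing the step at a different endpoint.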

Dion Hinchcliffe thinks that 80% of enterprise applications could be provided by external services, which is a great equalizer for smaller businesses that don't have huge IT budgets, and could almost completely disconnect the issue of business agility from the size of your development team. I think that it's time for some hard introspection about what business you're really in: if your organization is in the business of selling financial services, what are you doing writing software from scratch when you could be wiring it together using BPM and the global SOA that's out there?

Update: David swamped his podcast provider and ended up moving the podcast here. Reference also updated above.

We need a BPM camp

I received yet another email about the upcoming Gartner BPM Summit, and I continue to be horrified by the price of conferences: US$1,895 for 3 days?! Or how about the AIIM records management conference in Toronto next week: C$2,899 for 3 days? By the time you add in travel and living, it's no small chunk of change when you're an independent: I don't have the luxury of a big company picking up my tab. Even those of you working for larger companies know that it's not easy to find funding for attending conferences, even if you believe that they'll be of value.

I know that analysts are in the business of making money from knowledge (so am I), but knowledge is becoming a commodity these days, and a lot of people won’t (or can’t) shell out that much cash just to sit in a room for three days and hear someone talk when the same information is available (albeit in a less structured manner) in a variety of other forms at a much lower cost: blogs, podcasts, vendor seminars, webinars, analyst reports and other sources that don’t believe that it’s in their best interests to charge everyone an arm and a leg just to have a conversation.

I only attend these big-money conferences once every few years; in the interim, I do just fine with my RSS feeds, daily email newsletters, webinars, vendor seminars, and other sources of free or reasonably-priced information. For example, in the past year, I’ve attended two major conferences: BPM 2005 in London, where I paid full price as an attendee, and FileNet’s user conference in Las Vegas, where I was a speaker so had my conference fee waived (check out the series of entries in my November archive, where I was blogging live from the conference sessions by emailing from my Blackberry). I also attended some local seminars/mini-conferences at little or no cost, such as e-Content Institute, plus some vendor seminars; in fact, I spent yesterday morning at a LabOne seminar hearing about how their next generation of products is going to better integrate into my insurance clients’ systems.

I attended a ton of webinars last year, most from ebizQ and BPMinstitute.org, but also from vendors such as Global 360 and Proforma (search my archives for “webinar” to see my comments on the webinars). I have a list of past webinars that I want to watch but haven’t found time yet: a wealth of information delivered to my desk, for free, with a relatively modest amount of vendor promotional material wrapped around it.

There is something to be said about a conference atmosphere, however. As much as I dislike most professional networking (I’m a closet introvert), conferences provide a great opportunity to meet people with the same interests: for me, that includes potential clients, but also vendors, potential partners, industry analysts and a variety of other types. Most conferences also include some sort of vendor showcase where I can have a peek at the latest and greatest technology.

The dilemma is this, then: given that much of the “information” (content) of the big conferences is available in the public domain or through lower-cost alternatives, how do we share that information in a conference-like networking atmosphere?

The answer may lie in the new generation of "un-conferences" or "camps". These still exist mostly as technical conferences, but with the focus on collaboration rather than presentations (i.e., have a conversation guided by an actual practitioner rather than death-by-Powerpoint from a hired speaker), limited enrolment, and free (or nearly so) fees for attending, this movement has the potential to expand into other traditional conference areas. One popular technical camp is BarCamp, including the recent TorCamp. David Crow, the prime organizer of TorCamp (and my neighbour), just posted about the camp format for un-conferences, and links to Chris Heuer with more about these sorts of amateur conferences. A camp with a specific focus on integration is Mashup Camp next month in San Jose, which I'll be attending because I want to explore how to use mashup concepts in the context of enterprise application integration: this is part of the future of orchestration. And the expected "conference fee"? $0.

Camps are still, for the most part, for techno-geeks (I admit it, I am a geek). But how long before this “amateur” format hits the mainstream? How long before Gartner’s BPM summit is competing with BPMcamp?

Who will mash?

Further to my post about enterprise mashup yesterday, I’ve been thinking about who in the BPM space will jump on the enterprise mashup bandwagon first.

In my Making BPM Mean Business course, I discuss the history of BPM, and I've noticed that BPM vendors who started on the workflow side of the house typically expand their capabilities through OEM agreements and partnerships (the "United Nations" approach), whereas those who started on the EAI side typically expand by building functionality in-house or buying a small company outright and submerging it into their product (the "world domination" approach). That could be because the pretty UI stuff that is usually developed for the human-facing workflow functionality is perceived as the "personality" of the BPM product, and everyone needs to author their own personality, or at least be perceived as being its author. (Okay, for a comment about technology, that's pretty philosophical.) There are lots of exceptions to this, but I find it holds true in many cases.

Does that mean that the BPM vendors with a workflow heritage are more likely to embrace the mashup concepts than their descended-from-EAI competitors? While the old guard thinks it over, the “nouveau BPM” vendors (who are built on web services from the ground up) are probably already demoing the integration of Yahoo Maps with back-office transaction processing, and rewriting their marketing materials to include the word “mashup”.

By the way, I signed up for MashupCamp, so if you’re headed there in February, look me up.

Mashing up the enterprise

I've spent the past few days mulling over the differences between mashups and the more traditional integration that's done with enterprise applications. My initial reaction? There are a lot more similarities than differences: in both cases, a third party uses published application interfaces to create functionality that integrates the capabilities of two or more applications/services. I know, that's a bit of a simplification, but how soon will it be before overly complex (and expensive) enterprise integrations start taking advantage of the lessons to be learned from the mashups of web services?
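In spirit, a mashup is nothing more than composing two published interfaces into new functionality. A toy sketch, with stub functions (and a made-up URL) standing in for the real services; an actual mashup would make HTTP calls to, say, a geocoding API and a mapping API:

```python
# Two stub "published interfaces" standing in for real web services.
def geocode(address: str) -> tuple:
    # A real geocoder would call out over HTTP; this just looks up a sample address.
    return {"1 Yonge St": (43.64, -79.37)}.get(address, (0.0, 0.0))


def map_url(lat: float, lon: float) -> str:
    # Hypothetical mapping endpoint, for illustration only.
    return f"https://maps.example.com/?lat={lat}&lon={lon}"


def branch_map_mashup(address: str) -> str:
    """New functionality created purely by wiring two services together."""
    lat, lon = geocode(address)
    return map_url(lat, lon)


link = branch_map_mashup("1 Yonge St")
print(link)
```

The enterprise integration version differs mainly in the ceremony around it (contracts, security, SLAs), not in the shape of the code.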

Obviously, I’m not the only one thinking about this: the ZDNet SaaS blog just posted about enterprise mashups, and what’s needed to make them a reality.

Makes me want to go to MashupCamp.

One last session

I'm cutting out early for my flight home, so I'm finishing the FileNet user conference with another BPM technical session, this one on process orchestration. This is a relatively new area for FileNet in terms of out-of-the-box functionality, and a bit behind the competitive curve, but they appear to be charging into the fray with strong functionality. Mike Marin, BPM product architect extraordinaire, walked us through the current state: the ability of a process to consume web services, and the ability to launch and control a process as a web service. Mike sits on a couple of standards boards and is pretty up-to-date on what's happening with the competition and future directions. Nothing here that I wasn't already aware of, although he provided some good technical insights into how it all works under the covers, as well as an excellent distinction between choreography and orchestration. He also talked about using web services as a method for federating process engine services, that is, allowing a process to span servers, which I think is absolutely brilliant. The same thing holds for invoking and being invoked by a process on a BPEL engine (like Oracle's), because it's just a web service interface.
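The federation idea is worth a sketch: if an engine exposes launch and status operations through a service-style interface, then any other engine (or BPEL engine) can drive it, and a process can span servers. This is a hypothetical toy, not FileNet's actual API; the "transport" here is just JSON strings standing in for web service messages.

```python
import json


class ProcessEngine:
    """Minimal sketch of a process engine exposing launch/status
    through a service-style interface (hypothetical, not FileNet's API)."""

    def __init__(self, name: str):
        self.name = name
        self.instances = {}
        self._next = 1

    def handle(self, request: str) -> str:
        # Stand-in for a web service endpoint: JSON request in, JSON response out.
        msg = json.loads(request)
        if msg["op"] == "launch":
            pid = f"{self.name}-{self._next}"
            self._next += 1
            self.instances[pid] = {"workflow": msg["workflow"], "state": "running"}
            return json.dumps({"pid": pid})
        if msg["op"] == "status":
            return json.dumps(self.instances[msg["pid"]])
        return json.dumps({"error": "unknown op"})


# A step running on engine A could launch a subprocess on engine B this way,
# federating the two engines behind nothing more than a service interface.
engine_b = ProcessEngine("B")
reply = json.loads(engine_b.handle(json.dumps({"op": "launch", "workflow": "credit-check"})))
status = json.loads(engine_b.handle(json.dumps({"op": "status", "pid": reply["pid"]})))
print(status["state"])
```

Because the caller only sees the service interface, it doesn't matter whether the remote engine is the same product, a competitor's, or a BPEL engine.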

Time to grab some lunch and head for the airport. Regular (non-UserNet) blogging resumes later this week.

BAM technical session

This seemed to be a morning for networking, and I’m arriving late for a technical session on FileNet’s BAM. I missed the hands-on session this morning so wanted to get a closer look at this before it’s released sometime in the next couple of months.

The key functional things in the product are dashboards, rules and alerts. The dashboard part is pretty standard BI presentation-layer stuff: pick a data source, pick a display/graph type, and position it on the dashboard. Rules are where the smarts come in: pick a data source, configure the condition for firing an alert, then set the content and recipient of the alert. Alerts can be displayed on the recipient’s dashboard, or sent as an email or SMS, or even launch other processes or services to handle an exception condition automatically.
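The rules part of that pipeline (data source in, condition evaluated, alert out) can be sketched in a few lines. The `Rule` structure, the threshold and the alert wording are all invented for illustration, not taken from FileNet's product:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    """A BAM-style rule: a condition over incoming metrics plus an alert action."""
    name: str
    condition: Callable[[dict], bool]   # fires when this returns True
    action: Callable[[dict], str]       # builds the alert content


def evaluate(rules: List[Rule], event: dict) -> List[str]:
    # Run every rule against the incoming metric event; collect fired alerts
    return [r.action(event) for r in rules if r.condition(event)]


backlog_rule = Rule(
    name="queue backlog",
    condition=lambda e: e["queue_depth"] > 100,
    action=lambda e: f"ALERT to supervisor: queue at {e['queue_depth']}",
)

alerts = evaluate([backlog_rule], {"queue_depth": 150})
print(alerts[0])
```

In a real BAM product the action would route the alert to a dashboard, email, SMS, or an exception-handling process rather than return a string.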

There’s a nice interface for configuring the dimensions (aggregations) in the underlying OLAP cubes, and for configuring buckets for running statistics. The data kept on the BAM server is cycled out pretty quickly: it’s really for tracking work in progress with just enough historical data to do some statistical smoothing.

Because they're using a third-party OEM product for BAM, it's open to other data sources being plugged into the server, used in the OLAP cubes, combined on the dashboards or used in the rules. However, this model adds yet another server, since it pulls pre-processed work-in-progress data from the Process Analyzer (so PA is still required), and since it maintains the cubes in memory, its memory requirement is hefty enough that it's probably not a good idea to co-locate it on a shared application server. I suppose that this demotes PA to a data mart for historical data as well as a pre-processor, which is not a completely bad thing, but I'm imagining that a full replacement for PA might be better received by the customers.

Rules, rules, rules

I consider rules (specifically, a BRE) to be pretty much essential as an adjunct to a BPMS these days. There’s a number of reasons for this:

– Rules are a lot more complex than you can implement in most BPMS, with the exception of rules-based systems like Pegasystems: FileNet's expression builder, for example, is not a replacement for a BRE, no matter how many times I hear that claim from their product marketing group. A BRE lets a business analyst create business rules in a declarative fashion, using the language of the business.

– Rules in a BRE can be used consistently from different process flows, and also from other applications such as CRM: anywhere in the organization that needs to apply that rule can be assured of using the same rule if they’re all calling the same BRE.

– Most importantly, in my opinion, is the ability to change business rules on work in progress. If you implement a business rule in FileNet’s expression builder at a step in the process, then once a process instance is kicked off, it can’t (easily) be changed: it will execute to completion based on the workflow, and hence rule, definition at the time that it was instantiated. If you instead call a BRE at a step in the workflow, then that call isn’t made until that step is executed, so the current definition of the business rule at that time will be invoked. This, in my opinion, is one of the best reasons to get your business rules out of FileNet and into a BRE, where they belong.
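The difference the last point makes is easy to show in miniature. Below, the BRE is just a dictionary of named rules (an obvious simplification, with invented names): one step snapshots the rule at instantiation, the other looks it up at execution time, so only the second picks up a rule change on work in progress.

```python
# Hypothetical BRE: a registry of named rules, consulted by reference.
bre = {"approval_limit": lambda amount: amount <= 1000}


class BakedInStep:
    """Rule copied into the process definition when the instance is launched."""
    def __init__(self):
        self.rule = bre["approval_limit"]  # snapshot taken at launch time

    def run(self, amount: int) -> bool:
        return self.rule(amount)


class BreStep:
    """Rule looked up in the BRE only when the step actually executes."""
    def run(self, amount: int) -> bool:
        return bre["approval_limit"](amount)


baked, live = BakedInStep(), BreStep()

# The business changes the rule while instances are in flight:
bre["approval_limit"] = lambda amount: amount <= 5000

print(baked.run(2000))  # False: the old limit still applies to this instance
print(live.run(2000))   # True: the new limit is picked up at execution time
```

Same process, same data, different answers, purely because of when the rule is bound.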

I finished the conference today in a session on BPM that is much too rudimentary for me (hence I'm blogging my thoughts on BREs), with not enough cover to dash for the door without being seen. It's finishing up with Carl Hillier doing a demo, which is always entertaining: he showed pictures of both his baby and his Porsche.

I also found out that FileNet commissioned the Economist to do a survey on process management; I’ll have my eyes open for that.

Hot BAM!

If there's anything better than hearing about a hot new product like FileNet's BAM, it's hearing it in Danny Pidutti's lovely Aussie accent. There are a few misconceptions in his presentation around the differences between BI and BAM; I see BAM as just a process-oriented subset of BI, although the real-time nature means that we're in the realm of operational BI, such as was discussed in an ebizQ webinar "Improving Business Visibility Through Operational BI" on Oct 27th (www.ebizq.net/webinars/6298.html according to my calendar, sorry for the lack of a direct hyperlink but that's the limits of blogging via Blackberry email) and an earlier one about operational BI on Oct 12th, although I can't recall who hosted it.

This looks like a pretty significant improvement on the old Process Analyzer: about 20 pre-configured reports, configurable role-based dashboards, KPIs for scorecard-like capabilities, alerts and other fun stuff. A bit of catch-up from a competitive standpoint, but FileNet's more known for solid technology than being the first to market these days.

The demo starts with a Celequest login screen, telling you who the OEM vendor is. At this point, it’s really a standard BI demo, showing how dashboards are configured, alerts set and related functions.

My only question is, what took you guys so long?

BPM SIG

I'm in the BPM special interest group session, which is much more sparsely attended than I expected, but it's just after lunch and people are still trickling in. The conversation is starting out a bit granular, with questions about some very specific functionality, although I suppose that's part of the goal.

Chris Preston just made a statement that the clear direction for interoperability is BPEL, which is definitely the right answer, although there's still a lot of issues around handling the human-facing steps in a process. Unfortunately, in the absence of any questions from the audience, he's off on a long rant about "re-engineering" using FileNet tools for process modelling, execution, analysis and simulation, which is a little too sales-y although he's doing his best to be consultative. He needs to encourage much more give-and-take with the audience rather than going into full oratory mode.

Minutes go by, and I’m really starting to wish that I sat closer to an escape route…

Fun with compliance

I spent some time this morning with the guys from BWise, which turned into a very informative session. Although FileNet has partnered with them primarily for their compliance solution, they do so much more in the entire area of internal controls. The compliance frameworks certainly are impressive, though. I’ll definitely be taking a closer look at this.

I'm currently sitting beside the pool at Caesar's Palace, and although I don't think that it's warm enough to be dressed the way that some people are (or aren't, to be more accurate), it's a nice respite from the conference crowds for a few minutes before I head back to the sessions. This morning's BPF hands-on session was so full that I didn't get near a computer (better to let the customers at them first), and I'm surprised that FileNet didn't anticipate this level of interest in the labs.

I’ve talked to a lot of UserNet first-timers, and they’re all a bit overwhelmed by the amount of information but seem to be getting a lot out of it in general.

Off to an afternoon of BPM and BAM sessions.