Through a fog of BPM standards

If you’re still confused about BPM standards, this article by Bruce Silver at BPMInstitute.org may not help much, but it’s a start at understanding both modelling and execution languages, including BPMN, UML, XPDL and BPEL, and how they all fit together (or don’t fit together, in most cases). I’m not sure of the age of the article, since it predates the OMG-BPMI merger that happened a few months ago, but I just saw it referenced on David Ogren’s BPM Blog and it caught my attention. David’s post is worth reading as a summary, but may be influenced by his employer’s (Fuego’s) product, especially his negative comments on BPEL.

A second standards-related article of interest appeared on BPTrends last week authored by Paul Harmon. Harmon’s premise is that organizations can’t be process-oriented until managers visualize their business processes as process diagrams — something like not being able to be truly fluent in a spoken language until you think in that language — and that a common process modelling notation (like BPMN) must be widely known in order to foster communication via that notation.

That idea has a lot of merit; he uses the example of a common financial language (such as “balance sheet”), but it made me think about project management notation. I’m the last person in the world to be managing a project (I like to do the creative design and architecture stuff, not the managing of project schedules), but I learned critical path methods and notation back in university, including how to do the calculations by hand, and those same terms and techniques now show up in popular products such as MS-Project. Without these common terms (such as “critical path”) and the visual notation popularized by MS-Project, project management would be in a much bigger mess than it is today.
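For anyone who never had to do those hand calculations, here’s a minimal sketch of the forward-pass arithmetic behind the critical path; the task names and durations are invented for illustration.

```python
# Critical path method (CPM) forward pass over a toy task graph.
# Earliest finish of a task = its duration + the latest earliest-finish
# of its predecessors; the project length is the largest of these.

durations = {"A": 3, "B": 2, "C": 4, "D": 1}                  # task -> days
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

earliest_finish = {}

def finish(task):
    if task not in earliest_finish:
        earliest_finish[task] = durations[task] + max(
            (finish(p) for p in predecessors[task]), default=0)
    return earliest_finish[task]

project_length = max(finish(t) for t in durations)
print(project_length)   # 8 days, via the critical path A -> C -> D
```

The longest path through the dependency graph is the critical path: let any task on it slip, and the whole project slips with it.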

The related effect in the world of BPM is that the sooner we all start speaking the same language (BPMN), the sooner we can model our processes in a consistent fashion that’s understood by all, and therefore the sooner we all start thinking in BPMN instead of some ad hoc graphical notation (or, even worse, a purely textual description of our processes). There are a number of modelling tools, as well as designer modules within various BPMSs, that allow you to model in BPMN these days; there are even Visio templates available online that let you model in BPMN in that environment if you’re not ready for a full repository-based modelling environment. No more excuses.

Proforma Enterprise Architecture webinar

I’ve just finished viewing a webinar put on by Proforma about building, using and managing an enterprise architecture, featuring David Ritter, Proforma’s VP of Enterprise Solutions. He came out of the EA group at United Airlines, so he really knows how this stuff works, which is a nice change from the usual vendor webinars where they need to bring in an outside expert to lend some real-world credibility to their message. He spent a full 20 minutes up front giving an excellent background on EA before moving on to their ProVision product, then walked through a number of the models used for modelling strategic direction, business architecture, system (application and data) architecture and technology architecture. More importantly, he showed how the EA artifacts (objects or models) are linked together, and how they interact: how a workflow model links to a data model and a network model, for example. He also went through an EA benefits model based on some work by Mary Knox at Gartner, showing where the different types of architecture fit on the benefits map.
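As a toy illustration of that linking idea (this is a sketch of the concept, not ProVision’s actual model), think of each artifact as a node with links to the others, so that a change to one model can be traced to everything it touches:

```python
# A made-up repository of cross-linked EA artifacts: a workflow model
# linked to the data and network models it depends on.

artifacts = {
    "claims-workflow": {"type": "workflow", "links": ["claims-db", "branch-lan"]},
    "claims-db":       {"type": "data",     "links": ["claims-workflow"]},
    "branch-lan":      {"type": "network",  "links": ["claims-workflow"]},
}

def impact(name):
    """Every artifact reachable from `name` -- a crude impact analysis."""
    seen, stack = set(), [name]
    while stack:
        a = stack.pop()
        if a not in seen:
            seen.add(a)
            stack.extend(artifacts[a]["links"])
    return seen - {name}

print(impact("claims-db"))   # changing the data model touches these
```

The payoff of keeping the links in a repository rather than in people’s heads is exactly this kind of traceability.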

After the initial 30 minutes of “what is EA” and “what is ProVision”, he dug into a more interesting topic: how to use and manage EA within your organization. I loved one diagram that he showed about where EA governance belongs.

This reinforces what I’ve been telling people about how EA isn’t the same as IT architecture, and it can’t be “owned” by IT. He also showed the results of a survey by the Institute for Enterprise Architecture Developments, which indicates that the two biggest reasons why organizations are implementing EA are business-IT alignment (21%) and business change (17%): business reasons, not IT, are driving EA today. Even Gartner Group, following their ingestion of META Group and its robust EA practice earlier this year, has a Golden Rule of the New Enterprise Architecture that reflects this — “always make technology decisions based on business principles” — and goes on to state that by 2010, companies that have not aligned their technology with their business strategy will no longer be competitive.

Some of this information is available on the Proforma website as white papers (such as the benefits map), and some is from analyst reports. With any luck, the webinar will be available for replay soon.

Outstanding in Winnipeg

I understand that PR people have to write something in press releases, but this one today really made me laugh: ebizQ reports that HandySoft just installed their BizFlow BPM software at Cambrian Credit Union, “the largest credit union in Winnipeg”. You probably have to be Canadian for this to elicit spontaneous laughter; the rest of you can take note that Winnipeg is a city in the Canadian prairies with a population of about 650,000, known more for rail transportation and wheat than finance, and currently enjoying -10C and a fresh 30cm of snow that’s disrupting air travel. In fact, I spoke with someone in Winnipeg just this afternoon and he laughed at my previous post about my -20C boots, which he judged as woefully inadequate for any real walking about in The ‘Peg. Every one of my business-related trips to Winnipeg has been in the winter, when -50C is not unheard of, and although most of my clients there have been financial or insurance companies — and large ones — it’s not the first place that I think of when I think of financial centres where I would brag about installing the largest of anything.

Now this whole scenario isn’t as rip-roaringly funny as, for example, installing a system at the largest credit union in Saskatoon, but I have to admit that the hyperbole used in the press release completely distracted me from the point at hand, and has probably done a disservice to HandySoft. HandySoft may have done a fine job at Cambrian. They may have even written a great press release. But I didn’t get past the first paragraph where the big selling point was that the customer is the largest credit union in Winnipeg.

I sure hope that they’re not expecting any prospective customers to go on site visits there this time of year.

Update: an ebizQ editor emailed me within hours to say that they removed the superlative from the press release on their site. You can still find the original on HandySoft’s site here.

BPM en français

Although schooled in Canada, where we all have to learn some degree of French, my French is dodgy at best (although, in my opinion, it tends to improve when I’ve been drinking). However, I noticed that my blog appeared on the blogroll of a French BPM blog that just started up, and I’ve been struggling through the language barrier to check it out. There’s no information on the author, but he (?) instantly endeared himself to me when I read the following among his reasons for starting the blog:

le marketing bullshit est omniprésent (“marketing bullshit is everywhere”)

Isn’t that just too true in any language?

More on vendor blogs

I usually don’t put too much stock in BPM vendor blogs. First of all, there aren’t a lot of them (or at least, not a lot that I’ve seen), since I imagine that getting official sanction for writing a blog about your product or company becomes exponentially more difficult as your company gets larger. Secondly, they can disappear rather suddenly in this era of mergers and acquisitions. Thirdly, anybody who works for a vendor and has something interesting to say is probably too busy doing other things, like building the product, to spend much time blogging. And lastly, they’re always a bit self-promotional, even when they’re not a blatant PR/marketing soapbox. (Yes, I know, my blog is self-promotional, but I am my own PR and marketing department, so I’m required to do that, or I’d have to fire myself.)

I’ve been keeping an eye on Phil Gilbert’s blog — he’s the CTO at Lombardi. I don’t know him personally, although I’ve been seeing and hearing a lot about their product lately. He wrote a post last week about “BPM as a platform” that every BPM vendor and customer should read, because it tells it like it is: the days of departmental workflow/BPM systems are past, and it’s time to grow up and think about this as part of your infrastructure. In his words:

Further, while it is a platform, it is built to handle and give visibility to processes of all sizes – from human workflows to complex integration and event processing. Choosing to start down the “process excellence” path may very well start with a simple process – therefore it’s not a “sledgehammer for a nail.” It’s a “properly sized hammer for the nail” built on a solid foundation that allows many people to be building (hammering) at once. And because of this, it scales very well from an administrative perspective. You can build one process, or you can build twenty. Sequentially, or all at once. Guess what? The maintenance of the platform is identical!

He also talks about how the real value of BPM isn’t process automation, it’s the data that the BPMS captures about the process along the way, which can then feed back into the process/performance improvement cycle and provide far more improvement than the original process automation.
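To make that concrete, here’s a toy sketch of the kind of feedback loop he’s describing: mining the per-step timing data that a BPMS records anyway to find the real bottleneck. The field names and numbers are invented.

```python
# Aggregate per-step durations from the kind of event log a BPMS keeps.
from collections import defaultdict
from statistics import mean

# (instance_id, step, hours_taken) for each completed step
events = [
    (1, "review",  4.0), (1, "approve", 1.0),
    (2, "review", 20.0), (2, "approve", 1.5),
    (3, "review",  5.0), (3, "approve", 0.5),
]

by_step = defaultdict(list)
for _, step, hours in events:
    by_step[step].append(hours)

for step, hours in by_step.items():
    print(f"{step}: mean {mean(hours):.1f}h, worst {max(hours):.1f}h")
# "review" is the bottleneck: that's where the next round of process
# improvement should go, and automating elsewhere wouldn't reveal it.
```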

He takes an unnecessary jab at Pegasystems (“the best BPM platforms aren’t some rules-engine based thing”), which probably indicates where Lombardi is getting hit from a competitive standpoint, and the writing’s a bit stilted, but that shows that it’s really coming from him, not being polished by a handler before it’s released. And the fact that the blog’s on Typepad rather than hosted on the Lombardi site is also interesting: it makes at least a token statement of independence on his part.

Worth checking out.

Planning for Disaster

I just bought a new pair of winter boots, guaranteed waterproof and warm to -20C; I stood in the store and swore to the sales clerk that I was not going to have cold, wet feet this year (I probably sounded a bit melodramatic, like Scarlett O’Hara declaring that she’d never be hungry again). For those of you who have never been to Toronto, you may not realize that some people make it through the winter without proper boots, just by avoiding the great outdoors on the few days when it is really cold or snowy. We only have a few weeks each winter as cold as -20; we only get a few big snowstorms; most of the snow usually melts within a day or two; and many days hover around the freezing mark so the bigger danger is cold slush leaking into your boots rather than the frigid air. However, every few years we have a colder-than-usual winter, or mounds of snow — like a few years back when a metre of the white stuff fell in two days, closing the city and causing sightings of cross-country skiers in the downtown financial district — and many people (including myself) aren’t properly prepared for it.

In my case, business still has to go on: being self-employed, I can’t just stay inside when the weather is foul, but have to get out there and continue with my day-to-day business of seeing clients and whatever other activities are on my schedule. In other words, the “weather event” occurs, and my business continues, although in a somewhat uncomfortable and restricted manner. There are many natural disasters that are a much greater challenge to business continuity, like the tsunamis, hurricanes and earthquakes that we’ve seen all over the world in the past year, in addition to manmade disasters and even biological events like a flu pandemic: a recent article in the Economist (subscription required) states that Gartner has advised their clients to consider the effect of 30% of their staff not showing up for work due to the flu, which would certainly fall into the “disaster” category for many businesses.

I spoke briefly about business continuity and BPM at a conference last week, and am doing a more comprehensive analysis for a client in the upcoming months. For me, it comes back to one of Tom Davenport’s nine steps to process innovation: geographical, or more specifically, location independence. BPM is one of the key technologies that can allow a process, or part of a process, to be located anywhere in the world, as long as the communications infrastructure and trained local staff exist. This has been a large driver behind the move to business process outsourcing, a controversial trend that is rejected outright by many organizations, but many people miss the fact that outsourcing also provides some level of business continuity: if you can move some of your business processes to a remote location, then you can just as easily run them at two locations, so that there’s a fallback when the unforeseen happens. I’m not talking about replicating systems here — that part’s relatively straightforward, although expensive — I’m talking about what is often forgotten by the IT disaster recovery team: people. If you have a single site where your human-facing business processes take place and something happens at that site, what’s your plan? Where do your people work in the event of a physical site disaster? How do you reach them to coordinate them? Can you easily reroute client communications (phone, email, postal mail) to the new location? Are people trained at all locations to handle all processes? Can you reroute only part of the process if you have a partial failure at your main site?

Earthquakes are going to happen on the Pacific Rim, hurricanes are going to happen in the southern US, and it’s going to snow in Toronto. I’ve got my boots; are you ready?

One last session

I’m cutting out early for my flight home, so I’m finishing the FileNet user conference with another BPM technical session, this one on process orchestration. This is a relatively new area for FileNet in terms of out-of-the-box functionality, and a bit behind the competitive curve, but they appear to be charging into the fray with strong capabilities. Mike Marin, BPM product architect extraordinaire, walked us through the current state: the ability of a process to consume web services, and the ability to launch and control a process as a web service. Mike sits on a couple of standards boards and is pretty up-to-date on what’s happening with the competition and future directions. Nothing here that I wasn’t already aware of, although he provided some good technical insights into how it all works under the covers, as well as an excellent distinction between choreography and orchestration. He also talked about using web services as a method for federating process engine services, that is, allowing a process to span servers, which I think is absolutely brilliant. The same thing holds for invoking and being invoked by a process on a BPEL engine (like Oracle’s), because it’s just a web service interface.
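As a rough sketch of what “launch a process as a web service” looks like from the calling side, here’s a hand-built SOAP call; the endpoint, operation and field names are invented for illustration, not FileNet’s actual WSDL.

```python
# Kick off a process instance by POSTing a SOAP envelope to a
# (hypothetical) BPM web service endpoint.
import requests

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <StartProcess xmlns="urn:example:bpm">
      <ProcessName>ClaimsApproval</ProcessName>
      <Parameter name="claimId">12345</Parameter>
    </StartProcess>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "http://bpm-server.example.com/ws",              # invented endpoint
    data=envelope,
    headers={"Content-Type": "text/xml",
             "SOAPAction": "urn:example:bpm/StartProcess"},
)
print(response.status_code)
```

The federation point follows directly: if launching a process is just a web service call, then one engine can invoke a process fragment on another engine (or on a BPEL engine) the same way it invokes any other service.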

Time to grab some lunch and head for the airport. Regular (non-UserNet) blogging resumes later this week.

BAM technical session

This seemed to be a morning for networking, and I’m arriving late for a technical session on FileNet’s BAM. I missed the hands-on session this morning so wanted to get a closer look at this before it’s released sometime in the next couple of months.

The key functional things in the product are dashboards, rules and alerts. The dashboard part is pretty standard BI presentation-layer stuff: pick a data source, pick a display/graph type, and position it on the dashboard. Rules are where the smarts come in: pick a data source, configure the condition for firing an alert, then set the content and recipient of the alert. Alerts can be displayed on the recipient’s dashboard, or sent as an email or SMS, or even launch other processes or services to handle an exception condition automatically.
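A bare-bones sketch of that rule/alert pattern, with all names invented: each rule is a condition over a metric plus an action to fire when the condition trips.

```python
# Evaluate alert rules against a snapshot of current metrics.
rules = [
    {"metric": "queue_depth", "test": lambda v: v > 100,
     "alert": "Queue backlog over 100: notify supervisor"},
    {"metric": "avg_wait_min", "test": lambda v: v > 30,
     "alert": "Average wait over 30 minutes: launch escalation process"},
]

def evaluate(snapshot):
    """Yield the alert for every rule whose condition holds."""
    for rule in rules:
        value = snapshot.get(rule["metric"])
        if value is not None and rule["test"](value):
            yield rule["alert"]   # real BAM: dashboard, email, SMS, or a process launch

for alert in evaluate({"queue_depth": 250, "avg_wait_min": 12}):
    print(alert)
```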

There’s a nice interface for configuring the dimensions (aggregations) in the underlying OLAP cubes, and for configuring buckets for running statistics. The data kept on the BAM server is cycled out pretty quickly: it’s really for tracking work in progress with just enough historical data to do some statistical smoothing.
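That cycling-out behaviour is essentially a rolling window; here’s a minimal sketch of a running-statistics bucket that keeps only the most recent samples, which is my understanding of what the BAM server is doing.

```python
# A fixed-size bucket: old samples fall out as new ones arrive,
# leaving just enough history for statistical smoothing.
from collections import deque
from statistics import mean

bucket = deque(maxlen=5)   # keep only the last 5 samples

for sample in [12, 15, 11, 40, 13, 14, 16]:
    bucket.append(sample)
    print(f"sample={sample:>3}  smoothed={mean(bucket):.1f}")
```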

Because they’re using a third-party OEM product for BAM, it’s open: other data sources can be plugged into the server, used in the OLAP cubes, combined on the dashboards, or referenced in the rules. However, this model adds yet another server, since it pulls pre-processed work-in-progress data from the Process Analyzer (so PA is still required), and it has a hefty enough memory requirement, since it maintains the cubes in memory, that it’s probably not a good idea to co-locate it on a shared application server. I suppose that this demotes PA to a data mart for historical data as well as a pre-processor, which is not a completely bad thing, but I imagine that a full replacement for PA might be better received by customers.

Rules, rules, rules

I consider rules (specifically, a BRE) to be pretty much essential as an adjunct to a BPMS these days. There are a number of reasons for this:

– Rules can be a lot more complex than anything you can implement in most BPMSs, with the exception of rules-based systems like Pegasystems: FileNet’s expression builder, for example, is not a replacement for a BRE, no matter how many times I hear that from their product marketing group. A BRE lets a business analyst create business rules in a declarative fashion, using the language of the business.

– Rules in a BRE can be used consistently from different process flows, and also from other applications such as CRM: anywhere in the organization that needs to apply that rule can be assured of using the same rule if they’re all calling the same BRE.

– Most important, in my opinion, is the ability to change business rules on work in progress. If you implement a business rule in FileNet’s expression builder at a step in the process, then once a process instance is kicked off, it can’t (easily) be changed: it will execute to completion based on the workflow, and hence rule, definition at the time that it was instantiated. If you instead call a BRE at a step in the workflow, then that call isn’t made until that step is executed, so the current definition of the business rule at that time will be invoked. This is one of the best reasons to get your business rules out of FileNet and into a BRE, where they belong; the sketch below shows the difference.
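Here’s a toy illustration of that binding difference, with a dict standing in for a real rules engine: a rule baked into the process definition is frozen at instantiation, while a rule fetched from a BRE is resolved when the step actually runs.

```python
bre = {"approval_limit": 1000}       # stand-in for a real rules engine

def make_baked_in_step():
    limit = bre["approval_limit"]    # snapshot taken at instantiation
    return lambda amount: amount <= limit

def bre_step(amount):
    return amount <= bre["approval_limit"]   # looked up at execution time

baked = make_baked_in_step()         # a process instance is kicked off...
bre["approval_limit"] = 5000         # ...then the business changes the rule

print(baked(2000))     # False: in-flight work still uses the old rule
print(bre_step(2000))  # True: the BRE call picks up the new rule
```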

I finished the conference today in a session on BPM that was much too rudimentary for me (which is why I’m blogging my thoughts on BREs instead), with not enough cover to dash for the door without being seen. It finished up with Carl Hillier doing a demo, which is always entertaining: he showed pictures of both his baby and his Porsche.

I also found out that FileNet commissioned the Economist to do a survey on process management; I’ll keep an eye out for it.