I look at James Kobielus’ blog once in a while to browse his insightful commentary on various technical subjects. I never expected poetry about content.
BAM technical session
This seemed to be a morning for networking, and I’m arriving late for a technical session on FileNet’s BAM. I missed the hands-on session this morning, so I wanted to get a closer look at this before it’s released sometime in the next couple of months.
The key functional things in the product are dashboards, rules and alerts. The dashboard part is pretty standard BI presentation-layer stuff: pick a data source, pick a display/graph type, and position it on the dashboard. Rules are where the smarts come in: pick a data source, configure the condition for firing an alert, then set the content and recipient of the alert. Alerts can be displayed on the recipient’s dashboard, or sent as an email or SMS, or even launch other processes or services to handle an exception condition automatically.
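To make the rule/alert model concrete, here’s a toy sketch in Python; the class names and fields are mine for illustration, not FileNet’s actual BAM API:

```python
# Hypothetical sketch of the data source -> condition -> alert model described above.
# These names are illustrative only, not FileNet's BAM API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Alert:
    message: str
    recipients: List[str]
    channels: List[str] = field(default_factory=lambda: ["dashboard"])  # or "email", "sms"

@dataclass
class Rule:
    source: str                         # name of the data source feeding the rule
    condition: Callable[[dict], bool]   # fires when this returns True for an event
    alert: Alert

def evaluate(rules: List[Rule], event: dict) -> List[Alert]:
    """Return the alerts triggered by a single work-in-progress event."""
    return [r.alert for r in rules if r.condition(event)]

# Example: alert a supervisor when any item sits in a queue for more than 4 hours.
stalled_items = Rule(
    source="claims_queue",
    condition=lambda e: e["queue_time_hours"] > 4,
    alert=Alert("Item stalled in queue", ["supervisor@example.com"], ["dashboard", "email"]),
)

print(evaluate([stalled_items], {"queue_time_hours": 6.5}))
```

In the real product, of course, the “launch another process” action would hand the exception off to the BPM engine rather than just printing an alert.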
There’s a nice interface for configuring the dimensions (aggregations) in the underlying OLAP cubes, and for configuring buckets for running statistics. The data kept on the BAM server is cycled out pretty quickly: it’s really for tracking work in progress with just enough historical data to do some statistical smoothing.
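The “just enough history for smoothing” idea amounts to a fixed-size rolling window per bucket, something like this minimal sketch (bucket names and window size are made up):

```python
# Rolling-window statistics per bucket: old work-in-progress data cycles out
# automatically, keeping only enough history for smoothing. Illustrative only.
from collections import defaultdict, deque
from statistics import mean

WINDOW = 50  # keep only the most recent 50 observations per bucket

buckets: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def record(bucket: str, cycle_time: float) -> None:
    buckets[bucket].append(cycle_time)   # oldest value drops off when the window is full

def smoothed_average(bucket: str) -> float:
    return mean(buckets[bucket]) if buckets[bucket] else 0.0

record("claims", 3.2)
record("claims", 4.8)
print(smoothed_average("claims"))   # 4.0
```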
Because they’re using a third-party OEM product for BAM, it’s open to other data sources, which can be plugged into the server, used in the OLAP cubes, combined on the dashboards or used in the rules. However, this model adds yet another server: it pulls pre-processed work-in-progress data from the Process Analyzer (so PA is still required), and since it maintains the cubes in memory, it has a hefty enough memory requirement that it’s probably not a good idea to co-locate it on a shared application server. I suppose that this demotes PA to a data mart for historical data as well as a pre-processor, which is not a completely bad thing, but I’m imagining that a full replacement for PA might be better received by the customers.
Rules, rules, rules
I consider rules (specifically, a business rules engine, or BRE) to be pretty much essential as an adjunct to a BPMS these days. There are a number of reasons for this:
– Rules can be a lot more complex than what you can implement in most BPMSs, with the exception of rules-based systems like Pegasystems: FileNet’s expression builder, for example, is not a replacement for a BRE, no matter how many times I hear that from their product marketing group. A BRE lets a business analyst create business rules in a declarative fashion, using the language of the business.
– Rules in a BRE can be used consistently from different process flows, and also from other applications such as CRM: anywhere in the organization that needs to apply that rule can be assured of using the same rule if they’re all calling the same BRE.
– Most important, in my opinion, is the ability to change business rules on work in progress. If you implement a business rule in FileNet’s expression builder at a step in the process, then once a process instance is kicked off, it can’t (easily) be changed: it will execute to completion based on the workflow, and hence rule, definition at the time that it was instantiated. If you instead call a BRE at a step in the workflow, that call isn’t made until the step is executed, so whatever the business rule’s current definition is at that moment will be invoked (sketched below). This, to my mind, is one of the best reasons to get your business rules out of FileNet and into a BRE, where they belong.
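Here’s a toy illustration of that last point (not FileNet or any particular BRE): a rule embedded at instantiation is frozen for in-flight work, while an external rules call picks up the latest definition when the step executes.

```python
# Toy contrast between an embedded rule (captured at process instantiation) and
# an external rules-engine call (evaluated when the step runs). Illustrative only.
from typing import Callable

# The "rules engine": the current rule definition, looked up at call time.
current_rules = {"max_auto_approve": 1000}

def bre_auto_approve(amount: float) -> bool:
    # Evaluated when the workflow step executes, so it always uses the latest rule.
    return amount <= current_rules["max_auto_approve"]

# The "embedded" rule: the threshold is captured when the process instance starts.
def make_embedded_rule() -> Callable[[float], bool]:
    threshold = current_rules["max_auto_approve"]   # frozen at instantiation
    return lambda amount: amount <= threshold

embedded_rule = make_embedded_rule()     # process instance kicked off here
current_rules["max_auto_approve"] = 500  # the business changes the rule mid-flight

print(embedded_rule(800))       # True  -- still using the old 1000 threshold
print(bre_auto_approve(800))    # False -- the BRE call sees the new 500 limit
```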
I finished the conference today in a session on BPM that was much too rudimentary for me (hence my blogging these thoughts on BRE), with not enough cover to dash for the door without being seen. It finished up with Carl Hillier doing a demo, which is always entertaining: he showed pictures of both his baby and his Porsche.
I also found out that FileNet commissioned the Economist to do a survey on process management; I’ll have my eyes open for that.
Hot BAM!
If there’s anything better than hearing about a hot new product like FileNet’s BAM, it’s hearing it in Danny Pidutti’s lovely Aussie accent. There are a few misconceptions in his presentation around the differences between BI and BAM; I see BAM as just a process-oriented subset of BI, although the real-time nature means that we’re in the realm of operational BI, such as was discussed in an eBizq webinar “Improving Business Visibility Through Operational BI” on Oct 27th (www.ebizq.net/webinars/6298.html according to my calendar, sorry for the lack of a direct hyperlink but that’s the limits of blogging via Blackberry email) and an earlier one about operational BI on Oct 12th, although I can’t recall who hosted it.
This looks like a pretty significant improvement on the old Process Analyzer: about 20 pre-configured reports, configurable role-based dashboards, KPIs for scorecard-like capabilities, alerts and other fun stuff. A bit of a catch-up from a competitive standpoint, but FileNet is better known for solid technology than for being first to market these days.
The demo starts with a Celequest login screen, telling you who the OEM vendor is. At this point, it’s really a standard BI demo, showing how dashboards are configured, alerts set and related functions.
My only question is, what took you guys so long?
Fun with compliance
I spent some time this morning with the guys from BWise, which turned into a very informative session. Although FileNet has partnered with them primarily for their compliance solution, they do so much more in the entire area of internal controls. The compliance frameworks certainly are impressive, though. I’ll definitely be taking a closer look at this.
I’m currently sitting beside the pool at Caesar’s Palace, and although I don’t think that it’s warm enough to be dressed the way that some people are (or aren’t, to be more accurate), it’s a nice respite from the conference crowds for a few minutes before I head back to the sessions. This morning’s BPF hands-on session was so full that I didn’t get near a computer (better to let the customers at them first), and I’m surprised that FileNet didn’t anticipate this level of interest in the labs.
I’ve talked to a lot of UserNet first-timers, and they’re all a bit overwhelmed by the amount of information but seem to be getting a lot out of it in general.
Off to an afternoon of BPM and BAM sessions.
Keeping busy
I’ve been snowed under with finishing the first version of “Making BPM Mean Business”, to be premiered next month at FileNet’s user conference in Las Vegas, as well as a few other presentations and some coursework for a client on enterprise architecture.
I’ve also been spending some time on my new technology acquisitions: an HP tablet PC and a new Blackberry, replacing some ancient stuff from Nokia, RIM and Compaq that steadfastly refuses to die. The new convertible tablet is great and will be very useful for the BPM course and other presentations: I miss the old days of transparencies when I could write on the slides, and being able to annotate in digital ink is the next best thing. I’ve taken it for a few test drives, but the two-day course will be the real challenge. The new Blackberry is a dream: phone and PDA in one, which reduces the electronic clutter to a minimum, and much better geographic coverage for email. Since I use it primarily for email, and only have the phone functionality because one must have a mobile these days, the PDA format (rather than the phone format that RIM also offers) works best for me. My only beef: the holster that comes with it looks like something that Batman would wear on his belt (not my style), and I need something without a clip to slip into my purse; the simple slipcover with the magnet in the right place to let the device register itself as “holstered” set me back $40.
Dumbing down outsourcing
Methinks the simplification of the workplace has just gone too far: The Complete Idiot’s Guide to Successful Outsourcing.
Fractured Language
Yesterday, I was finishing off a presentation for a talk that I’ll be giving next month about corporate performance management, including some of the analytics tools that are used to build things like executive dashboards to display the key performance indicators of a company’s operations as charts and dials. Two tools/metaphors are used a lot: dashboards and scorecards, which both do exactly as they sound. Unfortunately, in my research I found at least one vendor of these products who verbs the nouns, and refers to “dashboarding” and “scorecarding” as the activities of creating these things for a company. Blech.
I felt better after this morning’s daily dose of Savage Chickens.
The disappearing blog
I hate it when a blog that I read semi-regularly just vanishes off the face of the ‘net. I commented back in June about vendors starting to blog, and I had my finger on the RSS pulse of CommerceQuest’s blog in spite of some of the blatant self-promotion. Today, however, Metastorm and CommerceQuest have merged under the Metastorm name, the CommerceQuest site is redirected to Metastorm, and the blog is gone. All that I have left are a few crumbs in Bloglines.
Business discontinuity
I’ve been developing something recently for a customer on business continuity planning, and it put me in mind of how a former customer handled a disaster without the benefit of much BCP.
It was the ice storm of 1998, and this small financial company had their main processing site in downtown Montreal. Although the storm started on Monday (and continued for a week), power didn’t start failing until late in the week, and my customer didn’t lose their power until Friday afternoon. It quickly became obvious that the power was not coming back any time soon, which created a problem of unprocessed transactions: since they process mutual fund trades, many trades were already entered in the system, but would not be priced and processed until the overnight batch run. That weekend, the CIO and VP of IT decided to take action. They sneaked into the building (by now, the city was being patrolled by the military to deter looters), climbed 30 floors to their offices, disconnected the main server, and carried it on their backs down the 30 flights of stairs. They returned to one of their own homes, which still had power and telephone service, downloaded the pricing data and did the overnight batch run to process the trades. They then packed up the server in a van, drove it to Toronto — an interesting drive given that the main highways were all closed by now — and installed it in their small sales office there. They arranged for the toll-free customer service lines to be rerouted from their Montreal office to Toronto, added a few temporary and relocated staff to handle the phones, and were up and running by 6am on Monday. They were missing a few pieces, such as that nice imaging and workflow system that I had put in for them, but they were operational with effectively no interruption to their customers.
I remember laughing about this whole scenario when I heard it from the CIO shortly after that, and I definitely thought that these guys were heroes. In retrospect, if that same scenario had gone down today, someone would have been fired for a serious lack of business continuity planning. They suffered from a very common view of continuity planning that exists widely even today: a fatalistic sense that the risks are so low that it’s not worth planning for such a disaster. Given that the last ice storm of that magnitude to hit Montreal was almost 40 years before that, I can understand that view with regard to weather disasters, but there are other ways to put your business out of commission; for example, Quebec’s history of domestic terrorism can’t be ignored in this post-9/11 world.
In other words, the potential for business discontinuity exists even if you don’t live on a fault line or in the path of major hurricanes: there are enough man-made disasters to keep us all awake at night. It’s no longer possible to ignore BCP, or claim that you can’t plan for something that might never happen. The question to ask yourself when budgeting for business continuity is “how long can we afford to be down?”