Integrating Information into the Process

How much information goes into a process, and how it is used, is a fundamental issue that I deal with every time I design a process: the usual tendency is to put too much information into the process itself (typically because the line-of-business system is so ugly that no one wants to deal with it directly), but that over-replication of data causes problems with synchronization and, potentially, performance. I’m a big fan of keeping the data in a process to a minimum as long as the other systems can be accessed in some way, and replicating data into the process for three reasons only:

  • Data elements that link this process instance to other systems, e.g., an account number. If you want other information about the account, I’d recommend that it come from the system of record, not be replicated into the BPM system.
  • Data elements that are passed or displayed to the agent executing a specific step in the process flow, whether a human user or another process/system, that allow them to execute the task at hand. This may be the same information as in the previous case, such as an account number, but may also be instructions such as “call the customer for the missing data on their application form”.
  • Data elements that are used to control process flow. For example, if transactions of greater than $1000 in value follow one path in the process flow, and those less than that follow another path, then you need to have a “transaction value” data element in your process.
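
To make these three categories concrete, here’s a minimal sketch in Python (not modelled on any particular BPMS, and the field names are my own invention) of a process instance payload that carries only this data, with everything else left in the systems of record:

```python
from dataclasses import dataclass, field

# Minimal process instance data: only the three categories described above.
@dataclass
class ProcessInstanceData:
    # 1. A key that links the instance to the system of record; any other
    #    account details are fetched from that system on demand, not replicated.
    account_number: str

    # 2. Instructions passed or displayed to the agent (human or system)
    #    executing the current step, e.g. "call the customer for the missing
    #    data on their application form".
    task_instructions: dict = field(default_factory=dict)

    # 3. Values used purely to control routing in the process flow.
    transaction_value: float = 0.0


def route(instance: ProcessInstanceData) -> str:
    """Flow-control decision: transactions over $1000 follow a different path."""
    return "high-value-path" if instance.transaction_value > 1000 else "standard-path"
```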

As Value Wizard points out in Process/information integration, too much information can be just as bad as too little, and can lead to delays and inefficiencies while a human operator sorts through a mass of data to find the one piece that they need to execute the task at hand. He notes that we need to be mapping the information architecture right along with the process and business architecture, and highlights the value of an enterprise architect in ensuring that the information (data) architecture is aligned with everything else.

Gartner’s BPM Suite Selection Criteria

Appian has a link to Gartner’s newly-published report on BPMS selection criteria here (free registration required). Gartner lists the 10 major areas of functionality used to develop the criteria as follows:

  • Human task support: Executing human-focused process steps
  • Business process/policy modeling and simulation environment
  • Pre-built frameworks, models, flows, rules and services
  • Human interface support and content management
  • Collaboration anywhere support
  • System task and integration support
  • Business activity monitoring (BAM)
  • Runtime simulation, optimization and predictive modeling
  • Business policy/rule management support
  • Real-time agility infrastructure support

For each of these, the report provides a description of the functionality and why it’s important, then provides a list of what to look for when you’re evaluating that particular functionality. For example, they have this to say about real-time agility:

We believe that making changes to process flows in real time is less important than making changes at the correct or relevant time.

A sentiment that I wholeheartedly agree with — in fact, I think that I said pretty much those exact words during my course earlier this week. They continue with several paragraphs of explanation about the factors involved and what to look for, then dish up their “factors to seek out” list, which includes things such as BPEL support, and early warning through rules and complex event pattern matching.

Excellent reading, and a very practical checklist for anyone evaluating BPMS.

Process isn’t going away

Tom Davenport posted last week about how process isn’t going away, in spite of some arguments against Six Sigma and process management. I couldn’t trace his reference to a specific entry on John Hagel’s blog (Tom, a direct reference to the post in question would be helpful!), but I agree with his assessment of Ross Mayfield’s The End of Process post as “silly”, mostly because Ross’ post assumes that business processes are static and (as Ethan points out in a comment on the post) confuses process and policy.

I especially like Tom’s last paragraphs on the balance between process and practice.

On the road again

I’m in Dallas for the first part of this week, delivering my 2-day Making BPM Mean Business course to a client. This is the second time for an end-to-end delivery of the course (the first time being for a group of FileNet customers at their user conference), and it’s quite a different sort of audience from the first delivery so I expect to learn more about what’s needed and what’s not.

I’m having a lot of fun with the teaching gig: I seem to have been teaching most of my life, from the 3rd-grade teacher having me run the spelling test because I knew all the words already (and she probably wanted to sneak out for a smoke), to substitute teaching first-year university algebra when I was an engineering graduate student, to teaching machine vision and image analysis courses for Learning Tree in the late 1980’s, to the teaching component of evangelism when I ran my own systems integration group and, later, worked for FileNet.

This is the first time that I’ve completely written a multi-day course as well as delivering it, but it turned out to be easier than I expected: the biggest problem was cutting out material to keep it to two days, and I still found myself having to cut one section on the fly during the first delivery because I really tend to get carried away talking about this stuff with customers. In particular, the first part of the course (a little more than one day) is entitled “Why Process Matters”, a topic on which I can rave passionately for hours on end.

Design and requirements

Some recent work for a client has me struggling over the distinction between requirements and design: not in my own mind, where it’s pretty clear, but in the minds of those who make no distinction between them, or attempt to substitute design for requirements.

First in any project are the business requirements, that is, a statement of what the business needs to achieve. For example, in a transaction processing environment (which is a lot of what I work with), a business requirement would be “the transactions must be categorized by transaction type, since the processing is different for each type”. Many business requirements and constraints of an existing organization are detected, not designed; from a Zachman standpoint, this is rows 1 and 2, although I think that a few of the models developed in row 2 (such as the Business Workflow Model) are actually functional design rather than requirements.

Second is the functional design, or functional specifications, that is, a statement of how the system will behave in order to meet the business requirements. For example, in our transaction processing example, the functional specification could be “the user selects the transaction type” or “the transaction type is detected from the barcode value”. Some references refer to these as “functional requirements”, which I think is where the problem in terminology lies: when I say “requirements”, I mean “business requirements”; when some people say “requirements”, they mean “functional requirements”, or rather, how the system behaves rather than what the business needs. This is further complicated by those who choose not to document the business requirements at all, but go directly to functional design, call it “requirements”, and gloss over the fact that the business requirements have been left undocumented. Personally, I don’t consider functional design to be requirements, I consider it to be design, to be performed by a trained designer, and based on the business requirements.

Last comes the technical design, about which there is usually little debate, although there are still way too many projects where this stage is at least partially skipped in favour of going straight from functional design to coding.

All this being said, it’s possible for a designer who is familiar with the requirements to internalize them sufficiently to do a good functional design, which in turn can produce a good technical design and a system that meets the current requirements. So what’s the problem with just skipping the documentation of business requirements and progressing straight to functional design? There are two problems, as I see it. First, there’s a big communication gap between the business and the technology: the technical designer understands what the system should do, but not why it should do it that way, so can’t make reasonable suggestions for modifications to the functional design if there appears to be a better way to implement the system. Second, the future agility of both the business processes and the technology is severely impacted, since it will be nearly impossible to determine whether a change to the technology will violate a business requirement, or to model how a changing business requirement will impact the technology implementation, because the requirements are not formally linked via the functional design to the technical design.
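
That agility argument becomes more tangible with even a rudimentary traceability structure. Here’s a hypothetical sketch (the identifiers and statements are invented for illustration, not from any real project) linking business requirements to functional and technical design, so the impact of a change at any layer can be walked through:

```python
# Hypothetical traceability links between the three layers discussed above.
traceability = {
    "BR-01": {  # business requirement: what the business needs
        "statement": "Transactions must be categorized by transaction type, "
                     "since the processing is different for each type.",
        "links": ["FD-07"],
    },
    "FD-07": {  # functional design: how the system will behave
        "statement": "The transaction type is detected from the barcode value.",
        "links": ["TD-12", "TD-13"],
    },
    "TD-12": {"statement": "Barcode decode service returns a type code.", "links": []},
    "TD-13": {"statement": "Type code selects the processing queue in the BPMS.", "links": []},
}


def impacted_by(item_id: str) -> list:
    """List everything downstream that a change to item_id could affect."""
    direct = traceability.get(item_id, {}).get("links", [])
    return direct + [i for child in direct for i in impacted_by(child)]


print(impacted_by("BR-01"))  # ['FD-07', 'TD-12', 'TD-13']
```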

A lot of the work that I do is centred around BPM, and although these principles aren’t specific to BPM, they’re particularly important on BPM projects, where the functional design can sometimes appear to be obvious to the subject matter experts (who are not usually trained designers) and a lot of paving of the cow paths occurs because of that.

When rules rule

Rolando of the BIZRULES blog shared his thoughts on the recent International Business Rules Forum: a major shift being that people are no longer asking what they should do with BR, but are asking what else they can do with BR. He also references an Intelligent Enterprise article on balancing the control of business rules between business and IT.

I’m finding BR to be a bit of an uphill battle with many of my clients, in spite of the clear benefits of integrating a BRE with BPM to provide more business control over processes. Some BPM vendors, such as Pegasystems, are built on a BRE so that functionality is there from the start; most others, however, have recently built integration to one or more BREs to provide equivalent functionality. In either case, the ability to execute a business-controlled rule at a point in a process makes a lot of sense for both business and technology reasons: in-flight rules changes (typically not possible if you define the rules directly in a BPM vendor’s process definition logic); increased control by the business over the business rules; more robust decisioning features; decoupling of complex and changeable business logic from the process execution engine; and the ability to reuse the same business rules in other systems unrelated to BPM, such as CRM.
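
As a rough illustration of that decoupling (a generic sketch, not any vendor’s API; the class, rule name, and threshold are all hypothetical), a process step can hand the decision to a separately managed rule rather than embedding the logic in the process definition:

```python
class RuleService:
    """Stand-in for an external business rules engine (BRE)."""

    def __init__(self):
        # Rules are owned by the business and can be changed without
        # redeploying the process definition, enabling in-flight changes.
        self._rules = {
            "underwriting.referral": lambda facts: facts["risk_score"] < 600,
        }

    def evaluate(self, rule_name: str, facts: dict) -> bool:
        return self._rules[rule_name](facts)


def underwriting_step(application: dict, rules: RuleService) -> str:
    """Process step: route based on a business-owned rule, not embedded logic."""
    if rules.evaluate("underwriting.referral", application):
        return "refer-to-underwriter"
    return "auto-approve"


# The same rule could be reused by systems unrelated to BPM, such as CRM.
print(underwriting_step({"risk_score": 540}, RuleService()))  # refer-to-underwriter
```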

The concept of business rules isn’t new, but in my practice (focused on financial services and insurance), I see it primarily in insurance underwriting. Given the rules-driven nature of financial businesses in general, there’s a lot more out there that could benefit from business rules, with or without BPM.

Adaptive approaches

Greg Wdowiak’s post on application integration and agility (as in agile development) includes a great comparison of plan-driven versus adaptive development. He rightly points out that both approaches are valid, but for different types of projects:

Adaptive approach provides for an early customer’s feedback on the product. This is critical for new product development where the customer and designers ideas may significantly differ, be very vague, or the kind of product that is being design has not been ever tried before; therefore, having the ability for the customers to ‘touch’ an approximation is very important if we want to build something useful.

That pretty much describes most development projects that I’m involved in…

The plan-driven approach allows for optimization of the project trajectory. The trajectory of adaptive approach is always suboptimal, but this is only apparent once the project is complete.

As this last quote from his post makes clear, the plan-driven approach works well for well-understood implementations, but not so well for the introduction of new technology/functionality into an organization. The plan-driven approach reduces logistical risks, whereas the adaptive approach reduces the risks of uncertain requirements and unknown technology.

One of the key advantages of adaptive development in introducing new technology is the delivery methodology: instead of a “big bang” delivery at the end, which often surprises the end-user by not delivering what they expected (even though it may have been what was agreed upon in the specifications), it offers incremental approximations of the final result which are refined at each stage based on end-user feedback.

So why isn’t the adaptive approach used for every new technology project? Alas, the realities of budgets and project offices often intervene: many corporate IT departments require rigid scheduling and costing that don’t allow for the fluidity required for adaptive development, for example, by requiring complete signed-off requirements before any development begins. Although it’s certainly possible to put a project plan in place for an adaptive development project, it doesn’t look the same as a “classical” IT project plan, so may not gain the acceptance required to acquire the budget. Also, if part of the development is outsourced, this type of rigid project planning is almost always used to provide an illusion of control over the project.

When a company just isn’t ready for the adaptive approach yet, but can be convinced that the plan-driven approach isn’t quite flexible enough, I propose a hybrid approach through some project staging: my mantra is “get something simpler in production sooner”. If I’m working with a BPM product, for example, my usual recommendation is to deploy out-of-the-box functionality (or nearly so) to allow the users to get their hands on the system and give us some real feedback on what they need, even if it means that they have to work around some missing functionality. In many cases, a lot of the OOTB functionality is completely acceptable to them, although the users may never have specified it in exactly the same manner. Once they’ve had a chance to see what’s available with a minimal amount of custom development, they can participate in informed discussions about where the development dollars are best spent.

This approach often puts me at odds with an outsourced development group: they want to maximize their development revenue from the client, whereas I want to keep the design simple and get something into production as soon as possible. I’ve had many cases in the past where I’ve worked as a subcontractor to a large systems integrator, and I almost always end up in that same conflict of interest, which explains why I usually try to work as a freelance designer/architect directly for the end customer, rather than as a subcontractor.

Ghosts of articles past

This was a shocker: I clicked on a link to test some search engines, typed in my own name as the search phrase, and one of the engines returned a link to articles that I wrote (or was interviewed for) back in 1991-1994. All of these were for Computing Canada, a free IT rag similar to today’s eWeek but aimed at the Canadian marketplace.

Back in 1992, I wrote about the trend of EIM (electronic image management, or what is now ECM/BPM) vendors moving from proprietary hardware and software components to open systems: Windows workstations, UNIX servers, and Oracle RDBMS, for example. This was right around the time that both FileNet and Wang were migrating off proprietary hardware, but both were still using customized or fully proprietary versions of the underlying O/S and DBMS. My predictions at that time (keep in mind that “open systems” was largely synonymous with “UNIX” in that dark period):

Future EIM systems will continue toward open platforms and other emerging industry standards. As the market evolves, expect the following trends.

  • More EIM vendors will offer systems consisting of a standard Unix server with MS Windows workstations. Some will port their software to third-party hardware and abandon the hardware market altogether.
  • EIM systems will increasingly turn to commercial products for such underlying components as databases and device drivers. Advances in these components can thus be more quickly incorporated into a system.
  • A greater emphasis will be placed on professional software integration services to customize EIM systems.

On an open platform, EIM systems will become part of a wider office technology solution, growing into an integral and seamless component of the corporate computing environment.

Okay, not so bad; I can confidently say that all that really happened in the intervening years. The prediction that some of the EIM vendors would abandon the specialized hardware market altogether still makes me giggle, since I can’t imagine Fuego or Lombardi, for example, building their own servers.

In another article that same year, I wrote about a client of mine where we had replaced the exchange of paper with the exchange of data: an early e-business application. I summarized with:

Whenever possible, exchange electronic data rather than paper with other departments or organizations. In our example, Canadian and U.S. offices will exchange database records electronically, eliminating several steps in document processing.

This was just not done on a small scale at that time: the only e-business applications were full-on, expensive EDI applications, and we had to create the whole underlying structure to support a small e-business application without using EDI. What I wouldn’t have given for BizTalk back then.

By early 1994, I’m not talking so much about documents any more, but about process. First, I propose this radical notion:

As graphical tools become more sophisticated, development of production workflow maps can be performed by business analysts who understand the business process, rather than by IT personnel.

I finish up with some advice for evaluating workflow management systems (WFMS), essentially the early BPM systems:

WFMS’ vary in their routing and processing capabilities, so carefully determine the complexity of the routing rules and processing methods you require. Typical minimum requirements include rules-based conditional branching, parallel routing and rendezvous, role assignments, work-in-process monitoring and re-assignment, deadline alerts, security and some level of integration with other applications.

I find this last paragraph particularly interesting, because I’m still telling this same story to clients today. The consolidation of the BPM marketplace to include all manner of systems from EAI to BPM and everything in between has led to a number of products that can’t meet one or more of these “minimum requirements”, even though they’re all called BPM.
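
Two of those “minimum requirements” (rules-based conditional branching, and parallel routing with a rendezvous) can be sketched in a few lines; this is a toy illustration rather than how any particular WFMS or BPMS actually implements them:

```python
import concurrent.futures


def conditional_branch(work_item: dict) -> str:
    """Rules-based conditional branching on a work item attribute."""
    return "exception-path" if work_item["amount"] > 1000 else "standard-path"


def parallel_with_rendezvous(work_item: dict) -> dict:
    """Parallel routing of two tasks, with a rendezvous (join) before continuing."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        credit = pool.submit(lambda: {"credit_check": "pass"})
        docs = pool.submit(lambda: {"docs_verified": True})
        # The rendezvous: wait for both branches before the next step proceeds.
        return {**work_item, **credit.result(), **docs.result()}
```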

I’m not sure whether being recognized as an expert in this field 14 years ago makes me feel proud to be one of the senior players, or makes me feel incredibly old!

Outstanding in Winnipeg

I understand that PR people have to write something in press releases, but this one today really made me laugh: ebizQ reports that HandySoft just installed their BizFlow BPM software at Cambrian Credit Union, “the largest credit union in Winnipeg”. You probably have to be Canadian for this to elicit spontaneous laughter; the rest of you can take note that Winnipeg is a city in the Canadian prairies with a population of about 650,000, known more for rail transportation and wheat than finance, and currently enjoying -10C and a fresh 30cm of snow that’s disrupting air travel. In fact, I spoke with someone in Winnipeg just this afternoon and he laughed at my previous post about my -20C boots, which he judged as woefully inadequate for any real walking about in The ‘Peg. Every one of my business-related trips to Winnipeg has been in the winter, when -50C is not unheard of, and although most of my clients there have been financial or insurance companies — and large ones — it’s not the first place that I think of when I think of financial centres where I would brag about installing the largest of anything.

Now this whole scenario isn’t as rip-roaringly funny as, for example, installing a system at the largest credit union in Saskatoon, but I have to admit that the hyperbole used in the press release completely distracted me from the point at hand, and has probably done a disservice to HandySoft. HandySoft may have done a fine job at Cambrian. They may have even written a great press release. But I didn’t get past the first paragraph where the big selling point was that the customer is the largest credit union in Winnipeg.

I sure hope that they’re not expecting any prospective customers to go on site visits there this time of year.

Update: an ebizQ editor emailed me within hours to say that they removed the superlative from the press release on their site. You can still find the original on HandySoft’s site here.