Process isn’t going away

Tom Davenport posted last week about how process isn’t going away, in spite of some arguments against Six Sigma and process management. I couldn’t trace his reference to a specific entry on John Hagel’s blog (Tom, a direct reference to the post in question would be helpful!), but I agree with his assessment of Ross Mayfield’s The End of Process post as “silly”, mostly because Ross’ post assumes that business processes are static and (as Ethan points out in a comment on the post) confuses process and policy.

I especially like Tom’s last paragraphs on the balance between process and practice.

On the road again

I’m in Dallas for the first part of this week, delivering my 2-day Making BPM Mean Business course to a client. This is the second time for an end-to-end delivery of the course (the first time being for a group of FileNet customers at their user conference), and it’s quite a different sort of audience from the first delivery so I expect to learn more about what’s needed and what’s not.

I’m having a lot of fun with the teaching gig: I seem to have been teaching most of my life, from the 3rd-grade teacher having me run the spelling test because I knew all the words already (and she probably wanted to sneak out for a smoke), to substitute teaching first-year university algebra when I was an engineering graduate student, to teaching machine vision and image analysis courses for Learning Tree in the late 1980s, to the teaching component of evangelism when I ran my own systems integration group and, later, worked for FileNet.

This is the first time that I’ve completely written a multi-day course as well as delivering it, but it turned out to be easier than I expected: the biggest problem was cutting out material to keep it to two days, and I still found myself having to cut one section on the fly during the first delivery because I really tend to get carried away talking about this stuff with customers. In particular, the first part of the course (a little more than one day) is entitled “Why Process Matters”, a topic on which I can rave passionately for hours on end.

Design and requirements

Some recent work for a client has me struggling over the distinction between requirements and design: not in my own mind, where it’s pretty clear, but in the minds of those who make no distinction between them, or attempt to substitute design for requirements.

First in any project are the business requirements, that is, a statement of what the business needs to achieve. For example, in a transaction processing environment (which is a lot of what I work with), a business requirement would be “the transactions must be categorized by transaction type, since the processing is different for each type”. Many business requirements and constraints of an existing organization are detected, not designed; from a Zachman standpoint, this is rows 1 and 2, although I think that a few of the models developed in row 2 (such as the Business Workflow Model) are actually functional design rather than requirements.

Second is the functional design, or functional specifications, that is, a statement of how the system will behave in order to meet the business requirements. For example, in our transaction processing example, the functional specification could be “the user selects the transaction type” or “the transaction type is detected from the barcode value”. Some references refer to these as “functional requirements”, which I think is where the problem in terminology lies: when I say “requirements”, I mean “business requirements”; when some people say “requirements”, they mean “functional requirements”, or rather, how the system behaves rather than what the business needs. This is further complicated by those who choose not to document the business requirements at all, but go directly to functional design, call it “requirements”, and gloss over the fact that the business requirements have been left undocumented. Personally, I don’t consider functional design to be requirements: it’s design, to be performed by a trained designer and based on the business requirements.

Lastly is the technical design, about which there is usually little debate, although there are still far too many projects where this stage is at least partially skipped in favour of going straight from functional design to coding.

All this being said, it’s possible for a designer who is familiar with the requirements to internalize them sufficiently to do a good functional design, which in turn can produce a good technical design and a system that meets the current requirements. So what’s the problem with just skipping the documentation of business requirements and progressing straight to functional design? There are two problems, as I see it. First, there’s a big communication gap between the business and the technology. The technical designer understands what the system should do, but not why it should do it that way, so can’t make reasonable suggestions for modifications to the functional design if there appears to be a better way to implement the system. Second, the future agility of both the business processes and the technology is severely impacted, since it will be nearly impossible to determine if a change to the technology will violate a business requirement, or to model how a changing business requirement will impact the technology implementation, since the requirements are not formally linked via a functional design to the technical design.
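That formal linkage from business requirements through functional design to technical design can be sketched as a simple traceability structure. This is purely illustrative (all class and field names are my own invention, not any particular tool’s model), using the transaction-type example from above:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessRequirement:
    req_id: str
    statement: str  # what the business needs to achieve

@dataclass
class FunctionalSpec:
    spec_id: str
    behaviour: str  # how the system will behave
    satisfies: list = field(default_factory=list)  # business requirement IDs

@dataclass
class TechnicalComponent:
    comp_id: str
    implements: list = field(default_factory=list)  # functional spec IDs

def impacted_requirements(component, specs):
    """Walk back from a technical component, through the functional
    design, to the business requirements it ultimately serves."""
    impacted = set()
    for spec in specs:
        if spec.spec_id in component.implements:
            impacted.update(spec.satisfies)
    return impacted

# The transaction-type example from the text:
br1 = BusinessRequirement("BR-1", "Transactions must be categorized by type, "
                                  "since processing differs by type.")
fs1 = FunctionalSpec("FS-1", "Transaction type is detected from the barcode value.",
                     satisfies=["BR-1"])
tc1 = TechnicalComponent("TC-1", implements=["FS-1"])

print(impacted_requirements(tc1, [fs1]))  # {'BR-1'}
```

With the chain recorded, a proposed change to a technical component can be checked against the business requirements it serves; without the functional-design link in the middle, that question simply can’t be answered.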

A lot of the work that I do is centred around BPM, and although these principles aren’t specific to BPM, they’re particularly important on BPM projects, where the functional design can sometimes appear to be obvious to the subject matter experts (who are not usually trained designers) and a lot of paving of the cow paths occurs because of that.

O (Canada)

Although I’m based in Toronto, many of my clients are elsewhere, and the past year I’ve seen mostly American clients. For those of you who aren’t familiar with the significant cultural differences across the N-S divide, I won’t bore you with the details, but you’re likely aware that we talk different. There are different expressions (such as the American use of “uh-huh” as a replacement for “you’re welcome”, and the quintessential Canadian “eh?”), but I tend to notice pronunciation. All of the Americans reading this probably flashed immediately to “oot and aboot”, but my focus, as usual, is on process.

Today, in a meeting of about 15 people at a client (in Toronto), I heard — about 1000 times, considering the subject matter — the Canadian “PRO-cess” rather than the American “PRAW-cess”. Music to my ears! 🙂

When rules rule

Rolando of the BIZRULES blog shared his thoughts on the recent International Business Rules Forum: a major shift being that people are no longer asking what they should do with BR, but are asking what else they can do with BR. He also references an Intelligent Enterprise article on balancing the control of business rules between business and IT.

I’m finding BR to be a bit of an uphill battle with many of my clients, in spite of the clear benefits of integrating a BRE with BPM to provide more business control over processes. Some BPM vendors, such as Pegasystems, are built on a BRE so that functionality is there from the start; most others, however, have recently built integration to one or more BREs to provide equivalent functionality. In either case, the ability to execute a business-controlled rule at a point in a process makes a lot of sense for both business and technology reasons: in-flight rules changes (typically not possible if you define the rules directly in a BPM vendor’s process definition logic); increased control by the business over the business rules; more robust decisioning features; decoupling of complex and changeable business logic from the process execution engine; and the ability to reuse the same business rules in other systems unrelated to BPM, such as CRM.
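A minimal sketch of what “executing a business-controlled rule at a point in a process” looks like: the decision logic lives in a rule registry outside the process definition, so the business can change it in flight without redeploying the process. This is a toy illustration with hypothetical names, not any vendor’s actual API:

```python
# A toy rule "engine": rules live in a registry that the business can
# update without touching the process definition.
RULES = {
    "credit_limit": lambda txn: txn["amount"] <= 10_000,
}

def update_rule(name, fn):
    """In-flight rule change: swap the rule; in-flight process
    instances pick up the new version at their next decision point."""
    RULES[name] = fn

def process_step(txn):
    """A process step that delegates its decision to the rule registry
    instead of hard-coding the logic in the process flow."""
    return "auto-approve" if RULES["credit_limit"](txn) else "manual-review"

print(process_step({"amount": 5_000}))   # auto-approve
update_rule("credit_limit", lambda txn: txn["amount"] <= 2_500)
print(process_step({"amount": 5_000}))   # manual-review, under the new rule
```

The same registry could just as easily be called from a CRM or other non-BPM system, which is exactly the reuse argument above.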

The concept of business rules isn’t new, but in my practice (focused on financial services and insurance), I see it primarily in insurance underwriting. Given the rules-driven nature of financial businesses in general, there’s a lot more out there that could benefit from business rules, with or without BPM.

Adaptive approaches

Greg Wdowiak’s post on application integration and agility (as in agile development) includes a great comparison of plan-driven versus adaptive development. He rightly points out that both approaches are valid, but for different types of projects:

Adaptive approach provides for an early customer’s feedback on the product. This is critical for new product development where the customer and designers ideas may significantly differ, be very vague, or the kind of product that is being design has not been ever tried before; therefore, having the ability for the customers to ‘touch’ an approximation is very important if we want to build something useful.

That pretty much describes most development projects that I’m involved in…

The plan-driven approach allows for optimization of the project trajectory. The trajectory of adaptive approach is always suboptimal, but this is only apparent once the project is complete.

As this last quote from his post makes clear, the plan-driven approach works well for well-understood implementations, but not so well for the introduction of new technology/functionality into an organization. The plan-driven approach reduces logistical risks, whereas the adaptive approach reduces the risks of uncertain requirements and unknown technology.

One of the key advantages of adaptive development in introducing new technology is the delivery methodology: instead of a “big bang” delivery at the end, which often surprises the end-user by not delivering what they expected (even though it may have been what was agreed upon in the specifications), it offers incremental approximations of the final result which are refined at each stage based on end-user feedback.

So why isn’t the adaptive approach used for every new technology project? Alas, the realities of budgets and project offices often intervene: many corporate IT departments require rigid scheduling and costing that don’t allow for the fluidity required for adaptive development, for example, by requiring complete signed-off requirements before any development begins. Although it’s certainly possible to put a project plan in place for an adaptive development project, it doesn’t look the same as a “classical” IT project plan, so may not gain the acceptance required to acquire the budget. Also, if part of the development is outsourced, this type of rigid project planning is almost always used to provide an illusion of control over the project.

When a company just isn’t ready for the adaptive approach yet, but can be convinced that the plan-driven approach isn’t quite flexible enough, I propose a hybrid approach through some project staging: my mantra is “get something simpler in production sooner”. If I’m working with a BPM product, for example, my usual recommendation is to deploy out-of-the-box functionality (or nearly so) to allow the users to get their hands on the system and give us some real feedback on what they need, even if it means that they have to work around some missing functionality. In many cases, there’s a lot of the OOTB functionality that’s completely acceptable to them, although the users may never have specified it in exactly the same manner. Once they’ve had a chance to see what’s available with a minimal amount of custom development, they can participate in informed discussions about where the development dollars are best spent.

This approach often puts me at odds with an outsourced development group: they want to maximize their development revenue from the client, whereas I want to keep the design simple and get something into production as soon as possible. I’ve had many cases in the past where I’ve worked as a subcontractor to a large systems integrator, and I almost always end up in that same conflict of interest, which explains why I usually try to work as a freelance designer/architect directly for the end customer, rather than as a subcontractor.

Service-Oriented Business Architecture

I’ve been doing quite a bit of enterprise architecture work lately for a couple of clients, which has me thinking about how to “package” business processes as “services” for reusability: a service-oriented business architecture (SOBA), if you will. (I have no idea if anyone else has used that term before, but it fits in describing the componentization and reuse of various functions and processes within an organization, regardless of whether or not the processes are assisted by information systems.)

When we think about SOA, we think about automated processes and services: web services that can be called, or orchestrated, to execute a specific function such as mapping a set of input data to output data. SOBA, however, is for all those unautomated or semi-automated processes (what someone in a client IT department once referred to as “human-interrupted” processes) that may be reused, such as a credit adjudication process that requires human intervention. In many large organizations, the same (or very similar) processes are done by different groups of people in different departments, and if they’re not modeling some of this via enterprise architecture, then they likely have no idea that the redundancy even exists. There are exceptions to this, usually in very paper-intensive processes; most organizations, for example, have some sort of centralized mail room and some sort of centralized filing, although there will be pockets of redundancy even in such a structure.
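Finding that hidden redundancy is essentially an inventory exercise: catalogue which departments perform which processes, then look for processes that show up in more than one place. A sketch, with made-up inventory data standing in for what would really come from enterprise architecture models:

```python
from collections import defaultdict

# Hypothetical inventory of (department, business process) pairs;
# in practice this would be extracted from enterprise architecture models.
inventory = [
    ("claims",       "credit adjudication"),
    ("lending",      "credit adjudication"),
    ("claims",       "document filing"),
    ("operations",   "document filing"),
    ("underwriting", "risk scoring"),
]

def redundant_processes(inventory):
    """Find processes performed by more than one department:
    candidates for consolidation into a shared business 'service'."""
    by_process = defaultdict(set)
    for dept, proc in inventory:
        by_process[proc].add(dept)
    return {proc: sorted(depts)
            for proc, depts in by_process.items() if len(depts) > 1}

print(redundant_processes(inventory))
# {'credit adjudication': ['claims', 'lending'],
#  'document filing': ['claims', 'operations']}
```

Trivial once the inventory exists; the hard part, as the text says, is that without the modeling, nobody knows the redundancy is there at all.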

From a Zachman framework standpoint, most web services are modeled at row 4 (technology model) of column 2 (function), whereas business “services” are modeled at row 2 (business model) of column 2. If you’ve spent some time with Zachman, you know that the lower (higher-numbered) rows are not just more descriptive versions of the upper rows; the rows describe fundamentally different perspectives on the enterprise, and often contain models that are unique to that particular row.

In talking about enterprise architecture, I often refer to business function reusability as a key benefit, but most people think purely about IT functions when they think about reusability, and overlook the benefits that could arise from reusing business processes. What’s required to get people thinking about reusing business processes, then? One thing for certain is a common process modeling language, as I discussed here, but there’s more to it than that. There needs to be some recognition of business functions and processes as enterprise assets, not just departmental assets. For quite a while now, information systems and even data have been recognized as belonging to the enterprise rather than a specific department, even if they primarily serve one department, but the same is not true of the human-facing processes around them: most departments think of their business processes as belonging to them, and have no concept of either sharing them with other departments or looking for ways to reduce the redundancy of similar business functions around the enterprise.

These ideas kicked into gear back in the summer when I read Tom Davenport’s HBR article on the commoditization of processes, and gained strength in the past few weeks as I contemplate enterprise architecture. His article focused mainly on how processes could be outsourced once they’re standardized, but I have a slightly different take on it: if processes within an organization are modeled and standardized, there’s a huge opportunity to identify the redundant business processes across an organization within the context of an enterprise architecture, consolidate the functionality into a single business “service”, then enable that service for identification and reuse where appropriate. Sure, some of these business functions may end up being outsourced, but many more may end up being turned into highly-efficient business services within the organization.

There’s common ground with some older (and slightly tarnished) techniques such as reengineering, but I believe that creating business services through enterprise architecture is ultimately a much more powerful concept.

More on the Proforma webinar

I found an answer to EA wanna be!’s comment on my post about the Proforma EA webinar last week: David Ritter responded that the webinar was not recorded, but he’ll be presenting the same webinar again on December 9th at 2pm Eastern. You can sign up for it here. He also said that he’s reworking the material and will be doing a version in January that will be recorded, so if you miss it on the 9th you can still catch it then or (presumably) watch the recorded version on their site.

There are a couple of other interesting-looking webinars that they’re offering; I’ve signed up for “Accelerated Process Improvement” on December 8th.

Ghosts of articles past

This was a shocker: I clicked on a link to test some search engines, typed in my own name as the search phrase, and one of the engines returned a link to articles that I wrote (or was interviewed for) back in 1991-1994. All of these were for Computing Canada, a free IT rag similar to today’s eWeek but aimed at the Canadian marketplace.

Back in 1992, I wrote about the trend of EIM (electronic image management, or what is now ECM/BPM) vendors moving from proprietary hardware and software components to open systems: Windows workstations, UNIX servers, and Oracle RDBMS, for example. This was right around the time that both FileNet and Wang were migrating off proprietary hardware, but both were still using customized or fully proprietary versions of the underlying O/S and DBMS. My predictions at that time (keep in mind that “open systems” was largely synonymous with “UNIX” in that dark period):

Future EIM systems will continue toward open platforms and other emerging industry standards. As the market evolves, expect the following trends.

  • More EIM vendors will offer systems consisting of a standard Unix server with MS Windows workstations. Some will port their software to third-party hardware and abandon the hardware market altogether.
  • EIM systems will increasingly turn to commercial products for such underlying components as databases and device drivers. Advances in these components can thus be more quickly incorporated into a system.
  • A greater emphasis will be placed on professional software integration services to customize EIM systems.

On an open platform, EIM systems will become part of a wider office technology solution, growing into an integral and seamless component of the corporate computing environment.

Okay, not so bad; I can confidently say that all that really happened in the intervening years. The prediction that some of the EIM vendors would abandon the specialized hardware market altogether still makes me giggle, since I can’t imagine Fuego or Lombardi, for example, building their own servers.

In another article that same year, I wrote about a client of mine where we had replaced the exchange of paper with the exchange of data: an early e-business application. I summarized with:

Whenever possible, exchange electronic data rather than paper with other departments or organizations. In our example, Canadian and U.S. offices will exchange database records electronically, eliminating several steps in document processing.

This was just not done on a small scale at that time: the only e-business applications were full-on, expensive EDI applications, and we had to create the whole underlying structure to support a small e-business application without using EDI. What I wouldn’t have given for BizTalk back then.

By early 1994, I’m not talking so much about documents any more, but about process. First, I propose this radical notion:

As graphical tools become more sophisticated, development of production workflow maps can be performed by business analysts who understand the business process, rather than by IT personnel.

I finish up with some advice for evaluating Workflow Management Systems (WFMS, essentially early BPM systems):

WFMS’ vary in their routing and processing capabilities, so carefully determine the complexity of the routing rules and processing methods you require. Typical minimum requirements include rules-based conditional branching, parallel routing and rendezvous, role assignments, work-in-process monitoring and re-assignment, deadline alerts, security and some level of integration with other applications.

I find this last paragraph particularly interesting, because I’m still telling this same story to clients today. The consolidation of the BPM marketplace to include all manner of systems from EAI to BPM and everything in between has led to a number of products that can’t meet one or more of these “minimum requirements”, even though they’re all called BPM.
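That 1994 list still works as an evaluation checklist: enumerate the minimum capabilities, then see where a candidate product falls short. A sketch of the idea (the capability keys are my own shorthand for the items in the quoted list):

```python
# Minimum WFMS/BPM capabilities from the 1994 list;
# the short keys are my own labels for the quoted items.
MINIMUM_CAPABILITIES = {
    "conditional_branching",  # rules-based conditional branching
    "parallel_routing",       # parallel routing and rendezvous
    "role_assignment",        # role assignments
    "wip_monitoring",         # work-in-process monitoring and re-assignment
    "deadline_alerts",        # deadline alerts
    "security",               # security
    "app_integration",        # integration with other applications
}

def gaps(product_capabilities):
    """Return the minimum capabilities that a candidate product lacks."""
    return MINIMUM_CAPABILITIES - set(product_capabilities)

# A hypothetical product that calls itself BPM but misses several items:
candidate = {"conditional_branching", "role_assignment", "security"}
print(sorted(gaps(candidate)))
# ['app_integration', 'deadline_alerts', 'parallel_routing', 'wip_monitoring']
```

Any non-empty gap set means the product fails the “minimum requirements” bar, whatever its marketing calls it.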

I’m not sure whether being recognized as an expert in this field 14 years ago makes me feel proud to be one of the senior players, or makes me feel incredibly old!