Improving Process Quality with @TJOlbrich

My last session at Building Business Capability before heading home, and I just had to sit in on Thomas Olbrich’s session on some of the insights into process quality that he has gained through the Process TestLab. Just before the session, he decided to retitle it “How to avoid being mentioned by Roger Burlton”, that is, how to avoid becoming one of the process horror stories that Roger loves to share.

According to many analyst studies, only 18% of business process projects achieve their scope and objectives while staying on time and on budget, making process quality more of an exception than the rule. In the Process TestLab, they see a lot of different types of process quality errors:

  • 92% have logical errors
  • 62% have business errors
  • 95% have dynamic defects that only manifest when multiple processes run simultaneously and have to adapt to changing conditions
  • 30% are unsuited to the real-world business situation

Looking at their statistics for 2011 to date, about half of the process defects are due to discrepancies between models and the verbal/written description – what would typically be considered “requirements” – with the remainder spread across a variety of defects in the process models themselves. The process model defects may manifest as endless loops, disappearing process instances, missing data and a variety of other undesired results.
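
Some of those logical defects, endless loops and disappearing instances in particular, can be caught with a purely structural check once the model is flattened to a directed graph. Here’s a minimal sketch of that idea, assuming a simple edge-list representation; the function and node names are my own illustration, not any actual TestLab tooling:

```python
from collections import defaultdict

def find_structural_defects(edges, start, end):
    """Flag two classic process-model defects on a directed graph:
    activities from which the end event is unreachable (instances
    'disappear'), and cycles with no exit path (endless loops)."""
    graph = defaultdict(list)
    reverse = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
        reverse[dst].append(src)

    # Nodes that can reach the end event: walk the reversed graph from 'end'.
    can_finish, stack = set(), [end]
    while stack:
        node = stack.pop()
        if node not in can_finish:
            can_finish.add(node)
            stack.extend(reverse[node])

    # Nodes reachable from the start event.
    reachable, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in reachable:
            reachable.add(node)
            stack.extend(graph[node])

    # Reachable activities that can never reach the end: dead ends or
    # loops with no exit -- the instance enters but never completes.
    return sorted(reachable - can_finish)

# 'rework' loops back to 'review', but an exit via 'approve' exists.
edges = [("start", "review"), ("review", "approve"), ("approve", "end"),
         ("review", "rework"), ("rework", "review")]
print(find_structural_defects(edges, "start", "end"))  # []
edges.append(("approve", "archive"))  # 'archive' has no outgoing flow
print(find_structural_defects(edges, "start", "end"))  # ['archive']
```

The same reachability test covers both defect types: any activity an instance can enter but from which it can never complete is flagged, whether it is a dead end or an inescapable loop.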

He presented four approaches for improving process quality:

  • Check for process defects at the earliest possible point in the design phase
  • Validate the process before implementing it: through manual reenactment (see the sketch after this list), through simulation, through the TestLab approach, which exercises the end-user experience as well as the flow, or in a BPMS environment such as IBM BPM (formerly Lombardi) that allows playback of models and UI very early in the design phase
  • Check for practicability to determine if the process will work in real life
  • Understand the limits of the process, so that you know at what point changing circumstances will cause it to stop delivering
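
To make the reenactment idea concrete, here is a minimal sketch of what stepping a tester through a modeled process might look like: show only the data the process would carry at each activity, and record whether the task could actually be performed. The step structure, field names and prompts are my own assumptions, not Process TestLab’s actual format:

```python
# Each step lists the activity and the data the process instance would
# carry at that point (illustrative content, not a real model export).
STEPS = [
    {"activity": "Capture order",    "data_available": ["customer name", "product"]},
    {"activity": "Check credit",     "data_available": ["customer name"]},
    {"activity": "Confirm shipment", "data_available": ["product", "delivery address"]},
]

def reenact(steps):
    """Walk a tester through each activity and collect the ones they
    could not complete with the information available at that point."""
    findings = []
    for step in steps:
        print(f"\nActivity: {step['activity']}")
        print(f"Data on hand: {', '.join(step['data_available'])}")
        answer = input("Could you complete this task with this data? [y/n] ")
        if answer.strip().lower() != "y":
            findings.append(step["activity"])
    return findings

if __name__ == "__main__":
    defects = reenact(STEPS)
    print("\nActivities flagged for missing data or unclear work:", defects)
```

Crude as it is, this is the essential difference from a walkthrough on a slide: the tester only sees what the process would actually give them, so missing data surfaces immediately.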

Olbrich’s approach is based on separating the business-based modeling of processes from their IT implementation: he sees these sorts of process quality checks being done “before you send the process over to IT for implementation”, which is where their service fits in. Although that’s still the norm in many cases, as model-driven development becomes more business-friendly, the line between business modeling and implementation is getting fuzzier in some situations. In most complex line-of-business processes, however, especially those with a lot of automation and a complex user experience, this separation is still prevalent.

Some of his case studies certainly bear this out: a fragment of the process models sent to them by a telecom customer filled an entire slide, even though the individual activities were barely bigger than single pixels. The customer had already “tested” the process themselves, but using the typical method: show people the process, encourage them to walk through it as quickly as possible, and have them sign off on it. In the Process TestLab, they found 120 defects in the process logic alone, meaning that the processes would never have executed as modeled, plus 20 defects in the integrations that determine how the different processes relate to each other. Sure, IT would have worked around those defects during implementation, but then the process as implemented would be significantly different from the process as modeled by the business. That means that the business’ understanding and documentation of their processes would be flawed, and that IT would have had to make changes to the processes, possibly without signoff from the business, that might actually change the business intention of the processes.

It’s necessary to consider context when analyzing and optimizing processes in order to avoid Verschlimmbesserung, roughly translated as “improvements that make things worse”, since the interaction between processes is critical: change is seldom limited to a single process. This is where a process architecture can help, since it shows the relationships between processes as well as the processes themselves.

Testing process models by actually experiencing them, as if they were already live, lets business users and analysts detect flaws while the process is still at the model stage: testers stand in for the intended users of the process and see whether they could complete the assigned business task given the user interface and information available at that point in the process. Process TestLab is certainly one way to do that, although a sufficiently agile model-driven BPMS could probably do something similar if it were used that way (which most aren’t). In addition to this type of live testing, they also do more classic simulation, highlighting bottlenecks and other timing-related problems across process variations.
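
As a rough illustration of the kind of timing analysis that simulation provides, here is a toy model of a linear process with one resource per step, using a Lindley-style recursion over job arrivals; the step names, rates and structure are my own assumptions, not the TestLab’s simulation engine:

```python
import random

def simulate_tandem(n_jobs, arrival_rate, step_rates, seed=7):
    """Toy simulation of a linear process with one resource per step.
    Returns the average waiting time per step, so the step where work
    queues up -- the bottleneck -- stands out. Rates are jobs per time
    unit; arrivals and service times are exponential."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)

    last_done = [0.0] * len(step_rates)   # when each step's resource frees up
    waits = [0.0] * len(step_rates)
    for arrive in arrivals:
        ready = arrive                    # job is ready for the first step
        for k, rate in enumerate(step_rates):
            start = max(ready, last_done[k])
            waits[k] += start - ready     # time spent queued at step k
            ready = start + rng.expovariate(rate)
            last_done[k] = ready          # job leaves step k, enters k+1
    return [w / n_jobs for w in waits]

# One job every 2 time units on average; 'approve' is the slow step.
avg_waits = simulate_tandem(10_000, 0.5, step_rates=[1.0, 0.55])
for name, w in zip(["review", "approve"], avg_waits):
    print(f"{name}: average queue time {w:.2f}")
```

Running this shows the queue time at the slower “approve” step dwarfing the “review” step, which is exactly the kind of bottleneck a simulation run is meant to surface before anything is implemented.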

The key message: process quality starts at the very beginning of the process lifecycle, so test your processes before you implement them, rather than trying to catch errors during system testing. The later an error is identified, the more expensive it is to fix.
