Enterprise BPM Webinar Q&A Followup

I know, two TIBCO-related posts in one day, but I just received the link to the replay of the Enterprise BPM webinar that I did for TIBCO last week, along with the questions that we didn’t have time to answer during the webinar, and wanted to summarize them here. First of all, my slides:

These were the questions that came in during the webinar via typed chat that are not related to TIBCO or its products; I think that we covered some of these during the session but will respond to all of them here.

Is it possible to implement BPM (business process management) without a BPMS?

How to capture process before/without technology?

These are both about doing BPM without a BPMS. I wrote recently about Elevations Credit Union (the fact that they are an IBM customer is completely immaterial in this context), which gained a huge part of their BPM success long before they touched any technology. Basically, they carved out some high-level corporate goals related to quality, modeled their value streams, then documented their existing business processes relative to those value streams. Every business process had to fit into a value stream (which was in turn related to a corporate goal), or else it didn’t survive. They saw how processes touched various different groups, and where the inefficiencies lay, and they did all of this using manual mapping on white boards, paper and sticky notes. In other words, they used the management discipline and methodology side of BPM before they (eventually) selected a tool for collaborative process modeling, which then helped them to spread the word further in their organization. There is a misperception in some companies that if you buy a BPMS, your processes will improve, but you really need to reorient your thinking, management and strategic goals around your business processes before you start with any technology, or you won’t get the benefits that you are expecting.

In enterprises that do not have SOA implemented horizontally across the organization, how can BPM be leveraged to implement process governance in the LOB silos, yet have enterprise control?

A BPM center of excellence (CoE) would be the best way to ensure process governance across siloed implementations. I wrote recently about a presentation that I attended where Roger Burlton spoke about BPM maturity; he ended with some advice for organizations that are only at level 1 or 2 in process maturity (which, if you’re still very siloed, is probably where you are): get a CoE in place and target it more at change initiatives than governance. However, you will be able to leverage the CoE to put standards in place, provide mentoring and training, and eventually build a repository of reusable process artifacts.

I work in the equipment finance industry. Companies in this space are typically classified as banks/bank-affiliates, captives and independents. With a few exceptions it’s my understanding that this particular industry has been rather slow at adopting BPMS. Have you noticed this in other industries and, if so, what do you see as being the “tipping point” for greater BPMS adoption rates? Does it ultimately come down to a solid ROI, or perhaps a few peer success stories?

My biggest customers are in financial services and insurance, so are also fairly conservative. Insurance, in particular, tends to adopt technology at the very end of the adoption tail. I have seen a couple of factors that can accelerate adoption of any sort of technology, not just BPMS, in slower-moving industries: first, if they just can’t do business the old way any more, and have to adopt the new technology. An example of this was a business process outsourcer for back-office mutual fund transactions that started losing bids for new work because it was actually written into the RFP that they had to have “imaging and workflow” technology rather than paper-based processes. Secondly, if they can’t change quickly enough to be competitive in the market, which is usually the case when many of their competitors have already started using the technology. So, yes, it does come down to a solid ROI and some peer success stories, but in many cases, the ROI is one of survival rather than just incremental efficiency improvements.

Large scale organizations tend to have multiple BPM / workflow engines. What insights can you share to make these different engines in different organizational BUs into an enterprise BPM capability?

Every large organization that I work with has multiple BPMS, and this is a problem that they struggle with constantly. Going back to the first question, you need to think about both sides of BPM: the management discipline and methodology on one side, and the technology on the other. The first of these, which is arguably the one with the biggest impact, is completely independent of the specific BPMS that you’re using: it’s about getting the organization oriented around processes, and understanding how the end-to-end business processes relate to the strategic goals. Building a common BPM CoE for the enterprise can help to bring all of these things together, including the expertise related to the multiple BPM products. By bringing them together, it’s possible to start looking at the target use cases for each of the systems currently in use, and selecting the appropriate system for each new implementation. Eventually, this may lead to some systems being replaced to reduce the number of BPMS used in the organization overall, but I rarely see large enterprises without at least two different BPMS in use, so don’t be fanatical about getting it down to a single system.

Typically, what is the best order to implement: BPM first and SOA last, or vice versa?

I recommend a hybrid approach rather than purely top-down (BPM first) or bottom-up (SOA first). First, do an inventory of the existing services in your environment, since there will almost always be some out there, even if just in your packaged applications such as ERP. While this is happening, start your BPM initiative by setting the goals and doing some top-down process modeling. Assuming that you have a particular process in mind for implementation, do the more detailed process design for that, taking advantage of any services that you have discovered, and identifying what other services need to be created. If possible, implement the process even without the services: it will be no worse from an efficiency standpoint than your current manual process, and will provide a framework both for adding services later and for process monitoring. As you develop the services for integration and automation, replace the manual steps in the process with the services.
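To make the “implement the process even without the services” idea a bit more concrete, here’s a minimal sketch in Python (the step names, the credit check service and the process itself are all invented for illustration, and this isn’t any particular BPMS’ API): each step is either bound to a service or falls back to a manual work item, so automating a step later changes only its binding, not the process model.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical sketch: a process step is bound to a service call if one exists,
# otherwise it is routed to a human work queue as a manual task.

@dataclass
class Step:
    name: str
    service: Optional[Callable[[dict], dict]] = None  # None = not yet automated

    def execute(self, case: dict) -> dict:
        if self.service is None:
            # No service discovered or built yet: create a manual work item instead.
            print(f"[manual] '{self.name}' routed to work queue for case {case['id']}")
            return case
        print(f"[service] '{self.name}' automated for case {case['id']}")
        return self.service(case)

@dataclass
class Process:
    name: str
    steps: list[Step] = field(default_factory=list)

    def run(self, case: dict) -> dict:
        for step in self.steps:
            case = step.execute(case)
        return case

# Example: only the credit check has an existing service (say, found in the ERP);
# the other steps start out manual and can be automated later without changing
# the process structure.
def credit_check_service(case: dict) -> dict:
    case["credit_ok"] = case.get("amount", 0) < 10_000
    return case

onboarding = Process("customer_onboarding", [
    Step("capture application"),                 # manual for now
    Step("credit check", credit_check_service),  # already automated
    Step("account setup"),                       # manual until a service is built
])

onboarding.run({"id": "case-001", "amount": 5_000})
```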

Re: Enterprise BPM Goals – Develop, Execute, but what about Governance?

This was in response to the material on my agenda for the webinar. Yes, governance is important, but I only had 40 minutes and could barely cover the design/develop/execute parts of what we wanted to cover. Maybe TIBCO will have me back for another webinar on governance. 😉

Data/content centric processes vs. people-centric vs. EAI/integration centric re: multiple BPMS platforms. Any guidelines for when and where to demarcate?

These divisions are very similar to the Forrester divisions of the BPMS landscape from a few years ago, and grew mostly out of the different types of systems that were all lumped together as “BPMS” by the analysts in the early 2000s. Many of today’s products offer strength in more than one area, but you need to have a good understanding of your primary use cases when selecting a product. Personally, I think that content-centric versus human-centric isn’t the right way to split it: more like unstructured (case management) versus structured; even then, there is more of a spectrum of functionality in most cases than purely unstructured or purely structured. So really, the division is between processes that have people involved (human-centric) and those that are more for automated integration (system-centric), with the former having to accommodate this wider spectrum of process types. If you have mostly automated integration processes, then certainly an integration-centric BPMS makes sense; if you have human-facing processes, then the question is a bit more complex, since you’re dealing with content/documents, process types, social/collaborative capabilities and a host of other requirements that you need to look at relative to your own use cases. In general, the market is moving towards the full range of human-facing processes being handled by a single product, although specialist product companies would differ.

Thoughts on the role of the application/solution architect within an LOB or COE vs. that of the enterprise architect assigned to the BPM domain?

An enterprise architect assigned to the BPM CoE/domain is still (typically) part of the EA team, and is therefore involved with the broader scope of enterprise architecture issues. An application/solution architect tends to be more product and technology focused, and in many cases that is just a fancy term for a developer. In other words, the EA should be concerned with overall strategy and goals, whereas the solution architect is focused on implementation.

Role of the COE in governance? How far does/should it extend?

The CoE is core to governance: that’s what it’s there for. At the very least, the CoE will set the standards and procedures for governance, and may rely on the individual projects to enforce that governance.

Is it really IT giving up control? In many cases, the business does whatever they do — and IT has little (or aged) information about the actual processes.

This was in reference to slide #11 in my deck about cultural issues. Certainly the business can (and often does) go off and implement their own processes, but that is outside the context of enterprise-wide systems. In order to have the business do that within the enterprise BPMS, IT has to ensure that the business can access the process discovery and modeling tools that become the front end of process design. That way, business and IT share models of the business processes, which means that what gets implemented in the BPMS might actually resemble what is required by the business. In some cases, I see a company buy a BPMS but not allow the business users to use the business-level tools to participate in process modeling: this is usually the result of someone in IT thinking that this is beyond the capability of the business people.

Is following any BPM notation standards part of BPM development? I saw that there was no mention of it.

There was so much that I did not have time to address with only 40 minutes or so to speak, and standards didn’t make the cut. In longer presentations, I always address the issue of standards, since a common process modeling notation is essential to communication between various stakeholders. BPMN is the obvious front-runner there, and if used properly, can be understood by both business and IT. It’s not just about process models, however: a BPMS implementation has to also consider data models, organizational models and more, around which there is less standardization.

Regarding Common UI: shouldn’t it be Common Architecture, accessed by different UIs that fit the user’s roles, knowledge, etc?

In the context of slide #6, I did mean a common UI, literally. In other words, using the BPMS’ composite application development and forms environment to create a user interface that hides multiple legacy applications behind a single user interface, so that the user deals with this new integrated UI instead of multiple legacy UIs. Your point seems to be more about persona-based (or role-based) interfaces into the BPMS, which is a valid, but different, point. That “single UI” that I mention would, in fact, be configurable for the different personas who need to access it.
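As a rough illustration of that composite-UI idea (my own sketch, not any product’s composite application framework; the legacy system names and data are invented), here’s a facade that assembles one view model from several legacy back ends, so the user-facing form talks to a single interface instead of to each legacy application.

```python
# Hypothetical sketch of a composite-application facade: one call assembles the
# data a user needs from several legacy systems, so the new UI never exposes
# the individual legacy applications.

class PolicyAdminSystem:          # stand-in for one legacy application
    def get_policy(self, policy_id: str) -> dict:
        return {"policy_id": policy_id, "status": "active"}

class ClaimsSystem:               # stand-in for another legacy application
    def get_open_claims(self, policy_id: str) -> list[dict]:
        return [{"claim_id": "C-17", "amount": 1200.00}]

class CompositeCaseFacade:
    """Single interface that the new forms/UI layer would call."""
    def __init__(self) -> None:
        self.policies = PolicyAdminSystem()
        self.claims = ClaimsSystem()

    def case_view(self, policy_id: str) -> dict:
        # Aggregate everything the knowledge worker needs into one view model.
        return {
            "policy": self.policies.get_policy(policy_id),
            "open_claims": self.claims.get_open_claims(policy_id),
        }

print(CompositeCaseFacade().case_view("P-42"))
```

The same facade could then be skinned with different persona-based front ends, which is where the questioner’s point about role-specific UIs comes back in.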

How does a fully fledged BPM tool stack up against the workflow tools that are part of other COTS applications, e.g., workflow in a document management tool or in a trouble ticketing tool?

A full BPMS tends to be much more flexible than the embedded workflow that you will find within another platform, and is more of an application development platform than just a way to control processes within that application. On the other hand, the workflow within those applications is typically already fully integrated with the other business objects within them (e.g., documents, trouble tickets), so the implementation may be faster for that particular type of process. If the only type of process management that you need to do is document approvals within your document management system, it may make sense to use that rather than purchase a full BPMS; if you have broader process management needs, start looking at a more general BPMS platform that can handle more of your use cases.

How do you see BPM tools surviving when CRM tools with more or less the same capabilities, and with out-of-the-box processes defined, are being widely accepted by enterprises?

Similar to my response to the previous question, if the processes are related only to the business objects within the CRM, then you may be better off using the workflow tools within it. However, as soon as you want to integrate in other data sources, systems or users, you’ll start to get beyond the functional capabilities of the simpler workflow tools within the CRM. There’s room in the market for both; the trick is, for customers, to understand when to use one versus the other.

What are the reasons you see for BPM tools not getting quickly and widely accepted and what are the solutions to overcome that?

There are both cost and complexity components with BPMS adoption, but a big reason before you even start looking at tools is moving your organization to a process-driven orientation, as I discussed above. Once people start to look at the business as end-to-end processes, and those processes as assets and capabilities that the business offers to its customers, there will be a great pull for BPMS technologies to help that along. Once that motivation is in place, the cost and complexity barriers are still there, but are becoming less significant: first of all, more vendors are offering cloud-based versions of their software that allow you to try it out – and even do your full development and testing – without capital expenditures. If they offer the option, you can move your production processes on-premise, or leave them in the cloud to keep the total cost down. As for complexity, the products are getting easier to use, but are also offering a lot more functionality. This shifts the complexity from one of depth (learning how to do a particular function) to breadth (learning what all the functions are and when to use which), which is still complex but less of a technological complexity.

Is it possible to start introducing and implementing BPM in one department or module only, and then extend BPM to other departments or modules? Or should this be an enterprise-wide decision, since bringing in BPM technologies involves heavy costs?

Almost every organization that I work with does their BPM implementation in one department first, or for one process first (which may span departments): it’s just not possible to implement everything that you will ever implement in BPM at the same time, first time. There needs to be ROI within that first implementation, but you also have to look at enterprise cost justification as with any horizontal technology: plan for the other projects that will use this, and allocate the costs accordingly. That might mean that some of the initial costs come from a shared services or infrastructure budget rather than the project budget, because they will eventually be allocated to future projects and processes.

How difficult would it be to replace legacy workflow system with BPM?

It depends (that’s always the consultant’s answer). Seriously, though, it depends on the level of integration between the existing workflow system and other systems, and how much of the user interface it provides. I have seen situations where a legacy workflow system is deeply embedded in a custom application platform, with fairly well-defined integration points to other systems, and the user interface hiding the workflow system from the end user. In this case, although it’s not trivial, it is a straightforward exercise to rip out the workflow system since it is being used purely as a process engine, replace it with a new one, refactor the integration points so that the new system calls the other systems in the environment (usually easier since modern BPMSs have better integration capabilities) and refactor the custom UI so that it calls the new BPMS (also usually easier because of updated functionality). That’s the best case, and as I said, it’s still not trivial. If the legacy workflow system also provides the user interface, then you’re looking at redeveloping your entire UI either in the new BPMS or in some other UI development tool, plus the back-end systems integration work. A major consideration in either case is that you don’t just want to replicate the functionality of the old workflow system, since the new BPMS will have far greater functionality: you need to think about how you are going to leverage capabilities such as runtime collaboration that never existed in the old system, in order to see the greatest benefit from the upgrade.
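One pattern that tends to make this kind of swap less painful (a hypothetical sketch, not a description of any specific migration or product API): keep the custom UI and integration code written against a thin process-engine interface, so replacing the legacy engine means writing a new adapter rather than touching every caller. The interface and adapter names below are invented.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: callers (custom UI, integration code) depend on this
# interface, not on a specific workflow product, so the engine can be swapped
# out behind a new adapter.

class ProcessEngine(ABC):
    @abstractmethod
    def start_process(self, process_name: str, data: dict) -> str: ...
    @abstractmethod
    def complete_task(self, task_id: str, data: dict) -> None: ...

class LegacyWorkflowAdapter(ProcessEngine):
    def start_process(self, process_name: str, data: dict) -> str:
        # Would call the old workflow system's API here.
        return f"legacy-{process_name}-1"
    def complete_task(self, task_id: str, data: dict) -> None:
        print(f"legacy engine: completed {task_id}")

class NewBpmsAdapter(ProcessEngine):
    def start_process(self, process_name: str, data: dict) -> str:
        # Would call the new BPMS's API here.
        return f"bpms-{process_name}-1"
    def complete_task(self, task_id: str, data: dict) -> None:
        print(f"new BPMS: completed {task_id}")

def submit_claim(engine: ProcessEngine) -> None:
    # The UI/integration code is identical regardless of which engine is plugged in.
    instance = engine.start_process("claim_approval", {"amount": 900})
    engine.complete_task(f"{instance}:review", {"approved": True})

submit_claim(LegacyWorkflowAdapter())
submit_claim(NewBpmsAdapter())
```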

Is it possible to switch between BPM vendors without having pain?

No. Similar to the previous answer, this is a non-trivial exercise, and depending on how much of the BPMS functionality you were using, it could be pretty much a complete redevelopment. If the BPMS was used primarily for orchestration of automated processes, it will be much easier, but as soon as you get into custom integration/orchestration and user interfaces, it gets a lot more complicated (and painful).

Do we really need to go for BPM in a situation where we need integration orchestration only?

One end of the BPMS market is integration-centric systems, which primarily do just integration orchestration. The advantage of using a BPMS for this instead of orchestrating directly in application code is that you get all of the other stuff that comes with the BPMS “for free”: graphical process modeling, execution monitoring, process governance and whatever other goodies are in the BPMS. It’s not really free, of course, but it’s valid to consider a comparison of all of that functionality against what parts of it you would have to custom-build if you were to do the orchestration in code.
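To illustrate the trade-off (a contrived Python sketch with made-up service calls, not a recommendation of either approach): orchestrating directly in application code means also hand-building the logging, retries, audit trail and monitoring hooks that an integration-centric BPMS would give you alongside the graphical model.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("order_orchestration")

# Stand-in service calls; in real code these would be web service/API invocations.
def reserve_inventory(order: dict) -> dict: return {**order, "reserved": True}
def charge_payment(order: dict) -> dict:    return {**order, "charged": True}
def ship_order(order: dict) -> dict:        return {**order, "shipped": True}

def call_with_retry(step, order: dict, attempts: int = 3) -> dict:
    # Retry and error-handling logic that you have to write (and maintain)
    # yourself when orchestrating in application code.
    for attempt in range(1, attempts + 1):
        try:
            log.info("step=%s attempt=%d order=%s", step.__name__, attempt, order["id"])
            return step(order)
        except Exception:
            log.exception("step %s failed", step.__name__)
            time.sleep(0.1 * attempt)
    raise RuntimeError(f"step {step.__name__} failed after {attempts} attempts")

def process_order(order: dict) -> dict:
    # The orchestration itself is simple; the surrounding instrumentation,
    # audit trail and monitoring are the parts a BPMS includes out of the box.
    for step in (reserve_inventory, charge_payment, ship_order):
        order = call_with_retry(step, order)
    return order

process_order({"id": "ORD-1001"})
```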

That’s it for the Q&A. If you listen to the replay, or were on the live broadcast, my apologies for the rushed beginning: I got off on the wrong foot out of the gate, but settled down after the first few minutes.

Aligning BPM and EA Tutorial at BBCCon11

I reworked my presentation on BPM in an enterprise architecture context (a.k.a., “why this blog is called ‘Column 2’”) that I originally did at the IRM BPM conference in London in June, and presented it at the Building Business Capability conference in Fort Lauderdale last week. I removed much of the detailed information on BPMN, refined some of the slides, and added in some material from Michael zur Muehlen’s paper on primitives in BPM and EA. Some nice improvements, I thought, and it came in right on time at 3 hours without having to skip over some material as I did in London.

Here are some of the invaluable references that I used in creating this presentation:

That should give you plenty of follow-on reading if you find my slides to be too sparse on their own.

NSERC BI Network at CASCON2011 (Part 2)

The second half of the workshop started with Renée Miller from University of Toronto digging into the deeper database levels of BI, and the evolving role of schema from a prescriptive role (time-invariant, used to ensure data consistency) to a descriptive role (describe/understand data, capture business knowledge). In the old world, a schema was meant to reduce redundancy (via Boyce-Codd normal form), whereas the new world schema is used to understand data, and the schema may evolve. There are a lot of reasons why data can be “dirty” – my other half, who does data warehouse/BI for a living, is often telling me about how web developers create their operational database models mostly by accident, then don’t constrain data values at the UI – but the fact remains that no matter how clean you try to make it, there are always going to be operational data stores with data that needs some sort of cleansing before effective BI. In some cases, rules can be used to maintain data consistency, especially where those rules are context-dependent. In cases where the constraints are inconsistent with the existing data (besides asking the question of how that came to be), you can either repair the data, or discover new constraints from the data and repair the constraints. Some human judgment may be involved in determining whether the data or the constraint requires repair, although statistical models can be used to understand when a constraint is likely invalid and requires repair based on data semantics. In large enterprise databases as well as web databases, this sort of schema management and discovery could be used to identify and leverage redundancy in data to discover metadata such as rules and constraints, which in turn could be used to modify the data in classic data repair scenarios, or modify the schema to adjust for a changing reality.
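As a toy example of the “repair the data or repair the constraint” choice (my own sketch, not from Miller’s talk; the customer rows and the currency rule are invented): check a candidate constraint against the existing data, and use the violation rate as a first signal for whether the rows or the rule should change.

```python
# Toy sketch: evaluate a candidate constraint against existing data and use the
# violation rate to help decide whether to repair the data or the constraint.

customers = [
    {"id": 1, "country": "CA", "currency": "CAD"},
    {"id": 2, "country": "CA", "currency": "CAD"},
    {"id": 3, "country": "CA", "currency": "USD"},   # dirty row, or a valid exception?
    {"id": 4, "country": "US", "currency": "USD"},
]

def constraint(row: dict) -> bool:
    # Candidate rule: Canadian customers are billed in CAD.
    return row["country"] != "CA" or row["currency"] == "CAD"

violations = [row for row in customers if not constraint(row)]
print(f"{len(violations)} of {len(customers)} rows violate the candidate constraint: {violations}")

# Heuristic: a low violation rate suggests dirty rows to repair; a high rate
# suggests the constraint does not reflect reality and should itself be revised
# (or made context-dependent), with human judgment for the borderline cases.
```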

Sheila McIlraith from University of Toronto presented on a use-centric model of data for customizing and constraining processes. I spoke last week at Building Business Capability on some of the links between data and processes, and McIlraith characterized processes as a purposeful view of data: processes provide a view of the data, and impose policies on data relative to some metrics. Processes are also, as she pointed out, a delivery vehicle for BI – from a BPM standpoint, this is a bit of a trivial publishing process – to ensure that the right data gets to the right stakeholder. The objective of her research is to develop a business process modeling formalism that treats data and processes as first-class citizens, and supports specification of abstract (ad hoc) business processes while allowing the specification of stakeholder policies, preferences and priorities. Sounds like data+process+rules to me. The approach is to specify processes as flexible templates, with policies as further constraints; although she represents this as allowing for customizable processes, it really just appears to be a few pre-defined variations on a process model with a strong reliance on rules (in linear temporal logic) for policy enforcement, not full dynamic process definition.

Lastly, we heard from Rock Leung from SAP’s academic research center and Stephan Jou from IBM CAS on industry challenges: SAP and IBM are industry partners to the NSERC Business Intelligence Network. They listed 10 industry challenges for BI, but focused on big data, mobility, consumable analytics, and geospatial and temporal analytics.

  • Big data: Issues focus on volume of data, variety of information and sources, and velocity of decision-making. Watson has raised expectations about what can be done with big data, but there are challenges on how to model, navigate, analyze and visualize it.
  • Consumable analytics: There is a need to increase usability and offer new interactions, making the analytics consumable by everyone – not just statistical wizards – on every type of device.
  • Mobility: Since users need to be connected anywhere, there is a need to design for smaller devices (and intermittent connectivity) so that information can be represented effectively and be seamless with representations on other devices. Both presenters said that there is nothing that their respective companies are doing where mobile device support is not at least a topic of conversation, if not already a reality.
  • Geospatial and temporal analytics: Geospatial data isn’t just about Google Maps mashups any more: location and time are being used as key constraints in any business analytics, especially when you want to join internal business information with external events.

They touched briefly on social in response to a question (it was on their list of 10, but not the short list), seeing it as a way to make decisions better.

For a workshop on business intelligence, I was surprised at how many of the presentations included aspects of business rules and business process, as well as the expected data and analytics. Maybe I shouldn’t have been surprised, since data, rules and process are tightly tied in most business environments. A fascinating morning, and I’m looking forward to the keynote and other presentations this afternoon.

NSERC BI Network at CASCON2011 (Part 1)

I only have one day to attend CASCON this year due to a busy schedule this week, so I am up in Markham (near the IBM Toronto software lab) to attend the NSERC Business Intelligence Network workshop this morning. CASCON is the conference run by IBM’s Centers for Advanced Studies throughout the world, including the Toronto lab (where CAS originated), as a place for IBM researchers, university researchers and industry to come together to discuss many different areas of technology. Sometimes, this includes BPM-related research, but this year the schedule is a bit light on that; however, the BI workshop promises to provide some good insights into the state of analytics research.

Eric Yu from University of Toronto started the workshop, discussing how BI can enable organizations to become more adaptive. Interestingly, after all the talk about enterprise architecture and business architecture at last week’s Building Business Capability conference, that is the focus of Yu’s presentation, namely, that BI can help enterprises to better adapt and align business architecture and IT architecture. He presented a concept for an adaptive enterprise architecture that is owned by business people, not IT, and geared at achieving measurable business success. He discussed modeling variability at different architectural layers, and the traceability between them, and how making BI an integral part of an organization – not just the IT infrastructure – can support EA adaptability. He finished by talking about maturity models, and how a closed loop deployment of BI technologies can help meet adaptive enterprise requirements. Core to this is the explicit representation of change processes and their relationship to operational processes, as well as linking strategic drivers to specific goals and metrics.

Frank Tompa from University of Waterloo followed with a discussion of mapping policies (from a business model, typically represented as high-level business rules) to constraints (in a data model) so that these can be enforced within applications. My mind immediately went to why you would be mapping these to a database model rather than a rules management system; his view seems to be that a DBMS is what monitors at a transactional level and ensures compliance with the business model (rules). His question: “how do we make the task of database programming easier?” My question: “why aren’t you doing this with a BRMS instead of a DBMS?” Accepting his premise that this should be done by a database programmer, the approach is to start with object definitions, where an object is a row (tuple) defined by a view over a fixed database schema, and represents all of the data required for policy making. Secondly, consider the states that an object can assume by considering that an object x is in state S if its attributes satisfy S(x). An object can be in multiple states at once; the states seem to be more like functions than states, but whatever. Thirdly, the business model has to be converted to an enforcement model through a sort of process model that also includes database states; really more of a state diagram that maps business “states” to database states, with constraints on states and state transitions denoted explicitly. I can see some value in the state transition constraint models in terms of representing some forms of business rules and their temporal relationships, but his representation of a business process as a constraint diagram is not something that a business analyst is ever going to read, much less create. However, the role of the business person seems to be restricted to a “policy designer” listing “states of interest”, and the goal of this research is to “form a bridge between the policy manager and the database”. Their future work includes extracting workflows from database transaction logs, which is, of course, something that is well underway in the BPM process mining community. I asked (explicitly of the presenter, not just snarkily here in my blog post) about the role of rules engines: he said that one of the problems was in vocabulary definition, which is often not done in organizations at the policy and rules level; by the time things get to the database, the vocabulary is sufficiently constrained that you can ensure that you’re getting what you need. He did say that if things could be defined in a rules engine using a standardized vocabulary, then some of the rules/constraints could be applied before things reached the database; there does seem to be room for both methods as long as the business rules vocabulary (which does exist) is not well-entrenched.
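Here’s my rough reading of the objects/states idea in code rather than in his notation (so treat the loan object, the state predicates and the allowed transitions as invented for illustration, not Tompa’s actual formalism): a “state” is just a predicate over an object’s attributes, an object can satisfy several at once, and the enforcement model constrains which state transitions a transaction is allowed to cause.

```python
# Invented illustration of objects, states-as-predicates, and a transition
# constraint check; not the presenter's actual formalism.

loan = {"id": "L-9", "balance": 5000, "days_overdue": 45}

states = {
    "active":     lambda o: o["balance"] > 0,
    "delinquent": lambda o: o["days_overdue"] > 30,
    "closed":     lambda o: o["balance"] == 0,
}

def states_of(obj: dict) -> set[str]:
    # An object can be in several states at once (here: active and delinquent).
    return {name for name, predicate in states.items() if predicate(obj)}

# Enforcement model: allowed state transitions for a single transaction.
allowed_transitions = {("active", "delinquent"), ("active", "closed"), ("delinquent", "closed")}

def check_transition(before: dict, after: dict) -> bool:
    gained = states_of(after) - states_of(before)
    lost = states_of(before) - states_of(after)
    # Every (lost, gained) pair caused by the transaction must be allowed.
    return all((l, g) in allowed_transitions for l in lost for g in gained)

paid_off = {**loan, "balance": 0, "days_overdue": 0}
print(states_of(loan), "->", states_of(paid_off), "allowed:", check_transition(loan, paid_off))
```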

Jennifer Horkoff from University of Toronto was up next, discussing strategic models for BI. Her research is about moving BI from a technology practice to a decision-making process that starts with strategic concerns, generates BI queries, interprets the results relative to the business goals and decides on necessary actions. She started with the OMG Business Motivation Model (BMM) for building governance models, and extended that to a Business Intelligence Model (BIM), or business schema. The key primitives include goals, situations (which can model SWOT), indicators (quantitative measures), influences (relationships) and more. This model can be used at the high-level strategic level, or at a more tactical level that links more directly to activities. There is also the idea of a strategy, which is a collection of processes and quality constraints that fulfill a root-level goal. Reasoning can be done with BIMs, such as determining whether a specific strategy can fulfill a specific goal, and influence diagrams with probabilities on each link can be used to help determine decisions. They are using BIM concepts to model a case study with Rouge Valley Health System to improve patient flow and reduce wait times; results from this will be seen in future research.

Each of these presentations could have filled a much bigger time slot, and I could only capture a flavor of their discussions. If you’re interested in more detail, you can contact the authors directly (links to each above) to get the underlying research papers; I’ve always found researchers to be thrilled that anyone outside the academic community is interested in what they’re doing, and are happy to share.

We’re just at the mid-morning break, but this is getting long so I’ll post this and continue in a second post. Lots of interesting content; I’m looking forward to the second half.

Catch Me Twice On “Webinar Week”

I’m presenting on two webinars this week. First, on Tuesday (tomorrow), I will be joining Jeremy Westerman of TIBCO to discuss the BPM issues and challenges specific to large enterprises. It’s at 11am Eastern (8am Pacific) on Tuesday, and you can sign up here.

Then, on Wednesday, I’ll be presenting with Matt Cicciari of Progress on how BPM can work within an application development environment. Since this is targeted at Progress OpenEdge developers who may not know a lot about BPM, I’ll be covering some BPM background plus why you want to do certain things with a BPMS, such as explicit process modeling. This is at 11am Eastern on Wednesday, and you can sign up here.

These two gigs are sandwiched between IBM’s CASCON today, where I am attending the NSERC Business Intelligence workshop in the morning and the keynote presentations in the afternoon, and SAP’s World Tour on Thursday. Both of these, although not requiring me to get on an airplane, do require me to get in a Zipcar and drive to the far reaches of the Toronto suburbs and beyond.

Improving Process Quality with @TJOlbrich

My last session at Building Business Capability before heading home, and I just had to sit in on Thomas Olbrich’s session on some of the insights into process quality that he has gained through the Process TestLab. Just before the session, he decided to retitle it as “How to avoid being mentioned by Roger Burlton”, namely, not being one of the process horror stories that Roger loves to share.

According to many analyst studies, only 18% of business process projects achieve their scope and objectives while staying on time and on budget, making process quality more of an exception than the rule. In the Process TestLab, they see a lot of different types of process quality errors:

  • 92% have logical errors
  • 62% have business errors
  • 95% have dynamic defects that would manifest in the environment of multiple processes running simultaneously, and having to adapt to changing conditions
  • 30% are unsuited to the real-world business situation

Looking at their statistics for 2011 to date, about half of the process defects are due to discrepancies between models and the verbal/written description – what would typically be considered “requirements” – with the remainder spread across a variety of defects in the process models themselves. The process model defects may manifest as endless loops, disappearing process instances, missing data and a variety of other undesired results.
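To make “endless loops” and “disappearing process instances” a bit more concrete, here’s a small sketch of my own (the process graph is invented, and this is not how the Process TestLab works internally) showing the kind of static structural check that can catch such defects at the model stage: nodes that can never be reached from the start, and nodes from which no end event is reachable.

```python
from collections import deque

# Invented example process graph: node -> list of successor nodes.
process = {
    "start":   ["review"],
    "review":  ["approve", "rework", "park"],
    "rework":  ["review"],   # looping back is fine, as long as an exit exists
    "park":    [],           # defect: instances sit here forever (no end event)
    "approve": ["archive"],
    "archive": [],           # end event: instances complete here
    "audit":   ["archive"],  # defect: never reachable from the start
}
end_nodes = {"archive"}

def reachable_from(graph: dict, source: str) -> set[str]:
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

forward = reachable_from(process, "start")
unreachable = set(process) - forward   # work that can never arrive here
# A node is a potential endless loop or dead end if no end event is reachable from it.
stuck = {n for n in forward if not (reachable_from(process, n) & end_nodes)}

print("unreachable nodes:", unreachable)              # {'audit'}
print("nodes with no path to an end event:", stuck)   # {'park'}
```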

He presented four approaches for improving process quality:

  • Check for process defects at the earliest possible point in the design phase
  • Validate the process before implementing, either through manual reenactment, simulation, the TestLab approach which simulates the end-user experience as well as the flow, or a BPMS environment such as IBM BPM (formerly Lombardi) that allows playback of models and UI very early in the design phase
  • Check for practicability to determine if the process will work in real life
  • Understand the limits of the process to know when it will cease to deliver when circumstances change

Olbrich’s approach is based on the separation of business-based modeling of processes from IT implementation: he sees this sort of process quality check being done “before you send the process over to IT for implementation”, which is where their service fits in. Although that’s still the norm in many cases, as model-driven development becomes more business-friendly, the line between business modeling and implementation is getting fuzzier in some situations. However, in most complex line-of-business processes, especially those that use quite a bit of automation and have complex user experience, this separation is still prevalent.

Some of his case studies certainly bear this out: a fragment of the process models sent to them by a telecom customer filled an entire slide, even though the activities in the processes were only slightly bigger than individual pixels. The customer had “tested” the process themselves already, but using the typical method of showing the process, encouraging people to walk through it as quickly as possible, and sign off on it. In the Process TestLab, they found 120 defects in process logic alone, meaning that the processes would never have executed as modeled, and 20 defects in the process integrations that determine how different processes relate to each other. Sure, IT would have worked around those defects during implementation, but then the process as implemented would be significantly different from the process as modeled by the business. That means that the business’ understanding and documentation of their processes are flawed, and that IT had to make changes to the processes – possibly without signoff from the business – that may actually change the business intention of the processes.

It’s necessary to use context when analyzing and optimizing processes in order to avoid Verschlimmbesserung, roughly translated as “improvements that make things worse”, since the interaction between processes is critical: change is seldom limited to a single process. This is where process architecture can help, since it can show the relations between processes as well as the processes themselves.

Testing process models by actually experiencing them, as if they were already live, allows business users and analysts to detect flaws while they are still in the model stage by standing in for the users of the intended process and seeing if they could do the assigned business task given the user interface and information at that point in the process. Process TestLab is certainly one way to do that, although a sufficiently agile model-driven BPMS could probably do something similar if it were used that way (which most aren’t). In addition to this type of live testing, they also do more classic simulation, highlighting bottlenecks and other timing-related problems across process variations.

The key message: process quality starts at the very beginning of the process lifecycle, so test your processes before you implement, rather than trying to catch them during system testing. The later that errors are identified, the more expensive it is to fix them.

Process and Information Architectures

Last day of the Building Business Capability conference, and I attended Louise Harris’ session on process and information architectures as the missing link to improving enterprise performance. She was on the panel on business versus IT architecture that I moderated yesterday, and had a lot of great insight into business architecture and enterprise architecture.

Today’s session highlighted how business processes and information are tightly interconnected – business processes create and maintain information, and information informs and guides business processes – but that different types of processes use information differently. This is a good distinction: looking at what she called “transactional” (structured) versus “creative” (case management) versus “social” (ad hoc) processes, transactional processes require exact data, while creative and social processes may require interpretation of a variety of information sources that may not be known at design time. She showed the Burlton Hexagon to illustrate how information is not just input to be processed into output, but also used to guide processes, inform decisions and measure process results.

This led to Harris’ definition of a business process architecture as “defining the business processes delivering results to stakeholders and supported by the organization/enterprise showing how they are related to each other and to the strategic goals of the organization/enterprise”. (whew) This includes four levels of process models:

  • Business capability models, also called business service models or end-to-end business process models, which form the top level of the work hierarchy and define what business processes are, but not how they are performed. From a classic EA standpoint, Louise referenced this as row 1 of Zachman (in column 2).
  • Business process models, which provide deeper decomposition of the end-to-end models that tie them to the KPIs/goals. This has the effect of building process governance into the architecture directly.
  • Business process flow models, showing the flow of business processes at the level of logistical flow, such as value chains or asset lifecycles, depending on the type of process.
  • Business process scope models (IGOEs, that is, Inputs, Guides, Outputs, Enablers), identifying the resources involved in the process, including information, people and systems.

She moved on to discuss information architecture, and its value in defining information assets as well as content and usage standards. This includes three models:

  • Information concept model, with the top level of the information related to the business, often organized into domains such as finance or HR. For example, in the information domain of finance, we might have information subject areas (concepts) of invoicing, capital assets, budget, etc.
  • Information relationship model defines the relationships between the concepts identified in the information concept model, which can span different subject areas. This can look like an ERD, but the objects being connected are higher-level business objects rather than database objects: this makes it fairly tightly tied to the processes that those business objects undergo.
  • Information governance model, which defines what has to be done to maintain information integrity: governance structure, roles responsible, and policy and business standards.

Next was bringing together the process and information architectures, which is where the IGOEs (business process scope models) come into play, since they align information subject areas with top-level business processes or business capabilities, allowing identification of gaps between process and information. This creates a framework for ensuring alignment at the design and operational levels, but does not map information subject areas to business functions, since that is too dependent on the organizational structure.

Harris presented these models as being the business architecture, corresponding to rows 1 and 2 of Zachman (for example), which can then be used to provide context for the remainder of the enterprise architecture and into design. For example, once these models are established, the detailed process design can be integrated with logical data models.

She finished up by looking at how process and information architectures need to be developed in lock step, since business process ensures information quality, while information ensures process effectiveness.

Assessing BPM Maturity with @RogerBurlton

Roger Burlton held a joint session across several of the tracks on assessing BPM maturity, starting with the BPTrends pyramid of process maturity, which ranges from a wide base of the implementation level, to the middle tier of the business process level, up to the enterprise level that includes strategy and process architecture. He also showed his own “Burlton Hexagon” of the disciplines that form around business process and performance: policy and rules, human capital, enabling technologies, supporting infrastructure, organizational structure, and intent and strategy. His point is that not everyone is ready for maturity in all the areas that impact BPM (such as organizational structure), although they may be doing process transformation projects that require greater maturity in many of these other areas. At some level, these efforts must be able to be traced back to corporate strategy.

He presented a process maturity model based on the SEI capability maturity model, showing the following levels:

  1. Initial – zero process organizations
  2. Repeatable – departmental process improvement projects, some cross-functional process definition
  3. Defined – business processes delivered and measurements defined
  4. Managed – governance system implemented
  5. Optimizing – ongoing process improvement

Moving from level 2 to 3 is a pretty straightforward progression that you will see in many BPM “project to program” initiatives, but the jump to level 4 requires getting the high-level management on board and starting to make some cultural shifts. Organizations have to be ready to accept a certain level of change and maturity: in fact, organizational readiness will always constrain achievement of greater maturity, and may even end up getting the process maturity team in trouble.

He presented a worksheet for assessing your enterprise BPM gap, with several different factors on which you are intended to mark the current state, the desired future state, and the organizational management (labeled as “how far will management let you go?”). The factors include enterprise context, value chain models, alignment of resources with business processes, process performance measurement system, direct management responsibility for value chains, and a process CoE. By marking the three states (as is, to be, and what can we get away with) on each of these as a starting point, it allows you to see not just the spread between where you are and where you need to be, but adds in that extra dimension of organizational readiness for moving to a certain level of process maturity.

Depending on whether your organization is ready to crawl, walk or run (as defined by your organizational readiness relative to the as-is and to-be states), there are different techniques for getting to the desired maturity state: for those with low organizational readiness, for example, you need to focus on increasing that first, then evolve the process capabilities together with readiness as it increases. Organizational readiness at the executive level manifests as understanding, willingness and ability to do their work differently: in many cases, executives don’t want to change how they do their work, although they do want to reap the benefits of increased process maturity.

He showed a more detailed spreadsheet of a maturity and readiness assessment for a large technology company, color-coded based on which factors contribute most to an increase in maturity, and which hold the most risk since they represent the biggest jump in maturity without necessarily having the readiness.

With such a focus on readiness, change management is definitely a big issue with increasing process maturity. In order to address this, there are a number of steps in a communication plan: understand the stakeholders’ concerns, determine the messages, identify the media for delivering the messages, identify timetables for communication and change, identify the messengers, create/identify change agents (who are sometimes the biggest detractors at the start), and deliver the message and handle the feedback. In looking at stakeholder concerns as part of the communication plan, you need to look at levels from informational (“what is it”), personal (“how will it impact my job”), management (“how will the change happen”) and consequences (“what are the benefits”), and on into collaboration, where the buy-in really starts to happen.

Ultimately, you’re not trying to sell business process change (or BPMS) within the organization: you’re trying to sell improvements in business performance, particularly for processes that are currently painful. Focus on the business goals, and use a model of the customer experience to illustrate how process improvements can improve that experience and therefore help meet overall business goals.

Finishing up with the maturity model, if you’re at level 1 or 2, get an initial process steering committee and CoE in place for governance, and plan a simple process architecture targeted at change initiatives rather than governance. Get standards for tools and templates in place, and start promoting the process project successes via the CoE. This is really about getting some lightweight governance in place, showing some initial successes, and educating all stakeholders on what process can do for them.

If you’re at level 3 or 4, you need to be creating your robust process architecture in collaboration with the business, and socializing it across the enterprise. With the Process Council (steering committee) in place, make sure that the process stewards/owners report up to the council. Put process measurements in place, and ensure that the business is being managed relative to those KPIs. Expand process improvement out to the related areas across the enterprise architecture, and create tools and methods within the CoE that make it easy to plan, justify and execute process initiatives.

Agile Predictive Process Platforms for Business Agility with @jameskobielus

James Kobielus of Forrester brought the concepts of predictive analytics to processes to discuss optimizing processes using the Next Best Action (NBA): using analytics and predictive models to figure out what you should do next in a process in order to optimize customer-facing processes.

As we heard in this morning’s keynote, agility is mandatory not just for competitive differentiation, but for basic business survival. This is especially true for customer-facing processes: since customer relationships are fragile and customer satisfaction is dynamic, the processes need to be highly agile. Customer happiness metrics need to be built into process design, since customer (un)happiness can be broadcast via social media in a heartbeat. According to Kobielus, if you have the right data and can analyze it appropriately, you can figure out what a customer needs to experience in order to maximize their satisfaction and your profits.

Business agility is all about converging process, data, rules and analytics. Instead of static business processes, historical business intelligence and business rules silos, we need to have real-time business intelligence, dynamic processes, and advanced analytics and rules that guide and automate processes. It’s all about business processes, but processes infused with agile intelligence. This has become a huge field of study (and implementation) in customer-facing scenarios, where data mining and behavioral studies are used to create predictive models of the next best action for a specific customer, given their past behavior as your customer, and even social media sentiment analysis.

He walked through a number of NBA case studies, including auto-generating offers based on a customer’s portal behavior in retail; tying together multichannel customer communications in telecom; and personalizing cross-channel customer interactions in financial services. These are based on coupling front and back-office processes with predictive analytics and rules, while automating the creation of the predictive models so that they are constantly fine-tuned without human intervention.
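A stripped-down sketch of the next-best-action pattern described here (with an invented scoring model, offer catalog and business rules, not any vendor’s implementation): score each candidate action for the customer with a predictive model, filter the candidates through business rules, and take the highest-scoring action that survives.

```python
# Invented next-best-action sketch: predictive scores plus business rules pick
# the action for a single customer interaction.

customer = {"id": "CU-77", "tenure_months": 26, "recent_complaint": True, "segment": "premium"}

candidate_actions = ["upsell_premium_plan", "retention_discount", "service_callback", "do_nothing"]

def predicted_value(action: str, cust: dict) -> float:
    # Stand-in for a real predictive model (propensity x expected margin).
    base = {"upsell_premium_plan": 40.0, "retention_discount": 25.0,
            "service_callback": 15.0, "do_nothing": 0.0}[action]
    if cust["recent_complaint"] and action == "service_callback":
        base += 30.0          # unhappy customers respond better to service recovery
    return base

def allowed(action: str, cust: dict) -> bool:
    # Business rules constrain what the model is allowed to recommend.
    if cust["recent_complaint"] and action == "upsell_premium_plan":
        return False          # don't try to upsell right after a complaint
    return True

next_best_action = max(
    (a for a in candidate_actions if allowed(a, customer)),
    key=lambda a: predicted_value(a, customer),
)
print("next best action for", customer["id"], "is", next_best_action)
```

In a production setting the scoring function would be a continuously retrained model and the selection would happen inline in the customer-facing process, which is the coupling of analytics, rules and process that Kobielus described.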

Process Excellence at Elevations Credit Union

Following the opening keynote at Building Business Capability, I attended the session about Elevations Credit Union’s journey to process excellence. Rather than a formal presentation, this was done as a sit-down discussion, with Carla Wolfe, senior business analyst at Elevations CU, being interviewed by Mihnea Galateanu, Chief Storyteller for Blueworks Live at IBM. Elevations obviously has a pretty interesting culture, because they publicly state – on their Facebook page, no less – that achieving the Malcolm Baldrige National Quality Award is their big hairy audacious goal (BHAG). To get there, they first had to get their process house in order.

They had a lot of confusion about what business processes even are, and how to discover the business processes that they had and wanted to improve. They used the APQC framework as a starting point, and went out to all of their business areas to see who “Got Process?”. As they found out, about 80% didn’t have any idea of their business processes, and certainly didn’t have them documented or managed in any coherent manner. As they went through process discovery, they pushed towards “enterprise process maps”: namely, their end-to-end processes, or value streams.

Elevations is a relatively small company, only 260 employees; they went from having 60 people involved in process management (which is an amazingly high percentage to begin with) to a “much higher” number now. By publicly stating the Baldrige award – which is essentially about business process quality – as a BHAG, they couldn’t back away from this; this was a key motivator that kept people involved in the process improvement efforts. As they started to look at how processes needed to work, there was a lot of pain, particularly as they looked at some of the seriously broken processes (like when the marketing department created a promotion using a coupon to bring in new customers, but didn’t inform operations about the expected bump of new business, nor tell the front-line tellers how to redeem the coupons). Even processes that are perceived as being dead simple – such as cashing a $100 bill at a branch – ended up involving many more steps and people than anyone had anticipated.

What I found particularly interesting about their experience was how they really made this about business processes (using value stream terminology, but processes nonetheless), so that everything that they looked at had to relate to a value stream. “Processes are the keys to the kingdom”, said Wolfe, when asked why they focused on process rather than, for example, customers. As she pointed out, if you get your processes in order, everything else falls into place. Awesome.

It was a major shift in thinking for people to see how they fit into these processes, and how they supported the overall value stream, since most people (not just those at Elevations) think only about their own silo, and don’t think beyond their immediate process neighbors. Now, they think about process first, transforming the entire organization into process thinking mode. As they document their processes (using, in part, a Six Sigma SIPOC model), they add a picture of the process owner to each of the processes or major subprocesses, which really drives home the concept of process ownership. I should point out that most of the pictures that she showed of this were of paper flow diagrams pasted on walls; although they are a Blueworks Live customer, the focus here was really on their process discovery and management. She did, however, talk about the limitations of paper-based process maps (repository management, collaboration, ease of use), and how they used Blueworks Live once they had stabilized their enterprise process maps in order to allow better collaboration around the process details. By developing the SIPOCs of the end-to-end processes first on paper, they then recreated those in Blueworks Live to serve as a framework for collaboration, and anyone creating a new process had to link it to one of those existing value streams.

It’s important to realize that this was about documenting and managing manual processes, not implementing them in an automated fashion using a BPMS execution engine. Process improvement isn’t (necessarily) about technology, as they have proved, although the process discovery uses a technology tool, and the processes include steps that interact with their core enterprise systems. Fundamentally, these are manual processes that include system interaction. Which means, of course, that there may be a whole new level of improvement that they could consider by adding some process automation to link together their systems and possibly automate some manual steps, plus automate some of the metrics and controls.

So where are they in achieving their BHAG? One year after launching their process improvement initiative, they won the Timberline level of the Colorado Performance Excellence (CPEx) Award, and continue to have their sights set on the Baldrige in the long term. Big, hairy and audacious, indeed.