SAP World Tour Toronto: Morning Keynotes

There was a big crowd out for SAP’s only Canadian stop on its World Tour today: about 900 people in the keynote as Mark Aboud took the stage to discuss how SAP helps companies run their business, and to look at the business trends in Canada right now: focusing on the customer to create an experience; improving employee engagement by giving people better tools and information to do their jobs; and increasing speed in operations, managing information and distributing information. He moved on to three technology trends, which echo what I heard at CASCON earlier this week: big data, cloud and mobility. No surprises there. He then spoke about what SAP is doing about these business and technology trends, which is really the reason that we’re all here today: cloud, analytics and mobility. Combined with their core ERP business, these “new SAP” products are where SAP is seeing market growth, and where they seem to be focusing their strategy.

He then invited CBC business correspondent Amanda Lang to the stage to talk further about productivity and innovation. It’s not just about getting better – it’s about getting better faster. This was very much a Canadian perspective, which means a bit of an inferiority complex in comparing ourselves to the Americans, but also some good insights into the need to change corporate culture in order to foster an atmosphere of innovation, including leaving room for failure. Aboud also provided some good insights into how SAP is transforming itself, in addition to what their customers are doing. SAP realized that they needed to bring game-changing technology to the market, and now see HANA as being as big for SAP as R/3 was back in the day. As Lang pointed out, service innovation is as important as product innovation in Canada (or even more so), and SAP is supporting service businesses such as banking in addition to their more traditional position in product manufacturing companies.

Next up was Gary Hamel, recently named by the Wall Street Journal as the world’s most influential business thinker. Obviously, I’m just not up on my business thinkers, because I’ve never heard of him; certainly, he was a pro at business-related sound bites. He started off by asking what makes us inefficient, and talking about how we’re at an inflection point in terms of the rate of change required by business today. Not surprisingly, he sees management as the biggest impediment to efficiency and innovation, and listed three problematic characteristics that many companies have today:

  • Inertial (not very adaptable)
  • Incremental (not very innovative)
  • Insipid (not very inspiring)

He believes that companies need to foster initiative, creativity and passion in their employees, not obedience, diligence and intellect. I’m not sure that a lot of companies would survive without intellect, but I agree with his push from feudal “Management 1.0” systems to more flexible organizations that empower employees. Management 1.0 is based on standardization, specialization, hierarchy, alignment, conformance, predictability and extrinsic rewards. Management 2.0 is about transparency (giving people the information that they need to do their job), disaggregation (breaking down the corporate power structures to give people responsibility and authority), natural hierarchies (recognizing people’s influence as measured by how much value they add), internal markets (providing resources inside companies based on market-driven principles rather than hierarchies, allowing ideas to come from anyone), communities of passion (allowing people to work on the things for which they have passion in order to foster innovation), self-determination (allowing freedom to move within corporate control structures based on value added), and openness (external crowdsourcing). Lots of great ideas here, although guaranteed to shake up most companies today.

The only bad note of the morning (aside from having to get up early, rent a Zipcar and drive through morning rush hour to an airport-area conference center far from downtown) was the Women’s Leadership Forum breakfast. Moderated by a Deloitte partner, the panel included a VP of Marketing from Bell and the Director of Legal for Medtronic. Where are the women in technology? Where are the women entrepreneurs? The woman from Bell, when asked about lessons that she could share, started with “work harder, every day – just that extra half hour or so”. That is so wrong. We need to be working smarter, not longer hours, and we need to take time away from work so that we’re not focused on it every day of our lives if we expect to show true innovative leadership. About 20 minutes into the conversation, when the moderator turned the talk away from business and started asking about their children, horseback riding and the dreaded “work-life balance”, I left. What other business leadership forum that didn’t have the word “women” in the title would have included such topics? Quite frankly, this was an embarrassment.

Aligning BPM and EA Tutorial at BBCCon11

I reworked my presentation on BPM in an enterprise architecture context (a.k.a., “why this blog is called ‘Column 2’”) that I originally did at the IRM BPM conference in London in June, and presented it at the Building Business Capability conference in Fort Lauderdale last week. I removed much of the detailed information on BPMN, refined some of the slides, and added in some material from Michael zur Muehlen’s paper on primitives in BPM and EA. Some nice improvements, I thought, and it came in right on time at 3 hours without having to skip over some material as I did in London.

Here are some of the invaluable references that I used in creating this presentation:

That should give you plenty of follow-on reading if you find my slides to be too sparse on their own.

Sal Vella on Technologies for a Smarter Planet at CASCON2011

I attended the keynote at IBM’s CASCON conference in Toronto today, where Judy Huber, who directs the IBM Canada software lab, kicked off the session by reminding us that IBM software development has been happening in Canada since 1967 and continues to grow, and of the importance of collaboration between the research and industry communities. She introduced Joanna Ng, who is the head of research at the lab, to congratulate the winners of the most influential paper from CASCON 2001 (that date is not a typo, it’s a 10-year thing): Svetlana Kiritchenko and Stan Matwin for “Classification with Co-Training” (on email classification).

The main speaker was Sal Vella, VP of architecture and technology within the IBM software group, talking about technologies to build solutions for a smarter planet. Fresh from the IOD conference two weeks ago, I was all primed for this; there was a great booth at IOD that highlighted “smarter technology” with some interesting case studies. IBM’s smarter planet initiative is about technologies that allow us to do things that we were never able to do before, much of which is based on the immeasurable volume of data constantly produced by people, devices and systems. Consider electricity meters, like the one that you have in your home: it used to be that these were read once per month (if you were lucky) by a human, and the results entered into a billing system. Now, smart meters are read every 15 minutes to allow for time-of-use billing that rewards people for shifting their electricity usage away from peak periods. Analytics are being used in ways that they were never used before, and he discussed the popular Moneyball case of building a sports team based on player statistics. He also spoke about an even better use of analytics to create “solutions for a partying planet”: a drinks supplier predicting sports game outcomes to ensure that the pubs frequented by the fans of the teams most likely to win had enough alcohol on hand to cover the ensuing parties. Now that’s technology used for the greater good. ;)
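
For the curious, here is a minimal sketch of how those 15-minute interval reads roll up into a time-of-use bill; the rate bands, rates and readings below are invented for illustration rather than taken from any utility’s actual tariff.

```python
from datetime import datetime, timedelta

# Hypothetical time-of-use rates in $/kWh, keyed by hour-of-day bands.
RATES = {
    "off_peak": 0.062,   # 19:00-07:00
    "mid_peak": 0.092,   # 07:00-11:00 and 17:00-19:00
    "on_peak":  0.108,   # 11:00-17:00
}

def band_for(ts):
    """Return the rate band for a timestamp (weekends ignored for brevity)."""
    h = ts.hour
    if 11 <= h < 17:
        return "on_peak"
    if 7 <= h < 11 or 17 <= h < 19:
        return "mid_peak"
    return "off_peak"

def bill(readings):
    """readings: list of (timestamp, kWh consumed in that 15-minute interval)."""
    totals = {band: 0.0 for band in RATES}
    for ts, kwh in readings:
        totals[band_for(ts)] += kwh
    return {band: round(kwh * RATES[band], 2) for band, kwh in totals.items()}

# Example: one day of synthetic 15-minute reads at a constant 0.2 kWh per interval.
start = datetime(2011, 11, 3)
readings = [(start + timedelta(minutes=15 * i), 0.2) for i in range(96)]
print(bill(readings))
```

Shifting consumption out of the on-peak band directly lowers the bill, which is exactly the behaviour that the pricing is meant to reward.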

There are a lot of examples of big data and analytics problems that were previously unmanageable but are now becoming reasonable targets, most of which could be considered event-based: device instrumentation, weather data, social media, credit card transactions, crime statistics, traffic data and more. There are also some interesting problems in determining identity and relationships: figuring out who people really are even when they use different versions of their name, and who they are connected to in a variety of different ways that might indicate potential for fraud or other misrepresentation. Scary and big-brotherish to some, but undeniably providing organizations (including governments) with deeper insights into their customers and constituents. If those who complain about governments using this sort of technology “against” them would learn how to use it themselves, the tables might be turned as we gain insight into how well government is providing services to us.

We heard briefly from Charles Gauthier, acting director at the institute for information technology at National Research Council (NRC) Canada. NRC helped to create the CASCON conference 21 years ago and continues to sponsor it; the institute supports research in a number of areas that overlap with CAS and the other researchers and exhibitors presenting here.

The program chairs, Marin Litoiu of York University and Eleni Stroulia of University of Alberta, presented awards for the two outstanding papers from the 22 papers at the conference:

  • “Enhancing Applications Robustness in Cloud Data Centres” by Madalin Mihailescu, Andres Rodriguez and Cristiana Amza of University of Toronto, and Dmitrijs Palcikovs, Gabriel Iszlai, Andrew Trossman and Joanna Ng of IBM Canada
  • “Parallel Data Cubes on Multi-Core Processors with Multiple Disks” for best student paper, by Hamidreza Zaboli and Frank Dehne of Carleton University

We finished with a presentation by Stan Matwin of University of Ottawa, co-author of the most influential paper presentation on email classification from the CASCON of 10 years past (his co-author is due to give birth on Wednesday, so she decided not to attend). It was an interesting look at how the issue of email classification has continued to grow in the past 10 years; systems have become smarter since then, and we have automated spam filtering as well as systems for suggesting actions to take (or even taking actions without human input) on a specific message. The email classification that they discussed in their paper was based on classification systems where multiple training sets were used in concert to provide an overall classification for email messages. For example, two messages might both use the word “meeting” and a specific time in the subject line, but one might include a conference room reference in the body while the other references the local pub. Now, I often have business meetings in the pub, but I understand that many people do not, so I can see the value of such a co-training method. In 2001, they came to the conclusion that co-training can be useful, but is quite sensitive to its parameters and the learning algorithms used. Email classification has progressed since then: Bayesian (and other) classifiers have improved drastically; data representation is richer (through the use of meta formats and domain-specific enrichment) to allow for easier classification; social network and other information can be correlated; and there are specific tailored solutions for some email classification applications such as legal discovery. Interesting to see this sort of perspective on a landmark paper in the field of email classification.
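
The co-training idea is easier to see in code than in prose, so here is a minimal sketch of one round of it (my own toy illustration in Python with scikit-learn, not the paper’s implementation), using the subject line and the body as the two views, with the subject-view classifier pseudo-labelling an example for the body-view classifier:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny synthetic corpus; each message has two views (subject, body) and a label
# (1 = meeting, 0 = not). Real co-training needs far more labeled and unlabeled data.
labeled = [
    ("meeting at 3pm", "agenda attached, conference room B", 1),
    ("team meeting tomorrow", "see you in boardroom 4", 1),
    ("lunch?", "the pub on the corner at noon", 0),
    ("newsletter", "monthly updates and special offers", 0),
]
unlabeled = [
    ("project meeting 10am", "room 7, bring the slides"),
    ("friday drinks", "usual pub after work"),
]

vec_subj, vec_body = CountVectorizer(), CountVectorizer()
X_subj = vec_subj.fit_transform([s for s, _, _ in labeled])
X_body = vec_body.fit_transform([b for _, b, _ in labeled])
y = np.array([label for _, _, label in labeled])

clf_subj = MultinomialNB().fit(X_subj, y)
clf_body = MultinomialNB().fit(X_body, y)

# One co-training round: the subject-line classifier labels the unlabeled pool,
# and its most confident prediction is added to the body classifier's training set.
U_subj = vec_subj.transform([s for s, _ in unlabeled])
confidences = clf_subj.predict_proba(U_subj).max(axis=1)
best = int(confidences.argmax())
pseudo_label = int(clf_subj.predict(U_subj[best])[0])

X_new = vec_body.transform([unlabeled[best][1]])
clf_body = MultinomialNB().fit(
    np.vstack([X_body.toarray(), X_new.toarray()]),
    np.append(y, pseudo_label),
)
print("pseudo-labelled:", unlabeled[best], "->", pseudo_label)
```

The sensitivity that they reported in 2001 shows up right here: how many pseudo-labels you accept per round, and how confident a classifier has to be, can make or break the result.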

I’m not sticking around for any of the paper presentations, since the ones later today are a bit out of my area of interest, and I’m booked the rest of the week on other work. However, I have the proceedings so will have a chance to look over the papers.

NSERC BI Network at CASCON2011 (Part 2)

The second half of the workshop started with Renée Miller from University of Toronto digging into the deeper database levels of BI, and the evolving role of schema from a prescriptive role (time-invariant, used to ensure data consistency) to a descriptive role (describe/understand data, capture business knowledge). In the old world, a schema was meant to reduce redundancy (via Boyce-Codd normal form), whereas the new world schema is used to understand data, and the schema may evolve. There are a lot of reasons why data can be “dirty” – my other half, who does data warehouse/BI for a living, is often telling me about how web developers create their operational database models mostly by accident, then don’t constrain data values at the UI – but the fact remains that no matter how clean you try to make it, there are always going to be operational data stores with data that needs some sort of cleansing before it can support effective BI. In some cases, rules can be used to maintain data consistency, especially where those rules are context-dependent. In cases where the constraints are inconsistent with the existing data (besides asking the question of how that came to be), you can either repair the data, or discover new constraints from the data and repair the constraints. Some human judgment may be involved in determining whether the data or the constraint requires repair, although statistical models can be used to understand when a constraint is likely invalid and requires repair based on data semantics. In large enterprise databases as well as web databases, this sort of schema management and discovery could be used to identify and leverage redundancy in data to discover metadata such as rules and constraints, which in turn could be used to modify the data in classic data repair scenarios, or modify the schema to adjust for a changing reality.
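
To make the “repair the data or repair the constraint” decision concrete, here is a toy sketch that checks a functional-dependency-style constraint against a handful of rows and uses a crude violation-rate threshold as a stand-in for the statistical models that Miller described; the rule, data and threshold are all invented:

```python
from collections import defaultdict

# Candidate constraint: postal_code -> city (a functional dependency).
rows = [
    {"postal_code": "M5V", "city": "Toronto"},
    {"postal_code": "M5V", "city": "Toronto"},
    {"postal_code": "M5V", "city": "Tronto"},   # likely dirty data
    {"postal_code": "K1A", "city": "Ottawa"},
]

def violations(rows, lhs, rhs):
    """Rows that disagree with the majority rhs value for their lhs value."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[lhs]].append(r[rhs])
    bad = []
    for key, values in groups.items():
        majority = max(set(values), key=values.count)
        bad.extend(r for r in rows if r[lhs] == key and r[rhs] != majority)
    return bad

bad = violations(rows, "postal_code", "city")
violation_rate = len(bad) / len(rows)

# Crude decision rule: a few outliers suggest repairing the data; widespread
# violations suggest the constraint no longer describes reality and should be
# repaired (or re-discovered) instead.
if violation_rate < 0.3:
    print("repair the data:", bad)
else:
    print("repair or re-discover the constraint; violation rate =", violation_rate)
```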

Sheila McIlraith from University of Toronto presented on a use-centric model of data for customizing and constraining processes. I spoke last week at Building Business Capability on some of the links between data and processes, and McIlraith characterized processes as a purposeful view of data: processes provide a view of the data, and impose policies on data relative to some metrics. Processes are also, as she pointed out, a delivery vehicle for BI – from a BPM standpoint, this is a bit of a trivial publishing process – to ensure that the right data gets to the right stakeholder. The objective of her research is to develop a business process modeling formalism that treats data and processes as first-class citizens, and supports specification of abstract (ad hoc) business processes while allowing the specification of stakeholder policies, preferences and priorities. Sounds like data+process+rules to me. The approach is to specify processes as flexible templates, with policies as further constraints; although she represents this as allowing for customizable processes, it really just appears to be a few pre-defined variations on a process model with a strong reliance on rules (in linear temporal logic) for policy enforcement, not full dynamic process definition.
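
Here is a rough sketch of the template-plus-policies idea as I understood it (my own simplification in Python, not McIlraith’s formalism, which uses linear temporal logic): candidate orderings of a flexible template are filtered by stakeholder policies expressed as predicates over the sequence.

```python
from itertools import permutations

# A flexible template: these steps must all happen, but the order is only
# partially fixed; stakeholder policies constrain the remaining freedom.
steps = ["collect_data", "review", "approve", "publish"]

# Policies as predicates over a candidate sequence (a stand-in for the
# temporal-logic formulas used in the actual research).
policies = [
    lambda seq: seq.index("review") < seq.index("approve"),  # review before approval
    lambda seq: seq[-1] == "publish",                        # publishing always comes last
]

def allowed_variants(steps, policies):
    """All orderings of the template that satisfy every policy."""
    return [seq for seq in permutations(steps) if all(p(seq) for p in policies)]

for variant in allowed_variants(steps, policies):
    print(" -> ".join(variant))
```

With only two policies in place, three orderings survive; adding more policies narrows the template down to fewer variants, which is the “customization by constraint” flavour of the approach.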

Lastly, we heard from Rock Leung from SAP’s academic research center and Stephan Jou from IBM CAS on industry challenges: SAP and IBM are industry partners to the NSERC Business Intelligence Network. They listed 10 industry challenges for BI, but focused on big data, mobility, consumable analytics, and geospatial and temporal analytics.

  • Big data: Issues focus on volume of data, variety of information and sources, and velocity of decision-making. Watson has raised expectations about what can be done with big data, but there are challenges on how to model, navigate, analyze and visualize it.
  • Consumable analytics: There is a need to increase usability and offer new interactions, making the analytics consumable by everyone – not just statistical wizards – on every type of device.
  • Mobility: Since users need to be connected anywhere, there is a need to design for smaller devices (and intermittent connectivity) so that information can be represented effectively, and seamlessly with representations on other devices. Both presenters said that there is nothing that their respective companies are doing where mobile device support is not at least a topic of conversation, if not already a reality.
  • Geospatial and temporal analytics: Geospatial data isn’t just about Google Maps mashups any more: location and time are being used as key constraints in any business analytics, especially when you want to join internal business information with external events.

They touched briefly on social in response to a question (it was on their list of 10, but not the short list), seeing it as a way to make decisions better.

For a workshop on business intelligence, I was surprised at how many of the presentations included aspects of business rules and business process, as well as the expected data and analytics. Maybe I shouldn’t have been surprised, since data, rules and process are tightly tied in most business environments. A fascinating morning, and I’m looking forward to the keynote and other presentations this afternoon.

NSERC BI Network at CASCON2011 (Part 1)

I only have one day to attend CASCON this year due to a busy schedule this week, so I am up in Markham (near the IBM Toronto software lab) to attend the NSERC Business Intelligence Network workshop this morning. CASCON is the conference run by IBM’s Centers for Advanced Studies throughout the world, including the Toronto lab (where CAS originated), as a place for IBM researchers, university researchers and industry to come together to discuss many different areas of technology. Sometimes, this includes BPM-related research, but this year the schedule is a bit light on that; however, the BI workshop promises to provide some good insights into the state of analytics research.

Eric Yu from University of Toronto started the workshop, discussing how BI can enable organizations to become more adaptive. Interestingly, after all the talk about enterprise architecture and business architecture at last week’s Building Business Capability conference, that is the focus of Yu’s presentation, namely, that BI can help enterprises to better adapt and align business architecture and IT architecture. He presented a concept for an adaptive enterprise architecture that is owned by business people, not IT, and geared at achieving measurable business success. He discussed modeling variability at different architectural layers, and the traceability between them, and how making BI an integral part of an organization – not just the IT infrastructure – can support EA adaptability. He finished by talking about maturity models, and how a closed loop deployment of BI technologies can help meet adaptive enterprise requirements. Core to this is the explicit representation of change processes and their relationship to operational processes, as well as linking strategic drivers to specific goals and metrics.

Frank Tompa from University of Waterloo followed with a discussion of mapping policies (from a business model, typically represented as high-level business rules) to constraints (in a data model) so that these can be enforced within applications. My mind immediately went to why you would be mapping these to a database model rather than a rules management system; his view seems to be that a DBMS is what monitors at a transactional level and ensures compliance with the business model (rules). His question: “how do we make the task of database programming easier?” My question: “why aren’t you doing this with a BRMS instead of a DBMS?” Accepting his premise that this should be done by a database programmer, the approach is to start with object definitions, where an object is a row (tuple) defined by a view over a fixed database schema, and represents all of the data required for policy making. Secondly, consider the states that an object can assume by considering that an object x is in state S if its attributes satisfy S(x). An object can be in multiple states at once; the states seem to be more like functions than states, but whatever. Thirdly, the business model has to be converted to an enforcement model through a sort of process model that also includes database states; really more of a state diagram that maps business “states” to database states, with constraints on states and state transitions denoted explicitly. I can see some value in the state transition constraint models in terms of representing some forms of business rules and their temporal relationships, but his representation of a business process as a constraint diagram is not something that a business analyst is ever going to read, much less create. However, the role of the business person seems to be restricted to “policy designer” listing “states of interest”, and the goal of this research is to “form a bridge between the policy manager and the database”. Their future work includes extracting workflows from database transaction logs, which is, of course, something that is well underway in the BPM data mining community. I asked (explicitly to the presenter, not just snarkily here in my blog post) about the role of rules engines: he said that one of the problems was in vocabulary definition, which is often not done in organizations at the policy and rules level; by the time things get to the database, the vocabulary is sufficiently constrained that you can ensure that you’re getting what you need. He did say that if things could be defined in a rules engine using a standardized vocabulary, then some of the rules/constraints could be applied before things reached the database; there does seem to be room for both methods as long as standardized business rules vocabularies (which do exist) are not yet well-entrenched in organizations.
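
To make the state idea concrete, here is a small sketch of my own (not Tompa’s notation): an object is a row drawn from a view, each state is just a predicate S(x) over its attributes, and an object can satisfy several states at once, which is why they feel more like functions than states.

```python
# An "object" is a row (tuple) drawn from a view over the database schema.
order = {"total": 1200.0, "paid": False, "ship_date": None, "customer_tier": "gold"}

# Each state is a predicate S(x): the object is "in" every state whose predicate
# holds, so an object can be in several states at once.
states = {
    "awaiting_payment": lambda x: not x["paid"],
    "high_value":       lambda x: x["total"] > 1000,
    "ready_to_ship":    lambda x: x["paid"] and x["ship_date"] is None,
    "preferred":        lambda x: x["customer_tier"] == "gold",
}

def states_of(obj):
    return {name for name, pred in states.items() if pred(obj)}

print(states_of(order))  # {'awaiting_payment', 'high_value', 'preferred'}

# A policy constraint of the kind mapped into the enforcement model: an order may
# not be ready to ship while it is still awaiting payment.
def state_allowed(current_states):
    return not ({"ready_to_ship", "awaiting_payment"} <= current_states)

print(state_allowed(states_of(order)))                    # True
print(state_allowed(states_of(dict(order, paid=True))))   # True: ready_to_ship, no longer awaiting
```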

Jennifer Horkoff from University of Toronto was up next discussing strategic models for BI. Her research is about moving BI from a technology practice to a decision-making process that starts with strategic concerns, generates BI queries, interprets the results relative to the business goals and decides on necessary actions. She started with the OMG Business Motivation Model (BMM) for building governance models, and extended that to a Business Intelligence Model (BIM), or business schema. The key primitives include goals, situations (which can model SWOT), indicators (quantitative measures), influences (relationships) and more. This model can be used at the high-level strategic level, or at a more tactical level that links more directly to activities. There is also the idea of a strategy, which is a collection of processes and quality constraints that fulfill a root-level goal. Reasoning can be done with BIMs, such as determining whether a specific strategy can fulfill a specific goal, and influence diagrams with probabilities on each link can be used to help determine decisions. They are using BIM concepts to model a case study with Rouge Valley Health System to improve patient flow and reduce wait times; results from this will be seen in future research.
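
As a toy illustration of the sort of reasoning that a BIM enables (my own invention, not the actual BIM formalism), indicators can feed a root goal through weighted influence links, with the propagated score suggesting how well a strategy is fulfilling the goal:

```python
# Indicators feed a root goal through weighted influence links, loosely in the
# spirit of a BIM; all targets, weights and values here are invented.
indicators = {"avg_wait_time_min": 42, "patients_per_day": 310}

# Hypothetical targets: (target value, whether higher is better).
targets = {"avg_wait_time_min": (30, False), "patients_per_day": (300, True)}

def score(name):
    """Normalize an indicator into a [0, 1] satisfaction score against its target."""
    value = indicators[name]
    target, higher_is_better = targets[name]
    ratio = value / target if higher_is_better else target / value
    return min(ratio, 1.0)

# Influence links: (indicator, weight) on the root goal "improve patient flow".
influences = [("avg_wait_time_min", 0.7), ("patients_per_day", 0.3)]

goal_satisfaction = sum(weight * score(ind) for ind, weight in influences)
print(f"'improve patient flow' satisfaction: {goal_satisfaction:.2f}")
```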

Each of these presentations could have filled a much bigger time slot, and I could only capture a flavor of their discussions. If you’re interested in more detail, you can contact the authors directly (links to each above) to get the underlying research papers; I’ve always found researchers to be thrilled that anyone outside the academic community is interested in what they’re doing, and are happy to share.

We’re just at the mid-morning break, but this is getting long so I’ll post this and continue in a second post. Lots of interesting content, and I’m looking forward to the second half.

Catch Me Twice On “Webinar Week”

I’m presenting on two webinars this week. First, on Tuesday (tomorrow), I will be joining Jeremy Westerman of TIBCO to discuss the BPM issues and challenges specific to large enterprises. It’s at 11am Eastern (8am Pacific) on Tuesday, and you can sign up here.

Then, on Wednesday, I’ll be presenting with Matt Cicciari of Progress on how BPM can work within an application development environment. Since this is targeted at Progress OpenEdge developers who may not know a lot about BPM, I’ll be covering some BPM background plus why you want to do certain things with a BPMS, such as explicit process modeling. This is at 11am Eastern on Wednesday, and you can sign up here.

These two gigs are sandwiched between IBM’s CASCON today, where I am attending the NSERC Business Intelligence workshop in the morning and the keynote presentations in the afternoon, and SAP’s World Tour on Thursday. Both of these, although not requiring me to get on an airplane, do require me to get in a Zipcar and drive to the far reaches of the Toronto suburbs and beyond.

Improving Process Quality with @TJOlbrich

My last session at Building Business Capability before heading home, and I just had to sit in on Thomas Olbrich’s session on some of the insights into process quality that he has gained through the Process TestLab. Just before the session, he decided to retitle it as “How to avoid being mentioned by Roger Burlton”, namely, not being one of the process horror stories that Roger loves to share.

According to many analyst studies, only 18% of business process projects achieve their scope and objectives while staying on time and on budget, making process quality more of an exception than the rule. In the Process TestLab, they see a lot of different types of process quality errors:

  • 92% have logical errors
  • 62% have business errors
  • 95% have dynamic defects that would manifest in the environment of multiple processes running simultaneously, and having to adapt to changing conditions
  • 30% are unsuited to the real-world business situation

Looking at their statistics for 2011 to date, about half of the process defects are due to discrepancies between models and the verbal/written description – what would typically be considered “requirements” – with the remainder spread across a variety of defects in the process models themselves. The process model defects may manifest as endless loops, disappearing process instances, missing data and a variety of other undesired results.

He presented four approaches for improving process quality:

  • Check for process defects at the earliest possible point in the design phase
  • Validate the process before implementing, either through manual reenactment, simulation, the TestLab approach (which simulates the end-user experience as well as the flow), or a BPMS environment such as IBM BPM (formerly Lombardi) that allows playback of models and UI very early in the design phase
  • Check for practicability to determine if the process will work in real life
  • Understand the limits of the process to know when it will cease to deliver when circumstances change

Olbrich’s approach is based on the separation of business-based modeling of processes from IT implementation: he sees that these sort of process quality checks are done “before you send the process over to IT for implementation”, which is where their service fits in. Although that’s still the norm in many cases, as model-driven development becomes more business-friendly, the line between business modeling and implementation is getting fuzzier in some situations. However, in most complex line-of-business processes, especially those that use quite a bit of automation and have complex user experience, this separation is still prevalent.

Some of his case studies certainly bear this out: a fragment of the process models sent to them by a telecom customer filled an entire slide, even though the activities in the processes were only slightly bigger than individual pixels. The customer had “tested” the process themselves already, but using the typical method of showing the process, encouraging people to walk through it as quickly as possible, and sign off on it. In the Process TestLab, they found 120 defects in process logic alone, meaning that the processes would never have executed as modeled, and 20 process integration defects that determined how different processes related to each other. Sure, IT would have worked around those defects during implementation, but then the process as implemented would be significantly different from the process as modeled by the business. That means that the business’ understanding and documentation of their processes are flawed, and that IT had to make changes to the processes – possibly without signoff from the business – that may actually change the business intention of the processes.

It’s necessary to use context when analyzing and optimizing processes in order to avoid verschlimmbesserung, roughly translated as “improvements that make things worse”, since the interaction between processes is critical: change is seldom limited to a single process. This is where process architecture can help, since it can show the relations between processes as well as the processes themselves.

Testing process models by actually experiencing them, as if they were already live, allows business users and analysts to detect flaws while they are still in the model stage by standing in for the users of the intended process and seeing if they could do the assigned business task given the user interface and information at that point in the process. Process TestLab is certainly one way to do that, although a sufficiently agile model-driven BPMS could probably do something similar if it were used that way (which most aren’t). In addition to this type of live testing, they also do more classic simulation, highlighting bottlenecks and other timing-related problems across process variations.

The key message: process quality starts at the very beginning of the process lifecycle, so test your processes before you implement, rather than trying to catch them during system testing. The later that errors are identified, the more expensive it is to fix them.

What Analysts Need to Understand About Business Events

Paul Vincent, CTO of Business Rules and CEP at TIBCO (and possibly the only person at Building Business Capability sporting a bow tie), presented a less technical view of events than you would normally see in one of his presentations, intended to help the business analysts here at Building Business Capability understand what events are, how they impact business processes, and how to model them. He started with a basic definition of events – an observation, a change in state, or a message – and why we should care about them. I cover events in the context of processes in many of the presentations that I give (including the BPM in EA tutorial that I did here on Monday), and his message is the same: life is event-driven, and our business processes need to learn to deal with that fact. Events are one of the fundamentals of business and business systems, but many systems do not handle external events well. Furthermore, many process analysts don’t understand events or how to model them, and can end up creating massive spaghetti process models to try and capture the result of events since they don’t understand how to model events explicitly.

He went through several different model types that allow for events to be captured and modeled explicitly, and compared the pros and cons of each: state models, event process chain models, resources events agents (REA) models, and BPMN models. The BPMN model is the only one that really models events in the context of business processes, and relates events as drivers of process tasks, but is really only appropriate for fairly structured processes. It does, however, allow for modeling 63 different types of events, meaning that there’s probably nothing that can happen that can’t be modeled by a BPMN event. The heavy use of events in BPMN models can make sense for heavily automated processes, and can make the process models much more succinct. Once the event notation is understood, it’s fairly easy to trace through them, but events are the one thing in BPMN that probably won’t be immediately obvious to the novice process analyst.

In many cases, individual events are not the interesting part, but rather the correlation between many events; for example, fraud events may be detected only after many small related transactions have occurred. This is the heart of complex event processing (CEP), which can be applied to a wide variety of business situations that rely on large volumes of events, and which distinguishes it from the simple process patterns and business rules that can be applied to individual transactions.

Looking at events from an analyst’s view, it’s necessary to identify actors and roles, just as in most use cases, then identify what they do and (more importantly) when they do it in order to drive out the events, their sources and destinations. Events can be classified as positive (e.g., something that you are expecting to happen actually happened), negative (e.g., something that you are expecting to happen didn’t happen within a specific time interval) or sets (e.g., the percentage of a particular type of event is exceeding an SLA). In many cases, the more complex events that we start to see in sets are the ones that you’re really interested in from a business standpoint: fraud, missed SLAs, gradual equipment failure, or customer churn.
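
To ground that classification, here is a minimal sketch of detecting a negative event (an expected confirmation that never arrives within its window) and a set event (the error rate over a window exceeding an SLA); a real CEP engine would express these as declarative patterns rather than Python code, and the event types and thresholds here are invented:

```python
from datetime import datetime, timedelta

# Simple event stream: (timestamp, event_type, correlation_id)
events = [
    (datetime(2011, 11, 3, 9, 0), "order_placed", "A1"),
    (datetime(2011, 11, 3, 9, 2), "order_confirmed", "A1"),
    (datetime(2011, 11, 3, 9, 5), "order_placed", "A2"),   # never confirmed
    (datetime(2011, 11, 3, 9, 30), "error", "A3"),
    (datetime(2011, 11, 3, 9, 31), "error", "A4"),
    (datetime(2011, 11, 3, 9, 32), "ok", "A5"),
]

# Negative event: an order_placed with no order_confirmed within 10 minutes.
def missing_confirmations(events, window=timedelta(minutes=10)):
    placed = {cid: ts for ts, etype, cid in events if etype == "order_placed"}
    confirmed = {cid for _, etype, cid in events if etype == "order_confirmed"}
    now = max(ts for ts, _, _ in events)
    return [cid for cid, ts in placed.items()
            if cid not in confirmed and now - ts > window]

# Set event: error rate over the last 30 minutes exceeding a 25% SLA.
def sla_breached(events, window=timedelta(minutes=30), threshold=0.25):
    now = max(ts for ts, _, _ in events)
    recent = [etype for ts, etype, _ in events if now - ts <= window]
    return bool(recent) and recent.count("error") / len(recent) > threshold

print("unconfirmed orders:", missing_confirmations(events))
print("SLA breached:", sla_breached(events))
```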

He presented the EPTS event reference architecture for complex events, then discussed how the different components are developed during analysis:

  • Event production and consumption, namely, where events come from and where they go
  • Event preparation, or what selection operations need to be performed to extract the events, such as monitoring, identification and filtering
  • Event analysis, or the computations that need to be performed on the individual events
  • Complex event detection, that is, the event correlations and patterns that need to be performed in order to determine if the complex event of interest has occurred
  • Event reaction, or what event actions need to be performed in reaction to the detected complex event; this can overlap to some degree with predictive analytics in order to predict and learn the appropriate reactions

He discussed event dependency models, which show event orderings, and relate events together as meaningful facts that can then be used in rules. Although not a common practice, this model type does show relationships between events as well as linking to business rules.

He finished with some customer case studies that include CEP and event decision-making: FedEx achieving zero latency in determining where a package is right now; and Allstate using CEP to adjust their rules on a daily basis, resulting in a 15% increase in closing rates.

A final thought that he left us with: we want agile processes and agile decisions; process changes and rule changes are just events. Analyzing business events is good, but exploiting business events is even better.

Process and Information Architectures

Last day of the Building Business Capability conference, and I attended Louise Harris’ session on process and information architectures as the missing link to improving enterprise performance. She was on the panel on business versus IT architecture that I moderated yesterday, and had a lot of great insight into business architecture and enterprise architecture.

Today’s session highlighted how business processes and information are tightly interconnected – business processes create and maintain information, and information informs and guides business processes – but that different types of processes use information differently. This is a good distinction: looking at what she called “transactional” (structured) versus “creative” (case management) versus “social” (ad hoc) processes, where transactional processes require exact data, but the creative and social processes may require interpretation of a variety of information sources that may not be known at design time. She showed the Burlton Hexagon to illustrate how information is not just input to be processed into output, but also used to guide processes, inform decisions and measure process results.

This led to Harris’ definition of a business process architecture as “defining the business processes delivering results to stakeholders and supported by the organization/enterprise showing how they are related to each other and to the strategic goals of the organization/enterprise”. (whew) This includes four levels of process models:

  • Business capability models, also called business service models or end-to-end business process models, which form the top level of the work hierarchy that defines what business processes are, but not how they are performed. Louise related this to a classic EA standpoint as row 1 of Zachman (in column 2).
  • Business process models, which provide deeper decomposition of the end-to-end models that tie them to the KPIs/goals. This has the effect of building process governance into the architecture directly.
  • Business process flow models, showing the flow of business processes at the level of logistical flow, such as value chains or asset lifecycles, depending on the type of process.
  • Business process scope models (IGOEs, that is, Inputs, Guides, Outputs, Enablers), identifying the resources involved in the process, including information, people and systems.

She moved on to discuss information architecture, and its value in defining information assets as well as content and usage standards. This includes three models:

  • Information concept model, containing the top level of information related to the business, often organized into domains such as finance or HR. For example, in the finance domain we might have information subject areas (concepts) of invoicing, capital assets, budget, etc.
  • Information relationship model defines the relationships between the concepts identified in the information concept model, which can span different subject areas. This can look like an ERD, but the objects being connected are higher-level business objects rather than database objects: this makes it fairly tightly tied to the processes that those business objects undergo.
  • Information governance model, which defines what has to be done to maintain information integrity: governance structure, roles responsible, and policy and business standards.

Next was bringing together the process and information architectures, which is where IGOEs (business process scope models) come into play, since they align information subject areas with top-level business processes or business capabilities, allowing identification of gaps between process and information. This creates a framework for ensuring alignment at the design and operational levels, but does not map information subject areas to business functions since that is too dependent on the organizational structure.

Harris presented these models as being the business architecture, corresponding to rows 1 and 2 of Zachman (for example), which can then be used to provide context for the remainder of the enterprise architecture and into design. For example, once these models are established, the detailed process design can be integrated with logical data models.

She finished up by looking at how process and information architectures need to be developed in lock step, since business process ensures information quality, while information ensures process effectiveness.

Assessing BPM Maturity with @RogerBurlton

Roger Burlton held a joint session across several of the tracks on assessing BPM maturity, starting with the BPTrends pyramid of process maturity, which ranges from a wide base of the implementation level, to the middle tier of the business process level, up to the enterprise level that includes strategy and process architecture. He also showed his own “Burlton Hexagon” of the disciplines that form around business process and performance: policy and rules, human capital, enabling technologies, supporting infrastructure, organizational structure, and intent and strategy. His point is that not everyone is ready for maturity in all the areas that impact BPM (such as organizational structure), although they may be doing process transformation projects that require greater maturity in many of these other areas. At some level, these efforts must be able to be traced back to corporate strategy.

He presented a process maturity model based on the SEI capability maturity model, showing the following levels:

  1. Initial – zero process organizations
  2. Repeatable – departmental process improvement projects, some cross-functional process definition
  3. Defined – business processes delivered and measurements defined
  4. Managed – governance system implemented
  5. Optimizing – ongoing process improvement

Moving from level 2 to 3 is a pretty straightforward progression that you will see in many BPM “project to program” initiatives, but the jump to level 4 requires getting the high-level management on board and starting to make some cultural shifts. Organizations have to be ready to accept a certain level of change and maturity: in fact, organizational readiness will always constrain achievement of greater maturity, and may even end up getting the process maturity team in trouble.

He presented a worksheet for assessing your enterprise BPM gap, with several different factors on which you are intended to mark the current state, the desired future state, and the organizational tolerance (labeled as “how far will management let you go?”). The factors include enterprise context, value chain models, alignment of resources with business processes, process performance measurement system, direct management responsibility for value chains, and a process CoE. By marking the three states (as-is, to-be, and what we can get away with) for each of these as a starting point, you can see not just the spread between where you are and where you need to be, but also that extra dimension of organizational readiness for moving to a certain level of process maturity.
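
As a back-of-the-envelope version of that worksheet, here is a sketch that scores each factor on a 1-to-5 scale for as-is, to-be and management tolerance, and flags the factors where the ambition outruns what the organization will currently accept; the factor names follow his list, but the numbers are invented:

```python
# (factor, as_is, to_be, tolerance) on a 1-5 maturity scale; numbers are illustrative.
worksheet = [
    ("enterprise context",                          2, 4, 4),
    ("value chain models",                          1, 3, 2),
    ("resources aligned with business processes",   2, 4, 3),
    ("process performance measurement system",      1, 4, 2),
    ("management responsibility for value chains",  1, 3, 3),
    ("process centre of excellence",                2, 3, 3),
]

for factor, as_is, to_be, tolerance in worksheet:
    gap = to_be - as_is
    at_risk = to_be > tolerance   # ambition beyond what management will let you do
    flag = "  <- work on readiness first" if at_risk else ""
    print(f"{factor:45s} gap={gap} tolerance={tolerance}{flag}")
```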

Depending on whether your organization is ready to crawl, walk or run (as defined by your organizational readiness relative to the as-is and to-be states), there are different techniques for getting to the desired maturity state: for those with low organizational readiness, for example, you need to focus on increasing that first, then evolve the process capabilities together with readiness as it increases. Organizational readiness at the executive level manifests as understanding, willingness and ability to do their work differently: in many cases, executives don’t want to change how they do their work, although they do want to reap the benefits of increased process maturity.

He showed a more detailed spreadsheet of a maturity and readiness assessment for a large technology company, color-coded based on which factors contribute most to an increase in maturity, and which hold the most risk since they represent the biggest jump in maturity without necessarily having the readiness.

With such a focus on readiness, change management is definitely a big issue with increasing process maturity. In order to address this, there are a number of steps in a communication plan: understand the stakeholders’ concerns, determine the messages, identify the media for delivering the messages, identify timetables for communication and change, identify the messengers, create/identify change agents (who are sometimes the biggest detractors at the start), and deliver the message and handle the feedback. In looking at stakeholder concerns as part of the communication plan, you need to look at levels from informational (“what is it”), through personal (“how will it impact my job”), management (“how will the change happen”) and consequences (“what are the benefits”), and on into collaboration, where the buy-in really starts to happen.

Ultimately, you’re not trying to sell business process change (or BPMS) within the organization: you’re trying to sell improvements in business performance, particularly for processes that are currently painful. Focus on the business goals, and use a model of the customer experience to illustrate how process improvements can improve that experience and therefore help meet overall business goals.

Finishing up with the maturity model, if you’re at level 1 or 2, get an initial process steering committee and CoE in place for governance, and plan a simple process architecture targeted at change initiatives rather than governance. Get standards for tools and templates in place, and start promoting the process project successes via the CoE. This is really about getting some lightweight governance in place, showing some initial successes, and educating all stakeholders on what process can do for them.

If you’re at level 3 or 4, you need to be creating your robust process architecture in collaboration with the business, and socializing it across the enterprise. With the Process Council (steering committee) in place, make sure that the process stewards/owners report up to the council. Put process measurements in place, and ensure that the business is being managed relative to those KPIs. Expand process improvement out to the related areas across the enterprise architecture, and create tools and methods within the CoE that make it easy to plan, justify and execute process initiatives.