Architecture & Process: Pat Cappelaere

For my last session — I have to leave for the airport around the time that the roundtables start — I sat in on Pat Cappelaere of Vightel discussing workflows, Identity 2.0 and delegated authority using REST.

He showed how lightweight protocols like Atom — rather than SOAP — can be used to allow the quick mashup of information in near real time. He spent quite a bit of time on the advantages of a RESTful approach (summary: it’s easier), and on the basic HTTP verbs (POST, GET, PUT, DELETE) used to manage web-based resources.
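
For readers who haven’t seen the RESTful style in action, here’s a minimal sketch (mine, not Cappelaere’s) of those four verbs applied to a hypothetical resource collection, using Python’s requests library:

```python
# A minimal sketch of the four HTTP verbs against a hypothetical resource
# collection; the endpoint URL and payloads are invented for illustration.
import requests

BASE = "https://example.com/api/features"  # hypothetical resource collection

# POST: create a new resource in the collection
resp = requests.post(BASE, json={"name": "flood-extent", "region": "Red River"})
feature_url = resp.headers["Location"]     # the server says where the new resource lives

# GET: retrieve the resource (or the whole collection as an Atom/JSON feed)
feature = requests.get(feature_url).json()

# PUT: replace the resource with an updated representation
requests.put(feature_url, json={**feature, "region": "Red River Basin"})

# DELETE: remove the resource
requests.delete(feature_url)
```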

Where identity comes into all this is that some resources that might contribute to a mashup could be behind some type of access control, and the source system can’t manage the identities of all of the people who might want to access the end product. Identity 2.0 allows for the delegation of authentication to a trusted provider, i.e., using your OpenID (from Yahoo or another provider) on other sites instead of creating a user account on each site directly. That looks after basic authentication, but there also needs to be some authorization or pre-approval of transactions, which is what OAuth was created for.
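
As a rough idea of what that delegation looks like in code, here’s a minimal sketch of the three-legged OAuth flow using the requests-oauthlib library; the URLs and keys are hypothetical, and this is my illustration rather than anything shown in the session:

```python
# A minimal sketch of three-legged OAuth 1.0 delegation with requests-oauthlib;
# all URLs and credentials are hypothetical.
from requests_oauthlib import OAuth1Session

CLIENT_KEY = "my-consumer-key"        # issued to the mashup by the resource provider
CLIENT_SECRET = "my-consumer-secret"

# 1. Ask the provider for a temporary request token.
oauth = OAuth1Session(CLIENT_KEY, client_secret=CLIENT_SECRET)
request_token = oauth.fetch_request_token("https://provider.example/oauth/request_token")

# 2. Send the user to the provider to approve the request (the "pre-approval"
#    step); the provider hands back a verifier on approval.
print(oauth.authorization_url("https://provider.example/oauth/authorize"))
verifier = input("Paste the verifier from the provider: ")

# 3. Trade the approved request token for an access token, then call the
#    protected resource on the user's behalf without ever seeing their password.
oauth = OAuth1Session(
    CLIENT_KEY,
    client_secret=CLIENT_SECRET,
    resource_owner_key=request_token["oauth_token"],
    resource_owner_secret=request_token["oauth_token_secret"],
    verifier=verifier,
)
oauth.fetch_access_token("https://provider.example/oauth/access_token")
protected_feed = oauth.get("https://provider.example/protected/atom-feed")
```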

He’s using the term workflow to mean (I believe) the steps to assemble and process various resources and services into a web application: a service orchestration of various resources on the web using lightweight protocols. To implement this, they’ve created a RESTful version of the workflow bindings defined by WfMC as WfXML.

This was interesting, but I’m not at all clear what it was doing at this conference.

Architecture & Process: Doug Reynolds

Doug Reynolds of AgilityPlus Solutions presented on critical success factors in a BPM implementation. I’ve known Doug a long time — back in 2000 when he was at Meta Software and I was at FileNet — and have had a lot of discussions with him over the years about the BPM implementations that we’ve seen, both the successes and the failures.

He talked about how BPM is similar to other performance improvement initiatives, but that there are some key differences, too. Any successful BPM project has several facets: solution, project management and change management. Breaking the solution component down further, it includes people, process and technology. He feels that process is the place to start with any solution, since people and technology will need to meet the needs of the process.

In order to talk about success factors, it’s important to look at why projects fail. With IT projects, we have a lot of failures to choose from for examination, since various analyst sources report failure levels of up to 70% in IT projects. In many cases, projects fail because of poor requirements or ill-defined scope; in BPM projects, the business process analysis drives the requirements, which in turn drive the solution, highlighting the critical nature of business process analysis.

He highlighted eight signs of a healthy BPM implementation, using the word semiotic as a mnemonic:

  • Stability. The system must be stable so that the business is able to rely on its availability.
  • Exploitation. You need to exploit the technology — put it to work and continually improve your usage of it — in order to get the benefit. Buying a system that was very successful for someone else doesn’t automatically confer their level of success onto you.
  • Management and leadership. You need executive sponsorship with vision and direction, but also have to consider the impact on middle management, who are heavily affected by changing processes in terms of how they manage their workforce.
  • Inertia. You need to actively change the way people work, or they’ll keep doing things the old way with the new system.
  • Ownership. Ownership of the solution needs to stay with the business, not transfer to IT, in order to have the focus stay on benefit rather than capability.
  • Transparency. Some aspects of work may appear to be less transparent — e.g., you can’t tell how much work there is to do by walking around and looking at the piles of paper — but the metrics gathered by a BPMS actually provide much more information than manual methods. This “Big Brother” view of individual performance can be threatening to some people, and their perceptions may need to be managed.
  • Integration. Integration with other systems can be a huge contributor to the benefits by facilitating automation and decoupling process from data and functional services. However, this can be too complex and cause project delays if too much integration is attempted at once. I completely agree with this, and usually advocate getting something simpler in production sooner, then adding in the integration bits as you go along.
  • Change management. Change management is key to bringing people, process and technology together successfully, and needs to be active throughout a project, not as a last-minute task just before deployment.

Doug encouraged interaction throughout the presentation by asking us to identify which two of these eight are the most foundational; his own picks were exploitation and inertia. Using the system the best way possible, and ensuring that change actually happens, are the two things that he most often sees missing in less-than-successful implementations, and they’re required before the rest can occur.

Architecture & Process: Jaakko Riihinen

Jaakko Riihinen, head of enterprise architecture for Nokia Siemens Networks, spoke about business process architecture: a deep dive into the details of one set of models that they use in their EA efforts. He started with definitions of architecture, process and abstract modeling, reinforcing that a presentation view of a model is just a view, not the entire model. In his definition of architecture, I especially like the analogy that he made of architecture being the external energy that keeps systems within an organization from evolving into chaos. In general, architectures tend to satisfy nonfunctional requirements, such as optimization of system economics.

One of the issues in modeling processes is the different types of models that may be used in different departments; the ultimate goal is to have a set of process models for the entire organization rather than having them constructed piecemeal. He divides processes into three types: transactional, creative (team collaboration on a work product), and community (dynamic, self-organizing); the modeling method that is the focus of this presentation addresses transactional processes.

He shows three elements of their process architecture:

  • A process integration model, showing the high-level view of how processes and work products interact. This is the functional design of the process architecture.
  • A process behavior model, which is a standard swimlane process model showing the detailed view of one of the process nodes from the process integration model. It differs from BPMN in that the work products are shown within the sequence flow rather than attached as artifacts, since the focus is on linking activities closely to what triggers them and what they produce.
  • Work instructions for performing one activity in a process.

Other characteristics of a process are also modeled:

  • Process instantiations, which can be scheduled or event-driven; where event-driven can be based on work product (e.g., inbound document) or an explicit event (e.g., timeout).
  • Execution constraints, either a free-running schedule (activities execute as soon as inputs are available) or an imposed schedule (e.g., periodic reporting)

The process integration model shows the instantiation methods for each process, as well as showing how multiple processes can provide input to another process in a variety of ways, including both informational and triggering.

All of this provides the notational background to discuss the real issue: normalization of process models and process architecture, using methods derived from classic systems methodologies such as control system theory and critical path analysis. The benefits of normalization include unambiguous definitions that are easier to understand, and better recognition and reuse of process patterns, but the real benefit is that this turns process architecture from an art to a science. There are four basic rules for process architecture normalization:

  • Structural integrity: closing alternative paths within parallel paths, and closing parallel paths within alternative paths (these are basic topology constraints from graph theory, and would be enforced by most process modeling tools anyway)
  • Functional cohesion: no disconnected activities (a toy check of this rule is sketched in code after this list)
  • Temporal cohesion: no synchronous processing by activities outside the process. (This implies that you would not use the BPMN approach of separate pools for separate organizations, since messaging between pools would not be allowed; instead, you would consider the other organization’s activities to be part of the process if your internal process needs to wait for its response before continuing.)
  • Modularity: activities or roles having different cardinalities belong to separate processes (this helps to determine the boundaries between processes, e.g., sets of activities that pertain to the entire company are usually in separate processes from those for individual business units); variance at the process level (when alternative paths in a process become sufficiently complex or encompass most of the process, create two variants of the process); variance at the integration model level; deployment details; and process components (subprocesses shared between processes)
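
As a concrete, if simplified, illustration of what one of these rules means in practice, here’s a toy check of the functional cohesion rule on a small process graph; the representation and the activity names are my own invention, not Riihinen’s notation:

```python
# A toy check (not from the presentation) of the "functional cohesion" rule --
# no disconnected activities -- on a process graph held as an adjacency list.
from collections import deque

def disconnected_activities(flows: dict[str, list[str]], start: str) -> set[str]:
    """Return the activities that can never be reached from the start activity."""
    reachable, queue = {start}, deque([start])
    while queue:
        for successor in flows.get(queue.popleft(), []):
            if successor not in reachable:
                reachable.add(successor)
                queue.append(successor)
    return set(flows) - reachable

process = {
    "receive order": ["check credit"],
    "check credit":  ["ship goods"],
    "ship goods":    [],
    "update stats":  [],   # nothing flows into or out of this, violating the rule
}
print(disconnected_activities(process, "receive order"))  # {'update stats'}
```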

Determining the boundaries of a single process may involve combining what are considered to be separate processes into one: we discussed the example of employee onboarding, which involves several departments but is really a single process. Looking at the triggers for activities and processes also helps to determine the boundaries: activities that asynchronously create work products that are consumed by other activities are usually in a separate process, for example, as are activities that are performed on different schedules.

They’re using ARIS, and have configured their own metamodel for the process integration model and behavior model. Riihinen is also interested in developing automated methods for normalizing process models.

His slides are incredibly dense with information, and fascinating, so I suggest that you check them out for more details on all of this. In particular, check out the slides that show examples of the four process normalization rules.

The conference is using SlideShare to publish (but not allow downloads of) all of the presentations here.

Architecture & Process: Robert Pillar

The first breakout session of the day was on connecting BPM, SOA and EA for enterprise transformation, with Robert Pillar of Microsoft. He talked about how compliance is the key driver for the coalition of BPM, SOA and EA, but that the coalition starts with holistic collaboration. There are barriers to this:

  • Organizational barriers: IT organizations and silos between EA, SOA and BPM groups
  • Cultural barriers: lack of understanding the business value, lack of understanding the concepts, and old-style mentality
  • Political barriers: resistance to change
  • Collaboration barriers: resistance to meetings and collaboration

Risks and benefits must be measured.

At this point, someone in the audience spoke up and said “we understand all this, can you just skip ahead to any solutions to these issues that you have to present?” Incredibly rude, and really put the speaker on the spot, but he had a point.

He had a summary slide on why to choose SOA:

  • It offers a focus on business processes and goals: it supports a customer-centric view of the business, and allows design of solutions that keep requirement changes (agility) in mind
  • It offers an iterative and incremental approach following EA and BPM initiatives: make change happen over time, and allow employees to learn about the concept of services
  • It offers a means to reap the benefits of existing investments in technology: reuse IT resources, and focus on business problems without being entangled in the technology

He sees EA and BPM as leading us to SOA, which is a valid point: if you do EA and BPM, you’ll definitely start to do SOA. However, I see many organizations starting with SOA in the absence of either EA or BPM.

Architecture & Process keynote: Tom Koulopoulos

The afternoon keynote was by Tom Koulopoulos, the well-known and well-respected consultant who founded Delphi Group and is currently also at Babson College’s think tank on business innovation. He’s worked extensively in the business process/knowledge management space, so I’ve tracked his excellent work and writings for years. He spoke to us today about innovation.

Innovation is change that creates value.

Innovation and invention are separate concepts. We’re surrounded by invention, and we have a hard time picking out innovation from the noise created by the relentless invention of (often useless) gadgets. Is this invention for the sake of invention, as Koulopoulos says, or is this the necessary process for eventually refining out a few good ideas?

Innovation is the collaboration between an idea and the marketplace. The market will not be able to predict the effect of true innovation, since there’s no context for understanding that: consider that the most extreme, ambitious prediction for the world market for mobile phones was 50 million handsets, a number that’s off by two orders of magnitude today.

Innovation really happens when behaviors start to change. It’s a process, it’s not about an invention. It’s an ecosystem where good ideas are captured, inventoried and reused to help them come to fruition. Today, many of the innovations are in business models, not the product or service.

Koulopoulos had seven lessons of innovation:

  1. It’s about the process, not the product.
  2. Build an innovation competency. You have to build a competency in innovation within your organization in order to encourage, support and reward it. Many organizations institutionalize the idea of being the incumbent rather than looking for ways to find innovation, putting up both cultural and logistical barriers.
  3. Separate the seeds from the weeds. You need to allow some ideas to grow in order to see which are which, and to allow for disruption to occur, but at some point you need to be able to tell the difference.
  4. Fail fast. Some portion of your time needs to be spent on activities where there is no expectation of success, since it gives you the breathing room for serendipitous innovation.
  5. Build for the unknown. Sometimes an innovation intended for one application becomes a success for something completely different, but sometimes it’s just a complete leap of faith to build something for an environment that’s impossible to assess until you get there.
  6. Challenge the conventional. New companies, or those moving into a new field, are often more innovative than those who are existing experts in the field. Hoover couldn’t have invented the Roomba.
  7. Abandon success. If you’re in a position of strength, take a risk and do something completely different that hasn’t been done before.

He told how his son uses the interactive toy builder on lego.com, collaboratively creating toys online with other site visitors, and how there has been a fundamental shift from wanting to protect our ideas to wanting to share our ideas. Not just with today’s kids, either: science methodology, as indicated by trends in Nobel prize winners over the past century, is shifting from individual effort to team effort.

He closed with a quote from Morpheus in The Matrix: “I didn’t say it would be easy, Neo. I just said it would be the truth.” So it is with innovation.

Architecture & Process: Rob Cloutier

The disadvantage of a small conference is that speakers tend to drop out more frequently than you’ll find in large conferences, and this afternoon my first choice didn’t show. However, it had been a tough choice in any case, so I was happy to attend the session with Rob Cloutier of Stevens Institute of Technology on patterns in enterprise architecture.

The analysis of patterns has been around a long time in mathematics and engineering, but patterns are often difficult to capture and reuse in real life. There are some real business drivers for enterprise architecture patterns: much of the knowledge about systems is still gathered through artifacts, not patterns, making it difficult to reuse on other projects. Using patterns also tends to control complexity, since systems are based on standard patterns, and it creates a common syntax and understanding for discussing projects. This has the impact of reducing risk, since the patterns are well understood.

Patterns are not created, they’re mined from successful projects by determining the elements contributing to the success of the projects.

In an enterprise architecture sense, there’s the issue of the level of these patterns; Cloutier’s premise is that you define and use patterns relative to your scope within the architecture, so what counts as a pattern for you may be, say, a system architecture pattern rather than an enterprise-wide one. He laid out a topology of patterns relative to the views within the Zachman framework: organization, business and mission patterns at the contextual/scope level; structure, role, requirements, activities and system processes at the conceptual/enterprise model level; system analysis, system design, system test, software architecture, software analysis, software requirements, hardware requirements, hardware design and operational patterns at the logical/system model level; and so on. He focused primarily on the five patterns in the enterprise model level.

He walked through an example with use cases, generalizing the purchase of a specific item to the purchase of any product: the actors, functions and data flow can be generalized, then adapted to any similar system or process by changing the names and dropping the pieces that aren’t relevant. He listed the characteristics of a pattern that need to be documented, and pointed out that it’s critical to model interfaces.

He showed the analysis that he had done of multiple command-and-control systems to create a command-and-control pattern containing four basic steps in an IDEF model — plan, detect, control and act — with the input, outputs, strategy and resources for each step. In fact, each of those steps was itself a pattern that could be used independently.

He had an interesting analogy of the electricity distribution system as a service-oriented architecture: you can plug in a new device without notifying the provider, you might be served by multiple electricity producers without knowing, your usage is metered so that your service provider can bill you for it, and the details of how the electricity is generated are generally not known to you.

Like any enterprise architecture initiative, the development of EA patterns is often considered overhead in organizations, so may never be done. You have to take the time up front to discover and document the pattern so that it can be reused later; it’s at the first point of reuse where you start to save money, and subsequent reuses where it really starts to pay off. Although many examples of software patterns exist, enterprise architecture patterns are much rarer: Cloutier is researching the creation of an EA pattern repository in his work at Stevens Institute of Technology. Ultimately, the general availability of enterprise architecture patterns that have been created by others — a formalization of best practices — is where the real benefits lie, and can help to foster the acceptance of EA in more organizations.

Architecture & Process: Woody Woods

There’s one huge problem with this conference: too many interesting sessions going on simultaneously, so I’m sure that I’m missing something good no matter which I pick.

I finished the morning breakout sessions with Woody Woods of SI International discussing transitioning enterprise architectures to service-oriented architectures. He started out defining SOA, using the RUP definition: a conceptual description of the structure of a system in terms of its components and the services that they provide, without regard for the underlying implementation of the components, services and connections between components. There are a number of reasons for implementing SOA, starting with the trend towards object-oriented analysis and design, and including the loosely coupled nature that allows for easy interfaces between systems and between enterprises. Services are defined by two main standards (in the US, anyway): NCOW-RM, the DoD standard, and the OASIS reference model for SOA.

There are a number of steps in defining operational activities:

  • Establish vision and mission
  • Determine enterprise boundaries
  • Identify enterprise use cases
  • Detail enterprise use cases to create an activity diagram and sequence diagram
  • Develop logical data model

To develop a service model, the following steps are then taken (using RUP terminology); a rough sketch of the boundary-crossing idea in step 3 follows the list:

  1. Identify the roles in a process.
  2. Identify the objects in a process, starting with triggers and results, and refining to include all objects, the initial actions and a complete action analysis, potentially creating a sequence diagram. Other types of process models could be used here instead, such as BPMN, although he didn’t mention that; they’re using Rational Rose so his talk is focused on RUP models.
  3. Identify boundary crossings, since every time an object crosses a boundary, it’s a potential service. By “boundary”, he means the boundaries between roles, that is, the lanes on a swimlane diagram; note that some boundary crossings can be ignored as artifacts of a two-dimensional modeling process, e.g., where an activity in lane 1 links to an activity in lane 3, the fact that lane 2 is in between them is irrelevant, and the boundary crossing is actually between lanes 1 and 3.
  4. Identify potential services at each boundary crossing, which implies encapsulation of the functionality of that service within a role; the flip side of that is that it also implies a lack of visibility between the roles, although that’s inherent in object orientation. Each boundary crossing doesn’t necessarily form its own unique services, however; multiple boundary crossings may be combined into services (e.g., two different roles requesting information from a third role would use the same service, not two different services). In this sense, a service is not necessarily an automated or system service; it could be a business service based on a manual process.
  5. Identify interfaces. Once the potential services have been defined, those interfaces that occur between systems represent system interfaces, which can in turn be put into code. At this point, data interfaces can be defined between the two systems that specify the properties of the service.
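
Here’s a rough sketch of what scanning a swimlane model for boundary crossings might look like in code; this is my own illustration, not something from Woods’ talk, and the roles, activities and objects are invented:

```python
# A rough sketch of identifying candidate services from lane-crossing flows in
# a swimlane model; all names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    source_activity: str
    target_activity: str
    source_role: str   # swimlane that the source activity sits in
    target_role: str   # swimlane that the target activity sits in
    obj: str           # the object (document, request, etc.) being passed

flows = [
    Flow("submit claim", "assess claim", "Customer", "Claims", "claim form"),
    Flow("assess claim", "approve payment", "Claims", "Finance", "assessment"),
    Flow("assess claim", "file assessment", "Claims", "Claims", "assessment"),
]

# Every flow that crosses a lane boundary is a candidate service; flows that
# stay inside one role are not. Keying on the receiving role and the object
# passed also collapses multiple crossings into a single shared service.
candidate_services = {
    (flow.target_role, flow.obj)
    for flow in flows
    if flow.source_role != flow.target_role
}
for providing_role, obj in sorted(candidate_services):
    print(f"{providing_role} offers a service that accepts: {obj}")
```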

In this context, he’s considering the RUP models to be the “enterprise architecture” that is being transitioned to a SOA, but this does provide a good methodology for working from a business process to the set of services that need to be developed in order to effectively implement the processes. I’ll be speaking on a similar topic — driving service definitions from business processes — next week at TUCON, and it was interesting to see how Woods is doing this using the RUP models.

Architecture & Process: Robert Shapiro

I met Robert Shapiro years ago, when I worked for FileNet and he was part of the impressive brain trust at Meta Software, but now he’s with Global 360 and here to talk to us about BPM and workforce management, which focuses on using analytics, simulation tools and optimization techniques together with a workforce scheduler.

He started with a quick overview of simulation in a BPMS environment, where a discrete event simulation is run based on scenarios that include the following:

  • A set of processes to be simulated
  • Incoming work (arrivals), both actual (from a BPMS or other operational system) and forecast
  • Resources, roles and shifts, including human, equipment and technology resources
  • Activity details, including the duration of each activity (likely a distribution) and the probability of each decision path.

The output of the simulation will show the staff requirements by role and time period, staff and equipment utilization, cycle times and SLAs, unprocessed work and bottlenecks, work arrival profile, and an activity summary.
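
Here’s a minimal sketch of that kind of scenario in code, using the open-source SimPy library rather than anything from Global 360; the process, staffing level, arrival rate and activity duration are all invented:

```python
# A minimal discrete event simulation sketch with SimPy; all parameters invented.
import random
import simpy

RANDOM_SEED, SIM_HOURS, STAFF, ARRIVALS_PER_HOUR = 42, 8, 3, 10
cycle_times = []

def work_item(env, clerks):
    arrived = env.now
    with clerks.request() as req:                        # wait for an available clerk
        yield req
        yield env.timeout(random.expovariate(1 / 0.25))  # activity takes ~15 minutes
    cycle_times.append(env.now - arrived)

def arrivals(env, clerks):
    while True:
        yield env.timeout(random.expovariate(ARRIVALS_PER_HOUR))
        env.process(work_item(env, clerks))

random.seed(RANDOM_SEED)
env = simpy.Environment()
clerks = simpy.Resource(env, capacity=STAFF)
env.process(arrivals(env, clerks))
env.run(until=SIM_HOURS)

print(f"processed {len(cycle_times)} items, "
      f"average cycle time {sum(cycle_times) / len(cycle_times):.2f} hours")
```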

He then went on to discuss workforce management schedulers, which are used to assign detailed schedules to staff within an organization based on the workload and the resource characteristics (usually from an HR management system). Note that I’m not talking about assigning work within a BPMS here; this is more general scheduling technology for creating a schedule for each resource while trying to precisely match the workload. Holidays, vacation, union rules and other factors that determine who may do what are all taken into account.

One of the key inputs into a workforce scheduler, however, is exactly what’s output from a process simulator: workload demand on a time basis. By using these technologies together, it’s possible to come up with an optimal workforce size and schedule as follows (sketched in code after the list):

  • Gather analytics from a BPMS on work arrival patterns, resource utilization, work in progress and activity loads in order to extract workload demand (staff requirements by role and time period) for input to the scheduler.
  • Using the actual workload demand data and other data on individual staff characteristics, generate a best-fit schedule in the scheduler that matches workload and staff, minimizing under and overstaffing.
  • Feed the best-fit resource schedule back into the process simulator, and create a scenario based on this schedule and the actual analytics from the BPMS. The simulation can create an updated version of the workload demand and the effect of the new workforce assignment.
  • The workload demand generated by the simulator is fed back into the scheduler, which generates a new best-fit resource schedule.
  • Rinse and repeat (or rather, simulate and schedule) until no further optimization is possible.
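
In schematic terms, the loop looks something like the sketch below; every function passed into it is hypothetical, standing in for the BPMS analytics extraction, the workforce scheduler and the process simulator respectively:

```python
# A schematic sketch of the simulate/schedule optimization loop; the three
# collaborators passed in (extract_workload_demand, best_fit_schedule,
# simulate) are hypothetical stand-ins for the real tools.

def optimize_workforce(extract_workload_demand, best_fit_schedule, simulate,
                       bpms_analytics, staff_profiles,
                       max_rounds=10, tolerance=0.01):
    # Step 1: workload demand (staff required per role per time period) from analytics.
    demand = extract_workload_demand(bpms_analytics)
    best_cost = float("inf")
    schedule = None

    for _ in range(max_rounds):
        # Step 2: the scheduler produces a best-fit schedule for that demand.
        schedule = best_fit_schedule(demand, staff_profiles)

        # Steps 3-4: the simulator replays the actual work against the proposed
        # schedule, returning an updated demand profile and a staffing-cost estimate.
        demand, cost = simulate(bpms_analytics, schedule)

        # Step 5: stop when another round no longer improves the cost materially.
        if best_cost - cost < tolerance * best_cost:
            break
        best_cost = cost

    return schedule
```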

This approach is most suited to well-structured business processes with repeatable patterns in work item arrivals and a large total resource pool — Shapiro has seen 10-20% reductions in staff costs when these techniques are applied. There’s a bit of the scary old-style BPR fear here about cutting jobs, but that’s the reality in many industries.

Architecture & Process: Michael zur Muehlen

I always like hearing Michael zur Muehlen presenting: he’s both knowledgeable and amusing, and I’m sure that his students at Stevens Institute of Technology must learn a lot from him. Today, he discussed what every enterprise architect needs to know about BPM, much of which was focused on process discovery because of the link between architecture and developing models.

He looked at two of the common problems in process discovery:

  • Process confabulation, where a person being interviewed about the existing business process “makes up” how the process works, not through any bad intentions but because they don’t understand parts of it and are a bit intimidated by the presence of a consultant or business analyst asking the questions. (This, by the way, is why I almost always use job shadowing for current process discovery, rather than interviews)
  • Paper bias, where the automated process ends up mimicking the paper process since it’s difficult for the participants to envision how a process could change if paper were no longer a constraint.

There are a couple of different philosophies about process modeling, from only modeling processes that include 80% or more of the work in a department, to modeling everything in an enterprise process architecture. Some organizations use an enterprise process architecture framework (what Michael calls an enterprise process map, or EPM), where they have a template of the major processes within their company that can be easily applied to subsidiaries and departments. Not only does an EPM lay out the major categories of processes, it highlights the integration points with external processes. There are also some industry-specific process reference models that can be used in some cases, rather than developing one specifically for an organization.

Within a process architecture, there are multiple levels of granularity or abstraction, just as in any more generalized enterprise architecture. One organization uses 6 levels: business activities, process groupings, core processes, business process flows, operational process flows, and detailed process flows. The top three levels are focused on “what”, whereas the lower three levels are focused on “how”, and there are defined techniques for refining models from one level to another. Hence an enterprise process architecture includes the enterprise process map (defining the major process categories) and the set of process levels (created for each major process).

As with any other type of enterprise architecture, an enterprise process architecture allows for easier collaboration between business and IT because it provides a common framework and syntax for discussions, and becomes a decision-making framework particularly at the lower levels that discuss specific technology implementations.

He went on to talk about SOA and some of the obstacles that we’re seeing. He made a very funny analogy with today’s complex home theater systems: the back of the device (with all the input/output interfaces) is like what the developer sees; the front of the device (with input selection functions) is like what the architect sees; and the remote control with single-button control to set all of the appropriate settings to watch TiVO is what the end user actually needs.

Keep in mind that customers don’t care about your processes, they care about the value that those processes provide to them. Having processes streamlined, automated, agile and encapsulated as services allows you to offer innovative services quickly, since processes can be mashed up with other services in a variety of ways to provide value to customers. The final takeaway points:

  • Technology enables process change
  • Processes define services
  • Core processes become commodities
  • Efficient process management creates room for problem solving
  • Industrialized processes enable innovation

As always, you’ll be able to find Michael’s slides on SlideShare.

Architecture & Process keynote: Bill Curtis

The second part of the morning keynote was by Bill Curtis, who was involved in developing CMM and CMMI, and now is working on the Business Process Maturity Model (BPMM). I’ve seen quite a bit about BPMM at OMG functions, but this is the first time that I’ve heard Curtis speak about it.

He started by talking about the process/function matrix, where functions focus on the performance of skills within an area of expertise, and processes focus on the flow and transformation of information or material. In other words, functions are the silos/departments in organizations (e.g., marketing, engineering, sales, admin, supply chain, finance, customer service), and processes are the flows that cut across them (e.g., concept to retire, campaign to order, order to cash, procure to pay, incident to close). Unfortunately, as we all know, the biggest problems occur in the white space between the silos when the processes aren’t structured properly, and a small error at the beginning of the process causes increasingly large amounts of rework in other departments later in the process: items left off the bill of sale by sales create missing information in legal, incomplete specs in delivery, and incorrect invoices in finance. Typical for many industries is 30% rework — an alarming figure that would never be tolerated in manufacturing, for example, where rework is measured and visible.

Curtis’ point is that low maturity organizations have a staggering amount of rework, causing incredibly inefficient processes, and they don’t even know about it because they’re not measuring it. As with many things, introspection breeds change. And just as Ted Lewis was talking about EA as not just being IT architecture, but a business-IT decision-making framework, Curtis talked about how the concepts of CMM in IT were expanded into BPMM, a measurement of both business and IT maturity relative to business processes.

In case you haven’t seen the BPMM, here are the five levels:

  • Level 1 – Initial: inconsistent management (I would have called this Level 0 for consistency with CMM, but maybe that was considered too depressing for business organizations). Curtis called the haphazard measures at this level “the march of 1000 spreadsheets”, which is pretty accurate.
  • Level 2 – Managed: work unit management, achieved through repeatable practices. Measurements in place tend to be localized status and operational reports that indicate whether local work is on target or not, allowing them to start to manage their commitments and capacity.
  • Level 3 – Standardized: process management based on standardized practices. Transitioning from level 2 to 3 requires tailoring guidelines, allowing the creation of standard processes while still allowing for exceptions: this tends to strip a lot of the complexity out of the processes, and makes it worth considering automation (automation of level 2 just paves the cowpaths). Measurements are now focused on process measures, usually based on reacting to thresholds, which allows both more accurate processes and more accurate cost-time-quality measures for better business planning.
  • Level 4 – Predictable: capability management through statistically controlled practices. Statistical measurements throughout a process — true process analytics — are now used to predict the outcome: not only are the measurements more sophisticated, but the process is sufficiently repeatable (low variance) that accurate prediction is possible. If you’re using Six Sigma, this is where the full set of tools and techniques are used (although some will be used at levels 2 and 3). This allows predictive models to be used both for predicting the results of work in progress, and for planning based on accurately estimated capabilities. (A tiny illustration of this kind of statistical control follows this list.)
  • Level 5 – Innovative: innovation management through innovative practices. This is not just about innovation, but about the agility to implement that innovation. Measurements are used for what-if analysis to drive into proactive process experimentation and improvement.
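
Not from Curtis’ talk, but for readers who haven’t seen statistical process control applied to business processes, here’s a tiny, invented illustration of what a statistically controlled measurement might look like: an individuals (I-MR) control chart over cycle times, with limits set from an in-control baseline:

```python
# A small, invented illustration of statistically controlled process measurement:
# an individuals (I-MR) control chart whose limits come from an in-control
# baseline, used to flag new work items with out-of-control cycle times.
from statistics import mean

baseline_hours = [4.1, 3.8, 4.4, 3.9, 4.2, 4.0, 4.3, 3.7, 4.1, 4.2]  # historical, in control
new_hours = [4.0, 4.4, 7.9, 3.9]                                      # newly measured instances

centre = mean(baseline_hours)
moving_ranges = [abs(b - a) for a, b in zip(baseline_hours, baseline_hours[1:])]
sigma_hat = mean(moving_ranges) / 1.128      # d2 constant for a moving range of two points
upper = centre + 3 * sigma_hat
lower = max(centre - 3 * sigma_hat, 0.0)

print(f"centre {centre:.2f}h, control limits [{lower:.2f}h, {upper:.2f}h]")
for i, t in enumerate(new_hours, start=1):
    status = "in control" if lower <= t <= upper else "out of control -- investigate"
    print(f"instance {i}: {t}h ({status})")
```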

The top two levels are really identical to innovative management practices, but the advantage of BPMM is that it provides a path to get from where we are now to these innovative practices. Curtis also sees this as a migration from a chaotic clash of cultures to a cohesive culture of innovation.

This was a fabulous, fast-paced presentation that left me with a much deeper understanding of — and appreciation for — BPMM. He had some great slides with this, which will apparently be available on the Transformation & Innovation website later this week.

Now the hard part starts: trying to pick between a number of interesting-sounding breakout sessions.