Architecture & Process: Rob Cloutier

The disadvantage of a small conference is that speakers tend to drop out more frequently than at large conferences, and this afternoon my first choice didn’t show. However, it had been a tough choice in any case, so I was happy to attend the session with Rob Cloutier of Stevens Institute of Technology on patterns in enterprise architecture.

Pattern analysis has been around a long time in mathematics and engineering, but patterns are often difficult to capture and reuse in real life. There are some real business drivers for enterprise architecture patterns: much of the knowledge about systems is still gathered through artifacts, not patterns, making it difficult to reuse on other projects. Patterns also tend to control complexity, since systems are based on standard patterns, and they create a common syntax and understanding for discussing projects. This reduces risk, since the patterns are well understood.

Patterns are not created, they’re mined from successful projects by determining the elements contributing to the success of the projects.

In an enterprise architecture sense, there’s the issue of the level of these patterns; Cloutier’s premise is that you define and use patterns relative to your scope within the architecture, so a pattern may be, for example, a system architecture pattern. He laid out a topology of patterns relative to the views within the Zachman framework: organization, business and mission patterns at the contextual/scope level; structure, role, requirements, activities and system process patterns at the conceptual/enterprise model level; system analysis, system design, system test, software architecture, software analysis, software requirements, hardware requirements, hardware design and operational patterns at the logical/system model level; and so on. He focused primarily on the five patterns at the enterprise model level.
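
Purely as an illustration, here is how that topology might be captured in code as a simple lookup structure; the level and pattern names are from the talk, but the representation itself is my own sketch, not anything that Cloutier presented:

```python
# A sketch of the pattern topology, keyed by Zachman view/level.
# The structure is illustrative; the names come from Cloutier's talk.
PATTERN_TOPOLOGY = {
    "contextual/scope": ["organization", "business", "mission"],
    "conceptual/enterprise model": [
        "structure", "role", "requirements", "activities", "system processes",
    ],
    "logical/system model": [
        "system analysis", "system design", "system test",
        "software architecture", "software analysis", "software requirements",
        "hardware requirements", "hardware design", "operational",
    ],
}

def patterns_for_level(level: str) -> list[str]:
    """Return the pattern types in scope at a given architecture level."""
    return PATTERN_TOPOLOGY.get(level, [])

print(patterns_for_level("conceptual/enterprise model"))
```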

He walked through an example with use cases, generalizing the purchase of a specific item to the purchase of any product: the actors, functions and data flow can be generalized, then adapted to any similar system or process by changing the names and dropping the pieces that aren’t relevant. He listed the characteristics of a pattern that need to be documented, and pointed out that it’s critical to model interfaces.
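
To make that concrete, here is a minimal sketch of what “generalize, then rename and drop” might look like; the class, the field names and the bookstore example are hypothetical, not from his presentation:

```python
from dataclasses import dataclass

@dataclass
class UseCasePattern:
    """A generalized use case: actors, functions and data flows, all generic."""
    actors: list[str]
    functions: list[str]
    flows: list[tuple[str, str]]  # (source, target) data flows between elements

    def instantiate(self, renames: dict[str, str],
                    drop: frozenset = frozenset()) -> "UseCasePattern":
        """Adapt the pattern to a specific system: rename the generic
        elements, and drop the pieces that aren't relevant."""
        def name(x: str) -> str:
            return renames.get(x, x)
        return UseCasePattern(
            actors=[name(a) for a in self.actors if a not in drop],
            functions=[name(f) for f in self.functions if f not in drop],
            flows=[(name(a), name(b)) for a, b in self.flows
                   if a not in drop and b not in drop],
        )

# The generalized "purchase any product" pattern, adapted to a bookstore
# by renaming one function and dropping the pieces that don't apply.
purchase = UseCasePattern(
    actors=["customer", "fulfillment"],
    functions=["select product", "pay", "ship"],
    flows=[("customer", "select product"), ("customer", "pay"),
           ("fulfillment", "ship")],
)
bookstore = purchase.instantiate({"select product": "select book"},
                                 drop=frozenset({"ship", "fulfillment"}))
```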

He showed the analysis that he had done of multiple command-and-control systems to create a command-and-control pattern containing four basic steps in an IDEF model — plan, detect, control and act — with the input, outputs, strategy and resources for each step. In fact, each of those steps was itself a pattern that could be used independently.
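
Since each step has its own inputs, outputs, strategy and resources, the pattern decomposes naturally into a reusable structure. Here is a rough sketch of one way to represent it, with placeholder contents rather than the actual details of his IDEF model:

```python
from dataclasses import dataclass

@dataclass
class IDEFStep:
    """One step of the command-and-control pattern: what flows in and out,
    plus the strategy and resources that govern and perform it."""
    name: str
    inputs: list[str]
    outputs: list[str]
    strategy: list[str]   # guidance and controls on the step
    resources: list[str]  # mechanisms that carry the step out

# The four basic steps from the command-and-control pattern; the specific
# inputs, outputs, strategies and resources here are placeholders.
C2_PATTERN = [
    IDEFStep("plan", ["mission"], ["plan"], ["doctrine"], ["planners"]),
    IDEFStep("detect", ["sensor data"], ["situation picture"], ["plan"], ["sensors"]),
    IDEFStep("control", ["situation picture"], ["orders"], ["plan"], ["controllers"]),
    IDEFStep("act", ["orders"], ["effects"], ["rules of engagement"], ["units"]),
]
```

Because each step is itself a pattern, any one of these entries could be pulled out and reused on its own.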

He had an interesting analogy of the electricity generation and distribution system as a service-oriented architecture: you can plug in a new device without notifying the provider, you might be served by multiple electricity producers without knowing, your usage is metered so that your service provider can bill you for it, and the details of how electricity is generated are generally not known to you.

Like any enterprise architecture initiative, the development of EA patterns is often considered overhead in organizations, so it may never be done. You have to take the time up front to discover and document a pattern so that it can be reused later; it’s at the first point of reuse that you start to save money, and with subsequent reuses that it really starts to pay off. Although many examples of software patterns exist, enterprise architecture patterns are much rarer: Cloutier is researching the creation of an EA pattern repository in his work at Stevens Institute of Technology. Ultimately, the general availability of enterprise architecture patterns created by others — a formalization of best practices — is where the real benefits lie, and it can help to foster the acceptance of EA in more organizations.

Architecture & Process: Woody Woods

There’s one huge problem with this conference: too many interesting sessions going on simultaneously, so I’m sure that I’m missing something good no matter which I pick.

I finished the morning breakout sessions with Woody Woods of SI International discussing transitioning enterprise architectures to service-oriented architectures. He started out defining SOA, using the RUP definition: a conceptual description of the structure of a system in terms of its components and the services that they provide, without regard for the underlying implementation of the components, services and connections between components. There are a number of reasons for implementing SOA, starting with the trend towards object-oriented analysis and design, and including the loosely coupled nature that allows for easy interfaces between systems and between enterprises. Services are defined by two main standards (in the US, anyway): NCOW-RM, the DoD standard, and the OASIS reference model for SOA.

There are a number of steps in defining operational activities:

  • Establish vision and mission
  • Determine enterprise boundaries
  • Identify enterprise use cases
  • Detail enterprise use cases to create an activity diagram and sequence diagram
  • Develop logical data model

To develop a service model, the following steps are then taken (using RUP terminology); a code sketch of steps 3 and 4 follows the list:

  1. Identify the roles in a process.
  2. Identify the objects in a process, starting with triggers and results, and refining to include all objects, the initial actions and a complete action analysis, potentially creating a sequence diagram. Other types of process models could be used here instead, such as BPMN, although he didn’t mention that; they’re using Rational Rose so his talk is focused on RUP models.
  3. Identify boundary crossings, since every time an object crosses a boundary, it’s a potential service. By “boundary”, he means the boundaries between roles, that is, the lanes on a swimlane diagram; note that some boundary crossings can be ignored as artifacts of a two-dimensional modeling process, e.g., where an activity in lane 1 links to an activity in lane 3, the fact that lane 2 is in between them is irrelevant, and the boundary crossing is actually between lanes 1 and 3.
  4. Identify potential services at each boundary crossing, which implies encapsulation of the functionality of that service within a role; the flip side is that it also implies a lack of visibility between the roles, although that’s inherent in object orientation. Each boundary crossing doesn’t necessarily form its own unique service, however; multiple boundary crossings may be combined into one service (e.g., two different roles requesting information from a third role would use the same service, not two different services). In this sense, a service is not necessarily an automated or system service; it could be a business service based on a manual process.
  5. Identify interfaces. Once the potential services have been defined, those interfaces that occur between systems represent system interfaces, which can in turn be put into code. At this point, data interfaces can be defined between the two systems that specify the properties of the service.
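
Here is the promised sketch of steps 3 and 4, assuming a hypothetical representation of the swimlane model as a list of object flows between roles; the function and the example flows are mine, not code from the talk:

```python
from collections import defaultdict

def candidate_services(transitions):
    """Identify candidate services from a swimlane model.

    `transitions` is a list of (from_role, to_role, obj) tuples, one per
    object flow in the diagram. Every flow that crosses a role boundary is
    a potential service, and crossings that request the same object from
    the same providing role are merged into a single service (steps 3-4)."""
    services = defaultdict(set)
    for from_role, to_role, obj in transitions:
        if from_role != to_role:  # a boundary crossing
            # The receiving role encapsulates the functionality as a service.
            services[(to_role, obj)].add(from_role)
    return services

# Hypothetical example flows between swimlanes.
flows = [
    ("sales", "credit", "credit check request"),
    ("shipping", "credit", "credit check request"),
    ("sales", "sales", "draft quote"),  # same lane: not a crossing
]
for (provider, obj), consumers in candidate_services(flows).items():
    print(f"service: {provider} provides '{obj}' to {sorted(consumers)}")
```

On these example flows, the requests from sales and shipping merge into a single credit-check service with two consumers, while the same-lane flow is ignored, which matches the merging behavior described in step 4.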

In this context, he’s considering the RUP models to be the “enterprise architecture” that is being transitioned to a SOA, but this does provide a good methodology for working from a business process to the set of services that need to be developed in order to effectively implement the processes. I’ll be speaking on a similar topic — driving service definitions from business processes — next week at TUCON, and it was interesting to see how Woods is doing this using the RUP models.

Architecture & Process: Robert Shapiro

I met Robert Shapiro years ago, when I worked for FileNet and he was part of the impressive brain trust at Meta Software, but now he’s with Global 360 and here to talk to us about BPM and workforce management, which focuses on using analytics, simulation tools and optimization techniques together with a workforce scheduler.

He started with a quick overview of simulation in a BPMS environment, where a discrete event simulation is run based on scenarios that include the following:

  • A set of processes to be simulated
  • Incoming work (arrivals), both actual (from a BPMS or other operational system) and forecast
  • Resources, roles and shifts, including human, equipment and technology resources
  • Activity details, including the duration of each activity (likely a distribution) and the probability of each decision path.

The output of the simulation will show the staff requirements by role and time period, staff and equipment utilization, cycle times and SLAs, unprocessed work and bottlenecks, work arrival profile, and an activity summary.
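
As a rough illustration of what such a scenario holds, here is one way it might be represented; the class and field names are my own assumptions, not Global 360’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    duration_mean: float          # real tools use a full distribution; a
    duration_stdev: float         # mean/stdev pair stands in for one here
    next_steps: dict[str, float]  # decision paths and their probabilities

@dataclass
class Scenario:
    processes: dict[str, list[Activity]]  # the set of processes to simulate
    arrivals: list[tuple[float, str]]     # (time, process) incoming work,
                                          # actual (from a BPMS) or forecast
    resources: dict[str, dict]            # role -> shifts, headcount, equipment
```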

He then went on to discuss workforce management schedulers, which are used to assign detailed schedules to staff within an organization based on the work load and the resource characteristics (usually from an HR management system). Note that I’m not talking about assigning work within a BPMS here; this is more general scheduling technology for creating a schedule for each resource while trying to precisely match the work load. Holidays, vacations, union rules and other factors that determine who may do what are all taken into account.

One of the key inputs into a workforce scheduler, however, is exactly what’s output from a process simulator: workload demand on a time basis. By working with these technologies together, it’s possible to come up with an optimal workforce size and schedule as follows (there’s a code sketch of the loop after the list):

  • Gather analytics from a BPMS on work arrival patterns, resource utilization, work in progress and activity loads in order to extract workload demand (staff requirements by role and time period) for input to the scheduler.
  • Using the actual workload demand data and other data on individual staff characteristics, generate a best-fit schedule in the scheduler that matches workload and staff, minimizing understaffing and overstaffing.
  • Feed the best-fit resource schedule back into the process simulator, and create a scenario based on this schedule and the actual analytics from the BPMS. The simulation can create an updated version of the workload demand and the effect of the new workforce assignment.
  • The workload demand generated by the simulator is fed back into the scheduler, which generates a new best-fit resource schedule.
  • Rinse and repeat (or rather, simulate and schedule) until no further optimization is possible.
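
Here is the promised sketch of that loop; `simulate` and `best_fit_schedule` stand in for the process simulator and the workforce scheduler, and their names, signatures and the cost-based stopping rule are all assumptions for illustration:

```python
def optimize_workforce(scenario, staff, simulate, best_fit_schedule,
                       tolerance=0.01, max_rounds=20):
    """Alternate simulation and scheduling until staffing cost stops improving.

    `simulate` returns workload demand (staff requirements by role and time
    period); `best_fit_schedule` returns a resource schedule and its cost."""
    demand = simulate(scenario, schedule=None)  # seeded from BPMS analytics
    best_schedule, best_cost = None, float("inf")
    for _ in range(max_rounds):
        schedule, cost = best_fit_schedule(demand, staff)  # match staff to load
        if cost >= best_cost * (1 - tolerance):  # no meaningful improvement
            break
        best_schedule, best_cost = schedule, cost
        demand = simulate(scenario, schedule)  # re-simulate with the new schedule
    return best_schedule, best_cost
```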

This approach is most suited to well-structured business processes with repeatable patterns in work item arrivals, and a large total resource pool — Shapiro has seen a 10-20% reduction in staff costs when these techniques are applied. There’s a bit of the scary old-style BPR fear here about cutting jobs, but that’s the reality in many industries.

Architecture & Process: Michael zur Muehlen

I always like hearing Michael zur Muehlen presenting: he’s both knowledgeable and amusing, and I’m sure that his students at Stevens Institute of Technology must learn a lot from him. Today, he discussed what every enterprise architect needs to know about BPM, much of which was focused on process discovery because of the link between architecture and developing models.

He looked at two of the common problems in process discovery:

  • Process confabulation, where a person being interviewed about the existing business process “makes up” how the process works, not through any bad intentions but because they don’t understand parts of it and are a bit intimidated by the presence of a consultant or business analyst asking the questions. (This, by the way, is why I almost always use job shadowing for current process discovery, rather than interviews.)
  • Paper bias, where the automated process ends up mimicking the paper process since it’s difficult for the participants to envision how a process could change if paper were no longer a constraint.

There are a couple of different philosophies about process modeling, from only modeling processes that include 80% or more of the work in a department, to modeling everything in an enterprise process architecture. There are enterprise process architecture frameworks (what Michael calls an enterprise process map, or EPM) used by some organizations, where they have a template of the major processes within their company that can be easily applied to subsidiaries and departments. Not only does an EPM lay out the major categories of processes, it highlights the integration points with external processes. There are also some industry-specific process reference models that can be used in some cases, rather than developing one specifically for an organization.

Within a process architecture, there are multiple levels of granularity or abstraction, just as in any more generalized enterprise architecture. One organization uses 6 levels: business activities, process groupings, core processes, business process flows, operational process flows, and detailed process flows. The top three levels are focused on “what”, whereas the lower three levels are focused on “how”, and there are defined techniques for refining models from one level to another. Hence an enterprise process architecture includes the enterprise process map (defining the major process categories) and the set of process levels (created for each major process).

As with any other type of enterprise architecture, an enterprise process architecture allows for easier collaboration between business and IT because it provides a common framework and syntax for discussions, and becomes a decision-making framework particularly at the lower levels that discuss specific technology implementations.

He went on to talk about SOA and some of the obstacles that we’re seeing. He made a very funny analogy with today’s complex home theater systems: the back of the device (with all the input/output interfaces) is like what the developer sees; the front of the device (with input selection functions) is like what the architect sees; and the remote control with single-button control to set all of the appropriate settings to watch TiVo is what the end user actually needs.

Keep in mind that customers don’t care about your processes, they care about the value that those processes provide to them. Having processes streamlined, automated, agile and encapsulated as services allows you to offer innovative services quickly, since processes can be mashed up with other services in a variety of ways to provide value to customers. The final takeaway points:

  • Technology enables process change
  • Processes define services
  • Core processes become commodities
  • Efficient process management creates room for problem solving
  • Industrialized processes enable innovation

As always, you’ll be able to find Michael’s slides on SlideShare.

Architecture & Process keynote: Bill Curtis

The second part of the morning keynote was by Bill Curtis, who was involved in developing CMM and CMMI, and now is working on the Business Process Maturity Model (BPMM). I’ve seen quite a bit about BPMM at OMG functions, but this is the first time that I’ve heard Curtis speak about it.

He started by talking about the process/function matrix, where functions focus on the performance of skills within an area of expertise, and processes focus on the flow and transformation of information or material. In other words, functions are the silos/departments in organizations (e.g., marketing, engineering, sales, admin, supply chain, finance, customer service), and processes are the flows that cut across them (e.g., concept to retire, campaign to order, order to cash, procure to pay, incident to close). Unfortunately, as we all know, the biggest problems occur in the white space between the silos when the processes aren’t structured properly, and a small error at the beginning of the process causes increasingly large amounts of rework in other departments later in the process: items left off the bill of sale by sales created missing information in legal, incomplete specs in delivery, and incorrect invoices in finance. Rework of 30% is typical in many industries — an alarming figure that would never be tolerated in manufacturing, for example, where rework is measured and visible.

Curtis’ point is that low maturity organizations have a staggering amount of rework, causing incredibly inefficient processes, and they don’t even know about it because they’re not measuring it. As with many things, introspection breeds change. And just as Ted Lewis was talking about EA as not just IT architecture but a business-IT decision-making framework, Curtis talked about how the concepts of CMM in IT were expanded into BPMM, a measurement of both business and IT maturity relative to business processes.

In case you haven’t seen the BPMM, here are the five levels:

  • Level 1 – Initial: inconsistent management (I would have called this Level 0 for consistency with CMM, but maybe that was considered too depressing for business organizations). Curtis called the haphazard measures at this level “the march of 1000 spreadsheets”, which is pretty accurate.
  • Level 2 – Managed: work unit management, achieved through repeatable practices. Measurements in place tend to be localized status and operational reports that indicate whether local work is on target or not, allowing them to start to manage their commitments and capacity.
  • Level 3 – Standardized: process management based on standardized practices. Transitioning from level 2 to 3 requires tailoring guidelines, allowing the creation of standard processes while still allowing for exceptions: this tends to strip a lot of the complexity out of the processes, and makes it worth considering automation (automation of level 2 just paves the cowpaths). Measurements are now focused on process measures, usually based on reacting to thresholds, which allows both more accurate processes and more accurate cost-time-quality measures for better business planning.
  • Level 4 – Predictable: capability management through statistically controlled practices. Statistical measurements throughout a process — true process analytics — are now used to predict the outcome: not only are the measurements more sophisticated, but the process is sufficiently repeatable (low variance) that accurate prediction is possible. If you’re using Six Sigma, this is where the full set of tools and techniques are used (although some will be used at levels 2 and 3). This allows predictive models to be used both for predicting the results of work in progress, and for planning based on accurately estimated capabilities.
  • Level 5 – Innovative: innovation management through innovative practices. This is not just about innovation, but about the agility to implement that innovation. Measurements are used for what-if analysis to drive into proactive process experimentation and improvement.

The top two levels are really identical to innovative management practices, but the advantage of BPMM is that it provides a path to get from where we are now to these innovative practices. Curtis also sees this as a migration from a chaotic clash of cultures to a cohesive culture of innovation.

This was a fabulous, fast-paced presentation that left me with a much deeper understanding of — and appreciation for — BPMM. He had some great slides with this, which will apparently be available on the Transformation & Innovation website later this week.

Now the hard part starts: trying to pick between a number of interesting-sounding breakout sessions.

Architecture & Process keynote: Edward Lewis

I like coming to smaller conferences once in a while: although I usually have to pay my own way to get here, the networking tends to be much more real than at a larger one. In the 10 minutes before the keynote started this morning, I chatted with five people who I know, and had one complete stranger introduce himself to me and tell me that he reads my blog. Also, since this conference is focused on enterprise architecture and process, there will be a few people around who appreciate why I call this blog Column 2 (think Zachman).

To get my complaints in early: no wifi (as Michael zur Muehlen was quick to tell me) and no tea. And since this is DC, we start at 8am. Other than that, everything’s good.

Edward Lewis, who was the first CIO of the US federal government, gave the opening keynote, discussing transformation for the 21st century by tapping the hidden potential of enterprise architecture. He started with how many organizations are seriously out of date in how they operate, and gave a list of “great reasons” not to use enterprise architecture: too complicated, too much change, takes too long, costs too much. However, business wants results in terms of organizational and technological change, and Lewis sees four major areas of focus for visionary organizations:

  • Not just supply chains: global supply and demand chains that are more complex, synchronized and focused on all processes, activities and technologies
  • Strategic partner relationship management for seamless interoperability
  • High-performing organizations: people and culture are the most important factors
  • Achieving the “perfect order” of total business-wide integration versus the “perfect storm”

He stressed the importance of the global supply chain: not just what you’re doing, but what your supply chain partners are doing to contribute to the timely, accurate and continuous flow of your product/service, information and revenue.

He believes that enterprise architecture has a key role in achieving the breakthrough performance required for visionary organizations, by supporting strategic thinking and planning, and allowing the alignment and integration of people and culture with the business and IT strategies, organizational structure, processes and technologies.

EA is the strategic configuration of business and information resources, a type of strategic decision-making framework.

His ten critical success factors:

  • Strong executive leadership: support, commitment and involvement
  • Dynamic business and IT strategic planning environment: coordinated, integrated, flexible, long-term strategic focus
  • Formal business and IT infrastructure: framework, policies, standards, methods and tools
  • Integration architecture models: business plan, organization, data, applications, technology
  • Dynamic business and IT processes: macro and micro, internal and external
  • Use information and knowledge effectively: all aspects of data and information
  • Use information technology effectively: enterprise-wide, fully integrated
  • Use dynamic implementation and migration plan: well-defined projects
  • Have an innovative culture and access to skilled individuals: education and training, teamwork, culture and organizational learning
  • Have an effective change management environment: integrating change programs, high-performing organization

He boils this all down to three factors: people, culture and leadership.

How to run a great conference

About a month ago, someone I know who is organizing a conference, and who knows how many conferences I attend, asked me for my list of top components that make a killer conference. My reply to him follows.

First of all, about the content:

  • It’s all about the content. You need to have good content, and that means both engaging and relevant. I’m not a big fan of "celebrity" speakers who know nothing about the conference subject; I’d rather see experts from the industry, customers, and the vendor than listen to a generic speech that I could have found on YouTube. At this year’s FASTforward conference, there were very few keynotes by FAST executives or employees; they put their own product information in breakout sessions so that I had a choice of whether to hear about the product or hear from customers.
  • Keep the speakers to 45 minutes, tops. 30 minutes works well for keynote addresses, and 45 minutes for the breakouts.
  • Schedule frequent breaks and have them in an area that encourages conversation. The contrast is marked between conferences with fewer, shorter breaks, which feel rushed between sessions, and ones with more breaks and much more networking between participants.
  • Publish your agenda online as soon as you possibly can. If possible, provide some sort of customized portal so that people can select the sessions that they want and see a personalized schedule (both Gartner and FAST did this recently), but that’s not completely necessary; however, an online schedule is critical. First of all, you’ll attract more people because they’ll be able to see the value of what you’re offering, and secondly, you’ll have more people stay right to the end of the conference if they see that there are valuable sessions on the last afternoon.
  • Publish slides from the sessions on a USB drive, not a CD, and distribute them at the beginning of the conference. Many people (including myself) have a smaller laptop that doesn’t include an onboard CD/DVD drive, and I never carry the external one on trips, so I can’t load a CD to look at the presentation materials. Sometimes, looking at the slides ahead of time helps me to select which session I want to attend. An alternative, if you have a personalized portal for attendees, is to post the presentation materials online so that they’re available before people even get to the conference.

Secondly, on logistics:

  • Wifi is a key component for any technology-related conference. Selecting a hotel that offers free wifi throughout the building, as opposed to just putting your own wifi in the conference area, is an important factor. If more conference organizers demand free wifi throughout the hotel — which you know costs the hotel nearly nothing — then more hotels will start to offer it. I realize that only 1 in 5 people will use their laptops at the conference, but those people will find it critical.
  • Along with wifi, provide power at the tables in the conference rooms. Wifi is no good when your battery runs out after a couple of hours, and I hate having to search around for an electrical outlet to get 15 minutes of charge instead of taking a break and networking with people. Again, only 1 in 5 will use it, but they’ll find it incredibly useful.
  • Offer a free luggage storage area in the conference center on the last day. I’ve seen this at several conferences, and all it really takes is one or two people to watch over what’s happening; you don’t need a formal bag check process. It saves a ton of time for people when it’s time to go if they can just pick up their bag from there rather than wait in line to get it from the bell desk at the hotel, which usually isn’t set up to handle that volume all at once.
  • Something that I saw at the FASTforward conference was free buses to the airport for conference attendees at the end of the last day — the conference chartered a couple of full-size coaches and shuttled us to the airport. Nice touch.
  • Another nice thing from the FASTforward organizers was a single sheet of paper in each conference package — printed on the conference letterhead but likely done at the last minute — that summarized all departure information. They listed the luggage storage service, the hotel checkout time and the airport bus transfers plus had some reminders for people who drove their own car to the event. A nice summary page.

TIBCO seminar slides

If you were interested in the TIBCO seminar that I attended last week, you can download Paul Brown’s slides (PDF, no registration required). I particularly like his graphics showing the current model for today’s business processes with EAI and ETL tying things together (slides 6-7), then the slides showing how SOA and BPM refine that structure (slides 8, 12, 13, 14 and 15).

BPM/SOA events calendar reminder

As you can guess from my previous post, we’re in the middle of prime conference season, as everyone tries to get these in before the summer. That results in a lot of potential scheduling conflicts: today, I had a request to speak at a conference during a time that I’m already committed to another conference, which unfortunately I had to decline.

Although not a perfect solution, I created a public Google calendar last year in response to a very similar set of circumstances, and several other people are authors on the calendar, including Todd Biske, who adds most of the SOA events. It’s already being used by some event organizers to check for potential conflicts, as well as serving as a resource for attendees to locate event information. I have it embedded on this page, but you can also access it directly, add it to your Google Calendars, or subscribe to the RSS feed.

This is not, of course, my personal calendar: if I attended every event on here, I would be both superhuman and divorced.

If you’re organizing an event, you might want to check the calendar for conflicts before selecting the date, then get your event posted on here by contacting me with the details or a request to become an author on the calendar.

If you’re looking for an event, subscribe to the calendar in Google Calendar, then use the "Search My Calendars" function there to locate a specific event.

Travel-crazy again

Having spent almost two months without getting on a plane, I’m back on the road for the next few weeks:

  • April 22-23: Washington DC for the Architecture and Process conference
  • April 29-May 2: San Francisco for TUCON, TIBCO’s user conference
  • May 5-7: Orlando for SAPPHIRE, SAP’s user conference
  • May 13-14: Chicago for BEA.Participate.08, BEA’s user conference

Expect to see lots of blogging from the conferences. If you’re attending any of them, ping me for a meet-up.

Disclosure: for the three vendor conferences, the respective vendors are paying my travel expenses to attend.