Architecture & Process: Rob Cloutier

The disadvantage of a small conference is that speakers tend to drop out more frequently than at large conferences, and this afternoon my first choice didn’t show. However, it had been a tough choice in any case, so I was happy to attend the session with Rob Cloutier of Stevens Institute of Technology on patterns in enterprise architecture.

The analysis of patterns has been around a long time in mathematics and engineering, but patterns are often difficult to capture and reuse in real life. There are some real business drivers for enterprise architecture patterns: much of the knowledge about systems is still gathered through artifacts, not patterns, making it difficult to reuse on other projects. Using patterns also tends to control complexity, since systems are based on standard patterns, and creates a common syntax and understanding for discussing projects. This reduces risk, since the patterns are well understood.

Patterns are not created, they’re mined from successful projects by determining the elements contributing to the success of the projects.

In an enterprise architecture sense, there’s the issue of the level of these patterns; Cloutier’s premise is that you define and use patterns relative to your scope within the architecture, so what you work with may be, for example, a system architecture pattern. He laid out a topology of patterns relative to the views within the Zachman framework: organization, business and mission patterns at the contextual/scope level; structure, role, requirements, activities and system process patterns at the conceptual/enterprise model level; system analysis, system design, system test, software architecture, software analysis, software requirements, hardware requirements, hardware design and operational patterns at the logical/system model level; and so on. He focused primarily on the five patterns at the enterprise model level.

He walked through an example with use cases, generalizing the purchase of a specific item to the purchase of any product: the actors, functions and data flow can be generalized, then adapted to any similar system or process by changing the names and dropping the pieces that aren’t relevant. He listed the characteristics of a pattern that need to be documented, and pointed out that it’s critical to model interfaces.

He showed the analysis that he had done of multiple command-and-control systems to create a command-and-control pattern containing four basic steps in an IDEF model — plan, detect, control and act — with the inputs, outputs, strategy and resources for each step. In fact, each of those steps was itself a pattern that could be used independently.
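
To make that concrete, here’s a minimal sketch (my illustration, not Cloutier’s model) of the four-step pattern as composable steps in Python; the surveillance scenario and all names are hypothetical. Adapting it to a similar system would mean changing the names and dropping the pieces that aren’t relevant:

```python
from abc import ABC, abstractmethod
from typing import Any


class Step(ABC):
    """One step of the command-and-control pattern; each step is
    itself a pattern that can be reused on its own."""

    @abstractmethod
    def execute(self, data: dict[str, Any]) -> dict[str, Any]:
        """Transform the step's inputs into its outputs."""


class Plan(Step):
    def execute(self, data):
        # Turn objectives into a strategy and a resource allocation.
        return {**data, "strategy": f"cover {data['area']}", "resources": ["sensor-1"]}


class Detect(Step):
    def execute(self, data):
        # Use the allocated resources to gather observations.
        return {**data, "contacts": ["unknown contact at grid 42"]}


class Control(Step):
    def execute(self, data):
        # Decide what to do about each detected contact.
        return {**data, "orders": [f"intercept {c}" for c in data["contacts"]]}


class Act(Step):
    def execute(self, data):
        # Carry out the orders and report the results.
        return {**data, "results": [f"executed: {o}" for o in data["orders"]]}


def run_pattern(steps: list[Step], data: dict[str, Any]) -> dict[str, Any]:
    """Chain the steps, feeding each step's outputs into the next."""
    for step in steps:
        data = step.execute(data)
    return data


print(run_pattern([Plan(), Detect(), Control(), Act()], {"area": "sector 7"})["results"])
```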

He had an interesting analogy between the electricity generation and distribution system and a service-oriented architecture: you can plug in a new device without notifying the provider, you might be served by multiple electricity producers without knowing it, your usage is metered so that your service provider can bill you, and the details of how the electricity is generated are generally not known to you.

Like any enterprise architecture initiative, the development of EA patterns is often considered overhead in organizations, so may never be done. You have to take the time up front to discover and document the pattern so that it can be reused later; it’s at the first point of reuse where you start to save money, and subsequent reuses where it really starts to pay off. Although many examples of software patterns exist, enterprise architecture patterns are much rarer: Cloutier is researching the creation of an EA pattern repository in his work at Stevens Institute of Technology. Ultimately, the general availability of enterprise architecture patterns that have been created by others — a formalization of best practices — is where the real benefits lie, and can help to foster the acceptance of EA in more organizations.

Architecture & Process: Woody Woods

There’s one huge problem with this conference: too many interesting sessions going on simultaneously, so I’m sure that I’m missing something good no matter which I pick.

I finished the morning breakout sessions with Woody Woods of SI International discussing transitioning enterprise architectures to service-oriented architectures. He started out by defining SOA, using the RUP definition: a conceptual description of the structure of a system in terms of its components and the services that they provide, without regard for the underlying implementation of the components, services and connections between components. There are a number of reasons for implementing SOA, starting with the trend towards object-oriented analysis and design, and including the loosely coupled nature that allows for easy interfaces between systems and between enterprises. Services are defined by two main standards (in the US, anyway): NCOW-RM, the DoD standard, and the OASIS reference model for SOA.

There are a number of steps in defining operational activities:

  • Establish vision and mission
  • Determine enterprise boundaries
  • Identify enterprise use cases
  • Detail enterprise use cases to create an activity diagram and sequence diagram
  • Develop logical data model
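
As a hypothetical illustration of that last step (my example, not from Woods’ talk): a small fragment of a logical data model for an order-processing enterprise might be sketched as Python dataclasses before being formalized in a modeling tool:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Customer:
    customer_id: str
    name: str


@dataclass
class OrderLine:
    product_code: str
    quantity: int
    unit_price: float


@dataclass
class Order:
    order_id: str
    customer: Customer                      # association found in the use cases
    placed_on: date
    lines: list[OrderLine] = field(default_factory=list)

    def total(self) -> float:
        # Derived value: computed from the order lines, not stored.
        return sum(line.quantity * line.unit_price for line in self.lines)
```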

To develop a service model, then, the following steps are taken (using RUP terminology):

  1. Identify the roles in a process.
  2. Identify the objects in a process, starting with triggers and results, and refining to include all objects, the initial actions and a complete action analysis, potentially creating a sequence diagram. Other types of process models could be used here instead, such as BPMN, although he didn’t mention that; they’re using Rational Rose so his talk is focused on RUP models.
  3. Identify boundary crossings, since every time an object crosses a boundary, it’s a potential service. By “boundary”, he means the boundaries between roles, that is, the lanes on a swimlane diagram; note that some boundary crossings can be ignored as artifacts of a two-dimensional modeling process, e.g., where an activity in lane 1 links to an activity in lane 3, the fact that lane 2 is in between them is irrelevant, and the boundary crossing is actually between lanes 1 and 3.
  4. Identify potential services at each boundary crossing, which implies encapsulation of the functionality of that service within a role; the flip side is that it also implies a lack of visibility between the roles, although that’s inherent in object orientation. Each boundary crossing doesn’t necessarily yield its own unique service, however; multiple boundary crossings may be combined into one service (e.g., two different roles requesting information from a third role would use the same service, not two different services). In this sense, a service is not necessarily an automated or system service; it could be a business service based on a manual process.
  5. Identify interfaces. Once the potential services have been defined, those that occur between systems represent system interfaces, which can in turn be implemented in code. At this point, data interfaces can be defined between the two systems to specify the properties of the service; a minimal sketch of such a service follows this list.
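
Here’s what steps 4 and 5 might look like in code (my sketch, not Woods’; the customer-records scenario is invented): one service shared by every role that crosses the same boundary, with a data interface spelling out exactly what crosses it:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class CustomerRecord:
    """The data interface: exactly what crosses the boundary."""
    customer_id: str
    name: str
    status: str


class CustomerInquiryService(ABC):
    """A service identified at a boundary crossing into the 'customer
    records' role; any role needing this information calls the same
    service rather than each getting its own."""

    @abstractmethod
    def lookup(self, customer_id: str) -> CustomerRecord: ...


class CustomerRecordsRole(CustomerInquiryService):
    """Encapsulates the functionality behind the service; callers have
    no visibility into how the lookup is actually performed."""

    def __init__(self) -> None:
        self._records = {"C-1": CustomerRecord("C-1", "Acme Corp", "active")}

    def lookup(self, customer_id: str) -> CustomerRecord:
        return self._records[customer_id]


# Both a sales role and a billing role cross the same boundary,
# so both call the one shared service:
service: CustomerInquiryService = CustomerRecordsRole()
print(service.lookup("C-1"))
```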

In this context, he’s considering the RUP models to be the “enterprise architecture” that is being transitioned to a SOA, but this does provide a good methodology for working from a business process to the set of services that need to be developed in order to effectively implement the processes. I’ll be speaking on a similar topic — driving service definitions from business processes — next week at TUCON, and it was interesting to see how Woods is doing this using the RUP models.

Architecture & Process keynote: Bill Curtis

The second part of the morning keynote was by Bill Curtis, who was involved in developing CMM and CMMI, and now is working on the Business Process Maturity Model (BPMM). I’ve seen quite a bit about BPMM at OMG functions, but this is the first time that I’ve heard Curtis speak about it.

He started by talking about the process/function matrix, where functions focus on the performance of skills within an area of expertise, and processes focus on the flow and transformation of information or material. In other words, functions are the silos/departments in organizations (e.g., marketing, engineering, sales, admin, supply chain, finance, customer service), and processes are the flows that cut across them (e.g., concept to retire, campaign to order, order to cash, procure to pay, incident to close). Unfortunately, as we all know, the biggest problems occur in the white space between the silos when the processes aren’t structured properly, and a small error at the beginning of the process causes increasingly large amounts of rework in other departments later in the process: items left off the bill of sale by sales created missing information in legal, incomplete specs in delivery, and incorrect invoices in finance. Rework of 30% is typical in many industries — an alarming figure that would never be tolerated in manufacturing, for example, where rework is measured and visible.

Curtis’ point is that low maturity organizations have a staggering amount of rework, causing incredibly inefficient processes, and they don’t even know about it because they’re not measuring it. As with many things, introspection breeds change. And just as Ted Lewis was talking about EA as not just being IT architecture, but a business-IT decision-making framework, Curtis talked about how the concepts of CMM in IT were expanded into BPMM, a measurement of both business and IT maturity relative to business processes.

In case you haven’t seen the BPMM, here are the five levels:

  • Level 1 – Initial: inconsistent management (I would have called this Level 0 for consistency with CMM, but maybe that was considered too depressing for business organizations). Curtis called the haphazard measures at this level “the march of 1000 spreadsheets”, which is pretty accurate.
  • Level 2 – Managed: work unit management, achieved through repeatable practices. Measurements in place tend to be localized status and operational reports that indicate whether local work is on target or not, allowing them to start to manage their commitments and capacity.
  • Level 3 – Standardized: process management based on standardized practices. Transitioning from level 2 to 3 requires tailoring guidelines, allowing the creation of standard processes while still allowing for exceptions: this tends to strip a lot of the complexity out of the processes, and makes it worth considering automation (automation of level 2 just paves the cowpaths). Measurements are now focused on process measures, usually based on reacting to thresholds, which allows both more accurate processes and more accurate cost-time-quality measures for better business planning.
  • Level 4 – Predictable: capability management through statistically controlled practices. Statistical measurements throughout a process — true process analytics — are now used to predict the outcome: not only are the measurements more sophisticated, but the process is sufficiently repeatable (low variance) that accurate prediction is possible. If you’re using Six Sigma, this is where the full set of tools and techniques is used (although some will be used at levels 2 and 3). This allows predictive models to be used both for predicting the results of work in progress, and for planning based on accurately estimated capabilities (a toy sketch of statistical control follows this list).
  
  • Level 5 – Innovative: innovation management through innovative practices. This is not just about innovation, but about the agility to implement that innovation. Measurements are used for what-if analysis to drive into proactive process experimentation and improvement.
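
Since statistical control is the crux of level 4, here’s the promised toy sketch (mine, not Curtis’) of what “statistically controlled practices” implies: knowing a process’ behavior well enough to set control limits and to flag cases where prediction is no longer valid. The cycle times are invented:

```python
from statistics import mean, stdev

# Hypothetical cycle times (days) from a standardized, repeatable process.
cycle_times = [4.1, 3.8, 4.3, 4.0, 3.9, 4.2, 4.1, 4.0, 3.7, 4.2]

centre = mean(cycle_times)
sigma = stdev(cycle_times)

# Shewhart-style control limits at +/- 3 standard deviations.
lower, upper = centre - 3 * sigma, centre + 3 * sigma
print(f"predicted: {centre:.2f} days, control limits: [{lower:.2f}, {upper:.2f}]")

# A new case outside the limits means the process has drifted and its
# outcomes can no longer be accurately predicted.
new_case = 5.6
if not lower <= new_case <= upper:
    print(f"out of control: {new_case} days warrants investigation")
```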

The top two levels really amount to innovative management practices, but the advantage of BPMM is that it provides a path to get from where we are now to those practices. Curtis also sees this as a migration from a chaotic clash of cultures to a cohesive culture of innovation.

This was a fabulous, fast-paced presentation that left me with a much deeper understanding of — and appreciation for — BPMM. He had some great slides with this, which will apparently be available on the Transformation & Innovation website later this week.

Now the hard part starts: trying to pick between a number of interesting-sounding breakout sessions.

Architecture & Process keynote: Edward Lewis

I like coming to smaller conferences once in a while: although I usually have to pay my own way to get here, the networking tends to be much more real than at a larger one. In the 10 minutes before the keynote started this morning, I chatted with five people who I know, and had one complete stranger introduce himself to me and tell me that he reads my blog. Also, since this conference is focused on enterprise architecture and process, there will be a few people around who appreciate why I call this blog Column 2 (think Zachman).

To get my complaints in early: no wifi (as Michael zur Muehlen was quick to tell me) and no tea. And since this is DC, we start at 8am. Other than that, everything’s good.

Edward Lewis, who was the first CIO of the US federal government, is giving the opening keynote, discussing transformation for the 21st century by tapping the hidden potential of enterprise architecture. He started with how many organizations are seriously out of date in how they operate, and gave a list of “great reasons” not to use enterprise architecture: too complicated, too much change, takes too long, costs too much. However, business wants results in terms of organizational and technological change, and Lewis sees four major areas of focus for visionary organizations:

  • Not just supply chains: global supply and demand chains that are more complex, synchronized and focused on all processes, activities and technologies
  • Strategic partner relationship management for seamless interoperability
  • High-performing organizations: people and culture are the most important factors
  • Achieving the “perfect order” of total business-wide integration versus the “perfect storm”

He stressed the importance of the global supply chain: not just what you’re doing, but what your supply chain partners are doing that contributes to the timely, accurate and continuous flow of your product/service, information and revenue.

He believes that enterprise architecture has a key role in achieving the breakthrough performance required for visionary organizations, by supporting strategic thinking and planning, and allowing the alignment and integration of people and culture with the business and IT strategies, organizational structure, processes and technologies.

EA is the strategic configuration of business and information resources, a type of strategic decision-making framework.

His ten critical success factors:

  • Strong executive leadership: support, commitment and involvement
  • Dynamic business and IT strategic planning environment: coordinated, integrated, flexible, long-term strategic focus
  • Formal business and IT infrastructure: framework, policies, standards, methods and tools
  • Integration architecture models: business plan, organization, data, applications, technology
  • Dynamic business and IT processes: macro and micro, internal and external
  • Use information and knowledge effectively: all aspects of data and information
  • Use information technology effectively: enterprise-wide, fully integrated
  • Use dynamic implementation and migration plan: well-defined projects
  • Have an innovative culture and access to skilled individuals: education and training, teamwork, culture and organizational learning
  • Have an effective change management environment: integrating change programs, high-performing organization

He boils this all down to three factors: people, culture and leadership.

How to run a great conference

About a month ago, someone I know who is organizing a conference, and who knows how many conferences I attend, asked me for my list of the top components that make a killer conference. My reply to him follows.

First of all, about the content:

  • It’s all about the content. You need to have good content, and that means both engaging and relevant. I’m not a big fan of "celebrity" speakers who know nothing about the conference subject; I’d rather see experts from the industry, customers, and the vendor than listen to a generic speech that I could have found on YouTube. At this year’s FASTforward conference, there were very few keynotes by FAST executives or employees; they put their own product information in breakout sessions so that I had a choice of whether to hear about the product or hear from customers.
  • Keep the speakers to 45 minutes, tops. 30 minutes works well for keynote addresses, and 45 minutes for the breakouts.
  • Schedule frequent breaks and hold them in an area that encourages conversation. The contrast is marked between conferences with fewer, shorter breaks, which feel rushed between sessions, and those with more breaks and much more networking between participants.
  • Publish your agenda online as soon as you possibly can. If possible, provide some sort of customized portal so that people can select the sessions that they want and see a personalized schedule (both Gartner and FAST did this recently), but that’s not completely necessary; however, an online schedule is critical. First of all, you’ll attract more people because they’ll be able to see the value of what you’re offering, and secondly, you’ll have more people stay right to the end of the conference if they see that there are valuable sessions on the last afternoon.
  • Publish slides from the sessions on a USB drive, not a CD, and distribute them at the beginning of the conference. Many people (including myself) have a smaller laptop that doesn’t include an onboard CD/DVD drive, and I never carry the external one on trips, so I can’t load a CD to look at the presentation materials. Sometimes, looking at the slides ahead of time helps me to select which session I want to attend. An alternative, if you have a personalized portal for attendees, is to post the presentation materials online so that they’re available before people even get to the conference.

Secondly, on logistics:

  • Wifi is a key component for any technology-related conference. Selecting a hotel that offers free wifi throughout the building, as opposed to just putting your own wifi in the conference area, is an important factor. If more conference organizers demand free wifi throughout the hotel — which you know costs the hotel nearly nothing — then more hotels will start to offer it. I realize that only 1 in 5 people will use their laptop in the conference, but those people will find it critical.
  • Along with wifi, provide power at the tables in the conference rooms. Wifi is no good when your battery runs out after a couple of hours, and I hate having to search around for an electrical outlet to get 15 minutes of charge instead of taking a break and networking with people. Again, only 1 in 5 will use it, but they’ll find it incredibly useful.
  • Offer a free luggage storage area in the conference center on the last day. I’ve seen this at several conferences, and all it really takes is one or two people to watch over what’s happening; you don’t need a formal bag check process. It saves a ton of time when it’s time to go if people can just pick up their bags there rather than having to wait in line at the hotel bell desk, which usually isn’t set up to handle that volume all at once.
  • Something that I saw at the FASTforward conference was free buses to the airport for conference attendees at the end of the last day — the conference chartered a couple of full-size coaches and shuttled us to the airport. Nice touch.
  • Another nice thing from the FASTforward organizers was a single sheet of paper in each conference package — printed on the conference letterhead but likely done at the last minute — that summarized all departure information. They listed the luggage storage service, the hotel checkout time and the airport bus transfers plus had some reminders for people who drove their own car to the event. A nice summary page.

Travel-crazy again

Having spent almost two months without getting on a plane, I’m back on the road for the next few weeks:

  • April 22-23: Washington DC for the Architecture and Process conference
  • April 29-May 2: San Francisco for TUCON, TIBCO’s user conference
  • May 5-7: Orlando for SAPPHIRE, SAP’s user conference
  • May 13-14: Chicago for BEA.Participate.08, BEA’s user conference

Expect to see lots of blogging from the conferences. If you’re attending any of them, ping me for a meet-up.

Disclosure: for the three vendor conferences, the respective vendors are paying my travel expenses to attend.

BPEL for Java Developers Webinar

Active Endpoints is hosting a webinar this Thursday on BPEL Basics for Java Developers, featuring Ron Romano, their principal consulting architect. From their information:

A high-level overview of BPEL and its importance in a web-services environment is presented, along with a brief discussion of the basic BPEL activities and how they relate to Java concepts. The following topics will be covered:

  • Parsing the Language of SOA with Java as a guide
  • Breaking out of the VM: evolving from RPC to Web Services
  • BPEL Activities – Receive, Reply, Invoke
  • BPEL Facilities – Fault Handling and Compensation (“Undo”)

The VP of Marketing assures me that he was allowed only two slides at the end of the presentation, and that otherwise this is focused on the technical goodies.

You need to register in advance at the link above.

IT360: Web 2.0 – Now and its future for business

Mike Fox of Brightlights, a recruiter serving small and medium-sized software companies, is giving a talk on Web 2.0 and business; I started out unsure of why a recruiter is talking to us about Web 2.0, and ended up pretty much of the same mind.

He started with some very basic concepts, like the original Web 2.0 meme map and the themes it contains, and discussed some well-known (and well-worn) Web 2.0 case studies, such as Procter & Gamble’s crowdsourcing of research, and Barack Obama’s online campaign presence. He asked questions like "has anyone heard of the $100 laptop?", "have you ever seen Slideshare?" and "has anyone heard of the ‘long tail’?", which I think you’d have to have been living under a rock not to know about, but maybe I’ve been drinking too much of the Koolaid.

He moved on to talk about Web 2.0 in the recruiting world, including the use of information aggregation tools such as ZoomInfo. He spent some time talking about using LinkedIn as both a recruiting tool and a job search tool, although I tend to think that this audience is probably pretty aware of what LinkedIn does. He mentioned some other recruitment-focused search sites like Jigsaw, and then stepped us through how to use Google Advanced Search for finding more information about companies and individuals — again, a bit basic, particularly (I think) for this audience.

He went through how to apply Web 2.0 thinking to business, based on the different goals and expectations for different generations of workers. He’s definitely been reading too much Tapscott. He did look at how to apply Web 2.0 to some fundamentals of adding value to a business, such as SaaS as both a cost reduction technique and a channel for offering products to customers.

He considers sending a monthly PDF newsletter by email and using Skype for long distance to be part of how he incorporates Web 2.0 into his business, which is a bit sad, although he does use a SaaS recruitment management system. There’s so much more that could be done with Web 2.0 in recruitment for a small recruiter like him: collaboration on resumes and job postings via Google Docs; blogging to show thought leadership in recruitment and engage the audience instead of a static monthly newsletter in PDF; publishing job postings listed on his site via RSS; hosting a discussion forum for job applicants.
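
As one example of how low the barrier actually is (my sketch, not something Fox showed): publishing job postings as an RSS feed takes little more than the Python standard library. The postings and URLs here are made up:

```python
import xml.etree.ElementTree as ET

# Hypothetical postings that would normally come from the recruiter's database.
postings = [
    ("Senior Java Developer", "https://example.com/jobs/101"),
    ("BPM Analyst", "https://example.com/jobs/102"),
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Current job postings"
ET.SubElement(channel, "link").text = "https://example.com/jobs"
ET.SubElement(channel, "description").text = "New roles as they are posted"

for title, link in postings:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link

# Serialize the feed; serving this document is all a subscriber needs.
print(ET.tostring(rss, encoding="unicode"))
```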

That’s it for my coverage of IT360; I have to get back to some real work for the rest of the day. The Google talk was definitely the highlight for me, although I did really enjoy making the Director General of Industry Canada squirm yesterday, too.

IT360: Matthew Glotzbach, Google Enterprise

I’m at the IT360 conference for a couple of hours this morning, mostly to hear Matthew Glotzbach, director of product management for Google Enterprise. It’s a sad commentary on the culture of Canadian IT conferences that this session is entitled "Meet Matthew Glotzbach of Google" in the conference guide, as if he doesn’t need to actually talk about anything, just show up here in the frozen north — we need to work on that "we’re not worthy" attitude. 🙂

Google’s Enterprise division includes, as you might expect, search applications such as site search and dedicated search appliances, but also includes Google Apps, which many of us now use for hosted email, calendaring and document collaboration.

Glotzbach’s actual presentation title is "Head in the Clouds", referring to cloud computing, or more properly in this context, software as a service. He made an analogy between SaaS applications and electricity, referencing Nicholas Carr’s book The Big Switch, talking about the shift from each factory generating its own power to the centralized generation of electricity that is now sold as a service on the power grid. Just as it took a cultural shift to move from each company having their own power generation facilities (and a VP of electricity who was intent on defending his turf), we’re now undergoing a cultural shift to move from each company managing all of their own IT services to using best-of-breed services at a much lower cost over the internet.

He discussed five things that cloud computing has given us:

  1. Democratization of information, giving anyone the chance to have their say in some way, from Wikipedia to Twitter to blogs. This is dependent upon and facilitated by standards, particularly simple, easy-to-use standards like RSS; in fact, all public APIs for Google Apps are RSS-based (see the short feed-reading sketch after this list). What IT can learn from this is to keep things simple, something that enterprise IT is not really known for. Cloud computing also allows for a much freer exchange of information between people who don’t speak the same language, through real-time translation capabilities that aren’t feasible on a desktop platform: for example, add the en2zh bot to your Google group chat so that you can have a text chat with one of you typing in English and the other typing in Mandarin Chinese.
  2. Economics of the new information supply chain. Cloud computing fundamentally changes the economics of enterprise IT: the massive scale of cloud-based storage (e.g., Google Apps, Amazon S3) and computing (e.g., Amazon EC2) drives down the cost so much that it’s almost ridiculous not to consider using some of that capacity for enterprise functionality. Of course, we’ve been seeing this manifested in consumer applications for a couple of years now, with practically unlimited storage offered in online email and photo storage applications, but more companies need to start making this part of their enterprise strategy to reduce costs on systems that are essential but not a competitive differentiator.
  3. Democratization of capabilities, allowing a developing nation to compete with a developed country, or a small business to compete with a major corporation, since they all have access to the same type of IT-related functionality through the cloud. In fact, those without legacy infrastructure are sometimes in a better position since they can start with a clean slate of new technology and become innovative collaborators. It’s also possible for any company, no matter how small, to get the necessary Googlejuice for a high ranking in search results if they have quality, targeted information on their site — as the cartoon says, on the internet no one knows you’re a dog.
  4. Consumer-driven innovation will set the pace, and will drive IT. The consumer market is much more Darwinian in nature: consumers have more choices, and are notoriously quick to switch to another vendor. Businesses tend not to do this because of the high costs involved in both the selection process and in switching between vendors; I’m not sure that Glotzbach is giving enough weight to the costs of switching corporate applications, since he seems to suggest that companies may adopt more of a consumer-like fickleness in their vendor relationships. As more companies adopt cloud computing, that will likely change as it becomes easier to switch providers.
  5. Barriers to adoption of cloud computing are falling away. The main challenges have been connectivity, security, offline access, reliability and user experience; all of these have either been fully addressed or are in the process of being addressed. My biggest issue is still connectivity/offline access (which are really two sides of the same coin) such that I use a lot of desktop applications so that I can work on planes, in hotels with flaky access, or at the Toronto convention centre that I’m at today. He had some interesting stats on security: 60% of corporate data resides on desktop and laptop computers, and 1 in 10 laptops are stolen within 12 months of purchase — the FBI lost 160 laptops in the last 44 months — such that corporate security professionals consider laptops to be one of the biggest security risks. In other words, the data is probably safer in the cloud than it is on corporate laptops.
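
On the RSS point in the first item: part of what makes such standards “simple and easy to use” is how little code it takes to consume them. A minimal, standard-library-only sketch (the feed URL is just a placeholder):

```python
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://www.example.com/feed.xml"  # any RSS 2.0 feed

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 puts each entry in an <item> element under <channel>.
for item in tree.iterfind("./channel/item"):
    print(item.findtext("title"), "-", item.findtext("link"))
```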

He finished up with a slide showing a list of well-known companies, all of which use Google Apps; alarmingly, I heard someone behind me say "just show me one Canadian company doing that". I’m not sure if that is an indication of the old-school nature of this conference’s attendees, or of Canadian IT businesspeople in general.

Glotzbach’s closing thoughts:

  • On-premise software is not going away
  • Most of the interesting innovation in software and technology over the next decade will be "in the cloud"
  • Market will have lots of competitors
  • Your new employees are the cloud generation, both welcoming and expecting that some big part of their social graph lives in the cloud
  • We (Google and other cloud providers) need to earn your trust.

Great presentation, and well worth braving the pouring rain to come out this morning.