Ndu Emuchay, IBM, on standards in cloud computing #ogtoronto

Today had a track devoted mostly to cloud computing, and we started with Ndu Emuchay of IBM discussing the cloud computing landscape and the importance of standards. IBM is pretty innovative in many areas of new technology – I’ve blogged in the past about their Enterprise 2.0 efforts, and just this morning saw an article on what they’re doing with the internet of things, where they’re integrating sensors and real-time messaging, much of which would be cloud-based by the nature of the objects to which the sensors are attached.

He started with a list of both business and IT benefits for considering the cloud:

  • Cost savings
  • Employee and service mobility
  • Responsiveness and agility in new solutions
  • Allows IT to focus on their core competencies rather than running commodity infrastructure – as the discussion later in the presentation pointed out, this could result in reduced IT staff
  • Economies of scale
  • Flexibility of hybrid infrastructure spanning public and private platforms

From a business standpoint, users only care that systems are available when they need them, do what they want, and are secure; it doesn’t really matter if the servers are in-house or not, or if they own the software that they’re running.

Clouds can range from private, which are leased or owned by an enterprise, to community and public clouds; there’s also the concept of internal and external clouds, although I’m not sure that I agree that anything internal (on premise) could actually be considered cloud. The Jericho Forum (which appears to be part of the Open Group) publishes a paper describing their cloud cube model (direct PDF link).

There’s a big range of cloud-based services available now: people services (e.g., Amazon’s Mechanical Turk), business services (e.g., business process outsourcing), application services (e.g., Google Apps), platform services and infrastructure services (e.g., Amazon S3); it’s important to determine what level of services you want to include within your architecture, and the risks and benefits associated with each. This is a godsend for small enterprises like my one-person firm – I use Google Apps to host my email/calendar/contacts, synchronized to Outlook on my desktop, and use Amazon S3 for secure daily backup – but we’re starting to see larger organizations put tens of thousands of users on Google Apps to replace their Exchange servers, and greatly reduce their costs without compromising functionality or security.
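For what it’s worth, the changed-file detection behind a daily S3 backup like mine is simple enough to sketch. This is illustrative Python rather than my actual setup – the function and key names are invented – and you’d hand the resulting file list to an S3 client such as boto3’s put_object:

```python
import hashlib
import os
import datetime

def file_digest(path):
    """SHA-256 of a file, read in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_key(path, root):
    """Build an S3-style object key: a dated prefix plus the file's relative path."""
    rel = os.path.relpath(path, root).replace(os.sep, "/")
    return "backup/%s/%s" % (datetime.date.today().isoformat(), rel)

def changed_files(root, manifest):
    """Compare current digests against the last run's manifest (a dict of
    path -> digest); return only the files that need uploading, and update
    the manifest in place for the next run."""
    to_upload = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            digest = file_digest(path)
            if manifest.get(path) != digest:
                to_upload.append(path)
                manifest[path] = digest
    return to_upload
```

Run nightly from a scheduler, only files that actually changed get pushed to the bucket, which keeps the daily upload small.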

Emuchay presented a cloud computing taxonomy from a paper on cloud computing use cases (direct PDF link) that includes hosting, consumption and development as the three main categories of participants.

There’s a working group, organized using a Google Group, that developed this paper and taxonomy, so join in there if you feel that you can contribute to the efforts.

As he points out, many inhibitors to cloud adoption can be addressed through security, interoperability, integration and portability standards. Interoperability is the ability for loose coupling or data exchange between systems that appear as black boxes to each other; integration combines components or systems into an overall system; and portability considers the ease of moving components and data from one system to another, such as when switching cloud providers. These standards impact the five different cloud usage models: end user to cloud; enterprise to cloud to end user; enterprise to cloud to enterprise (interoperability); enterprise to cloud (integration); and enterprise to cloud (portability). He walked through the different types of standards required for each of these use cases, highlighting where there were accepted standards and some of the challenges still to be resolved. It’s clear that open standards play a critical role in cloud adoption.

Alain Perry, Treasury Board Secretariat, on EA in Canadian government #ogtoronto

Also on Monday, we heard from Alain Perry from the CIO branch of the Treasury Board Secretariat on how enterprise architecture, supported by the use of TOGAF, is making its way into the Canadian government at all levels. The EA community of practice is supporting the use of TOGAF 9 in order to enable a common vocabulary and taxonomy for creating and using architectural artifacts, and to create reusable reference models, policy instruments and standards to support the architectural work.

Using the classic “city planning” analogy that we hear so often in EA discussions, Perry showed how they break down their levels of detail into strategic/enterprise architecture (“city scapes”) for vision and principles required for long-term direction, program/segment architecture (“district designs”) for specific programs and portfolios to provide context for solution architectures, and solution architecture (“detailed building design and specs”) for a specific project.

They used an adaptation of TOGAF to create the common content model for each of those three levels: architecture vision, architecture requirements, business architecture, architecture (including data and application architecture), technology architecture, and architecture realization.

They’ve created the Canadian Governments Reference Model (CGRM) that allows different levels of government to share standards, tools and capabilities: in Canada, that includes at least federal, provincial and municipal, plus sometimes regional, all with their own political agendas, so this is no mean feat.

Allen Brown of Open Group on their internal use of TOGAF #ogtoronto

I was taking notes in a paper notebook at the conference on Monday, and only now have had time to review them and write up a summary.

The general sessions opened with Allen Brown of the Open Group discussing their own use of TOGAF in architecting their internal systems. Since they’re making a push to have TOGAF used in more small and medium businesses, using themselves as a test case was an excellent way to make their point. This seemed to be primarily a systems architecture exercise, responding to threats such as operational risk and security; however, the problem that they had was primarily that of systems, not of the general organizational strategy.

So far, as part of the overall project, they’ve replaced their obsolete financial system, outsourced credit card handling, moved to hosted offsite servers, added a CRM system, and are adding a CMS to reduce their dependency on a webmaster for making website updates. These are all great moves forward for them, but the interesting part was how they approached architecture: they involved everyone in the organization and documented the architecture on the intranet, so that everyone was aware of what was happening and the impacts that it would have. Since this included business priorities and constraints, the non-technical people could contribute to and validate the scenarios, use cases and processes in the business architecture.

They developed a robust set of architecture artifacts, documenting the business, applications, data and technology architectures, then identified opportunities and solutions as part of an evaluation report that fed into the eventual system implementations.

This was a great case study, since it showed how to incorporate architecture into a small organization without a lot of resources. They had no budget to hire new staff or invest in new tools, and had to deal with the reality of revenue-generating work taking precedence over the architecture activities. They weren’t able to create every possible artifact, so were forced to focus on the ones most critical to success (a technique that could be used well in larger, better-funded organizations). Yet they still experienced a great deal of success, since TOGAF forced them to think at all levels rather than just getting mired in the technical details of any of the specific system upgrades: this resulted in appropriate outsourcing and encouraged reuse. At the end of the day, Brown stated that they could not have achieved half of what they have without TOGAF.

Heather Kreger, IBM, on SOA standards

It’s impossible for me to pass up a standards discussion (how sad is that?), so I switched from the business analysis stream to the SOA stream for Heather Kreger’s discussion of SOA standards at an architectural level. OASIS, the Open Group and OMG got together to talk about some of the overlapping standards impacting this: they branded the process as “SOA harmonization” and even wrote a paper about it, Navigating the SOA Open Standards Landscape Around Architecture (direct PDF link).

As Kreger points out, there are differences between the different groups’ standards, but they’re not fatal. For example, both the Open Group and OASIS have SOA reference architectures; the Open Group one is more about implementation, but there’s nothing that’s completely contradictory about them. Similarly, there are SOA governance standards from both the Open Group and OASIS.

They created a continuum of reference architectures, from the most abstract conceptual SOA reference architectures through generic reference architectures to SOA solution architectures.

The biggest difference in the standards is that of viewpoint: the standards are written based on what the author organizations are trying to do with them, but contain a lot of common concepts. For example, the Open Group tends to focus on how you build something within your own organization, whereas OASIS looks more at cross-organization orchestration. In some cases, specifications can be complementary (not complimentary as stated in the presentation 🙂 ), as we see with SoaML being used with any of the reference architectures.

Good summary, and I’ll take time to review the paper later.

Ron Tolido, Capgemini, on (or not on) open BA methodology #ogtoronto

Ron Tolido of Capgemini presented on the case for an open methodology for business analysis. There’s a big component of standardization here, particularly a shared language (terminology, not necessarily natural language) to enable collaboration. He considers the core competencies of business analysis to be information analysis, subject matter expertise and business process management; there’s also an aspect of consultancy around managing change.

In spite of his categorization of BPM as an “IT tool”, he highlighted the importance of process in business analysis today, and how process orchestration (although he didn’t call it that) and business rules can create applications much faster than before. This allows business rules to be changed on the fly in order to tune the business processes, and the creation of configurable user experiences by non-IT people.
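Tolido’s point about changing business rules on the fly comes down to keeping the rules as data rather than compiled code, so that tuning a process doesn’t require redeploying the application. A minimal sketch of the pattern – the rule names, conditions and actions here are entirely my own invented examples:

```python
# Each rule is plain data: a condition on the case plus a resulting action.
# Because the rules live in data (a table, a config store, a rules repository)
# rather than in code, they can be swapped at runtime to tune the process.
approval_rules = [
    {"when": lambda order: order["amount"] > 1000, "action": "manager_approval"},
    {"when": lambda order: order["customer_type"] == "gold", "action": "auto_approve"},
]

def evaluate(order, rules):
    """Return the action of the first rule whose condition matches, else a default."""
    for rule in rules:
        if rule["when"](order):
            return rule["action"]
    return "standard_review"
```

Swapping in a different `approval_rules` list changes the process behavior immediately; a real business rules engine adds authoring, versioning and audit on top of essentially this evaluation loop.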

Echoing the confusion of the previous presentation on the IIBA, Tolido stated that business architecture and business analysis are different, although business analysts might be involved in business architecture work without being business architects themselves. It appears that he’s making the distinction of business analysts as project-specific resources, and business architects as enterprise resources, although it’s not clear what functions or capabilities are different. There was a lot of audience interest in this issue; there appears to be the will to combine the disciplines in some way, but it’s just not there yet. I’m not sure that there’s sufficient common understanding of the term “architecture” as it pertains to non-technical disciplines.

Kathleen Barret, IIBA, on the Business Analyst role #ogtoronto

Kathleen Barret of the International Institute of Business Analysis discussed how the role of Business Analyst moved from assistant Project Manager and scribe to the focal point for understanding and articulating the business need for a solution or change.

She started by talking about why there is such a strong case now for business analysts. Organizations have been designing solutions for years without proper business analysis, resulting in a spotty success rate; in today’s economy, however, no one can afford the misses any more, prompting the drive towards having solid business analysis built in to any project. There’s also a much stronger focus now on business rather than technology in most organizations, with business strategy driving the big projects.

IIBA was created 5 years ago to help support business analysis practices and create standards for practice and certification. Its goals are to develop awareness and recognition of the BA role, and to develop the BA Body of Knowledge (BABOK) to support BAs.

Business analysis is about understanding the organization: why it exists, its goals and objectives, how it accomplishes those objectives, how it works, and how it needs to change to meet its objectives and future needs. As Barret points out, there’s a big overlap with what business architects do (she posits that they are now the same, or that an enterprise business analyst is the same as a business architect – I’m not sure that IIBA has a well-thought-out position on this yet); the difference may be purely a matter of scope, or of general analysis versus project-specific analysis.

The BA works as a liaison amongst stakeholders in order to elicit, analyze, communicate and validate requirements for changes to processes, policies and systems. This could be at the enterprise level – probably what most of us would refer to as a business architect – or at the project level. This can be a subject matter expert or domain practitioner (which I don’t consider a true BA in many cases) or a consultative BA who works with SMEs to elicit business requirements. In a large, complex organization, there may be several types of BAs: there is a need for both specialists (in terms of business vertical, methodologies and technologies) and generalists.

IIBA will continue to extend the BABOK, and will be releasing a competency model by the end of 2009 to help BAs identify gaps in their capabilities and to help organizations assess current needs and capabilities. In my experience, “business analyst” is one of the most over-used and misused terms in business today, so anything that IIBA does to help clarify the role and expected capabilities has to help the situation.

David Foote on EA careers #ogtoronto

Foote presented some interesting – but for this primarily Canadian audience, not completely relevant – statistics on US unemployment; he added the comment “I assume it’s the same in Canada”. It would have been good if he had actually taken five minutes to research our job market before presenting here, because there are some significant differences, although many similarities. He followed this with the rather obvious observation that there is always a shortage of talented people with specific skills, and that in an economic downturn, companies are looking to hire quality rather than quantity.

He brought forward recent Gartner research showing that more than half of EA programs will be stopped in 2009, and that the remaining ones will embrace cloud computing but will struggle with framework and information management problems. Foote pooh-poohed this, saying that it was not based in fact and that there are only a handful of good analysts out there. The implication is, of course, that he’s one of those good analysts. 🙂

There are some pretty interesting numbers about pay scales for architects in his research, which was gathered from cities across the US and Canada: although there are a lot of people out of work and salaries are going down, architects with certain certifications are holding steady or even increasing their worth. Whereas the value (presumably measured by pay) of web developers has dropped by over 28% in the past 12 months, architects and project managers – which were, inexplicably, combined – increased by over 4%. Topping the list are Check Point Certified Master Architects and Microsoft Certified Architects, each of which increased in value by 20% in the past 12 months. There are some non-certified skills gaining in value, too: I’m not at all surprised to see process leading the pack at an 8.3% increase, since an economic downturn favors process improvement projects.

He showed us some detailed stats on pay scales for architects across a range of US and Canadian cities, and summarized for each country: enterprise architects, data architects, information architects, senior applications architects, applications architects, security architects and directors of architecture. He presents these (and therefore architecture in general) as purely IT roles: this is all in the context of IT pay scales, and contains nothing on business architects.

The presentation finished with some of the barriers to enterprise architecture:

  • Many EAs live in a siloed world, funded by business silo, and are expected to bridge silos with “nickel and dime” funding
  • There is a disconnect between IT leadership and EA governance, with many CIOs focused on short-term operational demands versus long-term optimization
  • EAs will have to go through various stages of maturity before their job potential is fulfilled
  • The EA role is not defined well enough to model and operate a successful EA organization

This last bit was interesting, but didn’t really flow from the remainder of the presentation, which presented the IT architect job and pay survey results.

Minaz Sarangi, TD Bank, on EA in financial services #ogtoronto

I still haven’t posted my notes from yesterday – I made the mistake of not bringing my laptop yesterday, and my notes are trapped in my paper notebook until I get a chance to review and transcribe them.

I only caught the last 10 minutes of Minaz Sarangi’s presentation due to a meeting elsewhere, but was able to download the presentation and get caught up in time for the Q&A. TD has grown significantly through mergers and acquisitions, including the major acquisition of Canada Trust in 2000, and a big part of their enterprise architecture efforts are to standardize across the various IT infrastructures that exist in the subsidiaries in order to simplify the platforms. This is not fundamentally different from most large financial services, although in many cases, the acquired companies are run as siloed business units, leading to inefficiencies and inability to provide a complete customer view across all product lines.

TD, in developing a true enterprise architecture strategy that spans all the business units, is having the business strategy guide their EA efforts and their enterprise technology strategies. They’ve developed architecture domain practices for business, applications, data, security and technology, and define prescriptive architectures in order to align solutions delivery with their enterprise standards. All of the technology building blocks are defined in the context of the EA, and each has both a strategy and a reference architecture in order to articulate the current and future state, the roadmap, current and future capabilities required, deployed solutions associated with the capabilities, reference implementations, and technology standards.

From a business standpoint, the key goals are to make employees more productive and to enhance customer experience, but there are also issues of risk reduction, security, system availability and cost reductions due to standardized technology platforms.

TD is achieving this through what they call “EA simplification”:

  • Reduction of operational footprint for greater agility, through application rationalization and enterprise shared services
  • Consolidation and standardization of core technology platform for scalability
  • Automation of repetitive architectural and engineering processes for sustainability, for risk reduction and process optimization

They’re starting to see some success with this approach, but as can be expected from such a diverse organization, it’s slow going. They have some enterprise shared services, including content management, and are getting the sponsorship and commitment in place that they require to push forward.

I’m of two minds about programs like this: I certainly see the need for technology standardization within an organization, but it seems like some of these massive EA efforts serve to just extend the delays for new technology implementation and create a significant set of rapids in advance of the still very waterfall methodologies.

The Open Group Conference

I was already planning to attend the Open Group Conference in Toronto next week to catch up on what’s happening in the enterprise architecture space, and now I’ve been invited to join Dana Gardner’s panel on Monday morning, which will also be recorded as a podcast. The panel is on architecture’s scope extending beyond the enterprise, bringing together elements of EA, SOA and cloud computing. Also on the panel will be John Gotze, president of the Association of Enterprise Architects, and Tim Westbrock from EAdirections. There are big aspects of business process to this, specifically when you start orchestrating processes that span the enterprise’s boundaries, and I hope that we’ll have time to explore that.

I’ll probably be at the conference each day to check out some of the other sessions, and I may stick around for some of the evening events, such as CloudCamp on Wednesday. If you’re there, drop by my session and say hi.

Social processes #e2open

For the last session of the day – and what will be the last session of the Enterprise 2.0 conference for me – I shifted over to the Enterprise2Open unconference for a discussion on social processes with Mark Masterson. As part of his job developing software for insurance companies, he put together a mockup of a social front end for an insurance claims adjuster’s workplace. The home page is dominated by the activity stream, which includes links to tasks, blog posts, documents and other systems that are relevant to this person’s work. It’s not just the usual social network stuff; it also includes information from enterprise systems such as ECM and BPM systems. There would be rules to set priorities on what’s in any given user’s activity stream.
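The rules that prioritize the activity stream could be sketched roughly like this – the source types, weights and scoring are my own invented illustration, not anything from Masterson’s mockup:

```python
import heapq
from datetime import datetime

# An activity-stream item can come from social sources (blog posts, status
# updates) or enterprise systems (BPM tasks, ECM documents). The weights
# below are illustrative: enterprise work items outrank purely social items.
SOURCE_WEIGHT = {"bpm_task": 100, "ecm_document": 60, "blog_post": 30, "status_update": 10}

def score(item, now=None):
    """Higher score = closer to the top of the stream. Overdue BPM work items
    get a boost; within a source type, newer items beat older ones."""
    now = now or datetime.utcnow()
    base = SOURCE_WEIGHT.get(item["source"], 0)
    if item.get("due") and item["due"] <= now:
        base += 50  # overdue work floats to the top
    age_hours = (now - item["created"]).total_seconds() / 3600.0
    return base - age_hours  # older items slowly decay

def activity_stream(items, limit=10):
    """Return the top-priority items for the user's home page."""
    return heapq.nlargest(limit, items, key=score)
```

A per-user rules layer would then adjust these weights – a claims adjuster might boost items tagged with their open cases, for example.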

There are also more purely social features, such as a personal profile with the ability to provide status updates and indicate presence.

When the user clicks on an item in the activity stream representing an enterprise BPM task, the information from the task and its process is pulled into this environment, rather than launching the BPM system’s user interface; this becomes a unified desktop for the user, rather than just a launchpad. Information about a claim could include external data that is mashed up into the interface, such as Google maps. The right panel of the interface changes so that it always shows information to support what is happening in the main pane; when a BPM work item is open, for example, the right panel includes links to people and content that might be related to that specific case. It also includes a tag cloud that can be used to click through to information across the enterprise about that subject; for example, clicking on the “fraudulent injury” tag showed a list of people who are related in some way (that is, they are a resource with some experience) to fraudulent injury claims, and what their role in the process might be.

Masterson presents this as a vision for what he thinks is the best type of interface to present to all the participants in the claims process: no jumping around between multiple applications, no green screens, and the relationships between information from multiple systems combined in ways that make sense relative to the adjuster’s work. I see some of this type of functionality being built into some of the more modern BPM systems, but that’s not what a lot of insurance companies are using: they’re using out-of-date versions of FileNet and other more traditional BPM systems.

As with most unconference sessions, this is a small bit of presentation and a lot of audience discussion. Some in the group made a distinction between collaboration and social, and didn’t see the sort of collaboration within business processes that happens within organizations as social. Masterson (and I) disagree: whenever you deviate from the structured business process in a process such as claims adjudication, it’s an inherently social activity since people are relying on their tacit knowledge about what other people can bring to the process, and using (often) ad hoc methods for bringing them into the flow. I think that they are confusing “social” with “public”, and have been drinking too much of the E2.0 Kool-Aid that’s being passed around at this conference.

The really unique thing here is not putting a pretty front end on enterprise systems (although that’s a nice feature, it’s just a relatively well-understood integration issue); it’s the home page as a unified view of a user’s work environment – I hesitate to call it a unified inbox since it’s not just about delivering tasks or messages to be acted upon – and the information relationships that allow the right panel to be populated with relevant information and links for the specific work context. As opposed to tagging of process instances to use as future templates for exception cases, an idea that I’ve been knocking about for a while, this goes beyond that to collect information that might be related to a process instance from a variety of sources including blogs and wikis. Consider that the claims adjuster is handling a specific exception case, and someone else did a very similar case previously and documented their actions in a procedures wiki: this sort of environment could bring in information about the previous case when the user is processing the current case. The information in the right panel is replacing the user’s memory and the line of sticky notes that they have on the edge of their screen.
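The tag-based lookup that would populate the right panel with related wiki pages and prior cases could be as simple as ranking items by tag overlap with the current case. A hypothetical sketch (the item ids and tags are made up):

```python
def related_items(case_tags, corpus):
    """Rank previously documented items (wiki pages, closed cases, blog posts)
    by tag overlap with the current case, so the context panel can surface
    them. `corpus` maps an item id to its set of tags."""
    case_tags = set(case_tags)
    scored = []
    for item_id, tags in corpus.items():
        overlap = case_tags & set(tags)
        if overlap:
            # Jaccard similarity: shared tags over all tags on either side
            sim = len(overlap) / float(len(case_tags | set(tags)))
            scored.append((sim, item_id))
    scored.sort(reverse=True)
    return [item_id for _sim, item_id in scored]
```

So when the adjuster opens a “fraudulent injury” claim, the wiki page where a colleague documented a similar case surfaces automatically, rather than relying on the adjuster remembering that it exists.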

There are some cool ideas in here, and I hope that it develops into a working prototype so that they can get this in front of actual users and refine the ideas. There’s a lot that’s broken in how enterprise processes work, even those that have been analyzed and automated with BPM, and bringing in contextual information to help with a specific work step (especially case management steps such as claims adjudication) is going to improve things at least a little bit.