Enterprise Architects in the cloud

A couple of weeks ago, I attended the Open Group’s Enterprise Architecture conference in Toronto (my coverage here), and ended up being invited to speak on Dana Gardner’s panel on how the cloud is pushing the enterprise architects’ role beyond IT into business process optimization.

You can now find the podcast here, subscribe to the Briefings Direct podcast series on iTunes here, and read the transcript here.

CloudCamp Toronto #cloudcamp #cloudcamptoronto

I attended my first unconference, commonly referred to as “-camps”, almost 2-1/2 years ago, when I went to Mountain View for MashupCamp, and have attended several since then, including more MashupCamps, BarCamp, TransitCamp, ChangeCamp and DemoCamp. I like the unconference format: although I rarely propose and lead a session, I actively participate, and find that this sort of conversational and collaborative discussion provides a lot of value.

We started with an unpanel, a format that I’d never seen before but really like: the MC has the audience shout out topics of interest, which he writes on a flipchart, then the four panelists each have 60 seconds to pick one of the topics and expand on it.

We then had the usual unconference format where people can step up and propose their own session, although two of the ten slots are prefilled: one with “what is cloud computing” and the other with “cloud computing business scenario workshop”; check the wiki page to see what we came up with for the other sessions, as well as (hopefully) everyone’s notes on the sessions linked from that page.

I’ll be sticking with the #cloudcamp hashtag after this since it leaves more room for chatter 🙂

Dana Gardner’s panel on cloud security #ogtoronto

After a quick meeting down the street, I made it back within a few minutes of the start of Dana Gardner’s panel on cloud security, including Glenn Brunette of Sun, Doug Howard of Perimeter eSecurity, Chris Hoff of Cisco, Richard Reiner of Enomaly and Tim Grance of NIST.

There was a big discussion about what should and shouldn’t be deployed to the cloud, echoing a number of the points made by Martin Harris this morning, but with a strong tendency not to put “mission critical” applications or data in the cloud due to the perceived risk; I think that these guys need to review some of the pain points that we gathered in the business scenario workshop, where at least one person said that their security increased when they moved to the cloud.

One of the key issues around cloud security is risk assessment: someone needs to do an objective comparison of on-premise versus cloud, because it’s not just a slam-dunk that on-premise is more secure than cloud, especially when there needs to be access by customers or partners. It’s hardly fair to hold cloud platforms to a higher level of security standards than on-premise systems: do a fair comparison, then look at the resulting costs and agility.

The panel seems pretty pessimistic about the potential for cloud platforms to outperform on-premise systems: I’m usually the ultra-conservative, risk-averse one in the room, but they’re making me feel like a cowboy. One of them used the example of Gmail – the free version, not the paid Google Apps – stating that it was still in beta (it’s not, as of a week ago) and that it might just disappear someday, and implying that you get what you pay for. No kidding, cheapskate: don’t expect to get enterprise-quality cloud environments for free. Pony up the $50/user/year for the paid version of Google Apps, however, and you get 99.9% availability (less than 9 hours of downtime per year): not sufficient for mission-critical applications, but likely sufficient for the office applications that it would replace.
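
That downtime figure is easy to sanity-check with a little arithmetic; here’s a minimal sketch (the other SLA levels shown are just examples, not Google’s published terms beyond the 99.9% mentioned above):

```python
# Back-of-the-envelope: annual downtime permitted by an availability SLA.
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def max_downtime_hours(availability: float) -> float:
    """Hours of downtime per year allowed at a given availability level."""
    return HOURS_PER_YEAR * (1.0 - availability)

for sla in (0.99, 0.999, 0.9999):
    print(f"{sla:.2%} availability -> {max_downtime_hours(sla):.2f} hours/year down")
# 99.00% -> 87.60, 99.90% -> 8.76, 99.99% -> 0.88
```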

There were a lot of other discussion topics, ending with some interesting points on standards and best practices for interoperability, integration, portability, and even audit practices. You can catch the replay on Dana Gardner’s Briefings Direct in a couple of weeks.

That’s it for the Enterprise Architecture Practitioners Conference. Tonight is CloudCamp, and tomorrow the Security Practitioners Conference continues.

Dana Gardner’s panel on EA skills in a downturn #ogtoronto

I was in a panel here on Monday, hosted by Dana Gardner and also including John Gøtze of the Association of Enterprise Architects, and Tim Westbrock of EAdirections, where we discussed the issues of extending the scope of architecture beyond the enterprise. This was recorded and will be included in Dana’s usual podcast series within a couple of weeks; I’ll post a link to it when I see it, or you can subscribe in iTunes and catch it there.

Today, he’s moderating two more panels, and I sat in on the beginning of the one about which skills and experience differentiate an enterprise architect in a downturn, although I unfortunately had to duck out to a client meeting before the end (the disadvantage of attending a conference in my home city). This one featured James de Raeve and Len Fehskens of The Open Group, David Foote of Foote Partners, and Jason Uppal of QRS. From the conference program description:

As the architect’s role continues to evolve in scope with the increasingly global and distributed enterprise, so too do the core skills and experiences required of them. Professional certifications, such as ITAC, ITSC and TOGAF, can be career differentiators at any time, but are particularly crucial during periods of economic downturn such as we’re currently experiencing.

This panel will examine the evolving job requirements of today’s enterprise architect, discuss the value of professional certification programs and how certifications help to not only legitimize and validate the profession, but also provide much-needed demand for the skills, capabilities and experience that certified professionals have within their organizations.  The panel will also include perspectives on how certification can affect market demand and salary levels for those certified.

It’s almost impossible to capture everything said on a panel or who said what, so just a few unattributed comments:

  • A lot of organizations are in panic mode, trying to cut costs but not lose (any more) customers; IT is afraid of being blamed for costs and inefficiencies
  • There needs to be more coherence around the definition of EA so that this position doesn’t get squeezed out during budget cuts due to lack of common understanding of what EAs do (I’m thinking it’s a bit late for that)
  • Issues in governance are becoming critical; architects need to have knowledge and skills in governance in order to remain relevant
  • Architects need to have the low level technical skills, but also have to develop the higher-level strategic and collaborative skills
  • Many organizations don’t have a career development framework in place to develop an EA team
  • In many cases, you will need to perform the role of architect before anyone is willing to call you that (that seems obvious to me): it’s as much about experience as about any sort of certification
  • HR doesn’t, in general, understand what an architect does, but sees it as just a fancy title that you give to someone in IT instead of giving them a raise (much the way that we used to give them the “project manager” title 15 years ago)

I hated to leave mid-session, but I’ll catch the replay on Dana Gardner’s Briefings Direct in a couple of weeks. I’m hoping to be back for at least some of the panel later today on cloud security, and likely stick around for CloudCamp tonight.

Cloud Computing Business Scenario Workshop #ogtoronto

I’ve never attended an Open Group event before, but apparently interactive customer requirements workshops are part of what they do. We’re doing a business scenario workshop to gather requirements for cloud computing, led by Terry Blevins of MITRE, also on the board of the Open Group. The goal is to capture real business requirements, with the desired outcome to have the vendors understand and respond to customers’ needs. The context presented for this is a call to action for cloud vendors to develop and adhere to open standards, and we were tasked with considering the following questions:

  • What are the pain points and ramifications of not addressing the pain points, relative to cloud computing?
  • What are the key processes that would take advantage of cloud computing?
  • What are the desired objectives of handling/addressing the pain points?
  • Who are the human actors and their roles?
  • What are the major computer actors and their roles?
  • What are the known needs that cloud computing must fulfill to help improve the processes?

We started with brainstorming on the pain points: in the context of cloud computing, given my critical use of Google Apps and Amazon S3, I found myself contributing as an end user. My key pain point (or it was, before I solved it) was the risk of losing data in a physical disaster such as fire/flood/theft and the need for offsite backup. There were a ton of other pain points:

  • Security – one person stated that their security is better since moving to cloud applications
  • Sizing and capacity
  • Flexibility in bundling and packaging their own products for selling
  • Complex development environments
  • Pressure to reduce capital investments
  • Operating costs
  • Ineffective support
  • Functional alignment to business needs
  • Need to align IT with business
  • Cost of physical space and energy (including green concerns)
  • Cost of failure discourages innovation
  • Compliance standards
  • Difficulties in governance and management
  • Incremental personnel costs as applications are added
  • Infrastructure startup cost barrier
  • Time to get solutions to market
  • Hard to separate concerns
  • Operational risk of using old equipment
  • Resource sharing across organizations
  • No geographic flexibility/location independence
  • Training cost and time
  • Loss of control by users provisioning cloud resources on their own
  • No access to new technology
  • Dependency on a few key individuals to maintain systems
  • Being stifled by in-house IT departments
  • Need to understand the technology in order to use it
  • Do-it-yourself in-house solutions
  • Lack of integrated, well-managed infrastructure
  • Overhead of compliance requirements, particularly in multinational context
  • Long time to market
  • Disposal costs of decommissioned systems
  • Cost of help desk
  • Legal/goodwill implications of security breaches
  • Can’t leverage latest ideas

This is a rough list thrown out by audience members, but certainly lots of pain here. This was consolidated into 9 categories:

  1. Resource optimization
  2. Cost
  3. Timeliness
  4. Business continuity (arguably, this is part of risk)
  5. Risk
  6. Security
  7. Inability to innovate
  8. Compliance
  9. Quality of IT

Things then got even more participatory: we all received 9 post-it notes, giving us 9 votes for these categories in order to collaboratively set priorities on them. We could cast all of our votes for one category, vote once for each category, or anything in between; this was intended to be from our own perspective, not our customers’ or what we feel is best for enterprises in general. For me, the key issues are business continuity and security, so I cast three votes for each. Cost is also important, so I gave it two votes, and timeliness got one vote. I’ve seen this same voting technique used before, but never with so much ensuing confusion over what to do. 🙂 Blevins pointed out that it sometimes works better to hand out (fake) money, since people understand that they’re assigning value to the ideas if they’re dividing up the money between them.

The three winners were 1, 2, and 3 from the original list, which (no surprise) translate to better, cheaper and faster. The voting fell out as follows:

Category                # of votes
Resource optimization   37
Cost                    34
Timeliness              41
Business continuity     8
Risk                    20
Security                29
Inability to innovate   29
Compliance              17
Quality of IT           16

Great session, and some really good input gathered.

TOGAF survey results #ogtoronto

Another flashback to Monday, when Jane Varnus of Bank of Montreal and Navdeep Panaich of Capgemini presented the results of a survey about TOGAF 9. They covered a lot of stats about EAs and their organizations, a few of which I found particularly interesting:

  • Architects form 2-4% of IT staff (the fact that the question was asked this way just reinforces the IT nature of architecture)
  • Most architecture practices started within the past 4-5 years
  • 65% of architecture initiatives are charged centrally rather than to individual projects
  • More than 60% of architecture groups are sponsored by the CTO or CIO, and more than 70% report up to the CTO or CIO
  • EAs have surprisingly low levels of responsibility, authority and decision-making in both enterprise efforts and projects, but are usually involved or consulted
  • The primary driver for EA, with 44%, is business-IT alignment; better strategic planning and better IT decision-making come in next at 11% each
  • Business-IT alignment is also one of the key benefits that companies are achieving with EA; when they look at the future desired benefits, this expands to include agility, better strategic planning, and better IT decision-making
  • 32% of organizations have no KPIs for measuring EA effectiveness, and another 34% have 1-5 KPIs
  • More thought needs to be given to EA metrics: 40% of the KPIs are perception-oriented (e.g., stakeholder satisfaction), 33% are value-oriented (e.g., cost reduction) and 26% are activity-oriented (e.g., number of artifacts created)
  • EA frameworks are not yet used in a standardized fashion: 27% are using a standard architecture framework in a standardized manner (this is from a survey of Open Group members!), 44% have selected a framework but its use is ad hoc, and 27% select and use frameworks on an ad hoc basis
  • TOGAF (8 and 9 combined) is the predominant framework, used in more than 50% of organizations, with Zachman coming in second at 24%
  • Drivers for architect certification are unclear, and many organizations don’t require it

There’s a ton of additional information here; the whole presentation is here (direct PDF link), although it may be taken down after the conference.

Martin Harris, Platform Computing, on benefits of cloud computing in the enterprise #ogtoronto

Martin Harris from Platform Computing presented what they’ve learned by implementing cloud computing within large enterprises; he doesn’t see cloud as new technology, but as an evolution of what we’re already doing. I would tend to agree: the innovations are in the business models and impacts, not the technology itself.

He points out that large enterprises are starting with “private clouds” (i.e., on-premise cloud – is it really cloud if you own/host the servers, even if someone else manages it? or if you have exclusive use of the servers hosted elsewhere?), but that attitudes to public/shared cloud platforms are opening up since there are significant benefits when you start to look at sharing at least some infrastructure components. Consider, for example, development managers within a large organization being able to provision a virtual server on Amazon for a developer in a matter of minutes for less than the cost of a cappuccino per day, rather than going through a 6-8 week approval and purchasing process to get a physical server: each developer and test group could have their own virtual server for a fraction of the cost, time and hassle of an on-premise server, paid for only during the period in which it is required.
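
To make the “minutes instead of weeks” point concrete, here’s a minimal provisioning sketch using boto3, the current Python SDK for AWS (which postdates this talk); the AMI ID, instance type and tag are placeholders of my own, not anything Harris showed.

```python
# Minimal sketch: spin up a development server on Amazon EC2 with boto3.
# The image ID, instance type and tag values are illustrative placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-12345678",   # placeholder AMI
    InstanceType="t2.micro",  # small, low-cost instance class
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Purpose", "Value": "dev-sandbox"}],
    }],
)

instance = instances[0]
instance.wait_until_running()  # minutes, not a 6-8 week purchase cycle
print(f"Development server {instance.id} is running")
```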

Typically, enterprise servers (and other resources) are greatly under-utilized: they’re sized to be greater than the maximum expected load even if that load occurs rarely, and often IT departments are reluctant to combine applications on a server since they’re not sure of any interaction byproducts. Virtualization solves the latter problem, but making utilization more efficient is still a key cost issue. To make this work, whether in a private or public cloud, there needs to be some smart and automated resource allocation going on, driven by policies, application performance characteristics, plus current and expected load.
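
As a rough illustration of what that policy-driven allocation can look like, here’s a toy capacity planner; this is entirely my own sketch, not Platform Computing’s scheduling logic, and the thresholds are arbitrary.

```python
# Toy policy-driven allocator: resize an application's server pool based on
# observed and forecast utilization. Thresholds are illustrative only.
import math
from dataclasses import dataclass

@dataclass
class Policy:
    target_utilization: float = 0.60    # aim to run servers at ~60% load
    scale_up_threshold: float = 0.80    # add capacity above this
    scale_down_threshold: float = 0.30  # shed capacity below this

def plan_capacity(current_servers: int, observed_util: float,
                  forecast_util: float, policy: Policy) -> int:
    """Return how many servers the application should be allocated."""
    util = max(observed_util, forecast_util)  # plan against the worse case
    if util > policy.scale_up_threshold or util < policy.scale_down_threshold:
        # Resize so the expected load lands back on the utilization target.
        return max(1, math.ceil(current_servers * util / policy.target_utilization))
    return current_servers  # within policy bounds: leave the allocation alone

policy = Policy()
print(plan_capacity(10, observed_util=0.85, forecast_util=0.90, policy=policy))  # 15
print(plan_capacity(10, observed_util=0.20, forecast_util=0.25, policy=policy))  # 5
```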

You don’t need to move everything in your company into the cloud; for example, you can have development teams use cloud-based virtual servers while keeping production servers on premise, or replace Exchange servers with Google Apps while keeping your financial applications in-house. There are three key factors for determining an application’s suitability for the cloud (a rough scoring sketch follows the list):

  • Location – sensitivity to where the application runs
  • Workload – predictability and continuity of application load
  • Service level – severity and priority of service level agreements
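
As a toy illustration of how these three factors might screen candidate applications, here’s a quick scoring sketch; this is my own construction, not a formula from Harris’s presentation, and the scales and signs are purely illustrative.

```python
# Toy cloud-suitability screen based on the three factors above.
# Inputs are rated 1 (low) to 5 (high); the weighting is illustrative.
def cloud_suitability(location_sensitivity: int,
                      workload_variability: int,
                      sla_severity: int) -> int:
    """Higher scores suggest a better cloud fit.

    Location sensitivity and strict SLAs argue against the cloud;
    bursty, unpredictable workloads argue for it, since elastic
    capacity is where the cloud pays off.
    """
    return workload_variability - location_sensitivity - sla_severity

# Developer sandbox: location-insensitive, bursty load, forgiving SLA.
print(cloud_suitability(1, 5, 1))   # 3  -> good candidate
# Regulated email gateway: location-sensitive, steady load, strict SLA.
print(cloud_suitability(5, 2, 5))   # -8 -> keep on premise
```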

Interestingly, he puts email gateways in the “not viable for cloud computing” category, but stated that this was specific to the Canadian financial services industry in which he works; I’m not sure that I agree with this, since there are highly secure outsourced email services available, although I also work primarily with Canadian financial services and find that they can be overly cautious sometimes.

He finished up with some case studies for cloud computing within enterprises: R&D at SAS, enterprise corporate cloud at JPMC, grid to cloud computing at Citi, and public cloud usage at Alatum telecom. There’s an obvious bias towards private cloud since that’s what his company provides (to the tune of 5M managed CPUs), but some good points here regardless of your cloud platform.

Ndu Emuchay, IBM, on standards in cloud computing #ogtoronto

Today has a track devoted mostly to cloud computing, and we started with Ndu Emuchay of IBM discussing the cloud computing landscape and the importance of standards. IBM is pretty innovative in many areas of new technology – I’ve blogged in the past about their Enterprise 2.0 efforts, and just this morning saw an article on what they’re doing with the internet of things, where they’re integrating sensors and real-time messaging, much of which would be cloud-based by the nature of the objects to which the sensors are attached.

He started with a list of both business and IT benefits for considering the cloud:

  • Cost savings
  • Employee and service mobility
  • Responsiveness and agility in new solutions
  • Allows IT to focus on their core competencies rather than running commodity infrastructure – as the discussion later in the presentation pointed out, this could result in reduced IT staff
  • Economies of scale
  • Flexibility of hybrid infrastructure spanning public and private platforms

From a business standpoint, users only care that systems are available when they need them, do what they want, and are secure; it doesn’t really matter if the servers are in-house or not, or if they own the software that they’re running.

Clouds can range from private, which are leased or owned by an enterprise, to community and public clouds; there’s also the concept of internal and external clouds, although I’m not sure that I agree that anything that’s internal (on premise) could actually be considered as cloud. The Jericho Forum (which appears to be part of the Open Group) publishes a paper describing their cloud cube model (direct PDF link).

There’s a big range of cloud-based services available now: people services (e.g., Amazon’s Mechanical Turk), business services (e.g., business process outsourcing), application services (e.g., Google Apps), platform services and infrastructure services (e.g., Amazon S3); it’s important to determine what level of services you want to include within your architecture, and the risks and benefits associated with each. This is a godsend for small enterprises like my one-person firm – I use Google Apps to host my email/calendar/contacts, synchronized to Outlook on my desktop, and use Amazon S3 for secure daily backup – but we’re starting to see larger organizations put tens of thousands of users on Google Apps to replace their Exchange servers, and greatly reduce their costs without compromising functionality or security.
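
For what it’s worth, the daily S3 backup I mentioned amounts to very little code; here’s a minimal sketch using boto3 (which postdates this post), with a placeholder bucket name and source directory of my own.

```python
# Minimal daily-backup sketch: upload local files to Amazon S3 with boto3.
# The bucket name and source directory are illustrative placeholders.
from datetime import date
from pathlib import Path

import boto3

BUCKET = "my-offsite-backup"               # placeholder bucket
SOURCE = Path("~/Documents").expanduser()  # placeholder source directory

s3 = boto3.client("s3")
prefix = f"backup/{date.today().isoformat()}"

for path in SOURCE.rglob("*"):
    if path.is_file():
        key = f"{prefix}/{path.relative_to(SOURCE)}"
        s3.upload_file(str(path), BUCKET, key)  # one PUT per file
        print(f"uploaded s3://{BUCKET}/{key}")
```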

Emuchay presented a cloud computing taxonomy from a paper on cloud computing use cases (direct PDF link) that includes hosting, consumption and development as the three main categories of participants.

There’s a working group, organized using a Google Group, that developed this paper and taxonomy, so join in there if you feel that you can contribute to the efforts.

As he points out, many inhibitors to cloud adoption can be addressed through security, interoperability, integration and portability standards. Interoperability is the ability for loose coupling or data exchange between systems that appear as black boxes to each other; integration combines components or systems into an overall system; and portability considers the ease of moving components and data from one system to another, such as when switching cloud providers. These standards impact the five different cloud usage models: end user to cloud; enterprise to cloud to end user; enterprise to cloud to enterprise (interoperability); enterprise to cloud (integration); and enterprise to cloud (portability). He walked through the different types of standards required for each of these use cases, highlighting where there were accepted standards and some of the challenges still to be resolved. It’s clear that open standards play a critical role in cloud adoption.

Alain Perry, Treasury Board Secretariat, on EA in Canadian government #ogtoronto

Also on Monday, we heard from Alain Perry from the CIO branch of the Treasury Board Secretariat on how enterprise architecture, supported by the use of TOGAF, is making its way into the Canadian government at all levels. The EA community of practice is supporting the use of TOGAF 9 in order to enable a common vocabulary and taxonomy for creating and using architectural artifacts, and to create reusable reference models, policy instruments and standards to support the architectural work.

Using the classic “city planning” analogy that we hear so often in EA discussions, Perry showed how they break down their levels of detail into strategic/enterprise architecture (“cityscapes”) for the vision and principles required for long-term direction, program/segment architecture (“district designs”) for specific programs and portfolios to provide context for solution architectures, and solution architecture (“detailed building design and specs”) for a specific project.

They used an adaptation of TOGAF to create the common content model for each of those three levels: architecture vision, architecture requirements, business architecture, information systems architecture (including data and application architecture), technology architecture, and architecture realization.

They’ve created the Canadian Governments Reference Model (CGRM) that allows different levels of government to share standards, tools and capabilities: in Canada, that includes at least federal, provincial and municipal, plus sometimes regional, all with their own political agendas, so this is no mean feat.

Allen Brown of Open Group on their internal use of TOGAF #ogtoronto

I was taking notes in a paper notebook at the conference on Monday, and only now have had time to review them and write up a summary.

The general sessions opened with Allen Brown of the Open Group discussing their own use of TOGAF in architecting their internal systems. Since they’re making a push to have TOGAF used in more small and medium businesses, using themselves as a test case was an excellent way to make their point. This seemed to be primarily a systems architecture exercise, responding to threats such as operational risk and security; but then, the problem that they had was primarily one of systems, not of overall organizational strategy.

So far, as part of the overall project, they’ve replaced their obsolete financial system, outsourced credit card handling, moved to hosted offsite servers, added a CRM system, and are adding a CMS to reduce their dependency on a webmaster for making website updates. These are all great moves forward for them, but the interesting part was how they approached architecture: they involved everyone in the organization and documented the architecture on the intranet, so that everyone was aware of what was happening and the impacts that it would have. Since this included business priorities and constraints, the non-technical people could contribute to and validate the scenarios, use cases and processes in the business architecture.

They developed a robust set of architecture artifacts, documenting the business, applications, data and technology architectures, then identified opportunities and solutions as part of an evaluation report that fed into the eventual system implementations.

This was a great case study, since it showed how to incorporate architecture into a small organization without a lot of resources. They had no budget to hire new staff or invest in new tools, and had to deal with the reality of revenue-generating work taking precedence over the architecture activities. They weren’t able to create every possible artifact, so were forced to focus on the ones most critical to success (a technique that could be used well in larger, better-funded organizations). Yet they still experienced a great deal of success, since TOGAF forced them to think at all levels rather than just getting mired in the technical details of any of the specific system upgrades: this resulted in appropriate outsourcing and encouraged reuse. At the end of the day, Brown stated that they could not have achieved half of what they have without TOGAF.