BPMS Usage Patterns

I’ve been catching up on the replays of some webinars, and today went through BPM Institute’s webinar on BPMS usage patterns (free, but registration required), featuring Dr. Ketabchi of Savvion.

Dr. K started with a short definition of BPM, including the business value, then a bit about Savvion and their products; you can skip through to the 24-minute point on the replay if you’re already up to speed on this.

He describes seven key BPMS usage patterns:

  • Human-centric (which may be long-running)
  • Document-centric
  • System-centric
  • Decision-centric
  • Case management
  • Project-centric
  • Event-centric

Although all of the vendors and analysts have their own spin on this, looking at usage patterns when selecting a BPMS is important to ensure that you end up with a system that can support most or all of your usage patterns.

The first three of these (human, document and system) are very well understood, but the others require a bit more explanation.

Case management requires extensive analytics about the cases to be handled, as well as a fairly comprehensive interface for agents to use when handling the case, and potentially the customer at the same time. This is not just about a transaction being processed through a BPMS, but potentially a set of interacting transactions and ad hoc collaboration, as well as documents and other data or content. There’s been a lot written on case management BPM lately, including a new OMG RFP for case management; the topic has become strangely high-profile.
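To make the distinction concrete, here’s a minimal sketch (Python, with hypothetical field names, not any vendor’s data model) of how a case differs from a single process instance: it aggregates multiple related transactions, documents and ad hoc collaboration under one umbrella.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessInstance:
    """A single structured transaction, e.g. 'damage assessment' or 'payment approval'."""
    name: str
    status: str = "in_progress"

@dataclass
class Case:
    """A case bundles related transactions, content and ad hoc work for one customer matter."""
    case_id: str
    customer: str
    processes: List[ProcessInstance] = field(default_factory=list)  # interacting transactions
    documents: List[str] = field(default_factory=list)              # supporting content
    notes: List[str] = field(default_factory=list)                  # ad hoc collaboration

    def is_resolved(self) -> bool:
        # The case closes only when every constituent transaction is finished
        return all(p.status == "completed" for p in self.processes)

# Example: an insurance claim case spanning two processes plus correspondence
claim = Case("C-1001", "A. Customer",
             processes=[ProcessInstance("damage assessment"),
                        ProcessInstance("payment approval")],
             documents=["police_report.pdf"],
             notes=["Called customer to confirm repair shop"])
print(claim.is_resolved())  # False until all constituent processes complete
```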

Decision-centric BPM is heavily rules-driven, triggered by data or events, where complex decisions need to be evaluated either for automated processing or to present to a user for further decision-making or work. In the past couple of years, we’ve increasingly seen the integration of business rules with BPMS, with some BPMS vendors including full business rules capabilities directly in their product, while others integrate with one or more of the mainstream BRMS platforms.
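As an illustration of what that integration looks like in practice, here’s a minimal sketch (Python, with invented rules and field names, not any particular BRMS API) of a decision step that evaluates rules against process data and either routes automatically or hands off to a person:

```python
# Hypothetical decision step: evaluate simple rules against process data,
# then either complete the step automatically or route it to a person.

RULES = [
    # (condition, decision) pairs; a real BRMS would manage these externally
    (lambda d: d["amount"] <= 1000 and d["risk_score"] < 0.3, "auto_approve"),
    (lambda d: d["amount"] > 50000, "refer_to_manager"),
]

def decide(process_data: dict) -> str:
    for condition, decision in RULES:
        if condition(process_data):
            return decision
    return "manual_review"  # no rule fired: present to a user for further work

print(decide({"amount": 500, "risk_score": 0.1}))    # auto_approve
print(decide({"amount": 75000, "risk_score": 0.6}))  # refer_to_manager
print(decide({"amount": 5000, "risk_score": 0.5}))   # manual_review
```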

Project-centric BPM is a view that’s somewhat unique to Savvion: a crossover between project/portfolio management and BPMS that includes resource optimization across processes, milestone definition and other project management-like techniques within the context of a process. Think of these as long-running processes that the process owners think of as projects, not processes, but where these projects always follow a pre-defined template. Mapping the Gantt chart to a process map isn’t that big of a step once you think of the projects as processes, and can provide an alternative view for monitoring the progress of the project/process.
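If you treat the project template as a process with durations attached to each step, the mapping to a Gantt view is mostly arithmetic. A rough sketch (Python, with invented step names and durations) of turning a sequential process template into Gantt-style start and end dates:

```python
from datetime import date, timedelta

# A pre-defined process template with estimated durations (in days) per step
template = [("Initiation", 5), ("Requirements", 10), ("Build", 20), ("Acceptance", 7)]

def to_gantt(steps, start: date):
    """Map a sequential process template to Gantt-style (step, start, end) rows."""
    rows, cursor = [], start
    for name, duration_days in steps:
        end = cursor + timedelta(days=duration_days)
        rows.append((name, cursor, end))
        cursor = end  # next step starts when this one ends (purely sequential flow)
    return rows

for name, s, e in to_gantt(template, date(2009, 7, 1)):
    print(f"{name:<14} {s} -> {e}")
```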

Event-centric BPM is usually focused on improvement of existing business processes: taking various sources of events, including processes, then analyzing them to pinpoint bottlenecks and present the results to a user in a monitoring dashboard. This is the monitoring and optimization part of the model-execute-monitor-optimize cycle of BPM.
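For example, a bottleneck analysis over process events can be as simple as aggregating the time spent at each step; here’s a minimal sketch (Python, with made-up event records) of the kind of calculation that would feed a monitoring dashboard:

```python
from collections import defaultdict

# Hypothetical process events: (instance_id, step, start_hour, end_hour)
events = [
    ("P1", "Validate", 0, 1), ("P1", "Approve", 1, 9),
    ("P2", "Validate", 0, 2), ("P2", "Approve", 2, 14),
    ("P3", "Validate", 1, 2), ("P3", "Approve", 2, 6),
]

def average_step_duration(evts):
    """Average hours spent at each step; the slowest step is the likely bottleneck."""
    totals, counts = defaultdict(float), defaultdict(int)
    for _, step, start, end in evts:
        totals[step] += end - start
        counts[step] += 1
    return {step: totals[step] / counts[step] for step in totals}

durations = average_step_duration(events)
print(durations)                          # {'Validate': 1.33..., 'Approve': 8.0}
print(max(durations, key=durations.get))  # 'Approve' is the bottleneck
```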

I’m not sure that I agree that decision-centric and event-centric are unique BPMS usage patterns: I see decisioning as part of the normal operation within most processes, particularly system-centric processes, and event-centricity seems to be the monitoring and feedback end of any other type of process. In other words, these two aren’t really usage patterns, they’re functionality that is likely required in the context of one of the other usage patterns. Although Forrester split out human-centric and document-centric as two unique patterns a couple of years ago, I would argue that document-centric is just a subset of human-centric (since realistically, you’re not going to be doing a lot of document processing without people involved) rather than its own unique pattern. Similarly, project-centric processes are a type of long-running human-centric process, and may not represent a unique usage pattern, although I like the visualization of the process in a project-like Gantt chart view.

Not every company has every usage pattern, or considers every usage pattern to be mission-critical, but it’s important to keep in mind that although you may start with one or two usage patterns, it would be hugely beneficial to be able to expand to other usage patterns on the same BPMS platform. A lot of companies that I’ve seen focus on just a human-centric (or, even worse, a more limited document-centric) solution for a specific workflow-type application, or a system-centric service orchestration solution for their straight-through processing; inevitably, they end up regretting this decision down the road when they want to do other types of BPM, and have to either buy another BPMS solution to address those needs, or try to bash their current solution (with a ton of customization) into a usage pattern for which it was never intended.

Some interesting thoughts in this webinar about usage patterns. As the BPM suites (most of them former “pure plays”) battle against the stack vendors, the issue of which usage patterns are covered by which product is coming to the fore.

Disclosure: Savvion is a customer for whom I have created podcasts and performed strategic analysis, although I was not compensated in any way for writing this webinar review (or, in fact, anything else in this blog).

International academic BPM conference 2009

Last year, I attended BPM 2008, an international conference that brings together academics, researchers and practitioners to take a rather academic look at what is happening in BPM research. This is important to those of us who work daily with BPM systems, since some of this research will be finding its way into products over the next few years. Also, it was in Milan, and I never pass up the opportunity for a trip to Italy.

The conference organizers were kind enough to extend a press invitation to me again this year (that means that I don’t pay the conference fee, but I do pay my own expenses) to attend BPM 2009 in Ulm, Germany, and I’ll be headed that way in a few weeks. I’ll also be attending the one-day workshop on BPM and social software prior to the conference.

Travel budgets are tight for everyone this year, but I highly recommend that if you’re a vendor of BPMS software, you get one or two of your architects/designers/developers/brain trust to Ulm next month. This is not a conference where you send your marketing people to glad-hand all around; this is a place for serious learning about BPM research. Consider it a small investment in a huge future: having your product designers exposed to this research and networking with the researchers could make a competitive difference for you in years to come.

I’ll also be hanging out for a week after the conference, probably traveling around Germany, so any travel suggestions are welcome.

Lombardi Blueprint update

[Screenshot: Blueprint home page]

I recently had a chance for an in-depth update on Lombardi’s Blueprint – a cloud-based process modeling tool – to see a number of the new features in the latest version. I haven’t had a chance to look at it in detail for over a year, and am impressed by the social networking tools that are built in now: huge advances in the short two years since Lombardi first launched Blueprint. The social networking tools make this more than just a Visio replacement: it’s a networking hub for people to collaborate on process discovery and design, complete with a home page that shows a feed of everything that has changed on processes that you are involved in. There’s also a place for you to bookmark your favorite processes so that you can easily jump to them or see who has modified them recently.

At a high level, creating processes hasn’t changed all that much: you can create a process using the outline view by just typing in a list of the main process activities or milestones; this creates the discovery map simultaneously on the screen, which then allows you to drag steps under the main milestone blocks to hierarchically indicate all the steps that make up that milestone. There have been a number of enhancements in specifying the details for each step, however: you can assign roles or specific people as the participant, business owner or expert for that step; document the business problems that occur at that step to allow for some process analysis at later stages; create documentation for that step; and attach any documents or files to make available as reference materials for this step. Once the details are specified, the discovery map view (with the outline on the left and the block view on the right) shows the participants aligned below each milestone, and clicking on a participant shows not only where that participant is used in this process, but where they are used in all other processes in the repository.
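The underlying structure is essentially an outline with attributes on each step. A rough sketch (Python, with hypothetical field names, not Lombardi’s actual data model) of what a milestone/step hierarchy with participants, problems and attachments might look like:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    name: str
    participant: Optional[str] = None        # role or person doing the work
    business_owner: Optional[str] = None
    expert: Optional[str] = None
    problems: List[str] = field(default_factory=list)  # issues noted for later analysis
    documentation: str = ""
    attachments: List[str] = field(default_factory=list)

@dataclass
class Milestone:
    name: str
    steps: List[Step] = field(default_factory=list)

# Outline-style process: milestones with steps dragged underneath (invented example)
process = [
    Milestone("Receive application", [Step("Scan documents", participant="Mailroom clerk")]),
    Milestone("Underwrite", [Step("Assess risk", participant="Underwriter",
                                  problems=["Missing credit data delays assessment"])]),
]

# Where is a participant used across the process? (Blueprint shows this across the repository)
def steps_for_participant(proc, who):
    return [(m.name, s.name) for m in proc for s in m.steps if s.participant == who]

print(steps_for_participant(process, "Underwriter"))  # [('Underwrite', 'Assess risk')]
```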

[Screenshot: new step and gateway added; placement and validation are automatic]

At this point, we haven’t yet seen a bit of BPMN or anything vaguely resembling a flowchart: just a list of the major activities and the steps to be done in each one, along with some details about each step. It would be pretty straightforward for most business users to learn how to use this notation to do an initial sketch of a process during discovery, even if they don’t move on to the BPMN view.

Switching to the process diagram view, we see the BPMN process map corresponding to the outline created in the discovery map view, and you can switch back and forth between them at any time. The milestones are shown as time bands, and if participants were identified for any of the steps, swimlanes are created corresponding to the participants. Each of the steps is placed in a simple sequential flow to start; you can then create gateways and any other elements directly in the process map in this view. Blueprint enforces the placement of each element and maintains a valid BPMN process map.
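The translation from outline to diagram is mechanical: milestones become time bands, participants become swimlanes, and steps are wired into a default sequential flow. A minimal sketch (Python, with invented example data, not real BPMN serialization) of that default layout:

```python
# Outline entries as (milestone, step, participant) – invented example data
outline = [
    ("Receive application", "Scan documents", "Mailroom clerk"),
    ("Underwrite", "Assess risk", "Underwriter"),
    ("Underwrite", "Approve policy", "Underwriter"),
    ("Issue", "Send policy documents", "Mailroom clerk"),
]

# Swimlanes: one per distinct participant, in order of first appearance
lanes = list(dict.fromkeys(p for _, _, p in outline))

# Time bands: one per distinct milestone, in outline order
bands = list(dict.fromkeys(m for m, _, _ in outline))

# Default sequential flow: each step connects to the next one in outline order
flows = [(outline[i][1], outline[i + 1][1]) for i in range(len(outline) - 1)]

print("Lanes:", lanes)
print("Time bands:", bands)
for src, dst in flows:
    print(f"{src} -> {dst}")
```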

There’s also a documentation view of the process, showing all of the documentation entered in the details for any step.

Not everyone will have access to Blueprint, however, so you can also generate a PowerPoint file with all of the process details, including analysis of problem areas identified in the step details; a PDF of the process map; a Word document containing the step documentation; an Excel spreadsheet containing the process data; and a BPDM or XPDL output of the process definition. It will also soon support BPMN 2.0 exports. Process maps can also be imported from Visio; Blueprint analyzes the Visio file to identify the process maps within it, then allows the user to select the mapping to use from the Visio shapes into Blueprint element types.
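That Visio import step essentially amounts to choosing a mapping from Visio master shapes to process element types and applying it to the parsed file. A rough sketch (Python, with invented shape and type names, not Blueprint’s actual import format) of the idea:

```python
# Hypothetical mapping from Visio master shape names to process element types;
# in Blueprint the user picks this mapping interactively during import.
shape_mapping = {
    "Process": "activity",
    "Decision": "gateway",
    "Terminator": "event",
    "Document": "attachment",
}

# Shapes as they might come out of a parsed Visio file (invented data)
visio_shapes = [("Process", "Validate order"), ("Decision", "In stock?"),
                ("Process", "Ship order"), ("Terminator", "End")]

imported = [(shape_mapping.get(master, "unmapped"), label) for master, label in visio_shapes]
print(imported)
# [('activity', 'Validate order'), ('gateway', 'In stock?'),
#  ('activity', 'Ship order'), ('event', 'End')]
```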

[Screenshot: balloons on steps indicate comments from reviewers]

There are other shared process modeling environments that do many of the same things, but the place where Blueprint really shines is in collaboration. It’s a shared whiteboard concept, so that users in different locations can work together and see each other’s changes interactively, without waiting for one person to check the final result into a repository: an idea that is going to take hold more with the advent of technologies such as Google Wave that raise the bar for expectations of interactive content sharing. This level of interactivity will undoubtedly reduce the need for face-to-face sessions: if multiple people can view and interact with a process design simultaneously, less time probably needs to be spent in a room together doing this on a whiteboard. There’s (typed) chat functionality built right into the product, although most customers apparently still use conference calls while they are working together rather than the chat feature: hard to drag and drop things around on the process map while typing in chat at the same time, I suppose. Blueprint maintains a proper history of the changes to processes, and allows viewing of or reverting to previous versions.

Newly added is the ability to share processes in reviewer mode with a larger audience for review and feedback: users with review permissions (participants, as opposed to authors) can view the entire process definition but can’t make modifications; they can, however, add comments on steps which are then visible to the other participants and authors. Like authors, reviewers can switch between discovery map, process diagram and documentation views, although their views are read-only, and they can add comments to steps in either of the first two views. Since Blueprint is hosted in the cloud, both authors and reviewers can be external to your company; however, user logins aren’t shared between Blueprint accounts, but have to be created by each company in their account. It would be great if Blueprint provided authentication outside the context of each company’s account so that, for example, if I were participating in two projects with different clients who were both Blueprint customers, and I was also a Blueprint customer, they wouldn’t both have to create a login for me, but could reuse my existing login. Something like this is being done by Freshbooks, an online time tracking and invoicing application, so that Freshbooks customers can interact with each other more easily. Blueprint is also providing the ability to limit access in order to meet some security standards: access to a company’s account can be limited to their own network (by IP address), and external participants can be restricted to specific domains.
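The access model described boils down to a couple of checks at login and at edit time. A minimal sketch (Python, with hypothetical rules, not Lombardi’s actual implementation) of author-versus-reviewer permissions plus the network and domain restrictions:

```python
from ipaddress import ip_address, ip_network

# Hypothetical account policy: authors can edit, reviewers can only view and comment;
# access may be limited to the company's network, and external users to approved domains.
ALLOWED_NETWORK = ip_network("10.0.0.0/8")       # company's own network
ALLOWED_EXTERNAL_DOMAINS = {"partnerfirm.com"}   # permitted external participants

def may_access(email: str, source_ip: str) -> bool:
    domain = email.split("@")[-1]
    internal = ip_address(source_ip) in ALLOWED_NETWORK
    return internal or domain in ALLOWED_EXTERNAL_DOMAINS

def may_edit(role: str) -> bool:
    return role == "author"   # reviewers get read-only views plus comments

print(may_access("analyst@partnerfirm.com", "203.0.113.7"))  # True (allowed domain)
print(may_access("someone@elsewhere.com", "203.0.113.7"))    # False
print(may_edit("reviewer"))                                   # False
```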

One issue that I have with Blueprint, and have been vocal about in the past, is the lack of a non-US hosting option. Many organizations, including all of my Canadian banking customers, will not host anything on US-based servers due to the differences in privacy laws; even though, arguably, Blueprint doesn’t contain any customer information since it’s just the process models, not the executable processes, most of them are pretty conservative. I know that many European organizations have the same issues, and I think that Lombardi needs to address this issue if they want to break into non-US markets in a significant way. Understandably, Lombardi has resisted allowing Blueprint to be installed inside corporate firewalls since they lose control of the upgrade cycle, but many companies will accept hosting within their own country (or group of countries, in the case of the EU) even if it’s not on their own gear.

Using a cloud-based solution for process modeling makes a lot of sense in many situations: nothing to install on your own systems and low-cost subscription pricing, plus the ability to collaborate with people outside your organization. However, as easy as it is to export from Blueprint into a BPMS, there’s still the issue of round-tripping if you’re trying to model mostly automated processes.

Gartner webinar on using BPM to survive, thrive and capitalize

Michele Cantara and Janelle Hill hosted a webinar this morning, which will be repeated at 11am ET (I was on the 8am ET version) – not sure if that will be just the recording of this morning’s session, or if they’ll do it all over again.

Cantara started talking about the sorry state of the economy, complete with a picture of an ax-wielding executioner, and how many companies are laying off staff to attempt to balance their budgets. Their premise is that BPM can turn the ax-man into a surgeon: you’ll still have cuts, but they’re more precise and less likely to damage the core of your organization. Pretty grim start, regardless.

They show some quotes from customers, such as “the current economic climate is BPM nirvana” and “BPM is not a luxury”, pointing out that companies are recognizing that BPM can provide the means to do business more efficiently to survive the downturn, and even to grow and transform the organization by being able to outperform their competition. In other words, if a bear (market) is chasing you, you don’t have to outrun the bear, you only have to outrun the person running beside you.

Hill then took over to discuss some of the case studies of companies using BPM to avoid costs and increase the bottom line in order to survive the downturn. These are typical of the types of business cases used to justify implementing a BPMS within conservative organizations in terms of visibility and control over processes, although I found one interesting: a financial services company used process modelling in order to prove business cases, with the result that 33% of projects were not funded since they couldn’t prove their business case. Effectively, this provided a more data-driven approach to setting priorities on project funding, rather than the more common emotional and political decision-making that occurs, but through process modelling rather than automation using a BPMS.

There can be challenges to implementing BPM (as we all know so well), so she recommends a few things to ensure that your BPM efforts are successful: internal communication and education to address the cultural and political issues; establishing a center of excellence; and implementing some quick wins to give some street cred to BPM within your organization.

Cantara came back to discuss growth opportunities, rather than just survival: for organizations that are in reasonably good shape in spite of the economy, BPM can allow them to grow and gain relative market share if their competition is not able to do the same. One example was a hospital that increased surgical capacity by 20%, simply by manually modelling their processes and fixing the gaps and redundancies – like the earlier case of using modelling to set funding priorities, this project wasn’t about deploying a BPMS and automating processes, but just having a better understanding of their current processes so that they can optimize them.

In some cases, cost savings and growth opportunities are just two sides of the same coin, like a pharmaceutical company that used a BPMS to optimize their clinical trials process and grant payments process: this lowered costs per process by reducing the resources required for each, but the resulting increase in capacity also allowed them to handle 2.5x more projects than before. A weaker company would have just used the cost saving opportunity to cut headcount and resource usage, but for a company in a stable financial position, these cost savings allow for revenue growth without headcount increases.

In fact, rather than two sides of a coin, cost savings and growth opportunities could be considered two points on a spectrum of benefits. If you push further along the spectrum, as Hill returned to tell us about, you start to approach business transformation, where companies gain market share by offering completely new processes that were identified or facilitated by BPM, such as a rail transport company that leveraged RFID-driven BPM to avoid derailments through early detection of overheating problems on the rail cars.

Hill finished up by reinforcing that BPM is a management discipline, not just technology, as shown by a few of their case studies that had nothing to do with automating processes with a BPMS, but really were about process modelling and optimization – the key is to tie it to corporate performance and continuous improvement, not view BPM as a one-off project. A center of excellence (or competency center, as Gartner calls it) is a necessity, as are explicit process models and metrics that can be shared between business and IT.

If you miss the later broadcast today, Gartner provides their webinars for replay. Worth the time to watch it.

CloudCamp Toronto #cloudcamp #cloudcamptoronto

I attended my first unconference, commonly referred to as “-camps”, almost 2-1/2 years ago, when I went to Mountain View for MashupCamp, and have attended several since then, including more MashupCamps, BarCamp, TransitCamp, ChangeCamp and DemoCamp. I like the unconference format: although I rarely propose and lead a session, I actively participate, and find that this sort of conversational and collaborative discussion provides a lot of value.

We started with an unpanel, a format that I’ve never seen before but really like: the MC has the audience shout out topics of interest, which he writes on a flipchart, then the four panelists each have 60 seconds to pick one of the topics and expand on it.

We then had the usual unconference format where people can step up and propose their own session, although two of the ten slots are prefilled: one with “what is cloud computing” and the other with “cloud computing business scenario workshop”; check the wiki page to see what we came up with for the other sessions, as well as (hopefully) everyone’s notes on the sessions linked from that page.

I’ll be sticking with the #cloudcamp hashtag after this since it leaves more room for chatter 🙂

Dana Gardner’s panel on cloud security #ogtoronto

After a quick meeting down the street, I made it back within a few minutes of the start of Dana Gardner’s panel on cloud security, including Glenn Brunette of Sun, Doug Howard of Perimeter eSecurity, Chris Hoff of Cisco, Richard Reiner of Enomaly and Tim Grant of NIST.

There was a big discussion about what should and shouldn’t be deployed to the cloud, echoing a number of the points made by Martin Harris this morning, but with a strong tendency not to put “mission critical” applications or data in the cloud due to the perceived risk; I think that these guys need to review some of the pain points that we gathered in the business scenario workshop, where at least one person said that their security increased when they moved to the cloud.

One of the key issues around cloud security is risk assessment: someone needs to do an objective comparison of on-premise versus cloud, because it’s not just a slam-dunk that on-premise is more secure than cloud, especially when there needs to be access by customers or partners. It’s hardly fair to hold cloud platforms to a higher level of security standards than on-premise systems: do a fair comparison, then look at the resulting costs and agility.

The panel seems pretty pessimistic about the potential for cloud platforms to outperform on-premise systems: I’m usually the ultra-conservative, risk-averse one in the room, but they’re making me feel like a cowboy. One of them used the example of Gmail – the free version, not the paid Google Apps – stating that it was still in beta (it’s not, as of a week ago) and that it might just disappear someday, and implying that you get what you pay for. No kidding, cheapskate: don’t expect to get enterprise-quality cloud environments for free. Pony up the $50/user/year for the paid version of Google Apps, however, and you get 99.9% availability (less than 9 hours of downtime per year): not sufficient for mission-critical applications, but likely sufficient for your office applications that it would replace.
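The downtime figure is just arithmetic on the availability percentage; a quick check (Python):

```python
# 99.9% availability allows at most 0.1% downtime over a year
hours_per_year = 365 * 24                        # 8760
max_downtime_hours = hours_per_year * (1 - 0.999)
print(round(max_downtime_hours, 2))              # 8.76, i.e. "less than 9 hours per year"
```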

A lot of other discussion topics, ending with some interesting points on standards and best practices: for interoperability, integration, portability, and even audit practices. You can catch the replay on Dana Gardner’s Briefings Direct in a couple of weeks.

That’s it for the Enterprise Architecture Practitioners Conference. Tonight is CloudCamp, and tomorrow the Security Practitioners Conference continues.

Dana Gardner’s panel on EA skills in a downturn #ogtoronto

I was on a panel here on Monday, hosted by Dana Gardner and also including John Gøtze of the Association of Enterprise Architects, and Tim Westbrock of EAdirections, where we discussed the issues of extending the scope of architecture beyond the enterprise. This was recorded and will be included in Dana’s usual podcast series within a couple of weeks; I’ll post a link to it when I see it, or you can subscribe in iTunes and catch it there.

Today, he’s moderating two more panels, and I sat in on the beginning of the one about which skills and experience differentiate an enterprise architect in a downturn, although I unfortunately had to duck out to a client meeting before the end (the disadvantage of attending a conference in my home city). This one featured James de Raeve and Len Fehskens of The Open Group, David Foote of Foote Partners, and Jason Uppal of QRS. From the conference program description:

As the architect’s role continues to evolve in scope with the increasingly global and distributed enterprise, so too do the core skills and experiences required of them. Professional certifications, such as ITAC, ITSC and TOGAF, can be career differentiators at any time, but are particularly crucial during periods of economic downturn such as we’re currently experiencing.

This panel will examine the evolving job requirements of today’s enterprise architect, discuss the value of professional certification programs and how certifications help to not only legitimize and validate the profession, but also provide much-needed demand for the skills, capabilities and experience that certified professionals have within their organizations.  The panel will also include perspectives on how certification can affect market demand and salary levels for those certified.

It’s almost impossible to capture everything said on a panel or who said what, so just a few unattributed comments:

  • A lot of organizations are in panic mode, trying to cut costs but not lose (any more) customers; IT is afraid of being blamed for costs and inefficiencies
  • There needs to be more coherence around the definition of EA so that this position doesn’t get squeezed out during budget cuts due to lack of common understanding of what EAs do (I’m thinking it’s a bit late for that)
  • Issues in governance are becoming critical; architects need to have knowledge and skills in governance in order to remain relevant
  • Architects need to have the low level technical skills, but also have to develop the higher-level strategic and collaborative skills
  • Many organizations don’t have a career development framework in place to develop an EA team
  • In many cases, you will need to perform the role of architect before anyone is willing to call you that (that seems obvious to me): it’s as much about experience as about any sort of certification
  • HR doesn’t, in general, understand what an architect does, but sees it as just a fancy title that you give to someone in IT instead of giving them a raise (much the way that we used to give them the “project manager” title 15 years ago)

I hated to leave mid-session, but I’ll catch the replay on Dana Gardner’s Briefings Direct in a couple of weeks. I’m hoping to be back for at least some of the panel later today on cloud security, and likely stick around for CloudCamp tonight.

Cloud Computing Business Scenario Workshop #ogtoronto

I’ve never attended an Open Group event before, but apparently interactive customer requirements workshops are part of what they do. We’re doing a business scenario workshop to gather requirements for cloud computing, led by Terry Blevins of MITRE, also on the board of the Open Group. The goal is to capture real business requirements, with the desired outcome to have the vendors understand and respond to customers’ needs. The context presented for this is a call to action for cloud vendors to develop and adhere to open standards, and we were tasked with considering the following questions:

  • What are the pain points and ramifications of not addressing the pain points, relative to cloud computing?
  • What are the key processes that would take advantage of cloud computing?
  • What are the desired objectives of handling/addressing the pain points?
  • Who are the human actors and their roles?
  • What are the major computer actors and their roles?
  • What are the known needs that cloud computing must fulfill to help improve the processes?

We started with brainstorming on the pain points: in the context of cloud computing, given my critical use of Google Apps and Amazon S3, I found myself contributing as an end user. My key pain point (or it was, before I solved it) was the risk of losing data in a physical disaster such as fire/flood/theft and the need for offsite backup. There were a ton of other pain points:

  • Security – one person stated that their security is better since moving to cloud applications
  • Sizing and capacity
  • Flexibility in bundling and packaging their own products for selling
  • Complex development environments
  • Pressure to reduce capital investments
  • Operating costs
  • Ineffective support
  • Functional alignment to business needs
  • Need to align IT with business
  • Cost of physical space and energy (including green concerns)
  • Cost of failure discourages innovation
  • Compliance standards
  • Difficulties in governance and management
  • Incremental personnel costs as applications are added
  • Infrastructure startup cost barrier
  • Time to get solutions to market
  • Hard to separate concerns
  • Operational risk of using old equipment
  • Resource sharing across organizations
  • No geographic flexibility/location independence
  • Training cost and time
  • Loss of control by users provisioning cloud resources on their own
  • No access to new technology
  • Dependency on a few key individuals to maintain systems
  • Being stifled by in-house IT departments
  • Need to understand the technology in order to use it
  • Do-it-yourself in-house solutions
  • Lack of integrated, well-managed infrastructure
  • Overhead of compliance requirements, particularly in multinational context
  • Long time to market
  • Disposal costs of decommissioned systems
  • Cost of help desk
  • Legal/goodwill implications of security breaches
  • Can’t leverage latest ideas

This is a rough list thrown out by audience members, but certainly lots of pain here. This was consolidated into 9 categories:

  1. Resource optimization
  2. Cost
  3. Timeliness
  4. Business continuity (arguably, this is part of risk)
  5. Risk
  6. Security
  7. Inability to innovate
  8. Compliance
  9. Quality of IT

Things then got even more participatory: we all received 9 post-it notes, giving us 9 votes for these categories in order to collaboratively set priorities on them. We could cast all of our votes for one category, vote once for each category, or anything in between; this was intended to be from our own perspective, not our customers’ perspective or what we feel is best for enterprises in general. For me, the key issues are business continuity and security, so I cast three votes for each. Cost is also important, so I gave it two votes, and timeliness got one vote. I’ve seen this same voting technique used before, but never with so much ensuing confusion over what to do. 🙂 Blevins pointed out that it sometimes works better to hand out (fake) money, since people understand that they’re assigning value to the ideas if they’re dividing up the money between them.

The three winners were 1, 2, and 3 from the original list, which (no surprise) translate to better, cheaper and faster. The voting fell out as follows:

  • Resource optimization: 37 votes
  • Cost: 34 votes
  • Timeliness: 41 votes
  • Business continuity: 8 votes
  • Risk: 20 votes
  • Security: 29 votes
  • Inability to innovate: 29 votes
  • Compliance: 17 votes
  • Quality of IT: 16 votes

Great session, and some really good input gathered.

TOGAF survey results #ogtoronto

Another flashback to Monday, when Jane Varnus of Bank of Montreal and Navdeep Panaich of Capgemini presented the results of a survey about TOGAF 9. They covered a lot of stats about EAs and their organizations, a few of which I found particularly interesting:

  • Architects form 2-4% of IT staff (the fact that the question was asked this way just reinforces the IT nature of architecture)
  • Most architecture practices started within the past 4-5 years
  • 65% of architecture initiatives are charged centrally rather than to individual projects
  • More than 60% of architecture groups are sponsored by the CTO or CIO, and more than 70% report up to the CTO or CIO
  • EAs have surprisingly low levels of responsibility, authority and decision-making power in both enterprise efforts and projects, but are usually involved or consulted
  • The primary driver for EA, with 44%, is business-IT alignment; better strategic planning and better IT decision-making come in next at 11% each
  • Business-IT alignment is also one of the key benefits that companies are achieving with EA; when they look at the future desired benefits, this expands to include agility, better strategic planning, and better IT decision-making
  • 32% of organizations have no KPIs for measuring EA effectiveness, and another 34% have 1-5 KPIs
  • More thought needs to be given to EA metrics: 40% of the KPIs are perception-oriented (e.g., stakeholder satisfaction), 33% are value-oriented (e.g., cost reduction) and 26% are activity-oriented (e.g., number of artifacts created)
  • EA frameworks are not yet used in a standardized fashion: 27% are using a standard architecture framework in a standardized manner (this is from a survey of Open Group members!), 44% have selected a framework but its use is ad hoc, and 27% select and use frameworks on an ad hoc basis
  • TOGAF (8 and 9 combined) is the predominant framework, used in more than 50% of organizations, with Zachman coming in second at 24%
  • Drivers for architect certification are unclear, and many organizations don’t require it

There’s a ton of additional information here; the whole presentation is here (direct PDF link), although it may be taken down after the conference.

Martin Harris, Platform Computing, on benefits of cloud computing in the enterprise #ogtoronto

Martin Harris from Platform Computing presented what they’ve learned by implementing cloud computing within large enterprises; he doesn’t see cloud as new technology, but an evolution of what we’re already doing. I would tend to agree: the innovations are in the business models and impacts, not the technology itself.

He points out that large enterprises are starting with “private clouds” (i.e., on-premise cloud – is it really cloud if you own/host the servers, even if someone else manages it? or if you have exclusive use of the servers hosted elsewhere?), but that attitudes to public/shared cloud platforms are opening up since there are significant benefits when you start to look at sharing at least some infrastructure components. Consider, for example, development managers within a large organization being able to provision a virtual server on Amazon for a developer in a matter of minutes for less than the cost of a cappuccino per day, rather than going through a 6-8 week approval and purchasing process to get a physical server: each developer and test group could have their own virtual server for a fraction of the cost, time and hassle of an on-premise server, paid for only during the period in which it is required.
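The cappuccino comparison holds up with rough back-of-the-envelope math; the $0.10/hour figure below is an assumed 2009-era small-instance rate, not a quoted price:

```python
# Back-of-the-envelope: daily cost of an on-demand virtual server vs. a cappuccino.
# The $0.10/hour rate is an assumed 2009-era small-instance price, not a quote.
hourly_rate = 0.10
daily_cost = hourly_rate * 24
print(f"${daily_cost:.2f} per day")                               # $2.40 per day
print(f"${daily_cost * 14:.2f} for a two-week test environment")  # $33.60
```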

Typically, enterprise servers (and other resources) are greatly under-utilized: they’re sized to be greater than the maximum expected load even if that load occurs rarely, and IT departments are often reluctant to combine applications on a server since they’re not sure what interaction side effects might occur. Virtualization solves the latter problem, but making utilization more efficient is still a key cost issue. To make this work, whether in a private or public cloud, there needs to be some smart and automated resource allocation going on, driven by policies, application performance characteristics, and current and expected load.
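Here’s a caricature of the kind of policy-driven placement decision he’s describing (Python, with invented policies and numbers):

```python
# A caricature of policy-driven workload placement: given current utilization and a few
# policies, decide where a new workload should run. (Invented policies and numbers.)

utilization = {"vm-host-1": 0.85, "vm-host-2": 0.40, "cloud-pool": 0.20}  # current load

def place(workload: float, sensitive: bool) -> str:
    # Policy 1: data-sensitive workloads stay off the shared cloud pool
    candidates = {k: v for k, v in utilization.items()
                  if not (sensitive and k == "cloud-pool")}
    # Policy 2: never push a target above 80% expected utilization
    viable = {k: v for k, v in candidates.items() if v + workload <= 0.80}
    # Policy 3: of the viable targets, pick the least utilized
    return min(viable, key=viable.get) if viable else "queue for capacity"

print(place(0.30, sensitive=True))    # vm-host-2 (cloud pool excluded, host 1 too busy)
print(place(0.30, sensitive=False))   # cloud-pool (least utilized viable target)
print(place(0.50, sensitive=True))    # queue for capacity (no internal host has room)
```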

You don’t need to move everything in your company into the cloud; for example, you can have development teams use cloud-based virtual servers while keeping production servers on premise, or replace Exchange servers with Google Apps while keeping your financial applications in-house. He gave three key factors for determining an application’s suitability for the cloud (a rough scoring sketch follows the list):

  • Location – sensitivity to where the application runs
  • Workload – predictability and continuity of application load
  • Service level – severity and priority of service level agreements
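A rough way to operationalize those three factors is a simple scoring pass over an application portfolio; a sketch (Python, with invented applications and weights, not Platform Computing’s method):

```python
# Rough cloud-suitability screen over the three factors above: location sensitivity,
# workload predictability, and service-level severity. Invented apps and scoring.

apps = [
    # (name, location_sensitive, workload_predictable, sla_severity 1-5)
    ("Dev/test environments", False, False, 1),
    ("Email gateway",         True,  True,  4),
    ("Core banking ledger",   True,  True,  5),
]

def cloud_suitability(location_sensitive, workload_predictable, sla_severity):
    score = 0
    score += 0 if location_sensitive else 2         # insensitive to location: good fit
    score += 2 if not workload_predictable else 1   # bursty load benefits most from elasticity
    score += max(0, 3 - sla_severity)               # severe SLAs argue for staying in-house
    return score                                     # higher = better cloud candidate

for name, loc, pred, sla in apps:
    print(f"{name:<24} suitability={cloud_suitability(loc, pred, sla)}")
```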

Interestingly, he puts email gateways in the “not viable for cloud computing” category, but stated that this was specific to the Canadian financial services industry in which he works; I’m not sure that I agree with this, since there are highly secure outsourced email services available, although I also work primarily with Canadian financial services and find that they can be overly cautious sometimes.

He finished up with some case studies for cloud computing within enterprises: R&D at SAS, enterprise corporate cloud at JPMC, grid to cloud computing at Citi, and public cloud usage at Alatum telecom. There’s an obvious bias towards private cloud since that’s what his company provides (to the tune of 5M managed CPUs), but some good points here regardless of your cloud platform.