Lean application development strategies #BTF09

Dan Carmel from SpringCM gave the second keynote today, focused on his premise that SaaS = Lean. Although I would agree that many SaaS applications are Lean from a customer’s standpoint, that’s not true with all of them. Yes, using SaaS applications potentially has a much leaner footprint for a customer since there is no hardware or software on their own site, but you also need to consider the efforts to integrate with other systems, including on-premise systems. If the SaaS app (or any on-premise app, for that matter) can be reconfigured and integrated with a minimal effort, then things continue to look Lean; if it’s closed and requires custom kludges to integrate, then not so much.

He went through some good examples of Lean and extensible SaaS environments, such as Salesforce.com and Webex Connect, then pointed out some areas where on-premise systems can be a big challenge, but SaaS can provide sufficient business value even at lower volumes: ECM, for example (no surprise, since that’s what SpringCM sells), where high initial costs tend to keep all but large companies from deploying internally.

He then introduced Joe Graves of Stratus Technologies (a SpringCM customer) to talk about their journey with SaaS. They started using Salesforce.com about five years ago, deploying to 170 users worldwide in a matter of weeks from the start of the project. They use a number of applications integrated with Salesforce.com, and when they needed ECM for contract management, they selected SpringCM because it’s tightly integrated and because they were already sold on the value of SaaS. He outlined their benefits: lower upfront costs with no capital outlay, quicker implementation time, reduced operational issues such as storage management and disaster recovery, and the freedom for IT to focus higher up in the value chain rather than fussing with operational issues that don’t improve competitive differentiation. Although many people have concerns about customization and integration, security, and uptime of SaaS apps, Graves pointed out that there are ways to deal with all of these when you’re working with a properly built app, and that as long as it meets your functional and operational requirements, there isn’t a problem. [As I like to point out to people who use the highly publicized downtime of SaaS apps such as Salesforce.com and Gmail as justification for not using SaaS: your internal systems go down too, it’s just not publicized across the internet; in fact, the level of transparency that a SaaS provider has around their failures can increase customer commitment.]

Skelta BPM.NET

A while back, I had an email from Phil Larson, who I have known since he was at Appian; he spent the summer in India on an MBA internship. One thing led to another: he connected me up with Skelta, and I fostered India-Canada relations by getting up early for an online demo with Sanjay Shah and Arvind Agarwal of Skelta. They’ve published a corporate presentation if you want to take a look.

[Image: Application with BPM embedded]

They started by creating OEM workflow components that were embedded in other products, then built that out into a full-blown BPM suite, BPM.NET, while retaining a focus on componentized, embeddable pieces. They have significant penetration into the Indian business process outsourcing (BPO) market, both as the BPOs’ product offerings and for their own internal processes. Because of the OEM nature of their product, they also end up embedded in SaaS BPM implementations, although white-labeled so you may not know that they’re there. This is much more like the Fujitsu model – create BPM primarily for the OEM market, then launch as a direct BPMS product – and Skelta has leveraged this into business that includes OEM and full product sales as well as multi-tenanted hosted BPM. Even their browser-based process modeler can be embedded as a component in another application, not just the run-time UI components.

[Image: SharePoint activities built in]

As you might guess from the product name, they have a strong Microsoft bias: there is significant integration with SharePoint and other Microsoft products to capture and act on events generated from those systems, plus adapters for SAP, PeopleSoft and Microsoft Dynamics. The number of integration services that they provide is quite extensive, and is likely what has made their product attractive to the BPOs to use as a base upon which to build applications. These are available directly from the process modeler: there is a palette of SharePoint activities built in, as well as BizTalk activities and other integration activities.

The process modeler includes the ability to set up data points that will be used as KPIs in reporting. Queue filtering and prioritization can be based on multiple factors so that process participants see only the work that they should be able to access, served to them in the correct order. Process models are consumed directly by the process engine without translation.

[Image: Personal work list]

They include an AJAX forms designer for creating task user interfaces, including scripting to control contextual behavior: the view on the form (and therefore the visible/editable fields) can change depending on which step the process is at. The main processing paradigm has a user requesting the next item at a particular process step from a shared queue, which moves it to their personal work list for working with that AJAX form; escalation can be based on the time that a work item spends in a shared queue before selection, or in a user’s work list. The user’s view can have some monitoring graphs built in, since these are all components that can be assembled into a web application. The user can view the process map for the current instance, including a history of the process to date.
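Purely as an illustration of that work-distribution paradigm (this is not Skelta’s API; the class names, fields and escalation threshold here are all invented), a shared queue with get-next semantics and time-based escalation might be sketched like this:

```python
import heapq
import itertools
import time

ESCALATION_SECONDS = 4 * 60 * 60  # assumed threshold: escalate after 4 hours in the queue

class User:
    def __init__(self, name, roles):
        self.name = name
        self.roles = set(roles)
        self.work_list = []                  # the user's personal work list

    def can_access(self, work_item):
        return work_item["role"] in self.roles

class SharedQueue:
    """Shared work queue: users pull the next eligible item into their personal work list."""

    def __init__(self):
        self._heap = []                      # entries: (priority, enqueue_time, seq, work_item)
        self._seq = itertools.count()        # tie-breaker so heap entries always compare

    def add(self, work_item, priority=10):
        heapq.heappush(self._heap, (priority, time.time(), next(self._seq), work_item))

    def get_next(self, user):
        """Pop the highest-priority item this user is allowed to see, if any."""
        skipped, picked = [], None
        while self._heap:
            entry = heapq.heappop(self._heap)
            if user.can_access(entry[3]):
                picked = entry[3]
                break
            skipped.append(entry)
        for entry in skipped:                # restore items the user couldn't access
            heapq.heappush(self._heap, entry)
        if picked is not None:
            user.work_list.append(picked)
        return picked

    def overdue(self, now=None):
        """Items that have waited in the shared queue longer than the escalation threshold."""
        now = now or time.time()
        return [item for (_, t, _, item) in self._heap if now - t > ESCALATION_SECONDS]
```

A real implementation would persist the queue and surface escalations in a supervisor’s view, but the filter-then-serve-in-order behavior described above boils down to roughly this.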

There is not a full rules engine as part of the product: expressions and rules can be built into the forms and process definitions, or rules services can be integrated by using web services, calling BizTalk rules, or writing the rules in .NET.

There’s a big focus on components used to monitor processes and their SLAs: this is critical for the BPO market, since their compensation is typically based on meeting SLAs, and they likely have penalty clauses associated with missing them, so they need to monitor them closely. There are other needs of BPO vendors that Skelta is seeking to address: the ability to embed white-labeled BPM within other applications; multi-tenant software-as-a-service infrastructure; and, for the Indian BPO marketplace, the fact that Microsoft infrastructure is cheaper to build and maintain in India than a comparable Java infrastructure. In some ways, BPOs have needs similar to those of large enterprises, such as quickly-changing user requirements that can vary widely across the user base, and the need to simplify training and roll-out of the system.

Community participation in a hosted BPM system #BPM2009 #BPMS2’09

Rania Khalaf of IBM’s T.J. Watson Research Center presented a paper on enabling community participation for workflows through extensibility and sharing, specifically within a hosted BPM system.

She is focused on three areas of collaboration: extension activities (services), collaborative workflow modeling, and collaboration on executing workflow instances. There are two key aspects to this: the method and technical enablement, and the business and security considerations.

This is really about the community and how they participate: developers who create extension activities, process designers who create process models and include the extension activities, and participants in the executing workflows. For extension activities, they’re leveraging the Bite language and runtime, which uses REST-based interaction and allows developers to create extensions in their language of choice and publish them directly in a catalog that is available to process designers. Workflow designers can provide feedback on the extensions via comments. Essentially, this is a sort of collaborative SOA using REST instead of WS-*: developers create extensions and publish them, and designers can pull from a marketplace of extensions available in the hosted environment. Much lighter weight than most corporate SOA efforts, and undeniably more nimble.
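As a rough sketch of what a REST-exposed extension activity looks like from the developer’s side (this illustrates the general pattern, not the Bite language or IBM’s catalog; the endpoint path, payload fields and scoring logic are all invented), a developer could publish the URL of something like this for process designers to drop into their models:

```python
# A hypothetical extension activity exposed over REST, here using Flask for brevity;
# any language capable of serving HTTP would do, which is exactly the point of the approach.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/extensions/sentiment-check", methods=["POST"])
def sentiment_check():
    """Accept a JSON payload from the workflow runtime and return a value it can branch on."""
    payload = request.get_json(force=True)
    words = payload.get("text", "").lower().split()
    # Toy scoring logic; a real extension would call out to something genuinely useful.
    score = sum(w in {"good", "great", "happy"} for w in words) \
          - sum(w in {"bad", "poor", "unhappy"} for w in words)
    return jsonify({"score": score, "label": "positive" if score >= 0 else "negative"})

if __name__ == "__main__":
    app.run(port=8080)
```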

Process models can be shared, either for read-only or edit access, with others both within and outside your organization in order to gather feedback and improve the process. Once created, the URL for instantiating a process can be sent directly to end users to kick off their own processes as designed.

This is part of several inter-related IBM efforts, including the newly-released BPM BlueWorks and the still-internal Virtuoso Business Mashups, and seems to fall primarily under the LotusLive family. This is likely an indication of what we’ll see in BlueWorks in the future; they’ll be adding more social capabilities such as process improvement and an extensions marketplace, and addressing the business and security issues.

AlignSpace social BPM community

[Image: Process discovery participants]

A couple of months ago, Software AG launched AlignSpace, a social BPM community, and gave a webinar to explain what it’s about (replay here). AlignSpace is intended to be a vendor-neutral place where people can share ideas and collaborate on process discovery. Gartner estimates that over 40% of BPM project time is spent on process discovery, which is inherently a collaborative activity including everyone from process participants through developers and a BPM center of excellence, but there aren’t a lot of great tools out there to do this.

Software AG looked at a lot of social media sites to understand the key features that people want when working together online, and created a cloud-based platform where people can capture process requirements and model processes. This is intended to go beyond what Lombardi is already doing with Blueprint, where people can collaborate on creating a specific organization’s process models, and to create the potential for a marketplace as well as a collaboration platform.

[Image: AlignSpace process discovery view]

That being said, their initial process outline view has a lot in common with Blueprint, with stages/milestones comprising activities, which can also be visualized as a process map. You can import a model from Visio or XPDL for sharing in AlignSpace, then export it back out again. They also have a home page that shows what’s happening in processes in which you’re involved, and links to your contacts on other social sites.

The AlignSpace Marketplace is intended to be a place to find or document BPM resources, whether people or products/models, and to allow participants to rank those resources for others to see.

They’re still in a closed beta, but you can go there and sign up to participate. AlignSpace will be free to use, and although vendor-independent, it will be launched with a library and community of resources (some of which will, necessarily, have particular vendor expertise). There’s some lightweight Software AG branding on it, but it’s not their intention to block anyone from it: it’s really intended to be an open BPM community. I give them a lot of credit for this, since most of the other BPM communities launched by vendors are very much specific to their own products, which is going to stifle a lot of good discussion. What Software AG seems to recognize, even in these economic times, is that a rising tide floats all boats: if more people are interested in BPM, and AlignSpace helps to get them over the initial barriers of adoption, then all BPM vendors will benefit. Outside the BPM vendor-specific offerings, there are definitely other collaborative workspaces and social networks around, but few with a BPM focus.

[Image: AlignSpace home page]

Security is obviously going to be a serious consideration: even though most companies don’t put customer data in their process models (as opposed to the executing processes), the processes may represent intellectual property that provides them with a competitive advantage. They are looking at corporate-restricted versions, such that only users from within your domain can access it; the same sorts of security measures have already been put in place in Blueprint, and you can be sure that other cloud solutions are going to have to solve the same problem.

They have ambitions to move this beyond BPM and provide a collaborative space for discovery/requirements for other sorts of IT projects: a bit like ConceptShare, but with more of a focus on technology implementations rather than media and design.

I had a chance to talk to Miko Matsumura of Software AG around the time of the initial AlignSpace announcement; he admitted (which is what I love about Miko) that initially AlignSpace is a lot of big ideas but not much delivered. Like Google with its betas, the idea is to get something out there for people to use, then use their early feedback in order to decide what gets added in next. Although they’re trying to focus on “data format promiscuity” in order to allow customers from many BPMS vendors to participate, the process models are publish and subscribe rather than an interactive whiteboard model in their BPM sketchpad. The big focus is on creating fertile ground for the concept of collaborative process improvement, pulling together innovators from across multiple organizations and infecting companies with process innovation. Data formats are only one issue, as he points out: there is as much tribalism and heterogeneity in the people issues as in the systems that they use, and we need to get the tribes to disband, or at least come to a neutral territory.

From a social media standpoint, the AlignSpace presence doesn’t get full marks: their blog hasn’t been updated since June, their Twitter stream is mostly links to other BPM resources rather than any original material or updates on AlignSpace, and on Facebook they have both a group and a page, without a clear distinction between how each is used.

This all sounds great, but as yet, I haven’t seen the beta. Yes, that’s a hint.

BPM and Twitter (and other social destinations)

Professor Michael Rosemann of the BPM Research Group of Queensland University of Technology has published a short paper on BPM and Twitter on the ARIS Community site, where he lists three possible uses of Twitter with BPM:

  • Use Twitter to update you whenever there are changes to a process that you’re following. In this case, he’s talking about following processes, not process instances, so that you receive notifications for things such as changes to the process maps/roles, or new aggregate monitoring statistics.
  • Have a process follow you on Twitter (or an automated stream that knows when you’re scheduled to be unavailable), so that it knows when you’re away and assigns substitutes for your role.
  • Have a process instance tweet, either for milestone notification or with a link to the process instance, acting as a BPM inbox.

I’m not so sure about the second one, but the first and last are really just a matter of capturing the events as they occur, and sending them off to Twitter. Most BPMS can generate events for some or all of these activities, potentially available through an RSS feed or by posting them onto an ESB; as Rosemann points out in his article, there are a number of different ways to then get them onto Twitter.

My other half did a series of experiments several months ago on process events, including output to Twitter; he used a GPS as input (I wanted him to use a BPMS, but he was keen on the location events) and simple Python scripts to send the messages to Twitter. He tested out a number of other interfaces, including Coral8 for event stream processing, two blogging platforms, Gtalk, email, Google’s App Engine and Amazon’s Simple Queue Service; the idea is that with some simple event processing in the middle, you can take the relevant events from your BPMS (or any system that generates events) and send them pretty much wherever you want without a lot of customization.
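To make the “simple event processing in the middle” idea concrete, here’s a minimal sketch that polls a BPMS event feed and fans events out to different destinations; the feed URL and event types are placeholders, and post_tweet() and send_email() are stand-ins for whatever Twitter client and mail setup you actually use:

```python
import time
import feedparser  # parses RSS/Atom feeds

FEED_URL = "https://bpms.example.com/processes/claims/events.rss"  # placeholder feed
seen = set()

def post_tweet(text):
    """Stand-in for a call to your Twitter client library of choice."""
    print("TWEET:", text[:140])

def send_email(text):
    """Stand-in for an SMTP notification."""
    print("EMAIL:", text)

# Route event types to one or more destinations.
SINKS = {
    "milestone.reached": [post_tweet],
    "sla.breached": [post_tweet, send_email],
}

def poll_once():
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        uid = entry.get("id") or entry.get("link")
        if uid in seen:
            continue
        seen.add(uid)
        event_type = entry.get("category", "milestone.reached")
        message = f"{entry.get('title', 'process event')} {entry.get('link', '')}"
        for sink in SINKS.get(event_type, []):
            sink(message)

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(60)  # poll the event feed once a minute
```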

I think that using Twitter to monitor process instances is the most interesting concept of the three that Rosemann presents, since you can potentially send tweets to people inside or outside your organization about process milestones that interest them. If you’re nervous about using Twitter, either for security reasons or fear of the fail whale, you can run your own microblogging service using an open source platform such as laconi.ca or a commercial solution such as Socialtext’s Signals.

I’ll be attending the workshop on BPM and social software at the upcoming BPM research conference in Ulm, Germany; I haven’t seen the papers to be delivered at the workshop (or the rest of the conference), but I’d be very surprised if there isn’t a lot of discussion about how to incorporate Twitter and other social tools into our more enterprise-y BPM existence.

Lombardi Blueprint update

[Image: Home page]

I recently had a chance for an in-depth update on Lombardi’s Blueprint – a cloud-based process modeling tool – to see a number of the new features in the latest version. I haven’t had a chance to look at it in detail for over a year, and am impressed by the social networking tools that are built in now: huge advances in the short two years since Lombardi first launched Blueprint. The social networking tools make this more than just a Visio replacement: it’s a networking hub for people to collaborate on process discovery and design, complete with a home page that shows a feed of everything that has changed on processes that you are involved in. There’s also a place for you to bookmark your favorite processes so that you can easily jump to them or see who has modified them recently.

At a high level, creating processes hasn’t changed all that much: you can create a process using the outline view by just typing in a list of the main process activities or milestones; this creates the discovery map simultaneously on the screen, which then allows you to drag steps under the main milestone blocks to hierarchically indicate all the steps that make up that milestone. There have been a number of enhancements in specifying the details for each step, however: you can assign roles or specific people as the participant, business owner or expert for that step; document the business problems that occur at that step to allow for some process analysis at later stages; create documentation for that step; and attach any documents or files to make available as reference materials for this step. Once the details are specified, the discovery map view (with the outline on the left and the block view on the right) shows the participants aligned below each milestone, and clicking on a participant shows not only where it is used in this process, but where it is used in all other processes in the repository.

[Image: New step and gateway added - placement and validation automatic]

At this point, we haven’t yet seen a bit of BPMN or anything vaguely resembling a flowchart: just a list of the major activities and the steps to be done in each one, along with some details about each step. It would be pretty straightforward for most business users to learn how to use this notation to do an initial sketch of a process during discovery, even if they don’t move on to the BPMN view.

Switching to the process diagram view, we see the BPMN process map corresponding to the outline created in the discovery map view, and you can switch back and forth between them at any time. The milestones are shown as time bands, and if participants were identified for any of the steps, swimlanes are created corresponding to the participants. Each of the steps is placed in a simple sequential flow to start; you can then create gateways and any other elements directly in the process map in this view. Blueprint enforces the placement of each element and maintains a valid BPMN process map as you work.

There’s also a documentation view of the process, showing all of the documentation entered in the details for any step.

Not everyone will have access to Blueprint, however, so you can also generate a PowerPoint file with all of the process details, including analysis of problem areas identified in the step details; a PDF of the process map; a Word document containing the step documentation; an Excel spreadsheet containing the process data; and a BPDM or XPDL output of the process definition. It will also soon support BPMN 2.0 exports. Process maps can also be imported from Visio; Blueprint analyzes the Visio file to identify the process maps within it, then allows the user to select the mapping to use from the Visio shapes into Blueprint element types.
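Since XPDL is just XML, the exported definition is also easy to inspect outside of Blueprint with standard tooling; as a rough sketch (the file name is a placeholder, and only the generic XPDL Activity/Transition elements are assumed), something like this lists the activities and flows in an export:

```python
import xml.etree.ElementTree as ET

def local(tag):
    """Strip the XML namespace so we can match on local element names only."""
    return tag.rsplit("}", 1)[-1]

tree = ET.parse("claims_process.xpdl")   # placeholder file name
root = tree.getroot()

activities, transitions = [], []
for elem in root.iter():
    if local(elem.tag) == "Activity":
        activities.append(elem.get("Name") or elem.get("Id"))
    elif local(elem.tag) == "Transition":
        transitions.append((elem.get("From"), elem.get("To")))

print(f"{len(activities)} activities:")
for name in activities:
    print("  -", name)
print(f"{len(transitions)} transitions (by activity id):")
for src, dst in transitions:
    print(f"  {src} -> {dst}")
```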

[Image: Balloons on steps indicate comments from reviewers]

There are other shared process modeling environments that do many of the same things, but the place where Blueprint really shines is in collaboration. It’s a shared whiteboard concept, so that users in different locations can work together and see each other’s changes interactively without waiting for one person to check the final result into a repository: an idea that is going to take hold more with the advent of technologies such as Google Wave that raise the bar for expectations of interactive content sharing. This level of interactivity will undoubtedly reduce the need for face-to-face sessions: if multiple people can view and interact simultaneously on a process design, there probably needs to be less time spent in a room together doing this on a whiteboard. There’s (typed) chat functionality built right into the product, although most customers apparently still use conference calls while they are working together rather than the chat feature: hard to drag and drop things around on the process map while typing in chat at the same time, I suppose. Blueprint maintains a proper history of the changes to processes, and allows viewing of or reverting to previous versions.

Newly added is the ability to share processes in reviewer mode with a larger audience for review and feedback: users with review permissions (participants as opposed to authors) can view the entire process definition but can’t make modifications; they can, however, add comments on steps which are then visible to the other participants and authors. Like authors, reviewers can switch between discovery map, process diagram and documentation views, although their views are read-only, and add comments to steps in either of the first two views. Since Blueprint is hosted in the cloud, both authors and reviewers can be external to your company; however, user logins aren’t shared between Blueprint accounts but have to be created by each company in their account. It would be great if Blueprint provided authentication outside the context of each company’s account so that, for example, if I were participating in two projects with different clients who were both Blueprint customers and I was also a Blueprint customer, they wouldn’t both have to create a login for me, but could reuse my existing login. Something like this is being done by Freshbooks, an online time tracking and invoicing application, so that Freshbooks customers can interact with each other more easily. Blueprint is providing the ability to limit access in order to meet some security standards: access to a company’s account can be limited to their own network (by IP address), and external participants can be restricted to specific domains.

One issue that I have with Blueprint, and have been vocal about in the past, is the lack of a non-US hosting option. Many organizations, including all of my Canadian banking customers, will not host anything on US-based servers due to the differences in privacy laws; even though, arguably, Blueprint doesn’t contain any customer information since it’s just the process models, not the executable processes, most of them are pretty conservative. I know that many European organizations have the same issues, and I think that Lombardi needs to address this issue if they want to break into non-US markets in a significant way. Understandably, Lombardi has resisted allowing Blueprint to be installed inside corporate firewalls since they lose control of the upgrade cycle, but many companies will accept hosting within their own country (or group of countries, in the case of the EU) even if it’s not on their own gear.

Using a cloud-based solution for process modeling makes a lot of sense in many situations: nothing to install on your own systems and low-cost subscription pricing, plus the ability to collaborate with people outside your organization. However, as easy as it is to export from Blueprint into a BPMS, there’s still the issue of round-tripping if you’re trying to model mostly automated processes.

CloudCamp Toronto #cloudcamp #cloudcamptoronto

I attended my first unconference (these events are commonly referred to as “-camps”) almost 2-1/2 years ago, when I went to Mountain View for MashupCamp, and have attended several since then, including more MashupCamps, BarCamp, TransitCamp, ChangeCamp and DemoCamp. I like the unconference format: although I rarely propose and lead a session, I actively participate, and find that this sort of conversational and collaborative discussion provides a lot of value.

We started with an unpanel, a format that I’ve never seen before but really like: the MC has the audience shout out topics of interest, which he writes on a flipchart, then the four panelists each have 60 seconds to pick one of the topics and expand on it.

We then had the usual unconference format where people can step up and propose their own session, although two of the ten slots are prefilled: one with “what is cloud computing” and the other with “cloud computing business scenario workshop”; check the wiki page to see what we came up with for the other sessions, as well as (hopefully) everyone’s notes on the sessions linked from that page.

I’ll be sticking with the #cloudcamp hashtag after this since it leaves more room for chatter 🙂

Dana Gardner’s panel on cloud security #ogtoronto

After a quick meeting down the street, I made it back within a few minutes of the start of Dana Gardner’s panel on cloud security, including Glenn Brunette of Sun, Doug Howard of Perimeter eSecurity, Chris Hoff of Cisco, Richard Reiner of Enomaly and Tim Grant of NIST.

There was a big discussion about what should and shouldn’t be deployed to the cloud, echoing a number of the points made by Martin Harris this morning, but with a strong tendency not to put “mission critical” applications or data in the cloud due to the perceived risk; I think that these guys need to review some of the pain points that we gathered in the business scenario workshop, where at least one person said that their security increased when they moved to the cloud.

One of the key issues around cloud security is risk assessment: someone needs to do an objective comparison of on-premise versus cloud, because it’s not just a slam-dunk that on-premise is more secure than cloud, especially when there needs to be access by customers or partners. It’s hardly fair to hold cloud platforms to a higher level of security standards than on-premise systems: do a fair comparison, then look at the resulting costs and agility.

The panel seems pretty pessimistic about the potential for cloud platforms to outperform on-premise systems: I’m usually the ultra-conservative, risk-averse one in the room, but they’re making me feel like a cowboy. One of them used the example of Gmail – the free version, not the paid Google Apps – stating that it was still in beta (it’s not, as of a week ago) and that it might just disappear someday, and implying that you get what you pay for. No kidding, cheapskate: don’t expect to get enterprise-quality cloud environments for free. Pony up the $50/user/year for the paid version of Google Apps, however, and you get 99.9% availability (less than 9 hours of downtime per year): not sufficient for mission-critical applications, but likely sufficient for your office applications that it would replace.
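For reference, the arithmetic behind that downtime figure, using an 8,760-hour year: (1 − 0.999) × 8,760 hours = 8.76 hours of allowable downtime per year, which is the “less than 9 hours” quoted above.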

There were a lot of other discussion topics, ending with some interesting points on standards and best practices for interoperability, integration, portability, and even audit. You can catch the replay on Dana Gardner’s Briefings Direct in a couple of weeks.

That’s it for the Enterprise Architecture Practitioners Conference. Tonight is CloudCamp, and tomorrow the Security Practitioners Conference continues.

Cloud Computing Business Scenario Workshop #ogtoronto

I’ve never attended an Open Group event before, but apparently interactive customer requirements workshops are part of what they do. We’re doing a business scenario workshop to gather requirements for cloud computing, led by Terry Blevins of MITRE, who is also on the board of the Open Group. The goal is to capture real business requirements, with the desired outcome of having the vendors understand and respond to customers’ needs. The context presented for this is a call to action for cloud vendors to develop and adhere to open standards, and we were tasked with considering the following questions:

  • What are the pain points and ramifications of not addressing the pain points, relative to cloud computing?
  • What are the key processes that would take advantage of cloud computing?
  • What are the desired objectives of handling/addressing the pain points?
  • Who are the human actors and their roles?
  • What are the major computer actors and their roles?
  • What are the known needs that cloud computing must fulfill to help improve the processes?

We started with brainstorming on the pain points: in the context of cloud computing, given my critical use of Google Apps and Amazon S3, I found myself contributing as an end user. My key pain point (or it was, before I solved it) was the risk of losing data in a physical disaster such as fire/flood/theft and the need for offsite backup. There were a ton of other pain points:

  • Security – one person stated that their security is better since moving to cloud applications
  • Sizing and capacity
  • Flexibility in bundling and packaging their own products for selling
  • Complex development environments
  • Pressure to reduce capital investments
  • Operating costs
  • Ineffective support
  • Functional alignment to business needs
  • Need to align IT with business
  • Cost of physical space and energy (including green concerns)
  • Cost of failure discourages innovation
  • Compliance standards
  • Difficulties in governance and management
  • Incremental personnel costs as applications are added
  • Infrastructure startup cost barrier
  • Time to get solutions to market
  • Hard to separate concerns
  • Operational risk of using old equipment
  • Resource sharing across organizations
  • No geographic flexibility/location independence
  • Training cost and time
  • Loss of control by users provisioning cloud resources on their own
  • No access to new technology
  • Dependency on a few key individuals to maintain systems
  • Being stifled by in-house IT departments
  • Need to understand the technology in order to use it
  • Do-it-yourself in-house solutions
  • Lack of integrated, well-managed infrastructure
  • Overhead of compliance requirements, particularly in multinational context
  • Long time to market
  • Disposal costs of decommissioned systems
  • Cost of help desk
  • Legal/goodwill implications of security breaches
  • Can’t leverage latest ideas

This is a rough list thrown out by audience members, but certainly lots of pain here. This was consolidated into 9 categories:

  1. Resource optimization
  2. Cost
  3. Timeliness
  4. Business continuity (arguably, this is part of risk)
  5. Risk
  6. Security
  7. Inability to innovate
  8. Compliance
  9. Quality of IT

Things then got even more participatory: we all received 9 post-it notes, giving us 9 votes for these categories in order to collaboratively set priorities on them. We could cast all of our votes for one category, vote once for each category, or anything in between; this is intended to be from our own perspective, not our customers’ perspective or what we feel is best for enterprises in general. For me, the key issues are business continuity and security, so I cast three votes for each. Cost is also important, so I gave it two votes, and timeliness got one vote. I’ve seen this same voting technique used before, but never with so much ensuing confusion over what to do. 🙂 Blevins pointed out that it sometimes works better to hand out (fake) money, since people understand that they’re assigning value to the ideas if they’re dividing up the money between them.

The three winners were 1, 2, and 3 from the original list, which (no surprise) translate to better, cheaper and faster. The voting fell out as follows:

Category # of votes
Resource optimization 37
Cost 34
Timeliness 41
Business continuity 8
Risk 20
Security 29
Inability to innovate 29
Compliance 17
Quality of IT 16

Great session, and some really good input gathered.

Martin Harris, Platform Computing, on benefits of cloud computing in the enterprise #ogtoronto

Martin Harris from Platform Computing presented what they’ve learned by implementing cloud computing within large enterprises; he doesn’t see cloud as new technology, but an evolution of what we’re already doing. I would tend to agree: the innovations are in the business models and impacts, not the technology itself.

He points out that large enterprises are starting with “private clouds” (i.e., on-premise cloud – is it really cloud if you own/host the servers, even if someone else manages it? or if you have exclusive use of the servers hosted elsewhere?), but that attitudes to public/shared cloud platforms are opening up since there are significant benefits when you start to look at sharing at least some infrastructure components. Consider, for example, development managers within a large organization being able to provision a virtual server on Amazon for a developer in a matter of minutes for less than the cost of a cappuccino per day, rather than going through a 6-8 week approval and purchasing process to get a physical server: each developer and test group could have their own virtual server for a fraction of the cost, time and hassle of an on-premise server, paid for only during the period in which it is required.
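As a sketch of how little ceremony that provisioning involves today, using the boto3 AWS SDK for Python (the AMI ID, key pair name and tags are placeholders, and this assumes AWS credentials are already configured):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single small instance for a developer; all identifiers are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="dev-keypair",               # placeholder key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Owner", "Value": "dev-team-a"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned", instance_id)

# ...and when the developer or test group is done with it, stop paying for it:
ec2.terminate_instances(InstanceIds=[instance_id])
```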

Typically, enterprise servers (and other resources) are greatly under-utilized: they’re sized to handle more than the maximum expected load even if that load occurs rarely, and often IT departments are reluctant to combine applications on a server since they’re not sure of any interaction byproducts. Virtualization solves the latter problem, but making utilization more efficient is still a key cost issue. To make this work, whether in a private or public cloud, there needs to be some smart and automated resource allocation going on, driven by policies, application performance characteristics, and current and expected load.
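As a toy illustration of why that allocation matters (this is not Platform Computing’s product logic; the workload numbers and host capacity are invented), packing application peak loads onto shared hosts with a first-fit-decreasing heuristic shows how consolidation lifts utilization:

```python
# Pack application peak loads (in CPU cores) onto hosts of fixed capacity
# using the classic first-fit-decreasing bin-packing heuristic.
HOST_CAPACITY = 16  # cores per host (assumed)

def consolidate(peak_loads, capacity=HOST_CAPACITY):
    free = []        # remaining free capacity on each host currently in use
    for load in sorted(peak_loads, reverse=True):
        for i, room in enumerate(free):
            if load <= room:
                free[i] -= load
                break
        else:
            free.append(capacity - load)   # open a new host for this workload
    return len(free)

# Invented workloads: one app per server would need 12 hosts at ~23% average utilization.
peaks = [6, 2, 3, 5, 4, 2, 7, 3, 1, 4, 2, 5]
hosts_needed = consolidate(peaks)
utilization = sum(peaks) / (hosts_needed * HOST_CAPACITY)
print(f"{hosts_needed} hosts instead of {len(peaks)}; average utilization {utilization:.0%}")
```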

You don’t need to move everything in your company into the cloud; for example, you can have development teams use cloud-based virtual servers while keeping production servers on premise, or replace Exchange servers with Google Apps while keeping your financial applications in-house. There are three key factors for determining an application’s suitability to the cloud:

  • Location – sensitivity to where the application runs
  • Workload – predictability and continuity of application load
  • Service level – severity and priority of service level agreements

Interestingly, he puts email gateways in the “not viable for cloud computing” category, but stated that this was specific to the Canadian financial services industry in which he works; I’m not sure that I agree with this, since there are highly secure outsourced email services available, although I also work primarily with Canadian financial services and find that they can be overly cautious sometimes.

He finished up with some case studies for cloud computing within enterprises: R&D at SAS, enterprise corporate cloud at JPMC, grid to cloud computing at Citi, and public cloud usage at Alatum telecom. There’s an obvious bias towards private cloud since that’s what his company provides (to the tune of 5M managed CPUs), but some good points here regardless of your cloud platform.