Ndu Emuchay, IBM, on standards in cloud computing #ogtoronto

Today has a track devoted mostly to cloud computing, and we started with Ndu Emuchay of IBM discussing the cloud computing landscape and the importance of standards. IBM is pretty innovative in many areas of new technology – I’ve blogged in the past about their Enterprise 2.0 efforts, and just this morning I saw an article on what they’re doing with the internet of things, where they’re integrating sensors and real-time messaging, much of which would be cloud-based given the nature of the objects to which the sensors are attached.

He started with a list of both business and IT benefits for considering the cloud:

  • Cost savings
  • Employee and service mobility
  • Responsiveness and agility in new solutions
  • Allows IT to focus on their core competencies rather than running commodity infrastructure – as the discussion later in the presentation pointed out, this could result in reduced IT staff
  • Economies of scale
  • Flexibility of hybrid infrastructure spanning public and private platforms

From a business standpoint, users only care that systems are available when they need them, do what they want, and are secure; it doesn’t really matter if the servers are in-house or not, or if they own the software that they’re running.

Clouds can range from private, which are leased or owned by an enterprise, to community and public clouds; there’s also the concept of internal and external clouds, although I’m not sure that I agree that anything internal (on-premise) could actually be considered a cloud. The Jericho Forum (which appears to be part of the Open Group) publishes a paper describing their cloud cube model (direct PDF link):

There’s a big range of cloud-based services available now: people services (e.g., Amazon’s Mechanical Turk), business services (e.g., business process outsourcing), application services (e.g., Google Apps), platform services and infrastructure services (e.g., Amazon S3); it’s important to determine what level of services you want to include within your architecture, and the risks and benefits associated with each. This is a godsend for small enterprises like my one-person firm – I use Google Apps to host my email/calendar/contacts, synchronized to Outlook on my desktop, and use Amazon S3 for secure daily backup – but we’re starting to see larger organizations put tens of thousands of users on Google Apps to replace their Exchange servers, and greatly reduce their costs without compromising functionality or security.
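
For the curious, a scripted daily backup to S3 can be just a few lines. Here’s a minimal sketch in Python using the boto3 AWS library; the bucket name and file path are placeholder assumptions rather than my actual setup:

```python
# Minimal sketch of a scripted daily backup to Amazon S3 (illustrative only).
# Assumes AWS credentials are already configured and the bucket exists.
import datetime
import boto3

BUCKET = "my-backup-bucket"        # placeholder bucket name
SOURCE = "outlook-archive.pst"     # placeholder file to back up

def daily_backup():
    s3 = boto3.client("s3")
    # A date-stamped key keeps a rolling history of daily snapshots
    key = f"backups/{datetime.date.today().isoformat()}/{SOURCE}"
    s3.upload_file(SOURCE, BUCKET, key)
    print(f"Uploaded {SOURCE} to s3://{BUCKET}/{key}")

if __name__ == "__main__":
    daily_backup()
```

Run something like this from a daily scheduled task and you have an off-site copy of yesterday’s work.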

Emuchay presented a cloud computing taxonomy from a paper on cloud computing use cases (direct PDF link) that includes hosting, consumption and development as the three main categories of participants.

There’s a working group, organized using a Google Group, that developed this paper and taxonomy, so join in there if you feel that you can contribute to the efforts.

As he points out, many inhibitors to cloud adoption can be addressed through security, interoperability, integration and portability standards. Interoperability is the ability for loose coupling or data exchange between systems that appear as black boxes to each other; integration combines components or systems into an overall system; and portability considers the ease of moving components and data from one system to another, such as when switching cloud providers. These standards impact the five different cloud usage models: end user to cloud; enterprise to cloud to end user; enterprise to cloud to enterprise (interoperability); enterprise to cloud (integration); and enterprise to cloud (portability). He walked through the different types of standards required for each of these use cases, highlighting where there were accepted standards and some of the challenges still to be resolved. It’s clear that open standards play a critical role in cloud adoption.
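
To make the interoperability/portability distinction concrete, here’s a toy sketch of my own (not from the presentation): interoperability is two black-box systems exchanging individual records in an agreed format, while portability is bulk-exporting everything so that it can be loaded into a different provider.

```python
# Toy illustration (mine, not from the presentation) of interoperability
# versus portability between two cloud providers.
import json

class CloudProvider:
    def __init__(self, name):
        self.name = name
        self.records = {}

    # Interoperability: expose individual records in an agreed format (JSON),
    # so another system can consume them without knowing our internals.
    def export_record(self, record_id):
        return json.dumps(self.records[record_id])

    def import_record(self, record_id, payload):
        self.records[record_id] = json.loads(payload)

    # Portability: bulk-export everything so the customer can switch providers.
    def export_all(self):
        return json.dumps(self.records)

provider_a = CloudProvider("A")
provider_b = CloudProvider("B")
provider_a.records["order-1"] = {"customer": "Acme", "amount": 100}

# Two black boxes exchanging a record: interoperability
provider_b.import_record("order-1", provider_a.export_record("order-1"))

# Moving the entire data set to a new provider: portability
migration_dump = provider_a.export_all()
```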

The Open Group Conference

I was already planning to attend the Open Group Conference in Toronto next week to catch up on what’s happening in the enterprise architecture space, and now I’ve been invited to join Dana Gardner’s panel on Monday morning, which will also be recorded as a podcast. The panel is on architecture’s scope extending beyond the enterprise, bringing together elements of EA, SOA and cloud computing. Also on the panel will be John Gotze, president of the Association of Enterprise Architects, and Tim Westbrock from EAdirections. There are big aspects of business process to this, specifically when you start orchestrating processes that span the enterprise’s boundaries, and I hope that we’ll have time to explore that.

I’ll probably be at the conference each day to check out some of the other sessions, and I may stick around for some of the evening events, such as CloudCamp on Wednesday. If you’re there, drop by my session and say hi.

Gartner warns against shelfware-as-a-service

Gartner’s had a good webinar series lately, including one last month with Alexa Bona on software licensing and pricing (link to “roll your own webinar” download of slides in PDF and audio in mp3 separately), as part of their series on IT and the economy. As enterprises look to tighten their belts, software licenses are one place to do that, both on-premise and software-as-a-service, but you need to have flexible terms and conditions in your software contract in order to be able to negotiate a reduction in fees, particularly if there are high switching costs to move to another platform.

For on-premise enterprise software, keep in mind that you don’t own the software, you just have a license to use it. There’s no secondary market for enterprise software: you can’t sell off your Oracle or SAP licenses if you don’t need them any more. Even worse, in many cases, maintenance is from a single source: the original vendor. It’s not that easy to walk away from enterprise software, however, even if you do find a suitable replacement: you’ve probably spent 3-7 times the cost of the licenses on non-reusable external services (customization, training, ongoing services, maintenance), plus the time spent by internal resources and the commitment to build mindshare within the company to support the product. In many cases, changing vendors is not an option and, unfortunately, the vendors know that.

There are a lot of factors in software licensing that can come under dispute:

  • Oracle’s (and many other vendors’) definition of “named user” includes non-human processes that interact with the database, not just the people who are running applications. This became a huge issue a few years back when enterprise systems started being connected in some way to the internet: is the internet gateway process a single user, or do all potential users have to have individual licenses?
  • Virtualization and multi-core issues need to be addressed: hardware partitioning is often not adequately covered in license contracts, and you need to ensure that you’re paying for what you’re actually using, not the maximum potential capacity of the underlying hardware.
  • Make sure that you have the right to change the platform (including hardware or underlying database) without onerous fees.
  • Watch out for license minimums embedded within the contract, or cases where upgrading to a larger server will cost you more even if you don’t have any more users. Minimums are for small organizations that barely meet discounting thresholds, not large enterprises. Vendors should not be actively promoting shelfware by enforcing minimums.

Maintenance fees are also on the increase, since vendors are very reliant on the revenue generated from that in the face of decreasing software sales. Customers who have older, stable versions of a product and don’t generate a lot of support issues feel that costs should be decreasing, especially since many vendors are offshoring support so that it is cheaper for the vendor to supply. Of course, it’s not about what the maintenance actually costs, it’s about what the market will bear. Gartner suggests negotiating maintenance caps, the ability to reduce your maintenance if you use fewer licenses, and the right to switch to a cheaper maintenance offering. Document what you’re entitled to as part of your maintenance, rather than relying on a link to the vendor’s “current maintenance offering”, to ensure that they can’t decrease your benefits. Watch out for what is covered by maintenance upgrades: sometimes the vendor will release what they call a new product but what the customer sees as just a functional upgrade on their existing product. To get around that, you can try licensing the generic functionality rather than the specific products by name (e.g., stating “word processing functionality” rather than “Microsoft Word”).

When polled, 64% of the audience said that they have been approached by a vendor to do a software audit in the past 12 months. In some cases, vendors may be doing this in order to recover license fees if they have lost a sale to the customer and feel that they might find them out of compliance. Be sure to negotiate how the audit is conducted, who pays for it, and what price you pay for additional licenses if you are found to be out of compliance. Many software vendors are finding it a convenient time to conduct license audits in order to bolster revenues, and for the first time ever, I’ve heard radio advertisements urging people to blow the whistle on their employer if they are aware of pirated or misused software licenses, which is a sort of crowd-sourced software audit.

Software as a service licensing has its pitfalls as well, and they’re quite different from on-premise pricing issues. Many SaaS contracts have minimums or do not allow for reductions in volumes, leading to shelfware-as-a-service – consider it a new business model for wasting your money on software license fees. There is aggressive discounting going on right now – Gartner is seeing some deals at $70/user/month for enterprise-class software – but there may be much higher fees on renewal (when you’re hooked). There are also some unrecognized fees in SaaS contracts: storage (if beyond some minimum that they provide as part of the service, which is often charged at a rate far above cloud storage on the open market), additional costs for a development and test sandbox, premium maintenance that is more aligned with typical on-premise enterprise software support, non-corporate use (e.g., customers/partners accessing the system), integration, and termination fees including the right to get your data out of their system. Make sure that you know what the SaaS provider’s privacy/security policies are, especially related to the location of the data storage. Most of the Canadian financial services firms that I deal with, for example, will not allow their data to be stored in the United States, and many will not allow it to be stored outside Canada.
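
To put some numbers on shelfware-as-a-service, here’s a back-of-the-envelope sketch; the $70/user/month figure is the Gartner example quoted above, but the seat counts and renewal uplift are assumptions purely for illustration:

```python
# Hypothetical numbers for illustration only; apart from the $70/user/month
# figure quoted above, none of these come from the webinar.
seat_minimum = 1000          # contractual minimum seats (assumed)
active_users = 600           # seats actually in use (assumed)
annual_rate = 70 * 12        # $70/user/month, annualized
renewal_uplift = 1.25        # assumed 25% increase at renewal

year1_cost = seat_minimum * annual_rate
year1_shelfware = (seat_minimum - active_users) * annual_rate
year2_cost = seat_minimum * annual_rate * renewal_uplift

print(f"Year 1 cost:            ${year1_cost:,.0f}")
print(f"Year 1 shelfware waste: ${year1_shelfware:,.0f}")   # 400 unused but paid-for seats
print(f"Year 2 cost at renewal: ${year2_cost:,.0f}")
```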

Furthermore, SaaS vendor SLAs will only cover their uptime, not your connectivity to them, so there are different points of failure than you would have for on-premise software. You can hardly blame the vendor if your internet connectivity fails, but you need to consider all of the points of failure and establish appropriate SLAs for them.
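
The arithmetic is worth spelling out: independent points of failure multiply, so a vendor SLA that looks respectable on its own can combine with your own connectivity into noticeably more downtime. A quick sketch, with all availability figures assumed for illustration:

```python
# Assumed availability figures for illustration; substitute your own SLAs.
vendor_uptime = 0.999     # SaaS vendor's SLA: 99.9%
isp_uptime = 0.995        # your internet connectivity: 99.5%
lan_uptime = 0.9995       # your internal network: 99.95%

# If the failure modes are independent, end-to-end availability is the
# product of the individual availabilities.
end_to_end = vendor_uptime * isp_uptime * lan_uptime
hours_down_per_year = (1 - end_to_end) * 24 * 365

print(f"End-to-end availability: {end_to_end:.2%}")                  # about 99.35%
print(f"Expected downtime/year:  {hours_down_per_year:.0f} hours")   # about 57 hours
```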

Bona finished up with some very funny (but true) reinterpretations of clauses in vendor contracts, for example:

  • What the vendor means: “We are going to send you software that you are not licensed to use. If you use this software in error, you will be out of compliance with this contract, and woe to you if we audit.”
  • What they actually wrote: “Licensee shall not access or use any portion of the software not expressly licensed and paid for by the licensee.”
  • What you probably want to change it to: “Licensor shall not ship any software to licensee that licensee is not authorized to use.”

The summary of all this is that it’s not a task for amateurs. Unless you want to just let the vendor have their way with you on a large contract, you should consider engaging professionals to help out with this. Gartner provides this type of service, of course, but there are also high-quality independents (mostly former analysts) such as Vinnie Mirchandani.

Appian Analyst Update

Matt Calkins and Samir Gulati from Appian were on a short analyst call today to give us a summary of 2008 and a preview of 2009. They had some big changes this year: expanding their marketing efforts, launching their SaaS offering with customers like Starbucks and Manulife, and expanding geographically into Europe and Asia. Much of this is fuelled by the $10M in VC funding that they took on in 2008, the first external funding in their 10-year history; based on the timing of the funding, I’m guessing that they got a much better valuation than if it had happened a few months later.

Their sales numbers are counter-cyclical, with their Q4 in 2008 being their biggest closing quarter ever. Although they built their company on US federal government business, they’ve broadened out to a number of commercial clients in financial services, manufacturing and other verticals. They’ve also seen some milestones with systems already in place, such as a total of 1B logins to the system that they have at the US Army. I think that they’re just getting started with BPM there, so this is likely mostly on their portal platform; still, that’s a lot of logins.

Appian’s big push in 2008 was their SaaS platform, Appian Anywhere, which is forming an estimated 30% of their new business. Currently, it’s still only available to selected large customers in a dedicated and fault-tolerant hosting environment: in other words, not a multi-tenanted SaaS solution that you can just sign up for online at any time, but more like just having your BPM servers sitting in someone else’s location. They’ll be releasing a lower-end offering hosted on Amazon EC2 in early February, with 30-day free trials for small businesses, where each customer is hosted on their own instance. This is the same sort of configuration approach adopted by Intalio, as discussed in the comments on a post that I wrote for the BPM Think Tank; there are many who would say that this is not multi-tenancy, it’s virtualization, and it doesn’t provide the level of scalability (both up and down) that’s needed for true SaaS. The subscription cost for Appian Anywhere on EC2 will be $35/user/month.
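
The multi-tenancy versus virtualization argument is easier to see in code. Here’s a toy contrast of my own (not Appian’s actual architecture) between a shared runtime partitioned by tenant and a dedicated instance provisioned per customer:

```python
# Toy contrast (mine, not Appian's actual architecture) between a true
# multi-tenant runtime and the instance-per-customer model described above.

class MultiTenantBPMS:
    """One shared runtime; customers are partitioned by tenant_id."""
    def __init__(self):
        self.process_models = {}              # (tenant_id, name) -> model

    def deploy(self, tenant_id, name, model):
        self.process_models[(tenant_id, name)] = model


class DedicatedInstanceBPMS:
    """Single-tenant runtime: one of these runs per customer (e.g., per EC2 instance)."""
    def __init__(self, customer):
        self.customer = customer
        self.process_models = {}              # name -> model


def provision_dedicated(customer, fleet):
    # Scaling up or down means starting or retiring whole instances, which is
    # why many would call this virtualization rather than multi-tenancy.
    fleet[customer] = DedicatedInstanceBPMS(customer)
    return fleet[customer]

# Shared model: one runtime, many tenants
shared = MultiTenantBPMS()
shared.deploy("acme", "expense-approval", "<process definition>")

# Dedicated model: one runtime per customer
fleet = {}
provision_dedicated("acme", fleet)
```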

Regardless of the platform – on-premise Appian Enterprise, the high-end hosted Appian Anywhere, or the EC2-hosted Appian Anywhere – it’s the same code base, so there shouldn’t be a problem moving from one to another as the need arises. This also means that they’re not trying to split their engineering team in three directions to serve three markets: it’s all the same code.

At the same time as the EC2 launch, Appian will be launching an application framework to allow for faster development and deployment of vertical applications, and an application marketplace to provide applications developed by Appian or partners on a subscription basis. Some initial applications will be free, with others coming in at around $10/user/month on top of the base subscription price.

Appian’s focus is on making BPM frictionless: allowing it to be purchased and deployed within an organization without all the usual hoopla that it takes for on-premise systems. I think that there could be some challenges ahead, however, with the lack of multi-tenancy causing additional administrative overhead and setting limits on how big (or small) you can get with your Appian Anywhere system and still have it be cost-effective all around.

Ultimus: Me on the Future of BPM

Here’s the presentation that I just delivered at the Ultimus user conference:

This is the first time that I’ve given this presentation in this format, but since it’s a combination of so much that I’m already writing and talking about, it flowed pretty well. I’m writing a paper for publication right now on Enterprise 2.0 and BPM, which will expand on some of these ideas.

Pegasystems’ Platform as a Service

Last week, Pegasystems announced their BPM “Platform as a Service” (PaaS) offering, and I had a chance prior to that to chat with Kerim Akgonul, VP of product management. My first thought on reading the phrase “internal cloud” was that they were just hitching a ride on the cloud bandwagon — check out James Governor’s 15 Ways to Tell It’s Not Cloud Computing for all the reasons that this isn’t cloud computing — but there are definite cloud-like capabilities to what they’re offering from the viewpoint of the individual projects, although not to the organization as a whole.

A problem that I see in many large customer organizations is that BPM projects end up being departmental, and even if the vendor manages to sell enterprise-wide licensing, it often ends up only deployed in one department. In many cases, this is because departments don’t want to share BPMS instances, and it’s just too hard to go through the effort of deploying another separate server and instance for every project. There’s also the need for multiple instances for development and testing, usually hand-installed at some cost. This is exacerbated in large organizations with a variety of geographically-dispersed business units, where they may have several different independent BPM projects on the go at the same time, and have difficulty in applying successes in one area to another.

Pega’s PaaS offering is a platform on top of SmartBPM that allows corporate IT to offer independent BPMS instances to business units: true multi-tenanted instances for individual projects, but sharing the same infrastructure. Effectively, they’re turning corporate IT into an internal cloud BPM vendor, with the individual projects as customers of that offering. This gives some of the benefits of an externally-hosted SaaS BPMS — shared infrastructure, fast provisioning — while alleviating (perceived) security concerns of using external services for operational systems. You still have to buy and maintain the servers, and have in-house Pega system administration knowledge (which would not be necessary in a true SaaS environment); the real benefits come to the individual projects.

Allowing each project/department to deploy their own virtual BPMS will definitely speed some projects along, but it feeds some of the departmental-solution habits that we’re trying to get away from. Encouraging the continuation of the silo culture makes it difficult to get to a true process-centric view of your business and tackle those end-to-end processes. However, Pega is allowing for some cross-instance sharing of artifacts using a new registry/repository to encourage reusability: different instances can share services, processes and rules directly, or make a copy into their local instance.
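
Here’s a rough sketch of how that reference-or-copy pattern might look; this is my own illustration of the concept, not Pega’s implementation:

```python
# Illustrative sketch of a cross-instance registry/repository; my own
# interpretation of the concept, not Pega's implementation.

class SharedRegistry:
    def __init__(self):
        self.artifacts = {}                    # name -> (version, definition)

    def publish(self, name, version, definition):
        self.artifacts[name] = (version, definition)


class BPMSInstance:
    def __init__(self, name, registry):
        self.name = name
        self.registry = registry
        self.local_artifacts = {}

    def reference(self, artifact_name):
        # Use the shared artifact directly: future updates are picked up.
        return self.registry.artifacts[artifact_name]

    def copy_locally(self, artifact_name):
        # Snapshot into this instance: frozen at the current version.
        self.local_artifacts[artifact_name] = self.registry.artifacts[artifact_name]


registry = SharedRegistry()
registry.publish("credit-check-service", "1.0", "<service definition>")

claims = BPMSInstance("claims-department", registry)
claims.copy_locally("credit-check-service")   # local copy, won't change underneath them
```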

There are some nice features in terms of synchronizing upgrades of instances: instances can opt out of an upgrade for a period of time to allow for custom application synchronization, although there would be limitations on how long they could delay the upgrade. This capability is critical since many of the instances are likely to have custom applications built within them, and they’ll need time to test and make adjustments to those applications for a new version of the underlying platform.
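
Conceptually, the opt-out is just a deadline check before the platform upgrade is pushed to an instance; a minimal sketch, where the maximum deferral period is my assumption:

```python
# Minimal sketch of an upgrade opt-out window; the maximum deferral is an assumption.
from datetime import date, timedelta

MAX_DEFERRAL = timedelta(days=90)      # assumed limit on how long an upgrade can be delayed

def should_upgrade(opted_out_on, today=None):
    """Apply the platform upgrade unless the instance is still inside its
    opt-out window for testing its custom applications."""
    today = today or date.today()
    if opted_out_on is None:
        return True                    # never opted out: upgrade now
    return today - opted_out_on > MAX_DEFERRAL

# An instance that opted out 30 days ago is still deferred
print(should_upgrade(date.today() - timedelta(days=30)))   # False
```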

At this point, there are no billing capabilities or other modules that would allow this to be used as a multi-customer SaaS offering, but next year they’ll be releasing a series of applications that may be offered internally or externally, for example by business process outsourcing firms.

The first version of PaaS is planned for release before the end of the year, although it wasn’t yet in beta when I had my discussion with them three weeks ago. I expect to see more at PegaWorld in just over a week.

I’m viewing trends like this as a long-overdue maturation of the industry: vendors are starting to realize that customers are having serious problems with rolling out large BPM programs across their organization, and starting to offer products and advice on how to accomplish that.

BPM Think Tank: On-Demand BPM Vendor Panel

George Barlow of Appian, Jim Rudden of Lombardi, Bino Jos of Intalio and Derek Miers of BPM Focus discussed the intersection of BPM and SaaS. Appian has Appian Anywhere, Lombardi has Blueprint, and Derek has opinions about everything, but I’m not sure why a process expert from Intalio ended up on this panel: he appeared to have little understanding of where Intalio fits from a SaaS standpoint (which is more a matter of being able to use cloud-based servers such as EC2 than a multi-tenanted hosted offering), and he talked about everything except SaaS.

Paul Vincent of TIBCO popped up with a question about whether everyone would soon be doing BPM in the cloud; the panel responded that that’s not really the target, but rather to lower the entry costs for SMBs or departments within larger enterprises, or to provide some of the inter-enterprise collaborative functionality.

There was a discussion about the need for standards in a SaaS offering (I think triggered by Fred Cummins); BPMN is seen as important, although that’s really independent of on-demand versus on-premise.

BPM is perceived as being good in a down market, when companies are trying to cut headcount and become more efficient; SaaS is also good in a down market since there’s little or no capital outlay. In some cases, where the full BPMS is available both on-demand and on-premise (as with Appian), SaaS is the gateway drug to on-premise licensing.

George and Jim are the big contributors here: they’re both pretty smart guys, they have some major points of disagreement to liven up the conversation, and they’re the only two on the panel who actually have on-demand BPM services.

As always, it’s difficult to blog about a panel since it’s a bit disjointed, plus I had to do a wrapup of my roundtable sessions immediately following and had to comb through my notes with one brain while listening to the panel with the other. Oh, wait, maybe that’s the problem…

Gartner BPM: SaaS and BPM

Having bugged out of the Agile BPM session, I arrived late to Michele Cantara’s discussion of whether software as a service is a viable option for process improvement projects. She covered off some of the same material as the SaaS and BPM session in February, but there was some new information as well. I won’t repeat the material from that session on the topic of BPM SaaS delivery and multi-tenancy models, so you might want to go back to that post and check that out as background for this. Go ahead, I’ll wait.

One interesting bit, based on 2007 estimates, segmented the BPM SaaS adopters into four categories:

  • Pragmatists, forming 49% of the market, who are replacing departmental on-premise applications but don’t have an enterprise-wide scope.
  • Beginners, 40% of the market, who are replacing low-end software tools with simple utility applications. These are often small or medium businesses who don’t want to grow an IT department.
  • Masters, 10% of the market, who are weaving SaaS applications into their enterprise-wide application portfolio.
  • Visionaries, a mere 1%, who are actively replacing on-premise applications with SaaS wherever possible.

She showed these plotted out on two axes: comprehensive strategy versus IT ability to execute. Pragmatists are low on comprehensive strategy but high on IT ability to execute; beginners are low on both, masters are high on both, and visionaries are high on strategy but low on ability to execute (since they don’t need to have internal IT skills). I really like this segmentation, since I think that it provides a good way to characterize SaaS customers in general, not just SaaS BPM customers.

She went through the list of current BPMS SaaS vendors, split out into business process modeling, process-based applications, and BPMS as a service. The SaaS modeling vendors are Lombardi, Metastorm and Appian; BPMS as a service is offered by Appian and Fujitsu. Process-based applications are typically offered by companies who have taken a commercial BPMS and built a specific vertical application on top of it; the underlying BPMS is not necessarily offered as SaaS directly, and in most cases, the BPMS vendor is not the one providing the service (with the exception of DST, whose BPM product grew from their own mutual fund back-office application), since most of them are not in the vertical applications market. There are going to be more entrants into all of these spaces in the near future, as well as changes to the multi-tenancy models offered by the vendors; you’ll want to keep your eye on what’s happening in this space if you’re considering BPM via SaaS, and start to consider how you’re going to handle process governance when your business processes aren’t running on your own systems any more.

She also showed a chart of different SaaS services types (BPO, application outsourcing, hosting, traditional ASP/SaaS model, process-based applications using BPMS/SaaS, BPMS as a platform, BPMS as SaaS-enabling platform) mapped against operating characteristics (operational cost, degree of customization, process agility, cost of process agility, number of suppliers): for example, BPMS as a platform has high process agility, whereas a traditional ASP/SaaS application that likely doesn’t include a BPMS has low process agility.

There was a list of do’s and don’ts of using SaaS for process agility, such as using BPMS via SaaS for pilot projects in order to make the business case for on-premise systems. Of course, if you do that, you might just find that you like the SaaS model well enough to stick with it for the long run.

Enterprise 2.0: How Cloud Computing is Shaping Enterprise Technology

The Enterprise 2.0 conference kicked off yesterday with some workshops, but I just flew in this morning and am at my first session of the day (although not *the* first session of the day), a keynote by Google’s Rishi Chandra on cloud computing. The same key message (buy lots of Google cloud computing 🙂 ) but some complementary points to the presentation that I saw by Matthew Glotzbach at IT360 a couple of months ago; considering that they’re both in product marketing for Google Enterprise, that’s not surprising.

The focus of the presentation is cloud computing, and how the trends in consumer applications are starting to bleed over into the enterprise world. Chandra discussed several trends in cloud computing for the enterprise:

  1. Simplicity wins, and applications that provide targeted functionality well are more likely to succeed than monolithic all-singing, all-dancing applications.
  2. Rise of the power collaborator, as the important things being done in many organizations shift from being individual efforts to team efforts. A key team member will be the well-connected collaborator who can leverage the skills of others to help the entire team succeed.
  3. Economics of IT are changing, and many companies are looking at combinations of on-premise software and software as a service.
  4. Barriers to the adoption of cloud computing for the enterprise are falling away: connectivity, user experience, reliability, offline access and security are all valid issues, but are all being addressed. He made some great points here (with which I totally agree) about the illusion of security of your existing internal systems, and how better security can be achieved by putting corporate data in the cloud for remote access instead of having it on an unsecured laptop that can be stolen. You already trust a variety of outsourced vendors with your data — payroll, legal, IT — so how is outsourcing your data and application infrastructure any different? In fact, it used to be quite common (in the days when everyone had a mainframe) for third parties such as IBM and many long-dead competitors to host many companies’ data centers.

I’m totally on board with cloud computing: my email is hosted on Google Apps, and I back up daily to an encrypted Amazon S3 service. Although I would not be keen to have my laptop stolen, I had a moment a couple of days ago when my laptop spontaneously died, and I felt absolutely no panic about it. It turned out to be only a temporary coma, but I knew that I could recreate my working environment on a new machine in pretty short order.
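
For anyone wondering what “encrypted backup to S3” means in practice, here’s a minimal sketch of client-side encryption before upload, in Python with the cryptography and boto3 libraries; the key handling is simplified and the names are placeholders, not my actual setup:

```python
# Minimal sketch of client-side encryption before uploading a backup to S3.
# Key handling is simplified; bucket and file names are placeholders.
import boto3
from cryptography.fernet import Fernet

KEY_FILE = "backup.key"       # a symmetric key created once with Fernet.generate_key()
BUCKET = "my-backup-bucket"   # placeholder bucket name

def encrypt_and_upload(path):
    with open(KEY_FILE, "rb") as f:
        cipher = Fernet(f.read())
    with open(path, "rb") as f:
        ciphertext = cipher.encrypt(f.read())
    # The data is encrypted before it leaves the machine, so the cloud provider
    # (or anyone who gets at the stored object) sees only ciphertext.
    boto3.client("s3").put_object(Bucket=BUCKET, Key=path + ".enc", Body=ciphertext)
```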

By the way, yes, there’s free wifi.

SAPPHIRE: Léo Apotheker Keynote

This afternoon, we heard from the other co-CEO of SAP, Léo Apotheker (I think that I forgot to mention Henning Kagermann’s title of co-CEO in my post this morning), starting with some fairly general comments on the nature of competitive differentiation in business, and the power of collaboration.

He was joined on stage by a couple of customers:

  • Procter & Gamble, who discussed how they’ve used SAP as essential infrastructure for innovation and growth in the consumer products industry; P&G has become well-known in social media circles for crowdsourcing their R&D after being featured in the book Wikinomics.
  • Harley Davidson, who are using SAP to provide the information necessary to enrich their customers’ experience, further deepening the relationship and increasing loyalty in order to increase revenues.
  • Coca-Cola, through a really funny sequence using voice-activated ordering, picking and delivery, ending with a real person from their warehouse delivering two Cokes to the stage, then giving a short (and rehearsed) bit on how it helps his day-to-day work. A Coca-Cola executive was there to help serve the drinks. And, oh yeah, talk about how SAP and a service-oriented architecture have improved their warehouse operations.

All of this is about business processes, and I don’t mean just the narrow view of process that we have in BPM: this is about the business processes embedded within every business application, from legacy ERP to agile composite applications.

Apotheker talked explicitly about NetWeaver BPM and what it brings in terms of process agility; this product announcement is obviously a big deal for SAP, since it’s mentioned in both of the CEO keynotes today. He talked about the power of picking and choosing components from the core SAP applications and assembling them into composite applications for new functionality and increased agility, while maintaining the power of the underlying ERP functionality.