IT360: Matthew Glotzbach, Google Enterprise

I’m at IT360 for a couple of hours this morning, mostly to hear Matthew Glotzbach, director of product management for Google Enterprise. It’s a sad commentary on the culture of Canadian IT conferences that this session is entitled "Meet Matthew Glotzbach of Google" in the conference guide, as if he doesn’t need to actually talk about anything, just show up here in the frozen north — we need to work on that "we’re not worthy" attitude. 🙂

Google’s Enterprise division includes, as you might expect, search applications such as site search and dedicated search appliances, but also Google Apps, which many of us now use for hosted email, calendaring and document collaboration.

Glotzbach’s actual presentation title is "Head in the Clouds", referring to cloud computing, or more properly in this context, software as a service. He made an analogy between SaaS applications and electricity, referencing Nicholas Carr’s book The Big Switch, which describes the shift from each factory generating its own power to centralized generation of electricity sold as a service on the power grid. Just as it took a cultural shift to move from each company having its own power generation facilities (and a VP of electricity who was intent on defending his turf), we’re now undergoing a cultural shift from each company managing all of its own IT services to using best-of-breed services at a much lower cost over the internet.

He discussed five things that cloud computing has given us:

  1. Democratization of information, giving anyone the chance to have their say in some way, from Wikipedia to Twitter to blogs. This is dependent upon and facilitated by standards, particularly simple, easy-to-use standards like RSS; in fact, all public APIs for Google Apps are RSS-based (see the feed-reading sketch after this list). What IT can learn from this is to keep things simple, something that enterprise IT is not really known for. Cloud computing also allows a much freer exchange of information between people who don’t speak the same language, through real-time translation capabilities that aren’t feasible on a desktop platform: for example, add the en2zh bot ([email protected]) to your Google group chat so that you can have a text chat with one of you typing in English and the other in Mandarin Chinese.
  2. Economics of the new information supply chain. Cloud computing fundamentally changes the economics of enterprise IT: the massive scale of cloud-based storage (e.g., Google Apps, Amazon S3) and computing (e.g., Amazon EC2) drives down the cost so much that it’s almost ridiculous not to consider using some of that capacity for enterprise functionality. Of course, we’ve been seeing this manifested in consumer applications for a couple of years now, with practically unlimited storage offered in online email and photo storage applications, but more companies need to start making this part of their enterprise strategy to reduce costs on systems that are essential but not a competitive differentiator.
  3. Democratization of capabilities, allowing a developing nation to compete with a developed country, or a small business to compete with a major corporation, since they all have access to the same type of IT-related functionality through the cloud. In fact, those without legacy infrastructure are sometimes in a better position since they can start with a clean slate of new technology and become innovative collaborators. It’s also possible for any company, no matter how small, to get the necessary Googlejuice for a high ranking in search results if they have quality, targeted information on their site — as the cartoon says, on the internet no one knows you’re a dog.
  4. Consumer-driven innovation will set the pace, and will drive IT. The consumer market is much more Darwinian in nature: consumers have more choices, and are notoriously quick to switch to another vendor. Businesses tend not to do this because of the high costs involved in both the selection process and in switching between vendors; I’m not sure that Glotzbach is giving enough weight to the costs of switching corporate applications, since he seems to suggest that companies will adopt more of a consumer-like fickleness in their vendor relationships. As more companies adopt cloud computing, that will likely change as it becomes easier to switch providers.
  5. Barriers to adoption of cloud computing are falling away. The main challenges have been connectivity, security, offline access, reliability and user experience; all of these have either been fully addressed or are in the process of being addressed. My biggest issue is still connectivity/offline access (really two sides of the same coin), so I use a lot of desktop applications in order to work on planes, in hotels with flaky access, or at the Toronto convention centre where I am today. He had some interesting stats on security: 60% of corporate data resides on desktop and laptop computers, and 1 in 10 laptops is stolen within 12 months of purchase — the FBI lost 160 laptops in the last 44 months — to the point where corporate security professionals consider laptops one of the biggest security risks. In other words, the data is probably safer in the cloud than it is on corporate laptops.
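As an aside on just how low the barrier is with simple feed standards: consuming a feed-based API takes only a few lines of code. Here’s a minimal sketch using only Python’s standard library, with a hypothetical feed URL standing in for a real endpoint:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical feed URL; any RSS 2.0 endpoint works the same way.
FEED_URL = "https://example.com/api/updates.rss"

def read_feed(url):
    """Fetch an RSS feed and return (title, link) pairs for its items."""
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    # RSS 2.0 nests items under a channel; each item has a title and a link.
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
    ]

if __name__ == "__main__":
    for title, link in read_feed(FEED_URL):
        print(f"{title}: {link}")
```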

He finished up with a slide showing a list of well-known companies, all of which use Google Apps; alarmingly, I heard someone behind me say "just show me one Canadian company doing that". I’m not sure if that is an indication of the old-school nature of this conference’s attendees, or of Canadian IT businesspeople in general.

Glotzbach’s closing thoughts:

  • On-premise software is not going away
  • Most of the interesting innovation in software and technology over the next decade will be "in the cloud"
  • Market will have lots of competitors
  • Your new employees are the cloud generation, both welcoming and expecting that some big part of their social graph lives in the cloud
  • We (Google and other cloud providers) need to earn your trust

Great presentation, and well worth braving the pouring rain to come out this morning.

Lombardi analyst update

On April 2nd, Lombardi held their second analyst update by teleconference; I found the first one back in January to be informative, and obviously Lombardi had sufficient positive feedback from it to continue. Strangely enough, we were instructed to embargo information about the new Blueprint until today, although the Blueprint team blogged about it on the weekend.

Phil Gilbert started out with a high-level corporate update, including their growth — both in new hires and through their channel — and some of the new sales where they continue to compete successfully against larger vendors. However, most of the information was about their products and services.

Blueprint, their SaaS process discovery tool, now has 2,400 customer accounts (averaging 5 users per account) in 88 countries. A major update was just released, moving it on from simple business mapping to a more complete BPMN modeler. Later this year, they’ll be improving the wiki-style documentation capabilities in the process repository, and at the end of this year or early next year, they’ll be moving some of Teamworks’ performance analysis tools — process simulation and executive dashboards — into Blueprint. Phil tried to counter the fears of companies that don’t want to keep key business information in a hosted environment outside the firewall, but I know that until Blueprint can be hosted outside the US, whose privacy laws are not well-aligned with those of many other countries such as Canada, a lot of my customers wouldn’t even consider it. I asked Phil if they planned to host outside of the US, and he said “probably in 2009”, but indicated that it would be based on customer demand. The only other analyst on the call who seemed concerned about this — especially when it includes passing operational data back to the modeling environment for simulation — was Neil Ward-Dutton, also the only other person on the call who wasn’t US-based.

[Screenshot: Blueprint link to external subprocess]

We had a demo of the new version of Blueprint, which includes the ability to reuse processes across Blueprint projects as linked subprocesses: a significant architectural improvement. The new diagramming capabilities include in-line embedded subprocesses that can be expanded and collapsed in place (nice!), the ability to easily convert a single step to a subprocess, and backward looping. It also includes a Visio importer, although not in the free version. In other words, this has clearly moved beyond the “toy” label that many other vendors have been applying to Blueprint, and appears to be a fairly full-featured process modeling tool now.
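To make the architectural distinction concrete, here’s a rough sketch of how linked subprocesses differ from in-line embedded ones; this is my own illustration, not Lombardi’s actual data model. An embedded subprocess is owned by its parent diagram, while a linked subprocess references a process that lives independently in the repository and can be reused across projects:

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """A process definition living independently in the repository."""
    name: str
    steps: list = field(default_factory=list)

@dataclass
class EmbeddedSubprocess:
    """Owned by its parent diagram; expanded/collapsed in place."""
    name: str
    steps: list = field(default_factory=list)

@dataclass
class LinkedSubprocess:
    """A reference to a repository process, reusable across projects."""
    target: Process  # shared definition; edits show up in every referrer

# One shared definition, referenced from two different projects:
credit_check = Process("Credit check", steps=["pull report", "score"])
loan_origination = Process("Loan origination",
                           steps=["intake", LinkedSubprocess(credit_check)])
card_application = Process("Card application",
                           steps=["apply", LinkedSubprocess(credit_check)])
```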

[Screenshot: Blueprint inline expanded subprocess]

They’ll continue with the current Blueprint pricing model: a free version for a single user with a limited number of processes, so that people can try it out, then subscription pricing of $50/user/month for the professional version, which includes the Visio importer and Teamworks integration.

The other major announcement is about three packages of services that Lombardi will be offering, all of which involve working closely with the customer and using Blueprint to document the processes:

  • Process inventory, a 3-week engagement that includes a full inventory of level 1 “as-is” processes within an organization, identification of 30+ key business KPIs and SLAs, and a report of process improvement opportunities and roadmap. Expected price is $40k.
  • Process assessment, a 2-day engagement to assess a single process: ranking the problems and opportunities, level 1 and 2 “as-is” process maps, and identification of 5-10 key process KPIs and SLAs. Expected price is $15k.
  • Process analysis, a 2-week engagement that follows on from the process assessment with a full analysis of a process by adding detailed ranking of process problems and opportunities, level 1 and 2 “as-is” and “to-be” process maps, the identification of 10-20 key process KPIs and SLAs, and an estimated potential ROI analysis. Expected price is $40k.

The idea is that a customer would have the process inventory done to take a look at all of their business processes and select one or two critical ones, then have the assessment and analysis done for each of those critical processes.

These service packages are available now worldwide, and Lombardi is working to train their partners to provide these services, although they don’t yet have any partners who can deliver the entire set of packages.

BPM and Model-Driven Development, SaaS and the economy

It’s been a slow week for blogging due to a lot of billable client work, which takes precedence, and I’ve also missed several webinars that I wanted to attend. However, an article that I wrote for Intelligent Enterprise was picked up on TechWeb and published on the Yahoo! News Tech page (thanks to Bruce Williams of Software AG for tipping me off), which has resulted in quite a bit of traffic this week. I wrote the article at the end of the Gartner BPM summit last month, sifting through the wide variety of information that I saw there and distilling out some common themes: model-driven architecture/development, BPM and software-as-a-service, and the impact of the slowing economy on BPM.

The part on BPM and model-driven development was written prior to the Great BPMN Debate, but there’s an obvious tie-in, since BPMN is the modeling language that’s typically used for MDD in BPM. One of the webinars that I missed, but have just played back, is one from PegaSystems and OMG on Five Principles for Success with Model Driven Development (playback available, registration required), which touches on a number of the ideas of using (usually graphical) models to express artifacts across the entire software development lifecycle. Richard Soley of OMG and Setrag Khoshafian of Pega went through these principles in detail:

  • Directly capture objectives through executable models and avoid complex mappings between tools
  • Make a BPM suite the core layer of your MDD: model-driven development is achieved through BPM
  • Build and manage an enterprise repository of your modeling assets using a complete BPM suite
  • Leverage the platform and architecture pattern independence
  • Adopt a BPM suite methodology, center of excellence, best practices and continuous improvement lifecycle

The principles presented by Khoshafian were rather suspiciously aligned with Pega’s way of doing things — I have the feeling that Soley would have produced a somewhat different list of principles on his own — but the entire webinar is still worth watching, especially if you’re trying to haul your organization out of a waterfall development model or trying to understand how BPM and MDD interrelate.

To my new visitors arriving here because of the TechWeb syndication of the article: browse the archives by month or category (including the conference subcategories), or use the search feature to find topics of interest. I have several mostly-finished blog posts waiting for some final touches, so stay tuned for more content.

Gartner BPM: Pursuing Process Agility Goals Using SaaS

Michele Cantara and Ben Pring talked about the compatibility of BPM and SaaS, especially on the key issue of whether process agility can be achieved with SaaS delivery models, or whether that delivery model is only suitable for standardized applications and processes.

Pring’s area of expertise is SaaS, and the first part of the presentation covered SaaS trends for the next five years and the areas where it will have the most impact. He spent some time defining SaaS (which I won’t reproduce here), how it is confused with outsourcing and hosting, and its benefits. It is useful to consider, however, some of the reasons why companies are moving to SaaS, since these hold equally true for BPM as it becomes available in a SaaS environment:

  • Too much software and hardware that is purchased but never used.
  • The high cost of software implementation, particularly the cost of services required.
  • The hidden costs of IT that drive up the effective cost of on-premise systems.
  • The emergence of new technologies that enable SaaS, such as grid computing.

SaaS is almost always used to reduce costs, both the up-front costs of the systems themselves and the infrastructure required to support them. However, many organizations have security concerns (which may or may not be justified), and there is often a real or perceived reduction in functionality (particularly related to integration) compared to an on-premise system. SaaS is no longer seen as a crazy idea — Salesforce.com proved that organizations would put confidential business-critical data in a remote system — and many enterprise application vendors are looking for ways to capitalize on this growing market.

Cantara took over to talk about BPMS and SaaS, starting with the range of different service delivery models: from on-premise shared services (which she refers to as “not really SaaS” — you think?), to business process outsourcing (again, not SaaS, since the end-customer doesn’t provide the people in the process and/or it’s not purchased on a subscription basis), to SaaS delivery of process-based applications (e.g., Enkata, based on Lombardi TeamWorks, or L@W, based on Metastorm), to an actual SaaS BPMS platform (e.g., Appian Anywhere, or Fujitsu Interstage). In most cases, the process-based applications are fairly rigid for the end consumers; unlike the platforms, which expose pretty much the entire functionality of the equivalent on-premise BPMS, the applications may not allow any process changes, or only limited changes.

She said that she doesn’t see a push to using a BPMS platform via SaaS, but I think that’s a chicken-and-egg problem: Appian’s product isn’t even released yet, and Fujitsu’s seems to be under the radar, so customers either don’t even know that this capability exists or think (correctly) that it’s not available yet.

There are a number of architectural patterns for implementing a multi-tenant BPMS on a single SaaS server (a configuration sketch follows this list):

  • Each application has its own instance of the BPMS, and its own instance of a repository, but on a shared server. Gartner sees this as the dominant architecture in order to ensure process agility, although at a higher cost due to separate BPMS and repository instances for each application.
  • Each application has its own instance of the BPMS, but all instances share a partitioned repository on the shared server.
  • Each application shares a single instance of the BPMS and repository on the shared server (currently, no BPMS vendors support this model).
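The trade-off among these patterns is what gets provisioned per tenant versus what is shared. Here’s a minimal configuration sketch of the three models; this is my own illustration, not Gartner’s notation or any vendor’s actual architecture:

```python
from enum import Enum

class Tenancy(Enum):
    # Pattern 1: engine + repository per tenant (most isolation, most cost)
    INSTANCE_AND_REPO_PER_TENANT = 1
    # Pattern 2: engine per tenant, one shared partitioned repository
    INSTANCE_PER_TENANT_SHARED_REPO = 2
    # Pattern 3: one engine and one repository shared by all tenants
    FULLY_SHARED = 3

def provision(tenant_id: str, model: Tenancy) -> dict:
    """Return what the SaaS operator must stand up for a new tenant."""
    if model is Tenancy.INSTANCE_AND_REPO_PER_TENANT:
        return {"engine": f"bpms-{tenant_id}", "repo": f"repo-{tenant_id}"}
    if model is Tenancy.INSTANCE_PER_TENANT_SHARED_REPO:
        return {"engine": f"bpms-{tenant_id}",
                "repo": "shared", "partition": tenant_id}
    # Fully shared: every request must carry the tenant id for isolation
    return {"engine": "shared", "repo": "shared", "partition": tenant_id}

print(provision("acme", Tenancy.INSTANCE_PER_TENANT_SHARED_REPO))
```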

Cantara and Pring spoke together on what degree of process agility can be expected in a SaaS BPMS environment. They started by discussing — separately — how to determine whether SaaS is right for you and whether a BPMS is right for you, then looked at the process agility characteristics of a BPMS in the various service delivery environments. Looking just at the characteristics for BPMS platforms via SaaS, they indicate moderate operational cost, a high degree of possible customization and therefore high process agility, with a low to moderate cost associated with that agility. The problem, of course, is that the vendors just aren’t quite there yet.

Outsourcing the intranet

I’ve told a lot of people about Avenue A|Razorfish and their use of MediaWiki as their intranet platform (discussed here), and there are a lot of people who are downright uncomfortable with the idea of any sort of non-standard intranet platform: one that allows anyone in the company to edit any page on the intranet, or to contribute content to the home page via tagging and feeds.

Imagine, then, how freaked out those people would be to have Facebook as their intranet.

Andrew McAfee discusses a prototype Facebook application that he’s seen, which provides a secure enterprise overlay for Facebook, allowing easy but secure social networking within the organization. According to WorkLight, the creators of the application:

WorkBook combines all the capabilities of Facebook with all the controls of a corporate environment, including integration with existing enterprise security services and information sources. With WorkBook, employees can find and stay in touch with corporate colleagues, publish company-related news, create bookmarks to enterprise application data and securely share the bookmarks with authorized colleagues, update on status change and get general company news.

This sort of interaction is critical for any organization, and once you get past a certain size or start to spread geographically, you can’t do it with a bulletin board and a water cooler any more; however, many companies either build their own (usually badly) or use some of the emerging Enterprise 2.0 software to do something inside their firewall. As Facebook becomes more widely used for business purposes, however, why not leverage a platform that pretty much everyone under the age of 40 is already using (and a few of us over that age)? One company, Serena Software, is already doing this, although they appear to be using the naked Facebook platform, so likely aren’t putting any sensitive information on there, even in invitation-only groups.

Personally, I quite like the idea, although I’m a bit of an anarchist when it comes to corporate organizations.

There’s a lot that would have to happen for Facebook to become a company’s intranet (or even part of it), primarily sorting out issues of data ownership and export. With so many people already putting confidential data into Salesforce.com and other SaaS platforms, I think we can get past the philosophical question of whether to store corporate data outside the firewall; it just needs to be proven to be private, secure and exportable.

I also found an interesting post, coincidentally by an analyst at Serena, discussing how business mashups should be human process centric, which was in response to Keith Harrison-Broninski’s post on mashups and process. Although Facebook isn’t a mashup platform in any real sense, one thing that should be considered in using Facebook as a company’s intranet is how much process can — or should — be built into that. You really can’t do a full intranet without some sort of processes, and although WorkBook is targeted only at the social networking part of the intranet, it could easily become the preferred intranet user interface if it were adopted for that purpose.

Update: Facebook launched Friend Lists today, that is, the ability to group your contacts into different lists that can then be used for messaging and invitations. Although it doesn’t (yet) include the ability to assign different privacy settings to each list, it’s a big step towards more of a business platform. LinkedIn, you better get that IPO done soon…

LongJump revisited

I had an interview with Pankaj Malviya, CEO of LongJump, back in July, and another a few days ago to bring me up to date for this week’s launch of their SaaS platform and applications. There hasn’t been a lot of new functionality since then, but they’ve accelerated their launch: in July, they said that they’d be in an open beta by the end of the year (which I said was longish), and instead they’ve now done a full (non-beta) launch in a shorter time frame, so they must have felt the heat of the competition. They’ll start offering training in about a week, and will eventually have some videos available online so that you can preview the applications.

Their focus remains on the small and medium business market, with the idea of proving to those companies that LongJump is sufficiently reliable to trust with their business data. Since they’re part of Relationals, they have a track record of providing hosted CRM over the past couple of years, which certainly gives them a head start over many of the other SaaS providers.

Although LongJump is a platform, they’re focussed on applications, not the platform itself. The basic package contains two applications: OfficeSpace, a group calendaring and collaboration application to manage documents, projects and discussions; and Customer Manager, a starter CRM application that integrates with Outlook. There will be other CRM applications available as well, such as Deal Manager for creating and tracking quotes, and non-sales management applications such as the IT asset tracking one that I discussed in my first post about them.

[Screenshot: 360 Customer Manager app]

In fact, their press release lists 12 applications that they say they’re initially introducing, although it’s not clear if all 12 are available now.

I am, of course, interested in what else they’re doing with workflow after seeing it in the initial demo; they’re not releasing that until October, but they’re moving from a list-based set of states to a graphical process designer, and five applications will be released at the same time to take advantage of the workflow capabilities.

All of the applications will be free for the next three months to encourage people to try out LongJump, then will move to regular pricing. Although the regular pricing was given to me verbally, it wasn’t confirmed, so I won’t quote it here; suffice it to say that the price point may give them an advantage over Salesforce.com for CRM, although you’d have to dig in and do a full functionality review (which I haven’t) to know how comparable they really are.

You can read their full press release here.

Appian Anywhere update

I had a chance to hear an update on Appian Anywhere, Appian’s SaaS BPM offering, while at the Gartner BPM conference this week. I’m very interested in BPM and Enterprise 2.0, and SaaS BPM fits nicely into that intersection.

Although they originally planned for GA in Q307, it looks like it will be Q108 before they’re available to their planned SMB target audience, with credit card payments and the other functionality you’d expect from a SaaS offering. The reason appears to be that they’ve had so much interest from large corporate customers that they’re offering a large-client configuration first to a small number of select customers, diverting resources from the SMB functionality to focus on the big fish. It seems to me that this would tend to cannibalize their on-premise business, although I’m sure there are large organizations who will use this as a way to try before they buy.

They’re really trying to create an ecosystem for partners to develop applications on their platform. To prime the pump, they’ve created 30+ applications of their own that they’ll offer for free with the basic subscription; partners are developing other applications that will be offered on a subscription basis in the Appian Anywhere marketplace. Encouraging this sort of application development is a web-service-like integration capability (I don’t think that it’s exactly web services, but it’s similar in nature) for integrating Appian Anywhere applications with behind-the-firewall applications, which makes it much more useful as a BPM platform, since I can’t think of any customer of mine who wouldn’t have to integrate with one of their on-premise systems at some point.
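Appian hasn’t detailed the mechanism, but a common way for a hosted BPM platform to reach behind-the-firewall systems without opening inbound firewall holes is an on-premise agent that polls the cloud service for work over outbound HTTPS. Here’s a generic sketch of that pattern, with hypothetical endpoints and task format (this is not Appian’s actual API):

```python
import json
import time
import urllib.request

# Hypothetical endpoint; a real deployment would use the vendor's API.
CLOUD_QUEUE_URL = "https://cloud.example.com/tenant/acme/tasks"

def fetch_pending_tasks():
    """Poll the cloud queue over outbound HTTPS (no inbound hole needed)."""
    with urllib.request.urlopen(CLOUD_QUEUE_URL) as resp:
        return json.loads(resp.read())

def execute_on_premise(task):
    """Call the internal system named by the task; stubbed for the sketch."""
    print(f"invoking internal system {task['system']} for task {task['id']}")

def run_agent(poll_seconds=30):
    """Loop forever: pull work from the cloud, run it inside the firewall."""
    while True:
        for task in fetch_pending_tasks():
            execute_on_premise(task)
        time.sleep(poll_seconds)
```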

They’re also creating some video training to minimize the need for professional services to get you up and running on the platform.

There’s still a lot of resistance to SaaS for core business processes, although I think this could catch on for non-critical processes as a starting point. However, some pure Enterprise 2.0 vendors such as LongJump are going to creep into this space — from the other direction, and with a very different sort of offering — and pick up some of the market.

Why SaaS rocks

I hear a lot of opposition to software as a service from customers, ranging from an unformed mistrust of anything that crosses the firewall, to the feeling that anything that runs in a browser must be a toy, to a full-blown (and justified) concern of non-American companies about having their data stored on US-based servers where it is presumably accessible to US government agencies on demand. Keeping in mind that many of them are large, fairly conservative financial services organizations, I obviously have a long way to go in terms of convincing them otherwise, yet I still try.

Going back to Tim O’Reilly’s original treatise on Web 2.0, SaaS is baked right into the definition in two important ways:

  • the web as platform
  • the end of the software release cycle

The first of these is likely what sells most people initially: the idea that nothing needs to be installed at your own site, and all you need to do is pay $x per user per month (where x is about the cost of a couple of cappuccini at Starbucks) to have access to a fully-functional application. Think this is only for small businesses? Salesforce.com announced yesterday that Dell is increasing their Salesforce.com subscriptions from 15,000 to 40,000 users. There are all sorts of good reasons to do this — lower TCO, a small ongoing expense instead of a large capital expenditure, no need to bring new servers and applications into your data centre — but the somewhat unspoken reason is that it’s a way for the business to escape the tyranny of IT when it comes to purchasing applications. I’ve seen many cases of a smallish business unit within a large organization wanting to bring in new technology (BPM, BPA and BI are all ones that I’ve seen in this scenario), but IT adds on such an unduly large burden of corporate standards and application vetting that it kills the ROI, and the business goes back to their paper and spreadsheets. I’m not saying that IT shouldn’t be involved in these decisions, but when their time spent reviewing and “architecting” a packaged solution costs as much as the external costs, something’s wrong. If the business can get equivalent functionality from a SaaS offering with much less IT involvement and a small monthly bill rather than a large up-front capital expenditure, that’s going to look much more attractive.
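To see why the monthly-bill model looks so attractive, it’s worth running the numbers; all of the figures below are purely illustrative assumptions, not quotes from any vendor:

```python
# Illustrative three-year comparison for a 50-seat departmental application.
users, months = 50, 36

# SaaS: subscription only (say $50/user/month).
saas_total = 50 * users * months              # $90,000

# On-premise: license + servers up front, plus annual maintenance and
# an allocation of internal IT time for install, vetting and upgrades.
license_and_hardware = 100_000
annual_maintenance = 20_000                   # typical 20% of license/year
internal_it = 40_000                          # staff time over three years
onprem_total = license_and_hardware + 3 * annual_maintenance + internal_it

print(f"SaaS:       ${saas_total:,}")         # $90,000
print(f"On-premise: ${onprem_total:,}")       # $200,000
```

The exact numbers matter less than the shape: the SaaS figure is an operating expense that scales with actual use, while the on-premise figure is front-loaded and largely sunk before anyone knows whether the application will succeed.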

The second driver for SaaS from O’Reilly’s definition is where the benefits will really accrue in the future, although that’s likely unrecognized by many people. The idea that you don’t have massive software releases that take the system down for hours or days, but instead new features gradually introduced with little or no fanfare, means that there’s much less disruption to the users, and that they’ll be pleasantly surprised by new functionality. I had exactly this experience of pleasant surprise this morning, when I noticed that Google Reader, which I’ve been using for a couple of months now, has gone from listing the number of unread items as “100+” to showing the actual number, a feature that I sorely missed from Bloglines, since I almost always have more than 100 unread items and I really want to know how many more. They didn’t, to my knowledge, disrupt service in order to add this new functionality: it just appeared in my browser this morning (or maybe before; I’m not all that observant sometimes). I believe that there’s still a need for some major upgrades, such as a complete UI paradigm shift, but most of the enhancements to most business applications could be done incrementally and introduced as they’re ready, if the infrastructure is there to support it. That requires a browser-based application, if not actually SaaS, to avoid a download and install each time something changes, but it also requires a new mindset for development teams about agile development and release: something that is much more prevalent in the SaaS vendors than in corporate IT groups.
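The mechanics behind features that just appear are straightforward on a hosted platform: code for a new capability is deployed dark, then switched on per user or per cohort, with no downtime and no client install. Here’s a minimal feature-flag sketch of the idea, using the unread-count example (my own illustration, not how Google Reader is actually built):

```python
# Flags would normally live in a config service so they can be flipped
# without redeploying; a dict is enough for the sketch.
FLAGS = {"exact_unread_count": {"enabled_for": {"sandy", "beta-testers"}}}

def is_enabled(flag: str, user: str) -> bool:
    """Check whether a feature is switched on for this user or cohort."""
    cohorts = FLAGS.get(flag, {}).get("enabled_for", set())
    return user in cohorts

def unread_label(unread: int, user: str) -> str:
    if is_enabled("exact_unread_count", user):
        return str(unread)          # new behaviour, rolled out gradually
    return "100+" if unread > 100 else str(unread)  # old behaviour

print(unread_label(347, "sandy"))    # "347" — flag on for this user
print(unread_label(347, "someone"))  # "100+" — flag still off
```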

If you read my recent post on Enterprise 2.0 updates, or the original Dion Hinchcliffe post that inspired it, it starts to become clear that Enterprise 2.0 will depend to some degree on SaaS, at least in the short term: many IT organizations are just not ready to start installing this new breed of application on their own servers, and the business groups will look outside to get their problems solved. This will lead to a further commoditization of IT, since once the business is using SaaS successfully, that genie’s not going back into the bottle.

Update: Google Reader also added search capabilities in this set of incremental upgrades, which I didn’t even notice (enamoured as I was with the accurate unread item count) until I read it on Mashable.

BPM Think Tank Day 3: BPM vendor panel

Next up was a panel of BPM vendors: Phil Gilbert (Lombardi), Angel Diaz (IBM), Marco ten Vaanholt (SAP BPX), Burley Kawasaki (Microsoft), Scott Byrnes (Handysoft) and David Shaffer (Oracle). Derek Miers moderated, and posed a series of questions to the panelists rather than having the panelists do short presentations as we saw on previous panels — a much better panel format, in my opinion, and it even generated some conversation between the panelists directly.

Phil mixed it up right away by agreeing with the other panelists that standards are important (duh), but said that the first thing we need to standardize is the meaning of the term BPM. He also thinks that OSM (Organization Structure Metamodel) is going to be one of the most significant standards in the coming months, next to BPMN. In other words, people are going to start modelling their business, not just their processes. Marco added that there’s going to be increasing interest in processes that span organizations, and the standards that support them will become more important. They all seem to agree that business users don’t really care about standards explicitly, but that standards are an implicit part of the things that business types do want: portability of models and reusability of skills, for example.

One question was whether BPM offered via SaaS is reducing the barriers to entry for what is still a complex implementation. Burley feels that it will make a difference for departmental applications that just can’t justify the spend, and for cross-organizational choreographed processes where no single organization is “in charge”, but that there will still be a strong market for on-premise solutions, especially at the enterprise level. Angel added that standards are going to play a strong role here, since there’s likely to be a hybrid approach that uses both on-premise and on-demand systems within the same processes. Marco made the statement that some industries will “never, ever have software as a service”; it will be interesting to come back in a few years and see if he has to eat his words. Many organizations already have their data centres outsourced, including those that require advanced security, and I think that SaaS is just a small step beyond that from a security standpoint, even though it might be perceived as something entirely different. Scott thinks that a template-driven, simpler type of BPM functionality could be adopted by the SMB market. David pointed out that there’s a difference between having BPM embedded in a SaaS application and offering BPM directly as a SaaS, and feels that the latter is going to see much lower adoption. Phil stated that their Blueprint product is a tactic on their path to building a cloud capability, implying that we’ll see some hybrid on-premise/on-demand functionality from Lombardi in the future.

They then discussed mechanisms for supporting more collaboration and deeper embedding into a worker’s environment. Scott talked about being able to share, for example, information about the experts for a specific process, and being able to IM them directly. Marco talked about being able to do some collaborative Visio diagramming in a wiki-type plug-in (presumably on BPX); I’m not sure if this is something that they have with a browser design interface, or if it’s a place to upload Visio diagrams. He also pointed out that wikis, forums and IM are going to start to be built into applications for collaboration, further pushing the need for standards, since none of us want the BPM vendors to build their own wiki or IM software.

A question from the audience asked whether the vendors are getting inquiries from other vendors to embed/OEM their BPM functionality inside another product, whether SaaS or not. David, Burley and Angel spoke up that they are seeing this; not surprising since Oracle, Microsoft and IBM are all “platform” BPM vendors that tend to offer components rather than a more cohesive suite. Although I haven’t written up my notes from the BPEL roundtable yesterday, this is one of the areas where standards like BPEL will help to facilitate that type of integration. Phil added that they’re seeing this as well, but more from the standpoint of embedding more of their suite rather than just the engine.

Another question was on the distinction between modelling processes for business improvement purposes, and modelling processes as a visual coding/RAD tool. Phil responded that if you’re just buying BPM as a RAD tool, don’t buy it: stick with Java or .Net.

There was a discussion on the role of large vendors in standards, and how large vendors can sometimes take a standard off into their own organization and develop it 80% of the way and bring it back to the standards group: sometimes this works well, and sometimes it allows the vendor to just mould the standard to their own product agenda. We also came back around to the comment that Phil made at the beginning of the panel, where we need to define what BPM is in the market: the vendors all seemed to agree that they all have their own definition of BPM that coincidentally matches completely with their product functionality, and they all agreed on the buzzphrase “BPM is all about the business”. 🙂 The analysts also all have their own definitions, although they all seem to be congealing around the Gartner definition of BPM as a management practice, which doesn’t at all help the issue when the BPM vendors define it in terms of the technology capabilities. Bruce Silver lobbed a small incendiary device from the audience by stating that from the viewpoint of BPM as a management discipline, the vendor products are all exactly the same, and that customers may just see them as snake oil salesmen trying to sell the same thing in a different way. Not sure that we’re going to solve this one today.

It’s interesting watching a vendor panel like this, where the panelists are not allowed to do any product pitches, and where they’re all pretty smart guys: the discussion is a complex weave of philosophy, techno-geekery and thinly-veiled nudging towards their own specific agendas. This is part of what I like about the BPM Think Tank: there’s much more open collaboration between vendors than at other conferences, although there’s always a strong streak of friendly competition throughout the interactions.

Enterprise 2.0: Case Studies, Part I

Another panel, this one with moderator Brian Gillooly from Optimize, and including panelists Jordan Frank of Traction, Mark Mader of Smartsheet.com, Suresh Chandrasekaran of Denodo, Todd Berkowitz of NewsGator and David Carter of iUpload (which I understood was going to undergo a name change based on what their CEO John Bruce said last month at EnterpriseCamp in Toronto). Since these are all product companies, I expect that this might be a bit less compelling than the previous panel, which was primarily focused on two Enterprise 2.0 end-user organizations.

I’m not going to list the details of each vendor’s product; suffice it to say that Traction is an enterprise wiki platform (although there’s some blog-type functionality in there too), Smartsheet.com is a spreadsheet-style project management application offered as a hosted service, Denodo does enterprise data mashups for business intelligence applications (now that’s kind of interesting), NewsGator is a well-known web feed aggregator and reader, and iUpload is a hosted enterprise social software service.

Mader had some interesting comments on how making updates to a schedule completely transparent means that no one wants to be the last to add their part, since everyone will know who was last; this, however, is not unique to Enterprise 2.0 functionality, but has been a well-known characteristic of any collaboration environment since Og was carving pictures of his kills on the community cave wall.

There was an interesting question about who, within an organization, is driving the Enterprise 2.0 technology adoption: although the CxO might be writing the cheque, it’s often corporate communications who’s pushing for it. In the last session, we saw that in one organization, it was pushed by HR, but I suspect that’s unusual.