Conference Season Begins

It’s been a quiet few months for conferences, but things are heating up again for the next four weeks. Here’s my upcoming schedule:

  • This week, I’m at PegaWorld in Philadelphia, including chairing a Wednesday morning workshop on case management
  • The week of May 3rd, IBM Impact in Las Vegas
  • The week of May 10th, TIBCO’s TUCON in Las Vegas
  • The week of May 17th, SAP SAPPHIRE in Orlando

If you’re attending any of these events, be sure to look me up. I’ll be blogging from all of them. You can find these, and many other BPM-related events, at the BPM Events calendar. If you have an event to add to the calendar, just let me know.

Disclosure: each of the vendors pays my travel expenses for me to attend their user conference. They do not, however, have any editorial control over what I write while at the conference.

Not Dead, Just Resting

Thanks to all who have noticed my lack of online participation this week and wondered if all is okay. I was downed by a nasty headcold/flu/plague, and have spent most of the week either coughing or sleeping. I appear to be on the mend, since I’ve been able to remain vertical for a couple of hours to return urgent emails and voice mails, but don’t expect me back to usual form until next week since I plan to be horizontal again really soon.

Unfortunately, this meant that I missed out on Monday’s Toronto Girl Geek Dinner, Tuesday’s CloudCamp, dinner last night with the good folks from the Business Rules Forum who were in town this week, and the talk about social BPM that I was scheduled to give today on Craig Cmehil’s Friday Morning Report 24-hour marathon. 🙁

Blogging and a Knowledge Scarcity Model Don’t Mix

I recently swapped around my office space, and found some old (paper) notebooks that I browsed through before shredding. One of them, from 2006, contained a page of notes that I jotted down about why consultants don’t blog:

  • Not enough time
  • Too few “outside” interests (aside from proprietary customer work), hence nothing interesting to blog about
  • Knowledge scarcity model

Taking these points one at a time, I consider the time that I put into blogging as part of my marketing budget (if I had such a thing), since most of my new business comes to me because someone reads my blog and thinks that I have something to add to their projects. I also consider it a valuable part of my business social networking, providing a way for me to connect with others to exchange opinions or just build those weak ties that come in handy when you least expect it. It’s also, in some cases, a public version of my note-taking – especially the conference posts – that I often refer back to when I know that I wrote about something, but can’t recall when or where. For all of these reasons, the time that I have spent blogging has paid for itself many times over in revenue, relationships and research.

On the second point, there’s always something that you can write about that has nothing to do with the proprietary work that you do for your customers, but would serve you in the ways that I mention above: generic technology or management research, readings that you’ve done, product reviews, links to and comments on interesting posts in your field, even topics that aren’t directly related to your work but that you find interesting. If your customer has a great case study that they’d like to brag about, you can include that, too. The important part is to write about what you’re passionate about, those little things that make you love your job.

The most common reason that I hear from consultants on why they don’t blog – and what clearly drives the mostly content-free blogs that we see from the big analyst firms – is that they’re afraid of people stealing their ideas, especially if they think that they can sell those ideas. To quote my friend Sacha:

If the thought of people stealing your ideas is what’s stopping you from thinking out loud on a blog, you’re not alone. It’s a valid fear. If you’re afraid of your ideas being stolen, your mindset is probably that of knowledge scarcity – that you should hoard knowledge because that’s what gives you power. That makes sense to a lot of people.

Another mindset is that of knowledge abundance. There are plenty of ideas to go around, and sharing knowledge gives you power. That makes sense to a lot of people, too.

She goes on to discuss the value of openly sharing ideas: practice in communicating those ideas, questions and challenges that help you refine those ideas, and the networking and reputation effects.

What I see happening with people who operate in a knowledge scarcity model is that they tend to blog about things on which they don’t place much value, since they don’t want to “give away” their really good stuff. However, this results in a negative feedback loop: your audience knows that you’re feeding them crap, and they tune out. In other words, if you think of knowledge as scarce, then your blog is not going to be very successful. It doesn’t mean that your business won’t be, but failure to share makes for an unpalatable blog.

I tend to operate in a knowledge abundance model: there are a lot of people out there with great ideas too, so let’s share them and make something even better. More importantly, however, my knowledge isn’t some limited bit of intellectual property that I invented in the past and have to hoard only for my paying customers: I generate new knowledge every day, every time that I talk to someone or read something interesting or have a new experience. In other words, although I might be judged on the basis of what I’ve done in the past, the real value that I bring is the ability to create new knowledge going forward.

OpenApps at DemoCamp 26: Easy Functional Extensions of Your Website

Last night was DemoCamp 26: a forum for people to show off whatever they have to demo. It’s a good way to network with the tech community in Toronto, and to see new applications, often before they even launch. We started with a keynote from April Dunford on lean marketing, a topic that many of the startups attending were taking to heart, before we moved on to the demos.

The first demo was by Krispy, a long-standing member of the TorCamp community (and a serial presenter at DemoCamp), announcing the beta launch of his latest project, OpenApps. OpenApps is a platform for you to add applications to your website without writing code, or even having to edit your site at all. Their app store has both free and paid (monthly subscription) apps, and an open model to allow developers to add their own apps.

In the beta release, there is a fairly small set of apps – Bing and Yahoo search, comparison shopping with Shopping.com, news/topic search with Daylife.com, Oodle Classifieds, and Twitter user/topic searches – but they list a number of other potential apps such as zip/postal code lookups. The best way to understand it, however, is to try it out. I did this in about 3 minutes (although, to be fair, I had seen Krispy demo this last night).

First, go to OpenApps and sign up for a free account, which requires only your email address and a password:

OpenApps home screen

Next, go to the App Store to see the list of available apps; you can click on one to see more details about it, such as the Daylife app for adding news and other relevant content based on keywords:

OpenApps app store

OpenApps app details for Daylife

Click on “Try this app” to fill in the parameters for adding this app to your site. As shown below, I changed the “App Name” field to “BPM news”, specified the keyword “BPM” and – this is the magic part – told it to adopt the look and feel of www.column2.com:

OpenApps configuring Daylife for Column 2

I clicked on “Preview”, and this is what was generated:

OpenApps Daylife page preview on Column 2

Click through for the full-size image, and you’ll see that OpenApps has replicated the look and feel of Column 2 (which is built on WordPress), including my header, sidebar and styles, and inserted the Daylife page based on the BPM keyword into the content portion of the page. Very, very cool.

To publish this, I filled out the subdomain section of the application parameters, and changed my DNS to create a CNAME record, “news”, pointing that subdomain to OpenApps; if you’ve never done this before, they provide links to step-by-step instructions for some of the popular domain registrars. That took me another minute. I also had to sign up for a Daylife API key so that I could publish their content; there went another minute. The result was a page, news.column2.com, publishing Daylife content based on the BPM keyword, that appears to be part of my site, all done in about 5 minutes (plus wait time for DNS propagation and Daylife API account approval).
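
If you haven’t created a CNAME record before, here’s roughly what the entry looks like in a BIND-style zone file; the OpenApps target hostname below is a made-up placeholder, since they supply the actual value in their instructions:

```
; Illustrative zone file entry for column2.com. The target hostname is
; hypothetical; OpenApps provides the real one during setup.
; "news" becomes news.column2.com, served by OpenApps.
news    IN    CNAME    apps.openapps.example.com.
```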

The page, of course, isn’t really on my site. The CNAME means that requests for that URL are actually served by OpenApps, which renders the presentation, with the app content section pulled from Daylife. Still, it’s all pretty seamless. The magic part of OpenApps, where they make it look like my existing site, is their technology to auto-locate the injection point for content for a number of common web content management systems, including WordPress, Joomla, Typepad, Movable Type, Drupal and Blogger; there are also methods for a more complex manual detection process based on the page elements if you’re not using one of those platforms, or are using them in a non-standard way.
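
OpenApps hasn’t published how their auto-detection works, so here’s just a conceptual sketch of the general technique: fetch a page, fingerprint the CMS, and locate a likely content container. The fingerprints and CSS selectors are illustrative guesses, not OpenApps’ actual rules:

```python
# Conceptual sketch of injection-point detection; not OpenApps' code.
# Fingerprints and selectors are guesses at common default theme markup.
import requests
from bs4 import BeautifulSoup

CMS_SELECTORS = {
    "wp-content": "#content",         # WordPress default themes
    "/media/system/js/": "#main",     # Joomla
    "Drupal.settings": "#content",    # Drupal
}

def find_injection_point(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for fingerprint, selector in CMS_SELECTORS.items():
        if fingerprint in html and soup.select_one(selector) is not None:
            return selector  # app content gets swapped into this element
    return None  # fall back to manual detection based on page elements

print(find_injection_point("http://www.column2.com"))
```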

OpenApps is free to use; Krispy told us that developers who create apps for their app store retain 70% of the revenue, with OpenApps monetizing through the 30% commission charged to app developers.

Wiki Tuesday: Wikis at RBC

Yes, I know that today is not Tuesday, but this is about our previous Toronto Wiki Tuesday, a monthly meetup where we have a presentation on wikis, lift a few pints and hobnob with wiki specialists such as Martin Cleaver (who also organizes Wiki Tuesdays) and Mike Dover (responsible for the research behind Wikinomics, and co-author of the upcoming Wikibrands).

The presenter at this session was Tim Hanlon from Royal Bank of Canada, talking about RBC’s journey and future plans with wikis inside the bank. He’s part of the Applied Innovation team, who are tasked with identifying and applying emerging technologies: a sort of center of excellence for technology innovation. They’re within the Technology and Operations area, but half of their team is technical and half business, with a collection of skills that is very similar to a typical CoE.

First, a short lesson on Canadian banks: we only have five, they’ve been around since before Canada was a country, they don’t take a lot of risks, they own all aspects of our financial life, and RBC is the biggest. As you can imagine, wiki adoption in a large, conservative enterprise that’s been around for 150 years poses a few cultural challenges. I did a near full-time contract in part of RBC in 2003-4, and spent some time pushing the use of SharePoint (the only thing available internally) to get people collaborating, so I can appreciate some of the struggles that they’re having with the same culture and slightly newer technology.

Hanlon outlined their progress to date:

  • In 2006, wiki functionality was enabled in SharePoint, but there was no widespread education about its use or benefits, hence no widespread adoption. Around this time, however, people started to accept Wikipedia as a reference source, which validated the use of wikis in general: in other words, it wasn’t that the RBC users didn’t believe that wikis could work, they just saw themselves as consumers rather than contributors. From my experience, this is a classic large enterprise attitude: many people don’t have the time or the inclination to take that first step to being a wiki author.
  • Over 2007-8, the SharePoint wiki attempts in RBC were seen, in general, as a failure. This was blamed on the technology, although that was only part of the problem. During that time period, a Confluence pilot was started.
  • In 2009, Confluence was rolled out as part of the corporate standard technology infrastructure: what the RBC architecture review committee that I used to sit on there referred to as the “bricks”, which are products that any department can select and implement without special approval.
  • Currently, they have 66 active instances of Confluence (paid version), mostly focused around projects. There are 1,000-1,500 total creators and participants across these instances, with a potential viewing audience of 10,000 internal users. Users are primarily at head office, with very little branch involvement. There is no external access to the wikis.

We spent some time discussing the issues that they had with SharePoint. Some of these were cultural, due to the document-oriented nature of SharePoint: the standard wiki edit functionality looked very much like editing a Word document, and people were conditioned not to edit other people’s “completed” documents. Instead, they would email their changes to the wiki team, which really defeats the purpose of a wiki. Confluence has a very different user interface for editing, which allowed people to disassociate the idea of editing a wiki page from editing someone else’s document. As Hanlon pointed out, they could have customized SharePoint to make it look and feel more like Confluence, potentially avoiding these problems, but they didn’t even know that this was the problem until they moved from SharePoint to Confluence.

Since RBC’s Confluence use is mostly for projects, it’s used for things such as meeting agendas and minutes. Last year, I wrote a post based on some research that I was doing with a few clients and around the web, covering the topic of when to use ECM versus a wiki: opinions ranged from “use a wiki only if there are no security requirements and you need to maximize accessibility, an ECM for everything else” to “use wikis for internal content by default, and ECM only for specific cases”. It would be interesting to see if RBC’s experiences with splitting content between ECM and wikis match what I’ve seen in other organizations. RBC is using SharePoint as their main document repository, and provides some easy functionality for linking to these documents from Confluence, but project documents are still often imported directly into Confluence. They’ve also found wikis useful for event calendars.

Adoption within the enterprise continues to be a struggle: Hanlon pointed out that they’re trying to evangelize about wikis to people who just got good intranet search, and so may not be ready for the idea of user-generated content. However, they’ve had a lot of success with tagging within Confluence, since many people don’t equate tagging with creating content. He said that they’re getting fairly good participation, but that the slow uptake on content creation is happening at typical “bank speed”. They’re still working on defining where wikis are appropriate, and how to educate the masses on what they are and how to use them: it’s important that wikis are not seen as just some extra thing that people need to do, but as a way of making their jobs easier. Although Hanlon and many others in the room saw the use of wikis as “creative” and therefore something that people will just want to do, I’ve spent too many hours with back-office workers to think that they’re going to be swayed by the argument that this lets them be more creative in their work. They’re finding that most people will still comment rather than edit, or will email responses or requests for changes to the wiki team.

There’s a lot that they haven’t done yet: they haven’t yet started to work with plugins, such as ones that I’ve seen for content approval workflow. There is no federated search that includes the Confluence content, although they do have enterprise search that covers their intranet, shared drives and SharePoint content. They have an internal community of practice (Hanlon’s group), but no real training to roll out across the potential user base. There’s no single sign-on, and about half of the Confluence instances require a login. There’s little customization in terms of appearance, and they’re considering more of an RBC-specific skinning, although this could backfire if people then become confused over what’s part of the (uneditable) intranet versus a wiki. They’re still working out what to put in a wiki versus SharePoint (which is their document repository). In other words, lots of work for the RBC wiki team in the future.

So what does RBC need to do in order to push forward with wikis? They are starting to see value from wikis in content creation, but accept that Word and Outlook still rule in that area; in my experience, most content creation isn’t even making it into an ECM system (if one exists), but is on network drives and in email attachments. They need to balance the corporate need for control with the bottom-up wiki usage and folksonomy, likely by involving some wiki gardeners to help curate the content without controlling it. They need to push past the regulatory and information security mindset that exists within financial institutions, since regulations and privacy don’t apply to much of the information that would likely be stored on internal wikis. They need to understand the long-term value proposition for updating wiki content: what’s in it both for the individual and the company. Lastly, they need to make the long-time RBC employees see themselves as content creators, not just content consumers.

At the end of it all, a very informative talk on the struggles and successes with wiki adoption within a large enterprise. And, at the end of the night, I somehow ended up volunteering to speak at the June event, on using wikis with ECM and BPM. Hope to see you there.

Global 360’s analystView Simulation

It’s the first day of Gartner’s BPM summit in Las Vegas, so expect to see a lot of vendor announcements this week. Some, like Global 360, had the decency to arrange for a briefing for me last week so that I could write something about their announcement in advance; others, who shall remain nameless, waited until Friday afternoon to send me a content-free advance press release that is not worth repeating (although some undoubtedly will). You know who you are.

Global 360’s products are tightly tied to Microsoft platforms, and they use Visio as their business-facing process modeler. Although I have a philosophical problem with not using a shared model approach to process modeling, I’m also realistic enough to know that Visio for process modeling is not going away any time soon. There are some nice things in Visio 2010 that allow them to move Visio from a passive role to a more active one, although only in the Premium edition.

BPMN in Visio 2010

Visio 2010 Premium supports BPMN 1.2 with a stencil, and has a number of enhancements that make it easier to draw process diagrams, such as the ability to easily add connected shapes, auto-alignment, reflowing the process map to vertical or horizontal alignment, and converting a selected group of elements to a subprocess. In short, Visio is becoming a competent BPMN modeler, although the key will be how quickly they release BPMN 2.0 support now that the standard is with the finalization task force and can be expected to be released pretty much as it is currently defined. For those of you who aren’t that familiar with the differences between BPMN 1.x and 2.0, there are a number of new event types (some of them will be rarely used, although the non-interrupting boundary events are going to be a big hit), the addition of standardized task types, and most importantly, a serialization format that can be used for process map interchange between tools.
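
To make that last point concrete, here’s a minimal sketch of what a serialized BPMN 2.0 process looks like; the element names follow the draft spec, but the namespace URI is a guess at the final value and the exact schema may still change before release:

```xml
<!-- Minimal BPMN 2.0 interchange sketch: the XML itself is the serialization
     format, and userTask is one of the new standardized task types.
     Namespace URI is a guess pending the finalized spec. -->
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             targetNamespace="http://example.com/bpmn">
  <process id="claimProcess">
    <startEvent id="start"/>
    <userTask id="review" name="Review claim"/>
    <endEvent id="end"/>
    <sequenceFlow id="f1" sourceRef="start" targetRef="review"/>
    <sequenceFlow id="f2" sourceRef="review" targetRef="end"/>
  </process>
</definitions>
```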

Global 360 analystView Visio integration: simulation results

So far, that’s just Microsoft Visio Premium. What Global 360 provides is the analystView plugin to add simulation to these BPMN models right within Visio. This is intended to be simulation “for the masses”: really, we’re talking simulation for the statistically-minded business analyst, although they’ve made the user interface fairly simple, and will include tutorials and interactive help to support the user who is just getting to know simulation. It actually runs discrete events through the model, and can pump the results out to the managerView analytics just as if it were an executing process. It can also do the reverse, taking historical data from managerView and using it as baseline data for the simulation. There are a number of fairly sophisticated simulation functions: data can be simulated through the model; routes can be selected conditionally rather than just by weighted decisions; roles can be used; and specific statistics can be tracked, such as logged events, timed sequences of events, or SLAs for the entire process, an activity or a timed sequence.
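
For those who haven’t worked with simulation before, “running discrete events through the model” means generating simulated work items and stepping them through queues and activities while collecting statistics. This is a generic single-role discrete-event sketch to show the idea, not Global 360’s engine; the arrival and service parameters are arbitrary:

```python
# Generic discrete-event simulation sketch; not analystView's engine.
import heapq
import random

random.seed(42)
MEAN_ARRIVAL, MEAN_SERVICE, N_ITEMS = 5.0, 4.0, 1000  # minutes; arbitrary

# Pre-generate arrival events as (time, kind, item_id) on a min-heap.
events, t = [], 0.0
for i in range(N_ITEMS):
    t += random.expovariate(1.0 / MEAN_ARRIVAL)
    heapq.heappush(events, (t, "arrive", i))

busy_until = 0.0          # a single role/resource processes items FIFO
arrived, cycle_times = {}, []
while events:
    now, kind, item = heapq.heappop(events)
    if kind == "arrive":
        arrived[item] = now
        begin = max(now, busy_until)                    # queue if resource busy
        busy_until = begin + random.expovariate(1.0 / MEAN_SERVICE)
        heapq.heappush(events, (busy_until, "done", item))
    else:
        cycle_times.append(now - arrived[item])         # end-to-end cycle time

# An SLA-style statistic, like those analystView tracks for a whole process
print("average cycle time: %.1f minutes" % (sum(cycle_times) / len(cycle_times)))
```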

After watching the simulation in action, I’m left with two thoughts: first, it looks quite fully functional, although you’re still going to need some basic statistical background to use it; and second, I’m very glad that they didn’t use little animated running people while the simulation is running, because we’re all just so over that as a user experience. The simulation engine, by the way, is what they acquired from Cape Visions around 2004, which is the same as is used in IBM FileNet and Fujitsu BPM products for simulation.

Visio/SharePoint 2010 integration: saving as Web Drawing

A second part of this announcement, also riding on the shoulders of Microsoft, is their SharePoint integration. Process maps created with Visio can be checked in to SharePoint for collaboration; although this was possible with the prior version of SharePoint, the 2010 version allows the process models to be checked in as Web Drawing files, so that non-Visio users can view and comment on them: the built-in Visio services render the diagram as a web part. A process model in this form is still fully functional: for example, clicking on a subprocess links to that subprocess, and clicking on an element shows the parameters associated with that element, including the simulation parameters.

When an analystView simulation is run, the simulation data is stored in SharePoint with the process model as XML; although Global 360 hasn’t yet launched the web part that will allow viewing of the simulation data directly in SharePoint, that’s expected to come along within a couple of months.

The critical component here is Visio 2010, which is still in beta and is required for the BPMN 1.2 support; SharePoint 2010 is a nice-to-have because it allows non-Visio users to collaborate on process models, but isn’t required for any of the other functionality. Global 360 is hedging on BPMN 2.0 support, saying that they’re pushing Microsoft to support it as soon as it is available, but if Microsoft doesn’t come through by the end of 2010, they’re going to have to make a move on their own. There’s also the issue of what happens when Microsoft decides that they really want to play in the BPM market, although Global 360 (and many other Microsoft-centric BPM vendors) are so far ahead of them that it will likely take an acquisition to make any of the current BPM vendors lose sleep over it. In the meantime, Microsoft and Global 360 are doing some nice co-marketing, and Microsoft’s Visio website will offer the Global 360 analystView plugin for sale ($349) as well as Visio Premium 2010 ($800).

Global 360 userView inbox

This isn’t the first thing that Global 360 has done with SharePoint, however: they built 20-30 “ViewParts”, SharePoint web parts that provide a front end to the Global 360 process engine, allowing you to assemble a user interface for executing their processes in a SharePoint portal view. They’ve done quite a bit of research into persona-based user interfaces, which has resulted in their viewPoint set of interfaces tuned to each particular type of user: builders, managers and participants. The newly-released analystView is for analyst-type builders, whereas some of the userView applications that I saw last fall are for end users in various roles.

For example, a user’s inbox would show their current task list, a Current Workload feedback widget that tells their supervisor how busy they feel, a performance comparison with other users, and a message center for other work-related information. A heads-down transaction processing user’s view could be similar, but with push-type task lists instead of browsable lists. A user can then open a work packet, view any attached documents, and complete the tasks within the packet. A supervisor, on the other hand, would see the managerView, containing SLAs, warnings and reports, and allowing the supervisor to reallocate work and roles.

The designerView, aimed at more technical builders than the Visio tooling described above, provides a process modeler with palettes for standard BPMN process elements, plus messaging, document functions, scripting and a variety of other integration functions. The data model for the process is fully exposed and integrated with the process model, something that more BPM vendors are realizing is critical. Comments can be added on each element in a model, and a documentation view then collects all of those comments into a single view. The process models were not fully BPMN compliant when I saw them last fall, although that was planned for early 2010.

Once a process is designed, a builder can create the UI using web parts that are auto-wired when dropped onto the SharePoint canvas. Composite applications can be built using SharePoint or ASP.Net; a number of production-quality starter applications are provided out of the box.

I’ve been waiting for the analystView piece, announced today, to complete this picture: now they have business-facing process modeling and simulation in Visio with analystView, collaboration on the process models in SharePoint, redrawing (or at least enrichment) of the process models in the designerView, and user interface design in SharePoint. The suite feels a bit disjointed, although taking advantage of the penetration of Visio and SharePoint within enterprises could be a huge advantage for Global 360. The major challenges are direct competition from Microsoft at some point in the future, as I discussed previously, and the slow migration of many organizations onto the 2010 versions of the Microsoft platforms required for full functionality. Given that I still have enterprise customers using SharePoint 2003, that could be a while yet.

Five Years of Column 2

As of today, I’ve been writing this blog for five years. My first post was on BPTrends’ 2005 BPM Suites Report, and I’m still pretty focused on BPM, although I have branched out to cover a wider variety of Enterprise 2.0 and collaboration topics as well. In the beginning, it was just labeled as my business blog and hosted on a subdomain under my corporate domain, although within the first month, I talked about how I’m a “column 2” sort of girl, and a month later, rebranded as Column 2.

Since then, I’ve written about 2,000 posts: that’s more than one per day on average, although that includes about 550 posts consisting of links and my comments on those links, auto-generated from whatever I save on Delicious that day. Not considering those posts, I’ve still managed to post more than once per weekday on average: a count that is badly skewed by my live-blogging at conferences, where I post several times per day. I’ve had over 2,000 comments on posts, or about one per post: not a great level of conversation overall, although we’ve had some lively discussions. In total, I’ve written over 600,000 words.

I average 400-500 unique visitors (600-700 page views) per weekday, with peaks of two or three times that for events such as the Oracle BEA strategy briefing and IBM layoffs. Posts can remain popular over time: the Oracle BEA post totaled 4,500 page views (although not all on the same day). I also have another 1,800 readers subscribed to the RSS feed, who likely don’t visit the site directly since I publish full feeds. That doesn’t make Column 2 exactly a prime internet destination, but most people are a bit surprised that I have 2,200+ daily readers on a relatively niche topic.

My presence on Twitter (which has just passed the 3-year mark) may have slowed my blogging a bit, but a broad spectrum of social media participation is a must for independent consultants these days. In fact, Twitter has probably increased my blog readership since FeedBurner auto-tweets each blog post when it publishes: Twitter is my second-largest referrer site, after Google.

I don’t get paid to blog, except for the small fee that Intelligent Enterprise pays me when they republish some of my posts, and the bit from the Google ads on the site that just covers my hosting fees. Vendors who invite me to their conferences (and pay my expenses while I’m there) obviously get more coverage while I’m blogging at the conference, as do vendors who are my clients since I’m more familiar with their products, but the opinions written here are my own, and no one has any editorial review or control over my content. In fact, it’s pretty common for me to see the PR/AR/marketing people at a vendor conference checking their mobile devices to see what I just wrote about their company, since they don’t see it in advance.

Blogging has given me the best soapbox ever on which to stand and voice my opinions: as an extroverted introvert, it’s the perfect blend of public discourse and private contemplation for me. As an independent working mostly from my home office, blogging provides me with a way to engage with BPM vendors and practitioners that would just not be possible face-to-face. I am asked to speak at conferences and review products because of what I write here; in fact, most of my professional engagements start with someone saying “I read your blog, and I’m interested in working with you”. I’ve also had the opportunity to meet many of my readers and fellow bloggers at conferences; these are opportunities that I would have missed were it not for blogging. My blog also serves as an online portfolio and history of my ideas, so that I can show, for example, that I was talking about social BPM and process wikis in 2006, a few years ahead of those who claim to have first written about it.

Writing a blog is not for everyone – in fact, some days it’s not even for me 🙂 – but blogs have become an essential part of online reading for any business or technology professional, rather than being seen as just rants from the fringe. And although I sometimes resort to a bit of ranting, I like to believe that I’m adding value to your research on a variety of enterprise technology topics.

Business Process Incubator: Another Online BPM Community, But With Standards

BPM standards, I mean. 😉

Yesterday saw the public beta launch of the Business Process Incubator; although this was inadvertently announced by Robert Shapiro during a public webinar last month, it only moved out of closed preview yesterday. I had a briefing from Denis Gagné of Trisotech, one of the driving forces behind BPI, and have had a test account to try it out for the past month.

BPI has a focus on BPM standards, especially BPMN and XPDL, and is intended to be a hub for content and tools related to standards. That doesn’t mean that this is another walled garden of content; rather, a lot of content is mashed in from other locations rather than being published directly on the site. For example, if you search for me on the site, you’ll find links to this blog and to a number of my presentations on Slideshare, plus the ability to rate that content or flag it on a My Interests list. That means that there’s a lot of content available (but not necessarily hosted) on the site from the start, and it’s growing every day as more people link in BPM-related content that they know about.

The site is divided into four main areas:

  • Do, including services for verifying, visualizing, validating, publishing and converting process models in various standard formats. These are premium services, available either directly on the site or via an API (see the hypothetical sketch after this list): a free membership lets you try each one out a few times per month, beyond which payment is required.
  • Share, for contributing content such as process models, tools and blogs; this is also used to view process models shared by others.
  • Learn, for viewing the links, blogs, books, training and other content added in the Share section.
  • Tools, for viewing the tools added in the Share section; these are categorized as diagramming, BPMS, BPA, BAM and BRE. Trisotech’s own free BPMN add-in for Visio is here, but is also featured directly on most other pages on the site, something that competing diagramming tools might object to.
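
Since the Do services are exposed via an API, a call might look something like the sketch below. To be clear, BPI’s actual endpoints, parameters and authentication scheme weren’t public when I wrote this, so every name here is invented for illustration:

```python
# Purely hypothetical sketch: BPI's real API is not public, so the endpoint,
# auth header and parameters below are all invented.
import requests

with open("claim_process.bpmn", "rb") as f:
    resp = requests.post(
        "https://api.businessprocessincubator.example/validate",  # invented
        headers={"X-Api-Key": "YOUR_KEY"},                        # invented
        files={"model": f},
        data={"standard": "BPMN"},
    )
print(resp.json())  # presumably a list of validation errors, if any
```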

Most content on the site can be tagged and rated, allowing the community to provide feedback. There needs to be better integration with other social networks beyond the standard “community share” options for Facebook, Twitter and LinkedIn, and this site just begs for a BPI iPhone app, or at least a mobile version of the site.

Although I like the clean user interface, the categorization takes a bit of getting used to: for example, you add both content and tools in the Share section, but you view the links to content in Learn and the links to tools in Tools. Furthermore, you both contribute and view process models in the Share section; this appears to be the only type of contribution that is viewed in Share rather than another section. Also, the distinctions between some of the functions in the Do section are a bit esoteric: most users, for example, may not make the distinction between Transform (which is an XML transformation) and Convert, since both turn a file of one type into another. Similarly, Verify ensures that the file is a BPMN file based on the schema, whereas Validate ensures that there are no syntax errors in the BPMN file.
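
If you think in code, the Verify/Validate split maps onto a familiar XML-processing pattern. This is only an analogy using generic XML tooling (lxml in Python), not BPI’s implementation, and the file names are illustrative:

```python
# Analogy only: syntax checking vs. schema conformance with generic XML tools.
from lxml import etree

try:
    doc = etree.parse("claim_process.bpmn")       # file name is illustrative
except etree.XMLSyntaxError as e:
    print("syntax error:", e)                     # roughly BPI's Validate
else:
    xsd = etree.XMLSchema(etree.parse("BPMN20.xsd"))  # schema file illustrative
    print("is valid BPMN:", xsd.validate(doc))    # roughly BPI's Verify
```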

Although vendors can participate in the community as partners, it is vendor-independent. Rather than vendor sponsorships, the site is monetized through a membership model that allows access to most of the content for free, but requires a $300/year premium membership for unrestricted access to premium features, such as process model validation and translation services. In that way, the bulk of the site revenue is expected to come from corporate end-user organizations that use a combination of free and premium memberships for their users, and can sign up for a corporate membership that gives them four premium memberships plus 50% off any additional ones. End-user organizations are becoming more aware of the value of BPM standards, and understand the value proposition of a standard notation when using process models to communicate broadly within their organization; BPI will help them to learn more about BPM standards, as well as being a general resource for BPM information.

Businesses can have their own page on the site using a custom URL, fancy it up with their own logo and business description, and list all of the site content that belongs to them, whether links to tools, blogs or other content. Partner pages are free, but are monetized by referral or commission fees on any RFI/RFQs, services, training or paid content offered via those pages.

The cloud-based functions offered in the Do section are also available through a public API for vendors to include directly or white-label in their own offerings; although monetization for this wasn’t yet settled as of last month, it could be handled through an API key, much like other public APIs. Both APIs and a toolbar are available for including BPI content and functions on another site.

Partners are already ramping up on the site, and by fall, BPI will be in general availability for all members. There’s now quite a bit of choice in BPM online communities: in addition to all the BPM-themed social networking sites and discussion groups, there are now several public communities offering tools and functionality specific to BPM, such as BPM Blueworks and ARISalign. Gagné sees BPI as complementary to and partnering with those sites – for example, those sites could have a partner page, as BPM Institute does – since BPI augments their content with standards-focused materials. BPI’s openness via APIs and a toolbar allows it to be added as a BPM community from another site, and it will likely see many referrals from BPM vendors who don’t want to build their own community site, but like the idea of participating in one that’s vendor-neutral. Although BPI is focused on BPM standards, the open platform gives it the potential to grow into a full BPM social networking site with a broad variety of content.

By the way, as your reward for reading this entire post, here’s a link to get a free premium membership. Enjoy!

IBM Cloud Strategy: Collaboration, Dev/Test Environments, and Virtual Desktops

Today, IBM announced their cloud strategy and roadmap; I was at the analyst update last week and had a chance to hear about it first-hand from IBM execs, a customer and a partner.

Erich Clementi, who heads enterprise initiatives at IBM, started the briefing by showing their cloud evolution over the past year, and plans for the remainder of 2010. Last year saw the launch of LotusLive collaboration services and the Test Cloud for hosted test environments. By the end of 2009, cloud offerings had expanded to include analytics, storage and email plus cloud consulting services, and the beta for cloud-based development and test environments had opened up. That beta has evolved so that today we’re hearing about the launch of Smart Business Dev/Test on IBM Cloud: an enterprise-class environment for provisioning virtual machines on demand for software development and testing. By the end of this year, there will be more cloud offerings, a variety of security, resiliency, capacity and compliance options, and an ecosystem of partners.

He discussed what they’ve learned from their clients: there is universal interest in cloud computing, but there won’t be a “Big Switch fantasy” happening in large enterprises any time soon. Instead, this is part of a transition from owning IT assets to sourcing IT solutions as part of an organization’s enterprise IT delivery mix, where cloud complements on-premise, and the two often coexist in integrated hybrid services. Although cost is a factor, speed of deployment is also a key driver, since that drives time to value. And, since IBM always has a large services component, they have a suite of services around moving to and maintaining cloud services. To be clear, there is a predominant focus on private clouds, or what some would not consider cloud at all: fast provisioning (after you install all the hardware and infrastructure software), but with everything on the customer’s site, making this virtualization rather than true hosted cloud.

For hosted cloud, they see the initial sweet spot as the collaboration space, where they’re targeting the LotusLive brand, including the web conferencing tool which we were using for the briefing, email suite (Lotus Notes lives!) and even social networking, such as the BPM BlueWorks community. Altogether, IBM has 18 million users on LotusLive, including their own workforce and some large customers such as Panasonic.

Targeting both public and private cloud is their Smart Business Desktop, where the entire desktop environment – OS and applications – is virtualized rather than installed on the actual desktops, allowing for access from anywhere, and also providing desktop remote control and other IT service functions. This approach has long been used for VPN access to networks, but is a newer concept for full-time internal desktops. Coincidentally, eWeek just published an article on virtual desktop infrastructure (VDI), discussing the benefits in terms of reduced maintenance and hardware costs (reducing desktop TCO by 15-35%) as well as business continuity, but also the relatively high startup costs and complexity; the author ultimately states “I hesitate to recommend VDI across the board”.

The third part of their cloud strategy is for virtual hosted server environments for ISVs – what appears to be a direct competitor to Amazon’s EC2 – providing development and test infrastructure through developerWorks Cloud Computing Resources, but apparently also production hosting (I think – the presentation was a bit vague here).

For my regular BPM readers, if you’ve made it this far, consider how you could use cloud development and test servers for BPM deployments, where some of the multiple environments required (usually at least four, sometimes as many as six) could be moved out of your own data centers, and provisioned at will.

Pat Toole, CIO of IBM, was up next to discuss how they are using their own products internally, speaking as a customer of the cloud offerings. They started with hosted development and test environments, and have half of their new development in the US happening on the dev/test cloud; this has reduced their server provisioning time from five days down to about an hour for both Power and x86 environments. Next, they looked at BI and analytics, with the dual aim of reducing costs and making the data more readily available to users. They consolidated 100 data warehouses into a single Cognos environment for 80,000 internal users in their Blue Insight initiative, and expect to add another 30 applications and double their users over the next year.

On the collaboration front, they turned on LotusLive web conferencing for all employees to use for internal and external meetings, logging 200 million minutes last year. They’ve recently added Engage for 6,000 users initially; although this seems to provide full social networking capabilities, Toole mentioned file sharing as the primary use case.

They’ve implemented Smart Business Desktop at one center in China in order to reduce TCO by more than 40% and improve security and control, and plan to roll this out to their call centers in US and India. Echoing the eWeek article, he said that this is not for everyone in the organization, but makes sense for certain classes of users and desktops. They’re also about to launch their first pilot on the storage cloud, and have identified about 1,000 applications for deployment in the production cloud.

In “eating their own cooking”, IBM is doing what any of their customers would be doing: trying to make their computing environment more efficient and less expensive.

Mike McCarthy, who heads the cloud computing group, gave the details of today’s announcement:

  • Smart Business Development and Test environments on the IBM (public) cloud, initially within North America, on a pay-as-you-go or reserved capacity basis. Although hosted on their public cloud, this is intended to support enterprise clients in that it’s not an open community, but a platform for hosting your development and test environments as securely as if they were on premise; in fact, they plan to offer dedicated hardware environments in the future for the truly paranoid. There are several pre-configured software images to select from, offering a wide choice of configurations and deployment models. They offer 99.5% availability, sufficient for most dev/test environments, and support options up to 24×7 telephone support. This allows you to provision a development or test environment yourself in a matter of minutes: choose the service (a software image, such as an OS or OS plus tools), configure the usage options, and click to provision a new virtual server. Initially, they’ll be offering Red Hat and Novell Linux on x86 environments, with additional hardware options as well as Windows later in the year.
  • Adding development services, such as Rational SDS, to the existing Smart Business Test Cloud offering for private cloud deployments.
  • Rational Software Delivery Services for both their private and public Smart Business Development and Test Cloud.
  • Tighter integration of the developerWorks online community and the development/test cloud initiatives through a variety of learning resources.

Evan Bauer of the Collaborative Software Initiative joined the IBM team on the call to discuss their use of the IBM cloud for developing, testing and hosting the US Department of Education’s Open Innovation Portal. They used the beta version of the IBM cloud and open source software to develop and deploy this portal within three months. Hosting on IBM’s public cloud allows them to scale quickly and achieve excellent response time, providing a valuable pilot for the future use of cloud for government applications.

Last up was Tom Lounibos of SOASTA, an IBM partner offering CloudTest, an on-demand service for load-testing web applications by provisioning hundreds of virtual servers to simulate millions of users hitting a website. There are a couple of key use cases for this type of load-testing – e-commerce sites with seasonal peaks, and social media sites with peaks caused by news events – with some very high profile cases of unacceptable latency or even site failure due to load. CloudTest has been around for a while, but they’ve just announced that they’ll be running on the IBM cloud.

The IBM (public) cloud will initially be hosted in the US, with data centers in Europe added later in 2010. Although there was some talk about other data centers (such as Asia) in the future, the entire rollout plan wasn’t clear. Many organizations, especially financial services, need to have the data centers located in their own country, or at least one with better privacy laws than the US, so both the location of the data centers and the ability for a customer to select which country is hosting their systems will become important as IBM looks beyond the US market.

For those of us used to working with virtual servers hosted elsewhere, the concepts announced today aren’t new, but the IBM brand brings an air of respectability to the idea of using hosted virtual environments for a variety of uses.

Salesforce Releases Force.com Visual Process Manager

A couple of months back, there was a private discussion amongst the Enterprise Irregulars about who Salesforce.com was going to buy next, and there was a thought in the back of my mind that it might be a BPM vendor. Since that time, two BPM vendors have been acquired, but not by Salesforce: instead, they launched their own Force.com Visual Process Manager for designing and running processes in the cloud.

However, they seem determined to keep it a secret: first, the Visual Process Manager Demo video on YouTube has been made private (that’s just a screen snapshot of the cached video below), and second, I was unable to get a call back in response to the technical questions that I had during the demo.

For those of you unfamiliar with options for Salesforce application development (as I mostly was before this briefing), Force.com is the platform originally built for customizing the Salesforce CRM offering, which became a necessity for larger customers requiring customization of data, UI and business logic. Customers started using it as a general business application development and delivery platform, and there are now 135,000 custom applications on Force.com, ranging from end-user-created databases and analytics, to sophisticated order management and e-commerce systems that link directly to customers and trading partners, and can update data from other Salesforce applications. In the past four years, they’ve gone from offering transactional applications to entire custom websites, and are now adding collaboration with Chatter.

As you might guess, there are processes embedded in many applications; classic software development might view these as screen flows, that is, the process for a person to move from one screen to another within an application. Visual Process Manager came about for exactly that purpose: customers were building departmental enterprise applications with process (screen flow) logic, but were having to use a lot of code to make it happen.

Link between form and process map

Salesforce acquired Informavores for their process design and execution engine, and that became Visual Process Manager. This is primarily human-centric BPM; it’s not intended as a system-centric orchestration platform, since most customers already have extensive middleware for integration, usually on-premise and already integrated with their Force.com apps, so they don’t need that capability. That means that although a process step can call a web service or pretty much anything else within the existing Force.com platform, asynchronous web service calls are not supported; these would be expected to be handled by that middleware layer.

The process designer allows you to create a process map, then create a form that is tied to each human-facing step in the process map. Actions are bound to the buttons on the forms, where a form may be a screen for internal use, or a web page for a public user to access. You can also add in automated steps and decisions, as well as calling subprocesses and sending emails. It uses a fairly simple flowchart presentation for the process map, without swimlanes. There isn’t a lot of event handling that I could see, such as handling an external event that cancels an insurance quote process. There’s a process simulator, although that wasn’t demonstrated.
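
If “screen flow” is an unfamiliar term, this tiny sketch shows the underlying pattern: each human-facing step maps to a form, and the button pressed on that form selects the transition to the next step. It’s purely conceptual Python, not the Force.com designer or its runtime, and all the step and form names are invented:

```python
# Conceptual screen-flow sketch; not Visual Process Manager's actual model.
# Each step names a form plus a mapping from form buttons to the next step.
FLOW = {
    "enter_details": {"form": "QuoteDetailsForm", "next": {"continue": "review"}},
    "review":        {"form": "ReviewForm",       "next": {"approve": "confirm",
                                                           "back": "enter_details"}},
    "confirm":       {"form": "ConfirmationForm", "next": {}},  # terminal step
}

def run_flow(start, get_button_pressed):
    """Show each step's form; the pressed button picks the transition."""
    step = start
    while True:
        print("showing", FLOW[step]["form"])
        transitions = FLOW[step]["next"]
        if not transitions:
            return step                             # reached a terminal screen
        step = transitions[get_button_pressed(step)]

# Example run: a scripted user presses "continue", then "approve".
clicks = iter(["continue", "approve"])
run_flow("enter_details", lambda step: next(clicks))
```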

Visual Process Manager is priced at $50/user/month for Force.com Enterprise and Unlimited Edition customers, although it’s not clear if that’s just for the application developers, or if there’s a runtime licensing component as well.

Similar to what I said about SAP NetWeaver BPM, this isn’t the best BPMS around – in fact, in the case of Force.com, it’s little more than application screen flow – but it doesn’t have to be the best in class: it only has to be the best BPMS for Force.com customers.