TIBCO Corporate and Technology Analyst Briefing at TUCON2012

Murray Rode, COO of TIBCO, started the analyst briefings with an overview of technology trends (as we heard this morning, mobile, cloud, social, events) and business trends (loyalty and cross-selling, cost reduction and efficiency gains, risk management and compliance, metrics and analytics) to create the four themes that they’re discussing at this conference: digital customer experience, big data, social collaboration, and consumerization of IT. TIBCO provides a platform of integrated products and functionality in five main areas:

  • Automation, including messaging, SOA, BPM, MDM, and other middleware
  • Event processing, including events/CEP, rules, in-memory data grid and log management
  • Analytics, including visual analysis, data discovery, and statistics
  • Cloud, including private/hybrid model, cloud platform apps, and deployment options
  • Social, including enterprise social media, and collaboration

A bit disappointing to see BPM relegated to being just a piece of the automation middleware, but important to remember that TIBCO is an integration technology company at heart, and that’s ultimately what BPM is to them.

Taking a look at their corporate performance, they have almost $1B in revenue for FY2011, showing growth of 44% over the past two years, with 4,000 customers and 3,500 employees. They continue to invest 14% of revenue into R&D with a 20% increase in headcount, and significant increases in investment in sales and marketing, which is pushing this growth. Their top verticals are financial services and telecom; while they still do 50% of their business in the Americas, EMEA is at 40%, with APJ making up the other 10% and showing the largest growth. They have a broad core sales force, but have dedicated sales forces for a few specialized products, including Spotfire, tibbr and Nimbus, as well as for vertical industries.

They continue to extend their technology platform through acquisitions and organic growth across all five areas of the platform functionality. They see the automation components as being “large and stable”, meaning we can’t expect to see a lot of new investment here, while the other four areas are all “increasing”. Not too surprising considering that AMX BPM was a fairly recent and major overhaul of their BPM platform and (hopefully) won’t need major rework for a while, and the other areas all include components that would integrate as part of a BPM deployment.

Matt Quinn then reviewed the technology strategy: extending the number of components in the platform as well as deepening the functionality. We heard about some of this earlier, such as the new messaging appliances and Spotfire 5 release, some recent releases of existing platforms such as ActiveSpaces, ActiveMatrix and Business Events, plus some cloud, mobile and social enhancements that will be announced tomorrow so I can’t tell you about them yet.

We also heard a bit more on the rules modeling that I saw before the sessions this morning: it’s their new BPMN modeling for rules. This uses BPMN 1.2 notation to chain together decision tables and other rule components into decision services, which can then be called directly as tasks within a BPMN process model, or exposed as web services (SOAP only for now, but since ActiveMatrix now supports REST/JSON, I’m hopeful that will follow). Sounds a bit weird, but it actually makes sense when you think about how rules are formed into composite decision services.
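TIBCO's actual rules API wasn't shown in detail, but the general pattern, decision tables chained into a composite decision service that can be invoked as a single unit (from a BPMN task or behind a web service endpoint), can be sketched with hypothetical tables:

```python
# Illustrative sketch only, not TIBCO's actual rules API.
# Two decision tables are chained into one composite decision service,
# which could then be called as a single task or service operation.

def credit_rating_table(score: int) -> str:
    """Decision table: credit score -> rating band."""
    if score >= 750:
        return "A"
    if score >= 650:
        return "B"
    return "C"

def limit_table(rating: str, income: float) -> float:
    """Decision table: rating band + income -> credit limit."""
    multiplier = {"A": 0.5, "B": 0.3, "C": 0.1}[rating]
    return round(income * multiplier, 2)

def credit_limit_service(score: int, income: float) -> dict:
    """Composite decision service chaining the two tables above."""
    rating = credit_rating_table(score)
    return {"rating": rating, "limit": limit_table(rating, income)}

print(credit_limit_service(720, 80000.0))  # {'rating': 'B', 'limit': 24000.0}
```

The point of the composition is that the calling process only sees one decision service, regardless of how many tables sit behind it.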

There was a lot more information about a lot more products, and then my head exploded.

Like others in the audience, I started getting product fatigue, and just picking out details of products that are relevant to me. This really drove home that the TIBCO product portfolio is big and complex, and this might benefit from having a few separate analyst sessions with some sort of product grouping, although there is so much overlap and integration in product areas that I’m not sure how they would sensibly split it up. Even for my area of coverage, there was just too much information to capture, much less absorb.

We finished up with a panel of the top-level TIBCO execs, the first question of which was about how the sales force can even start to comprehend the entire breadth of the product portfolio in order to be successful selling it. This isn’t a problem unique to TIBCO: any broad-based platform vendor, such as IBM or Oracle, has the same issue. TIBCO’s answer: specialized sales force overlays for specific products and industry verticals, and selling solutions rather than individual products. Both of those work to a certain extent, but often solutions end up being no more than glorified templates developed as sales tools rather than actual solutions, and can lead to more rather than less legacy code.

Because of the broad portfolio, there’s also confusion in the customer base, many of whom see one TIBCO product and have no idea of everything else that TIBCO does. Since TIBCO is not quite a household name like IBM or Oracle, companies don’t necessarily know that TIBCO has other things to offer. One of my banking clients, on hearing that I am at the TIBCO conference this week, emailed “Heard of them as a player in the Cloud Computing space.  What’s different or unique about them vs others?” Yes, they play in the cloud. But that’s hardly what you would expect a bank (that uses very little cloud infrastructure, and likely does have some TIBCO products installed somewhere) to think of first when you mention TIBCO.

CEO Keynote At Appian World 2012

Matt Calkins, CEO of Appian, spoke about how they are achieving their goal to be the world’s best way to organize work.

Key features that they have to support this:

  • Native mobile capabilities on iOS, Blackberry and Android, meaning that you can develop your applications once and have them run not just in a desktop browser, but on any mobile device.
  • Transparent platform portability, allowing an Appian application to be easily moved between on-premise, public cloud and private cloud.
  • Social interface to minimize training and be able to more easily track events, primarily through a participatory event streaming paradigm.

Their software sales increased in 2011 by over 200% with 95 new customers (not just expansions within existing customers), and they have a 95% “very satisfied” customer satisfaction rating.

The typical Appian customer runs about 10 applications, but Appian’s goal is to actually reduce the number of applications that a customer has (who wants more apps in their enterprise, after all?) by linking the data, actions and users in the application silos into a common environment. In fact, their theme for this year is data, which they see treated as a second-class citizen in many systems, and Calkins switched over to a demo of the upcoming Appian 7 to show how they are combining data from multiple applications and sources into their event stream.

The new Appian interface is organized into five tabs:

  • News, which is the familiar event stream, but with much richer links and attachments from other sources. Adding a comment to the stream can be just a comment, or can be turned into a task that can be assigned and tracked. What do I need to know?
  • Tasks, which are the tasks sent to the active user, or that they created and assigned to someone else. These can be filtered by type, and can be sorted by deadlines and priorities. What do I need to do?
  • Records, aka data, which shows a list of data sources: client records, support tickets, employees, whatever is important to this user. These may be Appian applications or external data sources such as relational databases. He drilled into the Clients data source, which provided several ways to filter and search in the client application, then selected one client to show the collection of information about that client: basic contact data, sales satisfaction survey results, ticket history, sales opportunities and more. The interface is customizable both during the initial setup and on the fly by any user with permissions to do so. Beyond the summary page for that client, there’s a news feed for all items tagged with this client, then links for each of the applications that might have information on that client: projects, billing history and sales opportunities. A related records tab allows connections to other data sources, such as support cases, that are linked to that client, allowing you to navigate through a web of data in your enterprise, much as we navigate the internet by following links on a whim. Lastly, a related actions tab allows you to launch any of the related applications for this client, such as starting a new contract or scheduling an onsite visit.
  • Reports, showing enhanced reporting capabilities with new abilities to sort and customize reports.
  • Actions, which links to all applications in the enterprise, allowing you to launch any application from a single point.

Furthermore, there are new Facebook/Twitter-like functions to subscribe to people within your organization, see the profile information that they have posted including their job skills, and add kudos (LinkedIn-like recommendations) for individuals. This is similar to what IBM has been doing with their Beehive social network internally: it’s a way to enable collaboration within the enterprise as well as tracking of employee skills and recommendations. In order to have this sort of enterprise-wide social network, however, everyone needs an Appian license, so they are coming up with a new licensing model for what they’re calling Appian Tempo that will allow this type of access to social, mobile and data (but not actions) at a much lower cost than a regular Appian user. In fact, it’s free, if you have any other Appian (paid) licenses, and if you install and use it within a year.

As always, pretty innovative stuff coming from Appian.

Active Endpoints’ Cloud Extend For Salesforce Goes Live

Next week at Dreamforce, Active Endpoints’ new Cloud Extend for Salesforce will officially go live. I had a briefing a few months back when it hit beta, and an update last week where I saw little new functionality from the first briefing, but some nice case studies and partner support.

Cloud Extend for Salesforce is a helper layer that integrates with Salesforce, allowing business users to create lightweight processes and guides – think screenflows with context-sensitive scripting – to help users through complex processes in Salesforce. In Salesforce, as in many other ERP and CRM systems, achieving a specific goal sometimes requires a complex set of manual steps. Adding a bit of automation and a bit of structure, along with some documentation displayed to the user at the right time, can mean the difference between a process being done correctly or having some critical steps missed. If you look at the Cloud Extend case study with PSA Insurance & Financial Services covered in today’s press release, a typical “first call” sales guide created with Cloud Extend includes such actions as recording the prospect’s current policy expiration date, setting reminders for call-back, sending out collateral and emails to the prospect, and interacting with other PSA team members via Salesforce Chatter. This means that fewer follow-up items are missed, and improves the overall productivity of the sales reps since some of the actions are automated or semi-automated. Michael Rowley, CTO of Active Endpoints, wrote about Cloud Extend at the time of the beta release, covering more of the value proposition that they are seeing by adding process to data-centric applications such as Salesforce. Lori MacVittie of F5 wrote about how although data and core processes can be commoditized and standardized, customer interaction processes need to be customized to be of greatest value. Interestingly, the end result is still a highly structured pre-defined process, although one that can be created by a business user using a simple tree structure.

When I saw a demo of Cloud Extend, I was reminded of similar guides and scripts that I’ve seen overlaid on other enterprise software to assist in user interactions, usually for telemarketing or customer service to be prompted on what to say on the phone to a customer, but this is more interactive than just scripts: it can actually update Salesforce data as part of the screenflow, hence making it more of a BPM tool than just a user scripting tool. Considering that the ActiveVOS BPMS is running behind the scenes, that shouldn’t come as a surprise, since it is optimized around integration activities. Yet, this is not an Active Endpoints application: the anchor application is Salesforce, and Cloud Extend is a helper app around that rather than taking over the user experience. In other words, instead of a BPMS silo in the clouds as we’re seeing from many BPMS cloud vendors, this is using a BPMS platform to provide functionality integrated into another platform. A cloud “OEM” arrangement, if you please.

The Guide Designer – a portal into the ActiveVOS functionality from within Salesforce – allows a non-technical user to create a screen flow, add more screens and automated steps, call subflows, and call Salesforce functions. The flow can be simulated graphically, stepping forwards and backwards through it, in order to test different conditions; note that this is simulation in order to determine flow correctness, not for the purpose of optimizing the flow under load, hence is quite different from simulation that you might see in a full-featured BPA or BPMS tool. Furthermore, this is really intended to be a single-person screen flow, not a workflow that moves work between users: sort of like a macro, only more so. Although it is possible to interrupt a screen flow and have another person restart it, that doesn’t appear to be the primary use case.

There are a few bits that likely a non-technical user couldn’t do without a bit of initial help, such as creating automated steps and connecting up the completed guides to the Salesforce portal, but it is pretty easy to use. It uses a simple branching tree structure to represent the flow, where the presence of multiple possible responses at a step creates the corresponding number of outbound branches. In flowcharting terms, that means only OR gates, no merges and no loopbacks (although there is a Jump To Step capability that would allow looping back): it’s really more of a decision tree than what you might think of as a standard process flow.

Creating a guide, or a “guidance tree” as it is called in the Guide Designer, consists of adding an initial step, specifying whether it is a screen (i.e., a script for the user to read), an automated step that will call a Salesforce or other function, a subflow step that will call a predefined subflow, a jump-to step that will transfer execution to another point in the tree, or an end step. Screen steps include a prompt and up to four answers to the prompt; this is the question that the user will answer at this point in response to what is happening on their customer call. One outbound path is added to the step for each possible answer, and a subsequent step is automatically created on that path. The branches keep growing until end steps are defined on each branch.
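A minimal sketch of this guidance-tree structure (my own illustration in Python, not Active Endpoints' actual data model): each screen step carries a prompt and up to four answers, each answer leads to a follow-on step, and branches grow until every path reaches an end step.

```python
# Hypothetical model of a guidance tree: screen steps with a prompt and
# up to four answers; steps with no answers act as end steps.
from dataclasses import dataclass, field

@dataclass
class Step:
    prompt: str
    # answer text -> next step; an empty dict marks an end step
    answers: dict = field(default_factory=dict)

    def add_answer(self, text: str, next_step: "Step") -> None:
        if len(self.answers) >= 4:
            raise ValueError("a screen step allows at most four answers")
        self.answers[text] = next_step

def run_guide(step: Step, choices: list) -> list:
    """Walk the tree with a scripted list of answers; return the path taken."""
    path = [step.prompt]
    for choice in choices:
        step = step.answers[choice]
        path.append(step.prompt)
    return path

# Build a tiny "first call" guide.
end_ok = Step("End: schedule follow-up")
end_no = Step("End: close lead")
interested = Step("Does the prospect want a quote?")
interested.add_answer("Yes", end_ok)
interested.add_answer("No", end_no)
root = Step("Did the prospect answer the call?")
root.add_answer("Yes", interested)
root.add_answer("No", end_no)

print(run_guide(root, ["Yes", "Yes"]))
```

Since each answer spawns exactly one outbound branch and there are no merges, the structure stays a pure decision tree, which matches the OR-gates-only observation above.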

A complex tree can obviously get quite large, but the designer UI has a nice way of scrolling up and down the tree: as you select a particular step, you see only the connected steps twice removed in either direction, with a visual indicator to show that the branch continues to extend in that direction.

Regardless of the complexity of the guidance tree, there is no palette of shapes or anything vaguely BPMN-ish: the user just creates one step after another in the tree structure, and the prompt and answers create the flow through the tree. Active Endpoints sees this tree-like method of process design, rather than something more flowchart-like, as a key differentiator. In reality, under the covers, it is creating BPMN that is published to BPEL, but the designer user interface just limits the design to a simple branching tree structure that is a subset of both BPMN and BPEL.

Once a flow is created and tested, it is published, which makes it available to run in the Salesforce sales guides section directly on a sales lead’s detail page. As the guide executes, it displays a history of the steps and responses, making it easy for the user to see what’s happened so far while being guided along through the steps.

Obviously, the Active Endpoints screen flows are executing in the cloud, although as of the April release, they were using Terremark rather than hosting it on Salesforce’s platform. Keeping it on an independent platform is critical for them, since there are other enterprise cloud software platforms with which they could integrate for the same type of benefits, such as Quickbooks and SuccessFactors. Since there is very little data persisted in the process instances within Cloud Extend, just some execution metrics for reporting and the Salesforce object ID for linking back to records in Salesforce, there is less concern about where this data is hosted, since it will never contain any personally identifiable information about a customer.

We’re starting to see client-side screen flow creation from a few of the BPMS vendors – I covered TIBCO’s Page Flow Models in my review of AMX/BPM last year – but those screen flows are only available at a step in a larger BPMS model, whereas Cloud Extend has encapsulated that capability for use in other platforms. For small, nimble vendors who don’t need to own the whole application, providing embeddable process functionality for data-centric applications can make a lot of sense, especially in a cloud environment where they don’t need to worry about the usual software OEM problems of installation and maintenance.

I’m curious about whatever happened to Salesforce’s Visual Process Manager and whether it will end up competing with Cloud Extend; I had a briefing of Visual Process Manager over a year ago that amounted to little, and I haven’t heard anything about it since. Neil Ward-Dutton mentions these two possibly-competing offerings in his post on the beta release of Cloud Extend, but as he points out, Visual Process Manager is more of a general purpose workflow tool, while Cloud Extend is focused on task-specific screen flows within the Salesforce environment. Just about the opposite of what you might have expected to come out of these respective vendors.


Salesforce’s Peter Coffee On The Cloud

I just found my notes from a Salesforce.com lunch event that I went to in Toronto back in April, where Peter Coffee spoke enthusiastically while we ate three lovingly-prepared courses at Bymark, and was going to just pitch them out but found that there was actually quite a bit of good material in there. Not sure how I managed to write so much while still eating everything in front of me.

This came just a few days after the SF.com acquisition of Radian6, a move that increased the Canadian staff to 600. SF has about 1,500 customers in Canada, a few of whom were in the room that day. Their big push with these and all their customers is on strategic IT in the cloud, rather than just cost savings. One of the ways that they’re doing this is by incorporating process throughout the platform, allowing it to become a global user portal rather than just a collection of silos of information.

Coffee discussed a range of cloud platform types:

  • Infrastructure as a service (IaaS) provides virtualization, but persists the old IT and application development models, combining the weaknesses of all of them. Although you’ve outsourced your hardware, you’re still stuck maintaining and upgrading operating systems and applications.
  • Basic cloud application development, such as Google apps and their add-ons.
  • SF.com, which provides a full application development environment including UI and application support.

The old model of customization, that most of us are familiar with in the IT world, has led to about 1/3 of all enterprise software running on the current version, and the rest stuck with a previous version, unable to do the upgrade because the customization has locked it in to a specific version. This is the primary reason that I am so anti-customization: you get stuck on that old version, and the cost of upgrading is not just the cost of upgrading the base software, but of regression testing (and, in the worst case, redeveloping) all the customization that was done on top of the old version. Any wonder that software maintenance ends up costing 10x the original purchase cost?

The SF.com model, however, is an untouchable core code base sitting on managed infrastructure (in fact, 23 physical instances with about 2,000 Dell servers), and the customization layer is just an abstraction of the database, business logic and UI so that it is actually metadata but appears to be a physical database and code. In other words, when you develop custom apps on the SF.com platform, you’re really just creating metadata that is fairly loosely coupled with the underlying platform, and resistant to changes therein. When security or any other function on the core SF.com platform is upgraded, it happens for all customers; virtualization or infrastructure-as-a-service doesn’t have that, but requires independent upgrades for each instance.
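To illustrate the metadata-abstraction idea Coffee described (this is my own sketch of the pattern, not Salesforce's actual schema), tenant customizations can live entirely as metadata that the shared platform interprets at runtime, so a core upgrade never touches tenant-specific code or tables:

```python
# Illustrative sketch of metadata-driven customization: each tenant's
# "custom object" is described by metadata rows, and records are checked
# against that metadata rather than a fixed, customized schema. Tenant
# names and fields here are invented for the example.

field_metadata = {
    ("acme", "Invoice"): [
        {"name": "amount", "type": "number"},
        {"name": "po_number", "type": "string"},  # Acme-only custom field
    ],
    ("globex", "Invoice"): [
        {"name": "amount", "type": "number"},
    ],
}

def validate_record(tenant: str, obj: str, record: dict) -> dict:
    """Validate a record against the tenant's metadata, not a fixed schema."""
    fields = {f["name"] for f in field_metadata[(tenant, obj)]}
    unknown = set(record) - fields
    if unknown:
        raise ValueError(f"unknown fields for {tenant}/{obj}: {unknown}")
    return record

print(validate_record("acme", "Invoice", {"amount": 100, "po_number": "PO-7"}))
```

Because the customization is just data, upgrading the `validate_record` engine upgrades every tenant at once, which is the core of the "untouchable core code base" argument.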

Creating an SF.com app doesn’t restrict you to just your app or that platform, however: although SF.com is partitioned by customer, it allows linkages between partners through remapping of business objects, leveraging data and app sharing. Furthermore, you can integrate with other cloud platforms such as Google, Amazon or Facebook, and with on-premise systems using Cast Iron, Boomi and Informatica. A shared infrastructure, however, doesn’t compromise security: the ownership metadata is stored directly with the application data to ensure that direct database access by an administrator doesn’t allow complete access to the data; it’s these layers of abstraction that help make the shared infrastructure secure. Coffee did punt on a question from the (mostly Canadian financial services) audience about having Canadian financial data in the US: he suggested that it could be encrypted, possibly using an add-on such as CipherCloud. They currently have four US data centers and one in Singapore, with plans for Japan and the EU; as long as customers can select the data center country location that they wish (such as on Amazon), that will solve a lot of the problem, since the EU privacy laws are much closer to those in Canada. However, recent seizures of US-owned offshore servers bring that strategy into question, and he made some comments about fail-overs between sites that make me think that they are not necessarily segregating data by the country specified by the customer, but rather picking the one that optimizes performance. There are other options, such as putting the data on a location-specific Amazon instance, and using SF.com for just the process parts, although that’s obviously going to be a bit more work.

Although he was focused on using SF.com for enterprises, there are stories of their platform being used for consumer-facing applications, such as Groupon using the Force.com application development platform to power the entire deals cycle on their website. There’s a lot to be said for using an application development environment like this: in addition to availability and auto-upgrading, there’s also built-in support for multiple mobile devices without changing the application, using iTunes for provisioning, and adding Chatter for collaboration to any application. Add the new Radian6 capabilities to monitor social media and drive processes based on social media interactions and mentions, and you have a pretty large baseline functionality out of the box, before you even start writing code. There are native ERP system and desktop application connectors, and a large partner network offering add-ins and entire application suites.

I haven’t spent any time doing evaluation specifically of Salesforce or the Force.com application development platform (except for a briefing that I had over a year ago on their Visual Process Manager), but I’m a big fan of building applications in the cloud for many of the reasons that Coffee discussed. Yes, we still need to work out the data privacy issues; mostly due to the potential for US government intervention, not hackers. More importantly, we need to get over the notion that everything that we do within enterprises has to reside on our own servers, and be built from the metal up with fully customized code, because that way madness lies.

Appian World Cloud Case Studies: psHEALTH

We finished the cloud case studies with Abhishek Agrawal of psHEALTH. He used to work for Appian, so likely had a bias in that direction already, but they selected Appian because they wanted a complete cloud solution, and for its strong process modeling capabilities, data security infrastructure, scalability and robustness. Lastly, they liked that they could create solutions without writing code: they wanted to be a case management solutions provider without being a software company, something that a lot of outsourcing companies struggle with.

That doesn’t mean that they haven’t built their own components, rather that they haven’t done that using the usual “lines of code”; Agrawal stated that their total lines of code = 0. They built a library of process application components within Appian, then can easily assemble those into custom case management solutions for their clients.

Looking at the before and after of one of their clients in deploying the Appian-based case management solution, they doubled the number of cases that a case worker could handle (from 40 to 80) by making it much easier to access, work with and transfer case files. They’re also working on some mobile applications, including support for their clients’ case workers and also for the end-customers (patients) with things such as a smartphone-based medication diary.

Data security was key for them, being in the healthcare industry, and they gained ISO27001 and SAS-70 Type II certifications for their Appian-based applications, which says a lot about the potential for high security in the cloud. They also were able to go to the market with a complete product solution that required only minor tweaks for each client, rather than a complete custom build each time, making it much faster to onboard a new client. For them, a cloud-based solution and the easy ability to build new applications from a library of components have been key to their success.

Appian World Cloud Case Studies: Clayton Holdings

Next up was John Cowles from Clayton Holdings, which does risk analysis for the mortgage industry.

Clayton has 240 users across 5 business units on Appian Cloud BPM, and they have only 5 primary resources for building and maintaining the 100+ processes that they have in production. They had limited IT resources and limited budget, and found that software-as-a-service fit their budget and resources well. Initially, they had no IT involvement at all: it was all operations, business analysts and process efficiency people. They found Appian easy enough to use to build applications without IT support, although now that they are undergoing some large back-end system changes, they do have a bit more technical input. They’ve seen an improvement in their BPM cultural maturity, and an increase in adoption rates as well as demand for new applications. Cowles now wants to do everything with Appian: he sees it as a general-purpose application assembly tool, not just a BPM tool. Interestingly, they did what I always recommend: limited or no system integration for the first implementations, then adding that later on once they figure out what they really need and start to see some process efficiencies. This lines up with their Agile philosophy of prototyping everything, and having frequent releases with incremental new functionality.

They have experienced huge efficiency gains due to their BPM implementation and other process efficiency efforts: a 38% reduction in headcount in spite of a 6% increase in workload, time saved not doing manual gathering of user performance data, and process improvements in moving from email and Excel to BPM. The focus on change management and process management early on was important for their success. He also recommended collecting the reporting requirements up front to ensure that the necessary data are being collected by the process. Good points, and a nice success story for cloud BPM.

Appian Tempo

I had a chance for an advance briefing of Appian’s Tempo release last week; this is a new part of the Appian product suite that focuses on the mobility, cloud and social aspects of BPM for social collaboration. This isn’t a standalone social collaboration platform, but includes deep links into the Appian BPM platform through events, alerts, tasks and more. They’ve included Twitter-like status updates and RSS feeds so that you can publish and consume the information in a variety of other forms, offering a fresh alternative to the usual sort of process monitoring that we see in a BPMS. The free app for the iPhone and iPad requires an account on Appian Forum (the Appian user community site) or access to an Appian BPM installation (not sure if this covers both the on-premise system and the cloud-based offering) in order to do anything, so I wasn’t really able to check it out, but saw it on an emulator in the demo.

Their goal for Tempo is to provide a zero-training interface that allows people to track and participate in processes either from a browser or from a mobile device. You can think of it as a new user interface for their BPM and information management systems: in some cases to provide an alternative view to the portal for occasional/less-skilled users, and in some cases as the only interface for more collaborative or monitoring applications. It doesn’t replace the information- and feature-rich portal interface used by heads-down workers, but provides a simpler view for interacting with processes by executives or mobile workers. Users can interact with tasks that are assigned to them in a structured process within BPM, such as approving a purchase request, but can also create a new case from any event, whether that original event was related to BPM or not. For example, I had a view of the internal Appian instance of Tempo (I’ve redacted everything from this screenshot except this event, since some of the other events included internal information) where a “Marketing” stream included RSS feeds from a number of BPM news and blog sources. Swiping on any given event in the iPhone/iPad app – say an external blog post – allowed a new case to be opened that linked to that external event. At this point, the case opening functionality is pretty rudimentary, only allowing for a single process type to be created, which would then be manually reassigned to a specific person or subprocess, but you can see the future value of this once the case/process type can be selected.

As this scenario highlights, Tempo can include process and information from a variety of other sources, internal and external, that may have nothing to do with Appian BPM, in addition to providing visibility into core business processes. Anything with an RSS feed can be added; I saw Salesforce.com notifications, although I’m not sure if they were just from an RSS feed or if there is some sort of more direct integration. Considering the wide adoption of RSS as a publication method for events, this is likely not an issue, but there are also some more direct system connections: an SAP event appearing in Tempo can be expanded to retrieve data directly from the corresponding SAP item, such as invoice details. This turns Tempo into a sort of generalized business dashboard for monitoring tasks and events from many different business sources: collaboration within a business information context.
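As a rough illustration of how any RSS feed could surface as entries in an event stream like this (my own sketch of the pattern, not Appian's implementation), the feed items just need to be mapped into generic stream events:

```python
# Parse an RSS 2.0 document and map its items to event-stream entries.
# The feed content here is a made-up inline example so the sketch is
# self-contained; a real integration would fetch the feed over HTTP.
import xml.etree.ElementTree as ET

RSS = """<rss version="2.0"><channel><title>BPM News</title>
<item><title>New release announced</title><link>http://example.com/1</link></item>
<item><title>Case study posted</title><link>http://example.com/2</link></item>
</channel></rss>"""

def rss_to_events(rss_xml: str, stream: str) -> list:
    """Turn RSS items into generic event-stream entries."""
    root = ET.fromstring(rss_xml)
    return [
        {"stream": stream,
         "title": item.findtext("title"),
         "link": item.findtext("link")}
        for item in root.iter("item")
    ]

for event in rss_to_events(RSS, "Marketing"):
    print(event["stream"], "-", event["title"])
```

Because every event carries its source link, an action such as "open a case from this event" only needs to store that link back to the original item.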

The browser interface will be familiar if you’ve ever used Facebook: a big panel in the center for events, event filters in a panel on the left, and event actions in a panel on the right. Users can subscribe to specific event types, which automatically creates filters, or can explicitly filter by logical business groupings such as departments. Individual events can be “starred” for easy retrieval, as you would with messages in Gmail. The user’s BPM inbox is exposed in the filter panel as “My Tasks”, so that their interaction with structured business processes is seen in the same context as the other information and events with which they interact. The action panel on the right allows the user to initiate new processes; this is more comprehensive than the “Open a case” functionality that we saw on the iPad, since it is a full BPM process instantiation based on a user input form, such as creating a new IT change request. The actions available to a user are based on their role and permissions.

Appian Tempo iPhone app

Access to certain event classes can be restricted based on user and role permissions, but a user can comment on any event that they can see in their event stream. This form of collaboration is very similar to the Facebook model: you comment on an item that someone else posts, then are notified when anyone else adds a comment to the same event.

There’s been some nice optimization for the iPhone and iPad apps, such as one-click approvals without having to open the item, and rendering of Appian forms natively in the app. Although I’ve seen many iPad demos in the past year – it seems impossible to visit a vendor or go to a conference without seeing at least one – this offers significant value because of the deep integration to business processes and information. It’s easy to envision a mobile worker, for example, using the app to update information while at their client site, rather than filling out paper forms that need to be transcribed later. The app can directly access documents from the Appian content management system, or link to anything in a browser via a URL. It also allows for multiple user logins from the same device, which makes it good for testing but also useful in cases where a mobile device might be passed from worker to worker, such as for healthcare workers where a single device would support rotating shifts of users.

This certainly isn’t the first mobile app for BPM – you can see a few more listed at David Moser’s blog post on process apps – and the expected demand for mobile BPM will continue to drive more into this marketplace. This is, however, a very competent offering by a mainstream BPM vendor, which helps to validate the mobile BPM market in general.

This also isn’t the first BPM vendor to come out with a social media-style collaborative event stream interface (for lack of a better term), but this is a good indication of what we can expect to see as standard BPM functionality in the future.

Appian Tempo 2011

RAVEN Cloud General Release: Generate Process Maps From Natural Language Text

Back in May, I wrote about a cool new cloud-based service called RAVEN Cloud, which translated natural language text into process maps. As I wrote then:

You start out either with one of the standard text examples or by entering your own text to describe the process; you can use some basic text formatting to help clarify, such as lists, indenting and fonts. Then, you click the big red button, wait a few seconds, and voilà: you have a process map. Seriously.

They’re releasing RAVEN Cloud for general availability today (the beta sticker is still on the site as of the time of this writing), and I had an update demo with Dave Ruiz a couple of days ago. There are two major updates: UI enhancements, particularly the Business Process Explorer for process organization and categorization, and exporting to something other than JPEG.

RAVEN Cloud - Context menu in Business Process Explorer, and process attributes pane

The Business Process Explorer, in the left sidebar, looks like a set of folders containing processes, although the “folders” are actually categories/tags, as in Google Docs: a process can be in more than one of these folders simultaneously if it relates to multiple categories, and the categories become metadata on the processes “contained” within them. This becomes more obvious when you look at the attributes for a process, where the Process Category drop-down list allows multiple selections. There is context menu support in the explorer to take actions on a selected process (open, rename, delete, move, save as), and the Process Explorer can be collapsed to provide more screen real estate for the process itself.
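The folders-as-tags model is worth pausing on, since it trips up users who expect filesystem semantics: a category is a label attached to the process, not a location that contains it. A hypothetical sketch in Python (the names are mine, not RAVEN Cloud’s) shows how one process can appear under several “folders” at once:

```python
class ProcessModel:
    """A process that carries multiple category tags, Google Docs style:
    'containment' in a folder is just membership of a label."""
    def __init__(self, name, categories=()):
        self.name = name
        self.categories = set(categories)   # metadata on the process, not a location

def explorer_view(processes):
    """Build the folder-like explorer view: each category lists every
    process tagged with it, so one process can appear under several folders."""
    view = {}
    for p in processes:
        for c in p.categories:
            view.setdefault(c, []).append(p.name)
    return view

procs = [
    ProcessModel("Invoice approval", {"Finance", "Examples"}),
    ProcessModel("New hire onboarding", {"HR"}),
]
view = explorer_view(procs)
# "Invoice approval" shows up under both Finance and Examples
```

Deleting a category in a model like this removes a tag, not the processes tagged with it, which is exactly the behavior the multi-select Process Category attribute implies.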

The Process Explorer contains a few standard categories, including process examples and tutorials; there is a separate administration panel for managing the process examples, which can then be used by any user as templates for creating a new process. The tutorials highlight topics such as writing nested conditionals, and can be used in conjunction with the writing guide and their YouTube videos. I liked the one on correcting errors; I saw a bit of this in the demo when Dave accidentally misspelled a role name, resulting in an unwanted break in the flow, and didn’t specify the “else” clause of an “if” statement, resulting in an incomplete conditional.

Another feature that I saw in this version, which also brings them closer to BPMN compliance, is the inclusion of explicit start and end nodes in a process model. There can be multiple end nodes, but not multiple start nodes.
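That constraint has a simple graph formulation: exactly one node with no incoming flow (the start), and at least one node with no outgoing flow (an end). A hedged sketch of a validator (my own formulation, not RAVEN Cloud’s code):

```python
def check_start_end(nodes, flows):
    """Check the start/end rule described above: exactly one start node
    (no incoming flows) and at least one end node (no outgoing flows).
    `flows` is a list of (source, target) pairs over node names."""
    has_incoming = {target for _, target in flows}
    has_outgoing = {source for source, _ in flows}
    starts = [n for n in nodes if n not in has_incoming]
    ends = [n for n in nodes if n not in has_outgoing]
    return len(starts) == 1 and len(ends) >= 1

# One start, two ends: valid under the single-start/multiple-end rule
ok = check_start_end(
    ["start", "a", "b", "end1", "end2"],
    [("start", "a"), ("a", "b"), ("b", "end1"), ("a", "end2")],
)
```

A check like this is presumably what lets the tool flag an incomplete conditional, since a missing “else” branch leaves a dangling path rather than a well-formed end node.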

In addition to exporting as a JPEG image – useful for documentation but not for importing to another tool for analysis or execution – RAVEN Cloud now supports export to Visio or a choice of three XML formats: XMI 2.1, XPDL 2.0 and XPDL 2.1. The process model imported to Visio looked great, and the metadata at the process and shape level was preserved. Importing the XPDL into the BizAgi Process Modeler didn’t create quite as pretty a result: the process model was topologically correct, but the formatting needed some manual cleanup. In either case, this demonstrates the ability to have a business analyst without process modeling skills create a first version of a model, which can then be imported into another tool for further analysis and/or execution.

RAVEN Cloud - Error correction #4: role name fixed, process map regenerated

This still creates only simple process models: it supports unadorned activities, simple start and end events, sequence flows, OR gateways and swimlanes. It also isn’t BPMN compliant, although it’s close. They’re planning to add link events (off-page connectors) and AND gateways, although it’s not clear what natural language constructs would support those; they may have to use keywords instead, which weakens the natural language argument.

There will still be a free version, which does not support user-created categories or Visio/XPDL exports; the paid version will be available for subscription at $25/user/month, with volume discounts plus a 10% discount for an annual versus monthly subscription. An account can be either single-user or multi-user; by default, all models created within an account are visible for read-only access to all other users in that account, although access can be restricted further if required. A future version will include process model versioning and more access control options, since you can’t really have multi-user editing of a single process model unless you’re keeping some past versions. I think there’s also an opportunity for hybrid pricing similar to Blueworks Live, where a lower-cost user could have read-only permissions on models that were created by others, possibly with some commenting capabilities for feedback.

It’s all self-provisioned: you just set up your account, enter your credit card details if you’re going for the paid version, and add users by their name and email address; they’ll receive an email invitation to create their account login and profile. I didn’t ask if one RAVEN Cloud login/profile can be shared across multiple accounts; that would be interesting for people like me, who work with multiple organizations on their process models. I’ve seen something like this in Freshbooks, an online time tracking and invoicing application, where a single login (authentication) can have access to multiple accounts (authorization) so that Freshbooks customers can easily interact.

They’re also working on hosting RAVEN Cloud in a private cloud environment, so keep watching for that.

My verdict: still cool, but they need to re-work their subscription model a bit, and bring their notation in line with BPMN. They also have some challenges ahead in defining the language for new element types, but I’m sure that they’re up to it.

Smarter Infrastructure For A Smarter Planet

Kristof Kloeckner, IBM’s VP of Strategy & Enterprise Initiatives, Systems and Software, and CTO of Cloud Computing, delivered today’s keynote on the theme of a smarter planet and IBM’s cloud computing strategy. Considering that this is the third IBM conference that I’ve been to in six months (Impact, IOD and now CASCON), there’s not a lot new here: people + process + information = smarter enterprise; increasing agility; connecting and empowering people; turning information into insights; driving effectiveness and efficiency; blah, blah, blah.

I found it particularly interesting that the person in charge of IBM’s cloud computing strategy would make a comment from the stage that he could see audience members “surreptitiously using their iPads”, as if those of us using an internet-connected device during his talk were not paying attention or connecting with his material. In actual fact, some of us (like me) are taking notes and blogging on his talk, tweeting about it, looking up references that he makes, and other functions that are more relevant to his presentation than he understands.

I liked the slide that he had on the hype versus enterprise reality of IT trends, such as how the consumerization of IT hype is manifesting in industrialization of IT, or how the Big Switch is becoming reality through multiple deployment choices ranging from fully on-premise to fully virtualized public cloud infrastructure. I did have to laugh, however, when he showed a range of deployment models in which he labeled the on-premise enterprise data center as a “private cloud”, along with enterprise data centers that are on-premise but operated by a 3rd party, and enterprise infrastructure that is hosted and operated by a 3rd party for an organization’s exclusive use. It’s only when he gets into shared and public cloud services that he reaches what many of us consider to be “cloud”: the rest is just virtualization and/or managed hosting services where the customer organization still pays for the entire infrastructure.

It’s inevitable that larger (or more paranoid) organizations will continue to have on-premise systems, and might combine them with cloud infrastructure in a hybrid cloud model; there’s a need to have systems management that spans across these hybrid environments, and open standards are starting to emerge for cloud-to-enterprise communication and control.

Kloeckner feels that one of the first major multi-tenanted platforms to emerge (presumably amongst their large enterprise customers) will be databases; although it seems somewhat counterintuitive that organizations nervous about the security and privacy of shared services would use them for their data storage, in retrospect, he’s probably talking about multi-tenanted on-premise or private hosted systems, where the multiple tenants are parts of the same organization. I do agree with his concept of using cloud for development and test environments – I’m seeing this as a popular solution – but believe that public cloud infrastructure will have the biggest near-term impact on small and medium businesses, by driving down their IT costs, and on cross-organization collaborative applications.

I’m done with CASCON 2010; none of the afternoon workshops piqued my interest, and tomorrow I’m presenting at a seminar hosted by Pegasystems in downtown Toronto. As always, CASCON has been a great conference on software research of all types.

SaaS BPM at Surrenda-link

Bruce Spicer of Keystar Consultancy presented on a project that he did with Surrenda-link Investment Management to implement Appian cloud-based BPM for the process around procuring US life settlement assets (individual life insurance policies) to become part of their investment funds. They were specifically looking at a software as a service offering for this, in order to reduce cost and risk (considering the small size of their IT group), since SaaS allows them to scale up and down seamlessly without increasing costs significantly. They’ve built their own portal/user interface, using Appian Anywhere as the underlying process and analytics engine; it surprises me a bit that they’re not using more of the out-of-the-box UI.

They ran over schedule and over budget, mostly because they (admittedly) screwed up the process mapping due to immature processes, inexperience with process analysis, and inexperience with gathering requirements versus just documenting the as-is state. Even worse, someone senior signed off on these incorrect process models, which were then used for initial development in the proof of concept before corrections were made. They made some methodology corrections after that, improving their process analysis by looking at broad processes before doing a detailed view of a functional silo, and moving to Agile development methodologies. Even with the mistakes that were made, they’re in production and on track to achieve their three-year ROI.

This should be a compelling case study, but maybe because it was just after lunch, or maybe because his presentation averaged 120+ words per slide, I had a hard time getting into this.