HandySoft Process Intelligence and User Experience

Wow, has it really been a month since I last blogged? A couple of weeks’ vacation, general year-end busyness and a few non-work side projects have kept me quiet, but it’s time to get back at it. I have a few partially-finished product briefings sitting around, and thought it best to get them out before the vendors come out with their next versions and completely obsolesce these posts. 🙂

I had a chat with Garth Knudson of HandySoft in late November about the latest version of their BizFlow product, specifically around the new reporting capabilities and their WebMaker RIA development environment. Although these don’t show off the core BPM capabilities in their product suite (which I reviewed in late 2009), these are two well-integrated tools that allow for easy building of reports and applications within a BizFlow BPM environment. I always enjoy talking with Garth because he says good things about his competitors’ products, which means that not only does he have good manners, but he takes enough care to learn something about the competition rather than just tarring them all with the same brush.

We first looked at their user-driven reporting – available from the My AdHoc Reports option on the BizFlow menu – which is driven by OEM versions of the Jaspersoft open source BI server components; by next year, they’ll have the entire Jaspersoft suite integrated for more complete process analytics capabilities. Although you can already monitor the current processes from the core BizFlow capability, the ad hoc reporting add-on allows users (or more likely, business analysts) to define their own reports, which can then be run on demand or on a schedule.

[Image: HandySoft BizFlow Advanced Reporting - select data domain]

If you’ve seen Jaspersoft (or most other ad hoc reporting tools) at work, there isn’t much new here: you can select the data domain from the list of data marts set up by an administrator, then select the type of report/graph, the fields, filtering criteria and layout. It’s a bit too techie for the average user to actually create a new report definition, since it provides a little too much close contact with the database, such as displaying the actual SQL field names instead of aliases, but once the definition is created, it’s easy enough to run from the BizFlow interface. Regular report runs can be scheduled to output to a specific folder in a specific format (PDF, Excel, etc.), based on the underlying Jaspersoft functionality.

The key integration points with BizFlow BPM, then, are the ability of an administrator to include process instance data in the data marts as well as any other corporate data, allowing for composite reporting across sources; and access to the report definitions in the My AdHoc Reports tab.

The second part of the demo was on their WebMaker application development environment. Most BPM suites these days have some sort of RIA development tool, allowing you to build user forms, screens, portals and dashboards without using a third-party tool. This is driven in part by the former lack of good tools for doing this, and in part by the major analyst reports that state that a BPMS has to have some sort of application development built in to it. Personally, I’m torn on that: most BPMS vendors are not necessarily experts at creating application development tools, and making the BPMS capabilities available for consumption by more generic application development environments through standard component wrappers fits better with a best-of-breed approach that I tend to favor. However, many organizations that buy a BPMS don’t have modern application development tools at all, so the inclusion of at least an adequate one is usually a help.

[Image: HandySoft BizFlow WebMaker - specify field visibility]

HandySoft’s WebMaker is loosely coupled with BizFlow, so it can be used for any web application development, not just BPM-related applications. It does integrate natively with BizFlow, but can also connect with any web service or JDBC-compliant database (as you would expect) and uses the Model-View-Controller (MVC) paradigm. For a process-based application, you define the process map first, then create a new WebMaker project, define a page (form), and connect the page to the process definition. Once that’s done, you can drag the process variables directly onto the form to create the user interface objects. There’s a full array of on-form objects available, including AJAX partial pages, maps, charts, etc., as well as the usual data entry fields, drop-downs and buttons. Since the process parameters are all available to the form, the form can change its appearance and behavior depending on the process variables: for example, a partial page group can be enabled or disabled based on the specific step in the process or the value of the process instance variables at that step. This allows a single form to be used for multiple steps in a process that require a similar but not identical look and feel, such as a data entry screen and a QA screen; alternatively, multiple forms can be defined and assigned to different steps in the same process.
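
The pattern here – toggling form sections based on the current step and the process variable values at that step – is easy to sketch. This is purely illustrative and not WebMaker’s actual API; the rule structure and all names are my own assumptions:

```python
# Illustrative sketch of step/variable-driven form visibility.
# Not WebMaker's API: the rule format and names here are hypothetical.

def visible_sections(step, process_vars, rules):
    """Return the set of form sections enabled for this step."""
    sections = set()
    for section, predicate in rules.items():
        if predicate(step, process_vars):
            sections.add(section)
    return sections

# One form reused for a data-entry step and a QA step:
rules = {
    "entry_fields": lambda step, v: step == "data_entry",
    "qa_checklist": lambda step, v: step == "qa_review",
    "escalation":   lambda step, v: v.get("amount", 0) > 10000,
}

print(visible_sections("qa_review", {"amount": 25000}, rules))
```

The same rule table serves every step of the process, which is what lets a single form cover both the data entry and QA screens.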

To be clear, WebMaker is not a tool for non-technical people: although a trained business analyst could probably get through the initial screen designs, there is far too much technical detail exposed if you want to do anything except very vanilla static forms; the fact that you can easily expose the MVC execution stack is a clue that this is really a developer tool. It is, however, well-integrated with BizFlow BPM, allowing the process instance variables to be used in WebMaker, and the WebMaker forms to be assigned to each activity using the Process Studio.

HandySoft is one of the small players in the BPMS market, and has focused on ad hoc and dynamic processes from the start. Now that all of the BPMS vendors have jumped into the dynamic BPM fray, it will be interesting to see if these new BizFlow tools round out their suite sufficiently to compete with the bigger players.

RAVEN Cloud General Release: Generate Process Maps From Natural Language Text

Back in May, I wrote about a cool new cloud-based service called RAVEN Cloud, which translated natural language text into process maps. As I wrote then:

You start out either with one of the standard text examples or by entering your own text to describe the process; you can use some basic text formatting to help clarify, such as lists, indenting and fonts. Then, you click the big red button, wait a few seconds, and voilà: you have a process map. Seriously.

They’re releasing RAVEN Cloud for general availability today (the beta sticker is still on the site as of the time of this writing), and I had an update demo with Dave Ruiz a couple of days ago. There are two major updates: UI enhancements, particularly the Business Process Explorer for process organization and categorization, and exporting to something other than JPEG.

[Image: RAVEN Cloud - Context menu in Business Process Explorer, and process attributes pane]

The Business Process Explorer, in the left sidebar, looks like a set of folders containing processes, although the “folders” are actually categories/tags, like in Google Docs: a process can be in more than one of these folders simultaneously if it relates to multiple categories, and the categories become metadata on the processes “contained” within them. This becomes more obvious when you look at the attributes for a process, where the Process Category drop-down list allows multiple selections. There is context menu support in the explorer to take actions on a selected process (open, rename, delete, move, save as), and the Process Explorer can be collapsed to provide more screen real estate for the process itself.
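
The “folders are really tags” model is just a many-to-many mapping between processes and categories: the folder view is the inverted index. A minimal sketch, with names of my own invention rather than RAVEN Cloud’s data model:

```python
# Minimal sketch of tag-style categories: a process can belong to
# several categories at once, and each "folder" is just the set of
# processes carrying that tag. Names are illustrative only.
from collections import defaultdict

process_categories = {
    "Onboard employee": {"HR", "Examples"},
    "Approve invoice":  {"Finance"},
}

def processes_in(category):
    """Invert the mapping: which processes appear 'inside' a category folder?"""
    index = defaultdict(set)
    for process, cats in process_categories.items():
        for cat in cats:
            index[cat].add(process)
    return index[category]

print(processes_in("HR"))  # the same process also appears under "Examples"
```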

The Process Explorer contains a few standard categories, including process examples and tutorials; there is a separate administration panel for managing the process examples, which can then be used by any user as templates for creating a new process. The tutorials highlight topics such as writing nested conditionals, and can be used in conjunction with the writing guide and their YouTube videos. I liked this one on correcting errors; I saw a bit of this in the demo when Dave accidentally misspelled a role name, resulting in an unwanted break in the flow, and didn’t specify the “else” clause of an “if” statement, resulting in an incomplete conditional.

Another feature that I saw in this version, which also brings them closer to BPMN compliance, is the inclusion of explicit start and end nodes in a process model. There can be multiple end nodes, but not multiple start nodes.

In addition to exporting as a JPEG image – useful for documentation but not for importing to another tool for analysis or execution – RAVEN Cloud now supports export to Visio or a choice of three XML formats: XMI 2.1, XPDL 2.0 and XPDL 2.1. The process model imported to Visio looked great, and the metadata at the process and shape level were preserved. Importing the XPDL into the BizAgi Process Modeler didn’t create quite as pretty a result: the process model was topologically correct, but the formatting needed some manual cleanup. In either case, this demonstrates the ability to have a business analyst without process modeling skills create a first version of a model, which can then be imported into another tool for further analysis and/or execution.
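
Since XPDL is just XML, a downstream tool’s import amounts to walking the activity and transition elements. A rough sketch of reading activities out of a minimal XPDL 2.x document (the element names follow the WfMC schema, but the fragment itself is invented):

```python
# Rough sketch: reading activity names out of an XPDL 2.x document,
# the way an importing tool would. The fragment below is invented;
# the element names follow the WfMC XPDL schema.
import xml.etree.ElementTree as ET

XPDL = """<Package xmlns="http://www.wfmc.org/2008/XPDL2.1">
  <WorkflowProcesses>
    <WorkflowProcess Id="claim" Name="Claim Intake">
      <Activities>
        <Activity Id="a1" Name="Receive claim"/>
        <Activity Id="a2" Name="Review claim"/>
      </Activities>
      <Transitions>
        <Transition Id="t1" From="a1" To="a2"/>
      </Transitions>
    </WorkflowProcess>
  </WorkflowProcesses>
</Package>"""

NS = {"x": "http://www.wfmc.org/2008/XPDL2.1"}
root = ET.fromstring(XPDL)
activities = [a.get("Name") for a in root.findall(".//x:Activity", NS)]
print(activities)
```

This is also why the BizAgi import was topologically correct but visually rough: the activities and transitions carry over cleanly, while layout information is the importing tool’s problem.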

[Image: RAVEN Cloud - Error correction #4: role name fixed, process map regenerated]

This still creates only simple process models: it supports unadorned activities, simple start and end events, sequence flows, OR gateways and swimlanes. It also isn’t BPMN compliant, although it’s close. They’re planning to add link events (off-page connectors) and AND gateways, although it’s not clear what natural language constructs would support those, and they may have to use keywords instead, which weakens the natural language argument.

There will still be a free version, which does not support user-created categories or Visio/XPDL exports, and the paid version will be available for subscription at $25/user/month, with volume discounts plus a 10% discount for an annual versus monthly subscription. An account can be either single-user or multi-user; by default, all models created within an account are visible for read-only access to all other users in that account, although access can be restricted further if required. A future version will include process model versioning and more access control options, since you can’t really have multi-user editing of a single process model unless you’re keeping some past versions. I think there’s also an opportunity for hybrid pricing similar to Blueworks Live, where a lower-cost user could have read-only permissions on models that were created by others, possibly with some commenting capabilities for feedback.

It’s all self-provisioned: you just set up your account, enter your credit card details if you’re going for the paid version, and add users by their name and email address; they’ll receive an email invitation to create their account login and profile. I didn’t ask if one RAVEN Cloud login/profile can be shared across multiple accounts; that would be interesting for people like me, who work with multiple organizations on their process models. I’ve seen something like this in Freshbooks, an online time tracking and invoicing application, where a single login (authentication) can have access to multiple accounts (authorization) so that Freshbooks customers can easily interact.

They’re also working on hosting RAVEN Cloud in a private cloud environment, so keep watching for that.

My verdict: still cool, but they need to re-work their subscription model a bit, and bring their notation in line with BPMN. They also have some challenges ahead in defining the language for new element types, but I’m sure that they’re up to it.

BPM Meets Goldilocks: Picking a First Process That’s Just Right

The key to picking the right process for your first BPM implementation is a bit like Goldilocks: you don’t want one too big, or too small, but just right. We’ve covered this topic in this week’s article in the series that I’m writing with Global 360’s Steve Russell, published over on bpm.com.

This is the first of a six-part series, with an introductory post published a couple of weeks ago to talk about the entire series. Coming up next: gaining business buy-in for project success.

IBM Case Manager In Depth

I had a chance to see IBM’s new Case Manager product at IOD last month, and last week Jake Levirne, the product manager, gave me a more complete demo. If you haven’t read my earlier product overview from IOD as well as the pre-IOD briefing on Case Manager and related products, the business analyst view, a quick bit on customizing the UI and the technical roundtable, you may want to do so now since I’ll try not to repeat too much of what’s there already.

Runtime

[Image: IBM Case Manager Runtime - CSR role view in portal]

We started by going through the end-user view of an application for insurance claims. There’s a role-based portal interface, and this user role (CSR) sees a list of cases, can search for a case based on any of the properties, or add a new case – fairly standard functionality. In most cases, as we’ll see later, cases are created automatically on the receipt of a specific document type, but there needs to be the flexibility to have users create their own as well. Opening a case, the case detail view shows case data (metadata) and case information, which comprises documents, tasks and history that are contained within the case. There’s also a document viewer, reminding us that case management is content-centric; the entire view is a bit reminiscent of the previous Business Process Framework (BPF) case management add-on, which has definitely contributed to Case Manager in a philosophical sense if not any of the actual underlying technology.

For those FileNet geeks in the crowd, a case is now a native content type in the FileNet content repository, rather than a custom object type as was used in the BPF; logically, you can think of this as a case folder that contains everything related to the case. The Documents tab is pretty straightforward – a list of documents attached to the case – and the History tab shows a list of events on the case, including documents being added and tasks started/completed. The interesting part, as you might have guessed, is in the Tasks tab, which shows the tasks (small structured processes, in reality) assigned to this case, either as required or optional tasks. Tasks can be added to a case at design time or runtime; when added at runtime, these are predefined processes with parameters that the user may be able to customize, but the end user can’t change the definition of a task. This gives some flexibility to the user – they can choose whether or not to execute the optional tasks, they can execute tasks in any order, and they can add new tasks to a case – but doesn’t allow the user to create new tasks: they are always selecting from a predefined list of tasks. Depending on the task definition, tasks for their case may end up assigned to them or to someone else, or to a shared queue corresponding to a role. This results in the two lists that we saw back in the first portal view: one is a list of cases based on search criteria, and the other is a list of tasks assigned to this user or a shared queue on which they are working.

[Image: IBM Case Manager Runtime - case task view]

Creating a new case is fairly simple for the user: they click to add a case, and are presented with a list of instructions for filling out the initial case data, such as the date of loss and policy number in our insurance claim example. The data that can be entered using the standard metadata widget is pretty limited and the form isn’t customizable, however, and often there is an e-form included in the case that is used to capture more information. In this situation, there is a First Notice of Loss e-form that the user fills out to gather the claim data; this e-form is contained as a document in the case, but also synchronizes some of its fields with the case metadata. This ability to combine capabilities of documents, e-forms and folders has been in FileNet for quite a while, so it’s no surprise that they’re leveraging it here. It is important to note, however, that this e-form would have to be designed in the Lotus forms designer, not in the Case Manager design tools: a reminder that the IBM Case Manager solution is a combination of multiple tools, not a single monolithic system. Whether this is a good or bad thing is a bit of a philosophical discussion: in the case of e-forms, for example, you may want to use this same form in other applications besides Case Manager, so it may make sense that it is defined independently, but it will require additional design skills.

Once the case is created, it will follow any initial process flows that are assigned to it, and can kick off manual tasks. For example, there could be automated activities that update a claims system with the data captured on the FNOL form, and manual tasks created and assigned to a CSR to call the third party’s insurance carrier. The underlying FileNet content engine has a lot of content-centric event handling baked right into it, so the ability to do things such as trigger processes or other actions based on content or metadata updates has been there all along, and is being used for any changes to a case or its contents.
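
The underlying pattern – register handlers that fire when content of a given type is added, so that a task gets created automatically – can be sketched generically. This is an illustration of the pattern only, not the FileNet event API; all names are invented:

```python
# Generic sketch of content-event dispatch: when a document of a given
# type is added to a case, registered handlers fire (e.g. to create a
# task). This illustrates the pattern only; it is not FileNet's API.

handlers = {}

def on_document_added(doc_type, handler):
    """Register a handler to run when a document of doc_type arrives."""
    handlers.setdefault(doc_type, []).append(handler)

def add_document(case, doc_type):
    """File the document into the case, then fire any handlers."""
    case.setdefault("documents", []).append(doc_type)
    for handler in handlers.get(doc_type, []):
        handler(case)

# Arrival of a First Notice of Loss creates a follow-up task:
on_document_added(
    "FNOL",
    lambda case: case.setdefault("tasks", []).append("Call third party's carrier"),
)

case = {}
add_document(case, "FNOL")
print(case["tasks"])
```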

Design Time

We moved over to the Case Manager Builder to look at how designers – business analysts, in IBM’s view – define new case types. At the highest level, you first define a “solution”, which can include multiple case types. Although the example that we went through used one case type per solution, we discussed some situations where you might want to have multiple case types in a single solution: for example, a solution for a customer service desktop, where there was a different case type defined for each type of request. Since case types within a single solution can share user interface designs, document types and properties, this can reduce the amount of design work if you plan ahead a bit.

[Image: IBM Case Manager Builder - define solution properties]

For each solution, you define the following:

  • Properties (metadata)
  • Roles and the in-baskets (shared work queues) to which they have access
  • Document types
  • In-baskets associated with this solution
  • Case types that make up this solution

Then, for each case type within a solution, you define the following:

  • The document type that will be used to trigger the creation of a case of this type, if any. Cases can be added manually, as we saw in the runtime example, or can be triggered by other events, but the heavily content-centric focus of Case Manager assumes that you might usually want to kick off a case automatically when a certain document type is added to the content repository.
  • The default Add Case page, which is a link to a previously-defined page in the IBM Mashup Center that will be used as the user interface on selecting the Add Case button.
  • The default Case Details page, which is a link to the Mashup Center page for displaying a case.
  • Optionally, overrides for the case details page for each role, which allows different roles to see different views of the case details.
  • Properties for this case type, which can be manually inherited from the solution level or defined just at this level. Not all solution properties are automatically inherited by each case type, since it was felt that this would make it unnecessarily confusing, but any of the solution properties can be selected for exposure at the case level.
  • The property views (subsets) that are displayed in the case summary, case details and case search views. If more than about a dozen properties are used, then IBM recommends using an e-form instead of the standard views, which are pretty limited in terms of display customization. A view can include a group of properties for visual grouping.
  • Case folders to organize the content within a case.
  • Tasks associated with the case, grouped by required and optional tasks. Unlike the user interfaces, document types and properties, task definitions are not shared across case types within a solution, which means that similar or identical tasks must be redefined for each case type. This is definitely an area that they can improve in the future; if their claim of loosely-coupled cases and processes is to be fully realized, then task/process definitions should be reusable at least across case types within a solution, if not across solutions.

[Image: IBM Case Manager Builder - Step Editor]

Although part of the case type definition, I’ll separate out the task definition for clarity. For each task within a case type, you define:

  • As noted above, whether it is required or optional for this case type.
  • Whether the task starts automatically or manually, or if the user optionally adds the task to the case at runtime.
  • Inclusion of the task in a set. Sets provide visual grouping of tasks within a case, but also control execution: a set can be specified as all-inclusive (all tasks execute if any of the tasks execute) or mutually exclusive (only one of the tasks in the set can be executed). The mutually exclusive situation could be used to create a manner of case subtypes, instead of using multiple case types within a solution, where the differences between the subtypes are minimal.
  • Preconditions for the task to execute, that is, the task triggers. In many cases, this will be the case start, but could also be when a document of a specific type is added to the case, or a case property value is updated to meet certain conditions, including combinations of property values.
  • Design comments, which could be used simply as documentation, but are primarily intended for use by a non-technical business analyst who created the case type definition up to this point but wants to pass off the creation of the actual process flow to someone more technical.
  • The process flow associated with this task, using the visual Step Editor. This allows the roles defined for the solution to be added as swimlanes, and the human-facing steps to be plotted out. This supports branching as well as sequential flow, but no automated steps; however, any automated steps that are added via the full Process Designer will appear in the uneditable grey lanes at the top of the Step Editor map. If you’ve used the Process Designer before, the step properties at the left of the Step Editor will appear familiar: they’re a subset of the step properties that you would see in the full Process Designer, such as step deadlines and allowing reassignment of the step to another user.

Being long acquainted with FileNet BPM, I had a number of questions about the connection between the Step Editor and the full BPM Process Designer; Levirne handled some of these, and I also had a few technical discussions at IOD that shed light on this. In short, the Step Editor creates a full XPDL process definition and stores it in the content repository, which is the same as what happens for any process definition created in the Process Designer. However, if you open this process definition with the Process Designer, it recognizes that it was created using the Case Manager Step Editor and performs some special handling. From the Process Designer, a more technical designer can add any system steps required (which will appear, but not be editable, in the Step Editor): in other words, they’ve implemented a fully shared model used by two different tools: the Case Builder Step Editor for a less technical business analyst, and the BPM Process Designer for a developer.

[Image: IBM Case Manager Builder - deploy solution]

As with any process definition, the Case Manager task process definitions must be transferred to the process engine before they can be used to instantiate new processes: this is done automatically when the solution is deployed.

Deploying a solution to a test environment is a one-click operation from the Case Manager Builder main screen, although moving that to another environment isn’t quite as easy: the new release of the P8 platform allows a Case Manager solution to be packaged in order to move it between servers, but there’s still some manual work involved.

We wrapped up with a discussion of the other IBM products that integrate with Case Manager, some easier than others:

  • Case Manager includes a limited license of ILOG JRules, but it’s not integrated in the Case Manager Builder environment: it must be called as a web service from the Process Designer. There are already plans for better integration here, which is essential.
  • Content Analytics for data mining and analytics on the case metadata and case content, including the content of attached documents.
  • Case Analyzer, which is a version of the old BPM Process Analyzer, with enhancements to show analytics at the case level and the inclusion of custom case properties to provide a business view as well as an operational view in dashboards and reports.

They’re working on better integration between Case Manager and the WebSphere product line, including both WebSphere Process Server and Lombardi; this will be necessary to compete with vendors that offer a single solution covering the full range of BPM functionality from structured processes to completely dynamic case management.

Built on one of the best industrial-strength enterprise content management products around, IBM Case Manager will definitely see some adoption in the existing IBM/FileNet client base: adding this capability onto an existing FileNet Content Manager repository could provide a lot of value with a minimal amount of work for the customer, assuming that they actually allow their business analysts to do the work that IBM intends them to. In spite of the power, however, there is a lack of flexibility in the runtime task definition that may make it less competitive in the open market.

[Video: IBM Case Manager demo]

BonitaSoft Open Source BPM

I recently had my first briefing with BonitaSoft about their open source BPM product. Although the project has been going on for some time, with the first release in 2001, the company is only just over a year old; much of the development has been done as part of BPM projects at Bull. Their business model, like many open source companies, is to sell services, support and training around the software, while the software is available as a free download and supported by a broader community. They partner with a number of other open source companies – Alfresco for content management, SugarCRM for CRM, Jaspersoft for BI – in order to provide integrated functionality without having to build it themselves. They’ve obviously hit some critical mass point in terms of functionality and market, since their download numbers have increased significantly in the past year and have just hit a half million.

A French company, they have a strong European customer base, and a growing US customer base, mostly comprising medium and large customers. They’ve just announced the opening of two US offices, and the co-founder/CEO Miguel Valdés Faura is moving to the San Francisco area to run the company from there; that’s the second European company that I’ve heard of lately where the top executives are moving to the Bay area, indicating that the “work from anywhere” mantra doesn’t necessarily pan out in practice. They’ve hired Dave Cloyd away from open source content management company Nuxeo as a key person in building the US market; he was VP of sales at Staffware prior to the TIBCO acquisition, so knows both the open source and BPM sides.

Open source BPM solutions have been around for a while, but the challenges are the same as with any open source project: typically, it takes greater technical skills to get up and running with open source, especially if it doesn’t do everything that you need and has to be integrated with other (open source or not) products. In many cases, open source BPM provides the process engine embedded inside a larger solution created by a systems integrator or business process outsourcing firm; in other words, it’s more like a toolkit for adding process capabilities into another application or environment. BonitaSoft considers jBPM, Activiti and ProcessMaker to be in this “custom BPM development” camp, as opposed to the usual commercial players in the “standalone BPM suites” category; they see themselves as being able to play on both sides of that divide.

Taking a look (finally, after 35 minutes of PowerPoint) at a product demo, I saw their four main components of process modeling, process development, process execution, and process administration and monitoring.

The modeler is a desktop Eclipse-based application providing BPMN 2.0 modeling, including importing of BPMN models from other tools. There is starting to be less distinction between these tools, as all the vendors start to pick up the user interface tricks that make process modeling work better: auto-alignment, automatic connector creation, and tool tips with the most likely next element to add. The distinguishing characteristics start to become how the non-standard modeling aspects are handled: data modeling and integration with other systems using proprietary connectors that go beyond the capabilities of a simple web services call, for example.

[Image: Bonitasoft BPM - Alfresco connector actions]

I like what they’ve done with some of the out-of-the-box connectors: the SharePoint and Alfresco connectors allow you to browse and select a specific document repository event (such as checking in a file) directly from within the process designer, and associate it with an activity in the process model. I saw a fairly comprehensive database connector that allowed for graphical query creation, and this connection can be used to transfer a data model from a database to the process model to build out the process instance data. There’s a wizard to create your own connectors, or you can browse the BonitaSoft community to find connectors created by others – a free marketplace for incremental functionality.

You can create a web form for a particular step in the process, which will auto-generate based on the defined data model; new fields can then be added based on external database calls, and the layout adjusted in a graphical editor. Effectively, this capability allows a quick process-based application to be created with a minimum of code, just using the forms designer and connectors to databases and other systems.

Key performance indicators (KPIs) can be defined in the process modeler; these are effectively data objects that can be populated by any step of the process, then reported on via a BI engine such as the integrated Jaspersoft.
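
KPIs-as-data-objects reduces to a simple idea: any step appends a measurement under a KPI name, and the reporting engine aggregates later. A minimal sketch of that structure (the names and shape are mine, not Bonita’s KPI API):

```python
# Sketch of KPI-as-data-object: any step of any instance records a
# measurement against a named KPI; reporting aggregates afterwards.
# The structure here is illustrative, not Bonita's actual KPI API.
from collections import defaultdict
from statistics import mean

kpis = defaultdict(list)

def record_kpi(name, value):
    """Called from any process step to contribute a data point."""
    kpis[name].append(value)

# Different steps of different instances report into the same KPI:
record_kpi("approval_time_hours", 4.0)
record_kpi("approval_time_hours", 6.0)

print(mean(kpis["approval_time_hours"]))
```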

Although they describe their modeling as collaborative, it’s asynchronous collaboration: the model and associated forms are saved to the Bonita repository, where they are properly versioned and can be checked out by another user.

[Image: Bonitasoft BPM - user inbox view]

The end-user experience uses an inbox metaphor in a portal, with the forms displayed as the user interacts with the process. Individual process instances (or entire processes) can be tagged with private labels by a user – similar to labels applied to conversations in Gmail – and categories can be applied to processes so that every instance of that process has the same category, visible to all users. Love the instance and process tagging: this is a capability that I’ve been predicting for years, and am just starting to see emerge.

I was surprised by the lack of flexibility in the runtime environment: the only change that a user can make to a process at runtime is to reassign a task, although they are working on other features to handle more dynamic situations.

The big product announcements from last month, with the release of version 5.3, included process simulation and support for cloud environments with multi-tenancy and REST APIs. However, by this point we were nearing the end of our time together and I didn’t get all the details; that will have to wait for another day, or you can check out the brief videos on their site.

Getting Started With BPM: A Series

I’ve been working with Steve Russell, SVP of Engineering at Global 360, to create a series of articles on how to get started with BPM. Since this is sponsored content, I won’t publish it here, but will point you over to a link each week when it is published on BPM.com.

This week, we introduce the series with a short description of each of the upcoming six articles:

  • Picking the right first process
  • Gaining business buy-in for project success
  • Ensuring user adoption
  • Structured versus unstructured work
  • Measuring success
  • Moving to wider adoption across the organization

We’re not wedded to these ideas (although I’m already working on the first, so that’s unlikely to change), and if you have ideas of things that you’d like to see included in the series, add a comment here and we’ll work it in if possible.

IBM Blueworks Live Sneak Peek

When I wrote a post yesterday about the slow convergence between BPM and social software, I had forgotten about the analyst briefing that I had scheduled with IBM later in the day for a sneak peek of the new Blueworks Live site. Lombardi has always been at the forefront of the integration of social and BPM, although previously focused purely on the process discovery/design phase, and the IBM acquisition has allowed Lombardi’s social process discovery to be combined with IBM’s online BPM community to create something greater than the sum of the parts. For all my criticism of IBM, they have some incredible pockets of innovation that sometimes burst out into actual product.

Yesterday’s session was hosted by Phil Gilbert; apparently this was the first public viewing of the site, which will be officially unveiled this Saturday, November 20th. Phil, who I’ve known for a number of years through his time at Lombardi, explained some of the motivation for Blueworks Live, and in a weird echo of the post that I wrote just hours before, he said “BPM is ready to meet social networking”. They are trying to reinvent the public BPM community, while avoiding the problems that they perceive with other vendors’ community sites:

  • They are mainly product support sites
  • They have high membership numbers, but low participation
  • A majority of the information is from the sponsor company
  • The customer perception is that these sites are proprietary and biased, and that there are already too many sources of information on BPM

Blueworks Live Community

In their search for a truly public BPM community, they turned to that universal public community: Twitter. They are taking the public BPM-focused Twitter stream, based on both BPM-focused users (including everyone on the analyst call, said Phil) and the #bwlive hashtag, to create a public stream that will be displayed alongside a user’s private activity stream in Blueworks Live. The private activity stream is based on processes and projects in which the user is a participant, or that the user has selected to follow.
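The composition rule for that public stream is simple enough to sketch: a tweet qualifies if its author is on the curated list of BPM-focused users, or if its text carries the community hashtag. This is my own illustrative sketch of the rule as described on the call, not IBM’s implementation; all names are assumptions.

```python
def in_public_stream(tweet: dict, followed_users: set, hashtag: str = "#bwlive") -> bool:
    """A tweet belongs in the public stream if its author is on the
    curated list of BPM-focused users, or its text contains the
    community hashtag (case-insensitive)."""
    return (tweet["user"] in followed_users
            or hashtag.lower() in tweet["text"].lower())

bpm_users = {"skemsley", "pgilbert"}  # hypothetical curated list
tweets = [
    {"user": "skemsley", "text": "New post on social BPM"},
    {"user": "random42", "text": "Watching the #BWLive launch"},
    {"user": "random99", "text": "Lunch was great"},
]
public = [t for t in tweets if in_public_stream(t, bpm_users)]
print(len(public))  # → 2
```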

Blueworks Live is a combination of the previous BPM BlueWorks Beta community and the (Lombardi) BPM Blueprint process discovery tool; although BPM BlueWorks Beta had some process modeling tools, they were not of the sophistication of Blueprint. However, it’s more than just community and process modeling: Blueworks Live also includes process automation for the long tail of low-volume administrative processes, that is, those simple human-based processes that don’t warrant a BPM implementation involving IT. IBM estimates that 75% of all business processes fall into this category – including processes from HR, IT, accounting, marketing and a number of other areas – and most end up being done in email.

IBM Blueworks Live

We moved on to a product demo by Cliff Vars, a product manager, who started with the view of the site by unregistered (that is, unpaid) users. Without signing in, you can view:

  • Under the Community section, the aforementioned public BPM Twitter stream, made up of specific Twitter users and tweets containing the #bwlive hashtag. Although the pricing chart indicated that free users could see both public and private communities, we only saw the public BPM Twitter stream before logging in.
  • Under the Library section, blog posts migrated from the old BPM BlueWorks Beta site. I believe that a lot of the content from the old site was written by IBM employees and was moderated, so can’t exactly be considered public community content.
  • Also in the Library section, a number of process templates that appear to be in the (Lombardi) Blueprint format – it’s not clear how useful these would be if you weren’t a paid user, since you couldn’t use the Blueprint modeler to open them.

Creating a Process Automation

We then logged on to take a look at how simple process automation works. In the logged-in view, the “Getting Started” section is replaced by the “Work” section, which contains all of the tasks assigned to the user, the process instances that they’ve launched, the ability to launch a new process instance, plus links to create a Blueprint process design or a new automated process. It’s important to recognize that there are two distinct types of processes here: complex processes modeled in Blueprint (the former Lombardi tool), which may eventually be transferred to an on-premise IBM Lombardi process engine for execution; and simple processes, which are modeled using a completely different tool and executed directly within the Blueworks Live site. When we look at process automation, it’s the latter that we’re seeing.

Creating a process automation in IBM Blueworks Live

To automate a process, then, you click the big green “Automate a Process” button to get started, then specify the following:

  • A process application name.
  • The process type, either “Simple Workflow” or “Checklist”. In the demo, we saw the simple workflow type, which is a linear sequence of tasks assigned to users; we didn’t get a look at the checklist type, so I’m not sure how its functionality differs. These are the only types available for automated processes in Blueworks Live, although they plan to add more in the future.
  • Select the space for the process definition, which might be a personal sandbox or a department such as Marketing.
  • Add instructions to be provided when an instance of the process is launched.
  • Configure some of the labels that will appear in the running process to make them more specific to the process.
  • Add one or more tasks, which will be executed sequentially in each process instance. For each task, specify the description, who the task is assigned to (or leave it blank to have it assigned at runtime), and whether the task is an approval step.
  • Share the process definition with participants of that space, who will then have it available as a process type to instantiate from their Work section.
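The “Simple Workflow” type described above – an ordered list of tasks, some of which are approvals and some of which are assigned only at runtime – can be sketched in a few lines. This is my own minimal model of the behavior we saw in the demo, not anything resembling the Blueworks Live implementation; all class and method names are assumptions.

```python
class Task:
    def __init__(self, description, assignee=None, approval=False):
        self.description = description
        self.assignee = assignee      # None means assigned at runtime
        self.approval = approval
        self.done = False
        self.outcome = None

class SimpleWorkflow:
    """Tasks run strictly in sequence: the next task only becomes
    available once the current one is completed."""
    def __init__(self, name, tasks):
        self.name = name
        self.tasks = tasks

    def current_task(self):
        # The first not-yet-done task, or None when the workflow is finished
        return next((t for t in self.tasks if not t.done), None)

    def complete_current(self, outcome="completed"):
        task = self.current_task()
        if task is None:
            raise RuntimeError("workflow already finished")
        if task.approval and outcome not in ("approved", "rejected"):
            raise ValueError("approval tasks must be approved or rejected")
        task.done, task.outcome = True, outcome
        return task

wf = SimpleWorkflow("Purchase request", [
    Task("Review request", assignee="alice", approval=True),
    Task("Issue purchase order", assignee="bob"),
])
wf.complete_current("approved")
print(wf.current_task().description)  # → Issue purchase order
```

The approval flag is the only branch in the model, which matches how constrained these automated processes are: strictly sequential, with approve/reject as the single decision point.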

The whole process creation took only a couple of minutes, and when we returned to the user’s Work section where we had started, the new process template was available in the sidebar.

Launching and Participating in a Process

We then logged on as a different user to create a process instance from that template. Since this user presumably has access to the space in which the process designer saved the process template, it appears in the sidebar of our Work section. Clicking that link kicks off a process instance:

  • The instructions specified by the process designer are displayed.
  • Fill in the name and details fields.
  • Add a desktop document as an attachment; this is uploaded and shared with all the participants.
  • Select a due date for each of the tasks.
  • For the task that wasn’t pre-assigned to a user, assign the user.
  • Launch it to kick off the first task.

Returning to the main Work section, we can now see that process instance in the “Work I’ve launched” tab, and can open and track its progress from there.

Launching a process in IBM Blueworks Live

When we move over to the Community section, we can now see our private activity stream, which includes two new events: first, that we launched the workflow, and second, that the first task in that workflow was received by the user to whom it was assigned. By default, all of the events for every process that we’ve launched will appear in our activity stream.

We then switched back to the original user, who was also the user to whom the first task in the process was assigned, to see what it looks like to participate in a process. An email was already waiting to tell us that we had a new task, complete with a link to the task, or we could have found the task directly in the Work section of Blueworks Live under the “Tasks assigned to me” tab. Regardless of how it was opened, we can then complete the task:

  • View the process name and details provided by the process originator.
  • View the attached document. It appears that we could also have added more documents at this point, although we didn’t see that.
  • Add a comment, which appears in a comments timeline on the side of the process information.
  • View the tasks to be completed. Since the first one is assigned to us, and it was an approval task, there are Approve and Reject buttons on the task.
  • Click the Approve button to mark the task as completed. I assume that tasks that are not approval tasks have a simple Complete button or something similar so that the participant can mark the task as complete, although we didn’t look at that.

Participating in a process in IBM Blueworks Live

There are a number of other options that appear to be available at this point, although we didn’t explore them, such as reassigning the remaining tasks to different users; essentially, though, this user is done with their task and the process. If we move to the Community area and look at the private activity stream for this user, we can see that in addition to creating and sharing the process template, the approval task also appears there.

Overall, although there’s nothing really new about this sort of easy sequential workflow design and execution, the user interface is clean and uncluttered, and pop-up tips on the fields assist the user on what to enter. Assuming that you can wrench your users away from using email for these processes, there won’t be much of a learning curve for them to create new processes on their own, and even less to use processes created by others. If you want to see this in action, there’s a Blueworks Live YouTube channel with a couple of videos on creating and participating in a process.

A user with administrative privileges can view some basic aggregate reports on these processes, including some graphical views of process template usage, user participation and on-time completion; this is generated as an Excel spreadsheet that is downloaded and viewed on the desktop, not as an integrated reporting or dashboard view. It’s very rudimentary, but may be sufficient for the types of processes that are likely to be automated using this tool.

To finish up, we also looked at the Library section again; as a logged-in user, we could now see some additional content areas, including links to Blueprint process models, which could then launch the familiar Blueprint environment within Blueworks Live for complex process discovery and modeling. As I mentioned earlier, this is a completely different modeling environment than the “process automation” that I described above; these processes will be exported to an on-premise IBM Lombardi process engine for execution.

There are three levels of Blueworks Live users:

  • Community, free, which allows you to view the public and private communities, although it’s not clear what the private community is in the case of a free user.
  • Contributor, $10/month, which adds all the functionality of creating and running the simple process applications that I’ve described above, plus the ability to review and comment on Blueprint process models.
  • Editor, $50/month, which adds the full Blueprint modeling capability.

Although paid users now have more functionality than former (paid) Blueprint users with the addition of the simple process automation, free users of the old BPM BlueWorks Beta site have lost a whole bunch of capabilities, unless we just skipped that part of the demo.

The Verdict

In a nutshell, Blueworks Live provides some private and public community functionality, allows you to create (Lombardi) Blueprint process designs, and automate simple processes. But these are two very different tools: the online mini processes with the Blueworks Live automation engine (based on two basic templates, workflow and checklist), and the Blueprint processes, some of which will be moved to an on-premise Lombardi system. Different interfaces, different engines, different everything except that they’re contained within the same portal.

The Twitter stuff is pretty useless for those of us who are already competent at monitoring Twitter using a tool such as Tweetdeck. I’m never going to go to Blueworks Live to look at the public Twitter stream; I probably already follow the same list of people in my BPM Twitter list, and if I want to see what’s happening with #bwlive, I’ll just add it as a search column. It’s probably good for Twitter newbies who haven’t figured out groups, hashtags or Tweetdeck yet; maybe that’s more representative of the expected user base.

Except for the Twitter stream, the only community content appears to be the current BlueWorks blog content, written mostly by IBM. The online execution isn’t really community, it’s process execution in a semi-collaborative space, which is different. The forums (mostly product/site help) and media library (including webinars, white papers and the various modeling tools such as strategy and capability maps) from the old BPM BlueWorks Beta site are missing, or at least not displayed in the version that we saw. Although Blueworks Live definitely has some improved functionality such as process execution, this is really a collection of non-integrated tools, and it’s not clear that they’ve reached their goals regarding a public BPM community.

They’re not the first to have cloud-based process execution, but they are IBM, and that lends some credibility to the whole notion of running your business processes outside the firewall. Like the entry of other large players into the cloud BPM marketplace, I believe that this will be a benefit to all cloud BPM providers since it will validate and enlarge the market. This validation of cloud-based BPM is a real game-changer, if not Blueworks Live itself.


Time For Enterprise 2.0 To Get Enterprisey

The funny thing about “Enterprise 2.0”, or social business software, is that it’s not very enterprisey: yes, it is deployed in enterprises, but it often doesn’t deal with the core business of an enterprise. You hear great stories about social software being used to strengthen weak ties through internal social networking, or fostering social production by using a wiki for project documents, but far fewer stories about using social software to actually run the essential business processes. Andrew McAfee recently wrote about his experience talking to a group of CIOs, and how they were seeing social software as becoming mainstream, but one comment struck me:

[The CIOs] weren’t too worried that their people would use the tools to waste time or goof off. In fact, quite the opposite; they were concerned that the busy knowledge workers within their companies might not have enough time to participate.

The fact that the knowledge workers had a choice of whether to participate tells me that the use of social business software is still somewhat discretionary in these companies, that is, it’s not running the core business operations; if it were, there wouldn’t be a question of participation.

At the Enterprise 2.0 conference in June, my only blog post was something of a rant on the emperor having no clothes, since I believe that this has to be about the core business or it’s just not very interesting (and likely won’t survive an economic downturn). Interestingly, Michael Idinopulos of Socialtext was at the same conference, and saw some evidence of the shift towards the idea that “social software delivers business value when it integrates with business process” (I wish I had been in some of the sessions that he was, since he obviously saw evidence of this opinion being further along than I did).

I’m starting to see some similar opinions emerging from a variety of sources, or maybe the recent Enterprise 2.0 conference in Santa Clara has just heated up the same discussion again. Klint Finley of ReadWrite Enterprise, hearkening back to Idinopulos’ post, thinks that enterprise 2.0 needs to be tied to business processes. Tom Davenport recently wrote about the need to add structure to social in order to bring enterprise value:

Well before personal computers enabled online chatter, they helped bring structure to work. Transaction systems like ERP and CRM, tools for workflow and document management, and project management systems all made it more clear to people what they need to do next in their jobs. That capability has undoubtedly led to productivity gains.

But work effectiveness also demands that people share their knowledge and expertise with each other. That’s where social media comes in. It makes it easy to reach out to others for help in making a decision or taking an action. And the transfer of knowledge through social media doesn’t require a lot of difficult knowledge management work in advance.

Be sure to read Davenport’s example of what’s happening at Cognizant, where they’re combining project/task management and social resources: effectively combining social and core business processes.

Meanwhile, while the social business software vendors have been stumbling towards process, the BPMS vendors have been stumbling towards social. I first presented on the ideas of social features in BPMS in 2006, and while a lot of what I predicted then has come to pass, there are many things that I didn’t even imagine four years ago. Although many vendors focus on the social aspects of process discovery and design, I don’t think that’s where the true impact will be felt: social process execution is the key to bringing together the productivity, governance and quality improvements of BPM with the networking and cultural aspects of social software. Having social features at runtime as innate capabilities for all process participants – through the entire spectrum from structured processes to unstructured collaboration – is what will really make social software (or rather, social features of enterprisey software such as BPM) mainstream.

What concerns me is the divide between social business software and enterprise software vendors. I don’t think that most social business software is capable of managing industrial-strength core business processes. I also don’t think that most BPM software is capable of doing social collaboration really, really well – at least, not yet. However, the BPMS vendors have already done the heavy lifting of creating tools to manage business processes and gaining the trust of customers to manage those processes, and I expect that we’ll continue to see rapid expansion of the social features of BPMS, through acquisition or organic internal development. Although there’s still undoubtedly a place for social business software as a standalone category, those companies looking to take on the social aspects of core business processes may want to position themselves for acquisition by one of those deep-pocketed BPMS vendors.

Taking a BPMS Test Drive

Last week, I tried out the Ultimus Test Drive, a guided hands-on session using a process application built with the Ultimus BPMS. The process itself was fairly simple – a purchase request process – since this test drive is targeted at end users, not analysts or developers, and intended to give the user a hands-on look at what it’s like to participate in the process.

The script for the session is pretty straightforward: a purchase request is made by one person, approved by their supervisor, then sent to their manager if the amount is over a threshold. Once approved, the purchase order is created and emailed to the requester.
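The routing rule in that script – supervisor always, manager only above a threshold – is the classic amount-based escalation pattern. A minimal sketch, assuming an illustrative threshold and role names (the actual values in the Ultimus demo weren’t specified):

```python
def approval_chain(amount: float, threshold: float = 5000.0) -> list:
    """Return the ordered list of approvers for a purchase request.
    The supervisor always approves; the manager is added only when
    the amount exceeds the threshold. Both the role names and the
    default threshold are assumptions for illustration."""
    chain = ["supervisor"]
    if amount > threshold:
        chain.append("manager")
    return chain

print(approval_chain(1200.0))   # → ['supervisor']
print(approval_chain(9800.0))   # → ['supervisor', 'manager']
```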

Ultimus Test Drive 2010

The real innovation here is not the application itself, but the method that Ultimus is using to have people try it out. Although the demo is scripted and partially guided, you’re actually interacting with a live version of Ultimus via GoToMyPC, not a screen capture animation. Optionally, someone from Ultimus will watch over what you’re doing and be on the phone with you, providing help if you get off track.

If you’re in the market for a BPMS, getting your hands on software as part of the evaluation process is important, but is rarely done very satisfactorily. In many cases, organizations still buy (very expensive) software based only on what they see in a demo given by the vendor’s sales team, without ever trying it out for themselves. I have seen some situations that improve on that by providing the analyst/developer view of the product in a hosted environment (such as EC2) for a trial period; this allows the techies to do an evaluation of the look and feel of the components that they will use, but rarely the end users. Short of an onsite proof of concept, it’s rare for the potential end-users to have a chance to try things out. The exception, of course, is cloud-based BPMS where you can get a limited trial license for free, or nearly so, but that covers only a small subset of the BPMS vendors out there today, and if you’re looking at on-premise software for your final solution, you may not want to limit your search to those that also have a cloud version.

We need more of what Ultimus is doing in terms of customer (or prospect) education: the chance for a quick, hands-on demo of software that we’re expecting people to spend a good part of their day interacting with in the future.

Webinar on Fast-Tracking BPM Projects

I’ll be speaking on a webinar this Thursday, November 18th, along with Michael Rowley of Active Endpoints, about fast-tracking BPM projects. The Active Endpoints tools are targeted at an IT audience of architects, developers and technical business analysts, and that’s exactly who we’re focusing this webinar on as well. From the description:

Time and resources are limited and you are tasked with automating processes for your business faster, cheaper and better than ever before. How do you develop service-oriented process applications that address the needs of the business AND meet their deadlines? What process automation tools and techniques are used today successfully by others in your situation? How do you quickly create working prototypes, while avoiding a drawn out process of creating written requirements, and still provide traceability from requirements to implementation?

You can register here for the webinar; hope to see you there.