Flowable’s FlowFest 2022

Flowable is holding their half-day online FlowFest today, and in spite of the eye-wateringly early start here in North America, I’ve tuned in to watch some of the sessions. All of these will be available on demand after the conference, so you can watch the sessions even if you miss them live.

There are three tracks — technical, architecture and business — and I started the day in the tech stream watching co-founder Tijs Rademakers’ presentation on what’s new in Flowable. He spent quite a bit of the hour on a technical backgrounder, but did cover some of the new features: deprecation of Angular, a new React Flow-based modeler replacing the Oryx modelers, a new form editor, improved deployment options and cloud support, a managed service solution, and a quick-start migration path that includes automatic migration of a Camunda 7 process instance database to Flowable (for those companies that don’t want to make the jump to Camunda 8 and are concerned about the long-term future of V7).

For the second session, I switched over to the architect stream for Roman Saratz’ presentation on low-code integration with data objects. He showed some cool stuff where changes to the data in an external data object would update a case, in his example tied to a Microsoft Dynamics instance. The presentation was relatively short and there was an extended Q&A; evidently a lot of people are interested in this form of integration. At the end, I checked in on the business track and realized that the sessions there were not time-aligned with the two technical tracks: they were already well into the Bosch session that was third on the agenda – not sure why the organizers thought that people couldn’t be interested in technology AND business.

In the third session, I went back to the tech stream and attended Joram Barrez’ presentation on scripting. Like a few others on the Flowable team, Joram came from Alfresco’s Activiti core development team (and jBPM before that), and is now Principal Software Architect. He looked at the historical difference between programs and scripts, which is that programs are compiled and scripts are interpreted, and the current place of pre-compiled [Java] delegates in service tasks versus script tasks that are interpreted at runtime. In short, the creation, compilation and deployment of Java delegates are definitely the responsibility of technical developers, while scripts can be created and maintained by less-technical low code developers. Flowable now allows for the creation of a “service registry” task that is actually a JavaScript or Groovy script rather than a REST call, which makes scripts reusable across models as if they were external service tasks rather than embedded within one specific process or case model. There are, of course, tradeoffs. Pre-compiled delegates typically have higher performance, and provide a more structured development experience, including unit testing and backwards-compatible API contracts. Scripts open up more development capability to the model developer who may not be Java-savvy. Flowable has created some API constructs that make scripts more capable and less brittle, including REST service request/response processing and BPMN error handling. It appears that they are shifting the threshold for what’s being done by a low code developer directly in their modeling environment, versus what requires a more technical Java developer, an external IDE and a more complex deployment path: making scripts first-class citizens in Flowable applications. In fact, Joram talked about ideas (not yet in the product) such as having a more robust scripting IDE embedded directly in their product.
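The reuse idea behind these service-registry scripts can be sketched in a few lines. This is purely illustrative Python — Flowable’s actual scripts run as Groovy or JavaScript inside the engine — and all the names here are invented for the example:

```python
# Illustrative sketch only: a named script defined once and reused from any
# process or case model, in the spirit of Flowable's service-registry script
# tasks. Not Flowable's API; every name here is hypothetical.

SCRIPT_REGISTRY = {}

def register_script(name, source):
    """Compile a script once and store it under a reusable name."""
    SCRIPT_REGISTRY[name] = compile(source, name, "exec")

def run_script(name, variables):
    """Execute a registered script against a copy of the process variables."""
    scope = dict(variables)           # scripts read and mutate process variables
    exec(SCRIPT_REGISTRY[name], scope)
    return {k: v for k, v in scope.items() if not k.startswith("__")}

# Defined once, callable from any "model" that references it by name:
register_script("applyDiscount", "total = round(total * (1 - discount), 2)")

result = run_script("applyDiscount", {"total": 100.0, "discount": 0.15})
print(result["total"])  # 85.0
```

The point of the sketch is the indirection: a model refers to the script by name, so the script can be versioned and maintained independently of any one process definition.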
I am reminded of companies like Trisotech that are using FEEL as their scripting language in BPMN-CMMN-DMN applications, on the assumption that if you’re already using FEEL in DMN then using it throughout your application is a good idea; I asked if Flowable is considering this, and Joram said that it’s not currently supported but it would not be that difficult to add if there was demand for it.

To wrap up the conference, I attended Paul Holmes-Higgin’s architecture talk on Flowable future plans. Paul is co-founder of Flowable and Chief Product Officer. He started with a discussion of what they’re doing in Flowable Design, which is the modeling and design environment. Tijs spoke about some of this earlier, but Paul dug into more detail of what they’ve done in the completely rebuilt Design tool that will be released in early 2023. Both the technical underpinnings and the visuals have changed, to update to newer technology and to support a broader range of developer types from pro code to low code. He also spoke about longer term (2-3 year) innovation plans, starting with a statement of the reality that end-to-end processes don’t all happen within a centralized monolithic orchestrated system. Instead, they are made up of what he refers to as “process chains”, which is more of a choreography of different systems, services and organizations. He used a great example of a vehicle insurance claim that uses multiple technology platforms and crosses several organizational boundaries: Flowable Work may only handle a portion of those, with micro-engines on mobile devices and serverless cloud services for some capabilities. They’re working on Flowable Jet, a pared-down BPMN-CMMN-DMN micro-engine for edge automation that will run natively on mobile, desktop or cloud. In the earlier insurance example, Flowable Jet would run on the mobile and cloud platforms and integrate directly with Flowable Work inside each organization. With the new desktop RPA capabilities in Windows 11, Flowable Jet could also integrate with that as a bridge to Flowable Work. This is pretty significant, since currently end-to-end automation has a lot of variability around the edges; allowing for their own tooling at the edge as well as in central automation could provide better visibility and security throughout.

Tijs, Joram and Paul are all open source advocates in spite of Flowable’s current more prominent commercial side; I’m hoping to see them shifting some of their online conversations over to Fosstodon (or some other Mastodon instance), where I have started posting.

That’s it for FlowFest: a good set of informational sessions, and some that I missed due to multiple concurrent tracks that I’ll go back and watch later.

Effektif BPM Goes Open Source

On a call with Tom Baeyens last week, he told me about their decision to turn the engine and APIs of Effektif BPM into an open source project: not a huge surprise since he was a driver behind two major open source BPM projects prior to starting Effektif, but an interesting turn of events. When Tom launched Effektif two years ago, it was a bit of a departure from his previous open source BPM projects: subscription-based pricing, cloud platform, business-friendly tooling for creating executable task lists and workflows with little IT involvement, and an integrated development environment rather than an embeddable engine. In the past, his work has been focused on building clean and fast BPM engines, but building the Effektif user-facing tooling taught them a lot about how to make a better engine (a bit to his surprise, I think).

The newly-launched open source project includes the fully-functional BPM engine with Java and REST APIs; the REST APIs are a bit minimal at this point, but more will come from Effektif or from community contributions. It also includes a developer cloud account for creating and exporting workflows to an on-premise engine (although it sounds like you can create them in any standard BPMN editor), or process instances can be run in the cloud engine for a subscription fee (after a 30-day free trial). They will also offer developer support for a fee. Effektif will continue to offer the existing suite of cloud tools for building and running workflows at subscription pricing, allowing them to address both the simple, out-of-the-box development environment and the developer-friendly embeddable engine – the best of both worlds, although it’s unclear how easy it will be for both types of “developers” to share projects.

You can read more about the technical details on Tom’s blog or check out the wiki on the open source project.

This definitely puts Effektif back in direct competition with the other open source BPM projects that he has been involved with in the past – jBPM and Activiti (and, as a fork of Activiti, Camunda) – since they all use a similar commercial open source business model, although Tom considers the newer Effektif engine as having a more up-to-date architecture as well as simpler end-user tooling. How well Effektif can compete against these companies offering commercial open source BPM will depend on the ability to build the community as well as continue to offer easy and compelling citizen developer tools.

Activiti Update 2013: New Functionality And New Partners

I had a briefing on the latest version of Alfresco’s Activiti BPM a couple of months back, but decided to wait until the news about their new partners – BP3 and Edorasware – was released before I posted. This strong showing of enterprise support partners is crucial for them following the defection of camunda from the Activiti fold, since many large enterprises won’t deploy an open source product without some level of support from the open source vendor directly or via their partner channel.

Alfresco’s interest in Activiti is as a part of their open source enterprise content management suite: they don’t offer Activiti as a standalone commercial open source product, only bundled within their ECM. Activiti exists as an Apache-licensed open source project with about 1/3 of its main developers – likely representing more than 1/3 of the actual development effort – being Alfresco employees, making Alfresco the main project sponsor. Obviously, Alfresco’s document-centric interests are going to be represented within the Activiti project, but that doesn’t make it unsuitable as a general purpose BPMS; rather, Alfresco makes use of the BPM platform functionality for the purpose of document flow and tasks, but doesn’t force content concepts into Activiti, nor require Alfresco in order to use Activiti. Activiti is continuing to develop functionality that has nothing to do with ECM, such as integration with MuleESB.

Activiti was one of the first BPMS platforms to execute BPMN 2.0 natively, and provides full support for the standard. It’s not a “zero-code” approach, but intended as a developer tool for adding high-performance, small-footprint BPM functionality to applications. You can read more about full Activiti functionality on the main project site and some nuances of usage on the blog of core developer Joram Barrez; in this post, I just want to cover the new functionality that I saw in this briefing.

Activiti BPM 5.12 ad hoc task collaboration

Like all of the other BPMS out there, Activiti is jumping on the ad hoc collaborative task bandwagon, allowing any user to create a task on the fly, add participants to the task and transfer ownership of the task to another participant. The task definition can include a due date and priority, and have subtasks and attached content. Events for the task are shown in an activity feed sidebar, including an audit trail of actions such as adding people or content to the task, plus the ability to post a comment directly into the activity feed. The Activiti Explorer UI shows tasks that you create in the My Tasks tab of the Tasks page, although they do not appear in the Inbox tab unless (I think) the task is actually assigned to you. If someone includes you as a participant (“involves” you) in a task, then it shows in the Involved tab. This is pretty basic case management functionality, but provides quite a bit of utility, at least in part because of the ability to post directly to the activity feed: instead of having to build data structures specific to the task, you can just post any information in the feed as a running comments section. Mostly unconstrained, but at least it’s in a collaborative environment.
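As a rough illustration of the data structure being described – not Activiti’s actual API – an ad hoc task boils down to an owner, a set of participants, and an append-only activity feed that doubles as the audit trail:

```python
# Toy model of an ad hoc collaborative task: participants, an audit-style
# activity feed, and free-form comments posted directly into the feed.
# All names here are invented for illustration; this is not Activiti's API.

from dataclasses import dataclass, field

@dataclass
class AdHocTask:
    name: str
    owner: str
    participants: set = field(default_factory=set)
    feed: list = field(default_factory=list)   # audit trail plus comments

    def involve(self, user):
        """Add a participant; the action is recorded in the feed."""
        self.participants.add(user)
        self.feed.append(f"{user} was added as a participant")

    def transfer(self, new_owner):
        """Hand ownership to another participant, leaving an audit entry."""
        self.feed.append(f"ownership transferred from {self.owner} to {new_owner}")
        self.owner = new_owner

    def comment(self, user, text):
        """Post free-form information straight into the running feed."""
        self.feed.append(f"{user}: {text}")

task = AdHocTask("Review contract", owner="sandy")
task.involve("joram")
task.transfer("joram")
task.comment("sandy", "Draft attached, please review section 3.")
print(len(task.feed))  # 3
```

The single feed is what makes this useful with so little structure: every action and every comment lands in the same chronological stream.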

Activiti BPM 5.12 table-driven process definition

The other big new thing is a table-driven process definition as an alternative to the full BPMN modeler, providing a simpler modeling interface for business users to create models without having to know BPMN, or for fast process outlining. This allows you to create a process definition, then add any number of tasks, the order of which implies the sequence flow. Each task has a name, assignee, group (which I believe is a role rather than a direct assignment to a person) and description; you can also set the task to start concurrently with the previous task, which implies a parallel branch in the flow. Optionally, you can define the form that will be displayed for this task by adding a list of the properties to display, including name, type and whether each is mandatory; this causes an implicit definition of the process instance variables. The value of these properties can then be referenced in the description or other fields using a simple ${PropertyName} syntax. You can preview the BPMN diagram at any time, although you can’t edit in diagram mode. You can deploy and run the process in the Activiti Explorer environment; each task in the process will show up in the Queued tab of the Tasks page if not assigned, or in the Inbox tab if assigned to you. The same task interface as seen in the ad hoc task creation is shown at each step, with the addition of the properties fields if a form was defined for a task. The progress of the process instance can be viewed against the model diagram or in a tabular form. Indeed, for very simple processes without a lot of UI requirements, an entire process could be defined and deployed this way by a non-technical user within the Explorer.
Typically, however, this will be used for business people to prototype a process or create a starting point; the model will then make a one-way trip into the Eclipse modeling environment (or, since it can be exported in BPMN, into any other BPMN-compliant tool) for the developers to complete the process application. Once the simple table-driven process is moved over to the Eclipse-based Activiti Modeler, it can be enhanced with BPMN attributes that can’t be represented in the table-driven definition, such as events and subprocesses.
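The ${PropertyName} referencing described above can be shown with a simple substitution sketch. Activiti itself resolves these as JUEL expressions against process instance variables, so this Python version is only a conceptual analogue:

```python
# Illustration only: resolving ${PropertyName} references in a task
# description against process instance variables. Activiti uses JUEL
# expressions for this; the regex here just demonstrates the idea.

import re

def resolve(text, variables):
    """Replace each ${Name} placeholder with the matching variable's value."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: str(variables[m.group(1)]), text)

description = "Ship ${Quantity} units to ${Customer}"
variables = {"Quantity": 2, "Customer": "Acme Corp"}
print(resolve(description, variables))  # Ship 2 units to Acme Corp
```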

There were a few other things, such as enhanced process definition and instance management functions, including the ability to suspend a process definition (and optionally, all instances based on that definition) either immediately or at a scheduled time in the future; some end-user reporting with configurable parameters; and integration of an SMS notification functionality that sent me a text saying that my order for 2 iPads had shipped. Sadly, the iPads never arrived. 😉

We finished with a brief description of their roadmap for the future:

  • Hybrid workflow that allows on-premise and cloud execution (including instant deployment on CloudBees) for different tasks in the same flow, solving the issue of exposing part of a process to external participants without putting the entire process off premise.
  • Project KickStart, which builds on the table-driven process definition that I saw in the demo to provide better UI form display (making it a real contender as a runtime environment, rather than just for prototyping) and the ability to make changes to the process definition on the fly.
  • Polyglot BPM, allowing Activiti to be called from other (non-Java) languages via an expanded REST API and language-specific libraries for Ruby, C#, JavaScript and others.

It’s great to see Activiti continue to innovate after so much change (losing both the original product architect and their main partner) within a short period of time; it certainly speaks to their resiliency as an organization, as you would expect from a robust open source project.


I also talked with Scott Francis of BP3 about their new Activiti partnership; apparently the agreement was unrelated to the camunda departure, but definitely well-timed. I was curious about their decision to take on another BPM product, given their deep relationship with IBM (and formerly with Lombardi), but they see IBM BPM and Activiti as appealing to different markets due to organizational cultural choices. Certainly to begin with, most of their new Activiti customers will be existing Activiti customers looking for an enterprise support partner, just as many of their new IBM BPM customers are already IBM BPM customers; however, I’ve been in a couple of consulting engagements recently where organizations had both commercial and open source solutions under evaluation, so I’m anticipating a bit of channel conflict here. BP3 has no existing Activiti customers (or any BPM practice other than IBM), and has no significant open source contribution experience, but plans to contribute to the Activiti open source community, possibly with hybrid/HTML mobile front-ends, REST API architecture and other areas where they have some expertise from building add-ons to IBM BPM. Interestingly, they do not plan to build/certify WAS support for Activiti; although they didn’t see this as a big market, I’m wondering whether this also just cuts a bit too close to the IBM relationship.

Aside from the obvious potential for awkwardness in their IBM relationship, I see a couple of challenges for BP3: first, getting the people with the right skills to work on the Activiti projects. Since the IBM BPM skills are pretty hard to come by, they won’t be redeploying those people, so presumably have to train up other team members or make some new hires. The other challenge is around production support, which is not something that BP3 does a lot of now: typically, IBM would be the main production support for any IBM BPM installation even if BP3 was involved, although BP3 would support their own custom code and may act as triage for IBM’s support. With Activiti, they will have to decide whether they will offer full production support (and if not them, then who?) or just provide developer support during business hours.

Stick A (Open Source) Fork In It: camunda BPM Splits From Activiti

At the end of 2012, I had a few hints that things at Alfresco’s Activiti BPM group were undergoing some transition: Tom Baeyens, the original architect and developer of Activiti (now CEO of the Effektif cloud BPM startup announced last week), was no longer leading the Activiti project and had decided to leave Alfresco after less than three years; and camunda, one of the biggest Activiti contributors (besides Alfresco) as well as a major implementation consulting partner, was making noises that Activiti might be too tightly tied to Alfresco’s requirements for document-centric workflow rather than the more general BPM platform that Activiti started as. I’m not in a position to judge how Alfresco was controlling the direction and release cycle of Activiti, who was making the biggest contribution to the open source efforts, or what was said behind closed doors, but obviously things reached a breaking point, and this week camunda announced that they are forking a new open source project from Activiti, to be known as camunda BPM.

This is big news in the world of open source BPM. There are a few players already – Activiti, BonitaSoft, jBPM and Processmaker, to name a few – and it’s not clear that there’s enough demand for open source BPM software to warrant another entrant. Also, there has to be some hard feelings between the parties here, and this is a small community where you can’t really afford to make enemies, because you never know who you’re going to end up working with in years to come. This parting of the ways is described as “sad” by both camunda in their announcement post and by Joram Barrez (current Activiti lead core developer) in his post, and puts Activiti and camunda in direct competition for both existing Activiti users and future business. Signavio, whose process modeler is deeply integrated with camunda BPM, issued a press release stating that the camunda BPM fork will be good for Signavio customers, and including a nice quote from Tom Baeyens; keep in mind that Signavio just provided the funding for Baeyens’ new startup. It’s like the Peyton Place of BPM.

Leaving the personal (and personnel) aspects aside, camunda BPM is offering some significant additional capabilities beyond what is available in Activiti, mostly through open-sourcing their previously proprietary Activiti add-ons. I had a briefing a couple of weeks ago with Jakob Freund, camunda’s CEO, to get caught up on what they’re doing. camunda is about 20 people now, founded 4-1/2 years ago and completely self-funded. That makes them a bit small for launching an enterprise software product – including the implementation and support aspects – but also not driven to unreasonable growth since they have no external investors to please. Having once grown a consulting company to about twice that size without external funding, I can understand the advantages of maintaining the organic growth: control to pick the projects and products that you want to build, and to hand-pick a great team.

camunda BPM, like Activiti (and jBPM, for that matter), is not claiming to be a zero-code BPM suite – some would argue that even those claiming to be, aren’t – but is a BPM engine and set of capabilities intended to be embedded within line-of-business enterprise applications. They see the zero-coding market as being general tooling for non-strategic processes, and likely served equally well or better by outsourcing or cloud solutions (Effektif, anyone?); instead, camunda targets situations where IT is a competitive differentiator, and BPM is just part of the functionality within a larger application. That doesn’t mean that there’s nothing for the non-technical business analyst here: BPMN is used as a bridge for business-IT alignment, and camunda is bringing their previously proprietary BPMN round-tripping capabilities into the new open source project. Their BPMN plugin for Eclipse provides an easy-to-use modeler for business analysts, or round-tripping with Signavio, Adonis and other modeling tools; camunda blogged back in June 2012 about how to integrate several different BPMN modelers with camunda BPM, although they have a definite preference for Signavio.

camunda BPM is a complete open source BPM stack under an Apache License (except for Eclipse, the framework for the designer/developer UI, which uses the Eclipse Public License). The Community (open source) edition will always be the most up-to-date edition – note that some commercial open source vendors relegate their community edition to being a version behind the commercial edition in order to drive revenue – with the Enterprise (commercial) edition lagging slightly to undergo further testing and integrations. The only capabilities available exclusively in the Enterprise edition are WebSphere Application Server (WAS) integration and Cockpit Pro, a monitoring/administration tool, although there is a Cockpit Light capability in the Community edition. You can see a Community-Enterprise feature comparison here, and a more complete list here. Unless you’re tied to WAS from the start, or need quite a bit of support, the Community edition is likely enough to get you up and running initially, allowing for an easier transition from open source to commercial.

However, the question is not really whether camunda has some great contributions to make to the Activiti code base (they do), but whether they can sustain and build an open source fork of Activiti. They have some good people internally to provide vision – Daniel Meyer for the core process engine architecture, Bernd Rücker for a technical consulting/product management view, Jakob Freund for the business aspects of BPM – and a development team experienced with the Activiti and camunda code bases. They have shown significant leadership in the Activiti open source community and development, so are likely capable of running a camunda BPM open source community, but need to make sure that they dedicate enough resources to it to keep it vital. There is a German camunda community already, but that’s not the same as an open source community, and it is only in German, so they have some work to do there.

And then there are the existing Activiti and camunda users. Existing camunda customers probably won’t be freaked out about the fork since the contributions important to them were being made by camunda anyway, but existing Activiti users (and prospects) aren’t just going to fall into camunda’s lap: they might be weighing the additional functionality against the bigger company, stable brand and existing community behind Activiti. Given some of the new UI features being rolled into Activiti from the Alfresco team, it’s fair to say that Alfresco will continue to innovate Activiti, and attempt to maintain their solid standing in the open source BPM market. There’s likely a small window for existing Activiti users to shift to camunda BPM if they want to: right now, the engine is identical and the migration will be trivial, but I expect that within six months, both sides will make enough changes to their respective projects that it will become a more significant effort. In other words, if you’re on Activiti or camunda now and are thinking of switching, do it now.

camunda could be ruffling a few feathers by declaring an open source fork rather than just rolling their proprietary offerings into the Activiti project; they might have been able to become a stronger influencer within the project by doing that, counteracting any (perceived) document-centric influence from Alfresco. Again, I’m not internal to either of the companies nor part of the Activiti open source community, so that’s just speculation.

Meanwhile, Alfresco remains officially silent on the whole business. Given that they had advance warning about this, that’s a pretty serious PR mistake.

BonitaSoft Open Source BPM

I recently had my first briefing with BonitaSoft about their open source BPM product. Although the project has been going on for some time, with the first release in 2001, the company is only just over a year old; much of the development has been done as part of BPM projects at Bull. Their business model, like many open source companies, is to sell services, support and training around the software, while the software is available as a free download and supported by a broader community. They partner with a number of other open source companies – Alfresco for content management, SugarCRM for CRM, Jaspersoft for BI – in order to provide integrated functionality without having to build it themselves. They’ve obviously hit some critical mass point in terms of functionality and market, since their download numbers have increased significantly in the past year and have just hit a half million.

A French company, they have a strong European customer base, and a growing US customer base, mostly comprising medium and large customers. They’ve just announced the opening of two US offices, and the co-founder/CEO Miguel Valdés Faura is moving to the San Francisco area to run the company from there; that’s the second European company that I’ve heard of lately where the top executives are moving to the Bay area, indicating that the “work from anywhere” mantra doesn’t necessarily pan out in practice. They’ve hired Dave Cloyd away from open source content management company Nuxeo as a key person in building the US market; he was VP of sales at Staffware prior to the TIBCO acquisition, so knows both the open source and BPM sides.

Open source BPM solutions have been around for a while, but the challenges are the same as with any open source project: typically, it takes greater technical skills to get up and running with open source, especially if it doesn’t do everything that you need and has to be integrated with other (open source or not) products. In many cases, open source BPM provides the process engine embedded inside a larger solution created by a systems integrator or business process outsourcing firm; in other words, it’s more like a toolkit for adding process capabilities into another application or environment. BonitaSoft considers jBPM, Activiti and ProcessMaker to be in this “custom BPM development” camp, as opposed to the usual commercial players in the “standalone BPM suites” category; they see themselves as being able to play on both sides of that divide.

Taking a look (finally, after 35 minutes of PowerPoint) at a product demo, I saw their four main components of process modeling, process development, process execution, and process administration and monitoring.

The modeler is a desktop Eclipse-based application providing BPMN 2.0 modeling, including importing of BPMN models from other tools. There is starting to be less distinction between these tools, as all the vendors start to pick up the user interface tricks that make process modeling work better: auto-alignment, automatic connector creation, and tool tips with the most likely next element to add. The distinguishing characteristics start to become how the non-standard modeling aspects are handled: data modeling and integration with other systems using proprietary connectors that go beyond the capabilities of a simple web services call, for example.

Bonitasoft BPM - Alfresco connector actions

I like what they’ve done with some of the out-of-the-box connectors: the SharePoint and Alfresco connectors allow you to browse and select a specific document repository event (such as checking in a file) directly from within the process designer, and associate it with an activity in the process model. I saw a fairly comprehensive database connector that allowed for graphical query creation, and this connection can be used to transfer a data model from a database to the process model to build out the process instance data. There’s a wizard to create your own connectors, or you can browse the BonitaSoft community to find connectors created by others – a free marketplace for incremental functionality.

You can create a web form for a particular step in the process, which will auto-generate based on the defined data model, then allow new fields to be added based on external database calls, and reformatted in a graphical editor. Effectively, this capability allows a quick process-based application to be created with a minimum of code, just using the forms designer and connectors to databases and other systems.
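The auto-generation idea can be sketched mechanically: derive a default form field per attribute of the data model. This is not Bonita’s implementation, just a hedged illustration of the concept, with all names invented:

```python
# Hypothetical sketch of form auto-generation from a data model: each
# attribute maps to a default widget based on its type. Illustration only;
# not BonitaSoft's actual forms engine.

def generate_form(data_model):
    """Produce one default form field per data model attribute."""
    widget_for = {int: "number", float: "number", bool: "checkbox", str: "text"}
    return [{"name": name,
             "widget": widget_for.get(typ, "text"),
             "mandatory": True}
            for name, typ in data_model.items()]

order_model = {"customer": str, "quantity": int, "express": bool}
form = generate_form(order_model)
print([f["widget"] for f in form])  # ['text', 'number', 'checkbox']
```

The generated defaults would then be the starting point for the manual steps described above: adding fields from external database calls and reformatting in the graphical editor.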

Key performance indicators (KPIs) can be defined in process modeler; these are effectively data objects that can be populated by any step of the process, then reported on via a BI engine such as the integrated Jaspersoft.

Although they describe their modeling as collaborative, it’s asynchronous collaboration: the model and associated forms are saved to the Bonita repository, where they are properly versioned and can be checked out by another user.

Bonitasoft BPM - user inbox view

The end-user experience uses an inbox metaphor in a portal, with the forms displayed as the user interacts with the process. Individual process instances (or entire processes) can be tagged with private labels by a user – similar to labels applied to conversations in Gmail – and categories can be applied to processes so that every instance of that process has the same category, visible to all users. Love the instance and process tagging: this is a capability that I’ve been predicting for years, and am just starting to see emerge.

I was surprised by the lack of flexibility in runtime environment: the only change that a user can make to a process at runtime is to reassign a task, although they are working on other features to handle more dynamic situations.

The big product announcements from last month, with the release of version 5.3, included process simulation and support for cloud environments with multi-tenancy and REST APIs. However, by that point we were nearing the end of our time and I didn’t get all the details; that will have to wait for another day, or you can check out the brief videos on their site.

Open Source BPM with Alfresco’s Activiti

When Tom Baeyens announced that he and Joram Barrez had stepped down from the jBPM project, he hinted at a new project, but details have been sparse until now except for a post that stated that they’re working on an open source BPMN 2.0 offering, plus one that gave unprecedented (for Tom) attention to ECM, which should have tipped me off as to their direction. It turns out that they have both joined Alfresco and are spearheading Activiti, an Apache-licensed open source BPM project, which announced its Alpha 1 release today with a planned November GA date. From the press release:

An independently-run and branded open source project, Activiti will work independently of the Alfresco open source ECM system. Activiti will be built from the ground up to be a light-weight, embeddable BPM engine, but also designed to operate in scalable Cloud environments. Activiti will be liberally licensed under Apache License 2.0 to encourage widespread usage and adoption of the Activiti BPM engine and BPMN 2.0, which is being finalized as standard by OMG.

I met Tom face-to-face a couple of years ago when we ended up at different conferences in the same conference center and had a chat about total BPM world domination; interestingly, at the time he expressed that “BPMN should stick to being a modeling notation…and the mapping approach to concrete executable process languages should be left up to the vendors”; obviously, BPMN 2.0 execution semantics have changed his mind. 😉

[Screenshot: Activiti Modeler – process design]

John Newton, CTO of Alfresco, and Tom Baeyens, in his new role as Chief Architect of BPM, briefed me last week on Activiti. The project is led by Alfresco and includes SpringSource, Signavio and Camunda; Alfresco’s motivation was to have a more liberally-licensed default process engine, although they will continue to support jBPM. Alfresco will build a business around Activiti only for content-centric applications by tightly integrating it with their ECM, leaving other applications of BPM to other companies. I’ll be very interested to see the extent of their content-process integration, and if it includes triggering of process events based on document state changes as well as links from processes into the content repository.

They believe that BPEL will be replaced by BPMN for most general-purpose BPM applications, with BPEL being used only for pure service orchestration. Although that’s a technically virtuous viewpoint that I can understand, there’s already a lot of commitment to BPEL by some major vendors, so I don’t expect that it’s going to go away any time soon. Although they are only supporting a subset of the BPMN 2.0 standard now – which could be said of any of the process modelers out there, since the standard is vast – they are committed to supporting the full standard, including execution semantics and the interchange format.

Activiti includes a modeler, a process engine, an end-user application for participating in processes, and an administration console. Not surprisingly, we spent quite a bit of time talking about Activiti Modeler, which is really a branded version of Signavio’s browser-based BPMN 2.0 process modeler. This uses AJAX in a browser to provide similar functionality to an Eclipse-based process modeler, but without the desktop installation hassles and the geeky window dressing. It is possible to create a fully executable process model in the Activiti Modeler, although in most cases a developer will add the technical underpinnings, likely in a more developer-oriented environment rather than the Modeler. Signavio includes a file-based model repository, which has been customized for inclusion in the Activiti Modeler; it would be great to see if they can do something a bit more robust to manage the process models, especially for cloud deployments. They are including support for certain proprietary scripting instead of using Java code for some interfaces, such as their Alfresco interface.

[Screenshot: Activiti Explorer – end-user interface]

Activiti Explorer provides a basic end-user application for managing task lists, working on tasks, and starting new processes. Without a demo, it was hard to see much of the functionality, although it appears to have support for private task lists as well as shared lists of unassigned tasks; a typical paradigm for managing tasks is to allow someone to claim an unassigned task from the shared list, thereby moving it to their personal list.
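
That claim paradigm can be sketched as a small data structure; the class and method names below are my own illustration, not Activiti’s actual API:

```python
# Minimal sketch of the claim paradigm: tasks start in a shared unassigned
# pool, and claiming one moves it to the claimer's personal list.
# Class and method names are illustrative, not Activiti's API.
class TaskPool:
    def __init__(self):
        self.unassigned = []   # shared list, visible to a group
        self.personal = {}     # user -> list of claimed tasks

    def add(self, task):
        self.unassigned.append(task)

    def claim(self, user, task):
        """Move a task from the shared pool to a user's personal list."""
        if task not in self.unassigned:
            raise ValueError("task already claimed or unknown: %s" % task)
        self.unassigned.remove(task)
        self.personal.setdefault(user, []).append(task)

pool = TaskPool()
pool.add("review-invoice-42")
pool.claim("sandy", "review-invoice-42")
print(pool.personal["sandy"])   # ['review-invoice-42']
print(pool.unassigned)          # []
```

The key property is that a claim is exclusive: once claimed, the task disappears from the shared list, so two users can’t work the same item.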

The Activiti Engine, the underlying process execution engine, is packaged as a lightweight JAR file that can be embedded within other applications, as is done in Alfresco for content management workflows. It can also be easily deployed in the cloud, allowing for cross-enterprise processes. The only thing that I saw of Activiti Probe, the technical administration console, was its view on the underlying database tables, although it will gain a number of other capabilities to manage the process engine as it develops. Not surprisingly, they don’t have all the process engine functionality available yet, but have been focusing on stabilizing the API in order to allow other companies to start working with Activiti before the GA release.

[Screenshot: Activiti Cycle mockup – design collaboration]

I also saw a mockup of Activiti Cycle, a design-time collaboration tool that includes views (but not editing) of process models, related documents from Alfresco, and discussion topics. Activiti Cycle can show multiple models and establish traceability between them, since their expectation is that an analyst and a developer will have different versions of the model. This is an important point: models are manually forward-engineered from an analyst’s version to a developer’s version, and there are no inherent automated updates when the model changes, although there are alerts to notify when other versions of the same model are updated. This assumption that there can be no fully shared model between analyst and developer has formed part of a long-standing discussion between Tom and me since before we met; although I believe that a shared model provides the best possible technical solution, it’s not so easy for a non-technical analyst to understand BPMN models once you get past the basic subset of elements. Activiti Cycle may not be in GA until after the other components, although they are working on it concurrently.

The screen shots that I saw looked nice, although I haven’t seen a demo yet; Tom gave credit to Alfresco’s UI designers for raising this above just another developer’s BPM tool into something that could be used by non-developers without a lot of customization. I’m looking forward to a demo next month, and seeing how this progresses to the November release and beyond.

Social media for community projects

If you ever wonder what BPM analyst/architect/bloggers do in their spare time, wonder no more:

Ignite Toronto: Sandy Kemsley -The Hungry Geek from Ignite Toronto on Vimeo.

I was invited to give a presentation at Ignite! Toronto this week, and decided to discuss how I’ve been using social media – Twitter, Flickr, Facebook, blogging – and some integration technologies, including RSS and Python scripting, to promote a new farmers’ market in my community. I’m on the local volunteer committee that acts as the marketing team for the market. Here’s the presentation, since it’s not too clear in the video:

If you’re not familiar with Ignite, it’s a type of speed presentation: 20 slides, 5 minutes, and your slides auto-advance every 15 seconds. For a marathon presenter like me, keeping it down to 5 minutes is a serious challenge, but this was a lot of fun.

For a technology view, check out slide 17 in the slide deck, which shows a sort of context diagram of the components involved. Twitter is central to this “market message delivery framework”, displaying content from a number of sources on the market Twitter account:

  • I manually tweet when I see something of interest related to the market or food. Also, I monitor and retweet some of our followers, and reply to anyone asking a question via Twitter.
  • When I publish a post on my personal blog that is in the category “market”, Twitterfeed picks it up through the RSS feed and posts the title and link on Twitter. These are posted to both the market account and my own Twitter account, so you may have seen them if you’re following me there.
  • Each week, I save up a list of interesting links and other tweet-worthy info, and put them in a text file. My talented other half wrote a Python script that tweets one message from that file each hour for the two days prior to each Saturday market day.
  • I connected my Flickr account with Twitter, and can either manually tweet a link to a photo directly from Flickr, or email a photo from my iPhone to a private Flickr email address that will cause the link to be tweeted. I could have used Twitpic for the latter functionality, but Flickr gives me better control over my photo archive.
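
The queued-tweet piece of this setup can be sketched roughly as below; the function names and the post_tweet stub are illustrative, not the actual script:

```python
# Rough sketch of the queued-tweet idea: one message from a text file is
# posted each hour during the two days before the Saturday market day.
# post_tweet is a stand-in for whatever Twitter client the real script uses.
import datetime

def post_tweet(text):
    print("tweeting:", text)   # stand-in for a real Twitter API call

def next_message(queue_file, sent_count):
    """Return the next unsent line from the queue file, or None when done."""
    with open(queue_file) as f:
        lines = [line.strip() for line in f if line.strip()]
    return lines[sent_count] if sent_count < len(lines) else None

def should_run(today, market_weekday=5):   # 5 = Saturday
    """True only on the two days before market day (Thursday and Friday)."""
    return today.weekday() in ((market_weekday - 2) % 7, (market_weekday - 1) % 7)
```

An hourly cron job would check should_run, then call post_tweet on next_message with a persisted count of how many messages have already gone out.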

The whole exercise has been a great case study on using social media for community projects with no budget, using some small bits of technology to tie things together so that it doesn’t take much of my time now that it’s up and running. I’d be doing most of the activities anyway: taking pictures of the market, cooking and blogging about it, and reading articles on local food and markets online. This just takes all of that and pushes it out to the market’s online community with very little additional effort on my part.

Jason Laszlo gives Bell Canada a(nother) black eye

All week, the local tech community has been buzzing around the news that Bell Canada is throttling P2P traffic — specifically the widely-used BitTorrent protocol — for not only their direct Sympatico subscribers, but also for anyone who buys their supposedly unlimited DSL from a Sympatico reseller, such as TekSavvy. For those of you new to the traffic shaping/net neutrality wars that have been going on in North America over the past months, here’s why throttling P2P traffic isn’t good news:

  • Bell Canada and our only other "last mile" carrier, Rogers Cable, are violating their role as common carriers: they’re supposed to deliver the data, regardless of what it is, subject to our individual bandwidth and download caps. As long as I’m not getting higher bandwidth than I was promised, and don’t go over my monthly volume cap, I should be able to download whatever I want, whenever I want, because the contract that I signed with Bell implied that would be the case. If they can’t deliver that bandwidth, then they shouldn’t be selling it; furthermore, they should have taken the money made from all these years of overselling the same bandwidth and invested it in improving the now-outdated infrastructure so that we wouldn’t have these problems now.
  • The carriers, Bell and Rogers, like to position this as allowing equal access to everyone instead of allowing those evil file-sharing types to hog the bandwidth, but they don’t exactly have altruistic motives: both of them sell services (cable and satellite TV) that compete with downloaded video, and they want you paying $40+ to them each month to watch the TV that they choose rather than be able to select from a wide variety of alternative — and legal — video available on the internet. Furthermore, Rogers wants to use the same bandwidth that you would use for free video downloads to download their pay-per-view movies instead.
  • Bell and Rogers have targeted the BitTorrent protocol for throttling even though it has many legal uses. Last week, CBC made history by making a TV program available, DRM-free, for download via BitTorrent. This allowed anyone in the world with broadband access to view Canadian programming that might not be available on their local TV stations. By throttling BitTorrent, however, Bell and Rogers are effectively blocking access to that Canadian content within Canada, forcing people to watch it on Bell or Rogers’ TV services. Personally, I use BitTorrent not just for that CBC show, but to download new releases of Ubuntu and other large open source packages where the source site provides BitTorrent as an option in order to reduce the bandwidth demands on their servers.

What this all comes down to is a violation of net neutrality: Bell and Rogers are deciding which traffic on the network gets higher priority. They’re doing it now because they’ve failed to make the necessary investments in infrastructure over the years that would allow them to actually deliver what they sell, and coincidentally they choose to throttle traffic that competes with their other business areas.

Suffice it to say that Bell Canada didn’t have a good week because of this — it was all over the news, the DSL resellers are talking about suing, and even the unions are in on the action. Enter Jason Laszlo, a spokesperson (apparently associate director of media relations) for Bell Canada, who was quoted extensively on this issue in the press:

  • “Regarding customers like Mount Sinai [a major Toronto hospital that was used as an example of how legal file sharing might be used for CAT scans], Laszlo said it’s their own fault for using a notorious application like file-sharing. ‘We’re blind to the content flowing through our pipes,’ he said. ‘Our goal is to ensure maximum efficiency for everyone.'” — Digital Journal, March 25th. [“Notorious”? Oh, puh-leeze. And if they were blind to the content, then they wouldn’t be throttling file sharing.]
  • “P2P programs are only employed by a small percentage of internet users, but they tend to make use of all the available bandwidth, Laszlo said. Reduced P2P use should provide a better balance between P2P and other users at peak times, he said. ‘I feel we’re on the side of good,’ he said.” — CBC News, March 25th. [Throttling P2P is a good way to make sure that it is only ever employed by a small percentage of users, which is exactly what Bell wants.]
  • “Bell spokesman Jason Laszlo on Friday reiterated the company’s position —that it was shaping traffic in order to prevent a small portion of bandwidth hogs from slowing speeds down for all customers.” — CBC News, March 28th.
  • “Jason is throttle-icious.” — Jason Laszlo’s then-publicly-viewable Facebook profile, status update dated March 28th at 4:34pm.
  • “Jason is realizing how little seperates [sic] most journalists from lemmings.” — Jason Laszlo’s then-publicly-viewable Facebook profile, status update dated evening of March 28th.

Yes, those last two are real; his Facebook profile was posted on a broadband discussion forum yesterday afternoon (you can Digg the story here); he obviously was unaware of the impact of no privacy settings, since I was able to access his profile immediately after that even though we’re not directly connected and have no mutual friends.

My friend Mark Kuznicki channeled his outrage into a great blog post about how this hands the net neutrality advocates a gift, and messaged Laszlo on Facebook to let him know what we all think of his two-faced approach to media relations. Shortly after that, Laszlo’s profile was set to private so that I could no longer view it; this morning, it appears to be completely missing.

So what’s the lesson to be learned from this mess? The public is now aware and mobilized on the impact of traffic shaping on their daily lives, even if they haven’t yet heard the term net neutrality. To paraphrase Peter Finch’s character from Network, we’re mad as hell and we’re not going to take this anymore.

Oh, yeah, lesson #2: don’t entrust media relations for a sensitive subject to an inexperienced junior who doesn’t know well enough not to post inappropriate comments to his publicly-viewable Facebook profile.

Shared Insights PCC: AvenueA|Razorfish intranet wiki

I skipped this morning’s taxonomy/folksonomy smackdown featuring Seth Earley and Zach Wahl — I just wasn’t up for that much testosterone this early in the morning — and went to the best practices track to hear about how AvenueA|Razorfish implemented their internal wiki. I’m speaking next, so if this session isn’t sufficiently riveting, I’ll duck out early to review my notes.

Donna Jensen, their senior technical architect, took us through how they use a wiki as an intranet portal. She first spent some time defining wikis and discussing their benefits and challenges, particularly when used inside the firewall. She made a crack about how Ph.D. dissertations will be written on many of these points, which isn’t that far from the truth: things like encouraging active versus passive behaviour. And, although she claims that they’re breaking down behaviours tied to organizational silos, she admitted that no one can comment on the CEO’s blog, although all others are open territory. At some point, even the top-level executives have to learn that if they’re going to commit to Enterprise 2.0, it has to permeate all levels of the organization: no one should be exempt.

The platform that they used was MediaWiki (the software used to create Wikipedia) on a standard LAMP stack, giving them a completely open source base. They also use WordPress for internal blogs, maintaining the commitment to open source. Although they did do some customization, particularly in terms of creating templates such as project pages, they took advantage of many freely-available third-party extensions for functionality such as tag clouds, calendaring and skins. They use Active Directory for security, and allow only internal or VPN access: no external access or applications.

AA|RF put in the wiki with only a technical VP and a part-time intern, pretty much out of the box, and found that it wasn’t adopted. They did another cut with Jensen as technical architect (part-time) and a couple more interns, and arrived at their current state: no project management oversight, no content management system, and no creative designer, with the whole thing implemented in about 2,000 person-hours. As a web technology consulting company (although with little Web 2.0 experience), they can get away with this, but you may not want to try this one at home. They used agile scheduling, and eventually brought in some rigorous QA. Jensen feels that their only real mistake was not bringing in a creative designer earlier, since the wiki is apparently pretty technical-looking. They haven’t yet put in a WYSIWYG editor, so everyone still needs to work in WikiText, which is likely a bit of a barrier for the non-techies.

Jensen talked about a few byproducts of the wiki adoption, such as the incremental upgrade model that tends to come with open source or SaaS products, rather than the monolithic (and often disruptive) upgrades of proprietary software. She also talked about how many IT departments won’t use open source because it makes them unable to turn to someone who is compelled to help them — in other words, they have to take on the responsibility of finding a solution themselves. Another byproduct is the shift towards open source, and the savings that they can expect by replacing some of their current software platforms and their hefty maintenance fees with open source alternatives.

In their wiki environment, any kind of file can be uploaded, all pages (except the home page) are editable by everyone, and any content except client-confidential information can reside there. I really have to wonder how this would work if they upload a massive number of files: at what point do you need to add a content management system, and how painful is it going to be to do that later?

Their wiki home page shows del.icio.us and Flickr feeds, internal blog feeds, Digg items and recently uploaded documents. One audience member asked if that meant that if anyone in the company tagged a public web page, it would be included on the home page; there was general shock around the room, and wonderment that you could do this without some centralized body approving such content before it was surfaced to the rest of the company. I tried not to laugh out loud; is this such a radical idea? Obviously, the last year of being immersed in Web 2.0 has changed me, and I started wondering which of these things I would adopt if I were still running a 40-person consulting company. As the session went on, the same question about how user tagging on the internet drives their intranet home page came up from the audience over and over.

What I found interesting (and I’m probably blowing their whole game by publishing this), is that they’re using public Web 2.0 tools to feed part of the home page: if something is tagged AARF on del.icio.us or Flickr, it shows up there. For Digg, however, you have to be a friend of AARF to have your items show up. Jensen said that she’ll be changing the AARF tag to something unguessable, although if you know how to track items and users through del.icio.us or Flickr, it wouldn’t be that difficult to figure out their new tag. She also said that they had run some analytics on whether these tags gave away any secrets about what they’re currently researching, and found that the mix is too varied for any patterns to emerge.

The wiki is a portal in a very real sense, which was a bit of a revelation to me: I didn’t previously think of wikis as portals. Everyone has their own people page which they can format and populate as they wish, and which can include their recent file uploads and blog postings. On any page, adding a “portlet” is just a matter of copying and pasting a snippet of PHP code, including copying snippets of code such as the <embed> code provided by YouTube for every video on its site.
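
In effect, a portlet here is just a reusable markup snippet keyed to a resource; a toy illustration of the idea (the portlet types and markup below are my assumptions, not their actual PHP):

```python
# Toy illustration of portlet-by-copy-paste: a "portlet" is just an HTML
# fragment templated on a resource identifier, pasted into a wiki page.
# The portlet types and markup here are assumed, not AA|RF's actual code.
PORTLETS = {
    "youtube": '<iframe src="https://www.youtube.com/embed/{id}"></iframe>',
    "rss": '<div class="rss-feed" data-url="{id}"></div>',
}

def portlet(kind, resource_id):
    """Return the embeddable snippet for the given portlet type."""
    return PORTLETS[kind].format(id=resource_id)

print(portlet("rss", "http://example.com/feed"))
```

The design appeal is that adding functionality to a page needs no deployment step at all, just an edit.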

They’ve done some cool things with blogs as well, such as mailing lists corresponding to blogs: sending an email to one of those mailing lists auto-posts it as an entry on the corresponding blog.
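
The email-to-blog bridge could work roughly like this sketch; the mailing-list mapping and post structure are illustrative assumptions, not the actual WordPress integration:

```python
# Sketch of an email-to-blog bridge: a message sent to a list address is
# parsed into a post for the corresponding blog. The address mapping and
# post structure are illustrative, not AA|RF's actual WordPress setup.
from email.parser import Parser

LIST_TO_BLOG = {"design-team@example.com": "design-blog"}  # assumed mapping

def email_to_post(raw_message):
    msg = Parser().parsestr(raw_message)
    blog = LIST_TO_BLOG.get(msg["To"])
    if blog is None:
        return None   # not a recognized blog mailing list
    return {
        "blog": blog,
        "title": msg["Subject"],
        "author": msg["From"],
        "body": msg.get_payload(),
    }

raw = (
    "From: donna@example.com\n"
    "To: design-team@example.com\n"
    "Subject: New style guide\n\n"
    "The updated guide is on the wiki.\n"
)
post = email_to_post(raw)
print(post["blog"], "-", post["title"])
```

A real deployment would hang this off the mail server's delivery pipe, or poll the list's mailbox over IMAP.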

Jensen had some great ideas for wiki adoption, often centred around “wikivangelists” getting out there and helping people. I especially like the idea of the “days of wine and wikis” events. 🙂  And they’re getting some great adoption rates.

I had to leave just before the end: she was running 7 minutes overtime and I had only 15 minutes between sessions to get to my own room to set up. It was hard to tear myself away, however; I found both Jensen’s presentation and the audience feedback to be riveting.

TUCON: BPM, The Open Source Debate

Ryan Herd, who heads the BPM centre of competence within RMB Private Bank, was up next to talk about the analysis that they did on open source BPM alternatives. Funny that the South Africans, like we understated Canadians, use the term "centre of competence" as opposed to the very American "center of excellence". 🙂

Don’t tell Ismael Ghalimi, but Herd thinks that JBoss’s jBPM is the only open source BPM alternative; it was the only one that they evaluated, along with a number of proprietary solutions including TIBCO. Given that he’s here speaking at this conference, you can guess which one they picked.

Their BPM project started with some strategic business objectives:

  • operational efficiency
  • improved client service
  • greater business process agility

and some technology requirements:

  • a platform to define, improve and automate business processes
  • real-time and historical process instance statistics
  • single view of a client and their related activities

They found that they needed to focus on three things:

  • Process: dynamic quality verification, exception handling that can step outside the defined process, and a focus on the end-to-end process.
  • People: have their people be obsessed with the client, develop an end-to-end process culture in order to address SLAs, and create full-function teams rather than an assembly-line process.
  • Systems: a single processing front-end, a reusable business object layer and centralized work management.

Next, they started looking at vendors, and for whatever reason, open source was considered in the mix: quite forward-thinking for a bank. In addition to TIBCO and jBPM, they considered DST‘s AWD, IBM‘s BPM, eiStream (now Global 360) and K2: a month and a half to review all of the products, then another month and a half doing a more focussed comparison of TIBCO and jBPM.

For process design, jBPM has only a non-visual programmer-centric environment, and has support for BPEL but not (obviously, since it’s not visual) BPMN. It does allow modelling freedom, but that can be a problem with enforcing internal standards. It also has no process simulation. TIBCO, on the other hand, has a visual process modelling environment that supports BPMN, has a near zero-code process design and provides simulation. Point: TIBCO.

On the integration side, jBPM has no graphical application integration environment, although it has useful integration objects and methods and excellent component-based design. Adapters are available but not easily reused, and it has no out-of-the-box communication or integration facilities. TIBCO has a graphical front-end for application integration, and lots of adapters and integration facilities. Point: TIBCO.

On the UI side, jBPM has only a rudimentary web-based end user environment, whereas TIBCO has the full GI arsenal at their disposal. Point: TIBCO.

Reporting and analytics: jBPM has nothing, TIBCO has iProcess Analytics and (now) iProcess Insight.

Support: don’t even go there, although JBoss wins on price. 🙂

Overall, they found that the costs would be about the same (because of the greater jBPM customization requirement), but the time to deploy would be much longer with jBPM, which led them to choose TIBCO.

Given what they found, I find it amazing that they spent three months looking at jBPM, since jBPM is, in its raw form, a developer tool whereas TIBCO spans a broader range of analyst and developer functionality. The results as presented are so biased in favour of TIBCO that it should have been obvious long before any formal evaluation was done that jBPM wasn’t suited to their particular purposes and should not have made their short list; likely, open source was someone’s pet idea and was thrown into the mix on a lark. Possibly an open source BPM solution like Intalio, which wasn’t available as open source at the time of their evaluation, would have been a much better fit for their needs if they were really dedicated to open source ideals. I’m pretty sure that anyone in the room who had not considered open source in the past will run screaming away from it in the future.

Getting past the blatant TIBCO plug masquerading as a product comparison, Herd went on to show the architecture of their solution, which uses a large number of underlying services managed by a messaging layer to interface with the BPM layer — a fairly standard configuration. They expect to go live later this year.