RPA just wants to be free: @WorkFusion RPA Express

Last week, WorkFusion announced that their robotic process automation product, RPA Express, will be released in 2017 as a free product; they published a blog post as well as the press release, and today they hosted a webinar to talk more about it. They are taking requests to join the beta program now, with a plan to launch publicly at the end of Q1 2017.

WorkFusion has a lot of interesting features in their RPA Express and Smart Process Automation (SPA) products, but today’s webinar was really about their business model for RPA Express. This is not a limited-time trial: it’s a free, enterprise-ready product that can generate business benefit, with no purchase price and no annual maintenance fees, although you obviously have infrastructure costs for the servers/desktops on which RPA Express runs. Their goal in making it free is to bypass the whole RFP-POC-ROI dance that goes on in most organizations, where a decision to implement RPA – which typically can show a pretty good ROI within a matter of weeks – can take months. With a free product, one major barrier to implementation has been removed.

So what’s the catch? WorkFusion has a more intelligent automation tool, SPA, and they’re hoping that after seeing the benefits of RPA Express, organizations will want to try out SPA on their more complex automation needs. RPA Express uses deterministic, rules-based automation, which requires explicit training or specification of each action to be taken; SPA uses machine learning, learning from user actions to automate tasks that would typically require human intervention, such as handling unstructured and dynamic data. WorkFusion envisions a “stairway to digital operations” that starts with RPA, then steps up the intelligence with cognitive processing, chatbots and crowdsourcing to a full set of cognitive services in SPA.

This doesn’t mean that RPA Express is just a “starter edition” for SPA: there are entire classes of processes that can be handled with deterministic automation, such as synchronizing data between systems that don’t otherwise talk to each other – SAP and Salesforce, for example. This replaces having a worker copy and paste information between screens, or (horrors!) re-type the information into two or more systems; it can result in a huge reduction in cost and time, and it removes tedious work from people to free them up for more complex decision-making or direct customer interaction.
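
To make that concrete, here’s a minimal sketch of what deterministic, rules-based synchronization looks like. The endpoints and field names are invented for illustration – this is not WorkFusion’s API – but it shows the key characteristic: every mapping rule is specified explicitly up front, with nothing learned from user behaviour.

```python
import requests  # hypothetical REST endpoints; not WorkFusion's actual API

CRM_URL = "https://crm.example.com/api/contacts"    # e.g., a Salesforce-style source
ERP_URL = "https://erp.example.com/api/customers"   # e.g., an SAP-style target

# Explicit, deterministic field mapping -- every rule is specified up front,
# which is exactly what distinguishes rules-based RPA from ML-driven automation.
FIELD_MAP = {"Name": "customer_name", "Email": "email", "Phone": "phone"}

def sync_contacts():
    contacts = requests.get(CRM_URL, timeout=30).json()
    for contact in contacts:
        payload = {target: contact[source] for source, target in FIELD_MAP.items()}
        # Upsert into the target system instead of a human re-typing it
        requests.put(f"{ERP_URL}/{contact['Id']}", json=payload, timeout=30)

if __name__ == "__main__":
    sync_contacts()
```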

RPA Express bots can also be called from other orchestration and automation tools, including a BPMS, and can run on a server or on individual desktops. We didn’t get a rundown of the technology, so more to come on that as they get closer to the release. We did see one or two screens: automation flows are modeled using a subset of BPMN (start and end events, activities, XOR gateways) that can be easily handled by a business user/analyst, and recorder bots capture actions while users run through the processes to be automated. There was a mention of coding on the platform as well, although it was clear that this is not required in many cases, hence development skills are not essential.
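
To give a sense of how small that BPMN subset is, here’s a toy process definition and executor limited to start/end events, activities and XOR gateways. This is my own illustration of the concept, not RPA Express’s internal representation:

```python
# A toy process restricted to the BPMN subset mentioned above: start/end
# events, activities, and XOR (exclusive) gateways. Node names and logic
# are invented for illustration.
PROCESS = {
    "start":         {"type": "start",    "next": "fetch_invoice"},
    "fetch_invoice": {"type": "activity", "action": lambda ctx: ctx.update(amount=120),
                      "next": "check_amount"},
    "check_amount":  {"type": "xor",
                      "branch": lambda ctx: "auto_approve" if ctx["amount"] < 500 else "manual_review"},
    "auto_approve":  {"type": "activity", "action": lambda ctx: print("approved"), "next": "end"},
    "manual_review": {"type": "activity", "action": lambda ctx: print("queued for human"), "next": "end"},
    "end":           {"type": "end"},
}

def run(process, node="start", ctx=None):
    ctx = ctx or {}
    while process[node]["type"] != "end":
        step = process[node]
        if step["type"] == "xor":
            node = step["branch"](ctx)   # gateway picks exactly one outgoing path
        else:
            step.get("action", lambda c: None)(ctx)
            node = step["next"]

run(PROCESS)
```

Even this toy version shows why the subset is approachable for a business analyst: there are no parallel splits, events or compensation to reason about, just a flowchart with decisions.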

Removing the cost of software changes the game, allowing more organizations to get started with this technology without having to do an internal cost justification for the licensing costs. There are still training and implementation costs, but WorkFusion plans to provide some of this through online video courses, as well as having the large SIs and other partners trained to have this in their toolkit when they are working with organizations. More daunting is the architectural review that most new software needs to go through before being installed within a larger organization: this can still block the implementation even if the software is free.

I’m looking forward to seeing a more complete technical rundown when the product is closer to its launch date. I’m also looking forward to seeing how this changes the price point of RPA offerings from other vendors.

TechnicityTO 2016: Challenges, Opportunities and Change Agents

The day at Technicity 2016 finished up with two panels: the first on challenges and opportunities, and the second on digital change agents.

The challenges and opportunities panel, moderated by Jim Love of IT World Canada, was more of a fireside chat with Rob Meikle, CIO at City of Toronto, and Mike Williams, GM of Economic Development and Culture, both of whom we heard from in the introduction this morning. Williams noted that they moved from an environment of few policies and fewer investments under the previous administration to a more structured and forward-thinking environment under Mayor John Tory, and that this introduced a number of IT challenges; although the City can’t really fail in the way that a business can fail, it can be ineffective at serving its constituents. Meikle added that they have a $12B operating budget and $33B in capital investments, so we’re not talking about small numbers: even at those levels, there needs to be a fair amount of justification that a solution will solve a civic problem rather than just buying more stuff. This is not just a challenge for the City, but for the vendors that provide those solutions.

The City is striving to establish a number of pillars of technological advancement:

  • be technologically advanced and efficient in their internal operations
  • understand and address digital divides that exist amongst residents
  • create an infrastructure of talent and services that can draw investment and business to the City

This last point becomes a bit controversial at times, when there is a lack of understanding of why City officials need to travel to promote the City’s capabilities, or support private industry through incubators. Digital technology is how we will survive and thrive in the future, so promoting technology initiatives has widespread benefits.

There was a discussion about talent: both people who work for the City, and bringing in businesses that attract private-sector talent. Our now-vibrant downtown core is attractive for tech companies and their employees, fueled by the talent coming out of our universities. The City still has challenges with procurement of external services and solutions: Williams admitted that their processes need improvement, and are hampered by cumbersome procurement rules. Democracy is messy, and it slows down things that could probably be done a lot faster in a less free state: a reasonable trade. 🙂

The last session of the day looked at examples of digital change agents in Toronto, moderated by Fawn Annan of IT World Canada, and featuring Inspector Shawna Coxon of the Toronto Police Service, Pam Ryan from Service Development & Innovation at the Toronto Public Library, Kristina Verner, Director of Intelligent Communities at Waterfront Toronto, and Sara Diamond, President of OCAD University. I’m a consumer and a supporter of City services such as these, and I love seeing the new ways that they’re finding to include all residents and advance technology. Examples of initiatives include fiber broadband for all waterfront community residences regardless of income level; providing mobile information access to neighbourhood police officers to allow them to get out of their cars and better engage with the community; integrating arts and design education with STEM for projects such as transit and urban planning (STEAM is the new STEM); and digital innovation hubs at some library branches to provide community access to high-tech gadgets such as 3D printers.

There was a great discussion about what it takes to be a digital innovator in these contexts: it’s as much about people, culture and inclusion as it is about technology. There are always challenges in measuring success: metrics need to include the public’s opinion of these agencies and their digital initiatives, an assessment of the impact of innovation on participants, as well as more traditional factors such as number of constituents served.

That’s it for Technicity 2016, and kudos to IT World Canada and the City of Toronto for putting this day together. I’ve been to a couple of Technicity conferences in the past, and always enjoy them. Although I rarely do work for the public sector in my consulting business, I really enjoy seeing how digital transformation is occurring in that sector; I also like hearing how my great city is getting better.

TechnicityTO 2016: Open data driving business opportunities

Our afternoon at Technicity 2016 started with a panel on open data, moderated by Andrew Eppich, managing director of Equinix Canada, and featuring Nosa Eno-Brown, manager of the Open Government Office at Ontario’s Treasury Board Secretariat, Lan Nguyen, deputy CIO at City of Toronto, and Bianca Wylie of the Open Data Institute Toronto. Nguyen started out talking about how data is a key asset to the city: they have a ton of it, gathered from over 800 systems, and are actively working on establishing data governance and how the data can best be used. The city wants a platform for consuming this data that allows it to be properly managed (e.g., from a privacy standpoint) while making it available to the appropriate entities. Eno-Brown followed with a description of the province’s initiatives in open data, which include a full catalog of their data sets covering both open and closed data sets. Many of the provincial agencies, such as the LCBO, are also releasing their data sets as part of this initiative, and there’s a need to ensure that standards are used for the availability and format of the data in order to enable its consumption.

Wylie covered more about open data initiatives in general: the data needs to be free to access, machine-consumable (typically not in PDF, for example), and free to use and distribute as part of public applications. I use a few apps that consume City of Toronto open data, including the one that tells me when my streetcar is arriving; we would definitely not have apps like this if we waited for the City to build them, and open data allows them to evolve in the private sector. Even though those apps don’t generate direct revenue for the City, the success of the private businesses that build them does result in indirect benefits: tax revenue, a reduction in calls/inquiries to government offices, and a more vibrant digital ecosystem.
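
As a concrete illustration of “machine-consumable”: once a data set is published as CSV rather than PDF, a few lines of code can use it. The URL and column names below are hypothetical, but this is roughly the pattern behind the apps built on the City’s open data:

```python
import csv
import io
import requests

# Hypothetical dataset URL and schema -- the real City of Toronto catalogue
# lives at open.toronto.ca. The point is simply that a CSV, unlike a PDF,
# can be consumed by software without manual transcription.
DATASET_URL = "https://example.toronto.ca/open-data/streetcar-delays.csv"

resp = requests.get(DATASET_URL, timeout=30)
resp.raise_for_status()

rows = list(csv.DictReader(io.StringIO(resp.text)))

# Once the data is structured, anyone can build on it -- this is roughly
# what a transit-arrival app does with the city's feeds.
worst = max(rows, key=lambda r: int(r["delay_minutes"]))
print(worst["route"], worst["delay_minutes"])
```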

Although data privacy and security are important, they are often used as excuses for not sharing data when an entity benefits unduly by keeping it private: the MLS comes to mind, with the recent fight to open up real estate listings and sale data. Nguyen repeated the City’s plan to build a platform for sharing open data in a more standard fashion, but didn’t directly address the issue of opening up data that is currently held as private. Eno-Brown more directly addressed the protectionist attitude of many public servants towards their data, and how that is changing as more information becomes available through a variety of online sources: if you can Google it and find it online, what’s the sense in not releasing the data set in a standard format? They perform risk assessments before releasing data sets, which can result in some data cleansing and redaction, but they are focused on finding a way to release the data if at all feasible. Interestingly, many of the consumers of Ontario’s open data are government of Ontario employees: it’s the best way for them to find the data that they need to do their daily work. Wylie addressed the people and cultural issues of releasing open data, and how understanding what people are trying to do with the data can facilitate its release. Open data for business and open data for government are not two different things: they should be covered under the same initiatives, with private-public partnerships leveraged where possible to make the process more effective and less costly. She also pointed out that shared data — that is, data shared within and between government agencies — still has a long way to go, and should be prioritized over open data where it can help serve constituents better.

The issue of analytics came up near the end of the panel: Nguyen noted that it’s not just about the data, but about what insights can be derived from it in order to drive actions and policies. Personally, I believe that this is well served by opening up the raw data to the public, where it will be analyzed far more thoroughly than the City is likely to do itself. I agree with her premise that open data should be used to drive socioeconomic innovation, which supports my idea that many of the apps and analyses are likely to emerge from outside the government – but likely only if more complete raw data are released rather than pre-aggregated data.

TechnicityTO 2016: IoT and Digital Transformation

I missed a couple of sessions, but made it back to Technicity in time for a panel on IoT moderated by Michael Ball of AGF Investments, featuring Zahra Rajani, VP Digital Experience at Jackman Reinvents, Ryan Lanyon, Autonomous Vehicle Working Group at City of Toronto, and Alex Miller, President of Esri Canada. The panel was titled Drones, Driverless Cars and IoT, with a focus on how intelligent devices interact with citizens in the context of a smart city. I used to work in remote sensing and geographic information systems (GIS), so having the head of Esri Canada talk about how GIS acts as a smart fabric on which these devices live was particularly interesting to me. Miller talked about how there needs to be a framework and processes for enabling smarter communities, from observation and measurement, data integration and management, visualization and mapping, analysis and modeling, planning and design, and decision-making, all the way to action. The vision is a self-aware community, where smart devices built into infrastructure and buildings feed back into an integrated view that can inform and decide.

Lanyon talked about autonomous cars in the City of Toronto, from the standpoint of the required technology, public opinion, and cultural changes away from individual car ownership. Rajani followed with a brief on potential digital experiences that brands create for consumers, then we circled back to the other two participants about how the city can explore private-public sensor data sharing, whether for cars or retail stores or drones. They also discussed issues of drones in the city: not just regulations and safety, but the issue of sharing space both on and above the ground in a dense downtown core. A golf cart-sized pizza delivery robot is fine for the suburbs with few pedestrians, but just won’t work on Bay Street at rush hour.

The panel finished with a discussion on IoT for buildings, and the advantages of “sensorizing” our buildings. It goes back to being able to gather better data, whether it’s external local factors like pollution and traffic, internal measurements such as energy consumption, or visitor stats via beacons. There are various uses for the data collected, both by public and private sector organizations, but you can be sure that a lot of this ends up in those silos that Mark Fox referred to earlier today.

The morning finished with a keynote by John Tory, the mayor of Toronto. This week’s shuffling of City Council duties included designating Councillor Michelle Holland as Advocate for the Innovation Economy, since Tory feels that the city is not doing enough to enable innovation for the benefit of residents. Part of this is encouraging and supporting technology startups, but it’s also about bringing better technology to bear on digital constituent engagement. Just as I see with my private-sector clients, online customer experiences for many services are poor, internal processes are manual, and a lot of things only exist on paper. New digital services are starting to emerge at the city, but it’s a bit of a slow process and there’s a long road of innovation ahead. Toronto has made commitments to innovation in technology as well as arts and culture, and is actively seeking to bring in new players and new investments. Tory sees the Kitchener-Waterloo technology corridor as a partner with the Toronto technology ecosystem, not a competitor: building a 21st century city requires bringing the best tools and skills to bear on solving civic problems, and leveraging technology from Canadian companies brings benefits on both sides. We need to keep moving forward to turn Toronto into a genuinely smart city to better serve constituents and to save money at the same time, keeping us near or at the top of livable city rankings. He also promised that he will step down after a second term, if he gets it. 🙂

Breaking now for lunch, with afternoon sessions on open data and digital change agents.

By the way, I’m blogging using the WordPress Android app on a Nexus tablet today (aided by a Microsoft Universal Foldable Keyboard), which is great except it doesn’t have spell checking. I’ll review these posts later and fix typos.

Exploring City of Toronto’s Digital Transformation at TechnicityTO 2016

I’m attending the Technicity conference today in Toronto, which focuses on the digital transformation efforts in our city. I’m interested in this both as a technologist, since much of my work is related to digital transformation, and as a citizen who lives in the downtown area and makes use of a lot of city services.

After brief introductions by Fawn Annan, President and CMO of IT World Canada (the event sponsor), Mike Williams, GM of Economic Development and Culture with City of Toronto, and Rob Meikle, CIO at City of Toronto, we had an opening keynote from Mark Fox, professor of Urban Systems Engineering at University of Toronto, on how to use open city data to fix civic problems.

Fox characterized the issues facing digital transformation as potholes and sinkholes: the former are a bit more cosmetic and can be easily paved over, while the latter indicate that some infrastructure change is required. Cities are, he pointed out, not rocket science: they’re much more complex than rocket science. As systems, cities are complicated as well as complex, with many different subsystems and components spanning people, information and technology. He showed a number of standard smart city architectures put forward by both vendors and governments, and emphasized that data is at the heart of everything.

He covered several points about data:

  • Sparseness: the data that we collect is only a small subset of what we need; it’s often stored in silos and not easily accessed by other areas, and it’s frequently lost (or inaccessible) after a period of time. In other words, some of the sparseness is due to poor design, and some is due to poor data management hygiene.
  • Premature aggregation: raw data is aggregated spatially, temporally and categorically based on what you think people want from the data, removing their ability to do their own analysis on the raw data (see the sketch after this list).
  • Interoperability: the ability to compare information between municipalities, even for something as simple as date fields and other attributes. Standards for these data sets need to be established and used by municipalities in order to allow meaningful comparisons.
  • Completeness: what data a government chooses to make available, and whether it is available as raw data or in aggregate. This needs to be driven by the problems that the consumers of the open data are trying to solve.
  • Visualization: straightforward when you have a couple of data sets, but much more difficult when you are combining many of them — his example was the City of Edmonton using 233 data sets to come up with crime and safety measures.
  • Entitlement: governments often feel a sense of entitlement about their data, such that they choose to hold back more than they should, whereas they should be in the business of empowering citizens to use this data to solve civic problems.
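
Premature aggregation is worth a concrete example. Here’s a minimal sketch (with invented numbers) of how publishing only aggregates destroys analyses that the raw records would have supported:

```python
import pandas as pd

# Raw, record-level data: each row is one incident with a time and location.
raw = pd.DataFrame({
    "ward":  ["W1", "W1", "W2", "W2", "W2"],
    "hour":  [2, 23, 9, 10, 11],
    "count": [1, 1, 1, 1, 1],
})

# Premature aggregation: publishing only totals per ward...
published = raw.groupby("ward")["count"].sum()
print(published)   # W1: 2, W2: 3

# ...means a consumer can no longer ask time-of-day questions, e.g.
# "how many incidents happen overnight?" -- that needs the raw rows.
overnight = raw[raw["hour"].between(0, 5)]["count"].sum()
print(overnight)   # recoverable only from the raw data
```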

Smart cities can’t be managed in a strict sense, Fox believes: rather, it’s a matter of managing complexity and uncertainty. We need to understand the behaviours that we want the system (i.e., the smart city) to exhibit, and work towards achieving those. This is about more than just sensing the environment: it also means understanding limits and constraints, plus knowing when deviations are significant and who needs to know about them. These systems need to be responsive and goal-oriented, flexibly responding to events based on desired outcomes rather than a predefined process (or, as I would say, unstructured rather than structured processes): this requires situational understanding, flexibility, shared knowledge and empowerment of the participants. Systems also need to be introspective, that is, compare their performance to goals, find new ways to achieve goals more effectively, and predict outcomes. Finally, cities (and their systems) need to be held accountable for their actions, which requires that activities be auditable to determine responsibility, and that the underlying basis for decisions be known, so that a digital ombudsman can provide oversight.
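
As a rough illustration of “knowing when deviations are significant and who needs to know”, here’s a hedged sketch; the metrics, thresholds and routing below are invented for illustration, not from Fox’s talk:

```python
# Goal ranges (low, high) per metric, and an owner to notify on deviation.
# All names and numbers are invented to illustrate the pattern.
GOALS   = {"water_main_pressure": (40, 80), "transit_headway_min": (0, 6)}
ROUTING = {"water_main_pressure": "utilities-ops", "transit_headway_min": "transit-control"}

def check(metric, value):
    low, high = GOALS[metric]
    if not (low <= value <= high):
        deviation = value - (high if value > high else low)
        # Only significant deviations are escalated, and only to the owner.
        print(f"alert {ROUTING[metric]}: {metric}={value} (deviation {deviation:+})")

check("transit_headway_min", 9)    # -> alert transit-control: ... (deviation +3)
check("water_main_pressure", 55)   # within goal range, no alert
```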

Great talk, and very aligned with what I see in the private sector too: although the terminology is a bit different, the principles, technologies and challenges are the same.

Next, we heard from Hussam Ayyad, director of startup services at Ryerson University’s DMZ — a business incubator for tech startups — on Canadian FinTech startups. The DMZ has incubated more than 260 startups that have raised more than $206M in funding over their six years in existence, making them the #1 university business incubator in North America, and #3 in the world. They’re also ranked most supportive of FinTech startups, which makes sense considering their geographic proximity to Toronto’s financial district. Toronto is already a great place for startups, and this definitely provides a step up for the hot FinTech market by providing coaching, customer links, capital and community.

Unfortunately, I had to duck out partway through Ayyad’s presentation for a customer meeting, but plan to return for more of Technicity this afternoon.

What’s on your agenda for 2017? Some BPM conferences to consider

I just saw a call for papers for a conference next October, and went through to do a quick update of my BPM Event Calendar. I certainly don’t attend all of these events, but like to keep track of who’s doing what, when and where. Here’s what I have in there so far; if you have others, send me a note or add them as a comment to this post and I’ll add them to the calendar. I’m posting just the major conferences here, not every regional seminar.

Many vendors are eschewing a single large annual conference in favor of several regional conferences, easing the travel concerns of attendees; since these are usually just one day long, they aren’t announced this far in advance. It will be interesting to see if more vendors decide to go this way, or do more livestreaming to allow people to participate in more of the conference content remotely.

At this point, I don’t have confirmed attendance or speaking spots at any of these, although I will almost certainly be attending bpmNEXT and a few of the vendor conferences, either as a speaker or as an analyst/blogger. If you’re interested in having me attend your conference, let me know; I require that my travel expenses are covered (otherwise they come out of my own pocket in addition to the billable days that I’m giving up to attend), and a speaking fee if you want me to do a keynote or other presentation.

Intelligent Capture enables Digital Transformation at #ABBYYSummit16

I’ve been in beautiful San Diego for the past couple of days at the ABBYY Technology Summit, where I gave the keynote yesterday on why intelligent capture (including recognition technologies and content analytics) is a necessary onramp to digital transformation. I started my career in imaging and workflow over 25 years ago – what we would now call capture, ECM and BPM – and I’ve seen over and over again that if you don’t extract good data up front as quickly as possible, then you just can’t do a lot to transform those downstream processes. You can see my slides at Slideshare as usual:

I’m finishing up a white paper for ABBYY on the same topic, and will post a link here when it’s up on their site. Here’s the introduction (although it will probably change slightly before final publication):

Data capture from paper or electronic documents is an essential step for most business processes, and often is the initiator for customer-facing business processes. Capture has traditionally required human effort – data entry workers transcribing information from paper documents, or copying and pasting text from electronic documents – to expose information for downstream processing. These manual capture methods are inefficient and error-prone, but more importantly, they hinder customer engagement and self-service by placing an unnecessary barrier between customers and the processes that serve them.

Intelligent capture – including recognition, document classification, data extraction and text analytics – replaces manual capture with fully-automated conversion of documents to business-ready data. This streamlines the essential link between customers and your business, enhancing the customer journey and enabling digital transformation of customer-facing processes.
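
To make “fully-automated conversion of documents to business-ready data” concrete, here’s a minimal sketch of the recognition-plus-extraction pattern. It uses the open source Tesseract OCR engine purely for illustration – a product like ABBYY’s supplies its own recognition and extraction technology – and the field patterns are invented:

```python
import re
from PIL import Image
import pytesseract  # open source OCR, used here purely for illustration

def capture_invoice(path):
    # Recognition: image in, text out.
    text = pytesseract.image_to_string(Image.open(path))

    # Extraction: pull out the fields that downstream processes actually need.
    # These patterns are invented for this sketch; real documents vary widely,
    # which is where ML-based classification and extraction earn their keep.
    invoice_no = re.search(r"Invoice\s*#?\s*(\w+)", text)
    total = re.search(r"Total\s*\$?([\d,]+\.\d{2})", text)
    return {
        "invoice_no": invoice_no.group(1) if invoice_no else None,
        "total": total.group(1) if total else None,
    }

print(capture_invoice("invoice.png"))
```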

I chilled out a bit after my presentation, then decided to attend one session that looked really interesting. It was, but it was an advance preview of a product that’s embargoed until it comes out next year, so you’ll have to wait for my comments on it. 😉

A well-run event with a lot of interesting content, attended primarily by partners who build solutions based on ABBYY products, as well as many of ABBYY’s team from Russia (where a significant amount of their development is done) and other locations. It’s nice to attend a 200-person conference for a change, where – just like Cheers – everybody knows your name.

Keynoting at @ABBYY_USA Technology Summit

I’ve been in the BPM field since long before it was called BPM, starting with imaging and workflow projects back in the early 1990s. Although my main focus is on process now (hence the name of my blog), almost every project that I’m involved in has some element of content and capture, although not necessarily from paper documents. Basically, content capture is the onramp to many business processes: either the capture of a piece of content is what triggers a process (e.g., an application form) or it adds information to a process to move it along. Capture can mean traditional document scanning with intelligent recognition in the form of OCR, ICR, barcode and mark sense recognition, or can also be capture of information already in electronic form but not in a structured format (e.g., emails).
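
Here’s a deliberately naive sketch of that “capture triggers a process” idea: classify an incoming piece of content, then route it to the appropriate process. Real capture products use trained classifiers rather than keyword lists, and would start an actual BPM process instance; everything below is invented for illustration:

```python
# A deliberately naive classifier to illustrate capture as the onramp:
# the document's class determines which business process gets triggered.
RULES = {
    "application": ["application", "applicant", "signature"],
    "invoice":     ["invoice", "amount due", "remit"],
}

def classify(text):
    scores = {doc_type: sum(kw in text.lower() for kw in kws)
              for doc_type, kws in RULES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def on_document_captured(text):
    doc_type = classify(text)
    # In a real deployment this would start a BPM process instance;
    # here we just show the routing decision.
    print(f"starting '{doc_type}' process")

on_document_captured("Please find our invoice; the amount due is $1,200.")
```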

To get to the point, this is why I’m excited to be keynoting at the ABBYY Technology Summit coming up on November 16-18, in a presentation entitled How Digital Transformation is Increasing the Value of Capture and Text Analytics:

As the business world has been wrestling with the challenge of Digital transformation, the last few years have seen the shift away from BPM and Case Management technology platforms towards the more solutions-oriented approach offered by Smart Process Applications and Case Management Frameworks. A critical component of these business solutions is the capability to capture the key business information at the point of origin.

This information is often buried inside forms and other business documents. Capturing this data through recognition technologies and automatic document classification transforms streams of documents of any structure and complexity into business-ready data.

This enables organizations of any size to streamline their existing business processes, increasing efficiency and reducing costs; it also enables real-time customer self-service processes triggered by mobile document capture.

I’ll be covering trends and benefits of intelligent capture, providing ABBYY’s customers and partners in attendance with solid advice on how to best start integrating these technologies to make their business processes run better. I’m also writing a paper covering these topics, sponsored by ABBYY, which will be available in time for the conference.

If you’re at the conference, please stop by and say hi; I’ll be hanging out there for the rest of the day after my keynote.

Strategy to execution – and back: it’s all about alignment

I recently wrote a paper sponsored by Software AG called Strategy To Execution – And Back, which you can find here (registration required). From the introduction:

When planning for business success, corporate management sets business strategy and specifies goals in terms of critical success factors and key performance indicators (KPIs). Although senior management is not concerned with the technical details of how business operations are implemented, they must have confidence that the operations are aligned with the strategy, and be able to monitor performance relative to the goals in real time.

In order to achieve operational alignment, there must be a clear path that maps strategy to execution: a direct link from the strategic goals in the high-level business model, through IT development and management practices, to the systems, activities and roles that make the business work. However, that’s only half the story: there must also be a path back from execution to strategy, allowing operational performance to be measured against the objectives in order to guide future strategy. Without both directions of traceability, there’s a disconnect between strategy and operations that can allow a business to drift off course without any indication until it’s far too late.

I cover how you need to have links from your corporate strategy through various levels of architecture to implementation, then be able to capture the operational metrics from running processes and roll those up relative to the corporate goals. If you don’t do that, then your operations could just be merrily going along their own path rather than working towards corporate objectives.
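
A minimal sketch of that execution-to-strategy direction, with invented KPI names and numbers: operational metrics from completed process instances are rolled up and compared against the strategic targets:

```python
# Strategic targets set by management (names and values invented).
kpi_targets = {"avg_handle_time_min": 10.0, "first_contact_resolution": 0.80}

# Operational metrics captured from running processes.
completed_cases = [
    {"handle_time_min": 8.5,  "resolved_first_contact": True},
    {"handle_time_min": 14.0, "resolved_first_contact": False},
    {"handle_time_min": 9.0,  "resolved_first_contact": True},
]

# Roll the execution-level metrics up to the KPI level.
actuals = {
    "avg_handle_time_min":
        sum(c["handle_time_min"] for c in completed_cases) / len(completed_cases),
    "first_contact_resolution":
        sum(c["resolved_first_contact"] for c in completed_cases) / len(completed_cases),
}

for kpi, target in kpi_targets.items():
    # For handle time, lower is better; for resolution rate, higher is better.
    ok = actuals[kpi] <= target if "time" in kpi else actuals[kpi] >= target
    print(f"{kpi}: actual={actuals[kpi]:.2f} target={target} "
          f"{'on track' if ok else 'off course'}")
```

Without this feedback loop, the operational numbers exist but never inform the strategy, which is exactly the drift described above.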

Another rift in the open source BPM market: @FlowableBPM forks from @Alfresco Activiti

In early 2013, Camunda – at the time, a value-added Activiti consulting partner as well as a significant contributor to the open source project – created a fork from Activiti to form what is now the Camunda open source BPM platform as well as their commercial version based on the open source core. As I wrote at the time:

At the end of 2012, I had a few hints that Alfresco’s Activiti BPM group was undergoing some amount of transition: Tom Baeyens, the original architect and developer of Activiti (now CEO of the Effektif cloud BPM startup announced last week), was no longer leading the Activiti project and had decided to leave Alfresco after less than three years; and camunda, one of the biggest Activiti contributors (besides Alfresco) as well as a major implementation consulting partner, was making noises that Activiti might be too tightly tied to Alfresco’s requirements for document-centric workflow rather than the more general BPM platform that Activiti started as.

Since then, Effektif became Signavio Workflow and Camunda decided to use a capital letter in their name; what didn’t change, however, is that as the main sponsor of Activiti, Alfresco obviously has a need to make Activiti work for document-centric BPM and skewed the product in that direction. That’s not bad if you’re an Alfresco ECM customer, but likely was not the direction that the original Activiti BPM team wanted to go.

Last month, I heard that key Activiti people had left Alfresco but had no word about where they were heading; last week, former Activiti project lead Tijs Rademakers and Activiti co-founder and core developer Joram Barrez announced that they were creating an Activiti fork to form Flowable with a team including former Alfresco Activiti people and other contributors.

To be clear to those who don’t dabble in open source, forking is a completely normal activity (no pun intended…well, maybe only a little) wherein a copy of the source code is taken at a point in time to create a new open source project with its own name and developer community. This may be done because of a disagreement over product direction – as appears to be the case here – or because someone wants to take it in a radically different direction to address a different need or market.

I heard about all of this from Jeff Potts, who has been involved in the Alfresco open source community for a decade, via his blog. He wrote about the Activiti leads leaving Alfresco back in September, although he reported it as three people leaving independently that just happened to occur at the same time – possibly not completely accurate, in hindsight, but that was the word at the time. He then saw the Flowable announcement (linked above) and wrote about that, which is where I first saw it.

Alfresco’s founder and CTO, John Newton, posted about the team departure and the fork:

Unfortunately, some of my early friends on the Activiti project have disagreed with our direction and have taken the step of forking the Activiti code. This is disappointing because the work that they have done is very good and has generally been in the spirit of open source. However, the one thing that we could not continue to give them was exclusive control of the project. I truly wish that we could have found a way to work with them within a community framework.

This seemed to confirm my suspicion that this was a disagreement in product direction as well as a philosophical divide; with Alfresco now a much bigger company than it was at the time that they took Activiti under their wing, it’s not surprising that the corporate mindset wouldn’t always agree with open source direction. Having to spend much more effort on the enterprise edition than the open source project and seeing BPM subsumed under ECM would not sit well with the original Activiti BPM team.

Newton’s comments are also an interesting echo of Barrez’ post at the time of the Camunda fork: in both situations, there’s a sense of disappointment – and maybe a bit of betrayal? – although now Barrez is on the other side of the equation.

Flowable was quick to offer a guide on how to move from Activiti to Flowable – trivial at this point since the code base is still the same – and Camunda jumped in with a guide on moving from Activiti to Camunda, an update on what they’ve done to the engine since their fork in 2013, and reasons why you should make the switch.

If you’re using Activiti right now, you have to be a bit nervous about this news, but don’t be.

  • If you’re using it primarily for document workflow with your Alfresco ECM, you’re probably best to stay put in the long run: undoubtedly, Activiti will be restaffed with a good team and will continue to integrate tightly with Alfresco; it’s possible that some of the capabilities might find their way from the Activiti project to the Alfresco project over time. There’s obviously going to be a bit of a gap in the team for a while: the project shows no new commits for a month, and questions on the forum are going unanswered.
  • If you use Activiti without Alfresco ECM (or with very little of it), you still don’t need to do anything right now: as Camunda showed in their post, a migration path from Activiti to Flowable or Camunda or any other fork will still exist in the future because of the shared heritage. It will get more complex over time, but not impossible. Sit tight for 6-12 months, and reassess your options then.
  • If you’re considering Activiti as a new installation, consider your use cases. If you’re a heavy Alfresco ECM user and want it tightly integrated, Activiti is the way to go. For other cases, it’s not so clear. We need to hear a bit more from Alfresco on their plans, but it’s telling that Newton’s post said “Business Process Management is one of those critical components of any content management system” which seems to place ECM as the primary focus and BPM as a component. He also said that they would be more explicit in their roadmap, and I recommend that you wait for that if you’re in the evaluation stage for a pure (non-ECM) BPM platform.

In the meantime, Flowable has released their initial 5.22.0 engine, and plans to have version 6 out by the end of November. They haven’t published a product roadmap yet, but I’m expecting significant divergence from Activiti to happen quickly in order to incorporate new technologies that bring the focus back to BPM.

Note: the photo accompanying this post was taken by my talented photographer friend, Pat Anderson, with whom I have eaten many delicious and photogenic meals.