BPM2023 Day 2: RPA Forum

In the last session of the day, I attended another part of the RPA Forum, with two presentations. 

The first presentation was “Is RPA Causing Process Knowledge Loss? Insights from RPA Experts” (Ishadi Mirispelakotuwa, Rehan Syed, Moe T. Wynn), presented by Moe Wynn. RPA has a lot of measurable benefits – efficiency, compliance, quality – but what about the “dark side” of RPA? Can it make organizations lose knowledge and control over their processes because people have been taken out of the loop? RPA is often quite brittle, and when (not if) it stops working, it’s possible that organizational amnesia has set in: no one remembers how the process works well enough to do it manually. The resulting process knowledge loss (PKL) can have a number of negative organizational impacts.

The study created a conceptual model for RPA-related PKL, and she walked us through the sets of human, organizational and process factors that may contribute. In layman’s terms, use it or lose it.

In my opinion, this is different from back-end or more technical automation (e.g., deploying a BPMS or creating APIs into enterprise system functionality) in that back-end automation is usually fully specified, rigorously coded and tested, and maintained as a part of the organization’s enterprise systems. Conversely, RPA is often created by the business areas directly and can be inherently brittle due to changes in the systems with which it interfaces. If an automated process goes down, there are likely service level agreements in place and IT steps in to get the system back online. If an RPA bot goes down, a person is expected to do the tasks manually that had been done by the bot, and there is less likely to be a robust SLA for getting the bot fixed and back online. There was an interesting discussion around this in the Q&A, although it wasn’t part of the area of study for the paper as presented.

The second presentation was “A Business Model of Robotic Process Automation” (Helbig & Braun), presented by Eva Katarina Helbig of BurdaSolutions, an internal IT service provider for an international media group. Their work was based on a case study within their own organization, looking at establishing RPA as a driver of digitization and automation within a company based on an iterative, holistic view of business models, with the Business Model Canvas as the analysis tool.

They interviewed several people across the organization, mostly in operational areas, to develop a more structured model for how to approach, develop and deploy RPA projects, starting with the value proposition and expanding out to identify the customers, resources and key activities.

That’s it for day two of the main BPM2023 conference, and we’re off later to the Spoorwegmuseum for the conference dinner and a tour of the railway museum.

BPM2023 Day 1: RPA Forum

In the afternoon breakouts, I attended the RPA (robotic process automation) forum for three presentations.

The first presentation was “What Are You Gazing At? An Approach to Use Eye-tracking for Robotic Process Automation”, presented by Antonio Martínez-Rojas. RPA typically includes a training agent that captures what and where a human operator is typing based on UI logs, and uses that to create the script of actions that should be executed when that task is automated using the RPA “bot” without the person being involved – a type of process mining but based on UI event logs. In this presentation, we heard about using eye tracking — what the person is looking at and focusing on — during the training phase to understand where they are looking for information. This is especially interesting in less structured environments such as reading a letter or email, where the information may be buried in non-relevant text, and it’s difficult to filter out the relevant information. Unlike the UI event log methods, this can find what the user is focusing on while they are working, which may not be the same things on the screen that they are clicking on – an important distinction.
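To make the idea concrete, here’s a minimal sketch of how gaze data might be combined with a UI event log to keep only the fields that the user actually attended to. To be clear, the data structures, field names and dwell threshold here are my own assumptions for illustration, not the authors’ method.

```python
# Hypothetical sketch: filter a UI event log down to the elements the user
# actually looked at, using eye-tracking fixations. All names and the
# 300ms dwell threshold are assumptions, not from the presented paper.
from dataclasses import dataclass

@dataclass
class UiEvent:
    timestamp: float   # seconds from start of recording
    element: str       # UI element identifier, e.g. "txt_policy_number"
    action: str        # "click", "type", ...

@dataclass
class Fixation:
    start: float       # seconds
    end: float
    element: str       # UI element under the gaze point

def gaze_relevant_events(events: list[UiEvent],
                         fixations: list[Fixation],
                         min_dwell: float = 0.3) -> list[UiEvent]:
    """Keep only events on elements the user fixated on for >= min_dwell seconds."""
    dwell: dict[str, float] = {}
    for f in fixations:
        dwell[f.element] = dwell.get(f.element, 0.0) + (f.end - f.start)
    relevant = {el for el, d in dwell.items() if d >= min_dwell}
    return [e for e in events if e.element in relevant]
```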

The second presentation was “Accelerating The Support of Conversational Interfaces For RPAs Through APIs”, presented by Yara Rizk. She presented the problem that many business people could be better supported through easier access to all types of APIs, including unattended RPA bots, and proposed a chatbot interface to APIs. The chatbot’s intents can be extracted by automatically interrogating the OpenAPI specifications, with some optional addition of phrases from people, to create natural language sentences: the intent of an action is derived from the API endpoint name and description plus sample sentences provided by the people. Then the sentences are analyzed and filtered, typically with some involvement from human experts, and used to train the intent recognition models required to drive a chatbot interface.
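As a rough sketch of that first step — my own simplification, not the presented system — you could walk an OpenAPI spec and turn each operation’s name and description into candidate training sentences for the intent model:

```python
# Walk an OpenAPI (Swagger) spec and emit candidate utterances per intent.
# The phrasing templates are naive placeholders; the real approach analyzes
# and filters generated sentences, typically with human experts in the loop.
import json

HTTP_METHODS = {"get", "post", "put", "delete", "patch"}

def utterances_from_openapi(spec_path: str) -> dict[str, list[str]]:
    with open(spec_path) as f:
        spec = json.load(f)
    intents: dict[str, list[str]] = {}
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if method not in HTTP_METHODS:
                continue  # skip path-level "parameters" entries and the like
            intent = op.get("operationId", f"{method.upper()} {path}")
            summary = (op.get("summary") or op.get("description") or "").strip()
            if summary:
                phrase = summary[0].lower() + summary[1:]
                intents[intent] = [summary, f"I want to {phrase}", f"Can you {phrase}?"]
    return intents
```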

The last presentation in this session was “Migrating from RPA to Backend Automation: An Exploratory Study”, presented by Andre Strothmann. He discussed how RPA robots need to be designed and prioritized so that they can be easily replaceable, with the goal of moving to back-end automation as soon as it is available. I’ve written and presented many times about how RPA is a bridging technology, and most of it will go away in the 5-10 year horizon, so I’m pretty happy to see this presented in a more rigorous way than my usual hand-waving. He discussed the analysis of their interview data that resulted in some fundamental design requirements for RPA bots, design guidelines for the processes that orchestrate those bots, and migration considerations when moving from RPA bots to APIs. If you’re developing RPA bots now and understand that they are only a stopgap solution, you should be following this research.
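My own illustration of what “easily replaceable” could look like in practice — not taken from the paper — is to keep the orchestration layer ignorant of whether a task is performed by a bot or an API, so that migration becomes a configuration change rather than a process change:

```python
# Sketch: the process layer calls an abstract task executor, so a bot-backed
# implementation can later be swapped for a back-end API call without
# touching the orchestrating process. All names here are hypothetical.
from abc import ABC, abstractmethod

class TaskExecutor(ABC):
    @abstractmethod
    def execute(self, payload: dict) -> dict: ...

class RpaBotExecutor(TaskExecutor):
    """Stopgap: drives the legacy app's UI via an RPA bot (stubbed here)."""
    def execute(self, payload: dict) -> dict:
        # vendor-specific bot invocation would go here
        return {"status": "done-via-bot"}

class ApiExecutor(TaskExecutor):
    """Target state: the same task via a proper back-end API (stubbed here)."""
    def execute(self, payload: dict) -> dict:
        # e.g. requests.post("https://erp.example.com/api/invoices", json=payload)
        return {"status": "done-via-api"}

# The orchestration layer resolves the executor by task name from config,
# so cutting over from bot to API is a one-line change.
EXECUTORS: dict[str, TaskExecutor] = {"create_invoice": RpaBotExecutor()}

def run_task(name: str, payload: dict) -> dict:
    return EXECUTORS[name].execute(payload)
```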

Maximizing success in automation projects: my presentation from CommunityLIVE 2022

Hey, I gave a presentation yesterday, first time in person in almost three years! Here are the slides, and feel free to contact me if you have questions. I can’t figure out how to get the embed short code on mobile, but when I’m back in the office I’ll give it another try and you may see the slideshow embedded below. Update: found the short code!

Live! From Nashville! It’s CommunityLIVE

It’s been a long 2.5 years since I was last at a conference in person, and I’m kicking off the new era with Hyland’s CommunityLIVE in Nashville. I came in early to attend today’s Executive Forum, where we were welcomed by Stephanie Dedmon, CIO of the state of Tennessee. She gave us a brief view of their IT initiatives, one of which is process automation (specifically RPA). I will be giving a presentation tomorrow about some of the best practices around intelligent automation, and one of those is having process automation right on your strategic initiatives list, as Dedmon tells us is the case with Tennessee’s state government.

We had a corporate update from Hyland’s CEO, Bill Priemer. I haven’t been to a Hyland event before — I came to this from my past relationship with Alfresco prior to their acquisition by Hyland — and it’s good to see a more complete briefing including how their recent acquisitions are being handled. He covered some financials and other numbers that I have not included here since I usually just focus on the technology, and I’m not sure if I’m cleared to discuss those outside this venue.

Priemer said that they are “solely focused on content services”, which does not sound all that great for the process side of the former Alfresco product; recall that the absorption of Activiti into Alfresco, which turned it into essentially (just) a content-centric process engine, was controversial, and led to the departure of some of the original Activiti architects and developers. I expect that many Activiti customers/users that were not doing content-centric projects have already migrated to other platforms that came from the same core code base, such as Camunda and Flowable.

Their corporate priorities around product development are focused on developing their next-gen SaaS experience platform, and building a cloud core engine to migrate existing customers. I’m a bit surprised that they’re this far behind the curve on cloud technology, but they have a pretty significant on-premise customer base for their legacy OnBase product. Having acquired Perceptive (2017) and Nuxeo (2021) in addition to Alfresco (2020), they are also still busy digesting those: supporting (and advancing) each of them as separate products, while planning out a product roadmap for convergence. Interestingly, they have committed to their current 80% remote workforce (which used to be 80% in the office), and are likely learning to “eat their own dog food” and therefore coming to a full understanding of what their customers are facing as they move to cloud platforms to support remote work. If nothing else, they could become their own best testbed for cloud.

There was a panel hosted by Ed McQuiston, Chief Commercial Officer (which includes sales, marketing, customer success and a few other things); panels are difficult to capture in a post like this, but there was an interesting bit of the discussion on how automation is becoming paramount: costs are being cut after a couple of years of “drunken sailor” spending just to stay in business, and if you don’t start automating, you’re going to be in trouble. The easy stuff needs to get automated, to leave the hard stuff for the staff remaining after the Great Resignation. In my presentation tomorrow, I’m going to be talking about the “automation imperative” which expands these ideas a bit more.

I stepped out while they did some roundtable sessions, then returned at the end of the afternoon for the product update with Hyland’s Chief Product Officer, John Phelan. He will be covering some of this same territory in the general keynote tomorrow morning, but I’ve grabbed what I could from this session and can fill in some of the blanks tomorrow. He spoke quite a bit about platform extensibility, allowing many other types of capabilities to plug into Hyland’s content services core. Or rather, cores, since this could be any of their (competing) content services engines. I’m looking forward to hearing more about the roadmap for convergence of the engines; with content engines, this is a tough one because full platform convergence requires a migration pathway — at a reasonable cost — for clients. He showed a slide with different use cases for platform extensibility, being able to plug in RPA, or records management, or intelligent capture, or case management. But not mentioned (obvious to my process-centric ears) was process management, a capability that they now have in the Activiti/Process Services that came with the Alfresco acquisition. Even if they call it workflow, a term that most people in process management feel is a bit too simplistic, it still was missing from his slide. Case management and process management are highly related, but not the same thing, unless you’re going to restrict your process management to case management paradigms in order to have process exist only as an adjunct to content. RPA is, of course, task automation, not process management. I’m seeing a bit of a gap in the strategy, or maybe it’s a terminology issue; I’d like to see a more detailed briefing of the whole platform to gain a better understanding.

Phelan was followed by Hyland’s Chief Innovation Officer, Sam Babic, who gave a bit of a review of Gartner’s definition of hyperautomation (a term that still makes me giggle a bit in spite of having written a paper on the topic recently). Every vendor has their spin on hyperautomation, and Babic spoke about some of the practical aspects of how to implement solutions in a hyperautomation fashion: leveraging multiple leading-edge technologies (IoT, event-driven architecture, AI/ML, RPA, chatbots, etc.) to be able to swiftly create new business solutions. He does include workflow as a (I believe) headless orchestration of triggers that can then instantiate a case, so that’s something, and included the phrase BPM/BPA/Workflow on his product capability word salad slide. Obviously, they have a very content-centric view of the product space, whereas I’m a column 2 kind of girl.

I’ll be presenting tomorrow afternoon in the Business Transformation track — in the least desirable time spot at the end of the day, where I’m contractually obligated to tell the attendees that I’m the only thing standing between them and the bar — on the topic of maximizing success in automation projects. I’ve spent 30+ years building automation software (content and process) and building solutions using that same type of software, so have seen a lot of things go wrong, and some things go right. If you’re here at CommunityLIVE, stop by to hear about my best practices, plus a few anti-patterns to watch out for.

Camunda Platform 7.15: now low-code (-ish)

I had a quick briefing with Daniel Meyer, CTO of Camunda, about today’s release. With this new version 7.15, they are rebranding from Camunda BPM to Camunda Platform (although most customers just refer to the product as “Camunda” since they really bundle everything in one package). This follows the lead of other vendors who have distanced themselves from the BPM (business process management) moniker, in part because what the platforms do is more than just process management, and in part because BPM is starting to be considered an outdated term. We’ve seen the analysts struggle with naming the space, or even defining it in the same way, with terms like “digital process automation”, “hyperautomation” and “digitalization” being bandied about.

An interesting pivot for Camunda in this release is their new support for low-code developers — which they distinguish as having a more technical background than citizen developers — after years of primarily serving the needs of professional technical (“pro-code”) developers. The environment for pro-code developers won’t change, but now it will be possible for more collaboration between low-code and pro-code developers within the platform with a number of new features:

  • Create a catalog of reusable workers (integrations) and RPA bots that can be integrated into process models using templates. This allows pro-code developers to create the reusable components, while low-code developers consume those components by adding them to process models for execution (a bare-bones worker sketch follows this list). RPA integration is driving some amount of this need for collaboration, since low-code developers are usually the ones on the front-end of RPA initiatives in terms of determining and training bot functionality, but previously may have had more difficulty integrating those into process orchestrations. Camunda is extending their RPA Bridge to add Automation Anywhere integration to their existing UiPath integration, which gives them coverage of a significant portion of the RPA market. I covered a bit of their RPA Bridge architecture and their overall view on RPA in one of my posts from their October 2020 CamundaCon. I expect that we will soon see Blue Prism integration to round out the main commercial RPA products, and possibly an open source alternative to appeal to their community customers.
  • DMN support, including DRD and decision tables, in their Cawemo collaborative modeler. This is a good way to get the citizen developers and business analysts involved in modeling decisions as well as processes.
  • A form builder. Now, I’m pretty sure I’ve heard Jakob Freund claim that they would never do this, but there it is: a graphical form designer for creating a rudimentary UI without writing code. This is just a preliminary release, only supporting text input fields, so isn’t going to win any UI design awards. However, it’s available in the open source and commercial versions as well as accessible as a library in bpmn.io, and will allow a low-code developer to do end-to-end development: create process and decision models, and create reusable “starter” UIs for attaching to start events and user activities. When this form builder gets a bit more robust in the next version, it may be a decent operational prototyping tool, and possibly even make it into production for some simple situations.
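For the first item on that list, here’s what the pro-code side of the collaboration looks like in its simplest form: a bare-bones external task worker against Camunda 7’s REST API. The fetchAndLock and complete endpoints are documented parts of that API, but the base URL, topic and variable names below are my placeholders.

```python
# Minimal external task worker loop for Camunda Platform 7 via its REST API.
# A pro-code developer packages this kind of worker; a low-code developer
# then just references its topic from a process model.
import time
import requests

BASE = "http://localhost:8080/engine-rest"  # placeholder engine location
WORKER = "demo-worker"

def handle(variables: dict) -> dict:
    # ... the actual integration work goes here ...
    return {"approved": {"value": True, "type": "Boolean"}}

while True:
    tasks = requests.post(f"{BASE}/external-task/fetchAndLock", json={
        "workerId": WORKER,
        "maxTasks": 1,
        "topics": [{"topicName": "invoice-approval", "lockDuration": 10000}],
    }).json()
    for task in tasks:
        result = handle(task.get("variables", {}))
        requests.post(f"{BASE}/external-task/{task['id']}/complete",
                      json={"workerId": WORKER, "variables": result})
    time.sleep(2)  # simple polling interval; long polling is also supported
```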

They’ve also added some nice enhancements to Optimize, their monitoring and analytics tool, and have bundled it into the core commercial product. Optimize was first released mid-2017 and is now used by about half of their customers. Basically, it pumps the operational data exhaust out of the BPM engine database and into an Elasticsearch environment; with the advent of Optimize 3.0 last year, they could also collect tracking events from other (non-Camunda) systems into the same environment, allowing end-to-end processes to be tracked across multiple systems. The new version of Optimize, now part of Camunda Platform 7.15, adds some new visualizations and filtering for problem identification and tracking.

Overall, there are some important things in this release, although it might appear to be just a collection of capabilities that many of the all-in-one low-code platforms have had all along. It’s not really in Camunda’s DNA to become a proprietary all-in-one application development platform like Appian or IBM BPM, or even make low-code a primary target, since they have a robust customer base of technical developers. However, these new capabilities create an important bridge between low-code developers who have a better understanding of the business needs, and pro-code developers with the technical chops to create robust systems. It also provides a base for Camunda customers who want to build their own low-code environment for internal application development: a reasonably common scenario in large companies that just can’t fit their development needs into a proprietary application development platform.

OpenText Enterprise World 2020, Day 1

The last time that I was on a plane was mid-February, when I attended the OpenText analyst summit in Boston. For people even paying attention to the virus that was sweeping through China and spreading to other Asian countries, it seemed like a faraway problem that wasn’t going to impact us. How wrong we were. Eight months later, many businesses have completely changed their products, their markets and their workforce, much of this with the aid of technology that automates processes and supply chains, and enables remote work.

By early April, OpenText had already moved their European regional conference online, and this week, I’m attending the virtual version of their annual OpenText World conference, in a completely different world than in February. Similar to many other vendors that I cover (and have attended virtual conferences for in the past several months), OpenText’s broad portfolio of enterprise automation products has the opportunity to make gains during this time. The conference opened with a keynote from CEO Mark Barrenechea, “Time to Rethink Business”, highlighting that we are undergoing a fundamental technological (and societal) disruption, and small adjustments to how businesses work aren’t going to cut it. Instead of the overused term “new normal”, Barrenechea spoke about “new equilibrium”: how our business models and work methods are achieving a stable state that is fundamentally different than what it was prior to 2020. I’ve presented about a lot of these same issues, but I really like his equilibrium analogy with the idea that the landscape has changed, and our ball has rolled downhill to a new location.

He announced OpenText Cloud Edition (CE) 20.4, which includes five domain-oriented cloud platforms focused on content, business network, experience, security and development. All of these are based on the same basic platform and architecture, allowing them to be updated on a quarterly basis.

  • The Content Cloud provides the single source of truth across the organization (via information federation), enables collaboration, automates processes and provides information governance and security.
  • The Business Network Cloud deals directly with the management and automation of supply chains, which has increased in importance exponentially in these past several months of supply chain disruption. OpenText has used this time to expand the platform in terms of partners, API integrations and other capabilities. Although this is not my usual area of interest, it’s impossible to ignore the role of platforms such as the Business Network Cloud in making end-to-end processes more agile and resilient.
  • The Experience Cloud is their customer communications platform, including omnichannel customer engagement tools and AI-driven insights.
  • The Security and Protection Cloud provides a collection of security-related capabilities, from backup to endpoint protection to digital forensics. This is another product class that has become incredibly important with so many organizations shifting to work from home, since protecting information and transactions is critical regardless of where the worker happens to be working.
  • The Developer Cloud is a new bundling/labelling of their software development (including low-code) tools and APIs, with 32 services across eight groupings including capture, storage, analysis, automation, search, integration, communication and security. The OpenText products that I’ve covered in the past mostly live here: process automation, low-code application development, and case management.

Barrenechea finished with their Voyager program, which appears to be an enthusiastic rebranding of their training programs.

Next up was a prerecorded AppWorks strategy and roadmap with Nic Carter and Nick King from OpenText product management. It was fortunate that this was prerecorded (as much as I feel it decreases the energy of the presentation and doesn’t allow for live Q&A) since the keynote ran overtime, and the AppWorks session could be started when I was ready. Which raises the question of why it was “scheduled” to start at a specific time. I do like the fact that OpenText puts the presentation slides in the broadcast platform with the session, so if I miss something it’s easy to skip back a slide or two on my local copy.

Process Suite (based on the Cordys-heritage product) was rolled into the AppWorks branding starting in 2018, and the platform and UI consolidated with the low-code environment between then and now. The sweet spot for their low-code process-centric applications is around case management, such as service requests, although the process engine is capable of supporting a wide range of application styles and developer skill levels.

They walked through a number of developer and end-user feature enhancements in the 20.4 version, then covered new automation features. This includes enhanced content and Brava viewer integration, but more significantly, their RPA service. They’re not creating/acquiring their own RPA tool, or just focusing on one tool, but have created a service that enables connectors to any RPA product. Their first connector is for UiPath and they have more on the roadmap — very similar rollout to what we saw at CamundaCon and Bizagi Catalyst a few weeks ago. By release 21.2 (mid-2021), they will have an open source RPA connector so that anyone can build a connector to their RPA of choice if it’s not provided directly by OpenText.

There are some AppWorks demos and discussion later, but they’re in the “Demos On Demand” category so I’m not sure if they’re live or “live”.

I checked out the content services keynote with Stephen Ludlow, SVP of product management; there’s a lot of overlap between their content, process, AI and appdev messages, so it’s important to see how they approach it from all directions. His message is that content and process are tightly linked in terms of their business usage (even if on different systems), and business users should be able to see content in the context of business processes. They integrate with and complement a number of mainstream platforms, including Microsoft Office/Teams, SAP, Salesforce and SuccessFactors. They provide digital signature capabilities, allowing an external party to digitally sign a document that is stored in an OpenText content server.

An interesting industry event that was not discussed was the recent acquisition of Alfresco by Hyland. Alfresco bragged about the Documentum customers that they were moving onto Alfresco on AWS, and now OpenText may be trying to reclaim some of that market by offering support services for Alfresco customers and providing an OpenText-branded version of Alfresco Community Edition, unfortunately via a private fork. In the 2019 Forrester Wave for ECM, OpenText takes the lead spot, Microsoft and Hyland are some ways back but still in the leaders category, and Alfresco is right on the border between leaders and strong performers. Clearly, Hyland believes that acquiring Alfresco will allow it to push further up into OpenText’s territory, and OpenText is coming out swinging.

I’m finding it a bit difficult to navigate the agenda: there’s no way to browse the entire agenda by time, so you seem to need to know which product category you’re interested in before you can see what’s coming up in a time-based format. That’s probably best for customers who only have one or two of their products and would just search in those areas, but for someone like me who is interested in a broader swath of topics, I’m sure that I’m missing some things.

That’s it for me for today, although I may try to tune in later for Poppy Crum‘s keynote. I’ll be back tomorrow for Muhi Majzoub’s innovation keynote and a few other sessions.

CamundaCon 2020.2 Day 1

I listened to Camunda CEO Jakob Freund‘s opening keynote from the virtual CamundaCon 2020.2 (the October edition), and he really hit it out of the park. I’ve known Jakob a long time and many of our ideas are aligned, and there was so much in particular in his keynote that resonated with me. He used the phrase “reinvent [your business] or die”, whereas I’ve been using “modernize or perish”, with a focus not just on legacy systems and infrastructure, but also legacy organizational culture. Not to hijack this post with a plug for another company, but I’m doing a keynote at the virtual Bizagi Catalyst next week on aligning intelligent automation with incentives and business outcomes, which looks at issues of legacy organizational culture as well as the technology around automation. Processes are, as he pointed out, the algorithms of an organization: they touch everything and are everywhere (even if you haven’t automated them), and a lot of digital-native companies are successful precisely because they have optimized those algorithms.

Jakob’s advice for achieving reinvention/modernization is to do a gradual transformation rather than a big bang approach that fails more often than it succeeds; he positions Camunda (of course) as the bridge between the worlds of legacy and new technology. In my years of technology consulting on BPM implementations, I have also recommended using a gradual approach by building bridges between new and old technology, then swapping out the legacy bits as you develop or buy replacements. This is where, for example, you can use RPA to create stop-gap task automation with your existing legacy systems, then gradually replace the underlying legacy or at least create APIs to replace the RPA bots.

The second opening keynote was with Marco Einacker and Christoph Anzer of Deutsche Telekom, discussing how they are using process and task automation by combining Camunda for the process layer and RPA at the task layer. They started out with using RPA for automating tasks and processes, ending up with more than 3,000 bots and an estimated €93 million in savings. It was a very decentralized approach, with bots initially created by business areas without IT involvement, but as they scaled up, they started to look for ways to centralize some of the ideas and technology. First was to identify the most important tasks to start with, namely those that were true pain points in the business (Einacker used the phrase “look for the shittiest, most painful process and start there”), not just the easy copy-paste applications. They also looked at how other smart technologies, such as OCR and AI, could be integrated to create completely unattended bots that add significant value.

The decentralized approach resulted in seven different RPA platforms and too much process automation happening in the RPA layer, which increased the amount of technical debt, so they adapted their strategy to consolidate RPA platforms and separate the process layer from the bot layer. In short, they are now using Camunda for process orchestration, and the RPA bots have become tasks that are orchestrated by the process engine. Gradually, they are (or will be) replacing the RPA bots with APIs, which moves the integration from front-end to back-end, making it more robust with less maintenance.

I moved off to the business architecture track for a presentation by Srivatsan Vijayaraghavan of Intuit, where they are using Camunda for three different use cases: their own internal processes, some customer-facing processes for interacting with Intuit, and — most interesting to me — enabling their customers to create their own workflows across different applications. Their QuickBooks customers are primarily small and mid-sized businesses that don’t have the skills to set up their own BPM system (although arguably they could use one of the many low-code process automation platforms to do at least part of this), which opened the opportunity for Intuit to offer a workflow solution based on Camunda but customizable by the individual customer organizations. Invoice approvals was an obvious place to start, since Accounts Payable is a problem area in many companies, then they expanded to other approval types and integration with non-Intuit apps such as e-signature and CRM. Customers can even build their own workflows: a true workflow as a service model, with pre-built templates for common workflows, integration with all Intuit services, and a simplified workflow designer.

Intuit customers don’t interact directly with Camunda services; Camunda is a separately hosted and abstracted service, and they’ve used Kafka messages and external task patterns to create the cut-out layer. They’ve created a wrapper around the modeling tools, so that customers use a simplified workflow designer instead of the BPMN designer to configure the process templates. There is an issue with a proliferation of process definitions as each customer creates their own version of, for example, an invoice approval workflow — he mentioned 70,000 process definitions — and they will likely need to do some sort of automated cleanup as the platform matures. Really interesting use case, and one that could be used by large companies that want their internal customers to be able to create/customize their own workflows.
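I can’t resist speculating on that cleanup problem: one possibility — mine, not anything Intuit described — would be to scope each customer’s configured variant as a tenant-specific deployment, which Camunda 7’s REST API supports directly, so that the same process key exists once per tenant rather than as thousands of unrelated definitions:

```python
# Sketch: deploy a customer's configured workflow as a tenant-scoped
# deployment via Camunda 7's REST API. The deployment/create endpoint and
# its form fields are documented; names and paths here are placeholders.
import requests

BASE = "http://localhost:8080/engine-rest"

def deploy_for_tenant(tenant_id: str, bpmn_path: str) -> dict:
    with open(bpmn_path, "rb") as f:
        return requests.post(
            f"{BASE}/deployment/create",
            data={"deployment-name": f"invoice-approval-{tenant_id}",
                  "tenant-id": tenant_id},
            files={"invoice-approval.bpmn": f},
        ).json()
```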

The next presentation was by Stephen Donovan of Fidelity Investments and James Watson of Doculabs. I worked with Fidelity in 2018-19 to help create the architecture for their digital automation platform (in my other life, I’m a technical architecture/strategy consultant); it appears that they’re not up and running with anything yet, but they have been engaging the business units on thinking about digital transformation and how the features of the new Camunda-based platform can be leveraged when the time comes to migrate applications from their legacy workflow platform. This doesn’t seem to have advanced much since they talked about it at the April CamundaCon, although Donovan had more detailed insights into how they are doing this.

At the April CamundaCon, I watched Patrick Millar’s presentation on using Camunda for blockchain ledger automation, or rather I watched part of it: his internet died partway through and I missed the part about how they are using Camunda, so I’m back to see it now. The RiskStream Collaborative is a not-for-profit consortium collaborating on the use of blockchain in the insurance industry; their parent organization, The Institutes, provides risk management and insurance education and is guided by senior executives from the property and casualty industry. To copy from my original post, RiskStream is creating a distributed network platform, called Canopy, that allows their insurance company members to share data privately and securely, and participate in shared business processes. Whenever you have multiple insurance companies in an insurance process, like a claim for a multi-vehicle accident, having shared business processes — such as first notice of loss and proof of insurance — between the multiple insurers means that claims can be settled quicker and at a much lower cost.

I do a lot of work with insurance companies, as well as with BPM vendors to help them understand insurance operations, and this really resonates: the FNOL (first notice of loss) process for multi-party claims continues to be a problem in almost every company, and using enterprise blockchain to facilitate interactions between the multiple insurers makes a lot of sense. Note that they are not creating or replacing claims systems in any way; rather, they are connecting the multiple insurance companies, who would then integrate Canopy to their internal claims systems such as Guidewire.

Camunda is used in the control framework layer of Canopy to manage the flows within the applications, such as the FNOL application. The control framework is just one slice of the platform: there’s the core distributed ledger layer below that, where the blockchain data is persisted, and an integration layer above it to integrate with insurers’ claims systems as well as the identity and authorization registry.

There was a Gartner keynote, which gave me an opportunity to tidy up the writing and images for the rest of this post, then I tuned back in for Niall Deehan’s session on Camunda Hackdays over on the community tech track, and some of the interesting creations that came out of the recent virtual edition. This drives home the point that Camunda is, at its heart, open source software that relies on a community of developers both within and outside Camunda to extend and enhance the core product. The examples presented here were all done by Camunda employees, although many of them are not part of the development team, but come from areas such as customer-facing consulting. These were pretty quick demos so I won’t go into detail, but here are the projects on Github:

If you’re a Camunda customer (open source or commercial) and you like one of these ideas, head on over to the related github page and star it to show your interest.

There was a closing keynote by Capgemini; like the Gartner keynote, I felt that it wasn’t a great fit for the audience, but those are my only real criticisms of the conference so far.

Jakob Freund came back for a conversation with Mary Thengvall to recap the day. If you want to see the recorded videos of the live sessions, head over to the agenda page and click on Watch Now for any session.

There’s a lot of great stuff on the agenda for tomorrow, including CTO Daniel Meyer talking about their new RPA orchestration capabilities, and I’ll be back for that.

IBM acquires WDG Automation RPA

The announcement that IBM was acquiring WDG Automation for their RPA capabilities was weeks ago, but for some reason the analyst briefing was delayed, then delayed again. Today, however, we had a briefing with Mike Gilfix, VP Cloud Integration and Automation Software, Mike Lim, Acquisition Integration Executive, and Tom Ivory, VP IBM Automation Services, on the what, why and how of this. Interestingly, none of the pre-acquisition WDG executives/founders were included on the call.

IBM is positioning this as part of a “unified platform” for integration, but the reality is likely far from that: companies that grow product capabilities through acquisition, like IBM, usually end up with a mixed bag of lightly-integrated products that may not be better for a given use case than a best-of-breed approach from multiple vendors.

The briefing started with the now-familiar pandemic call to action: customer demand is volatile, industries are being disrupted, and remote employees are struggling to get work done. Their broad solution makes sense, in that it is focused on digitizing and automating work, applying AI where possible, and augmenting the workforce with automation and bots. RPA for task automation was their missing piece: IBM already had BPM, AI and automated decisioning, but needed to address task automation. Now, they are offering their Cloud Pak for Automation, which includes all of these intelligent automation-related components.

Mike Lim walked through their reasons for selecting WDG — a relatively unknown Brazilian company — and it appears that the technology is a good fit for IBM because it’s cloud-native, offers multi-channel AI-powered chatbots integrated with RPA, and has a low-code bot builder with 650+ pre-built commands. There will obviously be some work to integrate this with some of the overlapping Watson capabilities, such as the Watson Assistant that offers AI-powered chatbots. WDG also has some good customer cases, with super-fast ROI. It offers unattended and attended bots, OCR (although it stops short of full-on document capture), and operational dashboards. The combination of AI and RPA has become increasingly important in the market, to the point where some vendors and analysts use “intelligent automation” to mean AI and RPA to the exclusion of other types of automation. I’m not arguing that it’s not important, but more that AI and other forms of intelligence need to be integrated across the automation suite, not just with RPA.

IBM is envisioning their new RPA having use cases both in business operations, as you usually see, and also with a strong focus on IT operations, such as semi-automated real-time event incident management. To get there, they have a roadmap to bring the RPA product into the IBM fold to offer IBM RPA as a service, integrate into the Cloud Pak, and roll it out via their GBS professional services arm. Tom Ivory from GBS gave us a view into their Services Essentials for Automation platform that includes a “hosted RPA” bucket: WDG will initially just be added to that block of available tools, although GBS will continue to offer competitive RPA products as part of the platform too.

It’s a bit unusual for IBM GBS and the software group to play together nicely: my history with IBM tends to show otherwise, and Mike Lim even commented on the (implied: unusual) cooperation and collaboration on this particular initiative.

There’s no doubt that RPA will play a strong role in the frantic reworking of business operations that’s going on now within many large organizations to respond to the pandemic crisis. Personally, I don’t think it’s a super long-term growth play: as more applications offer proper APIs and integration points, the need for RPA (which basically integrates with applications that don’t have integration points) will decrease. However, IBM needs to have it in their toolbox to show completeness, even if GBS ends up using their competitors’ RPA products in projects.

CamundaCon Live 2020 – Day 1: Optimize, RPA, and how 24 Hour Fitness executes 5B process nodes per month

We continued the first day of CamundaCon Live (virtual) 2020 with Felix Mueller, senior product manager, presenting on how to use Camunda Optimize for driving continuous improvement in processes. I attended the Optimize 3.0 release webinar a couple of weeks ago, and saw some of the new things that they’re doing with monitoring and optimization of event-based processes — this allows processes that are not part of Camunda to be included in Optimize. The CamundaCon session started with a broader view of Optimize functionality, showing how it collects information then can be used for root cause analysis of process bottlenecks as well as displaying realtime metrics. They have some good case studies for Optimize, including insurance provider Visana Group.

He then moved to show the event-based process monitoring, and how Optimize can ingest and aggregate information from any external system with a connector, such as the one that they have built for RabbitMQ. His demo showed a customer onboarding process that could be triggered either by an online form that would be a direct Camunda process instantiation, or via a mailed-in form that was scanned into another system that emitted an event that would trigger the process.
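For a sense of what that ingestion looks like from the external system’s side, here’s roughly the shape of a call into Optimize’s event ingestion API. The endpoint and CloudEvents-style payload are as I recall them from the Optimize 3.0 documentation and may vary by version; the token, source and field values are placeholders.

```python
# Push an external system's event into Camunda Optimize's event ingestion
# API so it can participate in an event-based process.
import requests
from datetime import datetime, timezone

OPTIMIZE = "http://localhost:8090"      # placeholder Optimize location
TOKEN = "my-ingestion-token"            # access token configured in Optimize

event = {
    "specversion": "1.0",
    "id": "evt-0001",                   # unique per event
    "source": "mailroom-scanner",       # the emitting system
    "type": "onboarding-form-scanned",  # mapped to a process model element
    "time": datetime.now(timezone.utc).isoformat(),
    "traceid": "customer-42",           # correlates events to one instance
    "group": "onboarding",
    "data": {"channel": "mail"},
}

requests.post(f"{OPTIMIZE}/api/ingestion/event/batch",
              json=[event],
              headers={"Authorization": f"Bearer {TOKEN}"})
```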

It was very obvious that this was a live presentation, because Mueller was scrambling against the clock since the previous session went a bit long, having to speed through his demo and take a couple of shortcuts. Although you might think of this as a logistical “bug”, I maintain that it’s an interactivity “feature” that made the experience much closer to an in-person conference than a set of pre-recorded presentations that were just queued up in sequence.

This was followed by a presentation by Kris Barczynski of Nokia Bell Labs about a really interesting use case: they are using Camunda to guide visiting groups on tours through the Nokia Campus customer experience spaces, and interact with devices including the guests’ wearables, drones and robots. Visitors are welcomed and guided by a robot, and they can interact with voice-controlled drones; Camunda is orchestrating the processes behind the scenes. He talked about some of their design decisions, such as using Camunda JavaScript workers to call external services, and building a custom Android app. Really interesting combination of physical and virtual processes.

Next was a panel discussion on the future of RPA, with Vittorio Dal Bianco of Nokia, Marco Einacker of Deutsche Telekom, Paul Jones of NatWest Group, and Camunda CEO Jakob Freund, moderated by Jason Bloomberg of Intellyx Research. The three customer presenters are involved with the RPA initiatives at their own organizations, and also looking at how to integrate that with their Camunda processes. Panels are always a challenge to live-blog, but here are some of the points discussed (attributed where I remembered):

  • The customer panelists agreed that RPA has allowed people to move to more interesting/valuable work, rather than doing routine tasks such as copying and pasting between application screens. Task automation through RPA reduces resources/costs, decreases cycle time, and also improves quality/compliance.
  • RPA is a “short-term bandaid” driven from outside the IT organization in order to get some immediate efficiency benefits. It’s maintenance-intensive, since any changes to the applications being integrated means that the bots need to be reprogrammed. Deutsche Telekom is moving from RPA front-end integration/automation to the more strategic BPMS/API automation, so sees that RPA has been an important step on the strategic journey but not the endpoint. NatWest recognizes RPA as a key automation tool, but sees it as a short-term tactical tool; they classify RPA as part of their technical debt, and it is not a part of their long-term architecture. Nokia thinks that RPA will remain in niche pockets for applications that will never have a proper API, such as Excel-based applications.
  • Nokia uses Blue Prism for RPA. NatWest uses UiPath RPA, and has a group that is building the integration for having Camunda execute a UiPath task — although I would have thought this would be a relatively simple service call or external task. Deutsche Telekom is using seven different RPA platforms, three of which are commercial, including Another Monday and Kryon; they are just starting to look at the integration between Camunda and RPA, with a plan to have Camunda orchestrate steps and one “microbot” perform an atomic task at each step. As their core systems offer APIs for those tasks, the RPA bots will be replaced with direct API calls. This last approach is definitely aligned with Camunda’s vision of how their BPM can work with RPA bots as well as any other “task performers”.
  • More discussion on the role of RPA in digital transformation: recommendations to go ahead and use it, but consider it as a stop-gap measure to get a quick win before you can get the APIs built out in the systems that are being integrated. It’s considered technical debt because it will be replaced in the future as the APIs of the core systems become available. It’s a painkiller, not a cure.
  • Although some of the companies are using business people to build their own bots, that has a mixed degree of success and other companies do not classify RPA as citizen developer technology. This is pretty much the same as we’re seeing with other low-code environments, where they are often sold as application development platforms for non-professional developers, but the reality is that many applications require a professional developer because of the technical complexity of systems being integrated.
  • Cost and effort of RPA bot maintenance can be significant, in some cases more than back-end integration. Bot fixes may be fairly quick, but are required much more frequently such as when a password changes: bots require babysitting.
  • The customers had a few Camunda product requests, such as better connectors to more of the RPA tools. In general, however, they don’t want Camunda to build/acquire their own RPA offering, but just see it as another example of where you can pick a best-of-breed RPA tool and use it for task automation at individual steps within a Camunda process.
  • Best practices/lessons learned:
    • Separate the process orchestration layer from the bot execution layer from the beginning, with the process orchestration being done by Camunda and the bot task execution being done by the RPA tool.
    • Use process mining first to objectively identify what should be automated (see the sketch after this list); of course, this would also require that you mine the user interaction processes that would be automated with bots, not just the system logs.
    • Have a centralized control center for bot control.
    • Develop bot templates that can be more quickly modified and deployed.
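On the process mining point above, the barrier to at least trying this is low. Here’s a tiny sketch using the open source pm4py library — the log file and its contents are hypothetical, and you’d need to capture desktop/UI interactions into the log, not just system events:

```python
# Discover a process model from an event log, then eyeball it for
# high-frequency, low-variation fragments as automation candidates.
import pm4py

log = pm4py.read_xes("ui_interactions.xes")   # hypothetical UI interaction log
model = pm4py.discover_bpmn_inductive(log)    # inductive miner -> BPMN model
pm4py.view_bpmn(model)
```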

Looking at how the panel worked, there are definitely aspects of online panels that work better than in-person panels, specifically how they respond to audience questions. Some people don’t want to speak up in front of an audience, while others get up and bloviate without actually asking a question. With online-only questions, the moderator can browse through and aggregate them, then select the ones that are best suited to the panel. With video on each of the presenters (except for one who lost his connection and had to dial in), it was still possible to see reactions and have a sense of the live nature of the panel.

The last session of the day was Jimmy Floyd of 24 Hour Fitness on their massive Camunda implementation of five billion (with a “B”) process node executions per month. You can see his presentation from CamundaCon Berlin 2018 as a point of comparison with today’s numbers. Pretty much everything that happens at 24 Hour Fitness is controlled by a Camunda process, from their internal processes to customer-facing activities such as a member swiping their card to gain access to a club. It hasn’t been without hiccups along the way: they had to turn off process history logging to attain this volume of data, and can’t easily drill down into processes that call a lot of other processes, but the use of BPMN and DMN has greatly improved the interactions between product owners and developers, sometimes allowing business people to make a rule change without involving developers.
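As an aside on turning off history: if you run Camunda 7 via the Spring Boot starter, dialing history down (or off) is a one-line configuration change — the property is real, although whether you can live without the audit trail is a business decision, as 24 Hour Fitness found:

```yaml
camunda:
  bpm:
    history-level: none   # disables history logging entirely
```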

He had a lot of technical information on how they built this and their overall architecture. Their use is definitely custom code, but using Camunda with BPMN and DMN gave them a huge step-up versus just writing code. Even logic inside of microservices is implemented with Camunda, not written in code. Their entire architecture is based on Camunda, so it’s not a matter of deciding whether or not to use it for a new application or to integrate in a new external solution. They are taking a look at Zeebe to decide if it’s the right choice for them moving forward, but it’s early days on that: it would be a significant migration for them, they would likely lose functionality (for BPMN elements not yet implemented in Zeebe, among other things), and Zeebe has only just achieved production readiness.

Camunda is changing how they handle history data relative to the transactional data, in part likely due to input from high-throughput customers, and this may allow 24 Hour Fitness to turn history logging back on. They’re starting to work with Optimize via Kafka to gain insights into their processes.

Day 1 finished with a quick wrapup from Jakob Freund; in spite of the fact that it’s probably been a really long day for him, he seemed pretty happy about how well things went today. Tomorrow will cover more on microservices orchestration, and have customer case studies from Cox Automotive, Capital One and Goldman Sachs.

As you probably gather from my posts today, I’m finding the CamundaCon online format to be very engaging. This is due to most of the presentations being performed live (not pre-recorded as is seen with most of the online conferences these days) and the use of Slack as a persistent chat platform, actively monitored by all Camunda participants from the CEO on down. They do need a little bit more slack in the schedule however: from 10am to 3:45pm there was only one 15-minute break scheduled mid-way, and it didn’t happen because the morning sessions ran overtime. If you’re attending tomorrow, be prepared to carry your computer to the kitchen and bathroom with you if you don’t want to miss a minute of the presentations.

As I finish off my day at the virtual CamundaCon, I notice that the videos of presentations from earlier today are already available — including the panel session that only happened an hour ago. Go to the CamundaCon hub, then change the selection from “Upcoming” to “On Demand” above the Type/Day/Track selectors.

Summer BPM reading, with dashes of AI, RPA, low-code and digital transformation

Summer always sees a bit of a slowdown in my billable work, which gives me an opportunity to catch up on reading and research across the topic of BPM and other related fields. I’m often asked what blogs and other websites that I read regularly to keep on top of trends and participate in discussions, and here are some general guidelines for getting through a lot of material in a short time.

First, to effectively surf the tsunami of information, I use two primary tools:

  • An RSS reader (Feedly) with a hand-curated list of related sites. In general, if a site doesn’t have an RSS feed, then I’m probably not reading it regularly. Furthermore, if it doesn’t have a full feed – that is, one that shows the entire text of the article rather than a summary in the feed reader – it drops to a secondary list that I only read occasionally (or never). This lets me browse quickly through articles directly in Feedly and see which has something interesting to read or share without having to open the links directly.
  • Twitter, with a hand-curated list of digital transformation-related Twitter users, both individuals and companies. This is a great way to find new sources of information, which I can then add to Feedly for ongoing consumption. I usually use the Tweetdeck interface to keep an eye on my list plus notifications, but rarely review my full unfiltered Twitter feed. That Twitter list is also included in the content of my Paper.li “Digital Transformation Daily”, and I’ve just restarted tweeting the daily link.

Second, the content needs to be good to stay on my lists. I curate both of these lists manually, constantly adding and culling the contents to improve the quality of my reading material. If your blog posts are mostly promotional rather than informative, I remove them from Feedly; if you tweet too much about politics or your dog, you’ll get bumped off the DX list, although probably not unfollowed.

Third, I like to share interesting things on Twitter, and use Buffer to queue these up during my morning reading so that they’re spread out over the course of the day rather than all in a clump. To save things for a more detailed review later as part of ongoing research, I use Pocket to manually bookmark items, which also syncs to my mobile devices for offline reading, and an IFTTT script to save all links that I tweet into a Google sheet.

You can take a look at what I share frequently through Twitter to get an idea of the sources that I think have value; in general, I directly @mention the source in the tweet to help promote their content. Tweeting a link to an article – and especially inclusion in the auto-curated Paper.li Digital Transformation Daily – is not an endorsement: I’ll add my own opinion in the tweet about what I found interesting in the article.

Time to kick back, enjoy the nice weather, and read a good blog!