CamundaCon Live 2020 – Day 2: Microservices Orchestration, new stuff from Camunda, and legacy BPM migration

Day 2 of CamundaCon Live kicked off with Camunda co-founder Bernd Rücker talking about microservices orchestration and integration using workflow automation. This is a common theme for him, and I’ve seen earlier versions of this presentation, but he always brings something fresh to the discussion. He discussed reactive applications that are responsive, resilient, elastic and message-driven, then covered different styles of event-driven architecture.

He gave a (live) demo of autonomous services communicating using Kafka, and showed the issue with peer-to-peer choreography: there is no sense of the end-to-end orchestration to ensure that all services that should have run did actually run. He created an event-based process in Camunda Optimize that modeled the expected end-to-end process; by connecting that to the Kafka messages, he had a visualization of the workflow he defined that showed what happens when one service isn’t running: effectively, the virtual workflow is stuck at the previous step, since Optimize never receives a message indicating that the (stopped) service has picked up its work.
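To make the choreography style concrete, here is a minimal sketch of one such autonomous service using the Kafka Java client; the topic names, service name and payload handling are my own placeholders, not taken from his demo. Each service just consumes one event and emits the next, so no single component knows the end-to-end flow; if this service is stopped, nothing downstream ever hears about payments.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// A choreographed "payment" service: it reacts to order events and emits its
// own event in turn. No component sees the end-to-end process, which is why
// a stopped service simply leaves upstream events unconsumed.
public class PaymentService {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "payment-service");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
         KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      consumer.subscribe(List.of("order-placed"));
      while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<String, String> record : records) {
          // ... charge the card here, then announce the result as a new event ...
          producer.send(new ProducerRecord<>("payment-received", record.key(), record.value()));
        }
      }
    }
  }
}
```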

One solution is to extract the end-to-end responsibility into its own service: really, this implies some level of orchestration via commands rather than purely reacting to events, even if it’s not a completely tightly-coupled workflow. If you use an engine like Camunda to do that top-level orchestration, then you can move the monitoring of the process within that engine (Cockpit rather than Optimize) although it’s likely that anyone using an event-based architecture is going to be looking at an event monitoring system like Optimize as well. You can see his slides below, and the video will be available on the CamundaCon Live hub probably by the time that I publish this post.

The morning session continued with CTO Daniel Meyer on some of the new product capabilities. Camunda’s goal has moved from just being a BPM engine for Java developers to a much broader orchestration platform that can integrate any technology stack and any endpoints.

He introduced a new distribution called Camunda Run (or Lil’ Camboot, as Niall Deehan calls it), which provides a lightweight package (50MB) that includes the BPMN and DMN workflow and decision engines, Cockpit, Tasklist and the REST API. It can even be run in headless mode, which disables the web apps, if you just want the engines. It’s Open API enabled, CORS enabled, and SSL enabled out of the box. He gave a quick demo of downloading, starting and running Camunda Run: it’s pretty familiar if you’ve spent any time with Camunda, and it starts fast. From the blog post announcement, the target audience for Run is anyone for whom at least one of the following is true (a sketch of calling its REST API follows the list):

  • You need a standalone process engine accessible via REST API
  • You don’t have extensive Java knowledge (or none at all) but still want to use Camunda BPM
  • You don’t want to configure an application server yourself
  • You want to configure everything in one place
  • You just want to Run Camunda BPM
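Since Run is aimed at REST-first usage, here is a rough sketch of what driving it from outside the JVM looks like, using plain Java HTTP against the engine REST API; the process key and variable are placeholders I made up, but the endpoint shape is the standard Camunda one.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Start an instance of an already-deployed process through Camunda Run's
// REST API (default port 8080, REST base path /engine-rest).
public class StartProcess {
  public static void main(String[] args) throws Exception {
    String json = "{\"variables\": {\"amount\": {\"value\": 100, \"type\": \"Integer\"}}}";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8080/engine-rest/process-definition/key/invoice/start"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(json))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body()); // JSON describing the new process instance
  }
}
```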

Meyer also talked about Camunda Optimize, specifically the event-based process monitoring. We saw a bit of that yesterday in Felix Müller’s presentation, and I had a more complete view of the event-based features of Optimize a few weeks ago on the 3.0 release webinar. Basically, you add the event source to Optimize (such as Kafka), and Optimize exposes messages and allows them to be attached to the entry/exit points of elements on a BPMN diagram that represents the event-driven process. They are offering a 30-day free trial for Optimize now if you want to try it out.

Meyer’s third topic was about process automation as a service via Camunda Cloud, which is powered by Zeebe (rather than Camunda BPM). Having cloud-native Zeebe behind the scenes means that it’s highly scalable and fault-tolerant, and uses pub-sub orchestration to let you include endpoints from anywhere. He demonstrated how to spin up a new Zeebe cluster, then deploy a BPMN model that was created in the Zeebe Modeler and start instances of the process using the zbctl command line. These instances were then visible in Camunda Operate (the Zeebe process monitoring tool), and he ran JavaScript workers and published messages to complete tasks in the process and show the instance progressing through the process model. There’s a free trial for Camunda Cloud, and an early access version for $699/month that includes access to larger clusters and technical support.
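His demo used the Zeebe Modeler, zbctl and JavaScript workers; as a rough code equivalent, here is what the same steps look like with the Zeebe Java client of the time. The process id, job type, message name and variables are placeholders of mine, not from his demo.

```java
import io.zeebe.client.ZeebeClient;

// Deploy a model, start an instance, work on its jobs, and correlate a
// message: the same steps as the zbctl demo, via the Java client.
public class ZeebeDemo {
  public static void main(String[] args) throws InterruptedException {
    try (ZeebeClient client = ZeebeClient.newClientBuilder()
        .brokerContactPoint("127.0.0.1:26500")
        .usePlaintext() // local cluster; Camunda Cloud uses OAuth and TLS instead
        .build()) {

      client.newDeployCommand()
          .addResourceFromClasspath("order-process.bpmn")
          .send().join();

      client.newCreateInstanceCommand()
          .bpmnProcessId("order-process")
          .latestVersion()
          .variables("{\"orderId\": \"1234\"}")
          .send().join();

      // A job worker that completes "payment" service tasks
      client.newWorker()
          .jobType("payment")
          .handler((jobClient, job) ->
              jobClient.newCompleteCommand(job.getKey()).send().join())
          .open();

      // Correlate a message to a waiting message catch event
      client.newPublishMessageCommand()
          .messageName("payment-received")
          .correlationKey("1234")
          .send().join();

      Thread.sleep(10_000); // let the worker pick up jobs before shutting down
    }
  }
}
```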

He fielded some questions that came up on the Slack workspace during his talk. Moving from an existing Camunda BPM implementation to Camunda Run is apparently as easy as just redirecting to the new application server. You can’t use Java delegates, however; those have to be switched out for external tasks (a sketch of the external task pattern follows below). There was a question about Camunda BPM versus Zeebe, which I think is a question that a lot of Camunda customers have: although most are likely familiar with the technical and functional differences, there is an open question of whether Camunda will continue to support two workflow engines in the future, and whether they are going to shift focus more towards Zeebe use cases.
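For reference, the external task pattern looks something like this with Camunda’s Java external task client; the topic name and business logic are placeholders, but the subscribe/handle/complete shape is the standard client API.

```java
import org.camunda.bpm.client.ExternalTaskClient;

// A worker process that replaces an in-engine Java delegate: it long-polls
// the engine's REST API for external tasks on a topic and completes them.
public class ChargeCardWorker {
  public static void main(String[] args) {
    ExternalTaskClient client = ExternalTaskClient.create()
        .baseUrl("http://localhost:8080/engine-rest")
        .asyncResponseTimeout(10_000) // long polling timeout
        .build();

    client.subscribe("charge-card")
        .lockDuration(20_000) // how long a fetched task stays locked to this worker
        .handler((externalTask, externalTaskService) -> {
          String orderId = externalTask.getVariable("orderId");
          // ... the logic that used to live inside the Java delegate ...
          externalTaskService.complete(externalTask);
        })
        .open();
  }
}
```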

The morning finished by breaking out into two tracks; I stayed with the customer presentations rather than the technical breakout to hear some of the case studies. The one that I was most interested in was Fareed Saeed, head of Product and Tools for Advanced Process Solutions at Fidelity Investments, talking about migrating their monolithic legacy BPM platform to Camunda. I was particularly interested because I did some early technical architecture consulting with them on their digital process automation platform over a year ago, although I’m not involved at this time. For those of you who know me mostly through this blog and as an independent industry analyst, you may not be aware that the other half of my business is consulting to large enterprises, mostly in financial services and insurance, on technical architecture and strategy, or anything else that helps make their process-centric implementation projects a success.

James Watson of Doculabs, who advised Fidelity on migration strategies, joined him for the discussion. Saeed talked about their current home-built workflow system, which runs thousands of different processes for most of their back-office operations, and the need to move away from a monolithic architecture and fragile, non-agile systems to a more flexible platform. This talk was not about the architecture or platform, but about migration planning and execution: a key subject for any large enterprise moving off a legacy platform, but one that is often not fully considered during a new digital automation platform implementation.

There are a few different strategies for migrating process-based applications, and it’s not the same story for each process. Watson shared his thoughts on this (see the slide at right), but this is my take on it:

  • High-volume processes, which usually represent a smaller number of process models but most of the transaction volume, are typically rewritten from scratch while incorporating some degree of re-engineering and process improvement along the way. These are the core business processes that need to be done right, and that will benefit most from the more agile and scalable new platform.
  • Lower-volume processes can be reviewed to see if they’re still required, possibly combined into similar processes, and then given a straightforward “lift and shift” rewrite that just duplicates the functionality as-is. In short, these aren’t worth the time to re-engineer unless there are obvious wins, since the volume is relatively low. These are also candidates for low-code business-led development if that’s available on the automation platform, rather than the professional development teams required for the high-volume transactional processes.
  • Very low volume processes can be retired, especially if their functionality can be rolled into processes in one of the first two categories.

Although they are looking at a “factory model” for some level of automation around the migration, Saeed believes that this is an opportunity to re-engineer the processes rather than just rewriting the same (broken) process on a new platform. They want to have smaller, distributed groups for developing and delivering new applications, which means that they need to have the right governance and standards in place to support a distributed model. He sees the need for early pilots and successes to allow everyone to see how this can work, and learn how to make it successful. A strong diverse team of business leaders is also a plus, since there will be some degree of pain in the business units as the migration happens.

That’s it for the morning of Day 2; they must have read my comments from yesterday, because we actually finished on time and got our 15-minute lunch break. 🙂 I’ll be back for the afternoon to finish off CamundaCon Live 2020.





CamundaCon Live 2020 – Day 1: Jakob Freund keynote and customer presentations

Every conference organizer has had to deal with either cancelling their event or moving it to some type of online version as most of us work from home during the COVID-19 pandemic. Some of these have been pretty lacklustre, using only pre-recorded sessions and no live chat/Q&A, but I expected Camunda to do this in a more “live” manner that, while not completely replacing an in-person event, has a similar feel to it. They did not disappoint: although a few of the CamundaCon presentations were pre-recorded, most were done live, and speakers were available for live Q&A. They also hosted a Slack workspace for live chat, which is much better than the Q&A/chat features on the webinar broadcast platform: it’s fundamentally more feature-rich, and it also allows the conversations to continue after a particular presentation completes.

The conference was very capably hosted by Director of Developer Relations Mary Thengvall, with all presentations done from the speakers’ individual locations, starting with CEO Jakob Freund’s keynote. He covered a bit of Camunda’s history and direction, and discussed their main focus of providing end-to-end process orchestration using the example of Camunda together with RPA, then gradually migrating the RPA bots (widely used as a stop-gap process automation measure) to more robust API integrations. He also shared some news on new and timely product offerings, including a starter package for work-from-home human workflow, and an early adopter package for Camunda Cloud. I’ve shared a few of his slides below, but you can also go and see the recording: they are getting the videos and slides up within about an hour after each presentation, directly on the conference hub.

Next up was Simon Letort, Chief Digital Officer at Société Générale, on how they implemented their corporate investment banking’s core process automation platform using Camunda. They use Camunda as the core of their managed workflow platform, with 500+ processes deployed throughout their operations worldwide. They also use bpmn.io and form.io as their built-in process and forms modelers. Letort responded to an audience question about why they didn’t use another large BPMS product that was already in use: they wanted a best-of-breed solution rather than a proprietary walled garden, and also wanted to leverage open source tools as part of that so that they weren’t building everything from scratch. They transitioned from some of the proprietary tools by first replacing the underlying engine with Camunda, then swapping other components, such as bringing in form.io when a more flexible forms UI was required.

Interestingly, about half of their workflows are created by 30 expert modelers within centers of expertise, and half by 1200 “amateur” modelers, or citizen developers. This really points out the potential for companies to mix together the experts (focused on core processes) and amateurs (focused on tactical or administrative processes) using the same engine, although they likely use quite different tools for the full development cycle. The SG Workflow “product” offers three main features to support these different modeler/developer types: the (Camunda) process engine, a workflow aggregator for grouping tasks and cases from multiple systems, and UI web components and apps. Their platform also auto-generates process documentation. The core product is created and maintained by a team of about 10, distributed between France and Canada.

He shared some good information on their architecture and roadmap: I did a few presentations last year (one of them at CamundaCon in Berlin) and wrote a paper on building your own process-centric platform using a BPMS and an assembly of other tools, inspired in part by companies like Société Générale that have done this to create a much more flexible application development environment for their large enterprises.

We moved from the main stage to the track sessions, and I sat in on a presentation by Jeremy Warren of Keller Williams Realty (a Camunda customer that works with integrator BP3) about their “SmartPlans” dynamic processes — these aren’t actually dynamic at runtime, but use a flower process model that loops back to allow any task to lead to any other task — which allow real estate agents to create their own plans and tasks.

This is a great example of automating some of the processes that real estate agents use to drive new business, such as contacting prospects on a regular schedule, which would normally be done (or not done) manually. Agents can decide what tasks to do in what order; the branching logic in the model then executes the plan as specified. He also shared some of their experiences in rolling out and debugging applications on this scale.

The second track session featured Derek Hunter and Uzma Khan of Ontario Teachers’ Pension Plan (an occasional client of mine over a number of years; I introduced them to the then-startup Camunda back in 2013). They have a number of case management-style processes to handle requests from members (teachers) regarding their pensions: 144 BPMN templates, executing 70,000 process instances per year, with up to 20,000 active instances at any time since these are generally long-running workflows. Some of the extremely long-running processes are actually terminated after a specific stage, then a scheduler restarts a corresponding instance when new work needs to be done. Other processes may be suspended in the workflow engine, making them invisible to a user’s worklist until work needs to be done.

Camunda is really just an engine buried within the OTPP workflow system, completely hidden from calling applications by a workflow intermediary. This was essential during their migration off other platforms: at one time, they had three different workflow engines running simultaneously, and could migrate everything to Camunda without having to retool the higher-level applications. In particular, end users are never aware of the specific workflow engine, but work within applications that integrate business data from multiple systems.

They take advantage of in-flight instance migration due to the long-running nature of their processes, a capability that Camunda offers and that is missing from many other BPMS products. Because of the large number of process templates and the complex architecture with many other systems and components, they have implemented automated testing practices, including modeling user interactions through their workflow interface service (which sits above the workflow intermediary and the Camunda engine) and work-arounds for emulating external task processing in their core services.
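For those who haven’t used it, Camunda’s migration API lets you map activities between two deployed versions of a process and move the running instances across in a batch. A minimal sketch, with made-up process definition ids:

```java
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.migration.MigrationPlan;

// Move all running instances of version 1 of a process onto version 2.
public class MigrateInstances {
  public static void migrate(RuntimeService runtimeService) {
    MigrationPlan plan = runtimeService
        .createMigrationPlan("pensionRequest:1:abc", "pensionRequest:2:def")
        .mapEqualActivities()   // map activities whose ids match in both versions
        .updateEventTriggers()  // re-subscribe timers and messages where needed
        .build();

    runtimeService.newMigration(plan)
        .processInstanceQuery(runtimeService.createProcessInstanceQuery()
            .processDefinitionId("pensionRequest:1:abc"))
        .executeAsync();        // run as a batch for large instance counts
  }
}
```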

They’ve developed a lot of best practices for automated testing, and built tools such as a BPMN navigation tool to use during unit testing. Another of their colleagues, Zain Esmail, will be presenting more about this on the technical track tomorrow. They have also developed tools for administrative monitoring and reporting on external tasks, to allow these to be integrated with the internal Camunda workflow metrics in Prometheus.
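I don’t know the details of their in-house tooling, but the general flavor of this kind of automated process testing, using the off-the-shelf camunda-bpm-assert library rather than their tools, looks something like this (process key and activity ids invented for illustration):

```java
import org.camunda.bpm.engine.runtime.ProcessInstance;
import org.camunda.bpm.engine.test.Deployment;
import org.camunda.bpm.engine.test.ProcessEngineRule;
import org.junit.Rule;
import org.junit.Test;

import static org.camunda.bpm.engine.test.assertions.ProcessEngineTests.*;

// Drive a deployed BPMN model through its happy path and assert on its state.
public class PensionRequestTest {

  @Rule
  public ProcessEngineRule engineRule = new ProcessEngineRule();

  @Test
  @Deployment(resources = "pension-request.bpmn")
  public void happyPathRunsToCompletion() {
    ProcessInstance instance =
        runtimeService().startProcessInstanceByKey("pension-request");

    assertThat(instance).isWaitingAt("UserTask_ReviewRequest");
    complete(task()); // simulate the user completing the review task
    assertThat(instance).isEnded();
  }
}
```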

We’re taking a short break between the morning and afternoon sessions, so I’ll close this out now and be back in another post as things progress, either this afternoon or tomorrow.

Free COVID-19 apps from @Trisotech @Appian and @Pega

Yesterday, I passed on a link to The Master Channel’s free e-learning courses that you can use to start skilling up if you’re on the bench right now due to COVID-19. I’m also aware of a few companies in our industry who are offering free apps — some just to customers, some to everyone — that can help to fight COVID-19 in different ways.

The ability to build apps quickly is a cornerstone in our industry of model-driven development and low-code, and it’s encouraging to see some good offerings on the table already in response to our current situation.

Appian was first out of the blocks with a COVID-19 Response Management application for collecting and managing employee health status, travel history and more in a HIPAA-compliant cloud. You can read about it on their blog, and sign up for it online. Their blog post says that it’s free to any enterprise or government agency, although the signup page says that it’s free to organizations with over 1,000 employees — not sure which is accurate, since the latter seems to exclude non-customers under 1,000 employees. It’s free only for six months at this point.

Pegasystems followed closely behind with a COVID-19 Employee Safety and Business Continuity Tracker, which seems to have similar functionality to the Appian application. It’s an accelerator, so you download it and configure it for your own needs, a familiar process if you’re an existing Pega customer — which you will have to be, because it’s only available to Pega customers. The page linked above has a link to get the app from the Pega Marketplace, where it will be free through December 31, 2020.

Trisotech is going in a different direction by offering several free online COVID-19 assessment tools based on clinical guidelines: some for the general public, and some to be used by healthcare professionals.

As a founding member of OMG’s BPM+ Health community, Trisotech has been involved in developing shareable clinical pathways for other medical conditions (using visual models in BPMN, CMMN and/or DMN), and I imagine that these new tools might be the first bits of new shareable clinical pathways targeted at COVID-19, possibly packaged as consumable microservices. You can click on the tools and try them out without any type of registration or preparation: they ask a series of questions and provide an assessment based on the underlying business rules, and you can also upload files containing data and download the results.

My personal view is that making these apps available to non-customers is sure to be a benefit, since they will get a chance to work with your company’s platform and you’ll gain some goodwill all around.

Free online digital transformation courses from @TheMasterChnnl

E-learning platform The Master Channel (which inexplicably has only 8 Twitter followers after I followed them, so get over there and connect) is offering free courses, exams, certificates and downloads to anyone affected by COVID-19. That is pretty much everyone on the planet by now. You can find out more details at the link above and in a recent LinkedIn post by Jan Moons, and he writes in more detail about e-learning in the time of the current pandemic in another post.

There’s a very real possibility that a lot of people will be “on the bench” in the near future: either their work requires travel, or their company has to make tough decisions about staffing. This is a great time to consider skilling up, and The Master Channel has courses on process and decision modeling, business analysis, analytics and more. I have never taken one of their courses so can’t vouch for the quality, and I am not being compensated in any way for writing this post, but it’s probably worth checking out what they have to offer.

Their current offer is only until April 5th, although it’s clear to most people that our period of confinement is going to last much longer than that. Get them while you can.

If you know of other e-learning companies making similar offers, please add them in the comments of this post (including a link if you have one). I know of several universities that offer free online courses for related topics although they tend to be longer and much more detailed — I had to dedicate four weeks and relearn a lot of forgotten graph theory to get through the Eindhoven University of Technology’s course in process mining, which is more than a lot of people have time (or patience) for.

Focus on Insurance Processes: Product Innovation While Managing Risk and Costs – my upcoming webinar with @Signavio

I know, I just announced a banking webinar with Signavio on February 25; this is another one with an insurance focus on March 10 at 1pm ET. From the webinar description:

With customer churn rates approaching 25% in some insurance sectors, insurers are attempting to increase customer retention by offering innovative products that better address today’s market. The ability to create and support innovative products has become a top-level goal for many insurance company executives, and requires agile and automated end-to-end processes for a personalized customer journey.

Similar to the banking webinar, the focus is on more management-level concerns, and I’ll look at some use cases around insurance product innovation and claims.

Head on over to the landing page to sign up for the webinar. If you’re unable to attend, you’ll receive a link to the replay.

Focus on Banking Processes: Improve Revenue, Costs and Compliance – my upcoming webinar with @Signavio

I’ll be presenting on two webinars sponsored by Signavio in the upcoming weeks, starting with one on banking processes on February 25 at 1pm ET. In this first webinar, I’ll be taking a look not just at the operational improvements, but at the (executive) management-level concerns of improving revenue, controlling costs and maintaining compliance. From the webinar description:

Today’s retail banks face more challenges than ever before: in addition to competing with each other, they are competing with fintech startups that provide alternatives to traditional banking products and methods. The concerns in the executive suite continue to focus on revenue, costs and compliance, but those top-level goals are more than just numbers. Revenue is tied closely to customer satisfaction and wallet share, with today’s customers expecting personalized banking products and modern omnichannel experiences.

You can sign up for the webinar here. This will be a concise 35 minutes plus Q&A, and I’ll include some use case examples from client onboarding and KYC in retail banking.

How to (not) become a digital enterprise – @jakobfreund CamundaCon keynote

I thought that I was done with my CamundaCon coverage, but noticed that Jakob Freund is blogging more details about what he covered in his keynote. I spent most of his keynote behind the curtain waiting for my turn to speak, but was able to see it again when they posted the video of his presentation.

He’s doing a five-part series on the themes that he covered, based in part on their experiences with their clients over the past years, with the first two available here:

  • Part 1: Intro to the four key elements of becoming a digital enterprise
  • Part 2: The first key element, customer-focused innovation

If he keeps to his posting schedule, the next one should be up tomorrow.

bpmNEXT 2019 keynote: @JimSinur on technology combinations that digitally deliver

Our second keynote on the first day of bpmNEXT 2019 is with long-time presenter Jim Sinur, looking at technology combinations that digitally deliver. Unlike his usual focus on future directions, he’s driving down into what technologies work for companies that are undergoing digital transformation. This is a great lead-in to what I’ll be talking about tomorrow morning, and I fully expect to be fine-tuning my presentation before then to incorporate ideas from Jim’s presentation as well as Nathaniel Palmer’s presentation that preceded it.

Digital business platforms – something bigger than a BPMS – provide the real pathway to digital transformation, combining a variety of technologies. The traditional BPMS products are strong in work/process management, but they also need proactive intelligence, integration, automation, IoT enablement and business functionality. He looks at technical streams and their benefits, ranging from computational technologies to consumer delivery channels. He had a draft version of a matrix that he’s working on that shows attributes for these different technologies, from the skill level required to get started with a technology to the likelihood that the vendors in one category will partner successfully with vendors in another. This leads to a list of the top productive pairs and triplets that we’re seeing in the market today: BPM and AI, for example, for processes with smart resources and actions; or architecture, low code and RPA for incremental transformation of legacy systems.

He finished up with how we will be leveraging these trends for marketplace collaboration between vendor products, encouraging the vendors in the room (mostly everybody) to collaborate along the lines of his top pairs and triplets. In my opinion, this won’t necessarily mean the vendors deciding to partner to offer joint solutions, but larger enterprises deciding to roll their own platforms using a combination of best-of-breed technologies that they select themselves: the vendors will need to make sure that their products can be sliced, diced and re-integrated in the way that the customers want.

Slide decks and videos of all presentations will be online within a day or two; I’ll come back and update all of the posts with links then.

2019 @Alfresco Analyst Day: vision for the future with @JohnNewton

We wrapped up the 2019 Analyst Day with founder John Newton talking about Alfresco’s vision for the future.

Most digital transformation efforts today are focused on external experiences, that is, how a company interacts with its customers. However, there’s more to it than that: the external experience has to interact with employee experiences and operational systems; this linkage is what Newton calls digital operations. Looking at the ubiquitous onboarding use case, digital transformation is not just about the nice app that the customer sees to upload their documents: it’s also about the straight-through processing that manages what happens after the customer does that upload, or requests a service. He points out that it’s all about the process, and that content follows the process. This, obviously, is music to my ears.

Customers need to think about their digital business platform, which is not the same as any vendor’s digital business platform: it’s more than that, and it may be made up of more than one vendor’s platform. It needs to handle the digital outside (customer-facing) as well as the digital inside (employee-facing), and the end-to-end processes and content repositories that link them. There are a number of disruptive technologies that are driving digital operations — cloud, microservices, edge computing, blockchain — and there will always be a new one to add to this list.

That took us to their strategic themes:

  1. Process-first digital operations, including process, content, search, governance and insight capabilities
  2. Global-scale, multi-cloud digital operations, which removes the enterprise infrastructure concerns such as scalability and global replication
  3. Artificial intelligence powering digital operations, with the modern range of AI services now widely available from the internet giants being applied to content and process
  4. Empowering business users with targeted solutions, and improved user experience
  5. Empowering builders to accelerate solutions, with development and deployment tools
  6. Differentiate open source and enterprise (note that this is the first mention of open source all day), with add-on capabilities to the open source core services and engines

Always an insightful speaker, and I’m particularly interested in how the layers above the API “surface” (such as the Alfresco Digital Framework and Digital Workspace built on the ADF) are adopted in practice versus direct API usage.

That’s it for the analyst day; I’ll be back tomorrow for the regular user conference.

TechnicityTO 2018: CIO @RobMeikle keynote

Rob Meikle, CIO at the city of Toronto, gave a fast-paced and inspiring keynote to close out the morning at Technicity. I can’t do justice to his talk here (hopefully there will be a video, because he’s a great speaker), but a few points did resonate with me.

  • There’s a correlation between digital access and socioeconomic level, and we need to use technology to drive digital inclusion.
  • Interactions between government and constituents need to be more digital and more responsive.
  • The most inclusive cities are the most successful.
  • Focus on meaningful and measurable outcomes to make the city prosperous.
  • IT organization is being reworked to support a digital city model.
  • Policies need to be transformed faster to keep up with data usages: innovation is in policies, not just technology.
  • Increasing digital literacy is a mandate for the city in order to benefit residents.
  • The city creates a lot of opportunities, but also needs to focus on outcomes to benefit all residents — such as the one in four children in the city who live in poverty.

Good emphasis on how public sector technology should focus on social good as well as making government more efficient.

If I see a link to the video published, I’ll come back and update this post.

Update: here’s the video!