It’s been a quick two days at CamundaCon 2022 in Berlin, and as always I’ve enjoyed my time here. The second day finished with a quick fireside chat with Camunda co-founders Jakob Freund and Bernd Ruecker, who wrapped up some of the conference themes about process orchestration. I’ll post a link to the videos when they’re all available; not sure if Camunda is going to publish them on their YouTube channel or put them behind a registration page.
I mentioned previously what a great example of a hybrid conference this has been, with both speakers and attendees either on-site or remote — my own panel included three of us on the stage and one person remotely, and it worked seamlessly. One part of this that I liked is that in the large break lounge area, there were screens set up with the video feed from each of the four stages, and wireless headsets that you could tune to any of the four channels. This let you be “remote” even when on site, which was necessary in some of the smaller rooms where it was standing room only. Or if you wanted to have a coffee while you were watching.
Thanks to Camunda for inviting me, and the exciting news is that next September’s CamundaCon will be in New York: a much shorter trip for me, as well as for many of Camunda’s North American customers and partners.
Michael Goldverg from BNY Mellon presented on their journey with automating processes within the bank across thousands of people in multiple business departments. They needed to deal with interdependencies between departments, variations due to account/customer types, SLAs at the departmental and individual level, and thousands of daily process instances.
They use the approach of a single base model with thousands of variations – the “super model” – where the base model appears to include smaller ad hoc models (mostly snippets surrounding a single task that were initially all manual operations) that are assembled dynamically for any specific type of process. Sort of an accidental case management model at first glance, although I’d love to get a closer look at their model. There was a question about the number of elements in their model, which Michael estimated as “three dozen or so” tasks and a similar number of gateways, but he can’t share the actual model for confidentiality reasons.
They have a deployment architecture that allows for multiple clusters accessing a single operational database, where each cluster could have a unique version of the process model. Applications could then be configured to select the server cluster – and therefore the model version – at runtime, allowing for multiple models to be tested in a live environment. There’s also an automated process instance migration service that moves the live process instances if the old and new process models are not compatible. Their model changes constantly, and they update the production model at least once per week.
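The migration idea can be sketched in a few lines. This is a hypothetical illustration of the underlying logic, not Camunda's actual migration API or BNY Mellon's service: instances whose current activity still exists in the new model version can be moved automatically, while anything else needs an explicit mapping (or stays on the old cluster).

```python
# Hypothetical sketch: split live process instances into those that can be
# auto-migrated to a new model version and those that need manual mapping.
# All names here are illustrative, not Camunda's API.

def plan_migration(instances, new_model, explicit_map=None):
    """instances: list of (instance_id, current_activity_id) tuples.
    new_model: set of activity ids present in the new model version.
    explicit_map: optional {old_activity_id: new_activity_id} overrides
    for places where the model shape changed."""
    explicit_map = explicit_map or {}
    auto, manual = [], []
    for instance_id, activity in instances:
        if activity in explicit_map:
            auto.append((instance_id, explicit_map[activity]))
        elif activity in new_model:
            auto.append((instance_id, activity))   # same activity still exists
        else:
            manual.append((instance_id, activity)) # incompatible change here
    return auto, manual
```

With an explicit mapping supplied for every changed activity, the `manual` list comes back empty and the whole population can be moved in one pass, which is presumably what lets them update production weekly.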
They’ve had to deal with optimistic locking exceptions (fairly common when you have a lot of parallel gateways and multiple instances of the engine) by introducing their own external locking mechanism, and by offloading some of this to the Camunda JobExecutor using asynchronous continuations, although that can cause a hit on performance. The hope is that this will be resolved when they move to the V8 engine – V8 doesn’t rely on a single relational database and is also highly distributed by design.
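For readers unfamiliar with the problem: an optimistic lock conflict means another transaction updated the same row first, and the usual generic remedy is to retry with backoff. A minimal sketch of that pattern, assuming a hypothetical `OptimisticLockError` from the persistence layer (this is the general technique, not BNY Mellon's actual locking mechanism):

```python
# Generic retry-on-conflict pattern for optimistic locking failures.
# OptimisticLockError is a stand-in for whatever exception the
# persistence layer raises on a concurrent-update conflict.
import random
import time

class OptimisticLockError(Exception):
    """Raised when another transaction updated the same row first."""

def with_retry(operation, max_attempts=5, base_delay=0.02):
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except OptimisticLockError:
            if attempt == max_attempts:
                raise
            # jittered exponential backoff so colliding workers spread out
            time.sleep(base_delay * (2 ** attempt) * random.random())
```

The jitter matters at their scale: without it, the workers that collided once tend to retry in lockstep and collide again.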
They run 50-100k transactions per day, and have hundreds of millions of tasks in the history database. They manage this with aggressive cleaning of the history database – currently set to 60 days – by archiving the task history as PDFs in their content management system where it’s discoverable. They are also very careful about the types of queries that they allow directly on the Camunda database, since a single poorly-constructed search can bring the database to its knees: this is why Camunda, like other vendors, discourages direct querying of their database.
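The archive-then-purge pattern they describe is worth making concrete. A minimal sketch, with invented names (`purge_history`, the `archive` callable) rather than Camunda's actual history cleanup API: anything past the retention window is exported to the content store before it is deleted, so it remains discoverable without weighing down the operational database.

```python
# Hypothetical archive-then-purge sketch for history data with a 60-day
# retention window. The archive callable stands in for "render the task
# record to PDF and store it in the CMS".
from datetime import datetime, timedelta

RETENTION = timedelta(days=60)

def purge_history(tasks, archive, now=None):
    """tasks: list of dicts with 'id' and 'completed' (datetime).
    Returns the tasks that remain in the history database."""
    now = now or datetime.utcnow()
    kept = []
    for task in tasks:
        if now - task["completed"] > RETENTION:
            archive(task)          # export before deleting
        else:
            kept.append(task)
    return kept
```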
There are a lot of trade-offs to be understood when it comes to performance optimization at this scale. Also some good points about starting your deployment with a more complex configuration, e.g., two servers even if one is adequate, so that you’re not working out the details of how to run the more complex configuration when you’re also trying to scale up quickly. Lots of details in Michael’s presentation that I’m not able to capture here; definitely check out the recorded video later if you have large deployment requirements.
My little foldable keyboard isn’t playing nice, so I’m typing this directly on my iPad which is…not ideal. However, I will do my best and debug the keyboard later.
Day 2 of CamundaCon 2022 here in Berlin started off with a keynote from Bernd Ruecker, Camunda co-founder and chief technologist, and Daniel Meyer, CTO. Version 8.1 is coming up, and with it some new connectors as well as other core enhancements. Bernd started out with a reinforcement of some of Jakob Freund’s messages yesterday: the distinction between task (depth) and process (breadth) automation, and how process orchestration is characterized by endpoint diversity and process complexity. These are important points in understanding the scope of process orchestration, but also for companies like Camunda to distinguish themselves in an increasingly diverse and crowded “process automation” market.
Once Bernd had walked us through what an initial process orchestration could look like (for a bank account opening example), Daniel took over to talk about moving from an initial project to a transformed, process-centric enterprise. Some of this requires tools that allow less technical developers to get involved, which means having more connectors available for these developers to create apps by assembling and configuring connectors, while more technical developers may be creating new connectors for what’s not offered out of the box by Camunda. Bernd, who loves his live demos, showed us how to create a new connector quickly in Java, then expose it graphically in the modeler using a connector template – this makes it appear as an activity type directly in the Camunda modeler. Once they are in the modeler, connectors can be used in any application, so that (for example) a connector to your bespoke mainframe monolith can be created and added to the modeler once, then used in a variety of applications.
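The division of labour here — pro-code developers register a named connector once, model-level users invoke it by type with configured inputs — is easy to sketch. This is a toy illustration of the pattern in Python (Bernd's demo was in Java against Camunda's connector SDK; all names below are invented):

```python
# Toy sketch of the connector pattern: a registry mapping a connector
# type to a handler function. A pro-code developer registers the handler
# once; the engine invokes it when a process reaches a task of that type.
CONNECTORS = {}

def connector(type_name):
    """Decorator that registers a handler under a connector type name."""
    def register(fn):
        CONNECTORS[type_name] = fn
        return fn
    return register

@connector("mainframe:lookup")
def mainframe_lookup(inputs):
    # stand-in for a call to a bespoke backend system
    return {"balance": 100 if inputs["account"] == "A-1" else 0}

def execute_task(type_name, inputs):
    """What the engine would do at a connector task: look up the handler
    by type and pass it the configured input variables."""
    return CONNECTORS[type_name](inputs)
```

The point of the template layer on top of this is that the low-code user never sees the handler code, only a form for the inputs.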
The concept of connectors as a way for less technical developers to use a BPMN model as an application development framework isn’t new: many other BPMS vendors have been doing this for a long time. Camunda is obviously feeling the pressure to move from a purely developer-focused platform and address some level of low-code needs, and connectors are one of the main ways that they are doing this. The ease in creating new connectors is pretty cool – many products let you use their out of the box connectors but don’t make it that easy to make new ones. Camunda is positioning this capability (creating new connectors quickly) as core to automating the enterprise.
We heard about more of what they’ve been releasing this year, including the web modeler that allows new developers and business analysts to be onboarded quickly without having to install anything locally. The modeler includes BPMN validation so that correct process models are created and errors avoided before deploying to the server. They are also using FEEL (friendly enough expression language) – borrowed from the DMN specification – for scripting within tasks. This use of FEEL is also being done by other standards-focused vendors, such as Trisotech. We also saw some of the things that they’re working on, such as interactive debugging to step through processes, and an improved forms UI builder. Again, not completely new ideas in the BPM space, but nice productivity enhancements to their developer experience. Based on what they’ve seen within their own company, they’re integrating Slack and Microsoft Teams for human task orchestration to avoid the requirement for users to go to a separate app for their process task list.
Bernd addressed the issue of Camunda supporting low code, when they have been staunchly pro code only for most of their history. Fundamentally, the market (that is, their customers and prospective customers) needs this capability, and it’s clear that you have to offer at least something low code (ish) to play in the process automation space these days. This is definitely a shift for them, and they are doing it fairly gracefully, although they are a bit behind the curve in much of the functionality because they stuck to their roots for so long. In their favour, they’re still a small and nimble company and can roll out this type of functionality in their product fairly quickly. They are mostly just dipping into the pro code end of the low code space, and it will be interesting to see how far they go in upcoming releases. Creating more low code tooling and more connectors obviously creates more long-term technical debt for Camunda: if they decide this isn’t the way forward after a while, or they change some of the underlying architecture, customers could end up with legacy versions of connectors and low code tooling that need to be updated. Definitely worth checking out for existing Camunda customers who want to accelerate adoption within their organizations.
By the way, I’ve had so much great feedback on our panel yesterday: happy to hear that we had some nuggets of wisdom in there. So many good conversations last night at the BBQ and continuing into today between sessions. I’ll post a link to the recorded session when it’s published.
I realize that I’m completely remiss for not posting about last week’s DecisionCAMP, but in my defense, I was co-hosting it and acting as “master of ceremonies”, so was a bit busy. This was the third year for a virtual DecisionCAMP, with a plan to be back in person next year, in Oslo. And speaking of in-person conferences, I’m in Berlin! Yay! I dipped my toe back into travel three weeks ago by speaking at Hyland’s CommunityLive conference in Nashville, and this week I’m on a panel at Camunda’s annual conference. I’ve been in Berlin for this conference several times in the past, from the days when they held the Community Day event in their office by just pushing back all the desks. Great to be back and hear about some of their successes since that time.
Day 1 started with an opening keynote by Jakob Freund, Camunda’s CEO. This is a live/online hybrid conference, and considering that Camunda did one of the first successful online conferences back in 2020 by keeping it live and real, this is shaping up to be a forerunner in the hybrid format, too. A lot of companies need to learn to do this, since many people aren’t getting back on a plane any time soon to go to a conference when it can be done online just as well.
Anyway, back to the keynote. Camunda just published the Process Orchestration Handbook, which is a marketing piece that distills some of the current messaging around process automation, and highlights some of the themes from Jakob’s keynote. He points out the problems with a lot of current process automation: there’s no end-to-end automation, no visibility into the overall process, and little flexibility to rework those processes as business conditions change. As a result, a lot of process automation breaks since it falls over whenever there’s a problem outside the scope of the automation of a specific set of tasks.
Jakob focused on a couple of things that make process orchestration powerful as a part of business automation: endpoint diversity (being able to connect a lot of different types of tasks into an integrated process) and process complexity (being able to include dynamic parallel execution, event-driven message correlation, and time-based escalation). These sound pretty straightforward, and for those of us who have been in process automation for a long time these are accepted characteristics of BPMN-based processes, but these are not the norm in a lot of process orchestration.
He also walked through the complexities that arise due to long-running processes, that is, anything that involves manual steps with knowledge workers: not the same as straight-through API-only process orchestration that doesn’t have to worry about state persistence. There are a few good customer stories here this week that bring all of these things together, and I plan to be attending some of those sessions.
He presented a view of the process automation market: BPMS, low-code platforms, process mining, iPaaS/integration, RPA, microservices orchestration, and process orchestration. Camunda doesn’t position itself in BPMS any more – mostly since the big analysts have abandoned this term – but in the process orchestration space. Looking at the intersection between the themes of endpoint diversity and process complexity that he talked about earlier, you can see where different tools end up. He even gives a nod to Gartner’s hyperautomation term and how process orchestration fits into the tech stack.
He finished up with a bit of Camunda’s product vision. They released V8 this year with the Zeebe engine, but much more than that is under development. More low-code especially around modeling and front-end human task technology, to enable less technical developers. Decision automation tied into process orchestration. And stronger coverage of AI, process mining and other parts of the hyperautomation tech stack through partnerships as well as their own product development.
Definitely some shift in the messaging going on here, and in some of Camunda’s direction. A big chunk of their development is going into supporting low-code and other “non-developer” personas, which they resisted for many years. They have a crossover point for pro-code developers to create connectors that are then used by low-code developers to assemble into process orchestrations – a collaboration that has been recognized by larger vendors for some time. Sensible plans and lots of cool new technology.
The rest of the day is pretty packed, and I’m having trouble deciding which sessions to attend since there are several concurrent sessions that all sound interesting. Since most of them are also virtual for remote attendees, I assume the recordings will be available later and I can catch up on what I missed. It’s not too late to sign up to attend the rest of today and tomorrow virtually, or to see the recorded sessions after the fact.
After hearing Heidi Badenhorst of aYo Holdings speak this morning at the Hyland CommunityLIVE 2022 general session, I knew that I wanted to see her breakout session for more details on what they’re doing. I use microinsurance as an example of a new business model that insurance companies can consider once they’ve automated a lot of their processes (otherwise, it’s not cost-effective), but this is the first chance that I’ve really had to hear more about microinsurance in action.
aYo provides low-cost hospital and life insurance (as well as a few other types) for more than 17M people across several African countries, with the goal to scale up to more than 100M customers. As with a lot of other businesses spreading into developing countries, the customers use their mobile phones to interact with aYo’s insurance products through mobile money for receiving payments and WhatsApp chatbots for gathering information and submitting documents. aYo is owned by MTN, the largest mobile provider in Africa, and the insurance service was first started as a loyalty benefit for mobile customers.
Microinsurance is about tiny premiums and small payouts — small amounts in our rich countries, but a day’s pay in many African markets — and the only way to do this effectively is to maximize the amount of automation. Medical records are rudimentary, often hand-written and without standard treatment or claim codes, making it difficult to automate and subject to fraud.
They have been managing all of this with manual processes (including manual downloads of documents) and spreadsheets, but are moving to a greater degree of automation using Alfresco Process Automation (APA) and other components to pay 80% of the claims without human intervention. Obviously, they need content management and intelligent capture as well, but the content-centric process orchestration and AI for fraud detection are key to automation. They also needed a cloud solution to support their multi-national operations, and something that integrated well with their claims system. Since their solution is tightly integrated with the phone network, they can use location data from the claim to correlate with hospital locations as another potential anti-fraud check. They’re also using behavioral data from how their customers interact with WhatsApp to optimize their customer experience.
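The location cross-check mentioned above is a nice example of a cheap, automatable fraud signal: flag a claim if the phone's location at claim time is implausibly far from the named hospital. A minimal sketch, where the 50 km threshold and function names are my own invention for illustration, not aYo's actual rule:

```python
# Sketch of a geo-based anti-fraud check: great-circle distance between
# the claim's phone location and the hospital's location. Threshold and
# names are illustrative assumptions.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def location_flag(claim_latlon, hospital_latlon, threshold_km=50):
    """True if the claim location is suspiciously far from the hospital."""
    return haversine_km(*claim_latlon, *hospital_latlon) > threshold_km
```

A flagged claim wouldn't be auto-rejected, just routed out of the 80% straight-through path for human review.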
We saw a video of what a claim looks like from the customer side — WhatsApp chatbot with links for uploading documents — as well as the internal aYo operations side in more conventional Alfresco workspaces and dashboards. This was really inspirational on a number of levels. First of all, just from a business and technology standpoint, they’re doing a great job of improving their business through automation. More importantly, they are using this to allow for cost-effective processing of very small claims, and thereby enabling coverage for millions of people who have never previously had access to insurance. Truly, a transformational business model for insurance.
I’ll be heading home this afternoon, but wanted to grab a couple of the morning sessions while I’m here in Nashville. Nashville is really a music city, and we’ve started off each day with live music from the main stage, plus at the evening event last night. Susan deCathelineau, Hyland’s Chief Customer Success Officer, kicked things off with a review of some of the customer support and services improvements that they have made in response to customer feedback, and how the recent acquisitions and product improvements have resonated with customers. Sticking with the “voice of the customer” theme, Ed McQuiston, Chief Commercial Officer, hosted a panel of customers in a “Late Morning Show” format.
His guests were Heidi Badenhorst, Group Head of Strategy and Special Projects at aYo Holdings (South African micro-insurance provider); Adam Podber, VP of Digital Experience at PVH (a fashion company that owns brands such as Tommy Hilfiger and Calvin Klein); and Kim Ferren, Senior AP Manager at Match (the online dating company).
Badenhorst spoke first about how aYo is trying to bridge the financial gap by providing insurance to the low end of the market, especially health insurance for people who have no other support network in situations when they can’t work (and therefore feed their families). They use Alfresco to automatically capture and store medical documents directly from customers (via WhatsApp), and plan to automate the (still manual) claims processing using rules and process in the future. This is such an exciting application of automation, and exactly the type of thing that I spoke about yesterday in my presentation: what new business models are possible once we automate processes. I’m definitely going to hit her breakout session later this morning.
Podber talked about their experience with Nuxeo for digital asset management, moving from 17 DAMs across different regions to a consolidated environment that has different user experience depending on the user’s role and interests. With a number of different brands and a huge number of products within each brand, this provides them with a much more effective way to manage their product information.
Ferren was there to talk about accounts payable, but there was a hilarious Match.com ad shown first where Satan and 2020 go on a date in all the empty places that we couldn’t go back then, plus stole some toilet paper and ended up posing in front of a dumpster fire. Match is an OnBase customer, and although AP isn’t necessarily a sexy application, it’s a critical part of any business — one of my first imaging and workflow project implementations back in the 1990s was AP and I learned a lot about how it works. Match used to combine Workday, Great Plains, NetSuite and several other local systems across their different geographic regions; now it’s primarily Workday with Hyland providing integrated support and Brainware intelligent capture.
There was a good conversation amongst the panelists about lessons learned and what they are planning to do going forward; expect some good breakout sessions from each of these companies with more details about what they’re doing with Hyland products.
Hey, I gave a presentation yesterday, first time in person in almost three years! Here’s the slides, and feel free to contact me if you have questions. I can’t figure out how to get the embed short code on mobile, but when I’m back in the office I’ll give it another try and you may see the slideshow embedded below. Update: found the short code!
We had a small analyst Q&A with Ed McQuiston (Chief Commercial Officer) and John Phelan (Chief Product Officer) this afternoon at Hyland’s CommunityLIVE, giving us a chance to hear more and ask some questions about the company and product direction. I’m particularly interested in the product roadmap in terms of convergence (or not) of the four content engines that they now have, both from a technology standpoint and go-to-market positioning. They mostly go to market through verticals, which raises the question of whether they will position each engine as serving a specific industry to avoid customer confusion and also reduce the training cycle for their own industry-specific teams. There are definitely some places where that makes sense, such as using Nuxeo for digital asset management for rich media, but there’s arguably a lot of overlap between functionality in, for example, OnBase and Alfresco.
McQuiston addressed how they are consolidating the external view of the company and product pages: instead of having separate entry points for each product, they are consolidating the online experiences regardless of what product that you’re looking for information on. Their sales teams are very oriented around the industry verticals, and there is a strong alignment between products and verticals. However, they are finding that some of the verticals that were previously OnBase are shifting to be a more natural match with one of the other platforms, such that net new customers in those verticals may use a different platform than what they sold with OnBase previously. Not surprising, but it’s possible that the vertical sales teams end up in a Maslow’s Hammer situation of selling what they’re most familiar with.
Phelan (and I quizzed him at lunch about this) isn’t discussing any particular plans about core engine convergence/unification: their public message is that they are supporting all four content engines. As I said to them in the Q&A session, I hope we’re not sitting here in five years hearing the same message. Since their company value is based on ARR — annual recurring revenue — they have little financial basis for cutting any of their existing product lines, but from a technology standpoint, they will move faster as a company if they can start unifying at least the core engines under the covers, then gradually migrate the “product specific” capabilities to integrate with a shared core engine.
Hyland’s an old company in software terms, and could definitely benefit from shedding some of their legacy mindset. They’ve got a lot of solid technology in their portfolio, and a huge amount of industry vertical experience; they need to find the right product and go-to-market roadmap to best leverage that.
This might be the only breakout session that I make it to today, since I’m in an executive Q&A after this, then need a bit of time for final preparations for my own session later this afternoon. The insurance industry session was presented by Richard Medina of Doculabs, with the title “Don’t Just Survive – Thrive”, a phrase that I used quite a bit in talking about digital business during the pandemic era to stress that it’s not just about doing the minimum possible to survive, but leveraging new technologies and methods to go far beyond that and become a market leader. Here, he was specifically talking about digital insurance operations; coincidentally, the use case that I will cover in my own presentation is insurance claims.
He started with a slide defining intelligent automation, specifically referencing workflow (process orchestration), RPA, intelligent document processing, natural language processing, and process mining, since these are the specific technologies that Doculabs covers. There was quite a bit on their market and methods, but he came back to a key point for those in the audience: a lot of organizations didn’t consider content management as a real part of digital transformation. So wrong. In applications like insurance claims, content is core to the process: the entire process of handling a claim is based on populating the claims file with all of the necessary documentation to support the claim decision. It doesn’t mean that all of this content is on paper any more, or even ever exists on paper within the organization: forms are created online and e-signed, spreadsheets are used to document a full statement of loss, and policyholders upload images related to their claim. This is, of course, not the same as claims operations of old, where everything was on paper in huge file folders, occasionally with the addition of a CD that holds some photos of damage, although those were often printed for the file. E-mails would be printed out and added to the paper folder.
This brings some challenges to insurance operations, particularly claims where there may be rich media involved. Not only do paper and possibly multiple online document repositories need to be consolidated (via merging or federation), but they also need to include other types of unstructured content: photos, videos, social media conversations, and more. The core content engine(s) needs to support all of this, but there’s much more: it needs to be cloud-based for today’s remote workforce, include NLP and AI during intelligent capture for automatic content classification and extraction, have process automation to move the content through its lifecycle and integrate with line-of-business systems, and include chatbots for simpler interactions with policyholders. Medina talked about the unicorn of intelligent automation applied to documents: AI/ML, intelligent capture, RPA, and BPM (orchestration). He walked through a couple of scenarios on policy administration, servicing and claims, showing how different technologies come into play at each point in these processes.
He showed some of the issues to consider for different levels of transformation at each stage in a potential roadmap, starting with content ingestion (capturing content, automating completion checklists, and integrating the content with the LOB systems), then workflow. He finished up with a bit on process mining to show how it can be used to introspect your current processes, do some root cause analysis, and optimize the process. A flying tour through how many of the technologies being discussed here this week can be applied in insurance applications.
If you’re interested in some of the best practices around projects involving these technologies, check out my presentation at 4:45pm today on maximizing success in business automation projects.
I arrived a day early for my first time at Hyland’s CommunityLIVE conference to attend yesterday’s executive forum, and enjoy a very cool dinner experience at the Grand Ole Opry. This morning, the opening general session is back to IRL in a big way: big conference venue, live band before the start, lots of butts in seats (not wearing pajama pants). It’s their first live conference in 3 years, and for many of us it’s been almost as long. In fact, the first IRL conference that was cancelled for me was with Alfresco in New York that was scheduled for March 2020, and now Alfresco is part of Hyland.
The general session started with Hyland CEO Bill Priemer talking about trends in content management, and how the pandemic accelerated the shift towards a cloud-based digital workplace for many organizations. Hyland has done two significant acquisitions during the past two years — Alfresco and Nuxeo — which potentially positions them to address a wider range of customer needs, if they can move forward with a reasonable product roadmap that (eventually) converges their portfolio. It appears that they are only now moving their legacy OnBase product to the cloud, although their acquisitions add that capability already. Also, this puts them into the open source space, which is new to them but they appear to be embracing that. As he discussed yesterday at the executive forum, Priemer talked about plugging in other capabilities to their core content platform: not just with Hyland’s products, but with anything that adds value to an integrated digital workspace.
He spoke about upgrade challenges, which seems to be a bit of a sore point: with a 30-year-old content management company, there’s going to be a lot of legacy customers who have millions of documents in those systems, and upgrading is a non-trivial undertaking. Moving to the cloud is not only a big migration job, but a scary concept for organizations who believe that only their own on-premise servers are safe. That’s not true, of course, but the beliefs are there. Pre-acquisition, Alfresco already had a significant campaign showing customers moving from on-premise content management (such as IBM/FileNet and Hyland) to their cloud solution, and how much it could reduce costs while maintaining security and access. If there are any Alfresco marketing people left at Hyland, this would be a good time to bring their views to bear on how to motivate on-prem customers to move to the cloud.
John Phelan, Chief Product Officer, was up next, and also stressed extensibility as a necessity as opposed to the old days of standalone content management systems. He stressed that Hyland is not just “the OnBase company” any more, but a company that offers four core content platforms (OnBase, Perceptive, Alfresco, Nuxeo), although that’s arguably not really a good thing since it divides focus of the product groups, can create islands of sales and support based on product, and confuses the customers.
Sam Babic, Chief Innovation Officer, took the stage to expand on the hyperautomation talk that he gave yesterday at the executive forum. Interestingly, his first slide called out business process management and business process automation (although I’m a bit unclear on the distinction that he makes between them) as well as RPA and case management, and had a quick screen grab video of an Alfresco process manager orchestration. This is a much better message to the audience on how content and process work together in general, as well as in the context of the myriad technologies included in Gartner’s definition of hyperautomation.
Don Dittmar, who manages industry partner relationships, joined remotely (via a pre-recorded video) on how Hyland works together with partner companies that offer vertical or line-of-business systems, including Workday, ServiceNow and Guidewire. I’ll be talking about an insurance claims use case in my presentation this afternoon, and the integration with Guidewire fits right into that. This is a classic content management problem, and having pre-built integrations with these systems is a huge help for companies that want to better manage content that is directly related to transactions and cases in their LOB systems.
Alex Cameron, product manager for healthcare solutions, showed us some of their solutions for healthcare enterprises, such as intelligent medical records, which captures and classifies unstructured medical records (documents), then manages the content according to regulatory requirements. Then Max Gavanon, product manager for PAM and DAM, discussed their solutions for digital asset management (i.e., non-document content), such as 3D designs cross-referenced with materials for product design.
Up next was Eileen Thornton, AVP of user experience, to talk about their user experience development across the Hyland portfolio, and show a few screens of what this will look like. This seems to indicate that their initial integration/consolidation of their content engines will happen “at the glass” by providing a common UX. It sounds like most of their current OnBase customers are still on the old-style desktop UI, since she talked about using this new UX to move to a modern web-based experience.
Lots of good content, and now we’re off to individual industry sessions and later breakout tracks. I’ll be presenting at 4:45 this afternoon in the Business Transformation track, hope you can join me!