CamundaCon 2023 Day 1: Behavior-Driven Development at Zurich Insurance

Zurich has been working on their end-to-end processes for a long time, and found that the lead time for changes became a problem as they attempted to move to continuous improvement through continuously changing processes. Laurenz Harnischmacher and Stefan Post of Zurich presented their approach to improving this. They measured Change Lead Time, the time from when a developer starts working on a change until it is released to production, and found that it was generally about 10 weeks. This was a problem not just for agile processes in general: they also had a mismatch between what the developers were creating and what the users actually needed, which meant that a single problem could require multiple of those 10-week cycles to solve. In other words, like many other organizations, they had no appropriate and complete method of communicating needs from business to the developer.

They adopted Behavior-Driven Development (BDD), which provides a methodology for the business to define the desired system behavior using structured test scenarios, aided by the use of Cucumber. These scenarios are written in given…when…then language to specify the context, constraints and expected behavior. This takes a bit longer up front: the business unit has to write complete test cases, then the developers/testers create the automated testing scenarios and feed back to the business if they find inconsistencies or omissions, then development occurs, then the automated tests can be applied to the finished code. This is much more likely to result in code that works right for the business the first time, rather than requiring multiple passes through the full development cycle.
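For readers who haven’t seen BDD tooling in action, here is a minimal sketch of how a given…when…then scenario binds to a Cucumber-JVM step definition. The scenario, class and claim-approval logic are invented for illustration, not taken from Zurich’s actual test suite:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical Gherkin scenario this class would bind to (claims.feature):
//   Scenario: Auto-approve a low-value claim
//     Given a claim of 400 EUR from a customer with no prior claims
//     When the claim is submitted for assessment
//     Then the claim status is "AUTO_APPROVED"
public class ClaimAssessmentSteps {

    private int amount;
    private int priorClaims;
    private String status;

    @Given("a claim of {int} EUR from a customer with no prior claims")
    public void a_claim_with_no_prior_claims(int amount) {
        this.amount = amount;
        this.priorClaims = 0;
    }

    @When("the claim is submitted for assessment")
    public void the_claim_is_submitted() {
        // Stand-in for a call to the real assessment service:
        status = (amount < 500 && priorClaims == 0) ? "AUTO_APPROVED" : "MANUAL_REVIEW";
    }

    @Then("the claim status is {string}")
    public void the_claim_status_is(String expected) {
        assertEquals(expected, status);
    }
}
```

The feature file is the artifact that the business writes; the step definitions are what the developers/testers maintain, which is exactly the division of labour described above.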

Although they’ve achieved a huge improvement in lead time, I’m left with the thought that somewhere along the Agile journey, we lost the ability to properly create business requirements for software. Agile in the absence of written requirements works okay if your development cycle is super-short, but 10 weeks is not really that Agile. I’m definitely not advocating a return to waterfall development, where requirements are completely documented and signed off before any development begins, but we need to bake some of the successful bits of waterfall into a more (lower-case a) agile iterative approach. This is independent of Camunda or any other platform, and more a comment on how application development needs to work in order to avoid excessively lengthy rework cycles. Also, the concepts of BDD borrow a lot from earlier test-driven development methods; in software development, like much else in life, everything old is new again.

Interesting to note from the comments at the end that the automated test cases are only for automated/service steps, and that the business people do not create process models directly.

CamundaCon 2023 Day 1: Connectors Demo

When a customer presenter was unable to attend at the last minute, Joe Pappas stepped in and gave an impromptu presentation and demo on building connectors. The Camunda Marketplace (formerly the Community Hub) is the place for some of the expanding shared resources that Jakob Freund spoke about in the keynote: there’s one section for Camunda-provided connectors, one for those provided by partners, and one for those from the community.

Joe walked through some of the connectors that he has built and published, showing the code and then demonstrating the functionality. You can find his connectors in the Community section of the Marketplace, including NATS inbound/outbound, email inbound, file watch inbound, and database inbound from Postgres or MongoDB. Not much to write about since it was mostly a code demo, but cool stuff!

CamundaCon 2023 Day 1 Keynote

I feel like I’m barely back from the academic research BPM conference in Utrecht, and I’m already at Camunda’s annual CamundaCon, being held in New York (Brooklyn, actually) — the first time for the main conference outside of Germany. The location change from Berlin is a bit of a tough call since they will lose some of the European customers who don’t have a budget for international travel, but the opportunity to see their North American customers will make up for it. They’re also running the conference virtually for those of you who can’t be here in person, and you can sign up for free to attend the presentations online.

Although I don’t blog about anything that happens after the bar is open, I did have a couple of interesting conversations at the networking event last night about my relationship with Camunda. I’m here this week as an independent analyst, and although they are covering my travel expenses, I’m not being paid for my time and (as usual) the opinions that I write here are my own. This is the same arrangement I have with any vendor whose conference I attend, although I have become a bit pickier about which locations I’m willing to travel to (hint: not Vegas). I’ve been covering Camunda for a long time, starting 10 years ago with their fork from Activiti, back when they didn’t capitalize their name. They’ve been a client of mine in the past for creating white papers and webinars, and for speaking at their conference. I’ve also worked with some of their clients on technical strategy and architecture, which is the other side of my business.

The first day opened with a keynote from Camunda CEO Jakob Freund giving a brief retrospective of the last 10 years of their growth and especially their current presence in North America. There are over 200 people attending today in person at the 74Wythe event space, plus an online contingent of attendees. He started with a vision of the automated enterprise, and how this is made difficult by the complexity of mission-critical processes that cross multiple silos of systems and organizational departments. Process orchestration allows for automation of the end-to-end processes by acting as a controller that can invoke the right resource — whether a person or a system — at the right time while maintaining end-to-end visibility and management. If you’re not embracing process orchestration, you run the risk of having broken processes that have a significant impact on your customer satisfaction, efficiency and innovation.

Camunda has more than 500 customers globally now, and has amassed over 5000 use cases for how those organizations are using Camunda’s software. This has allowed them to develop a process orchestration maturity model: from single projects, to broader initiatives, to distributed adoption, to a strategic scaled adoption of process orchestration. Although obviously Jakob sees the Camunda Process Orchestration Platform as a foundational platform, he looked at a number of other non-technical components such as stakeholder buy-in, plus technical add-ons and integration partners. I like that he started with strategic alignment and ended with value monitoring wrapping back to the alignment; this type of alignment between strategic goals and operational metrics is something that I strongly believe in and have written about quite a bit.

Since we’re in New York, his process orchestration in action part was focused on financial services, although with lessons for many other industries. I work a lot with my own financial services clients, and the challenges listed are very familiar. He walked through case studies of Desjardins (legacy BPMS replacement), Truist (merging systems from two merged banks), National Bank of Canada (automation CoE to radically reduce project development time), and NatWest (CoE to aid self-service projects).

He moved on to talk about the innovation that Camunda is introducing through their technology. They now address more of the BPM lifecycle than they started out with — which was purely as a developer tool — and now provide more tools for business and IT to collaborate on process improvement/automation projects. They are also addressing the acceleration of solution delivery through some low-code aspects; this was a necessary move for them in the face of the current market. Their challenge will be keeping the low-code tooling from getting in the way of the developers, and keeping the technical details from getting in the way of the business people.

No technical conference today is complete without at least one slide on AI, and Jakob did not disappoint. He walked through how they see AI as it applies to process orchestration: predictive AI (e.g., process mining and decisioning), generative AI (e.g., form generator from simple language), and assistive AI (e.g., knowledge worker helper).

He described their connectors marketplace, which includes connectors created by them but also curated from their partners. Connectors are essential for integration, but their roadmap also includes process templates, internal marketplaces within an organization, and entire industry solutions and applications. This is an ambitious undertaking that a lot of vendors have done badly, and I’ll be very interested in seeing how this develops.

He finished up with some larger architecture issues: cloud support, security and compliance, multi-tenancy and how this allows them to support organizations both big and small. Their roadmap shows a lot of components that are targeted at broadening their reach while still supporting their long-term technical customers.

State of Process Orchestration panel replay

I was at CamundaCon in Berlin last month, and was on a panel about the state of process orchestration. Check it out.

Lots of interesting discussion, and it was fun to hear other perspectives from a large SI (Infosys) and a customer (PershingX) on the panel with me. Thanks to Camunda for the invitation, and my first European trip in almost three years!

CamundaCon 2022 – that’s a wrap!

It’s been a quick two days at CamundaCon 2022 in Berlin, and as always I’ve enjoyed my time here. The second day finished with a quick fireside chat with Camunda co-founders Jakob Freund and Bernd Ruecker, who wrapped up some of the conference themes about process orchestration. I’ll post a link to the videos when they’re all available; not sure if Camunda is going to publish them on their YouTube channel or put them behind a registration page.

I mentioned previously what a great example of a hybrid conference this has been, with both speakers and attendees either on-site or remote — my own panel included three of us on the stage and one person remotely, and it worked seamlessly. One part of this that I liked is that in the large break lounge area, there were screens set up with the video feed from each of the four stages, and wireless headsets that you could tune to any of the four channels. This let you be “remote” even when on site, which was necessary in some of the smaller rooms where it was standing room only. Or if you wanted to have a coffee while you were watching.

Thanks to Camunda for inviting me, and the exciting news is that next September’s CamundaCon will be in New York: a much shorter trip for me, as well as for many of Camunda’s North American customers and partners.

CamundaCon Day 2: Business Process Optimization at Scale

Michael Goldverg from BNY Mellon presented on their journey with automating processes within the bank across thousands of people in multiple business departments. They needed to deal with interdependencies between departments, variations due to account/customer types, SLAs at the departmental and individual level, and thousands of daily process instances.

They use the approach of a single base model with thousands of variations – the “super model” – where the base model appears to include smaller ad hoc models (mostly snippets surrounding a single task that were initially all manual operations) that are assembled dynamically for any specific type of process. Sort of an accidental case management model at first glance, although I’d love to get a closer look at their model. There was a question about the number of elements in their model, which Michael estimated as “three dozen or so” tasks and a similar number of gateways, although he can’t share the actual model for confidentiality reasons.

They have a deployment architecture that allows for multiple clusters accessing a single operational database, where each cluster could have a unique version of the process model. Applications could then be configured to select the server cluster – and therefore the model version – at runtime, allowing for multiple models to be tested in a live environment. There’s also an automated process instance migration service that moves the live process instances if the old and new process models are not compatible. Their model changes constantly, and they update the production model at least once per week.
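Camunda 7’s Java API supports exactly this kind of versioned migration. As a point of reference, a migration service along these lines might look like the hedged sketch below; the definition IDs are placeholders, and this is not BNY Mellon’s actual code:

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.migration.MigrationPlan;

public class InstanceMigration {

    // Migrate all running instances of one definition version to another;
    // sourceId/targetId are hypothetical process definition IDs.
    public static void migrate(ProcessEngine engine, String sourceId, String targetId) {
        RuntimeService runtimeService = engine.getRuntimeService();

        MigrationPlan plan = runtimeService
            .createMigrationPlan(sourceId, targetId)
            .mapEqualActivities()   // auto-map activities whose IDs are unchanged
            .updateEventTriggers()  // re-subscribe timers/messages where needed
            .build();

        runtimeService.newMigration(plan)
            .processInstanceQuery(runtimeService.createProcessInstanceQuery()
                .processDefinitionId(sourceId))
            .executeAsync();        // run as a batch for thousands of live instances
    }
}
```

The `executeAsync()` batch form matters at their scale: migrating thousands of instances synchronously in one transaction would be impractical.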

They’ve had to deal with optimistic locking exceptions (fairly common when you have a lot of parallel gateways and multiple instances of the engine) by introducing their own external locking mechanism, and by offloading some of this to the Camunda JobExecutor using asynchronous continuations, although that can cause a hit on performance. The hope is that this will be resolved when they move to the V8 engine – V8 doesn’t rely on a single relational database and is also highly distributed by design.
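For readers who haven’t hit this: an optimistic locking exception is the engine detecting that two transactions updated the same execution row. The usual mitigations are asynchronous continuations (so the job executor’s built-in retries absorb the collision) or an application-level retry, roughly like this hypothetical sketch (the names and retry policy are invented):

```java
import org.camunda.bpm.engine.OptimisticLockingException;
import org.camunda.bpm.engine.RuntimeService;

public class RetryingVariableWriter {

    // Retry an engine write that may collide with a parallel-gateway join
    // updating the same execution row; retry count and backoff are illustrative.
    public static void setVariableWithRetry(RuntimeService runtimeService,
            String executionId, String name, Object value) throws InterruptedException {
        int attempts = 0;
        while (true) {
            try {
                runtimeService.setVariable(executionId, name, value);
                return;
            } catch (OptimisticLockingException e) {
                if (++attempts >= 3) {
                    throw e;                   // give up after a few collisions
                }
                Thread.sleep(50L * attempts);  // simple linear backoff
            }
        }
    }
}
```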

They run 50-100k transactions per day, and have hundreds of millions of tasks in the history database. They manage this with aggressive cleaning of the history database – currently set to 60 days – by archiving the task history as PDFs in their content management system where it’s discoverable. They are also very careful about the types of queries that they allow directly on the Camunda database, since a single poorly-constructed search can bring the database to its knees: this is why Camunda, like other vendors, discourages the direct querying of their database.
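Camunda 7 supports this kind of aggressive history TTL directly in engine configuration. A minimal sketch of a 60-day default TTL with a nightly cleanup window, assuming a standalone engine (the window times are invented; this is not their actual configuration):

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineConfiguration;
import org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl;

public class EngineSetup {

    public static ProcessEngine build() {
        ProcessEngineConfigurationImpl config = (ProcessEngineConfigurationImpl)
            ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration();

        // Run the history cleanup job in a nightly window so it doesn't
        // compete with daytime transaction load.
        config.setHistoryCleanupBatchWindowStartTime("01:00");
        config.setHistoryCleanupBatchWindowEndTime("05:00");

        // Default TTL for definitions that don't set their own
        // camunda:historyTimeToLive; "P60D" matches the 60 days above.
        config.setHistoryTimeToLive("P60D");

        return config.buildProcessEngine();
    }
}
```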

There are a lot of trade-offs to be understood when it comes to performance optimization at this scale. Also some good points about starting your deployment with a more complex configuration, e.g., two servers even if one is adequate, so that you’re not working out the details of how to run the more complex configuration when you’re also trying to scale up quickly. Lots of details in Michael’s presentation that I’m not able to capture here; definitely check out the recorded video later if you have large deployment requirements.

CamundaCon 2022 Day 2 keynote

My little foldable keyboard isn’t playing nice, so I’m typing this directly on my iPad which is…not ideal. However, I will do my best and debug the keyboard later.

Day 2 of CamundaCon 2022 here in Berlin started off with a keynote from Bernd Ruecker, Camunda co-founder and chief technologist, and Daniel Meyer, CTO. Version 8.1 is coming up, and with it some new connectors as well as other core enhancements. Bernd started out with a reinforcement of some of Jakob Freund’s messages yesterday: the distinction between task (depth) and process (breadth) automation, and how process orchestration is characterized by endpoint diversity and process complexity. These are important points in understanding the scope of process orchestration, but also for companies like Camunda to distinguish themselves in an increasingly diverse and crowded “process automation” market.

Once Bernd had walked us through what an initial process orchestration could look like (for a bank account opening example), Daniel took over to talk about moving from an initial project to a transformed, process-centric enterprise. Some of this requires tools that allow less technical developers to get involved, which means having more connectors available for these developers to create apps by assembling and configuring connectors, while more technical developers may be creating new connectors for what’s not offered out of the box by Camunda. Bernd, who loves his live demos, showed us how to create a new connector quickly in Java, then expose it graphically in the modeler using a connector template – this makes it appear as an activity type directly in the Camunda modeler. Once they are in the modeler, connectors can be used in any application, so that (for example) a connector to your bespoke mainframe monolith can be created and added to the modeler once, then used in a variety of applications.
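To give a sense of scale for “quickly”: with Camunda’s connector SDK, an outbound connector is essentially a single annotated Java class, roughly like the skeleton below. This is not Bernd’s demo code; the type name and record classes are invented, and the variable-binding method name varies by SDK version:

```java
import io.camunda.connector.api.annotation.OutboundConnector;
import io.camunda.connector.api.outbound.OutboundConnectorContext;
import io.camunda.connector.api.outbound.OutboundConnectorFunction;

@OutboundConnector(
    name = "GREETING",
    inputVariables = {"recipient"},
    type = "io.example:greeting:1")   // hypothetical task type for the template
public class GreetingConnector implements OutboundConnectorFunction {

    @Override
    public Object execute(OutboundConnectorContext context) {
        // Newer SDK versions bind input variables this way; older versions
        // used context.getVariablesAsType(GreetingRequest.class).
        GreetingRequest request = context.bindVariables(GreetingRequest.class);
        return new GreetingResult("Hello, " + request.recipient());
    }

    record GreetingRequest(String recipient) {}
    record GreetingResult(String message) {}
}
```

The connector template is then a JSON file that maps those input variables to form fields in the modeler, which is what makes the connector appear as a native activity type.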

The concept of connectors as a way for less technical developers to use a BPMN model as an application development framework isn’t new: many other BPMS vendors have been doing this for a long time. Camunda is obviously feeling the pressure to move from a purely developer-focused platform and address some level of low-code needs, and connectors are one of the main ways that they are doing this. The ease of creating new connectors is pretty cool – many products let you use their out of the box connectors but don’t make it that easy to make new ones. Camunda is positioning this capability (creating new connectors quickly) as core to automating the enterprise.

We heard about more of what they’ve been releasing this year, including the web modeler that allows new developers and business analysts to be onboarded quickly without having to install anything locally. The modeler includes BPMN validation so that correct process models are created and errors avoided before deploying to the server. They are also using FEEL (Friendly Enough Expression Language) – borrowed from the DMN specification – for scripting within tasks. This use of FEEL is also being done by other standards-focused vendors, such as Trisotech. We also saw some of the things that they’re working on, such as interactive debugging to step through processes, and an improved forms UI builder. Again, not completely new ideas in the BPM space, but nice productivity enhancements to their developer experience. Based on what they’ve seen within their own company, they’re integrating Slack and Microsoft Teams for human task orchestration to avoid the requirement for users to go to a separate app for their process task list.

Bernd addressed the issue of Camunda supporting low code, when they have been staunchly pro-code only for most of their history. Fundamentally, the market (that is, their customers and prospective customers) needs this capability, and it’s clear that you have to offer at least something low-code(ish) to play in the process automation space these days. This is definitely a shift for them, and they are doing it fairly gracefully, although they are a bit behind the curve in much of the functionality because they stuck to their roots for so long. In their favour, they’re still a small and nimble company and can roll out this type of functionality in their product fairly quickly. They are mostly just dipping into the pro-code end of the low-code space, and it will be interesting to see how far they go in upcoming releases. Creating more low-code tooling and more connectors obviously creates more long-term technical debt for Camunda: if they decide this isn’t the way forward after a while, or they change some of the underlying architecture, customers could end up with legacy versions of connectors and low-code tooling that need to be updated. Definitely worth checking out for existing Camunda customers who want to accelerate adoption within their organizations.

By the way, I’ve had so much great feedback on our panel yesterday: happy to hear that we had some nuggets of wisdom in there. So many good conversations last night at the BBQ and continuing into today between sessions. I’ll post a link to the recorded session when it’s published.

Back to Berlin! It’s time for CamundaCon 2022

I realize that I’m completely remiss for not posting about last week’s DecisionCAMP, but in my defense, I was co-hosting it and acting as “master of ceremonies”, so was a bit busy. This was the third year for a virtual DecisionCAMP, with a plan to be back in person next year, in Oslo. And speaking of in-person conferences, I’m in Berlin! Yay! I dipped my toe back into travel three weeks ago by speaking at Hyland’s CommunityLive conference in Nashville, and this week I’m on a panel at Camunda’s annual conference. I’ve been in Berlin for this conference several times in the past, from the days when they held the Community Day event in their office by just pushing back all the desks. Great to be back and hear about some of their successes since that time.

Day 1 started with an opening keynote by Jakob Freund, Camunda’s CEO. This is a live/online hybrid conference, and considering that Camunda did one of the first successful online conferences back in 2020 by keeping it live and real, this is shaping up to be a forerunner in the hybrid format, too. A lot of companies need to learn to do this, since many people aren’t getting back on a plane any time soon to go to a conference when it can be done online just as well.

Anyway, back to the keynote. Camunda just published the Process Orchestration Handbook, which is a marketing piece that distills some of the current messaging around process automation, and highlights some of the themes from Jakob’s keynote. He points out the problems with a lot of current process automation: there’s no end-to-end automation, no visibility into the overall process, and little flexibility to rework those processes as business conditions change. As a result, a lot of process automation breaks since it falls over whenever there’s a problem outside the scope of the automation of a specific set of tasks.

Jakob focused on a couple of things that make process orchestration powerful as a part of business automation: endpoint diversity (being able to connect a lot of different types of tasks into an integrated process) and process complexity (being able to include dynamic parallel execution, event-driven message correlation, and time-based escalation). These sound pretty straightforward, and for those of us who have been in process automation for a long time these are accepted characteristics of BPMN-based processes, but these are not the norm in a lot of process orchestration.

He also walked through the complexities that arise due to long-running processes, that is, anything that involves manual steps with knowledge workers: not the same as straight-through API-only process orchestration that doesn’t have to worry about state persistence. There are a few good customer stories here this week that bring all of these things together, and I plan to be attending some of those sessions.

He presented a view of the process automation market: BPMS, low-code platforms, process mining, iPaaS/integration, RPA, microservices orchestration, and process orchestration. Camunda doesn’t position itself in BPMS any more – mostly since the big analysts have abandoned this term – but in the process orchestration space. Looking at the intersection between the themes of endpoint diversity and process complexity that he talked about earlier, you can see where different tools end up. He even gave a nod to Gartner’s hyperautomation term and how process orchestration fits into the tech stack.

He finished up with a bit of Camunda’s product vision. They released V8 this year with the Zeebe engine, but much more than that is under development. More low-code especially around modeling and front-end human task technology, to enable less technical developers. Decision automation tied into process orchestration. And stronger coverage of AI, process mining and other parts of the hyperautomation tech stack through partnerships as well as their own product development.

Definitely some shift in the messaging going on here, and in some of Camunda’s direction. A big chunk of their development is going into supporting low-code and other “non-developer” personas, which they resisted for many years. They have a crossover point for pro-code developers to create connectors that are then used by low-code developers to assemble into process orchestrations – a collaboration that has been recognized by larger vendors for some time. Sensible plans and lots of cool new technology.

The rest of the day is pretty packed, and I’m having trouble deciding which sessions to attend since there are several concurrent sessions that all sound interesting. Since most of them are also virtual for remote attendees, I assume the recordings will be available later and I can catch up on what I missed. It’s not too late to sign up to attend the rest of today and tomorrow virtually, or to see the recorded sessions after the fact.

Virtual conference best practices: 2020 in review

Wow, it’s been over two months since my last post. I took a long break over the end of the year since there wasn’t a lot going on that inspired me to write, and we were in conference hiatus. Now that (virtual) conferences are ramping up again for 2021, I wanted to share some of the best practices that I gathered from attending — and in one case, organizing — virtual conferences over 2020. Having sent this information by email to multiple people who were organizing their own conferences, I decided to just put it here where everyone could enjoy it. Obviously, these are all conferences about intelligent automation platforms, but the best practices are applicable to any technical conference, and likely to many non-technical conferences.

In summary, I saw three key things that make a virtual conference work well:

  1. Live presentations, not pre-recorded. This is essential for the amount of energy in the presentation, and makes the difference between a cohesive conference and just a bunch of webinars. Screwups happen when you’re live, but they happen at in-person conferences, too.
  2. Separate and persistent discussion platform, such as Slack (or Pega’s community in the case of their conference). Do NOT use the broadcast vendor’s chat/discussion platform, since a) it will disappear once your conference is over, and b) it probably sucks.
  3. Replays of the video posted as soon as possible, so that people who missed a live session can watch it and jump into the discussion later the same day while others are still talking about it. Extra points for also publishing the presentation slides at the same time.

A conference is not a one-way broadcast, it’s a big messy collaborative conversation

Let’s start with the list of the virtual conferences that I wrote about, with links to the posts:

What I saw by attending these helped me when I was asked to organize DecisionCAMP, which ran in late June: we did the sessions using Zoom with livestreaming to YouTube (participants could watch either way), used Slack as a discussion platform (which is still being used for ongoing discussions and to run monthly events), and YouTube for the on-demand videos. Fluxicon used a similar setup for their Process Mining Camp: Skype (I think) instead of Zoom to capture the speakers’ sessions, with all participants watching through the YouTube livestream and discussing on Slack.

Some particular notes excerpted from my posts on the vendor conferences follow. If you want to see the full blog posts, use the tag links above or just search.

Camunda

  • “Every conference organizer has had to deal with either cancelling their event or moving it to some type of online version as most of us work from home during the COVID-19 pandemic. Some of these have been pretty lacklustre, using only pre-recorded sessions and no live chat/Q&A, but I had expectations for Camunda being able to do this in a more “live” manner that doesn’t completely replace an in-person event, but has a similar feel to it. They did not disappoint: although a few of the CamundaCon presentations were pre-recorded, most were done live, and speakers were available for live Q&A. They also hosted a Slack workspace for live chat, which is much better than the Q&A/chat features on the webinar broadcast platform: it’s fundamentally more feature-rich, and also allows the conversations to continue after a particular presentation completes.”
  • “As you probably gather from my posts today, I’m finding the CamundaCon online format to be very engaging. This is due to most of the presentations being performed live (not pre-recorded as is seen with most of the online conferences these days) and the use of Slack as a persistent chat platform, actively monitored by all Camunda participants from the CEO on down.”
  • “I mentioned on Twitter today that CamundaCon is now the gold standard for online conferences: all you other vendors who have conferences coming up, take note. I believe that the key contributors to this success are live (not pre-recorded) presentations, use of a discussion platform like Slack or Discord alongside the broadcast platform, full engagement of a large number of company participants in the discussion platform before/during/after presentations, and fast upload of the videos for on-demand watching. Keep in mind that a successful conference, whether in-person or online, allows people to have unscripted interactions: it’s not a one-way broadcast, it’s a big messy collaborative conversation.”
  • Note that things did go wrong occasionally — one presentation was cut off part way through when the presenter’s home internet died. However, the energy level of the presentations was really high, making me want to keep watching. Also hilarious when one speaker talked about improving their “shittiest process”, which is probably only something that would come out spontaneously during a live presentation.

Alfresco

  • “Alfresco Modernize didn’t have much of a “live” feel to it: the sessions were all pre-recorded which, as I’ve mentioned in my coverage of other online conferences, just doesn’t have the same feel. Also, without a full attendee discussion capability, this was more like a broadcast of multiple webinars than an interactive event, with a short Q&A session at the end as the only point of interaction.”

Celonis

  • “A few notes on the virtual conference format. Last week’s CamundaCon Live had sessions broadcast directly from each speaker’s home plus a multi-channel Slack workspace for discussion: casual and engaging. Celonis has made it more like an in-person conference by live-broadcasting the “main stage” from a studio with multiple camera angles; this actually worked quite well, and the moderator was able to inject live audience questions. Some of the sessions appeared to be pre-recorded, and there’s definitely not the same level of audience engagement without a proper discussion channel like Slack — at an in-person event, we would have informal discussions in the hallways between sessions that just can’t happen in this environment. Unfortunately, the only live chat is via their own conference app, which is mobile-only and has a single chat channel, plus a separate Q&A channel (via in-app Slido) for speakers that is separated by session and is really more of a webinar-style Q&A than a discussion. I abandoned the mobile app early and took to Twitter. I think the Celosphere model is probably what we’re going to see from larger companies in their online conferences, where they want to (attempt to) tightly control the discussion and demonstrate the sort of high-end production quality that you’d have at a large in-person conference. However, I think there’s an opportunity to combine that level of production quality with an open discussion platform like Slack to really improve the audience experience.”
  • “Camunda and Celonis have both done a great job, but for very different reasons: Camunda had much better audience engagement and more of a “live” feel, while Celonis showed how to incorporate higher production quality and studio interviews to good effect.”
  • “Good work by Celonis on a marathon event: this ran for several hours per day over three days, although the individual presentations were pre-recorded then followed by live Q&A. Lots of logistics and good production quality, but it could have had better audience engagement through a more interactive platform such as Slack.”

IBM

  • “As I’ve mentioned over the past few weeks of virtual conferences, I don’t like pre-recorded sessions: they just don’t have the same feel as live presentations. To IBM’s credit, they used the fact that they were all pre-recorded to add captions in five or six different languages, making the sessions (which were all presented in English) more accessible to those who speak other languages or who have hearing impairments. The platform is pretty glitchy on mobile: I was trying to watch the video on my tablet while using my computer for blogging and looking up references, but there were a number of problems with changing streams that forced me to move back to desktop video for periods of time. The single-threaded chat stream was completely unusable, with 4,500 people simultaneously typing “Hi from Tulsa” or “you are amazing”.”
  • “IBM had to pivot to a virtual format relatively quickly since they already had a huge in-person conference scheduled for this time, but they could have done better both for content and format given the resources that they have available to pour into this event. Everyone is learning from this experience of being forced to move events online, and the smaller companies are (not surprisingly) much more agile in adapting to this new normal.”

Appian

  • “This was originally planned as an in-person conference, and Appian had to pivot on relatively short notice. They did a great job with the keynotes, including a few of the Appian speakers appearing (appropriately distanced) in their own auditorium. The breakout sessions didn’t really grab me: too many, all pre-recorded, and you’re basically an audience of one when you’re in any of them, with little or no interactivity. Better as a set of on-demand training/content videos rather than true breakout sessions, and I’m sure there’s a lot of good content here for Appian customers or prospects to dig deeper into product capabilities but these could be packaged as a permanent library of content rather than a “conference”. The key for virtual conferences seems to be keeping it a bit simpler, with more timely and live sessions from one or two tracks only.”

Signavio

  • “Signavio has a low-key format of live presentations that started at 11am Sydney time with a presentation by Property Exchange Australia: I tuned in from my timezone at 9pm last night, stayed for the Deloitte Australia presentation, then took a break until the last part of the Coca-Cola European Partners presentation that started at 8am my time. In the meantime, there were continuous presentations from APAC and Europe, with the speakers all presenting live in their own regular business hours.”
  • “The only thing missing is a proper discussion platform — I have mentioned this about several of the online conferences that I’ve attended, and liked what Camunda did with a Slack workspace that started before and continued after the conference — although you can ask questions via the GoToWebinar Question panel. To be fair, there is very little social media engagement (the Twitter hashtag for the conference is mostly me and Signavio people), so possibly the attendees wouldn’t get engaged in a full discussion platform either. Without audience engagement, a discussion platform can be a pretty lonely place. In summary, the GTW platform seems to behave well and is a streamlined experience if you don’t expect a lot of customer engagement, or you could use it with a separate discussion platform.”

Pega

  • “In general, I didn’t find the prerecorded sessions to be very compelling. Conference organizers may think that prerecording sessions reduces risk, but it also reduces spontaneity and energy from the presenters, which is a lot of what makes live presentations work so well. The live Q&A interspersed with the keynotes was okay, and the live demos in the middle breakout section as well as the live Tech Talk were really good. PegaWorld also benefited from Pega’s own online community, which provided a more comprehensive discussion platform than the broadcast platform chat or Q&A.”

Fluxicon

  • “The format is interesting, there is only one presentation each day, presented live using YouTube Live (no registration required), with some Q&A at the end. The next day starts with Process Mining Café, which is an extended Q&A with the previous day’s presenter based on the conversations in the related Slack workspace (which you do need to register to join), then a break before moving on to that day’s presentation. The presentations are available on YouTube almost as soon as they are finished.”
  • “The really great part was engaging in the Slack discussion while the keynote was going on. A few people were asking questions (including me), and Mieke Jans posted a link to a post that she wrote on a procedure for cleansing event logs for multi-case processes – not the same as what van der Aalst was talking about, but a related topic. Anne Rozinat posted a link to more reading on these types of many-to-many situations in the context of their process mining product from their “Process Mining in Practice” online book. Not surprisingly, there was almost no discussion on the Twitter hashtag, since the attendees had a proper discussion platform; contrast this with some of the other conferences where attendees had to resort to Twitter to have a conversation about the content. After the keynote, van der Aalst even joined in the discussion and answered a few questions, plus added the link for the IEEE task force on process mining that promotes research, development, education and understanding of process mining: definitely of interest if you want to get plugged into more of the research in the field. As a special treat, Ferry Timp created visual notes for each day and posted them to the related Slack channel.”

Bizagi

  • “The broadcast platform fell over completely…I’m not sure if Bizagi should be happy that they had so many attendees that they broke the platform, or furious with the platform vendor for offering something that they couldn’t deliver. The “all-singing, all-dancing” platforms look nice when you see the demo, but they may not be scalable enough.”

Final thoughts

Just to wrap things up, it’s fair to say that things aren’t going to go back to the way that they were any time soon. Part of this is due to organizations understanding that things can be done remotely just as effectively (or nearly so) as they can in person, if done right. Also, a lot of people are still reluctant to even think about travelling and spending days in poorly-ventilated rooms with a bunch of strangers from all over the world.

The vendors who ran really good virtual conferences in 2020 are almost certain to continue to run at least some of their events virtually in the future, or find a way to have both in-person and remote attendees simultaneously. If you run a virtual conference that doesn’t get the attendee engagement that you expected, the problem may not be that “virtual conferences don’t work”: it could be that you just aren’t doing it right.

CamundaCon 2020.2 Day 2: roadmap, business-driven DMN and ethical algorithms

I split off the first part of CamundaCon day 2 since it was getting a bit long: I had a briefing with Daniel Meyer earlier in the week on the new RPA integration, and had a lot of thoughts on that already. I rejoined for Camunda VP of Product Management Rick Weinberg’s roadmap presentation, which covered what’s coming in 2021. If you’re a Camunda customer, or thinking about becoming one, you should check out the replay of his session if you missed it. Expect to see updates to decision automation, developer experience, process monitoring and interoperability.

I tuned in to the business architecture track for a presentation by David Ibl, Enterprise Architect at LV 1871 (a German insurance company), on how they enabled their business specialists to perform decision model simulation and test case definition using their own DMN Manager, based on the Camunda modeler toolkit. Their business people were already using BPMN for modeling processes, but were modeling business decisions as part of the process, and needed to externalize the rules from the processes in order to simplify the processes. This was initially done by moving the decisions to code, then calling that from within the process, but that made the decisions much less transparent to the business. Now, the business specialists model both BPMN and DMN in Signavio, which are then committed to git; these models are then pulled from git both for deployment and for testing and simulation directly by the business people. You can read a much better description of it written by David a few months ago. A good example (and demo) of how business people can model, test and simulate their own decisions as well as processes. And, since they’re committed to open source, you can find the code for it on GitHub.
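Their DMN Manager itself wasn’t shown in code, but for readers who want to experiment with this kind of business-driven decision testing, Camunda’s embeddable DMN engine can evaluate a decision table outside the process engine. A minimal sketch, where the decision key, file name and inputs are all hypothetical:

```java
import java.io.InputStream;
import java.util.Map;

import org.camunda.bpm.dmn.engine.DmnDecision;
import org.camunda.bpm.dmn.engine.DmnDecisionTableResult;
import org.camunda.bpm.dmn.engine.DmnEngine;
import org.camunda.bpm.dmn.engine.DmnEngineConfiguration;

public class DmnSmokeTest {

    public static void main(String[] args) {
        DmnEngine dmnEngine = DmnEngineConfiguration
            .createDefaultDmnEngineConfiguration()
            .buildEngine();

        // Parse a decision pulled from git (here, just a classpath resource).
        InputStream model = DmnSmokeTest.class.getResourceAsStream("/tariff.dmn");
        DmnDecision decision = dmnEngine.parseDecision("tariffDecision", model);

        // Evaluate one test case's inputs and inspect the output entry.
        DmnDecisionTableResult result = dmnEngine.evaluateDecisionTable(
            decision, Map.<String, Object>of("age", 35, "smoker", false));
        Object outcome = result.getSingleResult().getSingleEntry();
        System.out.println(outcome);
    }
}
```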

I also attended a session by Omid Tansaz of Nexxbiz, a Camunda consulting services partner, on their insurance process monitoring capability that allows systems across the entire end-to-end chain of insurance processes to be monitored in a consolidated fashion. This includes broker systems and front- and back-office systems within the insurer, as well as microservices. They were already using Camunda’s BPM engine, and started using Optimize for process visualization since Optimize 3.0 can include external event sources (from all of the other systems in the end-to-end process) as well as the Camunda BPM processes. This is one of the first case studies of the external event capability in Optimize, since that was only released in April, and it shows the potential for having a consolidated view across multiple systems: not just visibility, but compliance auditing, bottleneck analysis, and real-time issue prevention.
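For a sense of how those external events get into Optimize: version 3.0 added an event ingestion REST API that accepts CloudEvents-formatted batches. The sketch below is my own reconstruction from the Optimize documentation, not from the Nexxbiz implementation; treat the endpoint path, content type and token variable as assumptions to verify against your version, and all field values are invented:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OptimizeEventIngestion {

    public static void main(String[] args) throws Exception {
        // One external event in CloudEvents batch format; traceid is what
        // Optimize uses to correlate events into a single process instance.
        String batch = """
            [{
              "specversion": "1.0",
              "id": "evt-0001",
              "source": "broker-portal",
              "type": "claimReceived",
              "time": "2020-10-09T12:00:00.000Z",
              "traceid": "claim-4711",
              "group": "insurance",
              "data": { "channel": "broker" }
            }]""";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://optimize.example.com/api/ingestion/event/batch"))
            .header("Content-Type", "application/cloudevents-batch+json")
            .header("Authorization", "Bearer " + System.getenv("OPTIMIZE_INGESTION_TOKEN"))
            .POST(HttpRequest.BodyPublishers.ofString(batch))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());  // expect a 2xx on success
    }
}
```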

The conference closed with a keynote by Michael Kearns from the University of Pennsylvania on the science of socially-aware algorithm design. Ethical algorithms (the topic of his recent book written with Aaron Roth) are not just an abstract concept, but impact businesses from risk mitigation through to implementation patterns. There are many cases where algorithmic decision-making shows definite biases, and instead of punting to legal and regulatory controls, their research looks at technical solutions to the problem in the form of better algorithms. This is a non-trivial issue, since algorithms often have outcomes that are difficult to predict, especially when machine learning is involved. This is exactly why software testing is often so bad (just to inject my own opinion): developers can’t or don’t consider the entire envelope of possible outcomes, and often just test the “happy path” and a few variants.

Kearns’ research proposes embedding social values in algorithms: privacy, fairness, accountability, interpretability and morality. This requires a definition of what these social values mean in precise mathematical terms. There’s already been some amount of work on privacy by design, spearheaded by the former Ontario Information and Privacy Commissioner Ann Cavoukian, since privacy is one of the better-understood algorithmic concepts.

Kearns walked us through issues around algorithmic privacy, including the idea that “anonymized” data often isn’t actually anonymized, since the techniques used for this assume that there is only a single source of data. For example, redacting data within a data set can make it anonymous if that’s the only data set that you have; as soon as other data sets exist that contain one or more of the same unredacted data values, you can start to correlate the data sets and de-anonymize the data. In short, anonymization doesn’t work, in general.

He then looked at “differential privacy”, which compares the results of an algorithm with and without a specific person’s data: if an observer can’t discern between the outcomes, then the algorithm is preserving the privacy of that person’s data. Differential privacy can be implemented by adding a small amount of random noise to each data point, which makes it impossible to figure out the contribution of any specific data point, and the noise contributions will cancel out of the results when a large number of data points are analyzed. Problems can occur, however, with data points that have very small values, which may be swamped by the size of the noise.
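To make the noise-adding idea concrete, here is a from-scratch toy of the Laplace mechanism, the standard way to implement this. This is my own illustration, not code from Kearns’ talk, and the epsilon values and printed outputs are purely illustrative:

```java
import java.util.Random;

public final class LaplaceMechanism {

    private static final Random RNG = new Random();

    // Inverse-CDF sampling of a Laplace(0, scale) random variable.
    static double laplaceNoise(double scale) {
        double u = RNG.nextDouble() - 0.5;  // uniform on [-0.5, 0.5)
        return -scale * Math.signum(u) * Math.log(1 - 2 * Math.abs(u));
    }

    // Privatize a count query. A count has sensitivity 1: adding or
    // removing one person's data changes the true result by at most 1.
    public static double privateCount(long trueCount, double epsilon) {
        return trueCount + laplaceNoise(1.0 / epsilon);
    }

    public static void main(String[] args) {
        // Smaller epsilon = stronger privacy guarantee = more noise.
        System.out.println(privateCount(1000, 0.1));  // e.g., roughly 1000 +/- tens
        System.out.println(privateCount(1000, 1.0));  // e.g., roughly 1000 +/- a few
    }
}
```

The caveat from the previous paragraph shows up directly here: for a query whose true value is small, noise with scale 1/epsilon can dominate the answer.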

He moved on to look at algorithmic fairness, which is trickier: there’s no agreed-upon definition of fairness, and we’re only just beginning to understand tradeoffs, e.g., between race and gender fairness, or between fairness and accuracy. He had a great example of college admissions based on SAT and GPA scores, with two different data sets: one for more financially-advantaged students, and the other for students from modest financial situations. The important thing to note is that the family financial background of a student has a strong correlation with race, and in the US, as in other countries, using race as an explicit differentiator is not allowed in many decisions due to “fairness”. However, it’s not really fair if there are inherent advantages to being in one data set over the other, since those data points are artificially elevated.

There was a question at the end about the role of open source in these algorithms: Kearns mentioned OpenDP, an open source toolset for implementing differential privacy, and AI Fairness 360, an open source toolkit for finding and mitigating discrimination and bias in machine learning models. He also discussed some techniques for determining if your algorithms adhere to both privacy and fairness requirements, and the importance of auditing algorithmic results on an ongoing basis.