150 episodes of the Process Pioneers podcast

The Process Pioneers podcast recently published their 150th episode, which is a significant milestone considering that most podcasts wither into inactivity pretty quickly. It’s hosted by Daniel Rayner, managing director of APAC for GBTEC, so I suppose that technically they sponsor it, but it’s really just a free-ranging discussion that doesn’t talk about their products (or at least didn’t when I was on it).

You can see my interview with Daniel on the podcast from last year; it was a lot of fun, and we delved into some interesting points.

21st International BPM Conference in Utrecht – opportunities to attend, present or sponsor

Way back in 2008, I took a chance and attended the international academic research conference on business process management in Milan. I was hooked, as you might have gathered from the 1000s of words that I blogged that week. Since then, I’ve attended a few more: Ulm, Germany in 2009; Hoboken, US in 2010; Clermont-Ferrand, France in 2011 (where I had the honour of delivering a keynote); Tallinn, Estonia in 2012; and Vienna, Austria in 2019 (where I gave a talk at a workshop). They are always hosted by a university that has a BPM research program, and often the sessions are held in the university lecture rooms which gives it a more relaxed atmosphere than your usual industry conference.

I’m fascinated by the research topics, and one common theme of my blogging from these conferences is that software vendors need to send their product owners/developers here, both to hear about and present ideas on research in BPM and related fields. There are so many good ideas and smart people that you can’t help but come away having learned something of value. In 2010, the conference added an industry track to be more inclusive of people who were not in academia or research environments. At some point, they also started offering companies the opportunity to sponsor the conference: I believe that some vendors sponsored coffee breaks and meals, or had booths at parts of the event. It’s a good way to raise their profile with the attendees, who include not only academics but a lot of people from industry as well. And, as I’ve pointed out, it’s a great place for companies to meet promising young researchers who might be looking for a job in the future.

This year, the conference is in Utrecht, The Netherlands, on September 11-15, 2023. I’m hoping to attend after a three-year hiatus due to that pesky little virus; I did attend some of the sessions virtually in the past couple of years, but it’s just not the same as being there. If you want to submit a paper or give a presentation, you can see the important dates here – note that the abstracts for research papers are due next week, with other deadlines coming up shortly. If you just want to attend, they have an early bird registration price until July 18. If your company wants to sponsor the event in any way, there’s some information here along with contact information.

I’m really looking forward to getting back to this, and to other conferences this year, after dipping my toe back in the in-person conference pool with speaking slots last September (Hyland CommunityLIVE) and October (CamundaCon). I’ll also still be participating in virtual conferences, which allows me to attend more than I would normally have time or budget for, including speaking on a Voices in Tech panel next week. There is no question that the way we attend conferences has changed in the past three years. Some conferences are staying completely virtual, some are making a hard shift back to in-person only, while some are going the hybrid route. Meanwhile, companies that slashed their conference budget for attendees and sponsorships are reconsidering their spending in the light of increased attendance at in-person conferences. It’s going to take another year or two to see whether people will flock back to in-person conferences, or prefer to stick with the virtual style.

Voices in Tech panel

Edit with correction to above graphic: the panel is at 10am Eastern which is 3pm Central European time because we’re in that hellish period where North American clocks have moved forward but European ones haven’t.

I posted earlier this week on Mastodon that I’ve been taking a bit of a break but now getting back to things, and one of the events on my upcoming agenda is presenting on a panel, Voices in Tech: Building Effective Automation Teams, hosted by Camunda and also sponsored by Infosys. This will take place online on March 15th, but you can head over to the link now and sign up. I will have the pleasure of reconnecting with co-panelists Uzma Khan of the Ontario Teachers’ Pension Plan, who I have known for many years, and Smriti Gupta of Infosys, who I shared the stage with at CamundaCon in Berlin last October. We will be joined by Ola Inozemtceva, a senior product marketing manager at Camunda, and the moderator will be Lana Ginns, product marketing manager at Camunda.

This is the 2023 version of the International Women’s Day panel that Camunda has been organizing for a few years now, and I really like that the focus is not on the fact that all of the panelists are women, but that we are “brilliant trailblazers in the tech world, who inspire people every day to redefine technology and how it can transform the world”. We’ll be discussing challenges and best practices with building high-performing orchestration teams, which ties in nicely with the series of video blogs that I’ve been doing lately for Trisotech on best practices in business automation.

I hope to see you there (virtually).

More on Mastodon

I’m not very active on Mastodon yet, and still use Twitter as a broadcast platform (rather than for discussion, which is how it started), but I’ve moved over to mastodon.social – you can find me at mastodon.social/@skemsley.

My plan is to start doing a bit of engagement to see how that works out; as always, my main content will be here on my blog, with pointers to new posts on social media.

Playing with AI filters on photos, too. 🙂

Flowable’s FlowFest 2022

Flowable is holding their half-day online FlowFest today, and in spite of the eye-wateringly early start here in North America, I’ve tuned in to watch some of the sessions. All of these will be available on demand after the conference, so you can watch the sessions even if you miss them live.

There are three tracks — technical, architecture and business — and I started the day in the tech stream watching co-founder Tijs Rademakers’ presentation on what’s new in Flowable. He spent quite a bit of the hour on a technical backgrounder, but did cover some of the new features: deprecation of Angular, a new React Flow replacing the Oryx modelers, a new form editor, improved deployment options and cloud support, a managed service solution, and a quick-start migration path that includes an automatic migration of a Camunda 7 process instance database to Flowable (for those companies that don’t want to make the jump to Camunda 8 and are concerned about the long-term future of V7).

For the second session, I switched over to the architect stream for Roman Saratz’ presentation on low-code integration with data objects. He showed some cool stuff where changes to the data in an external data object would update a case, in this example tied to a Microsoft Dynamics instance. The presentation was relatively short and there was an extended Q&A; obviously a lot of people are interested in this form of integration. At the end, I checked in on the business track and realized that the sessions there were not time-aligned with the two technical tracks: they were already well into the Bosch session that was third on the agenda – not sure why the organizers thought that people couldn’t be interested in technology AND business.

In the third session, I went back to the tech stream and attended Joram Barrez’ presentation on scripting. Like a few others on the Flowable team, Joram came from Alfresco’s Activiti core development team (and jBPM before that), and is now Principal Software Architect. He looked at the historical difference between programs and scripts, which is that programs are compiled and scripts are interpreted, and the current place of pre-compiled [Java] delegates in service tasks versus script tasks that are interpreted at runtime. In short, the creation, compilation and deployment of Java delegates are definitely the responsibility of technical developers, while scripts can be created and maintained by less-technical low code developers. Flowable now allows for the creation of a “service registry” task that is actually a JavaScript or Groovy script rather than a REST call, which allows scripts to be reusable across models as if they were external service tasks, rather than being embedded within one specific process or case model. There are, of course, tradeoffs. Pre-compiled delegates typically have higher performance, and provide more of a structured development experience, such as unit testing and backwards-compatible API agreements; scripts open up more development capability to the model developer who may not be Java-savvy. Flowable has created some API constructs that make scripts more capable and less brittle, including REST service request/response processing and BPMN error handling. It appears that they are shifting the threshold for what’s being done by a low code developer directly in their modeling environment, versus what requires a more technical Java developer, an external IDE and a more complex deployment path: making scripts first-class citizens in Flowable applications. In fact, Joram talked about ideas (not yet in the product) such as having a more robust scripting IDE embedded directly in their product.
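To make the reuse point concrete, here’s a tiny sketch in plain Java of how a shared script registry could work (the names and shape are my own invention for illustration, not Flowable’s actual API): scripts are stored once under a name, and any number of models resolve them by reference rather than embedding the source.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a registry of named scripts that multiple
// process/case models can reference, instead of embedding script source
// inside one specific model. Not Flowable's API.
class ScriptRegistry {
    private final Map<String, String> scripts = new HashMap<>(); // name -> script source

    void register(String name, String source) {
        scripts.put(name, source);
    }

    // Any model that references the name gets the same shared script.
    String resolve(String name) {
        String source = scripts.get(name);
        if (source == null) {
            throw new IllegalArgumentException("unknown script: " + name);
        }
        return source;
    }
}
```

The appeal of the registry-style approach is that fixing or improving a script in one place updates every model that references it, much like an external service task.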
I am reminded of companies like Trisotech that are using FEEL as their scripting language in BPMN-CMMN-DMN applications, on the assumption that if you’re already using FEEL in DMN then using it throughout your application is a good idea; I asked if Flowable is considering this, and Joram said that it’s not currently supported but it would not be that difficult to add if there was demand for it.

To wrap up the conference, I attended Paul Holmes-Higgin’s architecture talk on Flowable future plans. Paul is co-founder of Flowable and Chief Product Officer. He started with a discussion of what they’re doing in Flowable Design, which is the modeling and design environment. Tijs spoke about some of this earlier, but Paul dug into more detail of what they’ve done in the completely rebuilt Design tool that will be released in early 2023. Both the technical underpinnings and the visuals have changed, to update to newer technology and to support a broader range of developer types from pro code to low code. He also spoke about longer term (2-3 year) innovation plans, starting with a statement of the reality that end-to-end processes don’t all happen within a centralized monolithic orchestrated system. Instead, they are made up of what he refers to as “process chains”, which is more of a choreography of different systems, services and organizations. He used a great example of a vehicle insurance claim that uses multiple technology platforms and crosses several organizational boundaries: Flowable Work may only handle a portion of those, with micro-engines on mobile devices and serverless cloud services for some capabilities. They’re working on Flowable Jet, a pared-down BPMN-CMMN-DMN micro-engine for edge automation that will run natively on mobile, desktop or cloud. This would change the previous insurance use case to put Flowable Jet on the mobile and cloud platforms to integrate directly with Flowable Work inside organizations. With the new desktop RPA capabilities in Windows 11, Flowable Jet could also integrate with that as a bridge to Flowable Work. This is pretty significant, since currently end-to-end automation has a lot of variability around the edges; allowing for their own tooling in the edge as well as central automation could provide better visibility and security throughout.

Tijs, Joram and Paul are all open source advocates in spite of Flowable’s now more prominent commercial side; I’m hoping to see them shift some of their online conversations over to Fosstodon (or some other Mastodon instance), where I have started posting.

That’s it for FlowFest: a good set of informational sessions, and some that I missed due to multiple concurrent tracks that I’ll go back and watch later.

Will the elephant replace the bird?

tl;dr: I’m on Mastodon at fosstodon.org/@skemsley

I’ve been on Twitter since March 2007 and have amassed over 7,500 followers (probably half of them bots, but whatever). There’s a current push to move off Twitter and onto Mastodon, an open source microblogging social network, because of the declining standards of content and new ownership over at the bird site. Can we successfully make the shift from tweeting to tooting?

In the old days of Twitter, which don’t seem that long ago, I used to engage in a lot of conversations: my timeline was mostly tweets from my friends and business colleagues, and we would banter back and forth. These days, however, I use Twitter mostly as a broadcast platform, where I post links to new blog posts, videos and other publications. I respond if someone mentions me directly, but it’s no longer the place that I go to start a conversation. It’s just too noisy, full of promoted tweets, and retweets about topics that I don’t care about by people who I barely know. To be fair, some of that is my fault: I tended to follow most people who followed me and had some sort of similar interests, and it’s a lot of work to go back and pare down that list of 2,000 who I follow to a more reasonable number. When lists came out, I started putting people on lists rather than following them directly, but it was probably already too late. Same, by the way, for LinkedIn: I was indiscriminate about who I added to my network, and it’s just too noisy over there for a real conversation.

Enter the elephant. Mastodon is an open source, decentralized social platform that has functionality quite similar to Twitter: posts are “toots” instead of tweets; you can like, share and reply to posts, and can see a running feed of posts. The big difference is that Mastodon isn’t one company, or one instance: anyone can create a Mastodon instance, either privately for use within a smaller group, or included in a group of federated servers that share posts and (to some extent) account information. When you sign up, you need to choose the server where you want your account, although you can follow accounts from other servers. If you want to change servers at some point in the future, you can; however, it doesn’t appear that you can move your posts to the new server (although you will move your following/follower lists), so there is less incentive to do this once you start posting a lot.

I looked at the available servers that follow the Mastodon Server Covenant and are part of the fediverse (the group of federated servers), and picked fosstodon.org, which is a technology-focused server that includes a lot of (but is not exclusive to) free and open source software. I’m not exclusive to open source, but I do cover a number of process automation vendors with open source offerings and this felt like a good fit. You can find and follow me there at fosstodon.org/@skemsley. Will I be better at curating my follows on this platform, which I so miserably failed at on Twitter and LinkedIn? I have way less FOMO these days, so maybe.

I’m already starting to have some conversations over there, but finding it difficult to find who from my current circle of friends and colleagues is on Mastodon, and on which server — searching by name really only gives you who is on your server unless someone else on your server mentions them. I also have a lot to learn about curating my feed, since the defaults are Home (my posts, plus posts and boosts from the people I follow), Local (all posts from everyone on my server) and Federated (holy crap, everything in the fediverse). I’ve discovered an unofficial but quite good source of helpful info at fedi.tips and will be reviewing more of that.

On a side note, Twitter tends to be a good platform for contacting customer service for some organizations, so I’m not going to abandon it outright, and I’ll still use it for broadcasts. Just covering my bases.

State of Process Orchestration panel replay

I was at CamundaCon in Berlin last month, and was on a panel about the state of process orchestration. Check it out.

Lots of interesting discussion, and it was fun to hear other perspectives from a large SI (Infosys) and a customer (PershingX) on the panel with me. Thanks to Camunda for the invitation, and my first European trip in almost three years!

CamundaCon 2022 – that’s a wrap!

It’s been a quick two days at CamundaCon 2022 in Berlin, and as always I’ve enjoyed my time here. The second day finished with a quick fireside chat with Camunda co-founders Jakob Freund and Bernd Ruecker, who wrapped up some of the conference themes about process orchestration. I’ll post a link to the videos when they’re all available; not sure if Camunda is going to publish them on their YouTube channel or put them behind a registration page.

I mentioned previously what a great example of a hybrid conference this has been, with both speakers and attendees either on-site or remote — my own panel included three of us on the stage and one person remotely, and it worked seamlessly. One part of this that I liked is that in the large break lounge area, there were screens set up with the video feed from each of the four stages, and wireless headsets that you could tune to any of the four channels. This let you be “remote” even when on site, which was necessary in some of the smaller rooms where it was standing room only. Or if you wanted to have a coffee while you were watching.

Thanks to Camunda for inviting me, and the exciting news is that next September’s CamundaCon will be in New York: a much shorter trip for me, as well as for many of Camunda’s North American customers and partners.

CamundaCon Day 2: Business Process Optimization at Scale

Michael Goldverg from BNY Mellon presented on their journey with automating processes within the bank across thousands of people in multiple business departments. They needed to deal with interdependencies between departments, variations due to account/customer types, SLAs at the departmental and individual level, and thousands of daily process instances.

They use the approach of a single base model with thousands of variations – the “super model” – where the base model appears to include smaller ad hoc models (mostly snippets surrounding a single task that were initially all manual operations) that are assembled dynamically for any specific type of process. Sort of an accidental case management model at first glance, although I’d love to get a closer look at their model. There was a question about the number of elements in their model, which Michael estimated as “three dozen or so” tasks and a similar number of gateways, but he can’t share the actual model for confidentiality reasons.

They have a deployment architecture that allows for multiple clusters accessing a single operational database, where each cluster could have a unique version of the process model. Applications could then be configured to select the server cluster – and therefore the model version – at runtime, allowing for multiple models to be tested in a live environment. There’s also an automated process instance migration service that moves the live process instances if the old and new process models are not compatible. Their model changes constantly, and they update the production model at least once per week.
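As a sketch of that runtime selection idea (entirely my own illustration; the cluster names, version map and fallback rule are invented, not BNY Mellon’s code), an application could pick its server cluster, and therefore its model version, from configuration like this:

```java
import java.util.Map;

// Illustrative sketch: route an application to the server cluster that runs
// the requested process model version; fall back to the newest version if
// the requested one isn't deployed anywhere.
class ClusterRouter {
    private final Map<String, Integer> clusterVersions; // cluster name -> deployed model version

    ClusterRouter(Map<String, Integer> clusterVersions) {
        this.clusterVersions = clusterVersions;
    }

    String clusterFor(int requestedVersion) {
        return clusterVersions.entrySet().stream()
                .filter(e -> e.getValue() == requestedVersion)
                .map(Map.Entry::getKey)
                .findFirst()
                // fallback: the cluster with the highest deployed version
                .orElseGet(() -> clusterVersions.entrySet().stream()
                        .max(Map.Entry.comparingByValue())
                        .map(Map.Entry::getKey)
                        .orElseThrow());
    }
}
```

This is what makes testing multiple model versions in a live environment possible: point some applications at the cluster with the candidate model, and invoke the instance migration service only when the old and new models are incompatible.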

They’ve had to deal with optimistic locking exceptions (fairly common when you have a lot of parallel gateways and multiple instances of the engine) by introducing their own external locking mechanism, and by offloading some of this to the Camunda JobExecutor using asynchronous continuations, although that can cause a performance hit. The hope is that this will be resolved when they move to the V8 engine – V8 doesn’t rely on a single relational database and is also highly distributed by design.
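If you haven’t run into optimistic locking before, here’s a minimal sketch of the pattern behind those exceptions (my own generic illustration, not Camunda’s implementation): each writer notes the record’s version when it reads, and the commit succeeds only if no other writer has bumped the version in the meantime; on failure, the writer re-reads and retries.

```java
import java.util.function.UnaryOperator;

// Minimal optimistic-locking sketch (generic illustration, not Camunda's code):
// a versioned record that rejects writes made against a stale version number.
class VersionedRecord {
    private int version = 0;
    private String data = "";

    synchronized int version() { return version; }
    synchronized String data() { return data; }

    // Commit only if no other writer has committed since we read expectedVersion.
    synchronized boolean tryWrite(int expectedVersion, String newData) {
        if (version != expectedVersion) {
            return false; // the "optimistic locking exception" case
        }
        data = newData;
        version++;
        return true;
    }

    // Retry loop: re-read and re-apply the change until the commit succeeds.
    void update(UnaryOperator<String> change) {
        while (true) {
            int v = version();
            String updated = change.apply(data());
            if (tryWrite(v, updated)) {
                return; // success; otherwise a concurrent writer won, so retry
            }
        }
    }
}
```

With many parallel gateways touching the same process instance row, those retries pile up, which is why an external locking mechanism or serializing the work through the job executor can help.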

They run 50-100k transactions per day, and have hundreds of millions of tasks in the history database. They manage this with aggressive cleaning of the history database – currently set to 60 days – by archiving the task history as PDFs in their content management system where it’s discoverable. They are also very careful about the types of queries that they allow directly on the Camunda database, since a single poorly-constructed search can bring the database to its knees: this is why Camunda, like other vendors, discourages the direct querying of their database.
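The retention sweep is conceptually simple; here’s a minimal sketch (my own illustration with an invented TaskRecord type, not their archival code) that partitions task history into records past the 60-day cutoff, which would be archived and then deleted, versus records to keep:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative retention sweep: split task history at a 60-day cutoff.
// TaskRecord is an invented stand-in for a history table row.
class HistoryCleaner {
    static final Duration RETENTION = Duration.ofDays(60);

    record TaskRecord(String id, Instant finished) {}

    // true -> archive then delete; false -> keep in the history database
    static Map<Boolean, List<TaskRecord>> partition(List<TaskRecord> history, Instant now) {
        Instant cutoff = now.minus(RETENTION);
        return history.stream()
                .collect(Collectors.partitioningBy(t -> t.finished().isBefore(cutoff)));
    }
}
```

Pushing the archived copies into the content management system keeps them discoverable without letting ad hoc queries loose on the live engine database.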

There are a lot of tradeoffs to be understood when it comes to performance optimization at this scale. He also made some good points about starting your deployment with a more complex configuration, e.g., two servers even if one is adequate, so that you’re not working out the details of how to run the more complex configuration when you’re also trying to scale up quickly. Lots of details in Michael’s presentation that I’m not able to capture here; definitely check out the recorded video later if you have large deployment requirements.