The (old) new software industry

Facebook is a hot mess most of the time, but I usually enjoy the “memories” that remind me what I posted on this date in past years. A couple of days ago, on April 30, I was reminded that in 2007 I attended the New Software Industry conference at the Microsoft campus in Mountain View. Those were the days when SaaS and other cloud platforms were emerging as significant packaging concepts, and companies were rethinking their delivery models as well as their split between products and services.

In reviewing those old posts, I found a lot of points that are still valid today, on topics ranging from development practices to software company organization to venture capital. The discussion about the spectrum of software development practices was especially on point: there are some things that lend themselves to a heavily-specified waterfall-like model (e.g., infrastructure development), while others benefit from an agile approach (e.g., most applications). I also liked this bit that I picked up from one of the sessions about software industry qualifications:

In World of Warcraft you can tell if someone has a Master’s in Dragon Slaying, and how good they are at it, whereas the software industry in general, and the open source community in particular, has no equivalent (but should).

I finished my coverage by pointing out that this was a very Valley-centric view of the software industry, and that new software industry conferences in the future would need to be much more inclusive of the global software industry.

I was already live-blogging at conferences by this point in time, and you can read all my posts for the conference here.

Process and content: the chocolate and peanut butter of business

I’m definitely a process person, but my start in the business was through document-driven imaging and workflow systems. It’s important to keep in mind, no matter where you fall on the spectrum between process and content, that the two are often intertwined: unstructured content may be a driver for a process, or be the product of a process. Processes sometimes exist only to manage content, and sometimes content exists only as supporting documentation for a process. A few years ago, I wrote about several of the process/content use cases that I see in practice for the Alfresco blog.

One thing I didn’t cover at that time is the use of processes (and rules) to govern access to content: although a good content management system will let the right person see the right content, not all unstructured content is stored in a content management system, much less a good one. Even if content is in a content management system, it may not be appropriate to let everyone root around in there to find whatever documents they might want to see. Access to content is often contextual: when someone is acting in a certain role and performing a certain task, they should see specific content, while in another context, they might see different content. This is even more important when you open up your processes and content to external participants, including customers and business partners.
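To make that contextual-access idea concrete, here’s a minimal sketch (plain Java, with entirely hypothetical names rather than any particular product’s API) of content visibility being derived from the combination of a participant’s role and their current task:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// A toy model of contextual content access: visibility is a function of the
// participant's role AND the task they are performing, not a blanket
// permission on the repository. All names here are hypothetical.
public class ContextualContentAccess {

    // For each (role, task) pair, the content categories that may be shown
    private static final Map<String, Set<String>> VISIBILITY = Map.of(
            "underwriter:assess-risk", Set.of("application", "medical-report"),
            "adjuster:settle-claim",   Set.of("application", "claim-photos"),
            "customer:review-offer",   Set.of("offer-letter"));

    // documents maps a document ID to its content category
    public static List<String> visibleContent(String role, String task,
                                              Map<String, String> documents) {
        Set<String> allowed = VISIBILITY.getOrDefault(role + ":" + task, Set.of());
        return documents.entrySet().stream()
                .filter(e -> allowed.contains(e.getValue()))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, String> docs = Map.of(
                "doc-1", "application",
                "doc-2", "medical-report",
                "doc-3", "claim-photos");
        // An underwriter assessing risk sees doc-1 and doc-2, but not doc-3
        System.out.println(visibleContent("underwriter", "assess-risk", docs));
    }
}
```

The same person in a different context (say, "customer" reviewing an offer) would see a completely different set of documents, which is the essence of letting the process, rather than a repository-wide permission scheme, govern access.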

I’ve had the chance to talk about some of these ideas in more detail in a couple of places. First, my most recent guest post on the Trisotech blog is called In financial services, process rules content, and looks at how this can work in financial applications such as insurance underwriting:

There are a lot of laggards [which have] a somewhat disorganized collection of content related to and created by processes, stored on multiple internal systems, with little or no internal access control, and no external access. In fact, I would say that in every insurance and financial operation that I’ve visited as a consultant, I’ve seen some variation of this lack of content governance, and the very real impacts on operational performance as well as privacy concerns. This is definitely a situation where process can come to the rescue for getting control over access to unstructured content.

Second, I’m presenting a webinar and writing a short paper for ASG Technologies on content governance in customer-facing processes. The webinar will be on May 5th, with the white paper available as a follow-on shortly after that. You can register here to attend. Hope to see you there!

Camunda Platform 7.15: now low-code (-ish)

I had a quick briefing with Daniel Meyer, CTO of Camunda, about today’s release. With this new version 7.15, they are rebranding from Camunda BPM to Camunda Platform (although most customers just refer to the product as “Camunda” since they really bundle everything in one package). This follows the lead of other vendors who have distanced themselves from the BPM (business process management) moniker, in part because what the platforms do is more than just process management, and in part because BPM is starting to be considered an outdated term. We’ve seen the analysts struggle with naming the space, or even defining it in the same way, with terms like “digital process automation”, “hyperautomation” and “digitalization” being bandied about.

An interesting pivot for Camunda in this release is their new support for low-code developers — which they distinguish as having a more technical background than citizen developers — after years of primarily serving the needs of professional technical (“pro-code”) developers. The environment for pro-code developers won’t change, but a number of new features now make closer collaboration between low-code and pro-code developers possible within the platform:

  • Create a catalog of reusable workers (integrations) and RPA bots that can be integrated into process models using templates. This allows pro-code developers to create the reusable components, while low-code developers consume those components by adding them to process models for execution. RPA integration is driving some amount of this need for collaboration, since low-code developers are usually the ones on the front end of RPA initiatives in terms of determining and training bot functionality, but previously may have had more difficulty integrating those bots into process orchestrations (see the worker sketch after this list). Camunda is extending their RPA Bridge to add Automation Anywhere integration to their existing UiPath integration, which gives them coverage of a significant portion of the RPA market. I covered a bit of their RPA Bridge architecture and their overall view on RPA in one of my posts from their October 2020 CamundaCon. I expect that we will soon see Blue Prism integration to round out the main commercial RPA products, and possibly an open source alternative to appeal to their community customers.
  • DMN support, including DRD and decision tables, in their Cawemo collaborative modeler. This is a good way to get the citizen developers and business analysts involved in modeling decisions as well as processes.
  • A form builder. Now, I’m pretty sure I’ve heard Jakob Freund claim that they would never do this, but there it is: a graphical form designer for creating a rudimentary UI without writing code. This is just a preliminary release, only supporting text input fields, so it isn’t going to win any UI design awards. However, it’s available in the open source and commercial versions as well as accessible as a library in bpmn.io, and will allow a low-code developer to do end-to-end development: create process and decision models, and create reusable “starter” UIs for attaching to start events and user activities. When this form builder gets a bit more robust in the next version, it may be a decent operational prototyping tool, and possibly even make it into production for some simple situations.
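For a rough idea of what the pro-code half of that catalog looks like, here’s a minimal sketch of a reusable worker using Camunda’s Java external task client; the topic name, URL and variables are my own illustrative choices, not anything from the release:

```java
import org.camunda.bpm.client.ExternalTaskClient;

// A sketch of a reusable worker: a pro-code developer publishes this against
// a topic, and a low-code developer adds the matching template to a process
// model. Topic name, URL and variables are illustrative.
public class CreditCheckWorker {
    public static void main(String[] args) {
        ExternalTaskClient client = ExternalTaskClient.create()
                .baseUrl("http://localhost:8080/engine-rest") // engine REST API
                .asyncResponseTimeout(10000)                  // long polling
                .build();

        client.subscribe("credit-check")     // topic referenced by the template
                .lockDuration(20000)         // ms that this worker holds the task
                .handler((task, taskService) -> {
                    String customerId = task.getVariable("customerId");
                    // ... call out to the actual scoring service here ...
                    taskService.complete(task);  // report back to the engine
                })
                .open();
    }
}
```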

They’ve also added some nice enhancements to Optimize, their monitoring and analytics tool, and have bundled it into the core commercial product. Optimize was first released in mid-2017 and is now used by about half of their customers. Basically, it pumps the operational data exhaust out of the BPM engine database and into an Elasticsearch environment; with the advent of Optimize 3.0 last year, it could also collect tracking events from other (non-Camunda) systems into the same environment, allowing end-to-end processes to be tracked across multiple systems. The new version of Optimize, now part of Camunda Platform 7.15, adds some new visualizations and filtering for problem identification and tracking.
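As a sketch of how a non-Camunda system might feed events into that environment, Optimize’s event ingestion is a REST endpoint accepting CloudEvents-style JSON; note that the endpoint path, port, token and field values below are my assumptions based on the Optimize 3.0 documentation, and may differ in your version:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Pushes a single tracking event from an external system into Optimize's
// event ingestion API. Endpoint, token and field values are illustrative.
public class OptimizeEventSender {
    public static void main(String[] args) throws Exception {
        String batch = """
            [{
              "specversion": "1.0",
              "id": "evt-0001",
              "source": "order-system",
              "type": "orderShipped",
              "traceid": "order-4711",
              "group": "shop",
              "time": "2021-04-12T10:00:00.000Z"
            }]""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8090/api/ingestion/event/batch"))
                .header("Content-Type", "application/cloudevents-batch+json")
                .header("Authorization", "Bearer <access-token>")
                .POST(HttpRequest.BodyPublishers.ofString(batch))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // 204 expected on success
    }
}
```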

Overall, there are some important things in this release, although it might appear to be just a collection of capabilities that many of the all-in-one low-code platforms have had all along. It’s not really in Camunda’s DNA to become a proprietary all-in-one application development platform like Appian or IBM BPM, or even to make low-code a primary target, since they have a robust customer base of technical developers. However, these new capabilities create an important bridge between low-code developers, who have a better understanding of the business needs, and pro-code developers, who have the technical chops to create robust systems. It also provides a base for Camunda customers who want to build their own low-code environment for internal application development: a reasonably common scenario in large companies that just can’t fit their development needs into a proprietary application development platform.

It’s My 16th Blogaversary

Sixteen years ago, with some trepidation, I hit the Publish button for the first time on Column 2, posting a review of the BPTrends 2005 BPM Suites Report. This was early in the social media days, and I wasn’t sure if anyone would be interested in anything that I had to write about. Since then, I’ve written 2,236 posts and 1,026,740 words. My readers from all over the world have contributed 2,095 comments. The readership stats are not completely accurate, since I’ve transferred platforms twice and they would have been reset at those points, although the last change was quite a number of years ago. Based on the current site stats, aside from the Home and About Me pages, the most popular post of all time is Policies, procedures, processes and rules from 2007. More readers are from the US than any other country, although India and Germany have respectable second and third place showings.

Social publishing platforms come and go, and I occasionally dabble in other places such as LinkedIn and Medium, but I believe that maintaining control over my content is important. I choose to make this open platform (self-hosted WordPress) my main platform, rather than a proprietary walled garden that may limit who sees what I write, or go out of business and take my content with them.

When I started writing this blog, I was doing similar technical strategy and architecture consulting work to what I do now. The exposure from writing here has leveraged me into a different and complementary business, and I now spend half my time as an independent industry analyst. That started with me asking for free press passes to vendor and industry conferences, since I was writing about the conferences; eventually, the value of what I was writing was recognized, and vendors started to invite me to attend (covering my travel expenses) and include me in analyst sessions and product briefings. Now, they hire me to help with internal strategy, as well as to write and present on thought-leadership and educational topics regarding our industry.

Writing this blog has expanded my business social circle enormously, and I count as friends (or at least friendly business colleagues) many people who I have met because I hit that Publish button. Without a doubt, it has been transformational for me, both in business and in life.

ProcessMaker’s ProcessCon 2021 – it’s a wrap!

Today, I attended (and spoke at) ProcessMaker’s first user conference, ProcessCon 2021. This was virtual, of course, with some pre-recorded presentations and some — like mine — live. There was a reasonable amount of live Q&A, keeping things interesting.

We started with CEO Brian Reale’s opening keynote, then I was up with a presentation on Process Automation for Business Survival; you can see my slides below, and I’ll add a link to the presentation replays when I get it.

I took a bit of a break, then tuned back in to see Alan Bollinger, ProcessMaker’s Director of Product & Engineering, talk about the next 12 months of their roadmap, starting with today’s upgrade from ProcessMaker 4.0 to v4.1. This adds more than 35 enhancements to the previous version, and cleans up a lot of open issues. I’m not a ProcessMaker aficionado, so I’m not sure what’s completely new versus enhanced, but there’s some cool new stuff with BPMN signals that allow for choreographed (rather than orchestrated) processes, where processes react to broadcast events instead of being invoked directly; new things with data object handling; and some new conversational forms capabilities for driving a chatbot-style interface.

Alan Bollinger presents new capabilities for user signals: trigger a process when a user is created

Beyond this newly-released version, they are adding user signals (BPMN signals that can be thrown with any changes to user profile data, e.g., to trigger a process when a new user is created), SCIM for cross-domain identity management, organization rules, parallel/serial multi-instance activities, and script APIs to allow direct REST access to any scripts without using BPMN.

There were some interesting customer case studies, and we finished the half-day conference with an open Q&A with a number of the speakers rejoining, including me. A lot of the questions were product-specific ones for Bollinger on the upcoming releases, but we had a good chat on the relative merits of choreography and orchestration, plus about conversational interfaces.

Keynote at ProcessMaker’s ProcessCon: Process Automation for Business Survival

This Thursday, ProcessMaker is having their first-ever user conference, and I’m giving a keynote! I’m going to look back at our year of living disruptively, and give my view of why process automation is no longer a luxury, but a necessity for businesses to survive and thrive.

The conference runs about 3.5 hours starting at 10am Eastern, and it will kick off with a keynote by founder and CEO Brian Reale on the future of hyperautomation and low-code before I take the virtual stage. The event is free and open to everyone, and you will likely find some of the talks valuable even if you’re not a ProcessMaker customer. Also, there’s a Slack workspace to hang out and have discussions before, during and after the event.

Opening up open source development with Bonitasoft 2021.1

I had a briefing with Bonitasoft CEO Miguel Valdes earlier this week to hear more about their 2021.1 release, announced today. Note that they have changed their version numbers to align with the year and release number relative to that year, something I’ve seen starting to happen with a number of vendors.

The most important part of this release, in my opinion, is their shift in what’s open source versus commercial. Like most open source vendors, Bonitasoft is built on an open source core engine and has a number of other open source capabilities, but creates proprietary commercial components that they license to paying customers in a commercial version of the system. In many cases, the purely open source version is great for technical developers who are using their own development tools, e.g., for data modeling or UI development, but it’s a non-starter for business developers since you can’t build and deploy a complete application using just the open source platform. Bonitasoft is shaking up this concept by taking everything to do with development — page/form/layout UI design, data object models, application descriptors, process diagrams, organization models, system connectors, and Git integration — and adding it to the open source version. In other words, instead of having one version of their Studio development environment for open source and another for commercial, everything is unified, and the open source version now includes everything necessary to develop and deploy process automation applications. Having a unified development environment also makes it easy to move a Bonitasoft application from the open source to the commercial version (and the reverse) without project redevelopment, allowing customers to start with one version and shift to the other as project requirements change.

This is a big deal for semi-technical business developers, who can now use Bonita Studio to create complete applications on the same underlying platform as technical developers. Bonitasoft has removed anything that requires coding from Studio, recognizing that business developers don’t want to write code, and technical developers don’t use the visual development environment anyway. [As a Java developer at one of my enterprise clients once said when presented with a visual development environment, “yeah, it looks nice, but I’ll never use it”.] That doesn’t mean that these two types of developers don’t collaborate, however: technical developers are critical for the development of connectors, for example, which allow an application to connect to an external system. Once connectors are created to link to, for example, a legacy system, the business developers can consume those connectors within their applications. Bonitasoft provides connector archetypes that allow technical developers to create their own connectors, but what is missing is a way to facilitate collaboration between business and technical developers for specifying the functionality of a connector. For example, allowing the business developer to create a placeholder in an application/process and specify the desired behavior, which would then be passed on to the technical developer.
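To give a sense of the technical developer’s half of that collaboration, here is a minimal sketch of a custom connector built on Bonita’s engine connector API; the class, input and output names are hypothetical:

```java
import org.bonitasoft.engine.connector.AbstractConnector;
import org.bonitasoft.engine.connector.ConnectorException;
import org.bonitasoft.engine.connector.ConnectorValidationException;

// Hypothetical connector to a legacy system. A technical developer builds and
// packages this; a business developer then wires it into an application by
// mapping process data to its input and output parameters.
public class LegacyLookupConnector extends AbstractConnector {

    static final String CUSTOMER_ID_INPUT = "customerId";
    static final String RESULT_OUTPUT = "customerRecord";

    @Override
    public void validateInputParameters() throws ConnectorValidationException {
        if (getInputParameter(CUSTOMER_ID_INPUT) == null) {
            throw new ConnectorValidationException("customerId is required");
        }
    }

    @Override
    protected void executeBusinessLogic() throws ConnectorException {
        String customerId = (String) getInputParameter(CUSTOMER_ID_INPUT);
        // ... call the legacy system here and map its response ...
        setOutputParameter(RESULT_OUTPUT, "record-for-" + customerId);
    }
}
```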

Miguel took me through a demo of Bonita Studio using only the open source version, showing their new starting point of MyProcessAutomation that includes sections for Organization, Business Data Model, Applications, Diagrams, and Pages/Forms/Layouts. There are also separate sections for Custom Widgets and Environments: the latter is for defining separate environments for development, testing, production, etc., and was previously only in the commercial edition. It’s a very unified look and feel, and seems to integrate well between the components: for example, the expression editor allows direct access to business object variables in a way that will be understandable to business developers, and they are looking at ways to simplify this further.

They are moving their UI away from Angular to Web Components, and are even using their own UI tools to create their new user and administrator portals. The previous Bonita Portal is now being replaced by two applications: one is a user portal that provides case and task list functionality, while the other is for administrators to monitor and repair any process instance problems. These two applications can be freely modified by customers to personalize and extend them within their own environment.

There are definitely things remaining in their commercial edition, with a focus on security (single sign-on), scalability (clustering, elasticity), monitoring, and automated continuous deployment. There are a few maintenance tools that are being moved from the open source to the commercial version, and maintenance releases (bug fixes between releases) will be limited to commercial customers. They also have a new subscription model that helps with self-managed elastic deployments (e.g., Amazon Cloud); provides update and migration services for on-premise customers; and includes platform audits for best practices, performance tuning and code review. Along with this are some new packaged professional services offerings: Fast Track for first implementation, on-premise to Bonita Cloud upgrade, and on-premise upgrades for customers not on the new subscription model.

The last thing that Miguel mentioned was an upcoming open source project that they are launching related to … wait, that might not have been for public consumption yet. Suffice to say that Bonitasoft is disrupting the open source process automation space to be more inclusive of non-technical developers, and we’ll be seeing more from them along these lines in the near future.

SAP acquiring Signavio into Business Process Intelligence unit

I first met Signavio CEO Gero Decker in 2008, when he was a researcher at the Hasso Plattner Institut and emailed me about promoting their BPMN poster — a push to have BPMN (then version 1.1) recognized as a standard for process modeling. I attended the academic BPM conference in Milan that year but Gero wasn’t able to attend, although his name was on a couple of that year’s modeling-related demo sessions and papers related to Oryx, an open source process modeling project. By the 2009 conference in Ulm we finally met face-to-face, and he told me about what he was working on: process modeling ideas that would eventually evolve into Signavio. By the 2010 BPM conference in Hoboken, he was showing me a Signavio demo, and we ended up running into each other at many other BPM events over the years, as well as having many online briefings as they released new products. The years of hard work that he and his team have put into Signavio have paid off this week with the announcement of Signavio’s impending acquisition by SAP (Signavio press release, SAP press release). There have been rumors floating around for a couple of days, and this morning I had the chance for a quick chat with Gero in advance of the official announcement.

The combination of business process intelligence from SAP and Signavio creates a leading end-to-end business process transformation suite to help our customers achieve the requirements needed to gain a competitive edge.

Luka Mucic, CFO of SAP

SAP is launching RISE with SAP today, with the Signavio acquisition a part of the announcement. RISE with SAP is billed as “business transformation as a service”, providing business process redesign (including Signavio), technical migration (which appears to be a push to get reluctant customers onto their current platform), and building an intelligent enterprise (which is mostly a cloud infrastructure message).

This is a full company acquisition, including all Signavio employees (numbering about 500). Gero and the only other co-founder still at Signavio, CTO Willi Tscheschner, will continue in their roles to drive forward the product vision and implementation, becoming part of SAP’s relatively new Business Process Intelligence unit, which reports directly to the executive board. Since that unit previously contained about 100 people, the Signavio acquisition will swell those ranks considerably, and Gero will co-lead the unit with the existing GM, Rouven Morato. A long-time SAP employee, Morato can no doubt help navigate the sometimes murky organizational waters that might otherwise trip up a newcomer. Morato was also a significant force in SAP’s own internal transformation through analytics and process intelligence, moving them from the dinosaur of old to a (relatively) more nimble and responsive company, and hence understands the importance of products like Signavio’s in transforming large organizations.

Existing Signavio customers probably won’t see much difference right now. Over time, capabilities from SAP will become integrated into the process intelligence suite, such as deeper integration to introspect and analyze SAP S/4 processes. Eventually product names and SKUs will change, but as long as Gero is involved, you can expect the same laser focus on linking customer experience and actions back to processes. The potential customer base for Signavio will broaden considerably, especially as they start to offer dashboards that collect information on processes that include, but are not limited to, the SAP suite. In the past, SAP has been very focused on providing “best practice” processes within their suite; however, if there’s anything that this past year of pandemic-driven disruption has taught us, it’s that those best practices aren’t always best for every organization, and that processes always include things outside of SAP. Having a broader view of end-to-end processes will help organizations in their digital transformations.

Obviously, this is going to have an impact on SAP’s current partnership with Celonis, since SAP Process Mining by Celonis would be directly in competition with Signavio’s Process Intelligence. Of course, Signavio also has a long history with SAP, but their partnership has not been as tightly branded as the Celonis arrangement. Until now. Celonis arguably has a stronger process mining product than Signavio, especially with their launch into task mining, and has a long history of working with SAP customers on their process improvement. There’s always room for partners that provide different functionality even if somewhat in competition with internal functionality, but Celonis will need to build a strong case for why an SAP customer should pick them over the Signavio-based, SAP-branded process intelligence offering.

Keep in mind that SAP hasn’t had a great track record of process products that aren’t part of their core suite: remember SAP NetWeaver BPM? Yeah, I didn’t think so. However, Signavio’s products are focused on modeling and analyzing processes, not automating them, so they might have a better chance of being positioned as discovering improvements to processes that are automated in the core suite, as well as giving SAP more visibility into how their customers’ businesses run outside of the SAP suite. There’s definitely great potential here, but also the risk of just becoming buried within SAP — time will tell.

Disclosure: Signavio has been a client of mine within the last year for creating a series of webinars. I was not compensated in any way for writing this post (or anything else on this blog, for that matter), and it represents my own opinions.

Virtual conference best practices: 2020 in review

Wow, it’s been over two months since my last post. I took a long break over the end of the year since there wasn’t a lot going on that inspired me to write, and we were in conference hiatus. Now that (virtual) conferences are ramping up again for 2021, I wanted to share some of the best practices that I gathered from attending — and in one case, organizing — virtual conferences over 2020. Having sent this information by email to multiple people who were organizing their own conferences, I decided to just put it here where everyone could enjoy it. Obviously, these are all conferences about intelligent automation platforms, but the best practices are applicable to any technical conference, and likely to many non-technical conferences.

In summary, I saw three key things that make a virtual conference work well:

  1. Live presentations, not pre-recorded. This is essential for the amount of energy in the presentation, and makes the difference between a cohesive conference and just a bunch of webinars. Screwups happen when you’re live, but they happen at in-person conferences, too.
  2. Separate and persistent discussion platform, such as Slack (or Pega’s community in the case of their conference). Do NOT use the broadcast vendor’s chat/discussion platform, since a) it will disappear once your conference is over, and b) it probably sucks.
  3. Replays of the video posted as soon as possible, so that people who missed a live session can watch it and jump into the discussion later the same day while others are still talking about it. Extra points for also publishing the presentation slides at the same time.

A conference is not a one-way broadcast, it’s a big messy collaborative conversation

Let’s start with the list of the virtual conferences that I wrote about, with links to the posts:

What I saw by attending these helped me when I was asked to organize DecisionCAMP, which ran in late June: we did the sessions using Zoom with livestreaming to YouTube (participants could watch either way), used Slack as a discussion platform (which is still being used for ongoing discussions and to run monthly events), and YouTube for the on-demand videos. Fluxicon used a similar setup for their Process Mining Camp: Skype (I think) instead of Zoom to capture the speakers’ sessions with all participants watching through the YouTube livestream and discussions on Slack.

Some particular notes excerpted from my posts on the vendor conferences follow. If you want to see the full blog posts, use the tag links above or just search.

Camunda

  • “Every conference organizer has had to deal with either cancelling their event or moving it to some type of online version as most of us work from home during the COVID-19 pandemic. Some of these have been pretty lacklustre, using only pre-recorded sessions and no live chat/Q&A, but I had expectations for Camunda being able to do this in a more “live” manner that doesn’t completely replace an in-person event, but has a similar feel to it. They did not disappoint: although a few of the CamundaCon presentations were pre-recorded, most were done live, and speakers were available for live Q&A. They also hosted a Slack workspace for live chat, which is much better than the Q&A/chat features on the webinar broadcast platform: it’s fundamentally more feature-rich, and also allows the conversations to continue after a particular presentation completes.”
  • “As you probably gather from my posts today, I’m finding the CamundaCon online format to be very engaging. This is due to most of the presentations being performed live (not pre-recorded as is seen with most of the online conferences these days) and the use of Slack as a persistent chat platform, actively monitored by all Camunda participants from the CEO on down.”
  • “I mentioned on Twitter today that CamundaCon is now the gold standard for online conferences: all you other vendors who have conferences coming up, take note. I believe that the key contributors to this success are live (not pre-recorded) presentations, use of a discussion platform like Slack or Discord alongside the broadcast platform, full engagement of a large number of company participants in the discussion platform before/during/after presentations, and fast upload of the videos for on-demand watching. Keep in mind that a successful conference, whether in-person or online, allows people to have unscripted interactions: it’s not a one-way broadcast, it’s a big messy collaborative conversation.”
  • Note that things did go wrong occasionally — one presentation was cut off partway through when the presenter’s home internet died. However, the energy level of the presentations was really high, making me want to keep watching. It was also hilarious when one speaker talked about improving their “shittiest process”, which is probably only something that would come out spontaneously during a live presentation.

Alfresco

  • “Alfresco Modernize didn’t have much of a “live” feel to it: the sessions were all pre-recorded which, as I’ve mentioned in my coverage of other online conferences, just doesn’t have the same feel. Also, without a full attendee discussion capability, this was more like a broadcast of multiple webinars than an interactive event, with a short Q&A session at the end as the only point of interaction.”

Celonis

  • “A few notes on the virtual conference format. Last week’s CamundaCon Live had sessions broadcast directly from each speaker’s home plus a multi-channel Slack workspace for discussion: casual and engaging. Celonis has made it more like an in-person conference by live-broadcasting the “main stage” from a studio with multiple camera angles; this actually worked quite well, and the moderator was able to inject live audience questions. Some of the sessions appeared to be pre-recorded, and there’s definitely not the same level of audience engagement without a proper discussion channel like Slack — at an in-person event, we would have informal discussions in the hallways between sessions that just can’t happen in this environment. Unfortunately, the only live chat is via their own conference app, which is mobile-only and has a single chat channel, plus a separate Q&A channel (via in-app Slido) for speakers that is separated by session and is really more of a webinar-style Q&A than a discussion. I abandoned the mobile app early and took to Twitter. I think the Celosphere model is probably what we’re going to see from larger companies in their online conferences, where they want to (attempt to) tightly control the discussion and demonstrate the sort of high-end production quality that you’d have at a large in-person conference. However, I think there’s an opportunity to combine that level of production quality with an open discussion platform like Slack to really improve the audience experience.”
  • “Camunda and Celonis have both done a great job, but for very different reasons: Camunda had much better audience engagement and more of a “live” feel, while Celonis showed how to incorporate higher production quality and studio interviews to good effect.”
  • “Good work by Celonis on a marathon event: this ran for several hours per day over three days, although the individual presentations were pre-recorded then followed by live Q&A. Lots of logistics and good production quality, but it could have had better audience engagement through a more interactive platform such as Slack.”

IBM

  • “As I’ve mentioned over the past few weeks of virtual conferences, I don’t like pre-recorded sessions: they just don’t have the same feel as live presentations. To IBM’s credit, they used the fact that they were all pre-recorded to add captions in five or six different languages, making the sessions (which were all presented in English) more accessible to those who speak other languages or who have hearing impairments. The platform is pretty glitchy on mobile: I was trying to watch the video on my tablet while using my computer for blogging and looking up references, but there were a number of problems with changing streams that forced me to move back to desktop video for periods of time. The single-threaded chat stream was completely unusable, with 4,500 people simultaneously typing “Hi from Tulsa” or “you are amazing”.”
  • “IBM had to pivot to a virtual format relatively quickly since they already had a huge in-person conference scheduled for this time, but they could have done better both for content and format given the resources that they have available to pour into this event. Everyone is learning from this experience of being forced to move events online, and the smaller companies are (not surprisingly) much more agile in adapting to this new normal.”

Appian

  • “This was originally planned as an in-person conference, and Appian had to pivot on relatively short notice. They did a great job with the keynotes, including a few of the Appian speakers appearing (appropriately distanced) in their own auditorium. The breakout sessions didn’t really grab me: too many, all pre-recorded, and you’re basically an audience of one when you’re in any of them, with little or no interactivity. Better as a set of on-demand training/content videos rather than true breakout sessions, and I’m sure there’s a lot of good content here for Appian customers or prospects to dig deeper into product capabilities but these could be packaged as a permanent library of content rather than a “conference”. The key for virtual conferences seems to be keeping it a bit simpler, with more timely and live sessions from one or two tracks only.”

Signavio

  • “Signavio has a low-key format of live presentations that started at 11am Sydney time with a presentation by Property Exchange Australia: I tuned in from my timezone at 9pm last night, stayed for the Deloitte Australia presentation, then took a break until the last part of the Coca-Cola European Partners presentation that started at 8am my time. In the meantime, there were continuous presentations from APAC and Europe, with the speakers all presenting live in their own regular business hours.”
  • “The only thing missing is a proper discussion platform — I have mentioned this about several of the online conferences that I’ve attended, and liked what Camunda did with a Slack workspace that started before and continued after the conference — although you can ask questions via the GoToWebinar Question panel. To be fair, there is very little social media engagement (the Twitter hashtag for the conference is mostly me and Signavio people), so possibly the attendees wouldn’t get engaged in a full discussion platform either. Without audience engagement, a discussion platform can be a pretty lonely place. In summary, the GTW platform seems to behave well and is a streamlined experience if you don’t expect a lot of customer engagement, or you could use it with a separate discussion platform.”

Pega

  • “In general, I didn’t find the prerecorded sessions to be very compelling. Conference organizers may think that prerecording sessions reduces risk, but it also reduces spontaneity and energy from the presenters, which is a lot of what makes live presentations work so well. The live Q&A interspersed with the keynotes was okay, and the live demos in the middle breakout section as well as the live Tech Talk were really good. PegaWorld also benefited from Pega’s own online community, which provided a more comprehensive discussion platform than the broadcast platform chat or Q&A.”

Fluxicon

  • “The format is interesting, there is only one presentation each day, presented live using YouTube Live (no registration required), with some Q&A at the end. The next day starts with Process Mining Café, which is an extended Q&A with the previous day’s presenter based on the conversations in the related Slack workspace (which you do need to register to join), then a break before moving on to that day’s presentation. The presentations are available on YouTube almost as soon as they are finished.”
  • “The really great part was engaging in the Slack discussion while the keynote was going on. A few people were asking questions (including me), and Mieke Jans posted a link to a post that she wrote on a procedure for cleansing event logs for multi-case processes – not the same as what van der Aalst was talking about, but a related topic. Anne Rozinat posted a link to more reading on these types of many-to-many situations in the context of their process mining product from their “Process Mining in Practice” online book. Not surprisingly, there was almost no discussion on the Twitter hashtag, since the attendees had a proper discussion platform; contrast this with some of the other conferences where attendees had to resort to Twitter to have a conversation about the content. After the keynote, van der Aalst even joined in the discussion and answered a few questions, plus added the link for the IEEE task force on process mining that promotes research, development, education and understanding of process mining: definitely of interest if you want to get plugged into more of the research in the field. As a special treat, Ferry Timp created visual notes for each day and posted them to the related Slack channel.”

Bizagi

  • “The broadcast platform fell over completely…I’m not sure if Bizagi should be happy that they had so many attendees that they broke the platform, or furious with the platform vendor for offering something that they couldn’t deliver. The “all-singing, all-dancing” platforms look nice when you see the demo, but they may not be scalable enough.”

Final thoughts

Just to wrap things up, it’s fair to say that things aren’t going to go back to the way that they were any time soon. Part of this is due to organizations understanding that things can be done remotely just as effectively (or nearly so) as they can in person, if done right. Also, a lot of people are still reluctant to even think about travelling and spending days in poorly-ventilated rooms with a bunch of strangers from all over the world.

The vendors who ran really good virtual conferences in 2020 are almost certain to continue to run at least some of their events virtually in the future, or find a way to have both in-person and remote attendees simultaneously. If you run a virtual conference that doesn’t get the attendee engagement that you expected, the problem may not be that “virtual conferences don’t work”: it could be that you just aren’t doing it right.

Making experience matter by building the right incentives into processes

Last month at the Bizagi virtual conference, I gave a keynote on aligning intelligent process automation with employee incentives and business goals. I decided to expand on those themes a bit for my monthly post on the Trisotech blog. Rather than the usual sort of performance metrics, I suggest the following:

The key to designing metrics and incentives is to figure out the problems that the workers are there to solve, which are often tied in some way to customer satisfaction, then use that to derive performance metrics and employee incentives.

There are a lot of challenges with figuring out how to measure and reward experience and innovative thinking: if it’s done wrong, then companies end up measuring how long you spent with a particular app open on your screen, or how many keystrokes and clicks you made.

We’re going through a lot of process disruption right now, and smart companies are using this opportunity to retool the way that they do things. They also need to be thinking about how their employee incentives are lined up with that redesign, and whether business goals are being served appropriately.

You can check out the whole post over at Trisotech’s blog.

Disclaimer: Trisotech is my client.

(Post image by my talented friend Alison Garwood-Jones).