CamundaCon 2020.2 Day 1

I listened to Camunda CEO Jakob Freund's opening keynote from the virtual CamundaCon 2020.2 (the October edition), and he really hit it out of the park. I've known Jakob a long time and many of our ideas are aligned, so much of his keynote resonated with me. He used the phrase "reinvent [your business] or die", whereas I've been using "modernize or perish", with a focus not just on legacy systems and infrastructure, but also on legacy organizational culture. Not to hijack this post with a plug for another company, but I'm doing a keynote at the virtual Bizagi Catalyst next week on aligning intelligent automation with incentives and business outcomes, which looks at issues of legacy organizational culture as well as the technology around automation. Processes are, as he pointed out, the algorithms of an organization: they touch everything and are everywhere (even if you haven't automated them), and a lot of digital-native companies are successful precisely because they have optimized those algorithms.

Jakob's advice for achieving reinvention/modernization is to pursue a gradual transformation rather than a big-bang approach, which fails more often than it succeeds; he positions Camunda (of course) as the bridge between the worlds of legacy and new technology. In my years of technology consulting on BPM implementations, I have also recommended a gradual approach: build bridges between new and old technology, then swap out the legacy bits as you develop or buy replacements. This is where, for example, you can use RPA to create stop-gap task automation with your existing legacy systems, then gradually replace the underlying legacy systems or at least create APIs to replace the RPA bots.

The second opening keynote was with Marco Einacker and Christoph Anzer of Deutsche Telekom, discussing how they are implementing process and task automation by combining Camunda at the process layer with RPA at the task layer. They started out using RPA for automating tasks and processes, ending up with more than 3,000 bots and an estimated €93 million in savings. It was a very decentralized approach, with bots initially created by business areas without IT involvement, but as they scaled up, they started to look for ways to centralize some of the ideas and technology. First was to identify the most important tasks to start with, namely those that were true pain points in the business (Einacker used the phrase "look for the shittiest, most painful process and start there"), not just the easy copy-paste applications. They also looked at how other smart technologies, such as OCR and AI, could be integrated to create completely unattended bots that add significant value.

The decentralized approach resulted in seven different RPA platforms and too much process automation happening in the RPA layer, which increased the amount of technical debt, so they adapted their strategy to consolidate RPA platforms and separate the process layer from the bot layer. In short, they are now using Camunda for process orchestration, and the RPA bots have become tasks that are orchestrated by the process engine. Gradually, they are (or will be) replacing the RPA bots with APIs, which moves the integration from front-end to back-end, making it more robust with less maintenance.
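To make the "RPA bots as orchestrated tasks" pattern a bit more concrete, here is a minimal sketch of what such a worker could look like, using Camunda's external task REST endpoints. This is my own illustration, not Deutsche Telekom's implementation; the engine URL, topic name, variable names and the bot/API functions are placeholder assumptions.

```python
# A minimal sketch (my own, not Deutsche Telekom's) of an external task worker that
# lets the Camunda engine orchestrate an RPA bot today and a back-end API later.
import requests

ENGINE = "http://localhost:8080/engine-rest"   # assumed Camunda REST endpoint
WORKER_ID = "rpa-bridge-worker"

def trigger_rpa_bot(variables):
    # Placeholder: call the RPA vendor's interface to run the bot for this work item
    return {"status": {"value": "done-by-bot", "type": "String"}}

def call_backend_api(variables):
    # Placeholder: the eventual back-end API call that replaces the bot
    return {"status": {"value": "done-by-api", "type": "String"}}

while True:
    # Long-poll the engine for work items published on this topic
    tasks = requests.post(f"{ENGINE}/external-task/fetchAndLock", json={
        "workerId": WORKER_ID,
        "maxTasks": 5,
        "asyncResponseTimeout": 30000,
        "topics": [{
            "topicName": "legacy-system-update",   # hypothetical topic
            "lockDuration": 60000,
            "variables": ["invoiceId"],            # hypothetical process variable
        }],
    }).json()

    for task in tasks:
        # Swap this single line when the bot is retired in favour of a real API
        result = trigger_rpa_bot(task.get("variables", {}))
        requests.post(f"{ENGINE}/external-task/{task['id']}/complete", json={
            "workerId": WORKER_ID,
            "variables": result,
        })
```

The appeal of the pattern is that the BPMN model never changes when the bot is swapped out: only the worker implementation behind the topic does, which is exactly the front-end-to-back-end migration described above.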

I moved off to the business architecture track for a presentation by Srivatsan Vijayaraghavan of Intuit, where they are using Camunda for three different use cases: their own internal processes, some customer-facing processes for interacting with Intuit, and — most interesting to me — enabling their customers to create their own workflows across different applications. Their QuickBooks customers are primarily small and mid-sized businesses that don't have the skills to set up their own BPM system (although arguably they could use one of the many low-code process automation platforms to do at least part of this), which opened the opportunity for Intuit to offer a workflow solution based on Camunda but customizable by the individual customer organizations. Invoice approval was an obvious place to start, since Accounts Payable is a problem area in many companies, then they expanded to other approval types and integration with non-Intuit apps such as e-signature and CRM. Customers can even build their own workflows: a true workflow-as-a-service model, with pre-built templates for common workflows, integration with all Intuit services, and a simplified workflow designer.

Intuit customers don’t interact directly with Camunda services; Camunda is a separately hosted and abstracted service, and they’ve used Kafka messages and external task patterns to create the cut-out layer. They’ve created a wrapper around the modeling tools, so that customers use a simplified workflow designer instead of the BPMN designer to configure the process templates. There is an issue with a proliferation of process definitions as each customer creates their own version of, for example, an invoice approval workflow — he mentioned 70,000 process definitions — and they will likely need to do some sort of automated cleanup as the platform matures. Really interesting use case, and one that could be used by large companies that want their internal customers to be able to create/customize their own workflows.
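As a rough illustration of that cut-out layer (my own sketch, not Intuit's actual design), a small bridge service could consume workflow requests from Kafka and start the corresponding Camunda process on the customer's behalf; the topic name, message shape and process key below are all assumptions.

```python
# A hypothetical sketch of a Kafka-to-Camunda bridge: customers' apps publish
# workflow requests to Kafka, and only this service talks to the Camunda REST API.
import json
import requests
from kafka import KafkaConsumer  # pip install kafka-python

ENGINE = "http://camunda.internal:8080/engine-rest"   # assumed abstracted Camunda service

consumer = KafkaConsumer(
    "workflow-requests",                              # hypothetical topic
    bootstrap_servers="kafka.internal:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for message in consumer:
    req = message.value   # e.g. {"customerId": "c-123", "workflowKey": "invoice-approval"}
    # Start the customer's configured workflow; customers never call Camunda directly
    requests.post(
        f"{ENGINE}/process-definition/key/{req['workflowKey']}/start",
        json={"variables": {
            "customerId": {"value": req["customerId"], "type": "String"},
        }},
    )
```

A wrapper like this is also a natural place to manage the per-customer process definition sprawl mentioned above, since all engine access flows through one service.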

The next presentation was by Stephen Donovan of Fidelity Investments and James Watson of Doculabs. I worked with Fidelity in 2018-19 to help create the architecture for their digital automation platform (in my other life, I’m a technical architecture/strategy consultant); it appears that they’re not up and running with anything yet, but they have been engaging the business units on thinking about digital transformation and how the features of the new Camunda-based platform can be leveraged when the time comes to migrate applications from their legacy workflow platform. This doesn’t seem to have advanced much since they talked about it at the April CamundaCon, although Donovan had more detailed insights into how they are doing this.

At the April CamundaCon, I watched Patrick Millar’s presentation on using Camunda for blockchain ledger automation, or rather I watched part of it: his internet died partway through and I missed the part about how they are using Camunda, so I’m back to see it now. The RiskStream Collaborative is a not-for-profit consortium collaborating on the use of blockchain in the insurance industry; their parent organization, The Institutes, provides risk management and insurance education and is guided by senior executives from the property and casualty industry. To copy from my original post, RiskStream is creating a distributed network platform, called Canopy, that allows their insurance company members to share data privately and securely, and participate in shared business processes. Whenever you have multiple insurance companies in an insurance process, like a claim for a multi-vehicle accident, having shared business processes — such as first notice of loss and proof of insurance — between the multiple insurers means that claims can be settled quicker and at a much lower cost.

I do a lot of work with insurance companies, as well as with BPM vendors to help them understand insurance operations, and this really resonates: the FNOL (first notice of loss) process for multi-party claims continues to be a problem in almost every company, and using enterprise blockchain to facilitate interactions between the multiple insurers makes a lot of sense. Note that they are not creating or replacing claims systems in any way; rather, they are connecting the multiple insurance companies, who would then integrate Canopy with their internal claims systems such as Guidewire.

Camunda is used in the control framework layer of Canopy to manage the flows within the applications, such as the FNOL application. The control framework is just one slice of the platform: there’s the core distributed ledger layer below that, where the blockchain data is persisted, and an integration layer above it to integrate with insurers’ claims systems as well as the identity and authorization registry.

There was a Gartner keynote, which gave me an opportunity to tidy up the writing and images for the rest of this post, then I tuned back in for Niall Deehan's session on Camunda Hackdays over on the community tech track, and some of the interesting creations that came out of the recent virtual version. This drives home the point that Camunda is, at its heart, open source software that relies on a community of developers both within and outside Camunda to extend and enhance the core product. The examples presented here were all done by Camunda employees, although many of them are not part of the development team, but come from areas such as customer-facing consulting. These were pretty quick demos so I won't go into detail, but here are the projects on GitHub:

If you're a Camunda customer (open source or commercial) and you like one of these ideas, head on over to the related GitHub page and star it to show your interest.

There was a closing keynote by Capgemini; like the Gartner keynote, I felt that it wasn’t a great fit for the audience, but those are my only real criticisms of the conference so far.

Jakob Freund came back for a conversation with Mary Thengvall to recap the day. If you want to see the recorded videos of the live sessions, head over to the agenda page and click on Watch Now for any session.

There’s a lot of great stuff on the agenda for tomorrow, including CTO Daniel Meyer talking about their new RPA orchestration capabilities, and I’ll be back for that.

IBM acquires WDG Automation RPA

The announcement that IBM was acquiring WDG Automation for their RPA capabilities came weeks ago, but for some reason the analyst briefing was delayed, then delayed again. Today, however, we had a briefing with Mike Gilfix, VP Cloud Integration and Automation Software, Mike Lim, Acquisition Integration Executive, and Tom Ivory, VP IBM Automation Services, on the what, why and how of this. Interestingly, none of the pre-acquisition WDG executives/founders were included on the call.

IBM is positioning this as part of a “unified platform” for integration, but the reality is likely far from that: companies that grow product capabilities through acquisition, like IBM, usually end up with a mixed bag of lightly-integrated products that may not be better for a given use case than a best-of-breed approach from multiple vendors.

The briefing started with the now-familiar pandemic call to action: customer demand is volatile, industries are being disrupted, and remote employees are struggling to get work done. Their broad solution makes sense, in that it is focused on digitizing and automating work, applying AI where possible, and augmenting the workforce with automation and bots. RPA for task automation was their missing piece: IBM already had BPM, AI and automated decisioning, but needed to address task automation. Now, they are offering their Cloud Pak for Automation, which includes all of these intelligent automation-related components.

Mike Lim walked through their reasons for selecting WDG — a relatively unknown Brazilian company — and it appears that the technology is a good fit for IBM because it’s cloud-native, offers multi-channel AI-powered chatbots integrated with RPA, and has a low-code bot builder with 650+ pre-built commands. There will obviously be some work to integrate this with some of the overlapping Watson capabilities, such as the Watson Assistant that offers AI-powered chatbots. WDG also has some good customer cases, with super-fast ROI. It offers unattended and attended bots, OCR (although it stops short of full-on document capture), and operational dashboards. The combination of AI and RPA has become increasingly important in the market, to the point where some vendors and analysts use “intelligent automation” to mean AI and RPA to the exclusion of other types of automation. I’m not arguing that it’s not important, but more that AI and other forms of intelligence need to be integrated across the automation suite, not just with RPA.

IBM envisions their new RPA having use cases in business operations, as you usually see, but also with a strong focus on IT operations, such as semi-automated real-time event incident management. To get there, they have a roadmap to bring the RPA product into the IBM fold to offer IBM RPA as a service, integrate it into the Cloud Pak, and roll it out via their GBS professional services arm. Tom Ivory from GBS gave us a view into their Services Essentials for Automation platform that includes a "hosted RPA" bucket: WDG will initially just be added to that block of available tools, although GBS will continue to offer competitive RPA products as part of the platform too.

It’s a bit unusual for IBM GBS and the software group to play together nicely: my history with IBM tends to show otherwise, and Mike Lim even commented on the (implied: unusual) cooperation and collaboration on this particular initiative.

There’s no doubt that RPA will play a strong role in the frantic reworking of business operations that’s going on now within many large organizations to respond to the pandemic crisis. Personally, I don’t think it’s a super long-term growth play: as more applications offer proper APIs and integration points, the need for RPA (which basically integrates with applications that don’t have integration points) will decrease. However, IBM needs to have it in their toolbox to show completeness, even if GBS ends up using their competitors’ RPA products in projects.

Can the for-profit conferences make it in an online world?

I've attended a lot of online conferences so far in 2020, and even helped to run one. We're in the summer lull now, and I expect to attend several more in the fall/winter season. With only a few exceptions, the online events that I've attended have been vendor conferences, and they have all offered free access for attendees. That works well for vendors, since conferences are really part of their marketing and sales efforts, and most of them only charge enough for in-person events to cover costs. That equation changes, however, with the conferences run by professional organizers who make their money by charging (sometimes quite high) fees to conference attendees, presumably for higher-quality and less-biased content than you will find at a vendor conference.

I had the first inkling of the professional conference organizers' dilemma with the Collision conference that was supposed to be held in Toronto in June. I had purchased a ticket for it, then in early March they decided to move to a virtual format. They automatically transferred my ticket to an online ticket, meaning that they intended to charge the same price for the online event as the in-person event, and it took several requests to get a refund. The event was mostly about in-person networking as well as being able to see some big names presenting live; as soon as that moves online, it's just not as interesting any more. I do fine with online networking in a number of other ways, and those big names have a lot of published videos on YouTube where I can see the same content that they may have presented at the (now virtual) conference. I suspect many others made the same decision.

Now the fall conference season is almost upon us, and although the BPM academic conference and the vendors (TIBCO, OpenText, Camunda) long ago announced that they were going virtual, there were a few obvious holdouts from the professional conference organizers until just a few days ago:

  • APQC announced on July 23 — barely two months before their October 6-8 event — that the Houston-based Process and Performance Management conference would be moving to a virtual format. APQC members have access for free, but non-members pay $275. This is a decrease from the in-person non-member price of $1,595 plus $950 per day of workshop (up to three), with early bird discounts. I was scheduled to keynote at this event, and that’s now cancelled; their schedule is just time blocks without specific speakers as of today.
  • On the same day, Building Business Capability announced that BBC 2020 on October 19-23 will now be virtual, instead of in Las Vegas. They have a full speaker agenda listed on their site, but also a somewhat eye-watering price for a virtual conference: $1,357 for the tutorials and conference if you pay before September 11, or $1,577 if you wait until closer to the date. If you only want to watch live and not have access to on-demand recordings, then the price drops by $300, and another $300 if you don’t want the tutorials. That means that their lowest price is the early bird livestream-only, conference-only (no tutorials) for $717. Pricing for their in-person conference was significantly higher, with the top price of $3,295 for the non-discounted conference and tutorials, and the lowest price of $1,795 for the early bird conference-only pass.

Almost every industry has been impacted by the pandemic, and conferences are no exception. Vendor conferences can actually be very effective online if done right, and save a lot of money for the vendors. The professional conference organizers are going to have a harder transition, since they need to offer content that is clearly valuable and unique in order to charge any significant amount. If a large number of the speakers already have content available elsewhere (e.g., YouTube, webinars), the value of having them behind a conference paywall is much lower; however, if they don't already have content available, they may not be enough of a draw.

Personally, I’m just happy to be able to avoid Vegas for the foreseeable future.

Next week at DecisionCAMP 2020, hosted by @DecisionMgtCom

We're reaching the end of what would have been the usual spring season of tech conferences, although all of them moved online with varying degrees of success. After the first few that I attended, I promised a summary of the best and worst practices, and I still plan to do that, but Jacob Feldman convinced me to help him out with the logistics for the online version of DecisionCAMP, which was supposed to be in Oslo next week. I first attended DecisionCAMP last year in Bolzano since I was already in Berlin the week before for CamundaCon, and managed to spend a few days of vacation in northern Italy as a bonus. This year, I won't be blogging about it live, because I'll be running the logistics and the on-screen monitoring. This is giving me a chance to test-drive some of my ideas about how to run a good online event without spending a fortune.

Note that the last day of the conference, Wednesday July 1, is Canada Day: a major national holiday sometimes referred to as “Canada’s birthday”, but I’ll be online moderating the conference because who’s really going anywhere these days. I do expect everyone on the Zoom meeting that day to be sporting a red maple leaf, or at least be wearing red and white, at risk of having their video disabled by the diabolical Canadian moderator.

Here’s how we’re running it:

  • Registration is via the Declarative AI 2020 site, and is open until tomorrow, June 27.
  • All presentations will be live on Zoom, with simultaneous livestreaming on YouTube. If you are registered, you will receive the Zoom link; if you're not registered or prefer to watch on YouTube, subscribe to the DecisionCAMP YouTube channel and watch it there.
  • Discussions and Q&A will be on the DecisionCAMP Slack workspace, with dedicated channels for discussions about each day's presentations. We are encouraging presenters to engage with their audience there after their presentation to answer any open questions, and we already have some discussions going on. This type of persistent, multi-threaded platform is much better for emulating the types of hallway conversations and presenter Q&A that might occur at an in-person conference.
  • For Zoom attendees, there will also be the option to use the “raise hand” feature and ask a question verbally during the presentation.

We already have four pre-conference presentations that you can see on the YouTube channel; all of the presentations from next week will join them for on-demand viewing except where the presenter asks us not to record their session.

I’ve learned a lot about online conference tools in the past month or so, including:

  • Zoom features, settings and all variations on recording to have the best possible experience during and after each presentation. I will share all of those in my “best practices” post that I’ll create after DecisionCAMP is over, based on what I’ve seen from all the online conferences this spring.
  • Slack, which I have used before, although I had never created/administered a workspace or added apps.
  • YouTube livestreaming, or rather, stream-through from Zoom. This is a very cool feature of how Zoom and YouTube work together, but you have to learn a few things, such as to manually end the stream over on YouTube once you’ve closed the Zoom meeting so that it doesn’t keep running with no data input for several hours. Oops.

I’m not being financially compensated for working on DecisionCAMP: I’ve been treating it as a learning experience.

Process Mining Camp 2020: @Fluxicon takes it online, with keynote by @wvdaalst

Last week, I was busy preparing and presenting webinars for two different clients, so I ended up missing Software AG's ARIS international user groups (IUG) conference and most of Fluxicon's Process Mining Camp online conference, although I did catch a bit of the Lufthansa presentation. However, Process Mining Camp continues this week, giving me a chance to tune in for the remaining sessions. The format is interesting: there is only one presentation each day, presented live using YouTube Live (no registration required), with some Q&A at the end. The next day starts with Process Mining Café, which is an extended Q&A with the previous day's presenter based on the conversations in the related Slack workspace (which you do need to register to join), then a break before moving on to that day's presentation. The presentations are available on YouTube almost as soon as they are finished, but are being shared via Slack using unlisted links, so I'll let Fluxicon make them public at their own pace (subscribe to their YouTube channel since they will likely end up there).

Anne Rozinat, co-founder of Fluxicon, was moderator for the event, and was able to bring life to the Q&A since she's an expert in the subject matter and had questions of her own. Each day's session runs a maximum of two hours starting at 10am Eastern, which makes it a reasonable time for all of Europe and North America (having lived in California, I know the west coasters are used to getting up for 7am events to sync with east coast times). Also, each presentation is by a practitioner who uses process mining (specifically, Fluxicon's Disco product) in real applications, meaning that they have stories to share about their data analysis, and what worked and didn't work.

Monday started with Q&A with Zsolt Varga of the European Court of Auditors, who presented last Friday. It was a great discussion and made me want to go back and see Varga's presentation: he had some interesting comments on how they track and resolve missing historical data, as well as one of the more interesting backgrounds. There was then a presentation by Hilda Klasky of the Oak Ridge National Laboratory on process mining for electronic health records, with some cool data clustering and abstraction to extract case management state transition patterns from what seemed to be a massive spaghetti mess. On Tuesday, Klasky returned for Q&A, followed by a presentation by Harm Hoebergen and Redmar Draaisma of Freo (an online loans subsidiary of Rabobank) on loan and credit processes across multiple channels. It was great to track Slack during a presentation and see the back-and-forth conversations as well as watch the questions accumulate for the presenter; after each presentation, it was common to see the presenter respond to questions and discussion points that weren't covered in the live Q&A. For online conferences, this type of "chaotic engagement" (rather than tightly controlled broadcasts from the vendor, or non-functional single-threaded chat streams) replaces the "hallway chats" and is essential for turning a non-engaging set of online presentations into a more immersive conference experience.

The conference closing keynote today was by Wil van der Aalst, who headed the process mining group at Eindhoven University of Technology where Fluxicon's co-founders did their Ph.D. studies. He's now at RWTH Aachen University, although he remains affiliated with Eindhoven. I've had the pleasure of meeting van der Aalst several times at the academic/research BPM conferences (including last year in Vienna), and always enjoy hearing him present. He spoke about some of the latest research in object-centric process mining, which addresses the issue of handling events that refer to multiple business objects, such as multiple items in a single order that may be split into multiple deliveries. Traditionally in process mining, each event record from a history log that forms the process mining data has a single case ID, plus a timestamp and an activity name. But what happens if an event impacts multiple cases?

He started with an overview of process mining and many of the existing challenges, such as performance issues with conformance checking, and the fact that data collection/cleansing still takes 80% of the effort. However, process mining (and, I believe, task mining as a secondary method of data collection) may be working with event logs where an event refers to multiple cases, requiring that the data be "flattened": pick one of the cases as the identifier for the event record, then duplicate the record for each case referred to in the event. The problem arises because events can disappear when cases are merged again, which will cause problems in generating accurate process models. Consider your standard Amazon order, like the one that I'm waiting for right now. I placed a single order containing eight items a couple of days ago, which were supposed to be delivered in a single shipment tomorrow. However, the single order was split into three separate orders the day after I placed the order, then two of the orders are being sent in a single shipment that is arriving today, while the third order will be in its own shipment tomorrow. Think about the complexity of tracking by order, or item, or shipment: processes diverge and converge in these many-to-many relationships. Is this one process (my original order), or two (shipments), or three (final orders)?
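To illustrate the flattening he described, here is a tiny pandas sketch with made-up data (mine, not his): one "place order" event touches three items, and flattening onto item as the case notion duplicates that event, which is how counts and paths get distorted in the mined model.

```python
# Flattening a multi-object event log onto a single case notion (made-up data).
import pandas as pd

events = pd.DataFrame([
    {"activity": "place order",   "timestamp": "2020-06-22 10:00", "order": "O1", "items": ["I1", "I2", "I3"]},
    {"activity": "pack shipment", "timestamp": "2020-06-23 09:00", "order": "O1", "items": ["I1", "I2"]},
    {"activity": "pack shipment", "timestamp": "2020-06-24 09:00", "order": "O1", "items": ["I3"]},
])

# Pick "item" as the case ID and duplicate each event for every item it refers to
flat = events.explode("items").rename(columns={"items": "case_id"})
print(flat)
# With item as the case ID, the one 'place order' event is now counted three times;
# with "order" as the case ID instead, the two diverging shipments collapse into one trace.
```

Object-centric process mining keeps the order/item/shipment relationships instead of forcing a single case notion up front, which is what avoids these duplicated and disappearing events.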

The really great part was engaging in the Slack discussion while the keynote was going on. A few people were asking questions (including me), and Mieke Jans posted a link to a post that she wrote on a procedure for cleansing event logs for multi-case processes – not the same as what van der Aalst was talking about, but a related topic. Anne Rozinat posted a link to more reading on these types of many-to-many situations in the context of their process mining product from their "Process Mining in Practice" online book. Not surprisingly, there was almost no discussion on the Twitter hashtag, since the attendees had a proper discussion platform; contrast this with some of the other conferences where attendees had to resort to Twitter to have a conversation about the content. After the keynote, van der Aalst even joined in the discussion and answered a few questions, plus added the link for the IEEE task force on process mining that promotes research, development, education and understanding of process mining: definitely of interest if you want to get plugged into more of the research in the field. As a special treat, Ferry Timp created visual notes for each day and posted them to the related Slack channel – you can see the one from today at the left.

Great keynote and discussion afterwards; I recommend tracking Fluxicon's blog and/or YouTube channel to watch it – and all of the other presentations – when they're published.

Process mining backgrounder – recovering from my #PowerPointFail

Did you ever make some last-minute changes before a presentation, only to look on in horror when a slide pops up that was not exactly what you were expecting? That was me today, on a webinar about process intelligence that I did with Signavio. In the webinar, I was taking a step back from process automation — my usual topic of discussion — to talk more about the analytical tools such as process mining. This morning, I decided that I wanted to add a brief backgrounder on process mining, and pulled in some slides that I had created for a presentation back in 2013 on (what were then) evolving technologies related to process management. I got a bit too fancy, and created a four-image build but accidentally didn’t have the animation set on what should have been the last image added to the build, so it obscured all the good stuff on the slide.

I thought it was a pretty interesting topic, so I rebuilt the slide and recorded it. Check it out (it's only 3-1/2 minutes long):

It’s webinar week! Check out my process intelligence webinar with @Signavio on Thursday

On Thursday, I’m presenting a webinar on process intelligence with Signavio. Here’s the description:

How do you get a handle on your company’s disrupted processes? How do you get real-time visibility into your organization’s strengths and weaknesses? How do you confidently chart a path to the future? The key is process intelligence: seeing your processes clearly and understanding what is actually happening versus what’s supposed to happen.

For example, your order-to-cash process is showing increased sales but decreasing customer satisfaction. Why? What is the root cause? Or, you have an opportunity to offer a new product but aren’t sure if your manufacturing process can handle it. To make this decision, you need a clear line of sight into what your organization can do. These areas are where process intelligence shines.

This webinar will help you answer questions like these, showing you – with examples – how process intelligence can help you drive real business results.

Rather than my usual focus on process automation, I'm digging a bit more into the process analysis side, particularly around process mining. With many businesses now having a largely distributed workforce, processes have changed, and it's not possible to do Gemba walks or job shadowing to collect information on what the adjusted processes look like. Process mining and task mining provide the capabilities to do that remotely and accurately, and to identify any problems with conformance/compliance as well as discover root causes. You can sign up at the link above to attend or receive the on-demand replay after the event.

I also posted last week about the webinar that I’m presenting on Wednesday for ABBYY on digital intelligence in the insurance industry, which is a related but different spin on the same issue: how are processes changing now, and what methodologies and technologies are available to handle this disruption. In case it’s not obvious, I don’t work for either of these vendors (who have some overlap in products) but provide “thought leadership” presentations to help introduce and clarify concepts for audiences. Looking forward to seeing everyone on either or both of these webinars later this week.

#PegaWorld iNspire 2020

PegaWorld, in shifting from an in-person to a virtual event, dropped down to a short 2.5 hours. The keynotes and many of the breakouts appeared to be mostly pre-recorded, hosted live by CTO Don Schuerman, who provided some welcome comic relief and moderated live Q&A with each of the speakers after their session.

The first session was a short keynote with CEO Alan Trefler. It's been a while since I've had a briefing with Pega, and their message has shifted strongly to the combination of AI and case management as the core of their digital platform capabilities. Trefler also announced Pega Process Fabric, which allows the integration of multiple systems, not just from Pega but from other vendors.

Next up was SVP of Products Kerim Akgonul, discussing their low-code Pega Express approach and how it’s helping customers to stand up applications faster. We heard briefly from Anna Gleiss, Global IT Head of Master Data Management at Siemens, who talked about how they are leveraging Pega to ensure reusability and speed deployment across the 30 different applications that they’re running in the Pega Cloud. Akgonul continued with use cases for self-service — especially important with the explosion in customer service in some industries due to the pandemic — and some of their customers such as Aflac who are using Pega to further their self-service efforts.

There was a keynote by Rich Gilbert, Chief Digital and Information Officer at Aflac, on the reinvention that they have gone through. There’s a lot of disruption in the insurance industry now, and they’ve been addressing this by creating a service-based operating model to deliver digital services as a collaboration between business and IT. They’ve been using Pega to help them with their key business drivers of settling claims faster and providing excellent customer service with offerings such as “Claims Guest Checkout”, which lets someone initiate a claim through self-service without knowing their policy number or logging in, and a Claims Status Tracker available on their mobile app or website. They’ve created a new customer service experience using a combination of live chat and virtual assistants, the latter of which is resolving 86% of inquiries without moving to a live agent.

Akgonul also provided a bit more information on the Process Fabric, which acts as a universal task manager for individual workers, with a work management dashboard for managers. There was no live Q&A at this point; it was deferred to a Tech Talk later in the agenda. In the interim was a one-hour block of breakouts that had one track of three live build sessions, plus a large number of short prerecorded sessions from Pega, partners and customers. I'm interested in more information on the Process Fabric, which I believe will be in the later Tech Talk, although I did grab some screenshots from Akgonul's keynote:

The live build sessions seemed to be overloaded and there was a long delay getting into them, but once started, they were good-quality demos of building Pega applications. I came in part way through the first one on low-code using App Studio, and it was quite interactive, with a moderator dropping in occasionally with live questions, and eventually hurrying the presenter along to finish on time. I was only going to stay for a couple of minutes, but it was pretty engaging and I watched all of it. The next live demo was on data and integration, and built on the previous demo’s vehicle fleet manager use case to add data from a variety of back-end sources. The visuals were fun, too: the presenter’s demo was most of the screen, with a bubble at the bottom right containing a video of the speaker, then a bubble popping in at the bottom left with the moderator when he had a question or comment. Questions from the audience helped to drive the presentation, making it very interactive. The third live demo was on user experience, which had a few connectivity issues so I’m not sure we saw the entire demo as planned, but it showed the creation of the user interface for the vehicle manager app using the Cosmos system, moving a lot of logic out of the UI and into the case model.

The final session was the Tech Talk on product vision and roadmap with Kerim Akgonul, moderated by Stephanie Louis, Senior Director of Pega’s Community and Developer Programs. He discussed Process Fabric, Project Phoenix, Cosmos and other new product releases in addition to fielding questions from social media and Pega’s online community. This was very interactive and engaging, much more so than his earlier keynote which seemed a bit stiff and over-rehearsed. More of this format, please.

In general, I didn’t find the prerecorded sessions to be very compelling. Conference organizers may think that prerecording sessions reduces risk, but it also reduces spontaneity and energy from the presenters, which is a lot of what makes live presentations work so well. The live Q&A interspersed with the keynotes was okay, and the live demos in the middle breakout section as well as the live Tech Talk were really good. PegaWorld also benefited from Pega’s own online community, which provided a more comprehensive discussion platform than the broadcast platform chat or Q&A. If you missed today’s event, you should be able to find all of the content on demand on the PegaWorld site within the next day or two.

Using Digital Intelligence to Navigate the Insurance Industry’s Perfect Storm: my upcoming webinar with @ABBYY_Software

I have a long history of working with insurance companies on their digitization and process automation initiatives, and there are a lot of interesting things happening in insurance as a result of the pandemic and associated lockdown: more automation of underwriting and claims, increased use of digital documents instead of paper, and trying to discover the "new normal" in insurance processes as we move to a world where the workforce will remain, at least in part, distributed for some time to come. At the same time, there is an increase in some types of insurance business activity, and decreases in other areas, requiring reallocation of resources.

On June 17, I’ll be presenting a webinar for ABBYY on some of the ways that insurance companies can navigate this perfect storm of business and societal disruption using digital intelligence technologies including smarter content capture and process intelligence. Here’s what we plan to cover:

  • Helping you understand how to transform processes, instead of falling into the trap of just automating existing, often broken processes
  • Getting your organization one step ahead of your competition with the latest content intelligence capabilities that help transform your customer experience and operational effectiveness
  • Completely automating your handling of essential documents used in onboarding, policy underwriting, claims, adjudication, and compliance
  • Having a direct, real-time overview of your processes to discover where bottlenecks and repetitions occur, where content needs to be processed, and where automation can be most effective

Register at the link, and see you on the 17th.

Around the world with Signavio Live 2020

I missed last year’s Signavio Live event, and it turns out that it gave them a year head start on the virtual conference format now being adopted by other tech vendors. Now that everyone has switched to online conferences, many have decided to go the splashy-but-prerecorded route, which includes a lot of flying graphics and catchy music but canned presentations that fall a bit flat. Signavio has a low-key format of live presentations that started at 11am Sydney time with a presentation by Property Exchange Australia: I tuned in from my timezone at 9pm last night, stayed for the Deloitte Australia presentation, then took a break until the last part of the Coca-Cola European Partners presentation that started at 8am my time. In the meantime, there were continuous presentations from APAC and Europe, with the speakers all presenting live in their own regular business hours.

Signavio started their product journey with a really good process modeler, and now have process mining and some degree of operational monitoring for a more complete process intelligence suite. In his keynote, CEO Gero Decker talked about how the current pandemic — even as many countries start to emerge from it — is highlighting the need for operational resilience: companies need to design for flexibility, not just efficiency. For example, many companies are reinventing customer touchpoints, such as curbside delivery for retail as an alternative to in-store shopping, or virtual walk-throughs for looking at real estate. Companies are also reinventing products and services, allowing businesses that rely on in-person interactions to take their services online; I’ve been seeing this shift with everything from yoga classes to art gallery lectures. Decker highlighted two key factors to focus on in order to emerge from the crisis stronger: operational excellence, and customer experience. One without the other does not provide the benefit, but they need to be combined into the concept of “Customer Excellence”. In the Q&A, he discussed how many companies started really stepping up their process intelligence efforts in order to deal with the COVID-19 crisis, then realized that they should be doing this in the normal course of business.

There was a session with Jan ten Sythoff, Senior TEI Consultant at Forrester, and Signavio's Global SVP of Customer Service, Stefan Krumnow, on the total economic impact of the Signavio Suite (TEI is the Forrester take on ROI). Krumnow started with the different areas where a customer organization might get value from Signavio — RPA at scale, operational excellence, risk and compliance, ERP transformation, and customer excellence — then ten Sythoff discussed the specific TEI report that Forrester created for Signavio in October 2019, with a few updates for the current situation. The key quantifiable benefits identified by Forrester were external resources cost avoidance, higher efficiency in implementing new processes, and cost avoidance of alternative tools; they also found non-quantifiable benefits such as a better culture of process improvement across organizations. For their aggregate case study created from all of their interviews, they calculated a payback of less than six months for implementing Signavio: this would depend, of course, on how closely a particular organization matched their fictitious use case, which was a 100,000-employee company.

There are a number of additional sessions running until 5pm Eastern North American time; I might peek back in for a few of those, and will write another post if there’s anything of particular interest. I expect that everything will be available on demand after the event if you want to check out any of the sessions.

On the conference format, there is a single track of live presentations, and a Signavio moderator on each one to introduce the speaker and help wrangle the Q&A. Each presentation is 40 minutes plus 10 minutes of Q&A, with a 10-minute break between each one. Great format, schedule-wise, and the live sessions make it very engaging. They’re using GoToWebinar, and I’m using it on my tablet where it works really well: I can control the screen split between speaker video and slides (assuming the speaker is sharing their video), it supports multiple simultaneous speakers, I can see at the top of the screen who is talking in case I join a presentation after the introduction, and the moderator can collect feedback via polls and surveys. Because it’s a single track, it’s a single GTW link, allowing attendees to drop in and out easily throughout the day. The only thing missing is a proper discussion platform — I have mentioned this about several of the online conferences that I’ve attended, and liked what Camunda did with a Slack workspace that started before and continued after the conference — although you can ask questions via the GTW Question panel. To be fair, there is very little social media engagement (the Twitter hashtag for the conference is mostly me and Signavio people), so possibly the attendees wouldn’t get engaged in a full discussion platform either. Without audience engagement, a discussion platform can be a pretty lonely place. In summary, the GTW platform seems to behave well and is a streamlined experience if you don’t expect a lot of customer engagement, or you could use it with a separate discussion platform.

Disclaimer: Signavio is my customer, and I’ve created several webinars for them over the past few months. We have another one coming up next month on Process Intelligence. However, they have not compensated me in any way for listening in on Signavio Live today, or writing about it here.