CamundaCon Live 2020 – Day 1: Optimize, RPA, and how 24 Hour Fitness executes 5B process nodes per month

We continued the first day of CamundaCon Live (virtual) 2020 with Felix Mueller, senior product manager, presenting on how to use Camunda Optimize to drive continuous improvement in processes. I attended the Optimize 3.0 release webinar a couple of weeks ago, and saw some of the new things that they’re doing with monitoring and optimization of event-based processes — this allows processes that are not part of Camunda to be included in Optimize. The CamundaCon session started with a broader view of Optimize functionality, showing how the information it collects can be used for root cause analysis of process bottlenecks as well as for displaying realtime metrics. They have some good case studies for Optimize, including insurance provider Visana Group.

He then moved on to show event-based process monitoring, and how Optimize can ingest and aggregate information from any external system via a connector, such as the one they have built for RabbitMQ. His demo showed a customer onboarding process that could be triggered either by an online form, which directly instantiated a Camunda process, or by a mailed-in form that was scanned into another system, which emitted an event to trigger the process.
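If you want a feel for the mechanics, here’s a minimal sketch of pushing an external event into Optimize’s event ingestion REST API. The endpoint path, token and field values are my own illustrative assumptions, so check the Optimize documentation for your version:

```python
# Sketch: forward an external system's event to Camunda Optimize so that it
# can be correlated into an event-based process. Endpoint, token and field
# values are assumptions for illustration.
import json
import uuid
from datetime import datetime, timezone

import requests

OPTIMIZE_URL = "http://localhost:8090/api/ingestion/event/batch"  # assumed default port/path
ACCESS_TOKEN = "my-ingestion-token"  # must match the token configured in Optimize

def ingest_event(event_type: str, trace_id: str, data: dict) -> None:
    """Send one CloudEvents-formatted event to Optimize's ingestion API."""
    event = {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),                 # unique id for deduplication
        "source": "mailroom-scanner",            # the emitting system
        "type": event_type,                      # e.g. "formScanned"
        "time": datetime.now(timezone.utc).isoformat(),
        "traceid": trace_id,                     # correlates all events of one instance
        "group": "customer-onboarding",          # groups related events in the UI
        "data": data,
    }
    resp = requests.post(
        OPTIMIZE_URL,
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/cloudevents-batch+json",
        },
        data=json.dumps([event]),                # the endpoint takes a batch (array)
    )
    resp.raise_for_status()

ingest_event("formScanned", trace_id="customer-4711", data={"channel": "mail"})
```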

It was very obvious that this was a live presentation, because Mueller was scrambling against the clock after the previous session ran a bit long, having to speed through his demo and take a couple of shortcuts. Although you might think of this as a logistical “bug”, I maintain that it’s an interactivity “feature”, one that made the experience much closer to an in-person conference than a set of pre-recorded presentations queued up in sequence.

This was followed by a presentation by Kris Barczynski of Nokia Bell Labs about a really interesting use case: they are using Camunda to guide visiting groups on tours through the Nokia Campus customer experience spaces, and to interact with devices including the guests’ wearables, drones and robots. Visitors are welcomed and guided by a robot, and they can interact with voice-controlled drones; Camunda orchestrates the processes behind the scenes. He talked about some of their design decisions, such as using Camunda JavaScript workers to call external services, and building a custom Android app. Really interesting combination of physical and virtual processes.

Next was a panel discussion on the future of RPA, with Vittorio Dal Bianco of Nokia, Marco Einacker of Deutsche Telekom, Paul Jones of NatWest Group, and Camunda CEO Jakob Freund, moderated by Jason Bloomberg of Intellyx Research. The three customer presenters are involved with the RPA initiatives at their own organizations, and are also looking at how to integrate those with their Camunda processes. Panels are always a challenge to live-blog, but here are some of the points discussed (attributed where I remembered):

  • The customer panelists agreed that RPA has allowed people to move to more interesting/valuable work, rather than doing routine tasks such as copying and pasting between application screens. Task automation through RPA reduces resources/costs, decreases cycle time, and also improves quality/compliance.
  • RPA is a “short-term bandaid” driven from outside the IT organization in order to get some immediate efficiency benefits. It’s maintenance-intensive, since any change to the applications being integrated means that the bots need to be reprogrammed. Deutsche Telekom is moving from RPA front-end integration/automation toward more strategic BPMS/API automation, and sees RPA as an important step on the strategic journey but not the endpoint. NatWest recognizes RPA as a key automation tool, but sees it as a short-term tactical one; they classify RPA as part of their technical debt, and it is not a part of their long-term architecture. Nokia thinks that RPA will remain in niche pockets for applications that will never have a proper API, such as Excel-based applications.
  • Nokia uses Blue Prism for RPA. NatWest uses UiPath, and has a group that is building the integration for having Camunda execute a UiPath task — although I would have thought this would be a relatively simple service call or external task (see the sketch after this list). Deutsche Telekom is using seven different RPA platforms, three of which are commercial, including Another Monday and Kryon; they are just starting to look at the integration between Camunda and RPA, with a plan to have Camunda orchestrate the steps while a “microbot” performs an atomic task at each step. Once a core system offers an API for a task, the RPA bot is replaced with a direct API call. This last approach is definitely aligned with Camunda’s vision of how their BPM can work with RPA bots as well as any other “task performers”.
  • More discussion on the role of RPA in digital transformation: recommendations to go ahead and use it, but consider it as a stop-gap measure to get a quick win before you can get the APIs built out in the systems that are being integrated. It’s considered technical debt because it will be replaced in the future as the APIs of the core systems become available. It’s a painkiller, not a cure.
  • Although some of the companies are using business people to build their own bots, that has had mixed success, and other companies do not classify RPA as citizen developer technology. This is pretty much the same as we’re seeing with other low-code environments, which are often sold as application development platforms for non-professional developers, but the reality is that many applications require a professional developer because of the technical complexity of the systems being integrated.
  • The cost and effort of RPA bot maintenance can be significant, in some cases more than back-end integration. Bot fixes may be fairly quick, but they are required much more frequently, such as whenever a password changes: bots require babysitting.
  • The customers had a few Camunda product requests, such as better connectors to more of the RPA tools. In general, however, they don’t want Camunda to build or acquire their own RPA offering; they see RPA as another place where you can pick a best-of-breed tool and use it for task automation at individual steps within a Camunda process.
  • Best practices/lessons learned:
    • Separate the process orchestration layer from the bot execution layer from the beginning, with the process orchestration being done by Camunda and the bot task execution being done by the RPA tool.
    • Use process mining first to objectively identify what should be automated; of course, this would also require that you mine the user interaction processes that would be automated with bots, not just the system logs.
    • Have a centralized control center for managing the bots.
    • Develop bot templates that can be more quickly modified and deployed.
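To make the “relatively simple service call or external task” point above concrete, here’s a rough sketch of an external task worker that bridges Camunda to an RPA bot. The Camunda REST calls are the standard external task API; the topic name is invented, and the UiPath trigger is reduced to a hypothetical helper, since the Orchestrator API details depend on the deployment:

```python
# Sketch: a Camunda external task worker that hands individual process steps
# to an RPA bot, then returns the bot's output to the process.
import time

import requests

ENGINE = "http://localhost:8080/engine-rest"  # standard Camunda REST API base
WORKER_ID = "rpa-bridge-1"
TOPIC = "rpa-task"  # assumed topic name for bot-executed steps

def start_uipath_job(variables: dict) -> dict:
    """Hypothetical placeholder: trigger a UiPath process via Orchestrator and
    wait for its result. Replace with a real Orchestrator API call."""
    return {"botStatus": {"value": "done", "type": "String"}}

while True:
    # Lock up to one waiting task on the RPA topic.
    tasks = requests.post(f"{ENGINE}/external-task/fetchAndLock", json={
        "workerId": WORKER_ID,
        "maxTasks": 1,
        "topics": [{"topicName": TOPIC, "lockDuration": 60000}],
    }).json()
    for task in tasks:
        result = start_uipath_job(task.get("variables", {}))
        # Complete the task, passing the bot's output back as process variables.
        requests.post(f"{ENGINE}/external-task/{task['id']}/complete", json={
            "workerId": WORKER_ID,
            "variables": result,
        })
    time.sleep(5)  # simple polling; long polling via asyncResponseTimeout also works
```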

Looking at how the panel worked, there are definitely aspects of online panels that work better than in-person panels, specifically how they respond to audience questions. Some people don’t want to speak up in front of an audience, while others get up and bloviate without actually asking a question. With online-only questions, the moderator can browse through and aggregate them, then select the ones that are best suited to the panel. With video on each of the presenters (except for one who lost his connection and had to dial in), it was still possible to see reactions and have a sense of the live nature of the panel.

The last session of the day was Jimmy Floyd of 24 Hour Fitness on their massive Camunda implementation of five billion (with a “B”) process node executions per month. You can see his presentation from CamundaCon Berlin 2018 as a point of comparison with today’s numbers. Pretty much everything that happens at 24 Hour Fitness is controlled by a Camunda process, from their internal processes to customer-facing activities such as a member swiping their card to gain access to a club. It hasn’t been without hiccups along the way: they had to turn off process history logging to sustain this volume, and they can’t easily drill down into processes that call a lot of other processes, but the use of BPMN and DMN has greatly improved the interactions between product owners and developers, sometimes allowing business people to make a rule change without involving developers.

He had a lot of technical information on how they built this and their overall architecture. Their use is definitely custom code, but using Camunda with BPMN and DMN gave them a huge step up versus just writing code. Even logic inside of microservices is implemented with Camunda, not written in code. Their entire architecture is based on Camunda, so it’s not a matter of deciding whether or not to use it for a new application or to integrate a new external solution. They are taking a look at Zeebe to decide if it’s the right choice for them moving forward, but it’s early days on that: it would be a significant migration for them, they would likely lose functionality (for BPMN elements not yet implemented in Zeebe, among other things), and Zeebe has only just achieved production readiness.

Camunda is changing how they handle history data relative to transactional data, likely in part due to input from high-throughput customers, and this may allow 24 Hour Fitness to turn history logging back on. They’re starting to work with Optimize via Kafka to gain insights into their processes.

Day 1 finished with a quick wrapup from Jakob Freund; in spite of the fact that it’s probably been a really long day for him, he seemed pretty happy about how well things went today. Tomorrow will cover more on microservices orchestration, and have customer case studies from Cox Automotive, Capital One and Goldman Sachs.

As you can probably gather from my posts today, I’m finding the CamundaCon online format to be very engaging. This is due to most of the presentations being performed live (not pre-recorded, as with most online conferences these days) and the use of Slack as a persistent chat platform, actively monitored by all Camunda participants from the CEO on down. They do need a little bit more slack in the schedule, however: from 10am to 3:45pm there was only one 15-minute break scheduled mid-way, and it didn’t happen because the morning sessions ran overtime. If you’re attending tomorrow, be prepared to carry your computer to the kitchen and bathroom with you if you don’t want to miss a minute of the presentations.

As I finish off my day at the virtual CamundaCon, I notice that the videos of presentations from earlier today are already available — including the panel session that only happened an hour ago. Go to the CamundaCon hub, then change the selection from “Upcoming” to “On Demand” above the Type/Day/Track selectors.

Summer BPM reading, with dashes of AI, RPA, low-code and digital transformation

Summer always sees a bit of a slowdown in my billable work, which gives me an opportunity to catch up on reading and research across BPM and related fields. I’m often asked what blogs and other websites I read regularly to keep on top of trends and participate in discussions, so here are some general guidelines for getting through a lot of material in a short time.

First, to effectively surf the tsunami of information, I use two primary tools:

  • An RSS reader (Feedly) with a hand-curated list of related sites. In general, if a site doesn’t have an RSS feed, then I’m probably not reading it regularly. Furthermore, if it doesn’t have a full feed – that is, one that shows the entire text of the article rather than a summary in the feed reader – it drops to a secondary list that I only read occasionally (or never). This lets me browse quickly through articles directly in Feedly and see which have something interesting to read or share without having to open the links directly.
  • Twitter, with a hand-curated list of digital transformation-related Twitter users, both individuals and companies. This is a great way to find new sources of information, which I can then add to Feedly for ongoing consumption. I usually use the Tweetdeck interface to keep an eye on my list plus notifications, but rarely review my full unfiltered Twitter feed. That Twitter list is also included in the content of my Paper.li “Digital Transformation Daily”, and I’ve just restarted tweeting the daily link.

Second, the content needs to be good to stay on my lists. I curate both of these lists manually, constantly adding and culling the contents to improve the quality of my reading material. If your blog posts are mostly promotional rather than informative, I remove them from Feedly; if you tweet too much about politics or your dog, you’ll get bumped off the DX list, although probably not unfollowed.

Third, I like to share interesting things on Twitter, and use Buffer to queue these up during my morning reading so that they’re spread out over the course of the day rather than all in a clump. To save things for a more detailed review later as part of ongoing research, I use Pocket to manually bookmark items, which also syncs to my mobile devices for offline reading, and an IFTTT script to save all links that I tweet into a Google sheet.

You can take a look at what I share frequently through Twitter to get an idea of the sources that I think have value; in general, I directly @mention the source in the tweet to help promote their content. Tweeting a link to an article – and especially inclusion in the auto-curated Paper.li Digital Transformation Daily – is not an endorsement: I’ll add my own opinion in the tweet about what I found interesting in the article.

Time to kick back, enjoy the nice weather, and read a good blog!

bpmNEXT 2018: Bonitasoft, KnowProcess

We’re in the home stretch here at bpmNEXT 2018, day 3 has only a couple of shorter demo sessions and a few related talks before we break early to head home.

When Artificial Intelligence meets Process-Based Applications, Bonitasoft

Nicolas Chabanoles and Nathalie Cotte from Bonitasoft presented on their integration of AI with process applications, specifically predictive analytics for automating decisions and making recommendations. They use an extension of process mining to examine case data and activity times in order to predict, for example, whether a specific case will finish on time; in the future, they hope to be able to accurately predict the end time for individual cases for better feedback to internal users and customers. The demo was a loan origination application built on Bonita BPM, which was fairly standard, with the process mining and machine learning coming in with how the processes are monitored. Log data is polled from the BPM system into an Elasticsearch database, then machine learning is applied to the instance data; configuration of the machine learning is based (at this point) only on specifying an expected completion time for each instance type to build the prediction model. At that point, predictions can be made for in-flight instances as to whether each one will complete on time, or the probability of completing on time for those predicted to be late — for example, if key documents are missing, or the loan officer is not responding quickly enough to review requests. The loan officer is shown which tasks are likely to be causing the late prediction, and completing those tasks will change the prediction for that case. Priority for cases can be set dynamically based on the prediction, so that cases more likely to be late are set to higher priority in order to be worked earlier. Future plans are to include more business data and human resource data, which could be used to explicitly assign late cases to individual users. The use of process mining algorithms, rather than simpler prediction techniques, will allow suggestions on state transitions (i.e., which path to take) in addition to just setting instance priority.
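As a rough illustration of the kind of model this implies, here’s my own sketch (not Bonitasoft’s implementation; the features, data and model choice are all invented) of training on completed cases and then scoring an in-flight one:

```python
# Sketch: predict whether an in-flight case will miss its expected completion
# time, trained on features of historical (completed) cases.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

EXPECTED_DURATION_H = 72  # the configured expected completion time for this case type

# Stand-in for historical case data pulled from the process logs.
history = pd.DataFrame({
    "docs_missing":     [0, 1, 0, 1, 0, 0],
    "review_wait_h":    [2, 30, 5, 48, 1, 12],
    "steps_completed":  [9, 4, 8, 3, 9, 7],
    "total_duration_h": [40, 90, 55, 120, 35, 60],
})
X = history[["docs_missing", "review_wait_h", "steps_completed"]]
y = history["total_duration_h"] > EXPECTED_DURATION_H  # True = finished late

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score an in-flight case: probability that it will finish late.
in_flight = pd.DataFrame([{"docs_missing": 1, "review_wait_h": 25, "steps_completed": 5}])
p_late = model.predict_proba(in_flight)[0][list(model.classes_).index(True)]
print(f"probability of finishing late: {p_late:.0%}")
```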

Understanding Your Models and What They Are Trying To Tell You, KnowProcess

Tim Stephenson of KnowProcess spoke about models and standards, particularly as applied to their main use case of marketing automation and customer onboarding. Their ModelMinder application ingests BPMN, CMMN and DMN models, and can be used to search the models for activities, resources and other model components, as well as to identify and understand extensions such as calling a REST service from a BPMN service task. The demo showed a KnowProcess repository, initially through the search interface; searching for “loan” or “send memo” returned links to models containing those terms, and the model (process, case or decision) can be displayed directly in their viewer with the location of the search term highlighted. The repository can be stored as files, or an engine can be indexed directly. He also showed an interface to Slack that uses a ModelMinder bot to handle natural language requests for certain model types and content, such as which resources do the work specified in the models or which models call a specific subprocess, providing a link directly back to the models in the KnowProcess repository. Finishing up the demo, he showed how the model search and reuse is attached to a CRM application, so that a marketing person sees the models as functions that can be executed directly within their environment.

Instead of a third demo, we had a more free-ranging discussion that had started yesterday during one of the Q&As, about a standardized modeling language for RPA, led by Max Young from Capital BPM and with contributions from a number of others in the audience (including me). It’s a good starting point, but there’s obviously still a lot of work to do in this direction, starting with getting some of the major RPA vendors on board with standardization efforts. The emerging ideas seem to center around defining a grammar for the activities that occur in RPA (e.g., extract data from an Excel file, write data to a certain location in an application screen), then an event and flow language to piece together those primitives, which might look something like BPMN or CMMN. I see this as similar to the issue of defining page flows, which are often done as a black box function performed within a human activity in a BPMN flow: exposing and standardizing that black box is what we’re talking about. This discussion is a prime example of what makes bpmNEXT great, and keeps me coming back year after year.

bpmNEXT 2018: Intelligence and robots with ITESOFT, K2, BeeckerCo

We’re finishing up day 2 of bpmNEXT with a last section of demos.

Robotics, Customer Interactions and BPM, ITESOFT

Francois Bonnet from ITESOFT presented on customer interactions and automation, and the use of BPMN-driven robots to guide customer experience. In a first for bpmNEXT, the demo included an actual physical human-shaped robot (which was 3D-printed from an open source project) that can do voice recognition, text to speech, video capture, movement tracking and facial recognition. The robot’s actions were driven by a BPMN process model, with activities such as searching for humans, recognizing faces, speaking phrases, processing input and making branching decisions. The process model was shown simultaneously, with the execution path updated in real time as it moved through the process, with robot actions shown as service activities. The scenario was the robot interacting with a customer in a mobile phone shop, recognizing the customer or training a new facial recognition, asking what service is required, then stepping through acquiring a new phone and plan. He walked through how the BPMN model was used, with both synchronous and asynchronous services for controlling the robot and invoking functions such as classifier training, and human activities for interacting with the customer. Interesting use of BPMN as a driver for real robot actions, showing integration of recognition, RPA, AI, image capture and business services such as customer enrolment and customer ID validation.

The Future of Voice in Business Process Automation, K2

Brandon Brown from K2 looked at a more focused use case for voice recognition, and at approaches to voice-first design that go beyond speech-to-text by adding cognitive services through commodity AI services from Google, Amazon and Microsoft. Their goal is to make AI more accessible through low/no-code application builders like K2, creating voice recognition applications such as chatbots. He demonstrated a chatbot on a mobile phone that was able to not just recognize the words that he spoke, but recognize the intent of the interaction and request additional data: essentially a replacement for filling out a form. This might be a complete interaction, or just an assist for starting a more involved process based on the original voice input. He switched over to a computer browser interface to show more of the capabilities, including sentiment analysis based on form input that could adjust the priority of a task or impact process execution. From within their designer environment, cognitive text analytics such as sentiment analysis can be invoked as a step in a process using their SmartObjects, which are effectively wrappers around one or more services and data mapping actions that allow less-technical process designers to include cognitive services in their process applications. Good demo of integrating voice-related cognitive services into processes, showing how third-party services make this much more accessible to any level of developer skill.

State Machine Applied to Corporate Loans Process, BeeckerCo

Fernando Leibowich Beker from BeeckerCo finished up the day with a presentation on BeBOP, their process app suite based on IBM BPM/ODM and focused on financial services customers, followed by a “demo” of mostly prerecorded screencams. Their app generates state tables for processes using ODM business rules, then allows business users to change the state table in order to drive the process execution. The demo showed a typical IBM BPM application for processing a loan origination, but the steps are defined as ad hoc tasks rather than as part of a process flow; instead, the flow is driven by the state table to determine which task to execute in which order, and the only real flow is to check the state table, then either invoke the next task or complete the process. Table-driven processes aren’t a new concept — we’ve been doing this since the early days of workflow — although using an ODM decision table to manage the state transition table is an interesting twist. This does put me in mind of the joke I used to tell when I first started giving process-focused presentations at the Business Rules Forum, about how a process person would model an entire decision tree in BPMN, while a rules person would have a single BPMN node that called a decision tree to execute all of the process logic: just because you can do something using a certain method doesn’t mean that you should do it.

We’re done with day 2; tomorrow is only a half-day of sessions with the awards after lunch (which I’ll probably have to monitor remotely since I’ll be headed for the airport by mid-afternoon).

bpmNEXT 2018: All about bots with Cognitive Technology, PMG.net, Flowable

We’re into the afternoon of day 2 of bpmNEXT 2018, with another demo section.

RPA Enablement: Focus on Long-Term Value and Continuous Process Improvement, Cognitive Technology

Massimiliano Delsante of Cognitive Technology presented their myInvenio product for analyzing processes to determine where gaps exist and creating models for closing those gaps through RPA task automation. The demo started with loading historical process data for process mining, which created a process model from the data together with activity resources, counts and other metrics, then compared that model for conformance with a reference model to determine the frequency and performance of conformant and non-conformant cases. The process discovery model can be transformed to a BPMN model, and its performance simulated. With a baseline data set of all manual activities, the system identified the cost of each activity, helping to identify which activities would result in the greatest savings if automated, and fed the data for actual resources used into the simulation scenario; adjusting the resources required, by specifying the number of RPA robots that could be deployed at specific tasks, allows a what-if simulation of the process performance with an RPA implementation. An analytics dashboard provides visualization of the original process discovery and the simulated changes, with performance trends over time. Predictive analytics can be applied to running processes to, for example, predict which cases will not meet their deadlines, along with some root cause analysis of the problems. Doing this analysis requires that you have information about the cost of the RPA robots, as well as being able to identify which tasks could be automated with RPA. Good integration of process discovery, simulation, analysis and ongoing monitoring.
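For a sense of the arithmetic underneath that kind of what-if scenario, here’s a deliberately simplified sketch; every number is invented, and a real simulation would also model queueing, working schedules and the non-conformant variants:

```python
# Sketch: compare the monthly cost of one activity performed manually vs. by
# RPA robots, the core of the automation what-if described above.
CASES_PER_MONTH = 10_000
TASK_MINUTES_HUMAN = 6.0      # observed handling time per case, manual
TASK_MINUTES_BOT = 1.5        # assumed handling time per case, robot
COST_PER_HOUR_HUMAN = 35.0
COST_PER_HOUR_BOT = 8.0       # amortized robot license + infrastructure

def monthly_cost(minutes_per_case: float, hourly_rate: float) -> float:
    return CASES_PER_MONTH * minutes_per_case / 60 * hourly_rate

manual = monthly_cost(TASK_MINUTES_HUMAN, COST_PER_HOUR_HUMAN)
robot = monthly_cost(TASK_MINUTES_BOT, COST_PER_HOUR_BOT)
print(f"manual: ${manual:,.0f}/month, robots: ${robot:,.0f}/month, "
      f"saving: ${manual - robot:,.0f}/month")
```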

Integration is Still Cool, and Core in your BPM Strategy, PMG.net

Ben Alexander from PMG.net focused on integration within BPM as a key element for driving innovation by increasing the speed of application development: integrating services for RPA, ML, AI, IoT, blockchain, chatbots and whatever other hot new technologies can be brought together in a low-code environment such as PMG. His demo showed a vendor onboarding application, adding a function/subprocess for assessing probability of vendor approval using machine learning by calling AzureML, user task assignment using Slack integration or SMS/phone support through a Twilio connector, and RPA bot invocation using a generic REST API. Nice demo of how to put all of these third-party services together using a BPM platform as the main application development and orchestration engine.

Making Process Personal, Flowable

Paul Holmes-Higgin and Micha Keiner from Flowable presented on their Engage product for customer engagement via chat, using chatbots to augment rather than replace human chat, and modeling the chatbot behavior using standard modeling tools. In particular, they have found that a conversation can be modeled as a case with dynamic injection of processes, with the ability to bring intelligence into conversations, and the added benefit of the chat being completely audited. The demo was around the use case of a high-wealth banking client talking to their relationship manager using chat, with simultaneous views of both the client and relationship manager UI in the Flowable Engage chat interface. The client mentioned that she had moved to a new home, and the RM initiated the change of address process by starting a new case right in the chat, invoking a context-sensitive digital assistant. This provided advice to the RM about address change regulatory rules, and provided a form in situ to collect the address data. The case then progresses through a combination of chat messages for collaboration between the human players, forms filled directly in the chat window, and confirmation by the client via chat after presenting them with the information to be updated. Potential issues, such as compliance regulations due to a country move, are raised to the RM, and related processes execute behind the scenes that involve a compliance officer via a more standard task inbox interface. Once the compliance process completes, the RM is informed via the chat interface. Behind the scenes, there’s a standard address change BPMN diagram, where the chat interface is integrated through service activities. They also showed replacing the human compliance decision with a decision table that was created (and manually edited if necessary) based on a decision tree generated by machine learning on 200,000 historical address change cases; rerunning the scenario skipped the compliance officer step and approved the change instantaneously. Other chat automated tasks that the RM can invoke include setting reminders, retrieving customer information and more using natural language processing, as well as other types of more structured cases and processes. Great demo, and an excellent look at the future of chat interfaces in process and case management.
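The decision table generated by machine learning is the piece that intrigues me most. As a rough sketch of the general technique (my own illustration with invented data, not Flowable’s implementation), a decision tree trained on historical outcomes can be dumped as human-readable rules that become the draft decision table for review and editing:

```python
# Sketch: derive editable approval rules from historical address-change
# decisions using a shallow decision tree.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Tiny stand-in for the 200,000 historical address-change cases.
cases = pd.DataFrame({
    "country_changed":   [0, 0, 1, 1, 0, 1],
    "high_risk_country": [0, 0, 0, 1, 0, 1],
    "approved":          [1, 1, 1, 0, 1, 0],
})
X = cases[["country_changed", "high_risk_country"]]
y = cases["approved"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The readable rules become the draft decision table, which a compliance
# specialist can review and edit before the automated decision goes live.
print(export_text(tree, feature_names=["country_changed", "high_risk_country"]))
```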

Insurance technology: is this very conservative industry finally ready for its close-up?

I’ve worked with insurance clients for a long time, first helping them with automation in their underwriting, policy administration and claims processes, and now helping them with digital transformation to create new business models and platforms. One thing that has always struck me is how behind the times most insurance companies are: usually old companies (by today’s standards), they trend far toward the conservative end of the business and technology innovation scale. However, new entrants to the market have been stirring the pot for a couple of years – such as Lemonade for the urban consumer property insurance market – and it seems that everywhere I look, there’s something popping up about innovation in insurance.

Capgemini has a significant insurance practice, and writes an annual World Insurance Report that is about to be updated for 2018; a couple of their consultants write about different aspects of how insurance is changing and the technology enabling that change. They’ve just started a three-part series on the insurance customer of the future, which echoes some of the points that I made in my recent post on the Alfresco blog about transforming insurance with cloud BPM, and although they use the apocryphal “millennial” definition to describe who these customers are in their first post, they point out four main characteristics:

  • Smart shoppers
  • Lower loyalty
  • Self-centred
  • Caring consumers – which appears contrary to the previous point, but check out their post for a description

They have another post on how new InsurTech models can decrease risk for the insurer, which explains more about the social risk pool models that are used by companies like Lemonade, and how risk can be proactively mitigated through the use of connected devices.

We’re also seeing platform innovation for some insurers, such as Liberty Mutual moving their documents to Alfresco on the AWS cloud. As I’ve experienced for many years, just getting insurance companies to move from paper to digital files can provide huge operational benefits, and moving those files to the cloud allows a global insurer to provide access wherever required. There are a lot of regulatory issues with data sovereignty, that is, where the content is actually stored and what laws/regulations apply to it because of that, but the vendors are starting to solve those problems with regional data centers and secure, encrypted transport. With digital content comes the issue of digital preservation, which John Mancini on the AIIM blog points out is a big issue for financial and insurance companies because of the typically long time span over which they deal with customers: consider that a personal injury insurance claim can go on for years, requiring that all documents be retained for future review. After hearing about one former insurance customer of mine that had a flood in their basement storage, destroying years of customer files, I wished that they had decided to move a bit faster on my advice about digital documents.

Cutting edge technologies such as blockchain are also getting into the insurance mix: blockchain can be used to show proof of insurance, improve transparency and reduce risk of fraud, and speed up claims with smart contracts. I can also imagine that as cars get smarter and insurance companies can tie in directly to the on-board systems, there may be less opportunity for auto repair shop fraud, which reduces overall costs to the insurer and consumer.

If you work in insurance and know that you’re behind the curve, there are a lot of things that you can do to help bring yourself into at least the last century, if not this one:

  • Convert all of your files to digital format at the front end of the process, that is, when they arrive (or are created). This will allow you to automatically extract data from the files, which can then be used for classifying and routing content as it arrives. Files can now be shared by anyone who needs to see them, and there will be no piles of completed documents/files waiting to be scanned at the end of a process. This is a big cultural shift as your workers move from working on paper to working on the screen, but if you give them a couple of big screens and a properly-designed workspace, it can be just as productive as paper.
  • With all of your content arriving in digital form, or being converted to digital immediately on arrival, you can now automate your processes (see the sketch after this list):
    • New policy application? Look up any previous information for this customer, create a new business case, and route to the appropriate underwriter if required. If this is a simple policy, such as consumer renter insurance, it can usually be automatically adjudicated and issued immediately.
    • Policy changes? Extract information from the policy administration system, classify the type of change, and either complete the change automatically or forward to a policy administration clerk.
    • A first notice of loss arriving for a claim? Use that to automatically extract information from your policy administration system, set up a claim in your claims system, and route the claim to the appropriate claims manager. Simple claims, such as auto windshield replacement, can be settled automatically and immediately.
    • Additional documents arriving for a claim? Automatically recognize the document type and claim number, and add to the claim case directly.
  • Find the best ways to integrate your digital content and processes with your legacy systems. This is a huge part of what I do with any insurance customer (really, with any customer at all), and it’s not trivial but can result in huge rewards. This will be some combination of exposing APIs, digging directly into operational databases, RPA to integrate “at the glass”, and other methods that are specific to your environment. In the end, you want to be sure that no one is re-entering data manually from one system to another, even by copy and paste.
  • Automate, automate, automate. In case I haven’t made that clear already. There should be no such thing as manual work assignment or routing, except in special cases. Data exchange with legacy systems should be automated. Decisions should be automated where possible, or at least used to make recommendations to workers. Incorporate artificial intelligence and machine learning to learn how your most skilled workers make decisions, align that with your policies and regulatory compliance, and use as input to automated decisions and recommendations. The workers will be left doing the work that actually requires a person to do it, not all of the low-level administrative work.
  • Use some type of low-code application development platform that allows semi-technical business analysts – there are a ton of these working in insurance business areas – to create their own situational apps.
  • Now that you have your operational processes sorted out, start looking for new ways to leverage your digital content and processes:
    • Interact with reinsurers and other business partners using digital content and processes, not paper files and faxes.
    • Provide customers with the option for completely paperless policy application, issuance and renewal. Although I’m far from being a millennial in age, the huge stack of paper sent by my previous home insurer on renewal was a key reason that I ran directly towards an online insurer that could do it all without paper.
    • Streamline claims processes, automating where possible. Many insurance companies don’t spend a lot of time fixing their claims processes, preferring to spend their time on attracting new customers; however, in this age of online consumer reviews, an inefficient claims process is going to hit hard. Automating claims also reduces operational costs: claims managers are highly skilled, and it can take 6-12 months to train a new one.
    • Automate and streamline your ancillary processes that support the main processes, such as recovery of assets, and negotiating contracts with preferred repair vendors.
    • Build in process monitoring, and provide automated dashboards and reports to different levels of management. As well as giving management a real-time view of operations, this reduces the time that line supervisors spend manually compiling reports. It also, amazingly, will reduce the amount of time that individual workers spend tracking their own work: in many of the insurance companies that I visit, claims managers and other front-line workers keep a manual log of their work because they don’t trust the system to do it for them.
  • Tie your process performance back to business goals: loss ratio, customer satisfaction, regulatory SLAs (such as communicating with customer in a timely manner), net promoter score, fraud rate, closure rate. It’s too easy to get bogged down in making a particular activity or process more efficient when it shouldn’t even be done at all. Although you can use your existing procedures guides as a starting point for your new digital processes, you really need to link everything back to the high-level goals rather than just paving the cow paths.
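To make the automation bullets above a bit more concrete, here’s a toy sketch of the classify-and-route pattern; the document types, field names and outcomes are placeholders, and in a real implementation the classification and data extraction would come from your capture engine:

```python
# Sketch: route freshly digitized documents, settling or issuing the simple
# cases straight through and sending the rest to the right queue.
from dataclasses import dataclass

@dataclass
class Document:
    doc_type: str   # e.g. "NEW_APPLICATION", "FNOL", "CLAIM_SUPPORT"
    fields: dict    # data extracted at the front end of the process

def handle(doc: Document) -> str:
    if doc.doc_type == "NEW_APPLICATION" and doc.fields.get("product") == "renter":
        return "auto-adjudicated and policy issued"        # simple policy: no underwriter
    if doc.doc_type == "FNOL":
        if doc.fields.get("loss_type") == "windshield":
            return "claim created and settled automatically"
        return "claim created, routed to claims manager"
    if doc.doc_type == "CLAIM_SUPPORT":
        return f"added to claim case {doc.fields.get('claim_number', '(unknown)')}"
    return "routed to manual indexing queue"               # fallback for everything else

print(handle(Document("FNOL", {"loss_type": "windshield"})))
print(handle(Document("CLAIM_SUPPORT", {"claim_number": "C-1234"})))
```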

This started out as a short post because I was seeing a flurry of insurance-related items in my news feed, and grew into a bit of a monster as I thought of my own experiences with insurance customers over the past couple of years. Nonetheless, likely some useful tidbits in here.

Pairing @UiPath and ABBYY for image capture within RPA

Andrew Rayner of UiPath presented at the ABBYY Technology Summit on robotic process automation powered by ABBYY’s FineReader Engine (FRE). He started with a basic definition of RPA — emulating human execution of repetitive processes with existing applications — and the expected benefits: high scalability and reductions in errors, costs and cycle time. RPA products work really well with text on the screen, copying and pasting data between applications, and many are using machine learning to train and improve their automated actions, making them more than the simpler old-school “screen scraping” that depended purely on field locations on the screen.

What RPA doesn’t do, however, is work with images; that’s where ABBYY FRE comes in. UiPath gives developers using UiPath Studio the ability to OCR images as part of the RPA flow: an image is passed to FineReader for recognition, then an XML file of the recognized data is returned in order to complete the next robotic steps. Note that “images” may be scanned documents, but can also be virtualized screens that don’t transfer data fields directly, just display the screen as an image, such as you might have with an application running in Citrix — this is a pretty important capability that eludes standard RPA.

Rayner walked through an example of invoice processing (definitely the most common example used in all presentations here, in part because of ABBYY’s capabilities in invoice recognition): UiPath grabs the scanned documents and drops them in a folder for ABBYY; FRE does the recognition pass and creates the output XML files as well as managing the human verification step, including applying machine learning on the human interaction to continuously improve the recognition as we heard about yesterday; then finally, UiPath pushes the results into SAP for completing the payment process.
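That handoff is essentially file-based, which makes it easy to sketch; the paths, polling and XML layout below are my own assumptions for illustration, since the actual FRE project configuration wasn’t shown in detail:

```python
# Sketch: drop a scan where the recognition engine picks it up, wait for the
# result XML, and parse out the recognized fields for the next robotic step.
import shutil
import time
import xml.etree.ElementTree as ET
from pathlib import Path

HOT_FOLDER = Path("/capture/in")    # assumed: recognition engine watches this folder
OUT_FOLDER = Path("/capture/out")   # assumed: result XML files appear here

def submit(scan: Path) -> Path:
    """Copy a scanned image into the hot folder and wait for the result XML."""
    shutil.copy(scan, HOT_FOLDER / scan.name)
    expected = OUT_FOLDER / (scan.stem + ".xml")
    while not expected.exists():    # recognition plus human verification can take a while
        time.sleep(2)
    return expected

def parse_invoice(xml_path: Path) -> dict:
    """Assumed result layout: <Field name="...">value</Field> elements."""
    root = ET.parse(xml_path).getroot()
    return {f.get("name"): (f.text or "").strip() for f in root.iter("Field")}

fields = parse_invoice(submit(Path("invoice_0042.tif")))
# ...the next robotic step would push `fields` into SAP to complete the payment.
```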

For solution developers working with RPA and needing to integrate data captured from images or virtualized screens, this is a pretty compelling advantage for UiPath.

ABBYY partnerships in ECM, BPM, RPA and ERP

It’s the first session of the last morning of the ABBYY Technology Summit 2017, and the crowd is a bit sparse — a lot of people must have had fun at the evening event last night — and I’m in a presentation by another ex-FileNet colleague of mine, Carl Hillier.

He discussed how capture isn’t just a discrete operation any more, where you capture, index and store documents in a content management repository; it’s now the front end to business processes that have the potential for digital transformation. To that end, since ABBYY has no plans to expand beyond capture into those downstream capabilities, they have made strategic partnerships with a number of vendors that push into downstream processes: M-Files and Laserfiche for content management, Appian and Pega (still in the works) for BPM, and Acumatica for ERP. As with many technology partnerships, there can be overlap in capabilities, but that usually sorts out in favor of the specialist vendor: with Laserfiche, for example, ABBYY replaces Laserfiche’s simpler OCR for customers that need more complex capture capabilities. Both BPM vendors have RPA capabilities — Appian through a partnership with Blue Prism, Pega through their OpenSpan acquisition — and there’s a session following this one by RPA vendor UiPath on using ABBYY for RPA that likely has broader implications for working with these other partners.

For the solution builders who use ABBYY’s FlexiCapture, the connectors to these other products give them a fast path to implementation, although they can also use the ABBYY SDK directly to create solutions that include competing products. We saw a bit about each of the ABBYY connectors to the five strategic partners, and how they take advantage of those platforms’ capabilities: with Appian, for example, a capture operator uses FlexiCapture to scan/import and verify documents, then the connector maps the structured data directly into Appian’s data objects (records), whereas for one of the content management platforms, the connector may transfer a smaller subset of document indexing data. The Acumatica integration is a bit different, in that FlexiCapture isn’t a separate application for the capture front end, but is embedded within the Acumatica interface as an invoice capture service.

ABBYY’s plan is to create more of these connectors, making it easier for their VARs and solution partners (who are the primary attendees at this conference) to quickly build solutions with ABBYY and a variety of platforms.

RPA just wants to be free: @WorkFusion RPA Express

Last week, WorkFusion announced that their robotic process automation product, RPA Express, will be released in 2017 as a free product; they published a blog post as well as the press release, and today they hosted a webinar to talk more about it. They are taking requests to join the beta program now, with a plan to launch publicly at the end of Q1 2017.

WorkFusion has a lot of interesting features in their RPA Express and Smart Process Automation (SPA) products, but today’s webinar was really about their business model for RPA Express. This is not a limited-time trial, it’s a free enterprise-ready product that can generate business benefit. Free to purchase and no annual maintenance fees, although you obviously have infrastructure costs for the servers/desktops on which RPA Express runs. Their goal in making it free is to bypass the whole RFP-POC-ROI dance that goes on in most organizations, where a decision to implement RPA – which typically can show a pretty good ROI within a matter of weeks – can take months. With a free product, one major barrier to implementation has been removed.

So what’s the catch? WorkFusion has a more intelligent automation tool, SPA, and they’re hoping that by seeing the benefits of using RPA Express, organizations will want to try out SPA on their more complex automation needs. RPA Express uses deterministic, rules-based automation, which requires explicit training or specification of each action to be taken; SPA uses machine learning to learn from user actions in order to perform automation of tasks that would typically require human intervention, such as handling unstructured and dynamic data. WorkFusion envisions a “stairway to digital operations” that starts with RPA, then steps up the intelligence with cognitive processing, chatbots and crowdsourcing to a full set of cognitive services in SPA.

This doesn’t mean that RPA Express is just a “starter edition” for SPA: there are entire classes of processes that can be handled with deterministic automation, for example synchronizing data between systems that may not talk to each other, such as SAP and Salesforce. This replaces having a worker copy and paste information between screens, or (horrors!) re-type the information in two or more systems; it can result in a huge reduction in cost and time, and remove the tedious work from people to free them up for more complex decision-making or direct customer interaction.

RPA Express bots can also be called from other orchestration and automation tools, including a BPMS, and can run on a server or on individual desktops. We didn’t get a rundown of the technology, so more to come on that as they get closer to the release. We did see one or two screens, and it’s based on modeling processes using a subset of BPMN (start and end events, activities, XOR gateways) that can be easily handled by a business user/analyst to create the automation flows, plus using recorder bots to capture actions while users are running through the processes to be automated. There was a mention of coding on the platform as well, although it was clear that this was not required in many cases, hence development skills are not essential.

Removing the cost of software changes the game, allowing more organizations to get started with this technology without having to do an internal cost justification for the licensing costs. There are still training and implementation costs, but WorkFusion plans to provide some of this through online video courses, as well as having the large SIs and other partners trained to have this in their toolkit when they are working with organizations. More daunting is the architectural review that most new software needs to go through before being installed within a larger organization: this can still block implementation even if the software is free.

I’m looking forward to seeing a more complete technical rundown when the product is closer to its launch date. I’m also looking forward to seeing how this changes the price point of RPA from other vendors.

OpenSpan at PegaWorld 2016: RPA meets BPM

Less than two months ago, Pega announced their acquisition of OpenSpan, a software vendor in the robotic process automation (RPA) market. That wasn’t my first exposure to OpenSpan, however: I looked at them eight years ago in the context of mashups. Here at PegaWorld 2016, we’re getting a first peek at the unified roadmap on how Pega and OpenSpan will fit together. Also, a whole new mess of acronyms.

I’m at the OpenSpan session at PegaWorld 2016, although some of these notes date from the analyst briefing back in April. Today’s presentation featured Anna Convery of Pega (formerly OpenSpan); Robin Gomez, Director of Operational Intelligence at Radial (a BPO), providing an introduction to RPA; and Girish Arora, Senior Information Officer at AIG, on their use of OpenSpan.

Back in the 1990s, a lot of us who were doing integration of BPM systems into enterprises used “screen scraping” to push commands to and pull data from the screens of legacy systems; since the legacy systems didn’t support any sort of API calls, our apps had to pretend to be a human worker to allow us to automate integration between systems and even hide those ugly screens. Gomez covered a good history of this, including some terms that I had hoped to never see again (I’m looking at you, HLLAPI). RPA is like the younger, much smarter offspring of screen scraping: it still pushes and pulls commands and data, automating desktop activities by simulating user interaction, but it’s now event-driven, incorporating rules and machine learning.

As with BPM and other process automation, Gomez talked about how the goal of RPA is to automate repeatable tasks, reduce error rates, improve standardization, reduce requirement for knowledge about multiple systems, shorten worker onboarding time, and create a straight-through process. At Radial, they were looking for the combination of robotic desktop automation (RDA) that provides personal robots to assist workers’ repetitive tasks, and RPA that completely replaces the worker on an unattended desktop. I’m not sure if every vendor makes a distinction between what OpenSpan calls RDA and RPA; it’s really the same technology, although there are some additional monitoring and virtualization bits required for the headless version.

OpenSpan provides the usual RPA desktop automation capabilities, but also includes the (somewhat creepy) ability to track and analyze worker behavior: basically, what they’re typing into what application in what context, and present it in their Opportunity Finder. This information can be mined for patterns in order to understand how people do their job — much the way that process mining works, but based on user interactions rather than system log files — and automate the parts that are done the same way each time. This can be an end in itself, or a stepping stone to a replacement of the desktop apps entirely, providing interim relief while a full Pega BPM/CRM implementation is being developed, for example. Furthermore, the analytics about the user activities on the desktop can feed into requirements for any replacement initiative, both the general flow as well as an analysis of the decisions made based on what data was presented.

OpenSpan and Pega aren’t (exactly) competitive technologies: OpenSpan can be used for desktop automation where replacement is not an option, or as a quick fix while performing desktop process discovery to accelerate a full Pega desktop replacement project. OpenSpan paves the cowpaths, while a Pega implementation is usually a more fundamental innovation that may not be warranted in all situations. I can also imagine scenarios where a current Pega customer uses OpenSpan to automate the interaction between Pega and legacy applications that still exist on the desktop. From a Pega sales standpoint, OpenSpan may also act as the camel’s nose in the tent to get into net new clients.

There are a wide variety of use cases, some of them saving just a few minutes but applicable to thousands of workers (e.g., logging in to multiple systems each morning), others replacing a significant portion of knowledge work for a smaller number of workers (e.g., financial reconciliations). Arora talked about what they have done at AIG, in the context of processes that require a mix of human-required and fully automatable steps; he sees their opportunity as moving from RDA (where people are still involved, gaining 10-20% in efficiency) to RPA (fully automated, gaining 40-50% efficiency). Of course, they could just swap out their legacy systems for something that was built this century, but that’s just too difficult to change — expensive, risky and time-consuming — so they are filling in the automation gaps using OpenSpan. They have RDA running on every desktop to assist workers with a variety of tasks ranging from simple to complex, and want to start moving some of those to RPA to roll out unattended automation.

OpenSpan is typically deployed without automation to start gathering user analytics, with initial automation of manual procedures within a few weeks. As Pega cognitive technologies are added to OpenSpan, it should be possible for the RPA processes to continue to recognize patterns and recommend optimizations to a worker’s flow, becoming a sort of virtual personal assistant. I look forward to seeing some of that as OpenSpan is integrated into the Pega technology family.

OpenSpan is Windows-only .NET technology, with no plans to change that at the time of our original analyst briefing in April. We’ll see.