Automating claims and improving lives in Africa – aYo at #CommunityLIVE

After hearing Heidi Badenhorst of aYo Holdings speak this morning at the Hyland CommunityLIVE 2022 general session, I knew that I wanted to see her breakout session for more details on what they’re doing. I use microinsurance as an example of a new business model that insurance companies can consider once they’ve automated a lot of their processes (otherwise, it’s not cost-effective), but this is the first chance that I’ve really had to hear more about microinsurance in action.

aYo provides low-cost hospital and life insurance (as well as a few other types) for more than 17M people across several African countries, with the goal of scaling up to more than 100M customers. As with a lot of other businesses expanding into developing countries, customers use their mobile phones to interact with aYo’s insurance products: mobile money for receiving payments, and WhatsApp chatbots for gathering information and submitting documents. aYo is owned by MTN, the largest mobile provider in Africa, and the insurance service started as a loyalty benefit for mobile customers.

Microinsurance is about tiny premiums and small payouts — trivial amounts by wealthy-country standards, but a day’s pay in many African markets — and the only way to do this effectively is to maximize the amount of automation. Medical records are rudimentary, often handwritten and lacking standard treatment or claim codes, which makes claims difficult to automate and vulnerable to fraud.

They have been managing all of this with manual processes (including manual downloads of documents) and spreadsheets, but are moving to a greater degree of automation using Alfresco Process Automation (APA) and other components to pay 80% of the claims without human intervention. Obviously, they need content management and intelligent capture as well, but the content-centric process orchestration and AI for fraud detection are key to automation. They also needed a cloud solution to support their multi-national operations, and something that integrated well with their claims system. Since their solution is tightly integrated with the phone network, they can use location data from the claim to correlate with hospital locations as another potential anti-fraud check. They’re also using behavioral data from how their customers interact with WhatsApp to optimize their customer experience.
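The location correlation is worth making concrete, since it shows how much anti-fraud signal falls out of the phone network essentially for free. Here’s a minimal sketch of such a check, assuming hypothetical coordinates and a distance threshold I’ve invented for illustration — this is not aYo’s actual implementation:

```python
# Hypothetical sketch of a location-based anti-fraud check: flag a claim if the
# phone's location at submission time is implausibly far from the hospital named
# in the claim. The threshold and coordinates are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def location_fraud_flag(claim_location, hospital_location, threshold_km=50.0) -> bool:
    """Return True if the claim should be routed to a human fraud reviewer."""
    return haversine_km(*claim_location, *hospital_location) > threshold_km

# Example: claim submitted from Kampala, hospital located in Entebbe (~35 km apart)
print(location_fraud_flag((0.3476, 32.5825), (0.0512, 32.4637)))  # False: within threshold
```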

We saw a video of what a claim looks like from the customer side — WhatsApp chatbot with links for uploading documents — as well as the internal aYo operations side in more conventional Alfresco workspaces and dashboards. This was really inspirational on a number of levels. First of all, just from a business and technology standpoint, they’re doing a great job of improving their business through automation. More importantly, they are using this to allow for cost-effective processing of very small claims, and thereby enabling coverage for millions of people who have never previously had access to insurance. Truly, a transformational business model for insurance.

CommunityLIVE 2022 Day 2 general session

I’ll be heading home this afternoon, but wanted to grab a couple of the morning sessions while I’m here in Nashville. Nashville is really a music city, and we’ve started off each day with live music from the main stage, as well as at the evening event last night. Susan deCathelineau, Hyland’s Chief Customer Success Officer, kicked things off with a review of some of the customer support and services improvements that they have made in response to customer feedback, and how the recent acquisitions and product improvements have resonated with customers. Sticking with the “voice of the customer” theme, Ed McQuiston, Chief Commercial Officer, hosted a panel of customers in a “Late Morning Show” format.

His guests were Heidi Badenhorst, Group Head of Strategy and Special Projects at aYo Holdings (South African micro-insurance provider); Adam Podber, VP of Digital Experience at PVH (a fashion company that owns brands such as Tommy Hilfiger and Calvin Klein); and Kim Ferren, Senior AP Manager at Match (the online dating company).

Badenhorst spoke first about how aYo is trying to bridge the financial gap by providing insurance to the low end of the market, especially health insurance for people who have no other support network in situations when they can’t work (and therefore feed their families). They use Alfresco to automatically capture and store medical documents directly from customers (via WhatsApp), and plan to automate the (still manual) claims processing using rules and process in the future. This is such an exciting application of automation, and exactly the type of thing that I spoke about yesterday in my presentation: what new business models are possible once we automate processes. I’m definitely going to hit her breakout session later this morning.

Podber talked about their experience with Nuxeo for digital asset management, moving from 17 DAMs across different regions to a consolidated environment that offers a different user experience depending on the user’s role and interests. With a number of different brands and a huge number of products within each brand, this gives them a much more effective way to manage their product information.

Ferren was there to talk about accounts payable, but first we were shown a hilarious Match.com ad in which Satan and 2020 go on a date in all the empty places that we couldn’t go back then, steal some toilet paper, and end up posing in front of a dumpster fire. Match is an OnBase customer, and although AP isn’t necessarily a sexy application, it’s a critical part of any business — one of my first imaging and workflow project implementations back in the 1990s was AP, and I learned a lot about how it works. Match used to combine Workday, Great Plains, NetSuite and several other local systems across their different geographic regions; now it’s primarily Workday, with Hyland providing integrated support and Brainware intelligent capture.

There was a good conversation amongst the panelists about lessons learned and what they are planning to do going forward; expect some good breakout sessions from each of these companies with more details about what they’re doing with Hyland products.

Using Digital Intelligence to Navigate the Insurance Industry’s Perfect Storm: my upcoming webinar with @ABBYY_Software

I have a long history of working with insurance companies on their digitization and process automation initiatives, and there are a lot of interesting things happening in insurance as a result of the pandemic and associated lockdown: more automation of underwriting and claims, increased use of digital documents instead of paper, and the search for the “new normal” in insurance processes as we move to a world where the workforce will remain, at least in part, distributed for some time to come. At the same time, there are increases in some types of insurance business activity and decreases in others, requiring reallocation of resources.

On June 17, I’ll be presenting a webinar for ABBYY on some of the ways that insurance companies can navigate this perfect storm of business and societal disruption using digital intelligence technologies including smarter content capture and process intelligence. Here’s what we plan to cover:

  • Helping you understand how to transform processes, instead of falling into the trap of just automating existing, often broken processes
  • Getting your organization one step ahead of your competition with the latest content intelligence capabilities that help transform your customer experience and operational effectiveness
  • Completely automating your handling of essential documents used in onboarding, policy underwriting, claims, adjudication, and compliance
  • Gaining a direct, real-time view of your processes as they execute, to discover where bottlenecks and repetition occur, where content needs to be processed, and where automation can be most effective (see the sketch after this list)
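To give a flavour of that last point, here’s a toy sketch of the kind of bottleneck discovery that process intelligence performs over an event log. The log format, column layout and numbers are all invented for illustration — this is not a specific ABBYY API:

```python
# Minimal sketch: given an event log of (case, activity, start, end) records,
# compute the average waiting time before each activity; the longest average
# wait marks the likely bottleneck. All data here is a toy claims log.
from collections import defaultdict
from datetime import datetime

log = [
    ("c1", "capture",    "2021-06-01 09:00", "2021-06-01 09:05"),
    ("c1", "underwrite", "2021-06-01 13:00", "2021-06-01 13:30"),
    ("c2", "capture",    "2021-06-01 09:10", "2021-06-01 09:12"),
    ("c2", "underwrite", "2021-06-02 09:00", "2021-06-02 09:20"),
]

fmt = "%Y-%m-%d %H:%M"
waits = defaultdict(list)
last_end = {}
for case, activity, start, end in sorted(log, key=lambda e: (e[0], e[2])):
    if case in last_end:  # waiting time = gap since the previous activity finished
        waits[activity].append(
            (datetime.strptime(start, fmt) - last_end[case]).total_seconds() / 3600)
    last_end[case] = datetime.strptime(end, fmt)

for activity, hours in waits.items():
    print(f"{activity}: avg wait {sum(hours) / len(hours):.1f} h")  # underwrite: 13.9 h
```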

Register at the link, and see you on the 17th.

OpenText Analyst Summit 2020 day 2: content services

Fred Sass, Marc Diefenbruch and Michael Cybala presented a breakout session on the content services portfolio. OpenText has two main content services platforms: their original Content Suite, and Documentum, acquired in 2016; both appear to be under active development. They also list Extended ECM as a “content services platform”, although my understanding is that it’s a layer that abstracts Content Suite (and, to a lesser extent, Documentum) and links it into other business workplaces. I’m definitely not the best source of information on OpenText content services platform architecture.

In many cases, their Content Suite is not accessed via an OpenText UI, but is served up as part of some other digital workplace — e.g., SAP, Salesforce or Microsoft Teams — with deep integration into that environment rather than just a simple link to a piece of content. This is done via their Extended ECM product line, which includes connectors for SAP, Microsoft and other environments. They are starting to build out Extended ECM Documentum to allow the same type of access via other business environments, but to Documentum D2 rather than Content Suite. They are integrating Core Share in the same way with Salesforce, allowing for secure sharing of content with external participants.

They discussed the various cloud options for OpenText content (off cloud, public cloud, managed services on OpenText private cloud, managed services on public cloud, SaaS cloud), as well as some general benefits of containerization. They use Docker containers on Kubernetes, which means that they can deploy on any cloud platform as well as an on-premise environment. They also have a number of content-related services available in the OT2 SaaS microservices environment, including Core Share and Core Capture applications and the underlying capture and content services. Core has been integrated with a number of different SaaS applications (e.g., SAP SuccessFactors) for document capture, storage and generation.

The third topic covered in the session was intelligent automation, including AI-powered intelligent categorization and filing of documents with Magellan. We saw a demo of Core Capture with machine learning, where document classification and field recognition on the first pass of a document type were corrected manually, then the system performed correct recognition on a subsequent similar document. A second demo showed a government use case, where a captured document created a case management scenario in Extended ECM that is essentially a template-based document approval workflow with a few case management features, including the ability to dynamically add steps and participants. As we got a bit deeper into the workflow, it was revealed to be OpenText Process Suite, as part of AppWorks.
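That correct-and-relearn loop is easy to illustrate in generic terms. Here’s a minimal sketch using scikit-learn’s incremental training — a stand-in for the idea only, with invented document text, and in no way OpenText’s actual stack:

```python
# Toy sketch of learn-by-correction document classification: the model misfiles
# a document, a human corrects it, and the correction is fed back with
# partial_fit so the next similar document is classified correctly.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)
model = SGDClassifier(loss="log_loss")
classes = ["invoice", "contract"]

# Initial training pass on a handful of labelled documents
docs = ["invoice total amount due", "contract parties agree terms"]
model.partial_fit(vectorizer.transform(docs), ["invoice", "contract"], classes=classes)

new_doc = "remittance total amount payable"
predicted = model.predict(vectorizer.transform([new_doc]))[0]
if predicted != "invoice":  # a verification user corrects the result...
    model.partial_fit(vectorizer.transform([new_doc]), ["invoice"])  # ...and the model learns

print(model.predict(vectorizer.transform(["total amount payable on this invoice"])))
```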

Lastly, we looked at information governance, which is seeing renewed interest due to privacy concerns and compliance-related legislation. They have a new solution, Core for Federated Compliance, that provides centralized records oversight and policy management across multiple platforms and repositories. It currently links only to their own content repositories, but they have plans to extend it to other content sources such as file shares.

There’s another breakout plus a wrap-up Q&A with the executive leadership team, but this is the end of my coverage of the 2020 OpenText Analyst Summit. If something extraordinary happens in either of those sessions, I’ll tweet about it.

Upcoming webinar on digital transformation in financial services featuring @BPMdotcom and @ABBYY_USA – and my white paper

There’s something strange about receiving an email about an upcoming webinar, featuring two people who I know well…

…then scrolling down to see that ABBYY is featuring the paper that I wrote for them as follow-on bonus material!

Nathaniel Palmer and Carl Hillier are both intelligent speakers with long histories in the industry; tune in to hear them talk about the role that content capture and content analytics play in digital transformation.

Intelligent Capture für die digitale Transformation: my intelligent capture paper for @ABBYY_Software, now in German

A little over a year ago, I wrote a paper on intelligent capture for digital transformation, sponsored by ABBYY, and gave a keynote at their conference on the same topic. The original English version is on their site here, and if you read German (or want to pass it along to German-speaking colleagues), you can find the German version here. As usual, this paper is not about ABBYY’s products, but about how intelligent capture is the on-ramp for any type of automated processes and hence required for digital transformation. From the abstract:

Data capture from paper or electronic documents is an essential step for most business processes, and often is the initiator for customer-facing business processes. Capture has traditionally required human effort – data entry workers transcribing information from paper documents, or copying and pasting text from electronic documents – to expose information for downstream processing. These manual capture methods are inefficient and error-prone, but more importantly, they hinder customer engagement and self-service by placing an unnecessary barrier between customers and the processes that serve them.

Intelligent capture – including recognition, document classification, data extraction and text analytics – replaces manual capture with fully-automated conversion of documents to business-ready data. This streamlines the essential link between customers and your business, enhancing the customer journey and enabling digital transformation of customer-facing processes.

Or, in German:

Die Erfassung von Daten aus papierbasierten oder elektronischen Dokumenten steht als zentraler Schritt am Anfang zahlreicher kundenorientierter Geschäftsprozesse. Dies ist üblicherweise mit großem manuellen Aufwand verbunden – Mitarbeiter übertragen und kopieren per Hand Daten und Texte, um sie so nachgelagerten Systemen und Prozessen zur Verfügung zu stellen. Diese manuelle Vorgehensweise ist jedoch nicht nur ineffizient und fehleranfällig, sie bremst auch den Kundendialog aus und verhindert Self-Service-Szenarien durch unnötige Barrieren zwischen Kunden und Dienstleistern.

Intelligent-Capture-Lösungen – mit Texterkennung, Dokumentenklassifizierung, Datenextraktion und Textanalyse – ersetzen die manuelle Datenerfassung. Dokumente werden vollautomatisch in geschäftlich nutzbare Daten umgewandelt. So können Unternehmen die Beziehung zu ihren Kunden stärken, das Benutzererlebnis steigern und die digitale Transformation kundenorientierter Prozesse vorantreiben.

Recently, I was interviewed by KVD, a major European professional association for customer service professionals. Although most of their publication is in German, the interview was in English, and you can find it on their site here.

ITESOFT | W4 Secure Capture and Process Automation digital business platform

It’s been three years since I looked at ITESOFT | W4’s BPMN+ product, which was prior to W4’s acquisition by ITESOFT. At that time, I had just seen W4 for the first time at bpmNEXT 2014, and had this to say about it:

For the last demo of this session, Jean-Loup Comeliau of W4 demonstrated their BPMN+ product, which provides model-driven development using BPMN 2, UML 2, CMIS and other standards to generate web-based process applications without generating code: the engine interprets and executes the models directly. The BPMN modeling is pretty standard compared to other process modeling tools, but they also allow UML modeling of the data objects within the process model; I see this in more complete stack tools such as TIBCO’s, but this is less common from the smaller BPM vendors. Resources can be assigned to user tasks using various rules, and user interface forms are generated based on the activities and data models, and can be modified if required. The entire application is deployed as a web application. The data-centricity is key, since if the models change, the interface and application will automatically update to match. There is definitely a strong message here on the role of standards, and how we need more than just BPMN if we’re going to have fully model-driven application development.

A couple of weeks ago, I spoke with Laurent Hénault and François Bonnet (the latter of whom I met when he demoed at bpmNEXT in 2015 and 2016) about what’s happened with their product since then. From their company’s beginnings over 30 years ago in document capture and workflow, they have expanded their platform capabilities and relabelled it as digital process automation since it goes beyond BPM technology, a trend I’m seeing with many other BPM vendors. It’s not clear how many of their 650+ customers are using the full capabilities of the new platform versus just the traditional imaging and workflow functions, but they seem to be expanding on the original capabilities rather than replacing them, which will make transitioning customers easier.

The new platform, Secure Capture and Process Automation (SCPA), provides capabilities for capture, business automation (process, content and decisions), analytics and collaborative modeling, and adds some nice extras in the areas of document recognition, fraud detection and computer-aided process design. Using the three technology pillars of omni-channel capture, process automation, and document fraud detection, they offer several solutions, including eContract for paperless customer purchase contracts, with automatic fraud detection on documents uploaded by the customer; and the cloud-based Streamline for Invoices for automated invoice processing.

Their eContract solution provides online forms with e-signature, document capture, creation of an eIDAS-compliant contract and other services required to complete a complex purchase contract bundled into a single digital case. The example shown was an online used car purchase with the car loan offered as part of the contract process: by bundling all components of the contract and the loan into a single online transaction, they were able to double the purchase close rate. Their document fraud detection comes into play here, using graphometric handwriting analysis and content verification to detect if a document uploaded by a potential customer has been falsified or modified. Many different types of documents can be analyzed for potential fraud based on content: government ID, tax forms, pay slips, bank information, and public utility invoices may contain information in multiple formats (e.g., plain text plus encoded barcode); other documents such as medical records often contain publicly-available information such as the practitioner’s registration ID. They have a paper available for more information on combatting incoming document fraud.
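As a concrete illustration of the multiple-formats point: when a document carries the same data both as printed text and machine-encoded in a barcode, a simple cross-check exposes tampering. The sketch below is my own illustration with invented field names and stubbed-out decoders, not ITESOFT’s API:

```python
# Illustrative content-verification check: compare fields decoded from a
# document's barcode against the same fields read from its printed text via
# OCR; any mismatch suggests the printed content was falsified.

def decode_barcode(image) -> dict:
    # Stand-in for a real barcode decoder (e.g., pyzbar); returns encoded fields.
    return {"name": "J. SMITH", "date_of_birth": "1980-04-02", "doc_no": "X123456"}

def ocr_fields(image) -> dict:
    # Stand-in for OCR plus field extraction on the printed text of the document.
    return {"name": "J. SMITH", "date_of_birth": "1980-04-03", "doc_no": "X123456"}

def verify_document(image) -> list[str]:
    """Return the list of fields where the barcode and the printed text disagree."""
    encoded, printed = decode_barcode(image), ocr_fields(image)
    return [f for f in encoded if encoded[f] != printed.get(f)]

mismatches = verify_document(image=None)
if mismatches:
    print(f"Possible falsification, mismatched fields: {mismatches}")  # ['date_of_birth']
```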

Their invoice processing solution also relies heavily on understanding certain types of documents: 650,000 different supplier invoice types are recognized, and they maintain a shared supplier database in their cloud capture environment to allow these formats to be added and modified for use by all of their invoice processing customers. There’s also a learning environment to capture new invoice types as they occur. Keep in mind that the heavy lifting in invoice processing is all around interpreting the vendor invoice: once you have that sorted out, the rest of the process of interacting with the A/P system is straightforward, and the payment of most invoices that relate to a purchase order can be fully automated. Streamline for Invoices won the Accounts Payable/Invoicing product of the year at the 2017 Document Manager Awards.
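To see why PO-related invoices are the easy part, consider what’s left once capture has produced clean data: a tolerance check against the purchase order. This is a generic two-way-match sketch with invented systems and thresholds, not a description of Streamline’s internals:

```python
# Minimal two-way match of a captured invoice against its purchase order:
# anything that matches within tolerance is auto-approved; everything else
# routes to a human. PO_DB stands in for a real purchasing system.

PO_DB = {"PO-1001": {"supplier": "Acme GmbH", "total": 1200.00}}

def match_invoice(invoice: dict, tolerance: float = 0.01) -> str:
    """Return the routing decision for a captured invoice."""
    po = PO_DB.get(invoice["po_number"])
    if po is None:
        return "route to A/P clerk: unknown PO"
    if po["supplier"] != invoice["supplier"]:
        return "route to A/P clerk: supplier mismatch"
    if abs(po["total"] - invoice["total"]) > tolerance * po["total"]:
        return "route to A/P clerk: amount outside tolerance"
    return "auto-approve for payment"

print(match_invoice({"po_number": "PO-1001", "supplier": "Acme GmbH", "total": 1195.00}))
# -> auto-approve for payment (within the 1% tolerance)
```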

After a discussion of their solutions and some case studies, we dug into a more technical demo. A few highlights:

  • The Web Modeler provides a fully BPMN-compliant collaborative process modeling environment, with synchronous model changes and a (persistent) discussion thread between users. This is a standalone business analyst tool, and the model must be exported as a BPMN file for import to the engine for execution, so there’s no round-tripping. A cool feature is the ability to scroll back through the history of changes to the model by dragging a timeline slider: each changed snapshot is shown with the specific author.
  • Once the business analyst’s process model has been imported into the BPMN+ Composer tool, the full application can be designed: data model, full process model, low code forms-based user experience, and custom code (if required). This allows a more complex BPMN model to be integrated into a low code application – something that isn’t allowed by many of the low code platforms that provide only simple linear flows – as well as developer code for “beyond the norm” integration such as external portals.
  • Supervisor dashboards provide human task monitoring, including task assignment rules and skills matrix that can be changed in real time, and performance statistics.

The applications developed with their tools generally fall into the case management category, although they are document/data-driven rather than CMMN-based. Like many BPM vendors, they are finding that there is not the same level of customer demand for CMMN as there was for BPMN, and data-driven case management paradigms are often more understandable to business people.

They’ve OEM’d some of the components (the capture OCR, which is from ABBYY, and the web modeler from another French company) but put them together into a seamless offering. The platform is built on a standard Java stack; some of the components can be scaled independently and containerized (using Microsoft Azure), allowing customers to choose which data should exist on which private and public cloud infrastructure.


They also showed some of the features that they demoed at the 2017 bpmNEXT (which I unfortunately missed): process guidance and correction that goes beyond just BPMN validation to attempt to add data elements, missing tasks, missing pathways and more; a Gantt-type timeline model of a process (which I’ve seen in BPLogix for years, but is sadly absent in many products) to show expected completion times and bottlenecks; and the same visualization directly in a live instance, auto-updating as tasks are completed within the instance. I’m not sure if these features are fully available in the commercial product, but they show some interesting views on providing automated assistance to process modeling.
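The computation behind that kind of timeline view is worth a quick sketch: the earliest finish of each task is its duration plus the latest finish among its predecessors, and the path that dominates that maximum is the bottleneck. The process and durations below are invented for illustration — this isn’t ITESOFT’s code:

```python
# Critical-path sketch for a Gantt-style process timeline: recursively compute
# each task's earliest finish time from its predecessors. All data is invented.

tasks = {  # task: (expected duration in hours, predecessor tasks)
    "capture":  (1, []),
    "validate": (2, ["capture"]),
    "approve":  (8, ["validate"]),
    "archive":  (1, ["validate"]),
    "notify":   (1, ["approve", "archive"]),
}

finish: dict[str, float] = {}

def earliest_finish(task: str) -> float:
    """Earliest finish time of a task, memoized in `finish`."""
    if task not in finish:
        duration, preds = tasks[task]
        finish[task] = duration + max((earliest_finish(p) for p in preds), default=0)
    return finish[task]

for t in tasks:
    print(f"{t}: expected completion at {earliest_finish(t)} h")
# "notify" completes at 12 h; capture -> validate -> approve is the bottleneck path.
```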


Machine learning in ABBYY FlexiCapture

Chip VonBurg, senior solutions architect at ABBYY, gave us a look at machine learning in FlexiCapture 12. This is my last session for ABBYY Technology Summit 2017; there’s a roadmap session after this to close the conference, but I have to catch a plane.

He started with a basic definition of machine learning: a method of data analysis that automates analytical model building, allowing computers to find insights in data and execute logic without being explicitly programmed for where to look or what to do. It’s based on pattern recognition and computational statistics, and it’s popping up in areas such as biology, search and recommendations (e.g., Netflix), and spam detection. Machine learning is an iterative process that uses sample data and one or more machine learning algorithms: the training data set is used by the algorithm to build an analytical model, which is then applied to attempt to analyze or classify new data. Feedback on the correctness of the model for the new data is fed back to refine the learning and therefore the model. In many cases, users don’t even know that they’re providing feedback to train machine learning: every time you click “Spam” on a message in Gmail (or “Not Spam” for something that was improperly classified), or thumbs up/down for a movie in Netflix, you’re providing feedback to their machine learning models.

He walked us through several different algorithms and their specific applicability: Naive Bayes, Support Vector Machine (SVM), and deep learning; then a bit about machine learning scenarios including supervised, unsupervised and reinforcement learning. In FlexiCapture, machine learning can be used to sort documents into categories (classification), and for training on field-level recognition. The reason that this is important for ABBYY customers (partners and end customers) is that it radically compresses the time to develop the recognition rules required for any capture project, which typically consumes most of the development time. For example, instead of just training a capture application for the most common documents because that’s all you have time for, it can be trained for all document types, and the model will then continue to self-improve as verification users correct errors made by the algorithm.
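Since Naive Bayes came up, here’s a from-scratch toy version of that algorithm applied to document sorting, just to make the “training data builds a model” idea concrete. Real FlexiCapture training is far more sophisticated, and the documents here are invented:

```python
# Tiny multinomial Naive Bayes classifier for sorting documents into categories,
# with Laplace smoothing. Training data is a toy corpus for illustration only.
from collections import Counter, defaultdict
from math import log

training = [("send payment for this invoice", "invoice"),
            ("patient treatment record attached", "medical"),
            ("invoice amount due today", "invoice")]

word_counts = defaultdict(Counter)  # per-class word frequencies
class_counts = Counter()
for text, label in training:
    class_counts[label] += 1
    word_counts[label].update(text.split())

def classify(text: str) -> str:
    """Pick the class maximizing log P(class) + sum of log P(word|class)."""
    vocab = len({w for c in word_counts.values() for w in c})
    def score(label: str) -> float:
        total = sum(word_counts[label].values())
        return log(class_counts[label]) + sum(
            log((word_counts[label][w] + 1) / (total + vocab))  # Laplace smoothing
            for w in text.split())
    return max(class_counts, key=score)

print(classify("invoice payment due"))  # -> invoice
```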

Although VonBurg was unsure if the machine learning capabilities are available yet in the SDK — he works in the FlexiCapture application team, which is based on the same technology stack but runs independently — the session on robotic information capture yesterday seems to indicate that it is in the SDK, or will be very soon.

Capture microservices for BPO with iCapt and ABBYY

Claudio Chaves Jr. of iCapt presented a session at ABBYY Technology Summit on how business process outsourcing (BPO) operations are improving efficiencies through service reusability. iCapt is a solutions provider for a group of Brazilian companies, including three BPOs in specific verticals, a physical document storage company, and a scanner distributor. He walked through a typical BPO capture flow — scan, recognize, classify, extract, validate, export — and how each stage can be implemented using standalone scan products, OCR SDKs, custom UIs and ECM platforms. Even though this capture process only outputs data to the customer’s business systems at the end, such a solution needs to interact with those systems throughout for data validation; in fact, the existing business systems may provide some overlapping capabilities with the capture process. iCapt decided to turn this traditional capture process around by decoupling each stage into independent, reusable microservices that can be invoked from the business systems or some other workflow capability, so that the business system is the driver for the end-to-end capture flow. The microservices can be invoked in any order, and only the ones that are required are invoked. As independent services, each of them can be scaled up and distributed independently without having to scale the entire capture process.

The recognize, classify and extract steps are typically unattended, and became immediate candidates to be implemented as microservices. This allows them to be reused across processes, scaled independently, and deployed on-premise or in the cloud. For example, a capture process that is used for a single type of document doesn’t require the classification service, but only uses the recognize and extract services; another process that uses all three may reuse the same recognize and extract services when it encounters the same type of document as the first process handles, but also uses the classify service to determine the document type for heterogeneous batches of documents. iCapt is using ABBYY FineReader Engine (FRE) as a core component in their iCaptServices Cloud offering, embedded within their own web APIs that offer higher-level services on top of the FRE core functions; the entire package can be deployed as a container or serverless function to be called from other applications. They also provide mobile client development services so that these business applications can offer capture on mobile devices.
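Here’s a rough sketch of what that composition looks like from the calling business system’s point of view: only the needed services are invoked, in whatever order the flow requires. The endpoints and payloads are my own assumptions for illustration, not iCapt’s actual API:

```python
# Sketch of decoupled capture microservices driven by the business system:
# each stage is an independent HTTP service, and single-doc-type flows simply
# skip the classification call. The host and routes are hypothetical.
import requests

BASE = "https://capture.example.com"  # hypothetical iCaptServices-style host

def run_capture(image_bytes: bytes, heterogeneous_batch: bool) -> dict:
    """Compose only the capture stages this flow needs."""
    doc = requests.post(f"{BASE}/recognize", data=image_bytes).json()  # OCR only
    if heterogeneous_batch:
        doc["type"] = requests.post(f"{BASE}/classify", json=doc).json()["type"]
    else:
        doc["type"] = "invoice"  # the caller already knows the document type
    fields = requests.post(f"{BASE}/extract", json=doc).json()
    return fields  # the business system itself handles validation and export
```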

He gave an example of a project that they did to recover old accounting records by scanning and recognizing paper books; this was a one-time conversion project, not an ongoing BPO operation, making it crucial that they be able to build the data capture application quickly without developing an excessive amount of custom code that would be discarded after the 10-week project. They’re currently using the Windows version of the ABBYY engine, which increases their container/cloud costs somewhat, and are interested in trying out the Linux version that we heard about yesterday.