I’m back in my office after the European tour — three weeks, four countries and three conferences — and will be presenting on a webinar hosted by Alfresco this Thursday. I’ll be having a conversation with Dave Giordano, founder and insurance practice lead at Technology Services Group, on how to make insurance claims work better for insurance companies and their customers.
Our expected topics of conversation include:
How claims have become a competitive differentiator in insurance
Challenges in claims processing
Streamlining the ingestion and recognition of digital media and other content
Customer use cases for improving efficiency and automation
You can sign up for the webinar here. As always, if you have any particular questions or comments that you want to send to me ahead of time, just comment on this post or send me a tweet.
A few months ago at bpmNEXT, I saw Keith Swenson give an update on the DMN Technology Compatibility Kit, and we’re seeing a bit of a repeat of that presentation here at DecisionCAMP. The TCK defines a set of test cases (as DMN decision models, input data and expected results) that check conformance to the specification, plus a sample runner application that passes the models and data to the vendor’s engine and evaluates the results.
There are about 120 test models and 1600 test cases, currently supporting only DMN 1.2; these tests come from examining the specification as well as cases from practice. It’s easy for a vendor to get involved in the TCK, both by running it against their engine and by submitting new test models and cases. You can see the vendors that have submitted their results; although many more vendors claim that they “have DMN”, their actual level of compatibility may be suspect.
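To make the mechanics concrete, here’s a minimal sketch of what a TCK-style runner does; the engine interface and test case structure here are my own simplification, not the actual runner application:

```python
import csv

def run_test_cases(engine, test_cases):
    """Feed each case's model and input data to the vendor's engine and
    compare the actual results against the expected results.
    The engine.evaluate interface is hypothetical."""
    results = []
    for case in test_cases:
        actual = engine.evaluate(case["model"], case["inputs"])
        results.append({"test": case["name"], "passed": actual == case["expected"]})
    return results

def export_results(results, path="tck_results.csv"):
    # The real runner exports results as a CSV to show compliance.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["test", "passed"])
        writer.writeheader()
        writer.writerows(results)
```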
The TCK committee is getting ready for DMN 1.3, and is considering tests for modeling tools in addition to the current tests for the engine. Swenson also floated the idea of a standardized API for DMN as a service, so that the calling application doesn’t need to know which engine it’s calling — possibly something that’s not going to be a big hit with vendors.
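No such standard exists yet, but to illustrate the idea, a vendor-neutral “DMN as a service” call might look something like this purely hypothetical sketch (the endpoint and payload shape are invented):

```python
import json
import urllib.request

def evaluate_decision(base_url, model_id, inputs):
    """Hypothetical vendor-neutral decision service call: the caller
    doesn't know or care which engine sits behind the endpoint."""
    payload = json.dumps({"inputs": inputs}).encode()
    req = urllib.request.Request(
        f"{base_url}/decision-models/{model_id}/evaluate",  # invented endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Swapping vendors would mean changing only base_url, not the calling code.
```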
Business innovation of BPO realized by Task Center and AI and Rule Engine. Yoshihito Nakayama, NTT DATA INTRAMART
Yoshihito Nakayama presented on the current challenges of BPO with respect to improving productivity, and how they are resolving this using AI and a rules engine to aggregate and assign human tasks from multiple systems to different team members. This removes the requirement to manually review and assign work, and also provides a dashboard for visualizing work in progress and future forecasts.
AI is used to predict and optimize task classification and assignment, based on the time required to complete the task and each worker’s skill level and productivity. It is also used to predict workload for task types and for individual workers.
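As a rough illustration of the assignment logic (not NTT DATA’s actual models), a predicted-duration heuristic might look like this, with the prediction function standing in for a trained ML model:

```python
def predict_duration(task, worker):
    # Stand-in for a trained model: scale the task's base effort by the
    # worker's productivity factor for that task type.
    productivity = worker["productivity"].get(task["type"], 1.0)
    return task["base_effort"] / productivity

def assign_task(task, workers):
    """Assign the task to the worker with the shortest predicted duration."""
    return min(workers, key=lambda w: predict_duration(task, w))

task = {"type": "claims_review", "base_effort": 30.0}  # minutes
workers = [
    {"name": "A", "productivity": {"claims_review": 1.2}},
    {"name": "B", "productivity": {"claims_review": 0.8}},
]
print(assign_task(task, workers)["name"])  # prints "A"
```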
Their visualization dashboard shows drilldowns on current and past productivity, plus future forecasts. The simulation models for forecasting can be fine-tuned to optimize for cost, performance and other factors. The dashboard brings together work monitoring from all systems, including RPA processes. They’re also using process mining on a variety of systems to create a digital twin of the organization for better tracking and predictions, as well as other tools such as voice and image recognition to identify tasks being done that are not recorded in any system logs.
They have a variety of case studies across industries, looking at automating non-routine work using case management, BPM, RPA, AI and rules.
Spaghetti Spreadsheets Untangled – Benefits of decision modeling when uncovering complex business logic hidden in spreadsheets. Charlotte Bouvy, M.C. Bouvy Consultancy
Charlotte Bouvy presented on her work with SVB, the Netherlands social insurance administrator, on implementing business rules management. They are using DMN-based wizards to support 1,500 case workers; the specific case was around the operational control and audit departments and the “lawfulness” of how the assessment work is done. This was previously done in Excel spreadsheets, which were error-prone and lacked domain-specific business logic. They implemented their SARA system to replace the spreadsheets with Oracle OPA, which allowed them to represent knowledge more accurately and to separate the data from the decision model while creating an executable model.
These types of audit processes require sampling over a wide variety of case files to compare actual payments against expected amounts, with some degree of aggregation within the specific laws being applied. Moving to a rules engine allowed them to model calculations and decisions, and to separate data from model, avoiding the errors that occurred when copying and pasting data between spreadsheets. The executable model is now a single source of truth to which version control and change management can be applied. They are trying out different ways of using the SARA system: directly in Oracle Policy Modeler for building and debugging; via a web interview and an RPA robot for data input; and eventually via direct integration with SVB’s case management system to load data.
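A minimal sketch of that separation, with invented field names: the calculation lives in one versioned function (the model), which is applied to sampled case files (the data) instead of being copied between spreadsheets:

```python
def expected_payment(case):
    """Single source of truth for the calculation; changes go through
    version control and change management rather than spreadsheet edits."""
    return case["entitlement"] - case["deductions"]

def audit_sample(cases, tolerance=0.01):
    """Flag case files whose actual payment deviates from the model."""
    return [
        case["case_id"]
        for case in cases
        if abs(case["actual_payment"] - expected_payment(case)) > tolerance
    ]
```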
The Decision Model for Gate Allocation. Silvie Spreeuwenberg, Librt
Day 3 of DecisionCAMP 2019 started with three use cases from industry. First, Silvie Spreeuwenberg presented on decision models for allocating airport gates, specifically at Schiphol airport in Amsterdam. Although gate plans are made a day in advance based on flight schedules, they change constantly due to early arrivals, late departures and other unexpected disruptions. On any given day, there are 50-100 gate changes in the hour before an aircraft’s arrival; although these have been treated as disruptions, they could also be considered opportunities for optimization.
There were a lot of rules used for the planning and reassignment that had more to do with preferences than actual optimization; they really wanted to drive towards the objective of optimizing asset usage and therefore airport capacity. There are a lot of factors involved, such as having sufficient gate area capacity to handle the number of passengers for a flight, or having buses available to offload flights that can’t be assigned a gate. They have created a policy for aircraft stand allocation which includes some identifiable decision tables, although these are just at the strategy documentation phase.
Definitely a complex problem that has applicability at every major airport around the world.
A hybrid implementation of multi-channel, multi-modal, high volume financial risk monitoring. Martijn Tromm and Marten Schokking, Oracle
Marten Schokking and Martijn Tromm presented a use case from Rabobank using decision management and machine learning for customer risk assessment in terms of KYC (know your client) and AML (anti-money laundering). This is used during client onboarding, but also during periodic reviews as well as reviews triggered by specific events. There are scoring rules that use data input from a variety of sources, including client information from a CRM, interview responses and policies.
There are government regulations requiring that this be done for all clients at certain times. A triggering event, such as a change in the customer’s circumstances, will cause a customer interview and other data analysis to recalculate the risk; this may result in a more detailed manual review of the risk. At this point, a lot of manual work is still required, which makes it challenging to complete the customer risk assessments within the regulatory deadlines; they are looking at how to automate the basic assessment using machine learning to reduce the manual work required.
The risk model has been built using the Oracle Policy Automation rules engine integrated with the Siebel CRM. They are reusing rules across channels where possible, and the use of natural language in the rules definition helps with traceability to the policies. They are continuing to innovate with rules, such as having context-driven rules based on user behavior on specific channels, and having a fast two-day turnaround for rule changes related to certain types of policy changes. The ability to predict the impact of policy changes based on actual data allows for operational planning to accommodate those changes.
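As an illustration only (the factors, weights and threshold here are invented, not Rabobank’s), a scoring-rule style assessment combines weighted risk factors from multiple data sources and escalates high scores for manual review:

```python
RISK_FACTORS = {
    "high_risk_country": 40,
    "cash_intensive_business": 25,
    "incomplete_interview": 15,
}

def risk_score(client):
    """Sum the weights of the risk factors that apply to this client."""
    return sum(weight for factor, weight in RISK_FACTORS.items() if client.get(factor))

def needs_manual_review(client, threshold=50):
    # Scores above the threshold trigger a detailed manual review.
    return risk_score(client) >= threshold
```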
The Role of DMN and BPMN in the Design of Composite Materials. Dario Campagna, ESTECO
Dario Campagna presented on how the COMPOSELECTOR project is integrating material modeling and business process management in a decision support system for composite material design; this type of design can have complex requirements, business decisions and simulation workflows. Using application cases from Dow, Airbus and Goodyear, they modeled the business flow using BPMN and DMN. ESTECO, which creates software tools for engineering design, is a contributor to the COMPOSELECTOR project.
While BPMN is used to model the flow at the business level, DMN decision tables are used to make decisions on the class of materials and manufacturing process, then on the simulation workflows to use based on business and engineering KPIs. DMN provides the link from the business layer to the engineering layer, then to the simulation layer. Using DMN provides a higher level of consistency in decision-making, which leads to better design and lower costs.
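To show the shape of that link, here’s a DMN-style decision table rendered as code; the KPI conditions and workflow names are invented for illustration:

```python
DECISION_TABLE = [
    # conditions on cost and stiffness KPIs -> simulation workflow to run
    {"cost_below": 100, "stiffness_below": 50, "workflow": "coarse_screening"},
    {"cost_below": 500, "stiffness_below": 80, "workflow": "detailed_fem"},
]

def select_workflow(cost, stiffness):
    """First-hit policy, as in a DMN table: return the first matching row."""
    for row in DECISION_TABLE:
        if cost < row["cost_below"] and stiffness < row["stiffness_below"]:
            return row["workflow"]
    return None  # no rule matched; escalate to an engineer

print(select_workflow(cost=300, stiffness=60))  # prints "detailed_fem"
```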
We saw a brief video demo of the system in use: a business-level manager selects high-level parameters and KPIs for the proposed design; this selects one or more simulation models for the material design, which an engineer then confirms or adjusts; the results of the simulation are passed back to the manager for final decision-making. This has the effect of integrating the business and technical sides of the design process, and of including modeling and simulation results in the business-level (human) decisions in a standardized way.
Trends in Enterprise AI and Digital Decisions. Mike Gualtieri, Forrester
Day 2 of DecisionCAMP 2019 in beautiful Bolzano started out with Mike Gualtieri giving the Forrester view of trends in the market around AI and automated decisions. This was a typical analyst presentation — sorry, no notes — presented as part of the larger BRAIN 2019 (Bolzano Rules and Artificial INtelligence Summit) of which DecisionCAMP is a part.
Ron Ross presented on the current state of business rules and opportunities moving forward. To start, we have made progress in this area — DMN for one thing is an amazing leap forward — but business rules are not yet universally accepted and adopted within organizations despite the provable benefits.
One opportunity for business rules tools is to reduce developer workload and rule programming errors. In alignment with the Semantics of Business Vocabulary and Business Rules (SBVR) standard, there are two types of rules: definitional rules and behavioral rules. Definitional rules may be incorrect or misapplied, but they can’t be directly violated since they are evaluated in the course of a process; declarative behavioral rules, on the other hand, require a “watcher” to track other events that may cause the state of another process or transaction to change. If implemented properly, behavioral rules can reduce developer workload since the event-driven watcher updates state constantly based on these rules firing. DMN does not allow modeling of these types of decisions, since there needs to be more awareness of state as well as the events that may cause it to change; there is no concept of a watcher daemon that can constantly evaluate rules and update state.
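A minimal sketch of the watcher idea: an event-driven loop that re-evaluates behavioral rules whenever state changes, rather than only at a fixed point in a process (the rules and events here are invented examples):

```python
class Watcher:
    """Event-driven rule watcher: re-evaluates behavioral rules on every
    state change instead of only at a fixed step in a process."""

    def __init__(self):
        self.rules = []  # list of (condition, action) pairs
        self.state = {}

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def on_event(self, changes):
        # Apply the state change, then re-check every behavioral rule.
        self.state.update(changes)
        for condition, action in self.rules:
            if condition(self.state):
                action(self.state)

watcher = Watcher()
watcher.add_rule(
    lambda s: s.get("days_overdue", 0) > 30,
    lambda s: print("violation: payment more than 30 days overdue"),
)
watcher.on_event({"days_overdue": 45})  # the rule fires on this change
```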
There is also a need to better address sentiment and human discretion in rules. With behavioral rules that are enforced by humans, there are levels of enforcement; these nuances are not captured in most rules/decision systems.
Rules tools also need to tie in more directly with business governance in order to enforce regulatory and other rules under which an organization needs to operate. Many of these are behavioral rules, which are not handled adequately by DMN and most decision management systems due to the lack of an event-driven watcher; there is also a gap caused by the lack of natural language support in defining executable rules.
ML, Conversational UX, and Intelligence in BPM, with Andre Hofeditz and Seshadri Sreeniva of SAP plus DMN TCK update
We’re at the end of bpmNEXT for another year, and we have one last demo. Seshadri showed a demo of their intelligent BPM for an employee onboarding process (integrated with SuccessFactors), where the process can vary widely depending on level, location and other characteristics. This exposes the pre-defined business processes in SuccessFactors, with configuration tools for customizing the process by adding and modifying building blocks to create a process variant for a special case. Decisions involved in the processes can also be configured, as well as dashboards for viewing the processes in flight. Extension workflows can be created by selecting a standard process “recipe” from a SuccessFactors library, then configuring it for the specific use; he showed an example of adding an equipment provisioning extension as a service task to one of the top-level process models.

He then demonstrated a voice-controlled chatbot interface for interacting with processes, allowing a manager to ask what’s happening for them today and get back information on the new employee onboardings in progress, expected delays, and a link to their task inbox. Tasks can be displayed in the chat interface, and approvals accepted via voice or typed chat. The chatbot uses AI to determine the intent of the input and provide a precise and accurate response, and ML to predict the time required to complete in-flight processes when asked about completion times and possible delays. The chatbot can also make decision table-based recommendations, such as creating an IT ticket to assign roles to the new employee and find a desk location.

He showed the interface for designing and training the bot capabilities, where a designer can create a new conversational AI skill based on conditions, triggers and actions to take. This is currently a lab preview, but will be rolled out as part of their cloud platform workflow (not unique to the SuccessFactors environment) in the coming months.
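Loosely mirroring that designer (this is not SAP’s actual API), a conversational skill could be modeled as an intent-to-action mapping:

```python
SKILLS = [
    {
        "intent": "onboarding_status",  # invented intent name
        "respond": lambda ctx: (
            f"{len(ctx['onboardings'])} onboardings in progress; "
            f"{len(ctx['delayed'])} expected delays."
        ),
    },
]

def handle_utterance(intent, context):
    """Route a classified intent to the matching skill's response."""
    for skill in SKILLS:
        if skill["intent"] == intent:
            return skill["respond"](context)
    return "Sorry, I can't help with that yet."

print(handle_utterance(
    "onboarding_status",
    {"onboardings": ["emp1", "emp2"], "delayed": ["emp2"]},
))  # prints the status summary
```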
Decision Model and Notation Technology Compatibility Kit update with Keith Swenson
We finished off bpmNEXT 2019 with an update on the DMN TCK, that is, the set of tools provided for free for vendors to test their implementation of DMN. The TCK provides DMN 1.2 models plus sets of input data and expected results; a runner app calls the vendor engine, compares the results and exports them as a CSV file to show compliance. In the three years since this was kicked off, eight vendors are showing results across more than 1000 test cases, with another vendor about to join the list and add another 600 test cases. The test cases are determined through manual examination of the standard specification, so this represents a significant amount of work to create a robust set of compliance tests. The TCK group is not creating the standard, but testing it; however, Keith identified some opportunities for the TCK to be more proactive in defining some things, such as error handling behavior, that the revision task force (RTF) at OMG is unlikely to address in the near term. He also pointed out that there are many more vendors claiming DMN compatibility than have demonstrated that compatibility with the TCK.
That’s it for bpmNEXT 2019 – always feels like it’s over too soon, yet I leave with my brain stuffed full of so many good ideas. We’ve done the wrapup survey and are heading off to lunch, but the results for Best in Show won’t come out until I’m already on my way to the airport.