DecisionCAMP 2019: DMN TCK, BPO with AI and rules, and business logic hidden in spreadsheets

Close Is Not Close Enough. Keith Swenson, Fujitsu

A few months ago at bpmNEXT, I saw Keith Swenson give an update on the DMN Technology Compatibility Kit, and we’re seeing a bit of a repeat of that presentation here at DecisionCAMP. The TCK defines a set of test cases (as DMN decision models, input data and expected results) that verify conformance to the specification, plus a sample runner application that passes the models and data to a vendor’s engine and evaluates the results.
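
For illustration, here’s a minimal sketch of what a TCK-style runner loop does; the real TCK packages each test as a DMN model plus files of input data and expected results, and the `evaluate` method here is a hypothetical stand-in for whatever API a vendor’s engine exposes.

```python
# Hypothetical sketch of a TCK-style conformance runner loop.
# vendor_engine.evaluate() stands in for any vendor's engine API;
# the real TCK pairs each DMN model with input data and expected results.
from dataclasses import dataclass

@dataclass
class TestCase:
    model: str      # path to the DMN model file
    inputs: dict    # input data for the decision
    expected: dict  # expected decision results

def run_conformance_suite(test_cases, vendor_engine):
    """Pass each model and its inputs to the engine, compare results."""
    passed, failed = 0, 0
    for tc in test_cases:
        actual = vendor_engine.evaluate(tc.model, tc.inputs)  # vendor-specific call
        if actual == tc.expected:  # close is not close enough: exact match required
            passed += 1
        else:
            failed += 1
            print(f"FAIL {tc.model}: expected {tc.expected}, got {actual}")
    return passed, failed
```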

DMN TCK. From Keith Swenson’s presentation.

There are about 120 test models and 1600 test cases, currently supporting only DMN 1.2; these tests come from examining the specification as well as cases from practice. It’s easy for a vendor to get involved in the TCK, both in terms of running it against their engine and in terms of participating by submitting new test models and cases. You can see the vendors that have submitted their results; although many more vendors claim that they “have DMN”, their actual level of compatibility may be suspect.

The TCK committee is getting ready for DMN 1.3, and is considering tests for modeling tools in addition to the current tests for the engine. Swenson also floated the idea of a standardized API for DMN as a service, so that the calling application doesn’t need to know which engine it’s calling — possibly something that’s not going to be a big hit with vendors.
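
No such standard exists yet; purely as a sketch of the idea, a call to an engine-agnostic decision service might look something like this, where the endpoint, payload shape and field names are all hypothetical.

```python
# Purely illustrative: no standardized DMN-as-a-service API exists yet.
# The endpoint URL, payload shape and field names below are all hypothetical.
import requests

response = requests.post(
    "https://dmn-service.example.com/decisions/evaluate",  # hypothetical endpoint
    json={
        "model": "loan-approval",  # which deployed DMN model to invoke
        "inputs": {"creditScore": 720, "income": 85000},
    },
)
# The caller gets a decision result without knowing which engine ran it.
print(response.json())
```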

Business innovation of BPO realized by Task Center and AI and Rule Engine. Yoshihito Nakayama, NTT DATA INTRAMART

Yoshihito Nakayama presented on the current challenges of BPO with respect to improving productivity, and how they are resolving this using AI and a rules engine to aggregate and assign human tasks from multiple systems to different team members. This removes the requirement to manually review and assign work, and also provides a dashboard for visualizing work in progress and future forecasts.

Intramart’s Task Center for aggregating and assigning work. From Yoshihito Nakayama’s presentation.

AI is used to predict and optimize task classification and assignment, based on time required to complete the task and the individual worker skill level and productivity. It is also used to predict workload for task types and individual workers.
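
As a rough illustration of that kind of skill- and workload-aware assignment, here is a toy scoring function; the attributes and weights are invented for the sketch, not Intramart’s actual model.

```python
# Toy sketch of skill- and workload-based task assignment; the attributes
# and scoring weights are illustrative only, not Intramart's actual model.
from dataclasses import dataclass

@dataclass
class Task:
    task_type: str
    base_minutes: float  # nominal time to complete the task

@dataclass
class Worker:
    name: str
    skills: dict         # task_type -> proficiency in 0..1
    queued_tasks: int = 0

def assignment_score(task: Task, worker: Worker) -> float:
    """Lower is better: prefer skilled, fast, lightly loaded workers."""
    skill = worker.skills.get(task.task_type, 0.0)
    est_minutes = task.base_minutes / max(skill, 0.1)  # skilled workers finish faster
    return est_minutes + 5 * worker.queued_tasks       # penalize existing backlog

def assign(task: Task, workers: list) -> Worker:
    return min(workers, key=lambda w: assignment_score(task, w))
```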

Their visualization dashboard shows drilldowns on current and past productivity, plus future forecasts. The simulation models for forecasting can be fine-tuned to optimize for cost, performance and other factors. The dashboard brings together work monitoring from all systems, including RPA processes. They’re also using process mining on a variety of systems to create a digital twin of the organization for better tracking and predictions, as well as other tools such as voice and image recognition to identify tasks that are being performed but not recorded in any system logs.

They have a variety of case studies across industries, looking at automating non-routine work using case management, BPM, RPA, AI and rules.

Spaghetti Spreadsheets Untangled – Benefits of decision modeling when uncovering complex business logic hidden in spreadsheets. Charlotte Bouvy, M.C. Bouvy Consultancy

Charlotte Bouvy presented on her work with SVB, the Netherlands social insurance administrator, on implementing business rules management. They are using DMN-based wizards to support 1,500 case workers, and this specific case involved the operational control and audit departments and the “lawfulness” of how the assessment work is done. This was previously done in Excel spreadsheets, which were error prone and could not represent domain-specific business logic. They implemented their SARA system, based on Oracle OPA, to replace the spreadsheets; this allowed them to represent knowledge more accurately, and to separate the data from the decision model while creating an executable model.

Decision model to determine lawfulness. From Charlotte Bouvy’s presentation.

These types of audit processes require sampling a wide variety of case files to compare actual payments against expected amounts, with some degree of aggregation within the specific laws being applied. Moving to a rules engine allowed them to model calculations and decisions, and to separate data from model, avoiding the errors that occurred when copying and pasting data in spreadsheets. The executable model is now a single source of truth to which version control and change management can be applied. They are trying out different ways of using the SARA system: directly in Oracle Policy Modeler for building and debugging; via a web interview and an RPA robot for data input; and eventually via direct integration with the SVB’s case management system to load data.
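
A toy sketch of that audit pattern, with the executable decision model as the single source of truth for expected amounts; `compute_expected_payment` stands in for the rules engine, and all field names are hypothetical.

```python
# Toy sketch of the audit sampling pattern: sample case files, compute the
# expected amount from the executable decision model, flag discrepancies.
# compute_expected_payment() stands in for the rules engine; field names
# are hypothetical.
import random

def audit_sample(case_files, compute_expected_payment, sample_size=100):
    findings = []
    for case in random.sample(case_files, min(sample_size, len(case_files))):
        expected = compute_expected_payment(case)  # single source of truth: the model
        if abs(case["actual_payment"] - expected) > 0.005:  # currency rounding tolerance
            findings.append((case["id"], case["actual_payment"], expected))
    return findings
```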

bpmNEXT 2019 demos focused on creating smarter processes: decisions, RPA, emergent processes and machine learning with Serco, @FujitsuAmerica and @RedHat

A Well-Mixed Cocktail: Blending Decision and RPA Technologies in 1st Gen Design Patterns, with Lloyd Dugan of Serco

Lloyd showed a scenario of using decision management to determine whether a step could be done by RPA or a human operator, then modeling the RPA “operator” as a role (performer) for a specific task and dynamically assigning work, instead of refactoring the BPMS process to include specific RPA robot service tasks. This was shown via an actual case study that uses Sapiens for decision management and Appian for case/process management, with Kapow for RPA. The focus here is on the work assignment decisioning, since the real-world scenario involves managing work for thousands of heads-down users, and redirecting work to RPA can yield huge overall cost savings and efficiency improvements, even for small tasks such as logging in to the multiple systems required for a user to do their work. The RPA flow was created, in part, from the procedural documentation wiki that is provided to train and guide users, and if the robot can’t work a task through to completion, it is passed off to a human operator. The “demo” was actually a pre-recorded screen video, making it more of a presentation with a few dynamic bits, but it gave insight into how DM and RPA can be added to an existing complex process in a BPMS to improve efficiency and intelligence. Using this method, work can gradually be carved off and performed by robots (either completely or partially) without significantly refactoring the BPMS process for specific robot tasks.
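
A simplified sketch of that assignment pattern: the decision logic picks the performer role, and a robot that cannot complete a task hands it back to a human. The task attributes, thresholds and helper functions here are all hypothetical, not the actual Sapiens/Appian/Kapow implementation.

```python
# Simplified sketch of work-assignment decisioning: a decision picks the
# performer role (robot or human) per task, with human fallback on failure.
# Task types, attributes and thresholds are hypothetical.
ROBOT_CAPABLE_TYPES = {"system-login", "data-entry"}

def choose_performer(task: dict) -> str:
    """Decision logic: route structured, low-risk work to the RPA robot role."""
    if task["type"] in ROBOT_CAPABLE_TYPES and task["risk"] < 0.2:
        return "rpa-robot"  # the robot is just another performer role
    return "human-operator"

def run_robot(task: dict) -> bool:
    """Stand-in for the robot; returns False if it cannot finish the task."""
    return task.get("well_formed", True)

def work_task(task: dict) -> str:
    if choose_performer(task) == "rpa-robot" and run_robot(task):
        return "completed-by-robot"
    return "queued-for-human"  # fallback: a human operator picks it up
```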

Emergent Synthetic Process, with Keith Swenson of Fujitsu

Keith’s demo was based on the premise that although business processes can appear simple on the surface when you look at the original clean model, the reality is considerably messier. Instead of predefining a process and forcing workers to follow it in order, he showed how to define service descriptions as tasks with their required participants and predecessor tasks. From that, processes can be synthesized at any point during execution to meet the requirements of the remaining tasks; this means that any given process instance may have the tasks in a different order and still be compliant. He showed a use case of a travel authorization process from within Fujitsu, where a travel request automatically generates an initial process – all processes are a straight-through series of steps – but any changes to the parameters of the request may modify the model. This is all based on satisfying the conditions defined by the dependency graph (e.g., the departmental manager requires that the employee’s direct manager approve before they can add their own approval), starting with the end point and chaining backwards through the graph to create the series of steps that have to be performed. Different divisions had different rules around their processes: the Mexico group, for example, did not have departmental levels, so its processes lacked one of the levels of approval. Adding a step to a process is a matter of adding it as a prerequisite for another task; the new step will then be added to the process and the underlying dependency graph. As an instance executes, the completed tasks become fixed as history, but the future tasks can change if there are changes to the task dependencies or participants. This methodology allows multiple stakeholders to define and change service descriptions without a single process owner controlling the end-to-end process orchestration, and lets new and in-flight processes generate the optimal path forward.
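
A minimal sketch of that backward chaining over a dependency graph, using an invented travel-authorization example; Fujitsu’s actual implementation is certainly richer than this.

```python
# Minimal sketch of synthesizing a straight-through step sequence from task
# dependencies, chaining backwards from the end point. The dependency data
# below is invented; the real implementation is richer than this.

def synthesize(end_task, prerequisites):
    """prerequisites: task -> list of tasks that must complete first."""
    ordered, seen = [], set()

    def visit(task):
        if task in seen:
            return
        seen.add(task)
        for pre in prerequisites.get(task, []):  # walk backwards through the graph
            visit(pre)
        ordered.append(task)                     # prerequisites always precede the task

    visit(end_task)
    return ordered

# Invented example: approvals feed the final booking step.
deps = {
    "book_travel": ["dept_manager_approval"],
    "dept_manager_approval": ["manager_approval"],
    "manager_approval": ["travel_request"],
}
print(synthesize("book_travel", deps))
# ['travel_request', 'manager_approval', 'dept_manager_approval', 'book_travel']
```

Changing the dependency data changes the synthesized sequence: dropping the departmental level (as in the Mexico group) simply removes that step from the chain.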

Automating Human-Centric Processes with Machine Learning, with Kris Verlaenen of Red Hat

Kris demonstrated working towards an automated process using machine learning (a random forest model) in small incremental steps: first augmenting data, then recommending the next step, and finally learning from what happened in order to potentially automate a task. The scenario was provisioning a new laptop inside an organization through their IT department, including approval, ordering and deployment to the employee. He started with the initial manual process for the first part of this – order by employee, quote provided by vendor, and approval by manager – and looked at how ML could monitor this process over many execution instances, then start providing recommendations to the manager on whether or not to approve a purchase based on parameters such as the requester and the laptop brand. A very consistent history will result in high confidence in the recommendation, although a more realistic history may yield lower confidence levels; the manager can be presented with the confidence level and the parameters on which it was based, along with the recommendation itself. In case management scenarios with dynamic task creation, the ML can also make recommendations about creating tasks at a certain stage, such as creating a new task to notify the legal department when the employee is in a certain country. Eventually, this can lead to recommendations about how to change the initial process/case model to encode that knowledge as new rules and activities, such as adding ad hoc tasks for the tasks that were being added manually, triggered by the new rules detected in the historical instances. Kris finished with the caveat that machine learning algorithms can be biased by the training data and may not learn the correct behavior; this is why they look at using ML to assist users before incorporating the learned behavior into the predefined process or case models.
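
A small sketch of that recommendation step using scikit-learn’s random forest; the features (requester and laptop brand) match the demo’s parameters, but the encoding and training history here are invented.

```python
# Sketch of the recommendation step with a random forest: train on encoded
# history of past instances, then surface a recommendation with confidence.
# The feature encoding and training data below are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Encoded history: [requester_id, brand_id] -> approved (1) or rejected (0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [2, 0], [2, 1]])
y = np.array([1, 1, 1, 0, 1, 0])  # e.g. brand 1 is usually rejected

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_request = np.array([[1, 1]])
proba = model.predict_proba(new_request)[0]  # class probabilities
recommendation = "approve" if proba[1] >= 0.5 else "reject"
# Present both the recommendation and its confidence to the manager.
print(f"Recommend {recommendation} (confidence {max(proba):.0%})")
```

With a perfectly consistent history like this one, the confidence will be near 100%; noisier real-world histories produce the lower confidence levels Kris described.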