CamundaCon 2023 Day 2: Process and Decision Standards

Falko Menge and Marco Lopes from Camunda gave a presentation on Camunda’s involvement in the development of OMG’s core process and decision standards, BPMN and DMN. Camunda (and Falko in particular) has been involved in OMG standards for a long time, and embraces both standards in their products. Sadly, at least to me, they gave up support for the case management standard, CMMN, due to lackluster market adoption; other vendors such as Flowable support all three standards in their products and have viable use cases for CMMN.

Falko and Marco gave a shout-out to universities and the academic BPM research conference that I attended recently as promoters of both the concept of standards and of future research into the standards. Camunda has not only participated in the standards efforts, but the co-founders wrote the book Real-Life BPMN as they discovered the ways that it can best be used.

They gave a good history of the development of the BPMN standard and of Camunda’s implementation of it, from the early days of the Eclipse-based BPMN modeler to the modern web-based modelers. Camunda became involved in the BPMN Model Interchange Working Group (MIWG) to be able to exchange models between different modeling platforms, because they recognized that a lot of organizations do much broader business modeling in tools aimed at business analysts, then want to transfer those models to a process execution platform like Camunda. Vendors choose whether to participate in the BPMN MIWG tests, and the results are published so that the level of interoperability is understood.

DMN is another critical standard, allowing modelers to create standardized decision models; it also includes the Friendly-Enough Expression Language (FEEL) for scripting within the models. The DMN Technology Compatibility Kit (TCK) is a set of decision models and expected results that provides test results similar to those of the BPMN MIWG tests: information about each vendor’s test coverage is published so that their implementation of DMN can be assessed by potential customers.
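For readers who haven’t seen FEEL, here’s a tiny illustration of the kind of logic it expresses; the expression and the data are invented for this post, and the Python function is just an equivalent rendering for comparison, not how a DMN engine evaluates it.

```python
# Hypothetical FEEL decision literal expression (not from the presentation):
#   if applicant age >= 18 and annual income > 30000 then "eligible" else "refer"
# Below is an equivalent Python function, purely for comparison.

def eligibility(applicant_age: int, annual_income: float) -> str:
    """Python rendering of the FEEL expression above."""
    if applicant_age >= 18 and annual_income > 30000:
        return "eligible"
    return "refer"

print(eligibility(25, 45000))  # eligible
print(eligibility(17, 45000))  # refer
```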

Although standards are sometimes decried as being too difficult for business people to understand and use (they’re really not), they create an environment where common executable models of processes and decisions can be created and exchanged across many different vendor platforms. Although there are many other parts of a technology stack that can create vendor lock-in, process and decision models don’t need to be part of that. Also, someone working at a company that uses BPMN and DMN modeling tools can easily move to a different organization that uses different tools without having to relearn a proprietary modeling language. From a model richness standpoint, many vendors and researchers working together towards a common goal can create a much better and more extensive standard (as long as they’re not squabbling over details).

They went on to discuss some of the upcoming new standards: BPM+ Harmonization Model and Notation (BHMN), Shared Data Model and Notation (SDMN), and Knowledge Package Model and Notation (KPMN), all of which are in some way involved in integrating BPMN and DMN due to the high degree of overlap between these standards in many organizations. Since these standards aren’t close to release, they’re not planned for a near-future version of Camunda, but they’ll be added to the product management roadmap as the standards evolve and the customer requirements for them become clear.

My writing on the Trisotech blog: better analysis and design of processes

I’ve been writing some guest posts over on the Trisotech blog, but haven’t mentioned them here for a while. Here’s a recap of what I’ve posted over there the past few months:

In May, I wrote about designing loosely-coupled processes to reduce fragility. I had previously written about Conway’s Law and the problems of functional silos within an organization, but then the pandemic disruption hit and I wanted to highlight how we can avoid the sorts of cascading supply chain process failures that we saw early on. A big part of this is not having tightly-coupled end-to-end processes, but separating out different parts of the process so that they can be changed and scaled independently of each other while still forming part of an overall process.

In July, I helped to organize the DecisionCAMP conference, and wrote about the BPMN-CMMN-DMN “triple crown”: not just the mechanics of how the three standards work together, but why you would choose one over another in a specific design situation. There are some particular challenges with the skill sets of business analysts who are expected to model organizations using these standards, since they will end up using more of the one that they’re most familiar with regardless of its suitability to the task at hand. There are also challenges in the understandability of multi-model representations, which require a business operations reader of the models to be able to see how this BPMN diagram, that CMMN model and this other DMN definition all fit together.

In August, I focused on better process analysis using analytical techniques, namely process mining, and gave a quick intro to process mining for those who haven’t seen it in action. For several months now, we haven’t been able to do a lot of business “as is” analysis through job shadowing and interviews, and I put forward the idea that this is the time for business analysts to start learning about process mining as another tool in their kit of analysis techniques.

In early September, I wrote about another problem that can arise due to the current trend towards distributed (work from home) processes: business email compromise fraud, and how to foil it with better processes. I don’t usually write about cybersecurity topics, but I have my own in-home specialist, and this topic overlapped nicely with my process focus and the need for different types of compliance checks to be built in.

Then, at the end of September, I finished up the latest run of posts with one about the process mining research that I had seen at the (virtual) academic BPM 2020 conference: mining processes out of unstructured emails, and queue mining to see the impact of queue congestion on processes.

Recently, I gave a keynote on aligning intelligent automation with incentives and business outcomes at the Bizagi Catalyst virtual conference, and I’ve been putting together some more detailed thoughts on that topic for this month’s post. Stay tuned.

Disclosure: Trisotech is my customer, and I am compensated for writing posts for publication on their site. However, they have no editorial control or input into the topics that I write about, and no input into what I write here on my own blog.

DecisionCAMP 2019: Decision test tools, complex payroll decisioning, and decisions as a service

Modeling Decisions With Embedded Testing. Daniel Schmitz-Hübsch and Ulrich Striffler, Materna

Daniel Schmitz-Hübsch and Ulrich Striffler of Materna, who presented earlier this week on whether FEEL is friendly enough, returned to discuss testing of decision models using a tool that they have developed. The typical life cycle for developing and testing decision models has a business analyst modeling the decisions and creating test cases, then passing them off to a developer to execute the tests and draw conclusions to feed back into the design. To cut the developer out of the cycle — and therefore shorten the lifecycle — they have developed declab, a browser-based test harness for decision models and test cases. Business analysts can perform ad hoc tests, or build a tree of test cases.

Live demo of declab

This includes FEEL testing, and the business analyst can enter and test a variety of constructs to try out a FEEL function without having to deploy an entire model — envision an analyst with their modeler on one screen and declab on another, allowing them to do micro-testing as they design decision models.

Materna has released the tool as open source, and it’s based on Red Hat’s Drools engine, which performs the tests in the background. You can try it out online here. Lots of great suggestions and comments from the audience; hopefully some of them will contribute to the open source project.
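To give a sense of the “tree of test cases” idea, here’s a rough sketch of table-driven decision micro-tests; the decision logic and test data are my own stand-ins, and in declab the evaluation would of course be done by the DMN/FEEL engine rather than a Python function.

```python
# Sketch of a test-case tree: each test names the decision under test, the
# inputs, and the expected output. A plain Python function stands in for the
# decision so the sketch is runnable; declab evaluates the real decision model.

def discount(customer_type: str, order_total: float) -> float:
    # stand-in for the decision logic being micro-tested
    if customer_type == "gold" and order_total > 1000:
        return 0.15
    if customer_type == "gold":
        return 0.10
    return 0.0

test_cases = [
    {"name": "gold, large order", "inputs": ("gold", 1500), "expected": 0.15},
    {"name": "gold, small order", "inputs": ("gold", 200), "expected": 0.10},
    {"name": "regular customer", "inputs": ("silver", 1500), "expected": 0.0},
]

for case in test_cases:
    actual = discount(*case["inputs"])
    status = "PASS" if actual == case["expected"] else "FAIL"
    print(f'{status}: {case["name"]} -> {actual}')
```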

Exploiting payroll knowledge with Viren. Tim Stuyckens, Teal Partners

Tim Stuyckens presented on their Viren decision-based tool for modeling and executing knowledge, specifically for calculating expat tax in payroll software. Payroll tax in Belgium is particularly complex, and sometimes it’s difficult to know which statutes to apply to make the most beneficial calculation.

Payroll tax calculation. From Tim Stuyckens’ presentation.

In addition to straightforward tax calculations, the tool can work backwards from a desired point to the necessary conditions, such as how many days to work in order to earn a certain income, or the optimal day rate to minimize taxes and earn a certain income. Business analysts can enter and modify the knowledge rules and data, while the platform handles versioning, compilation and deployment.

They use declarative rules and structured data to represent knowledge in the system, and apply constraint solvers for optimization with non-linear constraints. They only discovered DMN earlier this year and have embraced it in their tool, providing a unified DRD and decision tables to allow business analysts to more easily step through the decision logic.
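As a toy example of what “working backwards” means here (my own numbers and tax function, not Viren’s), you can solve for the number of days needed to hit a target net income rather than just calculating income from days worked:

```python
# Toy illustration of solving backwards from a desired outcome. The tax rule
# is invented; Viren represents the real rules declaratively and uses a
# constraint solver that can also handle non-linear constraints.

def net_income(days_worked: int, day_rate: float) -> float:
    gross = days_worked * day_rate
    tax_rate = 0.25 if gross <= 40_000 else 0.45   # made-up progressive step
    return gross * (1 - tax_rate)

def days_needed(target_net: float, day_rate: float) -> int:
    days = 0
    while net_income(days, day_rate) < target_net:
        days += 1
    return days

print(days_needed(target_net=30_000, day_rate=500))   # 80 days in this toy model
```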

Decision Management as a Service. Dennis Aarts, The Business Analysts

The last presentation in this session was by Dennis Aarts, on a use case of a shared-services decision management model at Informatie Vlaanderen, an entity of the Flemish government in Belgium. They provide digital services to other parts of the government, and were looking at ways to improve the quality and consistency of those services. The solution is Automatisch Advies (Automated Advice), which includes authentic and authorized data sources, orchestration using Camunda BPM, and business rules using IBM ODM. It has an extensible architecture to allow other capabilities, such as AI/ML, to be integrated in the future.

There were several goals for the project, including productivity (reducing cycle time, reuse of data), regulatory (GDPR requirements) and ease of use (business can make modifications). The solution provides a centralized platform where rules can be developed and used by multiple entities.

Automatisch Advies decision as a service platform. From Dennis Aarts’ presentation.

Having decisions as a shared service amongst many government entities has many benefits in terms of reuse across entities, and not requiring expertise or maintenance skills for the platform in each entity. Like any shared services IT, however, there are complexities in allocating costs, governance of the decision models, security of models specific to a subset of entities, and maintenance of the rule sets.

This was my last session of DecisionCAMP 2019 — I’m skipping the final vendor statements and the closing remarks to head off and have a few days of vacation before I have to return to real life sometime next week. It’s been a great experience, and thanks to Jacob Feldman for inviting me. It’s been several years since I’ve been in Bolzano, and it’s just as beautiful as I remember.

Beautiful Bolzano!

This has been a bit of an epic trip, having left home almost three weeks ago to attend the academic BPM conference in Vienna, give a keynote at CamundaCon in Berlin, then here for DecisionCAMP. You can find my coverage for each of those events at the links.

DecisionCAMP 2019: DMN TCK, BPO with AI and rules, and business logic hidden in spreadsheets

Close Is Not Close Enough. Keith Swenson, Fujitsu

A few months ago at bpmNEXT, I saw Keith Swenson give an update on the DMN Technology Compatibility Kit, and we’re seeing a bit of a repeat of that presentation here at DecisionCAMP. The TCK defines a set of test cases (as DMN decision models, input data and expected results) that assure conformance to the specification, plus a sample runner application that will pass the models and data to the vendor’s engine and evaluate the results.

DMN TCK. From Keith Swenson’s presentation.

There are about 120 test models and 1600 test cases, currently supporting only DMN 1.2; these tests come from examining the specification as well as cases from practice. It’s easy for a vendor to get involved in the TCK, both in terms of running it against their engine and in terms of participating through submitting new test models and cases. You can see the vendors that have submitted their results; although many more vendors claim that they “have DMN”, their actual level of compatibility may be suspect.
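The mechanics are roughly as follows, although this is just my sketch of the idea rather than the TCK’s actual runner or file format: feed each test case’s inputs to the vendor’s engine, compare against the expected result, and tally the results per model.

```python
# Simplified sketch of a TCK-style runner. `evaluate_with_engine` is a
# placeholder for the call into a vendor's DMN engine; the suite below is a
# made-up example, not an actual TCK test case file.

from typing import Any, Callable

def run_suite(
    test_suite: dict[str, list[dict[str, Any]]],
    evaluate_with_engine: Callable[[str, dict[str, Any]], Any],
) -> None:
    for model_name, cases in test_suite.items():
        passed = sum(
            1 for case in cases
            if evaluate_with_engine(model_name, case["inputs"]) == case["expected"]
        )
        print(f"{model_name}: {passed}/{len(cases)} test cases passed")

# Hypothetical usage with a stubbed engine:
suite = {"greeting-model": [
    {"inputs": {"Full Name": "John Doe"}, "expected": "Hello John Doe"},
]}
run_suite(suite, lambda model, inputs: f'Hello {inputs["Full Name"]}')
```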

The TCK committee is getting ready for DMN 1.3, and considering tests for modeling tools in addition to the current tests for the engine. He also floated the idea of a standardized API for DMN as a service, so that the calling application doesn’t need to know which engine it’s calling — possibly something that’s not going to be a big hit with vendors.

Business innovation of BPO realized by Task Center and AI and Rule Engine. Yoshihito Nakayama, NTT DATA INTRAMART

Yoshihito Nakayama presented on the current challenges of BPO with respect to improving productivity, and how they are resolving this using AI and a rules engine to aggregate and assign human tasks from multiple systems to different team members. This removes the requirement to manually review and assign work, and also provides a dashboard for visualizing work in progress and future forecasts.

Intramart’s Task Center for aggregating and assigning work. From Yoshihito Nakayama’s presentation.

AI is used to predict and optimize task classification and assignment, based on time required to complete the task and the individual worker skill level and productivity. It is also used to predict workload for task types and individual workers.
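Their actual models weren’t shown in detail, but the assignment idea is easy to sketch: route each task to the worker with the shortest estimated completion time, given per-worker productivity for that task type (the factors below are invented; in their system these would come from AI-based predictions).

```python
# Minimal sketch of skill/productivity-based task assignment (my own stand-in
# data): each task goes to the worker whose estimated finish time is shortest.

tasks = [
    {"id": "T1", "type": "claims", "base_minutes": 30},
    {"id": "T2", "type": "invoicing", "base_minutes": 20},
    {"id": "T3", "type": "claims", "base_minutes": 45},
]

workers = {
    "Asha": {"claims": 1.2, "invoicing": 0.8},   # productivity factor, >1 = faster
    "Ben":  {"claims": 0.9, "invoicing": 1.1},
}

workload = {name: 0.0 for name in workers}        # minutes already assigned

for task in tasks:
    def finish_time(name: str) -> float:
        factor = workers[name].get(task["type"], 1.0)
        return workload[name] + task["base_minutes"] / factor
    best = min(workers, key=finish_time)
    workload[best] += task["base_minutes"] / workers[best].get(task["type"], 1.0)
    print(f'{task["id"]} assigned to {best}')
```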

Their visualization dashboard shows drilldowns on current and past productivity, plus future forecasts. The simulation models for forecasting can be fine-tuned to optimize for cost, performance and other factors. It brings together work monitoring from all systems, including RPA processes. They’re also using process mining on a variety of systems to create a digital twin of the organization for better tracking and predictions, as well as other tools such as voice and image identification to recognize what tasks are being done that are not being recorded in any system logs.

They have a variety of case studies across industries, looking at automating non-routine work using case management, BPM, RPA, AI and rules.

Spaghetti Spreadsheets Untangled – Benefits of decision modeling when uncovering complex business logic hidden in spreadsheets. Charlotte Bouvy, M.C. Bouvy Consultancy

Charlotte Bouvy presented on her work done with SVB, the Netherlands social insurance administrator, on implementing business rules management. They are using DMN-based wizards for supporting 1,500 case workers, and the specific case was around the operational control and audit departments and the “lawfulness” of how the assessment work is done. Excel spreadsheets were used to do this, which had obvious problems in terms of being error prone and lacking domain-specific business logic. They implemented their SARA system to replace the spreadsheets with Oracle OPA, which allowed them to more accurately represent knowledge, as well as separate the data from the decision model while creating an executable model.

Decision model to determine lawfulness. From Charlotte Bouvy’s presentation.

These types of audit processes require sampling over a wide variety of case files to compare actual payments against expected amounts, with some degree of aggregation within specific laws being applied. Moving to a rules engine allowed them to model calculations and decisions, and to separate data and model to avoid the errors that occurred when copying and pasting data in spreadsheets. The executable model is now a single source of truth to which version control and change management can be applied. They are trying out different ways of using the SARA system: directly in Oracle Policy Modeler for building and debugging; via a web interview and an RPA robot for data input; and eventually via direct integration with the SVB’s case management system to load data.

DecisionCAMP 2019: Industry use cases in airport gate allocation, financial risk monitoring, and composite material design

The Decision Model for Gate Allocation. Silvie Spreeuwenberg, LibRT

Day 3 of DecisionCAMP 2019 started with three use cases from industry. First, Silvie Spreeuwenberg presented on decision models for allocating airport gates, specifically at Schiphol airport in Amsterdam. Although gate plans are made a day in advance based on flight schedules, they change constantly due to early arrivals, late departures and other unexpected disruptions to the schedule. On any given day, there are 50-100 gate changes within the hour before an aircraft’s arrival; although this was seen as a disruption, it could also be considered an opportunity for optimization.

Day-ahead decision model for assigning gates. From Silvie Spreeuwenberg’s presentation.

There were a lot of rules used for the planning and reassignment that had more to do with preferences than actual optimization; they really wanted to drive towards the objective of optimizing asset usage and therefore airport capacity. There are a lot of factors involved, such as having sufficient gate area capacity to handle the number of passengers for a flight, or having buses available to offload flights that can’t be assigned a gate. They have created a policy for aircraft stand allocation which includes some identifiable decision tables, although these are just at the strategy documentation phase.

Definitely a complex problem that has applicability at every major airport around the world.

A hybrid implementation of multi-channel, multi-modal, high volume financial risk monitoring. Martijn Tromm and Marten Schokking, Oracle

Marten Schokking and Martijn Tromm presented a use case from Rabobank using decision management and machine learning for customer risk assessment in terms of KYC (know your client) and AML (anti-money laundering). This is used during client onboarding, but also during periodic reviews as well as reviews triggered by specific events. There are scoring rules that use data input from a variety of sources, including client information from a CRM, interview responses and policies.

Risk model for customer risk assessment. From Marten Schokking and Martijn Tromm’s presentation.

There are government regulations requiring that this be done for all clients at certain times. A triggering event, such as a change in the customer’s circumstances, will cause a customer interview and other data analysis to recalculate the risk; this may result in a more detailed manual review of the risk. At this point, there is still a lot of employee work which is creating a challenge in completing the customer risk assessments within the regulatory deadlines; they are looking at how to automate the basic assessment using machine learning in order to reduce the manual work required.

The risk model has been built using Oracle Policy Automation rules engine integrated with the Siebel CRM. They are reusing rules across channels where possible, and the use of natural language in the rules definition helps with traceability to the policies. They are continuing to innovate with rules, such as having context-driven rules based on user behavior on specific channels, and having a fast two-day turnaround for rule changes related to certain types of policy changes. The ability to predict the impact of policy changes based on actual data allows for operational planning to accommodate those changes.

The Role of DMN and BPMN in the Design of Composite Materials. Dario Campagna, ESTECO

Dario Campagna presented on how the COMPOSELECTOR project is integrating material modeling and business process management in a decision support system for composite material design; this type of design can have complex requirements, business decisions and simulation workflows. Using application cases from Dow, Airbus and Goodyear, they modeled the business flow using BPMN and DMN. ESTECO, which creates software tools for engineering design, is a contributor to the COMPOSELECTOR project.

Subprocess in Dow flow showing decision invocation. From Dario Campagna’s presentation.

While BPMN is used to model the flow at the business level, DMN decision tables are used to make decisions on the class of materials and manufacturing process, then on the simulation workflows to use based on business and engineering KPIs. DMN provides the link from the business layer to the engineering layer, then to the simulation layer. Using DMN provides a higher level of consistency in decision-making, which leads to better design and lower costs.

Decision table used to select simulation workflow. From Dario Campagna’s presentation.

We saw a brief video of a demo of the system in use: a business-level manager selects high-level parameters and KPIs for the proposed design; this selects one or more simulation models for the material design, which is then confirmed or decided by an engineer; the results of the simulation are passed back to the manager for final decision-making. This has the effect of integrating the business and technical sides of the design process, and of including modeling and simulation results in the business-level (human) decisions in a standardized way.

DecisionCAMP 2019: the evolving DMN standard and the quality of decision models

DMN 2.0? Gary Hallmark, Oracle

Gary Hallmark presented on the next major version of DMN that’s in the works, starting with a timeline of what’s happened so far since the original RFP in 2011 up to the expected 1.3 release later this year. He added the question mark to his title because whether to issue a major (fix the mistakes of the past) or minor (patch and live with the mistakes of the past) release is still under consideration. He went through the top 10 requested features, half of which can be backward-compatible with DMN 1.x (i.e., DMN 1.x models can be ingested and executed in the new version) and half of which can’t.

He mentioned the issue of harmonization of DMN, BPMN and CMMN, a topic that I am especially interested in, and plan to ask the vendors about later this afternoon when I am moderating the panel. That can include a common item definition model that is used by all three notations, tighter integration of DMN with BPMN gateways and CMMN sentries, and easier reference and interchange between the model types. This could also include the use of FEEL and decision tables for expressions and logic in BPMN and CMMN, such as at gateways and in data associations. We already have the problem of keeping a collection of different model types sorted out, and there may need to be a “model of models” concept to tie these together.

Concept for how to use a DMN-style decision table directly in a BPMN gateway to model the logic. From Gary Hallmark’s presentation.

Better model validation is another request for the next version of DMN; I don’t have a lot of experience with DMN, but judging by the comments here, the “null” returned for all types of errors is definitely a touchy issue. This could be improved with a “required” property in item definitions and model validation with respect to the item definitions. Additional datatypes would also be useful, such as integers and some types of ranges. There are suggestions for better ways to deal with iteration and recursion in a new version of DMN, some of which are already being done by vendors such as Oracle in their products to make it easier for business analysts to understand and model.
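To see why the all-purpose null is a touchy issue, here’s a toy example of my own (not from the presentation): the FEEL-style evaluation collapses every problem into the same null result, while a “required” check against the item definition at least tells the modeler what went wrong.

```python
# Toy illustration of "null for every error" versus validating inputs against
# a required-style item definition.

from typing import Optional

def feel_style_eval(inputs: dict) -> Optional[float]:
    try:
        return inputs["amount"] * inputs["rate"]      # any problem collapses to null
    except (KeyError, TypeError):
        return None

def validated_eval(inputs: dict) -> float:
    for required in ("amount", "rate"):
        if required not in inputs:
            raise ValueError(f"required input '{required}' is missing")
        if not isinstance(inputs[required], (int, float)):
            raise TypeError(f"input '{required}' must be a number")
    return inputs["amount"] * inputs["rate"]

print(feel_style_eval({"amount": 100}))                  # None -- but why?
print(validated_eval({"amount": 100, "rate": 0.05}))     # 5.0
# validated_eval({"amount": 100}) raises: required input 'rate' is missing
```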

Something that seems simple but would break compatibility is moving to case-insensitive names (indicating just how IT-centric the original, and possibly the current, committee was), and handling some things such as single-quoted strings unambiguously. Moving to XPath-like sequences instead of the current lists also wouldn’t be compatible, nor are many types of recursion and cyclic information requirements.

As mentioned earlier, half of the top requested features could be done in 1.x because they don’t break compatibility; one option is to implement those in a 1.x version and leave the others for now. The alternative is to start the DMN 2.x RFP process, which is a much larger undertaking, which will delay the implementation of those features but will open the door (or Pandora’s box) for a completely new version of DMN. Lots of great discussion at the end of this presentation: many of the people in the room are active contributors to the standard and/or vendors who implement the standard, so they definitely have both knowledge and opinions on the subject.

Quality of Decision Models. Jan Vanthienen, KU Leuven

Jan Vanthienen presented on measuring the quality of decision models, starting with notions of information quality and resulting in measures such as complexity and traceability. For example, you can look at the complexity of a DRD based on the number of decisions, number of elements and cyclomatic complexity; the complexity of a decision table can be based on hit policy usage and the total number of input variables.

Consistency and interpretability in decision tables can be measured by a unique hit table (no overlapping rules) and a natural order for easy visual reading of the rules — it is more important to be correct and consistent than compact.
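The unique hit check is easy to picture with a small sketch (mine, not from the talk): two rules overlap if all of their conditions can match the same input, so a table is not unique hit if any pair of rules overlaps.

```python
# Sketch of an overlap (unique hit) check for a decision table whose input
# conditions are numeric intervals; real tables also have enumerations,
# strings and hit policies.

from itertools import combinations

# Each rule maps an input name to a half-open interval [low, high).
rules = {
    "rule 1": {"age": (0, 18),  "income": (0, 50_000)},
    "rule 2": {"age": (18, 65), "income": (0, 50_000)},
    "rule 3": {"age": (16, 21), "income": (0, 50_000)},   # overlaps rules 1 and 2
}

def intervals_overlap(a: tuple, b: tuple) -> bool:
    return a[0] < b[1] and b[0] < a[1]

def rules_overlap(r1: dict, r2: dict) -> bool:
    # Two rules overlap only if every shared input condition overlaps.
    return all(intervals_overlap(r1[k], r2[k]) for k in r1.keys() & r2.keys())

for (n1, r1), (n2, r2) in combinations(rules.items(), 2):
    if rules_overlap(r1, r2):
        print(f"{n1} and {n2} overlap -- the table is not unique hit")
```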

There’s a contextual quality factor when we look at a decision model that is related to a process model, where the connections between the two models can be fairly complex. He presented a set of guidelines for integrating processes and decisions, including avoiding embedding decisions in gateways: something that happens all the time, in my experience with process modeling.

Integration between a process model and a decision model. From Jan Vanthienen’s presentation.

He covered some ideas on decision modeling methodology for creating the models of highest quality: usually this will involve working back and forth between the DRD and the decision tables, rather than trying to do a pure top-down or bottom-up approach. There’s a lot of past research that covers many of the issues of creating quality models, most of which predates BPMN and DMN but the same principles apply. The DMN and BPMN standards embody some of these principles, such as separating decision and process logic.

DecisionCAMP 2019: Modeling helpers for business analysts, and extending DMN to include performance metrics

Business Rules — Focus on “Business”. Guilhem Molines, IBM

Guilhem Molines presented on what business analysts can do in the creation of automated decisions before developers need to get involved (or take over), and what the modeling tools can provide to help them get further in the process independently.

Clearly, business analysts can model business decisions at the level of a decision requirements diagram (e.g., DMN DRD) that shows the entities and information required to make a decision. Then, the data and decision models required for implementation can be created by the business analyst or co-authored together with a developer. IBM’s ODM tool has authoring assistance such as smart predictions to guide an analyst while they are modeling, plus suggestions to define terms in the model as they are used in a rule definition.

In authoring decision tables, assistance may include Excel-like functions for copy and paste or smart drag to extend a range; as a table is being defined, there can be guidance to check for gaps and overlaps in the parameter ranges. Being able to do instant validation for a decision table by entering data values and seeing the calculation result (without deploying the rule) builds confidence in the logic implementation.
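The gap checking could work something like this sketch (my own, not IBM’s): sort the ranges that the rules cover for one input and report any values left uncovered.

```python
# Sketch of gap detection for one numeric input of a decision table.

def find_gaps(intervals: list, lo: float, hi: float) -> list:
    """Return sub-ranges of [lo, hi) not covered by any (start, end) interval."""
    gaps = []
    cursor = lo
    for start, end in sorted(intervals):
        if start > cursor:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < hi:
        gaps.append((cursor, hi))
    return gaps

# Rules cover ages [0, 18) and [21, 65): ages 18-20 and 65+ are unhandled.
print(find_gaps([(0, 18), (21, 65)], lo=0, hi=120))   # [(18, 21), (65, 120)]
```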

A more comprehensive “unit testing” approach allows the analyst to provide a number of input parameter sets in a spreadsheet or similar tabular form and see the results. A further step in testing decisions is simulation based on a large quantity of production data; then integration testing in a full test environment before promotion to production.

Much of this presentation was based on IBM ODM capabilities, although some good ideas here for any modeling environment.

Combining Decision Models for Better Decision Management. Fernando Donati Jorge, FICO

Fernando Donati Jorge presented on whether we have the right decision model to solve mysteries in addition to puzzles — an interesting distinction, where puzzles can be solved given the right information, while mysteries may not have a well-defined answer and can depend on future interactions.

The challenge with mysteries is that there is too much data but no indication of which bits are most relevant; they are full of uncertainties; and they depend on future known and unknown interactions. Decision models can properly contextualize data, but don’t measure relevance. Predictive analytics can quantify uncertainties, and decision models can contextualize the use of different types of knowledge models. An analytic decision model (a Gartner term, which does not include DMN decision models) can represent how different decision outcomes can lead to different future interactions.

Typically, there are two separate types of decision models: one that models the input data and business knowledge model that results in a decision (a DMN model), and the other that models how a decision impacts business performance. If you want to combine these into a single model, an extension to DMN is required to be able to model performance metrics, which in turn have models, decisions and data as inputs.
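A toy sketch of that combination (mine, not FICO’s model) helps make the distinction concrete: the decision is one piece of logic, and the performance metric is a separate element computed over recent decision instances.

```python
# Toy sketch: a decision plus a performance metric (approval rate) computed
# over a rolling window of decision instances.

from collections import deque

def approve_credit(score: int) -> bool:
    return score >= 650                              # the decision

recent_outcomes: deque = deque(maxlen=1000)          # rolling window of instances

def record_decision(score: int) -> bool:
    outcome = approve_credit(score)
    recent_outcomes.append(outcome)
    return outcome

def approval_rate() -> float:                        # the performance metric
    return sum(recent_outcomes) / len(recent_outcomes) if recent_outcomes else 0.0

for score in (700, 610, 680, 590, 720):
    record_decision(score)
print(f"approval rate: {approval_rate():.0%}")       # 60%
```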

Modeling performance metrics with a DMN extension. From Fernando Donati Jorge’s presentation.

This DMN extension is available in the FICO Analytic Cloud for adding KPIs to decision-making. Good discussion about how they handle latency for calculating the performance metrics, and the issue of metrics aggregation over multiple decision instances. They don’t currently allow a performance metric to inform (provide input to) a decision, but that’s obviously open for future discussion.

Both of the presentations in this section looked at how vendors are adding value to their DMN-based modelers: IBM in terms of interactive assistance to help modelers create better models, and FICO in extending DMN to include performance metrics directly in a decision model.

DecisionCAMP 2019: Decisions on demand in the mortgage industry, and combining DMN and IDP

Beyond Decision Models – Using Technical and Business Standards to Transform Financial Services. Brian Stucky, Quicken Loans

Although he recently joined Quicken Loans as a senior enterprise architect, Brian Stucky is also involved in the MISMO mortgage standards organization, which was the focus of his presentation. Earlier this year, MISMO recommended DMN as an official standard for documenting, implementing, exchanging and executing decision models in the mortgage industry; they are working on officially recognizing BPMN as well. The idea is to create a DMN data structure based on the existing MISMO XSD to allow these mortgage-related decision models to be shared, but the industry is still rife with paper-based processes and legacy systems that hinder adoption.

There’s a history of business rules in the mortgage industry, but it didn’t really allow for business control of the rules, didn’t have the agility required, and was expensive. DMN is changing that game — especially with decisions as a service instead of on-premise systems — and allowing mortgage companies to better meet some of the new regulations such as Ability-To-Repay, where the written government regulation can be translated into a standard DMN model to ensure that all parties are using the same evaluation criteria. They’ve shown that analyzing and modifying the DMN model for a specific rule change can take as little as a couple of hours, which is a huge push for moving from the MISMO XSD to the DMN model.

Ability-To-Repay DMN model. From Brian Stucky’s presentation.

In the future, this could mean that DMN plus the MISMO data model could be used directly to disseminate a regulatory rule change, rather than the 800-page text document used now. That brings up other issues, such as versioning of the model or even of DMN, and engine compliance in executing the DMN models as distributed. A better way to do it may be to roll out the model as a service with an open API, where every mortgage provider uses the same decision service; this guarantees that it will be evaluated identically everywhere. The ultimate goal may be a digital mortgage, potentially using blockchain to ensure the chain of events in this smart contract.
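The “same decision service for everyone” idea looks roughly like this from the caller’s side; the endpoint, payload fields and response shape below are entirely made up for illustration, since no such API has been standardized.

```python
# Hypothetical sketch of calling a shared Ability-To-Repay decision service.

import requests

def check_ability_to_repay(application: dict) -> dict:
    # Invented endpoint; a real shared service would publish its own API contract.
    response = requests.post(
        "https://decisions.example.org/ability-to-repay/v1/evaluate",
        json=application,
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

result = check_ability_to_repay({
    "monthlyIncome": 8200,
    "monthlyDebtPayments": 2400,
    "loanAmount": 350_000,
})
print(result)   # e.g. {"qualified": true, "dtiRatio": 0.29}
```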

Meeting the Expectations with DMN and Constraint Solving: The Notary Case. Marjolein Deryck, KU Leuven

Marjolein Deryck presented on research in decision modeling and knowledge representation, and how she applied it to property registration taxes in Belgium, which are typically calculated by a notary. The use case was to support a notary when performing these calculations, using DMN for collaborative analysis with the notary and an executable prototype, then IDP to go beyond DMN’s capabilities with a constraint-based approach.

An interesting requirement was that the support application be non-intrusive: the notary felt that if he spent too much time typing on a computer while figuring this out with a client, he would be seen as less of an expert. A tablet-based app with minimal data entry, plus interactivity in terms of presenting the next best question rather than following a fixed script, were seen as essential.

In her initial evaluation, DMN was seen as lacking script interactivity/adaptability (although I saw a really interesting way to use DMN and BPMN to resolve this last week at CamundaCon), and she instead considered IDP as a more powerful implementation. This provides a better solution, although the models are less understandable by the notaries, and required enhancements to be able to provide an explanation of a specific calculation.

IDP configuration for property tax calculations. From Marjolein Deryck’s presentation.

The lessons learned included the use of DMN as an intermediate model — for gathering and analyzing requirements together with the business user — as well as how to combine DMN and IDP in a project.

Panel: DMN and Beyond

We closed off day 1 of DecisionCAMP 2019 with a panel that included Mike Gualtieri, Alan Fish, Jan Vanthienen, Jan Purchase, Gary Hallmark and Brian Stucky, moderated by Jacob Feldman.

A few points that came up during the panel (unattributed to the specific speaker):

  • Many buyers of decision management systems don’t know enough about DMN to evaluate it or even ask for it.
  • DMN still falls short in complex representations, although it works well to represent static hierarchical information from decision tables. It has the potential to include other representations and other models such as machine learning. Making it more powerful could, however, cause DMN to lose the simplicity that makes it more likely to be adopted.
  • DMN is difficult to debug, making it hard to figure out logic flaws.
  • The diagram/graph level of DMN is very understandable to business users/analysts, but by the time you’re doing more complex nested expression logic at the FEEL execution level, you’ve lost most of them.
  • Highly-regulated industries such as lending, where rules are already documented in spreadsheets, are a good target for DMN implementation.
  • Being able to follow the execution path is not the same as an explanation of the decision logic. The DMN standard includes support for remarks/annotations to improve explainability but that may not be sufficient.
  • Knowledge sources in DMN models have no programmatic representation, putting the onus on the modeler to ensure explainability and traceability.
  • Ethics are important to decision management in terms of decision fairness and consistency. DMN model-based decision making can improve that as long as the models are based on the right rules and data.
  • There’s a need to be able to integrate DMN and machine learning while still providing decision explainability.
  • Models are always fit for purpose: there is no all-encompassing model that is suitable for everything. As an aside, that’s definitely true in the BPMN realm too.

That’s it for day 1; I’m off to find a gelato and an Aperol Spritz on this warm evening in Bolzano.

DecisionCAMP 2019: Serverless Drools and the Digital Engineer

How and Why I Turned a Rule Engine into a First-Class Serverless Component. Mario Fusco and Matteo Mortari, Red Hat

Mario Fusco, who heads up the Drools project within Red Hat, presented on modernization of the Drools architecture to support serverless execution, using GraalVM and Quarkus. He discussed Kogito, a cloud-native, open source business automation project that uses Red Hat process and decision management along with Quarkus.

I’m not a Java developer and likely did not appreciate many of the details in the presentation, hence the short post. You can check out his slides here.

Combining DMN, First Order Logic and Machine Learning: The creation of Saint-Gobain Seals’ Digital Engineer. Nicholas Decleyre, Saint-Gobain and Bram Aerts, KU Leuven

The seals design and manufacturing unit of Saint-Gobain had the goal of creating a “digital engineer” to capture knowledge, with the intent of standardizing global production processes, reducing costs and time to market, and aiding in training new engineers. They created an engineering automation tool to automatically generate solutions for standard designs, and an engineering support tool to provide information and other support to engineers while they are working on a solution.

Engineering automation and support systems at Saint-Gobain Seals. From Nicholas Decleyre and Bram Aerts’ presentation.

Automation for known solutions is fairly straightforward in execution: given the input specifications, determine a standard seal that can be used as a solution. This required quite a bit of knowledge elicitation from design engineers and management, which could then be represented in decision tables and FEEL for readability by the domain experts. Not only the solution selection is automated, however: the system also generates a bill of materials and pricing details.
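Purely as an illustration of the pattern (their actual tables aren’t public, and these seal codes and limits are invented), the automation step amounts to a decision-table lookup from input specifications to a standard design:

```python
# Decision-table-style lookup from specifications to a standard seal,
# expressed as data so that domain experts can read and edit it.

from typing import Optional

seal_table = [
    # max temperature (C), max pressure (bar), fluid type -> standard seal
    {"max_temp": 120, "max_pressure": 10, "fluid": "water", "seal": "SEAL-A-01"},
    {"max_temp": 200, "max_pressure": 10, "fluid": "water", "seal": "SEAL-A-02"},
    {"max_temp": 200, "max_pressure": 40, "fluid": "oil",   "seal": "SEAL-B-07"},
]

def select_seal(temp: float, pressure: float, fluid: str) -> Optional[str]:
    """Return the first standard seal whose limits cover the specification."""
    for row in seal_table:
        if temp <= row["max_temp"] and pressure <= row["max_pressure"] and fluid == row["fluid"]:
            return row["seal"]
    return None   # no standard design: hand off to the engineering support tool

print(select_seal(150, 8, "water"))   # SEAL-A-02
print(select_seal(250, 8, "water"))   # None
```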

The engineering support system is for when the solution is not known: a design engineer uses the support system to experiment on possible solutions and compare designs. This required building a knowledge base in first-order logic to define physical constraints and preferences, represented in IDP, then allowing the system to make recommendations about a partial or complete solution or set of solutions. They built a standalone tool for engineers to use this system, presenting a set of design constraints for the engineer to apply to narrow down the possible solutions. They compared the merits of DMN versus IDP representations, where DMN is easier to model and understand, but has limitations in what it can represent as well as being more cumbersome to maintain. At RuleML yesterday, they presented a proposal for extended DMN for better representing constraints.

They finished up talking about potential applications of machine learning on the design database: searching for “similar” existing solutions, learning new constraints, and checking data consistency. They have several automated engineering tools in development, with one in testing and one in production. Their engineering support tool has working core functionality, although they need to expand the knowledge base and prototype the UI. On the ML work, they are expecting to have a prototype by the end of this year.

DecisionCAMP 2019: Standards-based machine learning and the friendliness of FEEL

Machine Learning and Decision Management: A standards-based approach. Edson Tirelli and Matteo Mortari, Red Hat

DecisionCAMP Day 1 morning sessions continue with Edson Tirelli and Matteo Mortari presenting on the integration of machine learning and decision management to address predictive decision automation. The problem to date is that integrating machine learning into business automation (either process or decision) has required proprietary interfaces and APIs, although there is an existing standard (PMML, Predictive Model Markup Language) for specifying and exchanging many types of executable machine learning models. The entry of the DMN standard provides a potential bridge between PMML and both BPMN and CMMN, allowing for an end-to-end standards-based representation for cases, processes, decisions and predictive models.

Linking business automation and machine learning with standards. From Edson Tirelli’s presentation.

They gave a demo of how they have implemented this using Red Hat decision and process engines along with the open source tools Prometheus and Grafana, with a credit card dispute use case that uses BPMN, DMN and PMML to model the process and decisions. They started with a standard use of BPMN and DMN, where the DMN decision tables and graphs calculate the risk factors of the dispute and the customer, and make a decision on whether or not the dispute process can be automated. They then added a predictive model for better calculation of the risk factors, positioning it in the DMN DRD as a business knowledge model that can drive the decision model instead of a hard-coded decision table.
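The pattern is easy to sketch without any particular engine (the scoring function below is a stub of my own standing in for a PMML model, not their actual demo code): the predictive model supplies the risk factor as a business knowledge model, and the decision logic consumes that score.

```python
# Sketch of a predictive model plugged in as the knowledge model behind a decision.

def dispute_risk_score(amount: float, prior_disputes: int) -> float:
    """Stand-in for invoking a PMML model (e.g. a logistic regression score)."""
    return min(1.0, 0.1 + 0.002 * amount + 0.15 * prior_disputes)

def can_automate_dispute(amount: float, prior_disputes: int) -> bool:
    """Decision logic: automate the dispute process only for low-risk cases."""
    return dispute_risk_score(amount, prior_disputes) < 0.5

print(can_automate_dispute(amount=80, prior_disputes=0))    # True  -> automated
print(can_automate_dispute(amount=900, prior_disputes=2))   # False -> manual review
```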

They finished their demo by importing the same PMML and DMN models in the Trisotech modeler to show interoperability of the integrated model types, with the predictive models providing knowledge sources for the decision models.

Coming from the process side, this is really exciting: we’re already seeing a lot of proprietary plug-ins and APIs to add machine learning to business processes, but this goes beyond that to allow standards-based tools to be plugged together easily. There’s still obviously work to be done to make this a seamless integration, but the idea that it can be all standards-based is pretty significant.

FEEL, Is It Really Friendly Enough? Daniel Schmitz-Hübsch and Ulrich Striffler, Materna

Materna has a number of implementation projects (mostly German government) that involve decision automation, where logic is modeled by business users and the decision justification must be explainable to all users for transparency of decision automation. They use both decision tables and FEEL — decision tables are easier for business users to understand, but can’t represent everything — and some of the early adopters are using DMN. Given that most requirements are documented by business users in natural language, there are some obstacles to moving that initial representation to DMN instead.

Having the business users model the details of decisions in FEEL is the biggest issue: basically, you’re asking business people to write code in a script language, with the added twist that in their case, the business users are not native English speakers but the FEEL keywords are in English. In my experience, it’s hard enough to get business people to create syntactically-correct visual models in BPMN, moving to a scripting language would be a daunting task, and doing that in a foreign language would make most business people’s heads explode in frustration.

They are trying some different approaches for dealing with this: allowing the users to read and write the logic in their native natural language (German), or replacing some FEEL elements (text statements) with graphical representations. They believe that this is a good starting point for a discussion on making FEEL a bit friendlier for business users, especially those whose native language is not English.

Graphical representation of FEEL elements. From Daniel Schmitz-Hübsch and Ulrich Striffler’s presentation.

Good closing discussion on the use of different tools for different levels of people doing the modeling.