Trends in Enterprise AI and Digital Decisions. Mike Gualtieri, Forrester
Day 2 of DecisionCAMP 2019 in beautiful Bolzano started out with Mike Gualtieri giving the Forrester view of trends in the market around AI and automated decisions. This was a typical analyst presentation — sorry, no notes — presented as part of the larger BRAIN 2019 (Bolzano Rules and Artificial INtelligence Summit) of which DecisionCAMP is a part.
Ron Ross presented on the current state of business rules and opportunities moving forward. To start, we have made progress in this area — DMN for one thing is an amazing leap forward — but business rules are not yet universally accepted and adopted within organizations despite the provable benefits.
One opportunity for business rules tools is to reduce developer workload and rule programming errors. In alignment with the Semantics of Business Vocabulary and Rules (SBVR) standard, there are two types of rules: definitional rules and behavioral rules. Definitional rules may be incorrect or misapplied, but they can’t be directly violated since they are evaluated in the course of a process; declarative behavioral rules, on the other hand, require a “watcher” to track events that may cause the state of another process or transaction to change. If implemented properly, behavioral rules can reduce developer workload since the event-driven watcher updates state continuously as these rules fire. DMN does not allow modeling of these types of decisions, since that requires more awareness of state as well as the events that may cause it to change; there is no concept of a watcher daemon that constantly evaluates rules and updates state.
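As a thought experiment (mine, not from Ron's talk), here's a minimal Java sketch of what such a watcher might look like: a behavioral rule is just a condition over some shared state plus an action on violation, and the watcher re-evaluates the rules whenever a state-change event arrives rather than waiting for a process step to do it. The rule, event and state types are all hypothetical.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Predicate;

// A behavioral rule: a condition over some state, plus an action to take on violation.
record BehavioralRule<S>(String name, Predicate<S> isSatisfied, Runnable onViolation) {}

// The "watcher": re-evaluates every registered rule whenever a state-change event
// arrives, instead of waiting for a process step to evaluate the rules.
class RuleWatcher<S> {
    private final List<BehavioralRule<S>> rules = new CopyOnWriteArrayList<>();

    void register(BehavioralRule<S> rule) {
        rules.add(rule);
    }

    // Called by whatever event bus delivers state-change events.
    void onStateChanged(S newState) {
        for (BehavioralRule<S> rule : rules) {
            if (!rule.isSatisfied().test(newState)) {
                rule.onViolation().run();   // e.g. raise an alert or start a remediation process
            }
        }
    }
}

// Hypothetical usage with a made-up Order state.
record Order(boolean approved, boolean shipped) {}

class WatcherDemo {
    public static void main(String[] args) {
        RuleWatcher<Order> watcher = new RuleWatcher<>();
        watcher.register(new BehavioralRule<>(
                "orders must be approved before shipping",
                o -> !o.shipped() || o.approved(),
                () -> System.out.println("violation: shipped without approval")));
        watcher.onStateChanged(new Order(false, true));   // prints the violation
    }
}
```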
There is also a need to better address sentiment and human discretion in rules. With behavioral rules that are enforced by humans, there are levels of enforcement; these nuances are not captured in most rules/decision systems.
Rules tools also need to tie in more directly with business governance in order to enforce regulatory and other rules under which an organization needs to operate. Many of these are behavioral rules, which are not handled adequately by DMN and most decision management systems due to the lack of an event-driven watcher; there is also a gap caused by the lack of natural language support in defining executable rules.
Beyond Decision Models – Using Technical and Business Standards to Transform Financial Services. Brian Stucky, Quicken Loans
Although he recently joined Quicken Loans as a senior enterprise architect, Brian Stucky is also involved in the MISMO mortgage standards organization, which was the focus of his presentation. Earlier this year, MISMO recommended DMN as an official standard for documenting, implementing, exchanging and executing decision models in the mortgage industry; they are also working on officially recognizing BPMN. The idea is to create a DMN data structure based on the existing MISMO XSD to allow these mortgage-related decision models to be shared, but the industry is still rife with paper-based processes and legacy systems that hinder adoption.
There’s a history of business rules in the mortgage industry, but it didn’t really allow for business control of the rules, didn’t have the agility required, and was expensive. DMN is changing that game — especially with decisions as a service instead of on-premise systems — and allowing mortgage companies to better meet some of the new regulations such as Ability-To-Repay, where the written government regulation can be translated into a standard DMN model to ensure that all parties are using the same evaluation criteria. They’ve shown that analyzing and modifying the DMN model for a specific rule change can take as little as a couple of hours, which is a huge push for moving from the MISMO XSD to the DMN model.
In the future, this could mean that DMN plus the MISMO data model could be used directly to disseminate a regulatory rule change, rather than the 800-page text document used now. That brings up other issues, such as versioning of the model or even of DMN, and engine compliance in executing the DMN models as distributed. A better way to do it may be to roll out the model as a service with an open API, where every mortgage provider uses the same decision service; this guarantees that it will be evaluated identically everywhere. The ultimate goal may be a digital mortgage, potentially using blockchain to ensure the chain of events in this smart contract.
Meeting the Expectations with DMN and Constraint Solving: The Notary Case. Marjolein Deryck, KU Leuven
Marjolein Deryck presented on research in decision modeling and knowledge representation, and how she applied it to property registration taxes in Belgium, which are typically calculated by a notary. The use case was to support a notary when performing these calculations, using DMN for collaborative analysis with the notary and for an executable prototype, then IDP to go beyond DMN’s capabilities with a constraint-based approach.
An interesting requirement was that the support application be non-intrusive: the notary felt that if he was spending too much time typing on a computer while figuring this out with a client, he would be seen as less of an expert. A tablet-based app that minimized data entry and interactively presented the next best question, rather than following a fixed script, was seen as essential.
In her initial evaluation, DMN was seen as lacking script interactivity/adaptability (although I saw a really interesting way to use DMN and BPMN to resolve this last week at CamundaCon), and she instead considered IDP as a more powerful implementation. This provided a better solution, although the models are less understandable to the notaries, and required enhancements to be able to explain a specific calculation.
The lessons learned included the use of DMN as an intermediate model — for gathering and analyzing requirements together with the business user — as well as how to combine DMN and IDP in a project.
Panel: DMN and Beyond
We closed off day 1 of DecisionCAMP 2019 with a panel that included Mike Gualtieri, Alan Fish, Jan Vanthienen, Jan Purchase, Gary Hallmark and Brian Stucky, moderated by Jacob Feldman.
A few points that came up during the panel (unattributed to the specific speaker):
Many buyers of decision management systems don’t know enough about DMN to evaluate it or even ask for it.
DMN still falls short in complex representations, although it works well for representing static hierarchical information in decision tables. It has the potential to include other representations and other models such as machine learning. Making it more powerful could, however, cause DMN to lose the simplicity that makes it more likely to be adopted.
DMN is difficult to debug, making it hard to figure out logic flaws.
The diagram/graph level of DMN is very understandable to business users/analysts, but by the time you’re doing more complex nested expression logic at the FEEL execution level, you’ve lost most of them.
Highly-regulated industries such as lending, where rules are already documented in spreadsheets, are a good target for DMN implementation.
Being able to follow the execution path is not the same as an explanation of the decision logic. The DMN standard includes support for remarks/annotations to improve explainability but that may not be sufficient.
Knowledge sources in DMN models have no programmatic representation, putting the onus on the modeler to ensure explainability and traceability.
Ethics are important to decision management in terms of decision fairness and consistency. DMN model-based decision making can improve that as long as the models are based on the right rules and data.
There’s a need to be able to integrate DMN and machine learning while still providing decision explainability.
Models are always fit for purpose: there is no all-encompassing model that is suitable for everything. As an aside, that’s definitely true in the BPMN realm too.
That’s it for day 1; I’m off to find a gelato and an Aperol Spritz on this warm evening in Bolzano.
How and Why I Turned a Rule Engine into a First-Class Serverless Component. Mario Fusco and Matteo Mortari, Red Hat
Mario Fusco, who heads up the Drools project within Red Hat, presented on modernization of the Drools architecture to support serverless execution, using GraalVM and Quarkus. He discussed Kogito, a cloud-native, open source business automation project that uses Red Hat process and decision management along with Quarkus.
I’m not a Java developer and likely did not appreciate many of the details in the presentation, hence the short post. You can check out his slides here.
Combining DMN, First Order Logic and Machine Learning: The creation of Saint-Gobain Seals’ Digital Engineer. Nicholas Decleyre, Saint-Gobain and Bram Aerts, KU Leuven
The seals design and manufacturing unit of Saint-Gobain had the goal of creating a “digital engineer” to capture knowledge, with the intent to standardize global production processes, reduce costs and time to market, and aid in training new engineers. They created an engineering automation tool to automatically generate solutions for standard designs, and an engineering support tool to provide information and other support to engineers while they are working on a solution.
Automation for known solutions is fairly straightforward in execution: given the input specifications, determine a standard seal that can be used as a solution. This required quite a bit of knowledge elicitation from design engineers and management, which could then be represented in decision tables and FEEL for readability by the domain experts. It’s not only solution selection that is automated, however: the system also generates a bill of materials and pricing details.
The engineering support system is for when the solution is not known: a design engineer uses the support system to experiment with possible solutions and compare designs. This required building a knowledge base in first-order logic to define physical constraints and preferences, represented in IDP, then allowing the system to make recommendations about a partial or complete solution or set of solutions. They built a standalone tool for engineers to use this system, presenting a set of design constraints for the engineer to apply to narrow down the possible solutions. They compared the merits of DMN versus IDP representations: DMN is easier to model and understand, but has limitations in what it can represent and is more cumbersome to maintain. At RuleML yesterday, they presented a proposal for an extended DMN to better represent constraints.
They finished up talking about potential applications of machine learning on the design database: searching for “similar” existing solutions, learning new constraints, and checking data consistency. They have several automated engineering tools in development, with one in testing and one in production. Their engineering support tool has working core functionality, although they still need to expand the knowledge base and prototype the UI. On the ML work, they are expecting to have a prototype by the end of this year.
Machine Learning and Decision Management: A standards-based approach. Edson Tirelli and Matteo Mortari, Red Hat
DecisionCAMP Day 1 morning sessions continue with Edson Tirelli and Matteo Mortari presenting on the integration of machine learning and decision management to address predictive decision automation. The problem to date is that integrating machine learning into business automation (either process or decision) has required proprietary interfaces and APIs, although there is an existing standard (PMML, Predictive Model Markup Language) for specifying and exchanging many types of executable machine learning models. The entry of the DMN standard provides a potential bridge between PMML and both BPMN and CMMN, allowing for an end-to-end standards-based representation for cases, processes, decisions and predictive models.
They gave a demo of how they have implemented this using Red Hat decision and process engines along with the open source tools Prometheus and Grafana, with a credit card dispute use case that uses BPMN, DMN and PMML to model the process and decisions. They started with a standard use of BPMN and DMN, where the DMN decision tables and graphs calculate the risk factors of the dispute and the customer, and make a decision on whether or not the dispute process can be automated. They then added a predictive model for better calculation of the risk factors, positioning it in the DMN DRD as a business knowledge model that can drive the decision model instead of a hard-coded decision table.
They finished their demo by importing the same PMML and DMN models in the Trisotech modeler to show interoperability of the integrated model types, with the predictive models providing knowledge sources for the decision models.
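For a rough sense of what this looks like on the engine side, here's a minimal sketch using the open source kie/Drools DMN API (my own illustration, not their demo code): the model namespace, name and input names are made up, and it assumes a kjar on the classpath containing the DMN model together with its imported PMML model.

```java
import java.util.Map;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.dmn.api.core.DMNContext;
import org.kie.dmn.api.core.DMNModel;
import org.kie.dmn.api.core.DMNResult;
import org.kie.dmn.api.core.DMNRuntime;

public class DisputeRiskDemo {
    public static void main(String[] args) {
        // Load the DMN model (and, by reference, its imported PMML model) from a kjar on the classpath.
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        DMNRuntime dmnRuntime = container.newKieSession().getKieRuntime(DMNRuntime.class);

        // Hypothetical namespace and name for a credit card dispute decision model.
        DMNModel model = dmnRuntime.getModel("https://example.com/dmn/dispute", "Dispute Risk");

        // Inputs feed the decisions; a BKM backed by the imported PMML model would
        // compute the risk score here instead of a hard-coded decision table.
        DMNContext ctx = dmnRuntime.newContext();
        ctx.set("Customer", Map.of("age", 34, "creditScore", 710));
        ctx.set("Dispute", Map.of("amount", 250.0, "type", "fraud"));

        DMNResult result = dmnRuntime.evaluateAll(model, ctx);
        result.getDecisionResults().forEach(dr ->
                System.out.println(dr.getDecisionName() + " = " + dr.getResult()));
    }
}
```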
Coming from the process side, this is really exciting: we’re already seeing a lot of proprietary plug-ins and APIs to add machine learning to business processes, but this goes beyond that to allow standards-based tools to be plugged together easily. There’s still obviously work to be done to make this a seamless integration, but the idea that it can be all standards-based is pretty significant.
FEEL, Is It Really Friendly Enough? Daniel Schmitz-Hübsch and Ulrich Striffler, Materna
Materna has a number of implementation projects (mostly German government) that involve decision automation, where the logic is modeled by business users and the decision justification must be explainable to all users for transparency of the automated decisions. They use both decision tables and FEEL — decision tables are easier for business users to understand, but can’t represent everything — and some of the early adopters are using DMN. Given that most requirements are documented by business users in natural language, there are some obstacles to moving from that initial representation to DMN.
Having the business users model the details of decisions in FEEL is the biggest issue: basically, you’re asking business people to write code in a scripting language, with the added twist that in their case the business users are not native English speakers but the FEEL keywords are in English. In my experience, it’s hard enough to get business people to create syntactically-correct visual models in BPMN; moving to a scripting language would be a daunting task, and doing that in a foreign language would make most business people’s heads explode in frustration.
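To make that concrete, here's roughly the sort of thing a business user would be asked to write: a made-up FEEL eligibility expression, shown here being evaluated with the open source kie FEEL engine so it's runnable in context. The rule itself is purely illustrative.

```java
import java.util.Map;

import org.kie.dmn.feel.FEEL;

public class FeelEligibilityExample {
    public static void main(String[] args) {
        // A made-up eligibility rule written in FEEL, English keywords and all --
        // this is the kind of expression a business user would be asked to author.
        String expression =
                "if applicant.age >= 18 and applicant.income > 30000 "
              + "then \"eligible\" else \"referred\"";

        FEEL feel = FEEL.newInstance();
        Map<String, Object> inputs = Map.of("applicant", Map.of("age", 42, "income", 55000));
        System.out.println(feel.evaluate(expression, inputs));   // prints "eligible"
    }
}
```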
They are trying some different approaches for dealing with this: allowing the users to read and write the logic in their native natural language (German), or replacing some FEEL elements (text statements) with graphical representations. They believe that this is a good starting point for a discussion on making FEEL a bit friendlier for business users, especially those whose native language is not English.
Good closing discussion on the use of different tools for different levels of people doing the modeling.
Collaborative decisions: coordinating automated and human decision-making. Alan Fish, FICO
Alan Fish presented on the coordination of decisions between automation, individuals and groups. He considered how DMN on its own isn’t enough to model these interactions, since it doesn’t allow for modeling certain characteristics: for example, partitioning decisions over time is best done with a combination of BPMN and DMN, where temporal dependencies can be represented, while combining CMMN and DMN can represent the partitioning of decisions between decision-makers.
He also looked at how to represent the partition between decisions and meta-decisions — which is not currently covered in DMN — where meta-decisions may be an analytical human activity that then determines some of the rules around how decisions are made. He defines an organization as a network of decision-making entities passing information to each other, with the minimum requirement for success based on having models of processes, case management, decisions and data. The OMG “Triple Crown” of DMN, BPMN and CMMN figures significantly in his ideas on a certain level of organizational modeling, and in the success of the organizations that embrace them as part of their overall modeling and improvement efforts.
He sees radical process reengineering as a risky operation, and posits doing process reengineering once, then constantly updating the decision models to adapt to changing conditions. An interesting discussion on organizational models and how decision management fits into larger representations of organizations. Also some good follow-on Q&A about whether to model state in decision models or leave that to the process and case models, and about the value of modeling human decisions along with automated ones.
Making the Right Decision at the Right Time: Introducing Temporal Reasoning to DMN. Denis Gagné, Trisotech
Denis Gagné covered the concepts of temporal reasoning in DMN, including a new proposal to the DMN RTF for adding temporal reasoning concepts. Temporal logic is “any system of rules and symbolism for representing, and reasoning about, propositions qualified in terms of time”, that is, representing events in terms of whether they happened sequentially or concurrently, or at what time a particular event occurred.
The proposal will be for an extension to FEEL — which already has some basic temporal constructs with date and time types — that provides a more comprehensive representation based on Allen’s interval algebra and Zaidi’s point-interval logic. This would have built-in functions regarding intervals and points, with two levels of abstraction for expressiveness and business friendliness, allowing for DMN to represent temporal relationships between points, between points and intervals, and between intervals.
The proposal also includes a more “business person common sense” interpretation of interval overlaps and other constructs: 11 of the possible interval-interval relationships collapse into this category, simplifying them into a before/after/overlap designation. Given all of these representations, plus more robust temporal functions, the standard could then allow expressions such as “interval X starts 3 days before interval Y” or “did this happen in September”.
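For reference, here's a small sketch (mine) of the basic temporal constructs that FEEL already has today, evaluated with the open source kie FEEL engine; the richer point/interval relationships and the “business common sense” interpretations described above would come from the proposed extension, not from anything shown here.

```java
import org.kie.dmn.feel.FEEL;

public class FeelTemporalToday {
    public static void main(String[] args) {
        FEEL feel = FEEL.newInstance();

        // Temporal constructs FEEL already has: dates, durations, and arithmetic on them.
        System.out.println(feel.evaluate("date(\"2019-09-17\") + duration(\"P3D\")"));    // 2019-09-20
        System.out.println(feel.evaluate("date(\"2019-09-20\") - date(\"2019-09-17\")")); // a 3-day duration
        System.out.println(feel.evaluate("date(\"2019-09-17\") < date(\"2019-09-20\")")); // true

        // "Did this happen in September?" currently means poking at date components;
        // the proposed extension targets higher-level point and interval relationships.
        System.out.println(feel.evaluate("date(\"2019-09-17\").month = 9"));              // true
    }
}
```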
This is my first time at DecisionCAMP (formerly RulesFest), and I’m totally loving it. It’s full of technology practitioners — vendors, researchers and consultants — who are more interested in discussing interesting ways to improve decision management and the DMN standard than in plugging their own products. I’m not as much of an expert in decision management as I am in process management, so there are great learning opportunities for me.
I’m finishing up a European tour of three conferences with DecisionCAMP in Bolzano, which has a focus on business rules and decision management technology. This is really a technology conference, with sessions intended to be more discussions about what’s happening with new advances rather than the business or marketing side of products. Jacob Feldman of OpenRules was kind enough to invite me to attend when he heard that I was going to be within striking distance at CamundaCon last week in Berlin, and I’ll be moderating a panel tomorrow afternoon in return.
Feldman opened the conference with an overview of operational decision services for decision-making applications, such as smart processes, and the new requirements for decision services regarding performance, security and architectural models. He sees operational decision services as breaking down into three components: business knowledge (managed by business subject matter experts), business decision models (managed by business analysts) and deployed decision services (managed by developers/devops) — the last of these is what is triggered by decision-making applications when they pass data and request a decision. There are defined standards for the business decision models (e.g., DMN) and for transferring those to execution engines as deployed services, but issues arise in standardizing how SMEs capture business knowledge and pass it on to the BAs for the creation of the decision models; definitely an area requiring more work from both standards groups and vendors.
I’ll do some blog posts that combine multiple presentations; you can see copies of most of the presentations here.
Friedbert Samland from Deutsche Telekom IT and Willm Tüting from their technology partner conology presented on Telekom IT (the internal IT provider for Deutsche Telekom) migrating from monolithic systems to a microservices architecture while also moving from waterfall to Agile development methodologies. In 2017, they had a number of significant problems with their monolithic system for wholesale orders: time to market for new features was 12+ months, lots of missing functionality that required manual steps, vendor lock-in, large (therefore risky and time-consuming) releases, and more.
They tried a variety of approaches to alleviate these problems, such as a partial Agile environment, but needed something more radical to make a difference. They identified four major drivers: microservices, cloud, SAFe (Scaled Agile Framework) and devops. I’m sure everyone in the audience was familiar with those concepts, but they went through how this actually works in a large organization like this, where it’s not always as easy as the providers say it will be. They learned a lot of lessons the hard way, such as the long learning curve of moving to cloud.
They were migrating a system built on the Oracle BPEL engine, starting by partitioning the monolith in terms of data and functionality (logic and processes) in order to identify three categories of microservices: business process microservices, data microservices, and domain-specific microservices. They balanced orchestration and choreography with a “choreographed orchestration” of the microservices, where the (Camunda) process orchestrations were embedded within the microservices for handling processes and inter-service communication. Because each microservice has its own Camunda instance with its own database (which provides a high degree of scalability), they had to enhance the monitoring to get an aggregated view of all of the process flows.
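As a rough sketch of what “a separate Camunda instance embedded in each microservice” can look like (my own illustration, not from their talk), a Spring Boot service can pull in the engine via the camunda-bpm-spring-boot-starter and point it at that service's own database through the standard Spring datasource configuration:

```java
import org.camunda.bpm.spring.boot.starter.annotation.EnableProcessApplication;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// One microservice, one embedded Camunda engine: BPMN models on this service's
// classpath are deployed to its own engine (and its own database) at startup.
@SpringBootApplication
@EnableProcessApplication
public class OrderMicroserviceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderMicroserviceApplication.class, args);
    }
}
```

Pointing each service's datasource at its own schema is what gives the per-service database isolation described above; the aggregated monitoring across all of those engines is the part they had to build themselves.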
This is a great example of a real-world large-scale implementation where a proprietary and monolithic iBPMS just would not work for the architecture that Telekom IT needed: Camunda BPM is embedded in the services, so it doesn’t presuppose a fixed orchestration at the top level of an application.
Although we’re just halfway through the last day, this was my last session at CamundaCon; I’m headed south for a short weekend break, then DecisionCamp in Bolzano next week. Thanks to the entire Camunda team for putting on a great event, and inviting me to give a keynote yesterday.
Camunda co-founder Bernd Rücker presented on some of the implementation issues with microservices, in particular following on from Susanne Kaiser’s keynote with the theme of having small delivery teams spend more of their time developing business capabilities and less on the “undifferentiated heavy lifting” infrastructure bits required to support those. This significantly reduces the cognitive load for the team, allowing them to build the best possible business capabilities without worrying about arcane configuration details. Interestingly, this is not that different from the argument to move from a business process embedded within a business system logic to an externalized process in a BPMS — something that Bernd has a long history with.
He went through an example of the services behind a train ticket booking, which requires payment, seat reservation and ticket generation services; there are issues of latency and uptime as well as the user experience of how the results of those services are presented to the customer. He referenced the Reactive Manifesto as a guideline for software design patterns that are “more robust, more resilient, more flexible and better positioned to meet modern demands”.
Event-driven choreography is a common pattern these days, but has the problem of not being able to visualize the overall process flow between services. This can be alleviated somewhat by using event monitoring overlaid on a process model — effectively process discovery if the flow is not standardized or when it changes — or even more so by orchestrating standard parts of the flow to combine event-driven and orchestration patterns. Orchestration has the advantage of relocating the coupling between services in an event-driven flow to the orchestration layer: although event choreography is seen as loosely-coupled, there’s a lot of event listening that has to be built into the services, which couples them more closely. It’s not that one is good and the other bad: there’s a place for both choreography and orchestration patterns in software development.
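To illustrate the orchestration half of that combination, here's a minimal sketch (mine, not from Bernd's talk) of a Camunda external task worker: the service exposes a single capability to the orchestrating process instead of listening for other services' domain events, so the coupling lives in the process model rather than in the service. The topic name, REST endpoint and process variable are all assumptions.

```java
import org.camunda.bpm.client.ExternalTaskClient;

public class PaymentWorker {
    public static void main(String[] args) {
        // The payment service doesn't listen for other services' events; it offers one
        // capability ("charge-payment") that the orchestrating process model invokes.
        ExternalTaskClient client = ExternalTaskClient.create()
                .baseUrl("http://localhost:8080/engine-rest")   // assumed engine REST endpoint
                .asyncResponseTimeout(10_000)
                .build();

        client.subscribe("charge-payment")
              .lockDuration(5_000)
              .handler((task, taskService) -> {
                  String orderId = task.getVariable("orderId"); // hypothetical process variable
                  // ... call the actual payment logic for orderId here ...
                  taskService.complete(task);
              })
              .open();
    }
}
```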
He finished with a discussion of monolithic legacy software and how to deal with it: from the initial step of just adding APIs to access functionality, you gradually chip away at the monolith’s capabilities, ripping them out and replacing them with externalized services.
Susanne Kaiser, former CTO of Just Social and now an independent technology consultant, opened the second day of CamundaCon 2019 with a presentation on moving to a microservices architecture, particularly for a small team. Similar to the message in my presentation yesterday, she advises building the processes that are your competitive differentiator, then outsourcing the rest to external vendors.
She walked through some of the things to consider when designing microservices, such as the ideas of bounded context, local data persistence, API discovery and management, linkage with message brokers, and more. There’s a lot of infrastructure complexities in building a single microservice, which makes it particularly challenging for small teams/companies — that’s part of what drives her recommendation to outsource anything that’s not a competitive differentiator.
She showed the use of Wardley maps for the evolution of a value chain, showing how components are mapped relative to their visibility to users and their level of maturity. Components of a solution/system are first identified by their visibility, usually in a top-down manner based on the functional requirements. They are then plotted along the evolution axis, to identify which will be custom-built versus those that are third-party products or outsourced commodities/utilities. This includes identifying all of the infrastructure to support those components; initially, this may include a lot of the infrastructure components as requiring custom build, but use of third-party products (including open source) can shift many of these components along the evolution axis.
She then showed how Camunda BPM would fit into this map of the solution, and how it can abstract away some of the activities and components that were previously explicit. In short, Camunda BPM is a higher-level piece of infrastructure that can handle service orchestration including complexities of retries and more. I haven’t worked with Wardley Maps previously, and there are definitely some good concepts in here for helping to identify buy versus build components in a technical architecture.
Derek Vandivere of ING Netherlands finished up the first day of CamundaCon 2019 here in Berlin talking about how they moved from a regional to a global platform — strange, because he was actually talking about their Pega implementation although they’re also implementing Camunda — and how to work around the monoliths. I know that Derek’s wife is an art restorer, and this has obviously rubbed off on him, since all of his slides were photos of Dutch Masters paintings that were (however peripherally) related to his subject matter.
Different ING regional operations selected different BPM engines: the Netherlands went with Pega, while Germany went with Camunda, with other areas building their own thing or using legacy TIBCO. They’re attempting to build some standards around how people talk about BPM and case management internally, as well as how applications are developed. As a global bank, they need to have some data, rules and processes that span countries, making it necessary to consider how to bring all of these disparate systems together.
He went through a number of the best practices and lessons learned that they discovered along the way as they rolled out a regional solution globally. Although his experience with the Dutch implementation was based on Pega, there are many transferrable lessons here, since a lot of it is about higher-level architecture, bottlenecks in processes and decision-making (often human bottlenecks, although some technical as well), and how to interact with the business areas.
He discussed the current pressures on their monolithic iBPMS (Pega) platform, which echoed some of what I talked about this morning: proprietary developer training, container-based microservices architectures, and multiple distributed deployment models (support for both cloud and regionally-mandated on-premise). Replacing or upgrading any sufficiently complex IT system is going to be a challenging task, but doing that with a monolithic iBPMS is considerably more challenging than with a more distributed microservices architecture.
We’re about to spill out onto the Spree-side patio here at Radialsystem V for a BBQ and well-deserved refreshments, so that’s it for my coverage of this first day at CamundaCon 2019. I’ll be back for a couple of sessions tomorrow morning before heading south to Bolzano for next week’s DecisionCamp.