DecisionCAMP 2019: the evolving DMN standard and the quality of decision models

DMN 2.0? Gary Hallmark, Oracle

Gary Hallmark presented on the next major version of DMN that’s in the works, starting with a timeline of what’s happened so far since the original RFP in 2011 up to the expected 1.3 release later this year. He added the question mark to his title because whether to issue a major (fix the mistakes of the past) or minor (patch and live with the mistakes of the past) release is still under consideration. He went through the top 10 requested features, half of which can be backward-compatible with DMN 1.x (i.e., DMN 1.x models can be ingested and executed in the new version) and half of which can’t.

He mentioned the issue of harmonization of DMN, BPMN and CMMN, a topic that I am especially interested in, and plan to ask the vendors about later this afternoon when I am moderating the panel. That can include a common item definition model that is used by all three notations, tighter integration of DMN with BPMN gateways and CMMN sentries, and easier reference and interchange between the model types. This could also include the use of FEEL and decision tables for expressions and logic in BPMN and CMMN, such as at gateways and in data associations. We already have the problem of keeping a collection of different model types sorted out, and there may need to be a “model of models” concept to tie these together.

Concept for how to use a DMN-style decision table directly in a BPMN gateway to model the logic. From Gary Hallmark’s presentation.

Better model validation is another request for the next version of DMN; I don’t have a lot of experience with DMN, but judging by the comments here, the “null” returned for all types of errors is definitely a touchy issue. This could be improved with a “required” property in item definitions and model validation with respect to the item definitions. Additional datatypes would also be useful, such as integers and some types of ranges. There are suggestions for better ways to deal with iteration and recursion in a new version of DMN, some of which are already being done by vendors such as Oracle in their products to make it easier for business analysts to understand and model.
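As a rough illustration of why explicit validation would be friendlier than a silent null, here is a minimal Python sketch (my own, not from the presentation or from any DMN engine) of checking input data against a simple item definition that carries a "required" property and a datatype:

```python
# Hypothetical sketch: validate input data against a simple item definition
# (with a "required" flag and a datatype) before evaluating a decision, so
# that bad inputs produce an explicit error instead of a silent null.

ITEM_DEFINITION = {
    "applicant_age": {"type": int, "required": True},
    "annual_income": {"type": (int, float), "required": True},
    "employer_name": {"type": str, "required": False},
}

def validate(inputs, item_definition):
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    for name, spec in item_definition.items():
        if name not in inputs or inputs[name] is None:
            if spec["required"]:
                errors.append(f"missing required input: {name}")
            continue
        if not isinstance(inputs[name], spec["type"]):
            errors.append(f"wrong type for {name}: expected {spec['type']}")
    return errors

def decide(inputs):
    errors = validate(inputs, ITEM_DEFINITION)
    if errors:
        # An explicit error report is far easier to debug than a bare null.
        raise ValueError("; ".join(errors))
    return "approved" if inputs["annual_income"] > 40000 else "referred"

print(decide({"applicant_age": 35, "annual_income": 52000}))   # approved
# decide({"applicant_age": 35})  # would raise: missing required input: annual_income
```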

Something that seems simple but would break compatibility is moving to case-insensitive names (indicating just how IT-centric the original, and possibly the current, committee was), as well as handling some things such as single-quoted strings unambiguously. Moving to XPath-like sequences instead of the current lists also wouldn’t be compatible, nor would many types of recursion and cyclic information requirements.

As mentioned earlier, half of the top requested features could be done in 1.x because they don’t break compatibility; one option is to implement those in a 1.x version and leave the others for now. The alternative is to start the DMN 2.x RFP process, a much larger undertaking that will delay the implementation of those features but will open the door (or Pandora’s box) to a completely new version of DMN. Lots of great discussion at the end of this presentation: many of the people in the room are active contributors to the standard and/or vendors who implement the standard, so they definitely have both knowledge and opinions on the subject.

Quality of Decision Models. Jan Vanthienen, KU Leuven

Jan Vanthienen presented on measuring the quality of decision models, starting with notions of information quality that lead to measures such as complexity and traceability. For example, you can look at the complexity of a DRD based on the number of decisions, the number of elements and the cyclomatic complexity; the complexity of a decision table can be based on hit policy usage and the total number of input variables.

Consistency and interpretability in decision tables can be measured by a unique hit table (no overlapping rules) and a natural order for easy visual reading of the rules — it is more important to be correct and consistent than compact.

There’s a contextual quality factor when we look at a decision model that is related to a process model, where the connections between the two models can be fairly complex. He presented a set of guidelines for integrating processes and decisions, including avoiding embedding decisions in gateways: something that happens all the time in my experience with process modeling.

Integration between a process model and a decision model. From Jan Vanthienen’s presentation.

He covered some ideas on decision modeling methodology for creating the models of highest quality: usually this will involve working back and forth between the DRD and the decision tables, rather than trying to do a pure top-down or bottom-up approach. There’s a lot of past research that covers many of the issues of creating quality models, most of which predates BPMN and DMN, but the same principles still apply. The DMN and BPMN standards embody some of these principles, such as separating decision and process logic.

DecisionCAMP 2019: Modeling helpers for business analysts, and extending DMN to include performance metrics

Business Rules — Focus on “Business”. Guilhem Molines, IBM

Guilhem Molines presented on what business analysts can do in the creation of automated decisions before developers need to get involved (or take over), and what the modeling tools can provide to help them get further in the process independently.

Clearly, business analysts can model business decisions at the level of a decision requirements diagram (e.g., DMN DRD) that shows the entities and information required to make a decision. Then, the data and decision models required for implementation can be created by the business analyst or co-authored together with a developer. IBM’s ODM tool has authoring assistance such as smart predictions to guide an analyst while they are modeling, plus suggestions to define terms in the model as they are used in a rule definition.

In authoring decision tables, assistance may include Excel-like functions for copy and paste or smart drag to extend a range; as a table is being defined, there can be guidance to check for gaps and overlaps in the parameter ranges. Being able to do instant validation for a decision table by entering data values and seeing the calculation result (without deploying the rule) builds confidence in the logic implementation.
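To make the gap/overlap check concrete, here is a small Python sketch (my own illustration, not ODM’s actual algorithm) for a single numeric input column whose rules are expressed as half-open ranges:

```python
# Illustrative sketch: detect gaps and overlaps in the ranges of a single
# decision-table input column. Ranges are half-open intervals [low, high).

def check_ranges(ranges, domain_min, domain_max):
    """Return (gaps, overlaps) after sorting the rules by their lower bound."""
    gaps, overlaps = [], []
    ordered = sorted(ranges)
    if ordered[0][0] > domain_min:                 # uncovered space at the start
        gaps.append((domain_min, ordered[0][0]))
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        if lo2 < hi1:                              # next rule starts before this one ends
            overlaps.append((lo2, min(hi1, hi2)))
        elif lo2 > hi1:                            # uncovered space between rules
            gaps.append((hi1, lo2))
    if ordered[-1][1] < domain_max:                # uncovered space at the end
        gaps.append((ordered[-1][1], domain_max))
    return gaps, overlaps

# Rules on a hypothetical "credit score" column that should cover 300-850:
rules = [(300, 580), (580, 670), (660, 740), (760, 850)]
gaps, overlaps = check_ranges(rules, 300, 850)
print("gaps:", gaps)          # [(740, 760)]
print("overlaps:", overlaps)  # [(660, 670)]
```

A real checker would handle multiple input columns, open/closed endpoints and non-numeric types, but the principle is the same: sort the rules and look for uncovered or doubly-covered regions.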

A more comprehensive “unit testing” approach allows the analyst to provide a number of input parameter sets in a spreadsheet or similar tabular form and see the results. A further step in testing decisions is simulation based on a large quantity of production data, followed by integration testing in a full test environment before promotion to production.
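A tiny sketch of what that kind of tabular testing might look like, assuming a made-up dispute-handling decision (this is illustrative only, not IBM ODM’s test format):

```python
# Sketch of table-driven testing for a decision: each row supplies an input
# parameter set plus the expected outcome, much like a spreadsheet that a
# business analyst might maintain. The decision logic itself is a stand-in.

def dispute_decision(amount, customer_risk):
    if customer_risk == "high" or amount > 1000:
        return "manual review"
    return "auto-refund"

TEST_CASES = [
    {"amount": 50,   "customer_risk": "low",  "expected": "auto-refund"},
    {"amount": 2500, "customer_risk": "low",  "expected": "manual review"},
    {"amount": 50,   "customer_risk": "high", "expected": "manual review"},
]

for case in TEST_CASES:
    actual = dispute_decision(case["amount"], case["customer_risk"])
    status = "PASS" if actual == case["expected"] else "FAIL"
    print(f"{status}: {case['amount']}, {case['customer_risk']} -> {actual}")
```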

Much of this presentation was based on IBM ODM capabilities, although there were some good ideas here for any modeling environment.

Combining Decision Models for Better Decision Management. Fernando Donati Jorge, FICO

Fernando Donati Jorge presented on whether we have the right decision model to solve mysteries in addition to puzzles — an interesting distinction, where puzzles can be solved given the right information, while mysteries may not have a well-defined answer and can depend on future interactions.

The challenge with mysteries is that there is too much data but no indication of the most relevant bits; they are full of uncertainties; and they depend on future known and unknown interactions. Decision models can properly contextualize data, but don’t measure relevance. Predictive analytics can quantify uncertainties, and decision models can contextualize the use of different types of knowledge models. An analytic decision model (a Gartner term, which does not include DMN decision models) can represent how different decision outcomes can lead to different future interactions.

Typically, there are two separate types of decision models: one that models the input data and business knowledge model that results in a decision (a DMN model), and the other that models how a decision impacts business performance. If you want to combine these into a single model, an extension to DMN is required to be able to model performance metrics, which in turn have models, decisions and data as inputs.

Modeling performance metrics with a DMN extension. From Fernando Donati Jorge’s presentation.

This DMN extension is available in the FICO Analytic Cloud for adding KPIs to decision-making. Good discussion about how they handle latency for calculating the performance metrics, and the issue of metrics aggregation over multiple decision instances. They don’t currently allow a performance metric to inform (provide input to) a decision, but that’s obviously open for future discussion.
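As a rough illustration of the aggregation point, a performance metric such as approval rate is computed across many decision instances rather than within any single one; the sketch below is my own, not FICO’s implementation:

```python
# Sketch: aggregate a performance metric (KPI) over many decision instances.
# In the extension described, the metric would be modeled explicitly with
# decisions and data as its inputs; here it is just a Python aggregation.

DECISION_INSTANCES = [
    {"application_id": 1, "decision": "approve"},
    {"application_id": 2, "decision": "decline"},
    {"application_id": 3, "decision": "approve"},
    {"application_id": 4, "decision": "approve"},
]

def approval_rate(instances):
    approved = sum(1 for i in instances if i["decision"] == "approve")
    return approved / len(instances)

print(f"approval rate: {approval_rate(DECISION_INSTANCES):.0%}")   # 75%
```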

Both of the presentations in this section looked at how vendors are adding value to their DMN-based modelers: IBM in terms of interactive assistance to help modelers create better models, and FICO in extending DMN to include performance metrics directly in a decision model.

DecisionCAMP 2019: AI and DM trends, and the future of rules

Trends in Enterprise AI and Digital Decisions. Mike Gualtieri, Forrester

Day 2 of DecisionCAMP 2019 in beautiful Bolzano started out with Mike Gualtieri giving the Forrester view of trends in the market around AI and automated decisions. This was a typical analyst presentation — sorry, no notes — presented as part of the larger BRAIN 2019 (Bolzano Rules and Artificial INtelligence Summit) of which DecisionCAMP is a part.

Brainstorming Next-Generation Rule Platforms. Ron Ross, Business Rules Solutions

Ron Ross presented on the current state of business rules and opportunities moving forward. To start, we have made progress in this area — DMN for one thing is an amazing leap forward — but business rules are not yet universally accepted and adopted within organizations despite the provable benefits.

One opportunity for business rules tools is to reduce developer workload, and to reduce rule programming errors. In alignment with the Semantics Of Business Vocabulary and Rules (SBVR) standard, there are two types of rules: definitional rules and behavioral rules. Definitional rules may be incorrect or misapplied, but they can’t be directly violated since they are evaluated in the course of a process; declarative behavioral rules, on the other hand, require a “watcher” to track other events that may cause the state of another process or transaction to change. If implemented properly, behavioral rules can reduce developer workload since the event-driven watcher updates state constantly based on these rules firing. DMN does not allow modeling of these types of decisions, since there needs to be more awareness of state as well as the events that may cause it to change; there is no concept of a watcher daemon that can constantly evaluate rules and update state.
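To make the watcher idea concrete, here is a conceptual Python sketch (my own, not from the presentation) of behavioral rules being re-evaluated whenever an event changes state:

```python
# Conceptual sketch of the "watcher" idea for behavioral rules: rules are
# re-checked whenever an event changes state, and violations are flagged
# rather than silently ignored.

class Watcher:
    def __init__(self):
        self.state = {}
        self.rules = []   # list of (description, predicate) pairs

    def add_rule(self, description, predicate):
        self.rules.append((description, predicate))

    def on_event(self, key, value):
        """Apply a state change, then re-check every behavioral rule."""
        self.state[key] = value
        for description, predicate in self.rules:
            if not predicate(self.state):
                print(f"VIOLATION after {key}={value}: {description}")

watcher = Watcher()
watcher.add_rule(
    "an order must not ship before payment is received",
    lambda s: not (s.get("shipped") and not s.get("paid")),
)
watcher.on_event("shipped", True)   # flags a violation: shipped but not paid
watcher.on_event("paid", True)      # state is now consistent again
```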

There is also a need to better address sentiment and human discretion in rules. With behavioral rules that are enforced by humans, there are levels of enforcement; these nuances are not captured in most rules/decision systems.

Enforcement levels for behavioral rules. From Ron Ross’ presentation.

Rules tools also need to tie in more directly with business governance in order to enforce regulatory and other rules under which an organization needs to operate. Many of these are behavioral rules, which are not handled adequately by DMN and most decision management systems due to the lack of an event-driven watcher; there is also a gap caused by the lack of natural language support in defining executable rules.

DecisionCAMP 2019: Decisions on demand in the mortgage industry, and combining DMN and IDP

Beyond Decision Models – Using Technical and Business Standards to Transform Financial Services. Brian Stucky, Quicken Loans

Although he recently joined Quicken Loans as a senior enterprise architect, Brian Stucky is also involved in the MISMO mortgage standards organization, which was the focus of his presentation. Earlier this year, MISMO recommended DMN as an official standard for documenting, implementing, exchanging and executing decision models in the mortgage industry; they are also working on officially recognizing BPMN. The idea is to create a DMN data structure based on the existing MISMO XSD to allow these mortgage-related decision models to be shared, but the industry is still rife with paper-based processes and legacy systems that hinder adoption.

There’s a history of business rules in the mortgage industry, but it didn’t really allow for business control of the rules, didn’t have the agility required, and was expensive. DMN is changing that game — especially with decisions as a service instead of on-premises systems — and allowing mortgage companies to better meet some of the new regulations such as Ability-To-Repay, where the written government regulation can be translated into a standard DMN model to ensure that all parties are using the same evaluation criteria. They’ve shown that analyzing and modifying the DMN model for a specific rule change can take as little as a couple of hours, which is a huge push for moving from the MISMO XSD to the DMN model.

Ability-To-Repay DMN model. From Brian Stucky’s presentation.

In the future, this could mean that DMN plus the MISMO data model could be used directly to disseminate a regulatory rule change, rather than the 800-page text document used now. That brings up other issues, such as versioning of the model or even of DMN, and engine compliance in executing the DMN models as distributed. A better way to do it may be to roll out the model as a service with an open API, where every mortgage provider uses the same decision service; this guarantees that it will be evaluated identically everywhere. The ultimate goal may be a digital mortgage, potentially using blockchain to ensure the chain of events in this smart contract.
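As a purely hypothetical sketch of what such a shared decision service could look like to a lender’s system: the endpoint, URL and payload fields below are invented for illustration and are not part of any published MISMO or regulator API.

```python
# Hypothetical sketch of calling a shared "Ability-To-Repay" decision service.
# The URL and payload fields are invented for illustration only.
import json
import urllib.request

APPLICATION = {
    "monthlyIncome": 7200,
    "monthlyDebtPayments": 2100,
    "loanAmount": 320000,
    "termMonths": 360,
}

def check_ability_to_repay(application):
    request = urllib.request.Request(
        "https://decisions.example.com/ability-to-repay/v1/evaluate",  # placeholder URL
        data=json.dumps(application).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # Every lender calling the same service gets an identical evaluation.
        return json.load(response)

# result = check_ability_to_repay(APPLICATION)
# print(result)
```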

Meeting the Expectations with DMN and Constraint Solving: The Notary Case. Marjolein Deryck, KU Leuven

Marjolein Deryck presented on research in decision modeling and knowledge representation, and how she applied it to property registration taxes in Belgium, which are typically calculated by a notary. The use case was to support a notary when performing these calculations, using DMN for collaborative analysis with the notary and an executable prototype, then IDP to go beyond DMN’s capabilities with a constraint-based approach.

An interesting requirement was that the support application be non-intrusive: the notary felt that if he spent too much time typing on a computer while figuring this out with a client, he would be seen as less of an expert. A tablet-based app with minimal data entry, which interactively presents the next best question rather than following a fixed script, was seen as essential.

In her initial evaluation, DMN was seen as lacking script interactivity/adaptability (although I saw a really interesting way to use DMN and BPMN to resolve this last week at CamundaCon), and she instead considered IDP as a more powerful implementation. This provided a better solution, although the models are less understandable to the notaries, and enhancements were required to provide an explanation of a specific calculation.

IDP configuration for property tax calculations. From Marjolein Deryck’s presentation.

The lessons learned included the use of DMN as an intermediate model — for gathering and analyzing requirements together with the business user — as well as how to combine DMN and IDP in a project.

Panel: DMN and Beyond

We closed off day 1 of DecisionCAMP 2019 with a panel that included Mike Gualtieri, Alan Fish, Jan Vanthienen, Jan Purchase, Gary Hallmark and Brian Stucky, moderated by Jacob Feldman.

A few points that came up during the panel (unattributed to the specific speaker):

  • Many buyers of decision management systems don’t know enough about DMN to evaluate it or even ask for it.
  • DMN still falls short in complex representations, although it works well to represent static hierarchical information from decision tables. It has the potential to include other representations and other models such as machine learning. Making it more powerful could, however, cause DMN to lose the simplicity that makes it more likely to be adopted.
  • DMN is difficult to debug, making it hard to figure out logic flaws.
  • The diagram/graph level of DMN is very understandable to business users/analysts, but by the time you’re doing more complex nested expression logic at the FEEL execution level, you’ve lost most of them.
  • Highly-regulated industries such as lending, where rules are already documented in spreadsheets, are a good target for DMN implementation.
  • Being able to follow the execution path is not the same as an explanation of the decision logic. The DMN standard includes support for remarks/annotations to improve explainability but that may not be sufficient.
  • Knowledge sources in DMN models have no programmatic representation, putting the onus on the modeler to ensure explainability and traceability.
  • Ethics are important to decision management in terms of decision fairness and consistency. DMN model-based decision making can improve that as long as the models are based on the right rules and data.
  • There’s a need to be able to integrate DMN and machine learning while still providing decision explainability.
  • Models are always fit for purpose: there is no all-encompassing model that is suitable for everything. As an aside, that’s definitely true in the BPMN realm too.

That’s it for day 1; I’m off to find a gelato and an Aperol Spritz on this warm evening in Bolzano.

DecisionCAMP 2019: Serverless DROOLS and the Digital Engineer

How and Why I Turned a Rule Engine into a First-Class Serverless Component. Mario Fusco and Matteo Mortari, Red Hat

Mario Fusco, who heads up the Drools project within Red Hat, presented on modernization of the Drools architecture to support serverless execution, using GraalVM and Quarkus. He discussed Kogito, a cloud-native, open source business automation project that uses Red Hat process and decision management along with Quarkus.

I’m not a Java developer and likely did not appreciate many of the details in the presentation, hence the short post. You can check out his slides here.

Combining DMN, First Order Logic and Machine Learning: The creation of Saint-Gobain Seals’ Digital Engineer. Nicholas Decleyre, Saint-Gobain and Bram Aerts, KU Leuven

The seals design and manufacturing unit of Saint-Gobain had the goal of creating a “digital engineer” to capture knowledge, with the intent to standardize global production processes, reduce costs and time to market, and aid in training new engineers. They created an engineering automation tool to automatically generate solutions for standard designs, and an engineering support tool to provide information and other support to engineers while they are working on a solution.

Engineering automation and support systems at Saint-Gobain Seals. From Nicholas Decleyre and Bram Aerts’ presentation.

Automation for known solutions is fairly straightforward in execution: given the input specifications, determine a standard seal that can be used as a solution. This required quite a bit of knowledge elicitation from design engineers and management, which could then be represented in decision tables and FEEL for readability by the domain experts. It’s not only the solution selection that is automated, however: the system also generates a bill of materials and pricing details.

The engineering support system is for when the solution is not known: a design engineer uses the support system to experiment on possible solutions and compare designs. This required building a knowledge base in first-order logic to define physical constraints and preferences, represented in IDP, then allowing the system to make recommendations about a partial or complete solution or set of solutions. They built a standalone tool for engineers to use this system, presenting a set of design constraints for the engineer to apply to narrow down the possible solutions. They compared the merits of DMN versus IDP representations, where DMN is easier to model and understand, but has limitations in what it can represent as well as being more cumbersome to maintain. At RuleML yesterday, they presented a proposal for extending DMN to better represent constraints.
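As a much-simplified sketch of the constraint-narrowing idea (my own illustration with invented attribute names, not the IDP knowledge base), the engineer applies constraints one at a time and the candidate designs shrink accordingly:

```python
# Simplified sketch of constraint-based narrowing: the engineer applies
# constraints one at a time, and the candidate designs are filtered down.
# Seal attributes and values are invented for illustration.

CANDIDATE_SEALS = [
    {"id": "S-01", "material": "PTFE", "max_temp_c": 260, "max_pressure_bar": 50},
    {"id": "S-02", "material": "FKM",  "max_temp_c": 200, "max_pressure_bar": 100},
    {"id": "S-03", "material": "EPDM", "max_temp_c": 150, "max_pressure_bar": 40},
]

def narrow(candidates, constraint):
    """Keep only the designs that satisfy the given constraint."""
    return [c for c in candidates if constraint(c)]

remaining = CANDIDATE_SEALS
remaining = narrow(remaining, lambda c: c["max_temp_c"] >= 180)       # operating temperature
remaining = narrow(remaining, lambda c: c["max_pressure_bar"] >= 60)  # operating pressure
print([c["id"] for c in remaining])   # ['S-02']
```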

They finished up talking about potential applications of machine learning on the design database: searching for “similar” existing solutions, learning new constraints, and checking data consistency. They have several automated engineering tools in development, with one in testing and one in production. Their engineering support tool has working core functionality, although they need to expand the knowledge base and prototype the UI. On the ML work, they are expecting to have a prototype by the end of this year.

DecisionCAMP 2019: Standards-based machine learning and the friendliness of FEEL

Machine Learning and Decision Management: A standards-based approach. Edson Tirelli and Matteo Mortari, Red Hat

DecisionCAMP Day 1 morning sessions continue with Edson Tirelli and Matteo Mortari presenting on the integration of machine learning and decision management to address predictive decision automation. The problem to date is that integrating machine learning into business automation (either process or decision) has required proprietary interfaces and APIs, although there is an existing standard (PMML, Predictive Model Markup Language) for specifying and exchanging many types of executable machine learning models. The entry of the DMN standard provides a potential bridge between PMML and both BPMN and CMMN, allowing for an end-to-end standards-based representation for cases, processes, decisions and predictive models.

Linking business automation and machine learning with standards. From Edson Tirelli’s presentation.

They gave a demo of how they have implemented this using Red Hat decision and process engines along with the open source tools Prometheus and Grafana, with a credit card dispute use case that uses BPMN, DMN and PMML to model the process and decisions. They started with a standard use of BPMN and DMN, where the DMN decision tables and graphs calculate the risk factors of the dispute and the customer, and make a decision on whether or not the dispute process can be automated. They added a predictive model for better calculation of the risk factors, positioning this in the DMN DRD as a business knowledge model that can then drive the decision model instead of a hard-coded decision table.
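The pattern is easy to sketch in a few lines of Python; this is a conceptual illustration of the demo’s architecture, not the actual Red Hat or Trisotech implementation, and the model coefficients are made up:

```python
# Conceptual sketch of the pattern in the demo: a predictive model supplies a
# risk score that a downstream decision uses instead of a hard-coded table.
import math

def predicted_dispute_risk(amount, prior_disputes):
    """Stand-in for a PMML model: a toy logistic score between 0 and 1."""
    z = 0.002 * amount + 0.8 * prior_disputes - 3.0   # made-up coefficients
    return 1.0 / (1.0 + math.exp(-z))

def dispute_handling(amount, prior_disputes):
    """DMN-style decision: automate low-risk disputes, escalate the rest."""
    risk = predicted_dispute_risk(amount, prior_disputes)
    return "automatic refund" if risk < 0.3 else "manual review"

print(dispute_handling(amount=120, prior_disputes=0))   # automatic refund
print(dispute_handling(amount=900, prior_disputes=3))   # manual review
```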

They finished their demo by importing the same PMML and DMN models in the Trisotech modeler to show interoperability of the integrated model types, with the predictive models providing knowledge sources for the decision models.

Coming from the process side, this is really exciting: we’re already seeing a lot of proprietary plug-ins and APIs to add machine learning to business processes, but this goes beyond that to allow standards-based tools to be plugged together easily. There’s still obviously work to be done to make this a seamless integration, but the idea that it can be all standards-based is pretty significant.

FEEL, Is It Really Friendly Enough? Daniel Schmitz-Hübsch and Ulrich Striffler, Materna

Materna has a number of implementation projects (mostly German government) that involve decision automation, where the logic is modeled by business users and the justification for each decision must be explainable to all users for transparency of decision automation. They use both decision tables and FEEL — decision tables are easier for business users to understand, but can’t represent everything — and some of the early adopters are using DMN. Given that most requirements are documented by business users in natural language, there are some obstacles to moving that initial representation to DMN instead.

Having the business users model the details of decisions in FEEL is the biggest issue: basically, you’re asking business people to write code in a script language, with the added twist that in their case, the business users are not native English speakers but the FEEL keywords are in English. In my experience, it’s hard enough to get business people to create syntactically-correct visual models in BPMN; moving to a scripting language would be a daunting task, and doing that in a foreign language would make most business people’s heads explode in frustration.

They are trying some different approaches for dealing with this: allowing the users to read and write the logic in their native natural language (German), or replacing some FEEL elements (text statements) with graphical representations. They believe that this is a good starting point for a discussion on making FEEL a bit friendlier for business users, especially those whose native language is not English.

Graphical representation of FEEL elements. From Daniel Schmitz-Hübsch and Ulrich Striffler’s presentation.

Good closing discussion on the use of different tools for different levels of people doing the modeling.

DecisionCAMP 2019: collaborative decision making and temporal reasoning in DMN

Collaborative decisions: coordinating automated and human decision-making. Alan Fish, FICO

Alan Fish presented on the coordination of decisions between automation, individuals and groups. He considered how DMN isn’t enough to model these interactions, since it doesn’t allow for modeling certain characteristics; for example, partitioning decisions over time is best done with a combination of BPMN and DMN, where temporal dependencies can be represented, while combining CMMN and DMN can represent the partitioning of decisions between decision-makers.

Partitioning decisions over time, modeled with BPMN and DMN. From Alan Fish’s presentation.

He also looked at how to represent the partition between decisions and meta-decisions — which is not currently covered in DMN — where meta-decisions may be an analytical human activity that then determines some of the rules around how decisions are made. He defines an organization as a network of decision-making entities passing information to each other, with the minimum requirement for success based on having models of processes, case management, decisions and data. The OMG “Triple Crown” of DMN, BPMN and CMMN figures significantly in his ideas on a certain level of organizational modeling, and in the success of the organizations that embrace them as part of their overall modeling and improvement efforts.

He sees radical process reengineering as a risky operation, and posits doing process reengineering once, then constantly updating decision models to adapt to changing conditions. An interesting discussion on organizational models and how decision management fits into larger representations of organizations. Also some good follow-on Q&A about whether to consider modeling state in decision models, or leaving that to the process and case models; and about the value of modeling human decisions along with automated ones.

Making the Right Decision at the Right Time: Introducing Temporal Reasoning to DMN. Denis Gagné, Trisotech

Denis Gagné covered the concepts of temporal reasoning in DMN, including a new proposal to the DMN RTF for adding temporal reasoning concepts. Temporal logic is “any system of rules and symbolism for representing, and reasoning about, propositions qualified in terms of time”, that is, representing events in terms of whether they happened sequentially or concurrently, or what time that a particular event occurred.

The proposal will be for an extension to FEEL — which already has some basic temporal constructs with date and time types — that provides a more comprehensive representation based on Allen’s interval algebra and Zaidi’s point-interval logic. This would have built-in functions regarding intervals and points, with two levels of abstraction for expressiveness and business friendliness, allowing for DMN to represent temporal relationships between points, between points and intervals, and between intervals.

Proposed DMN syntax for temporal relationships. From Denis Gagné‘s presentation.

The proposal also includes a more “business person common sense” interpretation for interval overlaps and other constructs: note that 11 of the possible interval-interval relationships fall into this category, which makes this into a simpler before/after/overlap designation. Given all of these representations, plus more robust temporal functions, the standard can then allow expressions such as “interval X starts 3 days before interval Y” or “did this happen in September”.
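To ground the interval idea, here is a toy Python sketch of a few of Allen’s interval relations; this is my own illustration, not the proposed FEEL syntax:

```python
# Toy sketch of a few of Allen's interval relations over date intervals.
from datetime import date, timedelta

def before(x, y):   return x[1] < y[0]             # X ends before Y starts
def meets(x, y):    return x[1] == y[0]            # X ends exactly when Y starts
def overlaps(x, y): return x[0] < y[0] < x[1] < y[1]
def during(x, y):   return y[0] < x[0] and x[1] < y[1]

promo    = (date(2019, 9, 7),  date(2019, 9, 15))
campaign = (date(2019, 9, 10), date(2019, 9, 30))

print(overlaps(promo, campaign))                    # True
print(before(promo, campaign))                      # False
# "interval X starts 3 days before interval Y":
print(campaign[0] - promo[0] == timedelta(days=3))  # True
```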

This is my first time at DecisionCAMP (formerly RulesFest), and I’m totally loving it. It’s full of technology practitioners — vendors, researchers and consultants — who are more interested in discussing interesting ways to improve decision management and the DMN standard than in plugging their own products. I’m not as much of a decision management expert as I am in process management, so there are great learning opportunities for me.

DecisionCAMP 2019 kicks off – business rules and decision management technology conference

I’m finishing up a European tour of three conferences with DecisionCAMP in Bolzano, which has a focus on business rules and decision management technology. This is really a technology conference, with sessions intended to be more discussions about what’s happening with new advances rather than the business or marketing side of products. Jacob Feldman of OpenRules was kind enough to invite me to attend when he heard that I was going to be within striking distance at CamundaCon last week in Berlin, and I’ll be moderating a panel tomorrow afternoon in return.

Feldman opened the conference with an overview of operational decision services for decision-making applications, such as smart processes, and the new requirements for decision services regarding performance, security and architectural models. He sees operational decision services as breaking down into three components: business knowledge (managed by business subject matter experts), business decision models (managed by business analysts) and deployed decision services (managed by developers/devops) — the last of these is what is triggered by decision-making applications when they pass data and request a decision. There are defined standards for the business decision models (e.g., DMN) and for transferring those to execution engines for the deployed services, but issues arise in standardizing how SMEs capture business knowledge and pass it on to the BAs for the creation of the decision models; definitely an area requiring more work from both standards groups and vendors.

I’ll do some blog posts that combine multiple presentations; you can see copies of most of the presentations here.

CamundaCon 2019: Monolith to microservices at Deutsche Telekom

Friedbert Samland from Deutsche Telekom IT and Willm Tüting from their technology partner conology presented on Telekom IT (the internal IT provider for Deutsche Telekom) migrating from monolithic systems to a microservices architecture while also moving from waterfall to Agile development methodologies. In 2017, they had a number of significant problems with their monolithic system for wholesale orders: time to market for new features was 12+ months, lots of missing functionality that required manual steps, vendor lock-in, large (therefore risky and time-consuming) releases, and more.

Willm Tüting and Friedbert Samland presenting on the problems with Telekom IT’s monolithic wholesale ordering system

They tried a variety of approaches to alleviate these problems, such as a partial Agile environment, but needed something more radical to make a difference. They identified four major drivers: microservices, cloud, SAFe (Scaled Agile Framework) and devops. I’m sure everyone in the audience was familiar with those concepts, but they went through how this actually works in a large organization like this, where it’s not always as easy as the providers say it will be. They learned a lot of lessons the hard way, such as the long learning curve of moving to cloud.

They were migrating a system built on the Oracle BPEL engine, starting by partitioning the monolith in terms of data and functionality (logic and processes) in order to identify three categories of microservices: business process microservices, data microservices, and domain-specific microservices. They balanced orchestration and choreography with a “choreographed orchestration” of the microservices, where the (Camunda) process orchestrations were embedded within the microservices for handling processes and inter-service communication. Because each microservice has its own Camunda instance and database (which provides a high degree of scalability), they had to enhance the monitoring to get an aggregated view of all of the process flows.

This is a great example of a real-world large-scale implementation where a proprietary and monolithic iBPMS just would not work for the architecture that Telekom IT needed: Camunda BPM is embedded in the services, and doesn’t presuppose fixed orchestration at the top level of an application.

Although we’re just halfway through the last day, this was my last session at CamundaCon; I’m headed south for a short weekend break, then DecisionCAMP in Bolzano next week. Thanks to the entire Camunda team for putting on a great event, and for inviting me to give a keynote yesterday.

CamundaCon 2019: @berndruecker on Zeebe and microservices

Camunda co-founder Bernd Rücker presented on some of the implementation issues with microservices, in particular following on from Susanne Kaiser’s keynote with the theme of having small delivery teams spend more of their time developing business capabilities and less on the “undifferentiated heavy lifting” infrastructure bits required to support those. This significantly reduces the cognitive load for the team, allowing them to build the best possible business capabilities without worrying about arcane configuration details. Interestingly, this is not that different from the argument for moving from a business process embedded within business system logic to an externalized process in a BPMS — something that Bernd has a long history with.

He went through an example of the services behind a train ticket booking, which requires payment, seat reservation and ticket generation services; there are issues of latency and uptime as well as the user experience of how the results of those services are presented to the customer. He referenced the Reactive Manifesto as a guideline for software design patterns that are “more robust, more resilient, more flexible and better positioned to meet modern demands”.

Event-driven choreography is a common pattern these days, but has the problem of not being able to visualize the overall process flow between services. This can be alleviated somewhat by using event monitoring overlaid on a process model — effectively process discovery if the flow is not standardized or when it changes — or even more so by orchestrating standard parts of the flow to combine event-driven and orchestration patterns. Orchestration has the advantage of relocating the coupling between services in an event-driven flow to the orchestration layer: although event choreography is seen as loosely-coupled, there’s a lot of event listening that has to be built into the services, which couples them more closely. It’s not that one is good and the other bad: there’s a place for both choreography and orchestration patterns in software development.
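A toy sketch of where the coupling lives in each pattern (my own illustration, not Zeebe or Camunda code): with choreography each service carries its own event-listening logic, while with orchestration the sequencing lives in one place.

```python
# Toy sketch contrasting choreography and orchestration for a ticket booking.

HANDLERS = {}

def on(event_name):
    """Register a handler for an event (the 'event listening' inside a service)."""
    def register(handler):
        HANDLERS.setdefault(event_name, []).append(handler)
        return handler
    return register

def publish(event_name, payload):
    for handler in HANDLERS.get(event_name, []):
        handler(payload)

# --- Choreography: each service must know which events to listen for ------
@on("order_placed")
def reserve_seat(order):
    print("seat reserved for", order["id"])
    publish("seat_reserved", order)

@on("seat_reserved")
def generate_ticket(order):
    print("ticket generated for", order["id"])

# --- Orchestration: the overall flow is visible in one orchestrator -------
def book_ticket(order):
    print("seat reserved for", order["id"])
    print("ticket generated for", order["id"])

publish("order_placed", {"id": "A-42"})   # choreographed flow
book_ticket({"id": "A-43"})               # orchestrated flow
```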

He finished with a discussion of monolithic legacy software and how to deal with it: from the initial step of just adding APIs to access functionality, you gradually chip away at the monolith’s capabilities, ripping them out and replacing them with externalized services.