DecisionCAMP 2019: Serverless DROOLS and the Digital Engineer

How and Why I Turned a Rule Engine into a First-Class Serverless Component. Mario Fusco and Matteo Mortari, Red Hat

Mario Fusco, who heads up the Drools project within Red Hat, presented on modernization of the Drools architecture to support serverless execution, using GraalVM and Quarkus. He discussed Kogito, a cloud-native, open source business automation project that uses Red Hat process and decision management along with Quarkus.

I’m not a Java developer and likely did not appreciate many of the details in the presentation, hence the short post. You can check out his slides here.

Combining DMN, First Order Logic and Machine Learning: The creation of Saint-Gobain Seals’ Digital Engineer. Nicholas Decleyre, Saint-Gobain and Bram Aerts, KU Leuven

The seals design and manufacturing unit of Saint-Gobain had the goal of creating a “digital engineer” to capture knowledge, with the intent to standardize global production processes, reduce costs and time to market, and aid in training new engineers. They created an engineering automation tool to automatically generate solutions for standard designs, and an engineering support tool to provide information and other support to engineers while they are working on a solution.

Engineering automation and support systems at Saint-Gobain Seals. From Nicholas Decleyre and Bram Aerts’ presentation.

Automation for known solutions is fairly straightforward in execution: given the input specifications, determine a standard seal that can be used as a solution. This required quite a bit of knowledge elicitation from design engineers and management, which could then be represented in decision tables and FEEL for readability by the domain experts. It’s not only the solution selection that is automated, however: the system also generates a bill of materials and pricing details.

The engineering support system is for when the solution is not known: a design engineer uses the support system to experiment with possible solutions and compare designs. This required building a knowledge base in first-order logic to define physical constraints and preferences, represented in IDP, then allowing the system to make recommendations about a partial or complete solution or set of solutions. They built a standalone tool for engineers to use this system, presenting a set of design constraints for the engineer to apply to narrow down the possible solutions. They compared the merits of DMN versus IDP representations, where DMN is easier to model and understand, but has limitations in what it can represent as well as being more cumbersome to maintain. At RuleML yesterday, they presented a proposal for extending DMN to better represent constraints.

They finished up talking about potential applications of machine learning on the design database: searching for “similar” existing solutions, learning new constraints, and checking data consistency. They have several automated engineering tools in development, with one in testing and one in production. Their engineering support tool has working core functionality, although they still need to expand the knowledge base and prototype the UI. On the ML work, they expect to have a prototype by the end of this year.

DecisionCAMP 2019: Standards-based machine learning and the friendliness of FEEL

Machine Learning and Decision Management:
A standards-based approach. Edson Tirelli and Matteo Mortari, Red Hat

DecisionCAMP Day 1 morning sessions continue with Edson Tirelli and Matteo Mortari presenting on the integration of machine learning and decision management to address predictive decision automation. The problem to date is that integrating machine learning into business automation (either process or decision) has required proprietary interfaces and APIs, although there is an existing standard (PMML, Predictive Model Markup Language) for specifying and exchanging many types of executable machine learning models. The entry of the DMN standard provides a potential bridge between PMML and both BPMN and CMMN, allowing for an end-to-end standards-based representation for cases, processes, decisions and predictive models.

Linking business automation and machine learning with standards. From Edson Tirelli’s presentation.

They gave a demo of how they have implemented this using Red Hat decision and process engines along with open source tools Prometheus and Grafana, with a credit card dispute use case that uses BPMN, DMN and PMML to model the process and decisions. They started with a standard use of BPMN and DMN, where the DMN decision tables and graphs calculate the risk factors of the dispute and the customer, and make a decision on whether or not the dispute process can be automated. They added a predictive model for better calculation of the risk factors, positioning this in the DMN DRD as a business knowledge model that can then drive the decision model instead of a hard-coded decision table.
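For a rough idea of the mechanics under the covers, here’s a minimal sketch of what invoking a DMN model from Java looks like with the open source Drools DMN engine; the model namespace, name and input fields below are placeholders of my own, not the actual assets from the demo.

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.dmn.api.core.DMNContext;
import org.kie.dmn.api.core.DMNDecisionResult;
import org.kie.dmn.api.core.DMNModel;
import org.kie.dmn.api.core.DMNResult;
import org.kie.dmn.api.core.DMNRuntime;

public class DisputeRiskSketch {
    public static void main(String[] args) {
        // Load the DMN models packaged on the classpath of a kjar/kmodule project.
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        DMNRuntime dmnRuntime = container.newKieSession().getKieRuntime(DMNRuntime.class);

        // Placeholder namespace and model name -- not the actual demo assets.
        DMNModel model = dmnRuntime.getModel("https://example.org/dmn", "DisputeRisk");

        // Decision inputs; these field names are illustrative only.
        DMNContext ctx = dmnRuntime.newContext();
        ctx.set("Cardholder Status", "GOLD");
        ctx.set("Dispute Amount", 150);

        // Evaluate all decisions in the model and print each result,
        // e.g. a risk score and an automate/don't-automate decision.
        DMNResult result = dmnRuntime.evaluateAll(model, ctx);
        for (DMNDecisionResult dr : result.getDecisionResults()) {
            System.out.println(dr.getDecisionName() + " = " + dr.getResult());
        }
    }
}
```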

They finished their demo by importing the same PMML and DMN models in the Trisotech modeler to show interoperability of the integrated model types, with the predictive models providing knowledge sources for the decision models.

Coming from the process side, this is really exciting: we’re already seeing a lot of proprietary plug-ins and APIs to add machine learning to business processes, but this goes beyond that to allow standards-based tools to be plugged together easily. There’s still obviously work to be done to make this a seamless integration, but the idea that it can be all standards-based is pretty significant.

FEEL, Is It Really Friendly Enough? Daniel Schmitz-Hübsch and Ulrich Striffler, Materna

Materna has a number of implementation projects (mostly German government) that involve decision automation, where the logic is modeled by business users and the decision justification must be explainable to all users for transparency of decision automation. They use both decision tables and FEEL — decision tables are easier for business users to understand, but can’t represent everything — and some of the early adopters are using DMN. Given that most requirements are documented by business users in natural language, there are some obstacles to moving that initial representation to DMN.

Having the business users model the details of decisions in FEEL is the biggest issue: basically, you’re asking business people to write code in a scripting language, with the added twist that in their case, the business users are not native English speakers but the FEEL keywords are in English. In my experience, it’s hard enough to get business people to create syntactically-correct visual models in BPMN; moving to a scripting language would be a daunting task, and doing that in a foreign language would make most business people’s heads explode in frustration.
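To make that concrete, here’s a small sketch — my own illustration, not from the presentation — of the sort of FEEL expression a business user would be asked to write, evaluated with the open source Drools FEEL engine; the variable names and thresholds are made up.

```java
import java.util.Map;
import org.kie.dmn.feel.FEEL;

public class FeelSketch {
    public static void main(String[] args) {
        FEEL feel = FEEL.newInstance();

        // A made-up eligibility rule: note the English keywords
        // (if/then/else, and) regardless of the user's native language.
        String expression =
            "if applicant age >= 18 and annual income > 30000 "
            + "then \"eligible\" else \"referred\"";

        Map<String, Object> inputs = Map.of("applicant age", 25, "annual income", 42000);
        Object result = feel.evaluate(expression, inputs);

        System.out.println(result); // prints: eligible
    }
}
```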

They are trying some different approaches for dealing with this: allowing the users to read and write the logic in their native natural language (German), or replacing some FEEL elements (text statements) with graphical representations. They believe that this is a good starting point for a discussion on making FEEL a bit friendlier for business users, especially those whose native language is not English.

Graphical representation of FEEL elements. From Daniel Schmitz-Hübsch and Ulrich Striffler’s presentation.

Good closing discussion on the use of different tools for different levels of people doing the modeling.

DecisionCAMP 2019: collaborative decision making and temporal reasoning in DMN

Collaborative decisions: coordinating automated and human decision-making. Alan Fish, FICO

Alan Fish presented on the coordination of decisions between automation, individuals and groups. He considered how DMN isn’t enough to model these interactions, since it doesn’t allow for modeling certain characteristics; for example, partitioning decisions over time is best done with a combination of BPMN and DMN, where temporal dependencies can be represented, while combining CMMN and DMN can represent the partitioning of decisions between decision-makers.

Partitioning decisions over time, modeled with BPMN and DMN. From Alan Fish’s presentation.

He also looked at how to represent the partition between decisions and meta-decisions — which is not currently covered in DMN — where meta-decisions may be an analytical human activity that then determines some of the rules around how decisions are made. He defines an organization as a network of decision-making entities passing information to each other, with the minimum requirement for success based on having models of processes, case management, decisions and data. The OMG “Triple Crown” of DMN, BPMN and CMMN figures significantly in his ideas on a certain level of organizational modeling, and in the success of the organizations that embrace these standards as part of their overall modeling and improvement efforts.

He sees radical process reengineering as a risky operation, and posits doing process reengineering once, then constantly updating decision models to adapt to changing conditions. An interesting discussion on organizational models and how decision management fits into larger representations of organizations. Also some good follow-on Q&A about whether to consider modeling state in decision models, or leaving that to the process and case models; and about the value of modeling human decisions along with automated ones.

Making the Right Decision at the Right Time: Introducing Temporal Reasoning to DMN. Denis Gagné, Trisotech

Denis Gagné covered the concepts of temporal reasoning in DMN, including a new proposal to the DMN RTF for adding temporal reasoning concepts. Temporal logic is “any system of rules and symbolism for representing, and reasoning about, propositions qualified in terms of time”, that is, representing events in terms of whether they happened sequentially or concurrently, or what time that a particular event occurred.

The proposal will be for an extension to FEEL — which already has some basic temporal constructs with date and time types — that provides a more comprehensive representation based on Allen’s interval algebra and Zaidi’s point-interval logic. This would have built-in functions regarding intervals and points, with two levels of abstraction for expressiveness and business friendliness, allowing for DMN to represent temporal relationships between points, between points and intervals, and between intervals.

Proposed DMN syntax for temporal relationships. From Denis Gagné‘s presentation.

The proposal also includes a more “business person common sense” interpretation for interval overlaps and other constructs: note that 11 of the possible interval-interval relationships fall into this category, which makes this into a simpler before/after/overlap designation. Given all of these representations, plus more robust temporal functions, the standard can then allow expressions such as “interval X starts 3 days before interval Y” or “did this happen in September”.
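Since the FEEL extension is still only a proposal I won’t guess at its syntax, but the underlying relations are easy to illustrate; here’s a small sketch of my own in plain Java, using java.time, of a couple of Allen-style interval-interval relations plus a point-interval check.

```java
import java.time.LocalDate;

public class IntervalRelations {

    // A date interval [start, end], closed on both ends for simplicity.
    record Interval(LocalDate start, LocalDate end) {}

    // Allen's "before": X ends strictly before Y starts.
    static boolean before(Interval x, Interval y) {
        return x.end().isBefore(y.start());
    }

    // Allen's "overlaps": X starts first, they share time, and X ends inside Y.
    static boolean overlaps(Interval x, Interval y) {
        return x.start().isBefore(y.start())
                && y.start().isBefore(x.end())
                && x.end().isBefore(y.end());
    }

    // Point-interval check in the spirit of Zaidi's point-interval logic.
    static boolean during(LocalDate point, Interval y) {
        return !point.isBefore(y.start()) && !point.isAfter(y.end());
    }

    public static void main(String[] args) {
        Interval x = new Interval(LocalDate.of(2019, 9, 1), LocalDate.of(2019, 9, 10));
        Interval y = new Interval(LocalDate.of(2019, 9, 5), LocalDate.of(2019, 9, 30));

        System.out.println(before(x, y));   // false -- they overlap
        System.out.println(overlaps(x, y)); // true
        // point-interval: did 17 September fall within interval y?
        System.out.println(during(LocalDate.of(2019, 9, 17), y)); // true
    }
}
```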

This is my first time at DecisionCAMP (formerly RulesFest), and I’m totally loving it. It’s full of technology practitioners — vendors, researchers and consultants — who are more interested in discussing interesting ways to improve decision management and the DMN standard than in plugging their own products. I’m not as much of an expert in decision management as I am in process management, so there are great learning opportunities for me.

DecisionCAMP 2019 kicks off – business rules and decision management technology conference

I’m finishing up a European tour of three conferences with DecisionCAMP in Bolzano, which has a focus on business rules and decision management technology. This is really a technology conference, with sessions intended to be more discussions about what’s happening with new advances rather than the business or marketing side of products. Jacob Feldman of OpenRules was kind enough to invite me to attend when he heard that I was going to be within striking distance at CamundaCon last week in Berlin, and I’ll be moderating a panel tomorrow afternoon in return.

Feldman opened the conference with an overview of operational decision services for decision-making applications, such as smart processes, and the new requirements for decision services regarding performance, security and architectural models. He sees operational decision services as breaking down into three components: business knowledge (managed by business subject matter experts), business decision models (managed by business analysts) and deployed decision services (managed by developers/devops) — the last of these is what is triggered by decision-making applications when they pass data and request a decision. There are defined standards for the business decision models (e.g., DMN) and for transferring those to execution engines for the deployed services, but issues arise in standardizing how SMEs capture business knowledge and pass it on to the BAs for the creation of the decision models; definitely an area requiring more work from both standards groups and vendors.

I’ll do some blog posts that combine multiple presentations; you can see copies of most of the presentations here.

Goals and metrics

I’ve been spending some time recently helping a few companies think about how their corporate goals are aligned with key performance indicators (KPIs) at all levels of their organization, from the executive suite down to front-line workers.


Top-level goals, or what keeps the corporate executives awake at night, usually fall into the following categories:

  • Revenue growth
  • Competitive differentiation
  • Product agility
  • Customer retention

As we move down the hierarchy, different levels of business managers are also concerned with operating margin/profitability, service time, compliance, and operational scalability; you can see a pretty direct line between these KPIs and the top-level corporate goals. For example, improved profitability is likely going to improve (net) revenue, while better service time means happier customers. When we reach the level of front-line workers, their KPIs are usually based on individual performance and skills advancement.

The problem arises when those worker-level KPIs are not aligned with the corporate goals; I’ve written about this in several presentations and papers in the past, in particular about how we need to change worker metrics in more collaborative work environments so that they’re rewarded for more than just personal performance. In doing some research on this, I came across Goodhart’s Law (via the book The Tyranny of Metrics), which is basically about how people will game measurement systems to their own benefit, particularly when goals are complex and the metrics are crude. That’s so true. In other words, given the choice between maximizing a poorly-designed metric that will benefit them personally, or doing the right thing for the customer/company, people will almost always choose the former.

Examples:

  • An organization has a “same day” SLA for incoming customer inquiries, except if the inquiry needs to be reviewed by the legal or accounting departments. Business units are measured on how well they meet the SLA, so everyone forwards all of their unfinished work to legal or accounting at the end of the day in order to meet their SLA, even if the inquiry does not require it. This decreases productivity and increases customer service time, but maximizes the departmental time-based SLA.
  • An HR department is measured by the number of candidates that are hired, but not on the quality of the candidates. I don’t need to explain how that goes wrong, but suffice it to say that it has a big impact on customer satisfaction as well as productivity.

Any metric that is based on individual (or departmental) performance but can’t be aligned up the hierarchy to a corporate goal is probably going to be detrimental to overall performance, or at least neutral. If you can’t show how a task is contributing to the good of the enterprise, then why are you doing it?

Spreadsheets and email

I had a laugh at the xkcd comic from a few days ago:

Spreadsheets

It made me think of my standard routine when I’m walking through a business operations area and want to pinpoint where the existing systems aren’t doing what the workers really need them to do: I look for the spreadsheets and email. These are the best indicator of shadow IT at work, where someone in the business area creates an application that is not sanctioned or supported by IT, usually because IT is too busy to "do it right". Instead of accessing data from a validated source, it’s being copied to a spreadsheet, where scripts are performing calculations using business logic that was probably valid at the point it was written but hasn’t been updated since the person who wrote it left the company. Multiple copies of the spreadsheet (or a link to an unprotected copy on a shared drive) are forwarded to people via email, but there’s no way to track who has it or what they’ve done with it. If the data in the source system changes, the spreadsheet and all of its copies stay the same unless manually updated.

Don’t get me wrong: I love spreadsheets. I once claimed that you could take away every other tool on my desktop and I could just reproduce it in Excel. Spreadsheets and email fill the gaps between brittle legacy systems, but they aren’t a great solution. That’s where low-code platforms fit really well: they let semi-technical business analysts (or semi-business technical analysts) create applications that can access realtime business data, assign and track tasks, and integrate other capabilities such as decision management and analytics.

I gave a keynote at bpmNEXT this year about creating your own digital automation platform using a BPMS and other technology components, which is what many large enterprises are doing. However, there are many other companies — and even departments within those large companies — for which a low-code platform fills an important gap. I’ll be doing a modified version of that presentation at this year’s CamundaCon in Berlin, and I’m putting together a bit of a chart on how to decide when to build your own platform and when to use a monolithic low-code platform for building business applications. Just don’t use spreadsheets and email.

Microservices meets case management: my post on the @Alfresco blog

Image lifted from my post on Alfresco’s blog, charmingly named “analyst_meme1”

I wrote a post on a microservices approach to intelligent case management applications for Alfresco, which they’ve published on their blog. It covers the convergence of three key factors: the business requirement to support case management paradigms for knowledge work; the operational drive to increase automation and ensure compliance; and the technology platform trend to adopt microservices.

It’s a pretty long read; I originally wrote it as a 3-4 page paper to cover the scope of the issues, with case management examples in insurance claims, citizen services, and customer onboarding. My conclusion:

Moving from a monolithic application to microservices architecture makes good sense for many business systems today; for intelligent case management, where no one supplier can provide a good solution for all of the required capabilities, it’s essential.

Before you ask:

  • Yes, I was paid for it, which is why it’s there and not here.
  • No, it’s not about Alfresco products, it’s technology/business analysis.

Wrapping up OpenText Enterprise World 2019

It’s the last day of OpenText Enterprise World for this year. I started the day attending one of the developer labs, where I created a JavaScript app using OT2 services, then attended a couple of AppsWorks-related sessions: Duke Energy’s transition from MetaStorm to AppWorks, and using AppWorks for process/case and content integration in the public sector. I also got to meet the adorable Great Dane that was here as part of the Paws for a Break program: she’s a cross between a Harlequin and Merle in color, so they call her a Merlequin.

Mark Barrenechea was back to close the conference with a quick recap: 3,500 attendees, OpenText Cloud Edition, Google partnership, ethical supply chains, and the talk by Sir Tim Berners-Lee. Plus Berners-Lee’s quote of the real reason that the web was created: cat videos!

In addition to the announcements that we heard during the week, Barrenechea also told us about their new partnership with MasterCard to provide integrated payment services in B2B supply chains, and had two MasterCard Enterprise Partnership executives on stage to talk more about it.

The closing ceremonies finished off with another very special guest: singer, songwriter and activist Peter Gabriel. I was familiar with his music career — having had the pleasure to see him live in concert in the past — but didn’t realize the extent of his human rights activism. He talked about his start and career in music, and some of the ways that he’s woven human rights into his career, from writing the timeless anti-apartheid hit about Stephen Biko to starting the WOMAD festival. He’s been involved in the creation of an inter-species internet, and showed a video of a bonobo composing music with him.

Then his band joined him and he played a set! Amazing finish to the week.

OpenText Enterprise World 2019: AppWorks roadmap and technical deep dive

I had an afternoon with AppWorks at OpenText Enterprise World: a roadmap session followed by a technical deep dive. AppWorks is their low-code tool that includes process management, case management, and access to content and other information, supported across mobile and desktop platforms. It contains a number of pre-packaged components, and a technical developer can create new components that can be accessed as services from the AppWorks environment. They’ve recently made it into the top-right corner of the Forrester Wave for [deep] digital process automation platforms, with case management and content integration listed as some of their strongest features, along with Magellan’s AI and analytics and the OpenText Cloud deployment platform.

The current release has focused on improving end-user flexibility and developer ease-of-use, but also on integration capabilities with the large portfolio of other OpenText tools and products. There are some new developer features such as an expression editor and a mobile-first design paradigm, plus an upcoming framework for end-user UI customization in terms of themes and custom forms. Runtime performance has been improved by making applications into true single-page applications.

There are four applications built on the current on-premise AppWorks: Core for Legal, Core for Quality Management, Contract Center and People Center. These are all some combination of content (from the different content services platforms available) plus case or process management, customized for a vertical application. I didn’t hear a commitment to migrate these to the cloud, but there’s no reason that this won’t happen.

Some interesting future plans, such as how AppWorks will be used as a low-code development tool for OT2 applications. They have a containerized version of AppWorks available as a developer preview as a stepping stone to next year’s cloud edition. There was a mention of RPA although not a clear direction at present: they can integrate with third-party RPA tools now, and may be mulling over whether to build or buy their own capability. There’s also the potential to build process intelligence/mining and reporting functionality based on their Magellan machine learning and analytics. There were a lot of questions from the audience, such as whether they will be supporting GitHub for source code control (probably, but not yet scheduled) and whether there will be better REST support.

Nick King, the director of product management for AppWorks, took us through a technical session that was primarily an extended live demonstration of creating a complex application in AppWorks. Although the initial part of creating the layout and forms is pretty accessible to non-technical people, the creation of BPMN diagrams, web service integration, and case lifecycle workflows are clearly much more technical; even the use of expressions in the forms definition is starting to get pretty technical. Also, based on the naming of components visible at various points, there is still a lot of the legacy Cordys infrastructure under the covers of AppWorks; I can’t believe it’s been 12 years since I first saw Cordys (and thought it was pretty cool).

There are a lot of nice things that just happen without configuration, much less coding, such as the linkages between components within a UI layout. Basically, if an application contains a number of different building blocks such as properties, forms and lifecycle workflows, those components are automatically wired together when assembled on a single page layout. Navigation breadcrumbs and action buttons are generated automatically, and changes in one component can cause updates to other components without a screen refresh.

OpenText, like every other low-code application development vendor, will likely continue to struggle with the issues of what a non-technical business analyst versus a technical developer does within a low-code environment. As a Java developer at one of my enterprise clients said recently upon seeing a low-code environment, “That’s nice…but we’ll never use it.” I hope that they’re wrong, but fear that they’re right. To address that, it is possible to use the AppWorks environment to write “pro-code” (technical lower-level code) to create services that could be added to a low-code application, or to create an app with a completely different look and feel than is possible using AppWorks low-code. If you were going to do a full-on BPMN process model, or make calls to Magellan for sentiment analysis, it would be more of a pro-code application.

bpmNEXT 2019 demos: automation services with @Trisotech and @bpmswatch

The day started with my keynote on rolling your own digital automation platform using BPM and microservices, which set the stage for the two demos and the round table discussion that followed.

Business Automation as a Service, with Denis Gagne of Trisotech

Denis demoed a new product release from Trisotech, their business automation as a service platform: competing with services such as Zapier and IFTTT but with better process and decision management, and more complex service types available for integration. He showed creating a service built on a Twitter trigger, using BPMN to model the orchestration and FEEL as the scripting language in script activities, and incorporating a machine learning sentiment score and a decision service for categorizing the results, with the result displayed in the color of a flashing smart light bulb. Every service created exposes an Open API and REST API by default, and is deployed as a self-contained microservice. He showed a more complex example of marketing automation that extracts data from an input form, uses a geo-locator to find the customer location, uses a DMN decision model to assign to a sales team based on geography and other form parameters, then creates a lead in Microsoft Dynamics CRM. He finished up with an RPA task example that included the funniest execution of an “I am not a robot” CAPTCHA ever. Key point here is that Trisotech has moved from a pure modeling vendor into the execution space, integrated with any Open API service, and deployable across a number of different cloud platforms using standard protocols. Looking forward to playing around with this.
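As a rough illustration of what “exposes a REST API by default” means for a caller, here’s a sketch of invoking such a deployed service from Java; the endpoint URL and JSON fields are placeholders of my own, since the real paths and payloads would come from the generated Open API definition.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InvokeAutomationService {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and payload -- the real paths and field names
        // come from the service's published Open API definition.
        String endpoint = "https://automation.example.com/services/lead-assignment";
        String payload = "{\"company\": \"Acme\", \"country\": \"DE\", \"employees\": 500}";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body()); // e.g. the assigned sales team
    }
}
```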

Business-Composable Services for the Mortgage Industry, with Bruce Silver of Method and Style

Bruce showed the business automation services that he’s created using Trisotech’s platform for the mortgage industry. Although he started looking at decision services around how to determine if someone should be approved for a mortgage (or how large of a mortgage), process was also required to do things like handle mapping and validation of data. Everything is driven by a standard application form and a standard set of underwriting rules used in the US mortgage industry, although this could be modified to suit other markets with different rules. The DMN rules are written in business-readable language, allowing them to be changed by non-developers. The BPMN process does the data validation and mapping before invoking the underwriting decision service. The entire process can be published as a service to be called from any environment, such as a web app used by underwriters inside a financial company or by an online prequalification review done directly by the consumer. The plan is to make these models and services available to see what the adoption is like, to help highlight the value and drive the usage of BPMN and DMN in practice.

Industry Round Table: The Coming Impact of Decision Services and Machine Learning on Business Automation

We finished the morning of day 2 with a discussion that included three of the earlier demo presenters: Denis Gagne, Bruce Silver and Scott Menter. They each gave a short talk on how decision services and machine learning are changing the automation landscape. Some ideas discussed:

  • It’s still up in the air whether DMN will “cross the chasm” and become generally used (to the same degree as, for example, BPMN); this means that vendors need to fully support it, potentially as an execution language as well as a requirements language.
  • Having machine learning algorithms expressed as DMN can improve transparency of decisions, which is essential in some jurisdictions (e.g., GDPR). There is a need for “explainable AI”.
  • The population using DMN is smaller than that using BPMN, and the required skill level is higher, although still well within the capabilities of data-focused business people who are comfortable with formulas and expression languages.
  • There’s a distinction between symbolic (rules-based) and subsymbolic (neural network) AI algorithms in terms of what they can do and how they perform; however, subsymbolic AI is more of a black box in terms of decision transparency.
  • If we here at bpmNEXT aren’t thinking about the ethics of automation, who will? Consider the labor disruption of automation, or decisions that make a choice involving the value of life (the AI “trolley problem”), or old norms used as training data to create biased machine learning.
  • We’re still in a culture of having people at a certain skill level (e.g., surgeons, pilots) make their own decisions, although they might be advised by AI. How soon before we accept automated decisions at that level?
  • Individually-targeted decisions are happening now via what is presented to specific people through platforms like Google Search and Amazon. How is our behavior being controlled by the limited set of options presented to us?
  • The closer that a technology gets to the end effect, the more responsibility that the creator of the technology needs to take in how it is used.
  • Machine learning may be the best way to discover the best transparent decision logic from human action (unfortunately that will also include the human biases), allowing for people to understand how and why specific decisions are made.
  • When AI is a black box, it needs to be understood as being a black box, so that adequate constructs can be created around it for testing and usage.

Great discussion and audience participation, and a good follow-on from the two demos that showed decision services in action.