DecisionCAMP 2019: Standards-based machine learning and the friendliness of FEEL

Machine Learning and Decision Management:
A standards-based approach. Edson Tirelli and Matteo Mortari, Red Hat

DecisionCAMP Day 1 morning sessions continue with Edson Tirelli and Matteo Mortari presenting on the integration of machine learning and decision management to address predictive decision automation. The problem to date is that integrating machine learning into business automation (either process or decision) has required proprietary interfaces and APIs, although there is an existing standard (PMML, Predictive Model Markup Language) for specifying and exchanging many types of executable machine learning models. The entry of the DMN standard provides a potential bridge between PMML and both BPMN and CMMN, allowing for an end-to-end standards-based representation for cases, processes, decisions and predictive models.

Linking business automation and machine learning with standards. From Edson Tirelli’s presentation.

They gave a demo of how they have implemented this using Red Hat decision and process engines along with open source tools Prometheus and Grafana, with a credit card dispute use case that uses BPMN, DMN and PMML to model the process and decisions. They started with a standard use of BPMN and DMN, where the DMN decision tables and graphs calculate the risk factors of the dispute and the customer, and make a decision on whether or not the dispute process can be automated. They added a predictive model for better calculation of the risk factors, positioning this in the DMN DRD as a business knowledge model that can then drive the decision model instead of a hard-coded decision table.
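To make the shape of that decision concrete, here is a minimal Python sketch of the pattern they demoed: a predictive score (standing in for the PMML business knowledge model) feeding a decision-table-style rule that decides whether the dispute can be handled automatically. The function names, weights and thresholds are hypothetical, not taken from their demo.

```python
# Minimal sketch (not the presenters' code): a predictive risk score standing in
# for the PMML business knowledge model, feeding decision-table-style logic that
# decides how the dispute should be handled. All thresholds are hypothetical.

def predicted_risk(dispute_amount: float, prior_disputes: int) -> float:
    """Stand-in for the PMML model: returns a risk score between 0 and 1."""
    score = min(1.0, dispute_amount / 5000.0) * 0.7 + min(1.0, prior_disputes / 5) * 0.3
    return round(score, 2)

def dispute_decision(dispute_amount: float, prior_disputes: int) -> str:
    """Decision-table-style logic driven by the predicted risk score."""
    risk = predicted_risk(dispute_amount, prior_disputes)
    if risk < 0.3:
        return "automatic refund"
    elif risk < 0.7:
        return "automated review"
    else:
        return "manual investigation"

print(dispute_decision(dispute_amount=250.0, prior_disputes=0))   # automatic refund
print(dispute_decision(dispute_amount=4800.0, prior_disputes=4))  # manual investigation
```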

They finished their demo by importing the same PMML and DMN models in the Trisotech modeler to show interoperability of the integrated model types, with the predictive models providing knowledge sources for the decision models.

Coming from the process side, this is really exciting: we’re already seeing a lot of proprietary plug-ins and APIs to add machine learning to business processes, but this goes beyond that to allow standards-based tools to be plugged together easily. There’s still obviously work to be done to make this a seamless integration, but the idea that it can be all standards-based is pretty significant.

FEEL, Is It Really Friendly Enough? Daniel Schmitz-Hübsch and Ulrich Striffler, Materna

Materna has a number of implementation projects (mostly German government) that involve decision automation, where the logic is modeled by business users and the decision justification must be explainable to all users for transparency of the automated decisions. They use both decision tables and FEEL — decision tables are easier for business users to understand, but can’t represent everything — and some of the early adopters are using DMN. Given that most requirements are documented by business users in natural language, there are some obstacles to moving that initial representation to DMN.

Having the business users model the details of decisions in FEEL is the biggest issue: basically, you’re asking business people to write code in a scripting language, with the added twist that in their case the business users are not native English speakers but the FEEL keywords are in English. In my experience, it’s hard enough to get business people to create syntactically correct visual models in BPMN; moving to a scripting language would be a daunting task, and doing that in a foreign language would make most business people’s heads explode in frustration.

They are trying some different approaches for dealing with this: allowing the users to read and write the logic in their native natural language (German), or replacing some FEEL elements (text statements) with graphical representations. They believe that this is a good starting point for a discussion on making FEEL a bit friendlier for business users, especially those whose native language is not English.

Graphical representation of FEEL elements. From Daniel Schmitz-Hübsch and Ulrich Striffler’s presentation.
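To illustrate the first of those approaches, here is a toy Python sketch of what substituting native-language keywords for FEEL's English keywords might look like. This is purely illustrative and not Materna's implementation; a real solution would need a proper parser rather than word-level replacement.

```python
# Toy illustration (not Materna's approach): substituting German keywords in a
# FEEL-like expression with their English equivalents before handing the
# expression to a FEEL engine.

import re

GERMAN_TO_FEEL = {
    "wenn": "if",
    "dann": "then",
    "sonst": "else",
    "und": "and",
    "oder": "or",
    "nicht": "not",
}

def localize_to_feel(expression: str) -> str:
    """Replace whole-word German keywords with FEEL's English keywords."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        return GERMAN_TO_FEEL.get(word.lower(), word)
    return re.sub(r"[A-Za-zÄÖÜäöüß]+", swap, expression)

print(localize_to_feel('wenn Alter >= 18 dann "berechtigt" sonst "abgelehnt"'))
# -> if Alter >= 18 then "berechtigt" else "abgelehnt"
```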

Good closing discussion on the use of different tools for different levels of people doing the modeling.

DecisionCAMP 2019: collaborative decision making and temporal reasoning in DMN

Collaborative decisions: coordinating automated and human decision-making. Alan Fish, FICO

Alan Fish presented on the coordination of decisions between automation, individuals and groups. He considered how DMN isn’t enough to model these interactions, since it doesn’t allow for modeling certain characteristics; for example, partitioning decisions over time is best done with a combination of BPMN and DMN, where temporal dependencies can be represented, while combining CMMN and DMN can represent the partitioning of decisions between decision-makers.

Partitioning decisions over time, modeled with BPMN and DMN. From Alan Fish’s presentation.

He also looked at how to represent the partition between decisions and meta-decisions — which is not currently covered in DMN — where meta-decisions may be an analytical human activity that then determines some of the rules around how decisions are made. He defines an organization as a network of decision-making entities passing information to each other, with the minimum requirement for success based on having models of processes, case management, decisions and data. The OMG “Triple Crown” of DMN, BPMN and CMMN figures significantly in his ideas on this level of organizational modeling, and in the success of the organizations that embrace them as part of their overall modeling and improvement efforts.

He sees radical process reengineering as a risky operation, and posits doing process reengineering once, then constantly updating decision models to adapt to changing conditions. An interesting discussion on organizational models and how decision management fits into larger representations of organizations. Also some good follow-on Q&A about whether to consider modeling state in decision models, or leaving that to the process and case models; and about the value of modeling human decisions along with automated ones.

Making the Right Decision at the Right Time: Introducing Temporal Reasoning to DMN. Denis Gagné, Trisotech

Denis Gagné covered the concepts of temporal reasoning in DMN, including a new proposal to the DMN RTF for adding temporal reasoning concepts. Temporal logic is “any system of rules and symbolism for representing, and reasoning about, propositions qualified in terms of time”, that is, representing events in terms of whether they happened sequentially or concurrently, or what time that a particular event occurred.

The proposal will be for an extension to FEEL — which already has some basic temporal constructs with date and time types — that provides a more comprehensive representation based on Allen’s interval algebra and Zaidi’s point-interval logic. This would have built-in functions regarding intervals and points, with two levels of abstraction for expressiveness and business friendliness, allowing for DMN to represent temporal relationships between points, between points and intervals, and between intervals.

Proposed DMN syntax for temporal relationships. From Denis Gagné‘s presentation.

The proposal also includes a more “business person common sense” interpretation for interval overlaps and other constructs: 11 of the 13 possible interval-interval relationships count as overlaps under this interpretation, collapsing them into a simpler before/after/overlaps designation. Given all of these representations, plus more robust temporal functions, the standard could then allow expressions such as “interval X starts 3 days before interval Y” or “did this happen in September”.
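As a rough illustration of the underlying idea (not the proposed FEEL syntax), here is a Python sketch of a couple of Allen's interval relations together with the collapsed before/after/overlaps view described above; the interval values are plain numbers to keep it short.

```python
# A rough sketch of the idea (not the proposed FEEL syntax): a few of Allen's
# 13 interval-interval relations, plus the simplified "business common sense"
# classification that collapses everything other than strictly-before and
# strictly-after into "overlaps".

from dataclasses import dataclass

@dataclass
class Interval:
    start: float  # could be dates; numbers keep the sketch simple
    end: float

def before(x: Interval, y: Interval) -> bool:
    return x.end < y.start

def meets(x: Interval, y: Interval) -> bool:
    return x.end == y.start

def during(x: Interval, y: Interval) -> bool:
    return x.start > y.start and x.end < y.end

def simple_relation(x: Interval, y: Interval) -> str:
    """Collapsed before/after/overlaps view of the 13 Allen relations."""
    if x.end < y.start:
        return "before"
    if x.start > y.end:
        return "after"
    return "overlaps"  # the remaining 11 relations all land here

print(meets(Interval(1, 3), Interval(3, 6)))            # True
print(simple_relation(Interval(1, 3), Interval(5, 8)))  # before
print(simple_relation(Interval(2, 6), Interval(5, 8)))  # overlaps
```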

This is my first time at DecisionCAMP (formerly RulesFest), and I’m totally loving it. It’s full of technology practitioners — vendors, researchers and consultants — who are more interested in discussing interesting ways to improve decision management and the DMN standard than in plugging their own products. I’m not as much of an expert in decision management as I am in process management, so there are great learning opportunities for me.

DecisionCAMP 2019 kicks off – business rules and decision management technology conference

I’m finishing up a European tour of three conferences with DecisionCAMP in Bolzano, which has a focus on business rules and decision management technology. This is really a technology conference, with sessions intended to be more discussions about what’s happening with new advances rather than the business or marketing side of products. Jacob Feldman of OpenRules was kind enough to invite me to attend when he heard that I was going to be within striking distance at CamundaCon last week in Berlin, and I’ll be moderating a panel tomorrow afternoon in return.

Feldman opened the conference with an overview of operational decision services for decision-making applications, such as smart processes, and the new requirements for decision services regarding performance, security and architectural models. He sees operational decision services as breaking down into three components: business knowledge (managed by business subject matter experts), business decision models (managed by business analysts) and deployed decision services (managed by developers/devops) — the last of these is what is triggered by decision-making applications when they pass data and request a decision. There are defined standards for the business decision models (e.g., DMN) and for transferring those to execution engines for the deployed services, but issues arise in standardizing how SMEs capture business knowledge and pass it on to the BAs for the creation of the decision models; definitely an area requiring more work from both standards groups and vendors.
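As a concrete (and entirely hypothetical) picture of that last component, here is a minimal Python sketch of an application passing its data to a deployed decision service over REST and getting a decision back; the endpoint, payload and response shape are invented for illustration rather than taken from any specific product.

```python
# Minimal sketch of the "deployed decision service" leg: an application passes
# its data to a decision service over REST and gets a decision back. The URL,
# payload and response are hypothetical, not a specific vendor's API.

import requests

DECISION_SERVICE_URL = "https://decisions.example.com/services/loan-eligibility"  # hypothetical

def request_decision(applicant: dict) -> dict:
    response = requests.post(DECISION_SERVICE_URL, json=applicant, timeout=10)
    response.raise_for_status()
    return response.json()  # e.g. {"eligible": true, "reason": "income above threshold"}

decision = request_decision({"age": 42, "annualIncome": 65000, "requestedAmount": 15000})
print(decision)
```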

I’ll do some blog posts that combine multiple presentations; you can see copies of most of the presentations here.

bpmNEXT 2019 demos: automation services with @Trisotech and @bpmswatch

The day started with my keynote on rolling your own digital automation platform using BPM and microservices, which set the stage for the two demos and the round table discussion that followed.

Business Automation as a Service, with Denis Gagne of Trisotech

Denis demoed a new product release from Trisotech, their business automation as a service platform: competing with services such as Zapier and IFTTT but with better process and decision management, and more complex service types available for integration. He showed creating a service built on a Twitter trigger, using BPMN to model the orchestration and FEEL as the scripting language in script activities, and incorporating a machine learning sentiment score and a decision service for categorizing the results, with the result displayed in the color of a flashing smart light bulb. Every service created exposes an Open API and REST API by default, and is deployed as a self-contained microservice. He showed a more complex example of marketing automation that extracts data from an input form, uses a geo-locator to find the customer location, uses a DMN decision model to assign to a sales team based on geography and other form parameters, then creates a lead in Microsoft Dynamics CRM. He finished up with an RPA task example that included the funniest execution of an “I am not a robot” CAPTCHA ever. Key point here is that Trisotech has moved from a pure modeling vendor into the execution space, integrated with any Open API service, and deployable across a number of different cloud platforms using standard protocols. Looking forward to playing around with this.
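As a hypothetical reconstruction of the final step of that first example, here is a short Python sketch of how a sentiment score might be categorized (done with a DMN decision service in the demo) and mapped to a bulb color; the thresholds and colors are invented for illustration.

```python
# Hypothetical reconstruction of the demo's last step: a sentiment score from a
# machine learning service is categorized by a simple decision (DMN in the demo)
# and the category is mapped to a smart bulb color. Thresholds are made up.

def categorize_sentiment(score: float) -> str:
    """score in [-1.0, 1.0]; decision-table-style categorization."""
    if score <= -0.3:
        return "negative"
    elif score < 0.3:
        return "neutral"
    else:
        return "positive"

BULB_COLORS = {"negative": "red", "neutral": "yellow", "positive": "green"}

def bulb_color_for(score: float) -> str:
    return BULB_COLORS[categorize_sentiment(score)]

print(bulb_color_for(0.8))   # green
print(bulb_color_for(-0.6))  # red
```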

Business-Composable Services for the Mortgage Industry, with Bruce Silver of Method and Style

Bruce showed the business automation services that he’s created using Trisotech’s platform for the mortgage industry. Although he started looking at decision services around how to determine if someone should be approved for a mortgage (or how large of a mortgage), process was also required to do things like handle mapping and validation of data. Everything is driven by a standard application form and a standard set of underwriting rules used in the US mortgage industry, although this could be modified to suit other markets with different rules. The DMN rules are written in business-readable language, allowing them to be changed by non-developers. The BPMN process does the data validation and mapping before invoking the underwriting decision service. The entire process can be published as a service to be called from any environment, such as a web app used by underwriters inside a financial company or by an online prequalification review done directly by the consumer. The plan is to make these models and services available to see what the adoption is like, to help highlight the value and drive the usage of BPMN and DMN in practice.

Industry Round Table: The Coming Impact of Decision Services and Machine Learning on Business Automation

We finished the morning of day 2 with a discussion that included three of the earlier demo presenters: Denis Gagne, Bruce Silver and Scott Menter. They each gave a short talk on how decision services and machine learning are changing the automation landscape. Some ideas discussed:

  • It’s still up in the air whether DMN will “cross the chasm” and become generally used (to the same degree as, for example, BPMN); this means that vendors need to fully support it, potentially as an execution language as well as a requirements language.
  • Having machine learning algorithms expressed as DMN can improve transparency of decisions, which is essential in some jurisdictions (e.g., GDPR). There is a need for “explainable AI”.
  • The population using DMN is smaller than the one using BPMN, and the required skill level is higher, although still well within the capabilities of data-focused business people who are comfortable with formulas and expression languages.
  • There’s a distinction between symbolic (rules-based) and sub-symbolic (neural network) AI algorithms in terms of what they can do and how they perform; in particular, sub-symbolic AI is more of a black box in terms of decision transparency.
  • If we here at bpmNEXT aren’t thinking about the ethics of automation, who will? Consider the labor disruption of automation, or decisions that make a choice involving the value of life (the AI “trolley problem”), or old norms used as training data to create biased machine learning.
  • We’re still in a culture of having people at a certain skill level (e.g., surgeons, pilots) make their own decisions, although they might be advised by AI. How soon before we accept automated decisions at that level?
  • Individually-targeted decisions are happening now by what is presented to specific people through platforms like Google Search and Amazon. How is our behavior being controlled by the limited set of options presented to us?
  • The closer that a technology gets to the end effect, the more responsibility that the creator of the technology needs to take in how it is used.
  • Machine learning may be the best way to discover the best transparent decision logic from human action (unfortunately that will also include the human biases), allowing for people to understand how and why specific decisions are made.
  • When AI is a black box, it needs to be understood as being a black box, so that adequate constructs can be created around it for testing and usage.

Great discussion and audience participation, and a good follow-on from the two demos that showed decision services in action.

bpmNEXT 2018: Last session with a Red Hat demo, Serco presentation and DMN TCK review

We’re on the final session of bpmNEXT 2018 — it’s been an amazing three days with great demos and wonderful conversations.

Exploiting Cloud Infrastructure for Efficient Business Process Execution, Red Hat

Kris Verlaenen, project lead for jBPM at Red Hat, presented on cloud BPM infrastructure, specifically for execution and monitoring. Cloud makes BPM lightweight, scalable, embeddable and able to take advantage of the larger cloud app ecosystem. They are introducing some new cloud infrastructure, including a controller for managing server deployments, a smart router for delegating and aggregating requests from applications to servers, and monitoring that aggregates process statistics across servers and containers. The demo showed using Red Hat’s OpenShift container application platform (actually MiniShift running on his laptop) to create a new environment and deploy an IT hardware ordering BPM application. He walked through using the application to create a new order and see the milestone-based monitoring of the order, then the hardware provider’s view of their steps in the process to provide information and advance the process to the next stage. The process engine and monitoring engine can be deployed in different containers on different hardware, in any combination of cloud providers and on-premise infrastructure. Applications and servers can be bundled into a single immutable image for easy provisioning — more of a microservices style — or can be deployed independently. Multiple versions of the same application can be deployed, allowing current instances to play out in the original version while new instances use the most recent version, or other strategies that would allow new instances of any version to be created, while monitoring can aggregate instance data from all versions in all containers.

Kris is also live-blogging the conference, check out his posts. He has gone back and included the video of each presentation when they are released (something that I didn’t do for page load performance reasons) as well as providing his commentary on each presentation.

Dynamic Work Assignment, Serco

Lloyd Dugan of Serco has the unenviable position of being the last presenter of the conference, although he gave a presentation on a dynamic work assignment implementation rather than an actual demo (with a quick view of the simple process model in the Trisotech animator near the end, plus an animation of the work assignment in action). His company is a call center business process outsourcer, where knowledge workers use a case management application implemented in BPMN, driven by events such as inbound calls and documents, as well as timers. Real-time work prioritization and assignment is necessary because of SLAs around inbound calls, and the task management model is moving from work being selected (and potentially cherry-picked) by workers, to push assignments. Tasks are scored and assigned using decision models that include task type and SLAs, and worker eligibility based on each individual’s skills and training. Although work assignment products exist, this one is specifically for the complex rules around the US Affordable Care Act administration, which requires a combination of decision tables, database table-driven rules, and lower-level coding to provide the right combination of flexibility and performance.
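Here is a simplified Python sketch of the kind of decision-based assignment logic described, with tasks scored from task type and SLA urgency and workers filtered by skill eligibility; the weights, skills and task types are illustrative only, not Serco's actual rules.

```python
# Simplified sketch of decision-based work assignment: score tasks by type and
# SLA pressure, filter workers by skill eligibility, push the most urgent task
# to an eligible worker. All weights and names are invented.

from dataclasses import dataclass, field

TASK_TYPE_WEIGHT = {"inbound_call": 100, "document": 40, "timer_followup": 20}

@dataclass
class Task:
    task_id: str
    task_type: str
    minutes_to_sla_breach: float
    required_skill: str

@dataclass
class Worker:
    name: str
    skills: set = field(default_factory=set)

def score(task: Task) -> float:
    """Higher score = more urgent; SLA pressure dominates as the deadline nears."""
    urgency = max(0.0, 60.0 - task.minutes_to_sla_breach)
    return TASK_TYPE_WEIGHT.get(task.task_type, 10) + urgency

def assign_next(tasks: list, workers: list):
    for task in sorted(tasks, key=score, reverse=True):
        eligible = [w for w in workers if task.required_skill in w.skills]
        if eligible:
            return task, eligible[0]
    return None

tasks = [
    Task("T1", "document", 240, "claims"),
    Task("T2", "inbound_call", 5, "enrollment"),
]
workers = [Worker("Ana", {"enrollment"}), Worker("Ben", {"claims"})]
print(assign_next(tasks, workers))  # T2 goes to Ana: the inbound call is near its SLA
```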

DMN TCK (Technical Compatibility Kit) Working Group

Keith Swenson of Fujitsu (but presenting here in his role in the DMN standards work) started on the idea of a set of standardized DMN technical compatibility tests based on conversations at bpmNEXT in 2016, and he presented today on where they’re at with the TCK. Basically, the TCK provides a way for DMN vendors to demonstrate their compliance with the standard by providing a set of DMN models, input data, and expected results, testing decision tables, boxed expressions and FEEL. Vendors who can demonstrate that they pass all of the TCK tests are listed on a GitHub site along with information about individual test results, providing a way for DMN customers to assess the compliance level of vendors. Keith wrote an update on this last September that provides a good summary up to that point, and in today’s presentation he walked through some of the additional things that they’ve done including identifying sections of the DMN specification that require clarifications or additions due to ambiguity that can lead to different implementations. DMN 1.2 is coming out this year, which will require a new set of tests specifically for that version while maintaining the previous version tests; they are also trying to improve testing of error cases and introducing more real-world decision models. If you create and use DMN models, or make a DMN-compliant decision management product, or you’re otherwise interested in the DMN TCK, you can find out here how to get involved in the working group.
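For a sense of what the TCK mechanics look like, here is a bare-bones Python sketch of a TCK-style test run: each case pairs a DMN model and input data with expected results, and a vendor's engine is invoked and compared against the expectation. The evaluate() stub is a placeholder for a real DMN engine, and the sample case is patterned after the TCK's simplest published test rather than copied from it.

```python
# Bare-bones sketch of a TCK-style run: pair each DMN model with inputs and
# expected results, invoke the vendor's engine, compare. The evaluate() call is
# a placeholder; a real harness would wire in an actual DMN engine.

def evaluate(model_name: str, inputs: dict) -> dict:
    """Placeholder for a vendor's DMN engine invocation."""
    raise NotImplementedError("plug in a real DMN engine here")

TEST_CASES = [
    {"model": "0001-input-data-string.dmn",
     "inputs": {"Full Name": "John Doe"},
     "expected": {"Greeting Message": "Hello John Doe"}},
]

def run_tck(cases: list) -> None:
    for case in cases:
        try:
            actual = evaluate(case["model"], case["inputs"])
            status = "PASS" if actual == case["expected"] else "FAIL"
        except NotImplementedError:
            status = "SKIPPED (no engine wired in)"
        print(f'{case["model"]}: {status}')

run_tck(TEST_CASES)
```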

That’s it for bpmNEXT 2018. There will be voting for the best in show and some wrapup after lunch, but we’re pretty much done for this year. Another amazing year that makes me proud to be a part of this community.

bpmNEXT 2018 day 2 keynote with @NathanielPalmer

Nathaniel Palmer kicked off day 2 of bpmNEXT 2018 with his ever-prescient views on the next five years of BPM. Bots, decisions and automation are key, with the three R’s (robots, rules and relationships) defining BPM in the years to come. More and more, commercial transactions (or services that form part of those transactions) will happen on servers outside your organization, and often outside of your control; robots and intelligent agents will be doing a big part of that work. He also believes that we’re seeing the beginning of the death of smartphones, to be replaced with other devices and other interfaces such as conversational UI and wearable technology. This is going to radically change how apps have to be designed, and will leave a lot of companies scrambling to catch up with this change as people move more of their interactions off smartphones and laptops. Although more conservative organizations — including government agencies — will continue to support the least common denominator in interaction style (probably email and traditional websites), commercial organizations don’t have that luxury, and need to rethink sooner. He envisions that your fastest-growing competitors will have fewer employees than robots, although some interesting news out of Tesla this week may indicate that it’s premature to replace some human functions.

He spoke about how this will refine application architecture to four tiers: a client tier unique to each platform, a separate delivery tier that optimizes delivery for the platforms, an aggregation tier that integrates services and data, and a services tier that pulls data from both internal and external sources. This creates an abstraction between what a task is and how it is performed, and even whether it is automated or performed by a person. Decision as a service for both commercial and government services will become a primary delivery model, allowing decisions (and the automation enabled by them) to be easily plugged into applications; this will require more of a business-first, model-driven approach rather than having decisions built in code by developers.

His Future-Proof BPM architecture — what others are calling a digital transformation platform — brings together a variety of capabilities that can be provided by many vendors or other organizations, and fed by events. In fact, the core capabilities (automation, machine learning, decision management, workflow management) also generate events that feed back into the data flooding into these processes. BPM platforms have the ability to become the orchestrating platforms for this, which is possibly why many of the BPMS vendors are rebranding as low-code application development environments, but be aware of fundamental differences in the underlying architecture: do they support modularity and microservices, or are they just lifting and shifting to monolithic containers in the cloud?

Finishing up, he returned to the concept that intelligent agents can act autonomously in complex transactions, and this will be becoming more common over the next few years. Interestingly, an interview that I did for a European publication is being translated into German, and the translator emailed me this morning to tell me that they needed to change some of my comments on automating loan transactions since that’s not permitted in Germany. My response: not yet, but it will be. We all need to be prepared for a more automated future.

Great audience discussion at the end on how this architecture is manifesting, how to model/represent some of these automation concepts, the role of a smarter event bus, the future of the word “bot” and more. Max Young from Capital BPM took over to discuss the development of a grammar for RPA, with an invitation for the brain trust in the room to start thinking about this in more detail. RPA vendors are creating their own notation, but a vendor-agnostic standard would go a long ways towards helping business people to directly specify automation.

Since they’re pumping out the video on the same day as the presentations, check the bpmNEXT YouTube channel later for a replay of Nathaniel’s presentation.

bpmNEXT 2018: Complex Modeling with MID GmbH, Signavio and IYCON

The final session of the first day of bpmNEXT 2018 was focused on advanced modeling techniques.

Designing the Data-Driven Company, MID GmbH

Elmar Nathe of MID GmbH presented on their enterprise decision maps, which provide an aggregated visualization of strategic, tactical and operational decisions with business events. They provide a variety of modeling tools, but see decisions as key to understanding how organizations are driven by data and events. Clearly a rich decision modeling environment, including support for PMML for incorporating predictive models and other data science analysis tools, plus links to other model types such as ERDs that can show what data contributes to which decision model, and business process models. Much more of an enterprise architecture approach to model-driven design that can incorporate the work of data scientists.

Using Customer Journeys to Connect Theory with Reality, Signavio

Till Reiter and Enrico Teterra of Signavio started with a great example of an Ignite presentation, with few words, lots of graphics and a bit of humor, discussing their new notation for modeling an outside-in view of the customer journey rather than just having an undifferentiated “customer” swimlane in a BPMN diagram. The demo walked through their customer journey mapping tool, and how their collaboration hub overlays on that to allow information about each component of the journey map to be discussed amongst process modeling users. The journey map contains a lot of information about KPIs and other process metrics in a form most consumable by process owners and modelers, but also has a notebook/dashboard view for analysts to determine problems with the process and identify potential resolution actions. This includes a variety of analysis tools including process discovery, where process mining techniques are applied to determine which paths in the process model may be contributing to specific problems such as cycle time, then overlay this on the process model to assist with root cause analysis. Although their product does a good job of combining CJMs, process models and process analysis, this was more of a walkthrough of a set of pre-calculated dashboard screens rather than an actual demo — a far cry from the experimental features that Gero Decker showed off in their demo at the first bpmNEXT.

Discovering the Organizational DNA, IYCON and Knowledge Consultants

The final presentation of this section was with Jude Chagas Pereira of IYCON and Frank Kowalkowski of Knowledge Consultants presenting IYCON’s Afterspyre modeling tool for creating a catalog of complex business objects, their attributes and their linkages to create organizational DNA diagrams. Ranking these with machine learning algorithms for semantic and sentiment analysis allows identification of process improvement opportunities. They have a number of standard business analysis techniques built in, and robust analytics focused on problem solving. The demo walked through their catalog, drilling down into the “Strategy DNA” section and the “Technology Solutions” subsection to show an enumeration of the platforms currently in place together with attributes such as technology risk and obsolescence, which can be used to rank technology upgrade plans. Relationships between business objects can be auto-detected based on existing data. Levels including Objectives, Key Processes, Technology Solutions, Database Technology and Datacenter, and their interrelationships, are mapped into a DNA diagram and an alluvial diagram, starting at any point in the catalog and drilling down a specific number of levels as selected by the modeling analyst. These diagrams can then be refined further based on factors such as scaling the individual markers based on actual performance. They showed sentiment analysis for a hotel ranking on a review site, which included extracting specific phrases that related to certain sentiments. They also demonstrated a two-model comparison, which compared the models for two different companies to determine the overlapping and unique processes: a good indicator of the level of difficulty of a merger/acquisition (or even a divestiture). They finished up with affinity modeling, such as the type used by Amazon to tell you what other books were bought by people who also bought the book that you’re looking at: easy to do in matrix form with a small data set, but computationally intensive once you get into non-trivial amounts of data. Affinity modeling is most commonly used in marketing to analyze buying habits and offer people something that they are likely to buy, even if it’s something they didn’t plan to buy at first — this sort of “would you like fries with that” technique can increase purchase value by 30-40%. Related to that is correlation modeling, which can be used as a first step for determining causation. Impressive semantic data-driven analytics tool for modeling a lot of different organizational characteristics.
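As a tiny illustration of the affinity-modeling idea (the "people who bought X also bought Y" matrix), here is a Python sketch that builds a co-occurrence matrix from a handful of baskets and reads off the strongest affinities; it is trivial at this scale, but the pairwise counts are exactly what blow up with real data volumes.

```python
# Small sketch of affinity modeling: build a co-occurrence matrix from baskets
# and recommend the items most often bought together with a given item.

from collections import Counter
from itertools import combinations

baskets = [
    {"book_a", "book_b"},
    {"book_a", "book_b", "book_c"},
    {"book_b", "book_c"},
    {"book_a", "book_c"},
]

cooccurrence = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        cooccurrence[pair] += 1

def recommend(item: str, top_n: int = 2) -> list:
    """Items most often bought together with the given item."""
    scores = Counter()
    for (a, b), count in cooccurrence.items():
        if item == a:
            scores[b] += count
        elif item == b:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("book_a"))  # ['book_b', 'book_c']
```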

That’s it for day one; if everyone else is as overloaded with information as I am, we’re all ready for tonight’s wine tasting! Check the Twitter stream for opinions and photos from other attendees.

bpmNEXT 2018: All DMN all the time, with Trisotech, Bruce Silver Associates and Red Hat

First session of the afternoon on the first day of bpmNEXT 2018, and this entire section is on DMN (Decision Model and Notation) and the requirement for decision automation based on DMN.

Decision as a Service (DaaS): The DMN Platform Revolution, Trisotech

Denis Gagne of Trisotech, who knows as much about DMN and other related standards as anyone around, started off the session with his ideas on the need for decision automation driven by requirements such as GDPR. He walked through their suite of decision-related products that can be used to create decision services to be consumed by other applications, as well as their conformance to the DMN standards. His demo showed a decision model for determining the best price to offer a rental vehicle customer, and walked through the capabilities of their platform with this model: DMN style check, import/export, execution, team collaboration, and governance through versioning. He also showed how decision models can be reused, so that elements from one model can be used in another model. Then, he showed how to take portions of the model and define them as a service using a visual wrapper, much like a subprocess wrapper visualization in BPMN, where the relationship lines that cross the service boundary become the inputs and outputs to the service. Cool. The service can then be deployed as an executable service using (in his demo) the Red Hat platform; from there he tested its execution from a generated HTML form, generated the REST API or Open API interface code, ran predefined test cases based on the DMN TCK, promoted the service from test to production, and published it to an API publisher platform such as WSO2 for public consumption. The execution environment includes debugging and audit logs, providing traceability on the decision services.

Timing the Stock Market with DMN, Bruce Silver Associates

Bruce Silver, also a huge contributor to the BPMN and DMN standards, and author of the BPMN Method & Style books and now the DMN M&S, presented an application for buying a stock at the right time based on price patterns. For investors who time the market based on pricing, the best way to do this is to look at daily min/max trends and fit them to one of several base type models. Bruce figured that this could be done with a decision table applied to a manipulated version of the data, and automated this for a range of stocks using a one-year history, processing in Excel, and decision services in the Trisotech cloud. This is a practical example of using decision services in a low-code environment by non-programmers to do something useful. His demo showed us the decision model for doing this, then the data processing (smoothing) done in Excel. However, for an application that you want to run every day, you’re probably not going to want to do the manual import/export of data, so he showed how to automate/orchestrate this with Microsoft Flow, which can still use the Excel sheet for data manipulation but automate the data import, execute the decision service, and publish the results back to the same Excel file. Good demonstration of the democratization of creating decisioning applications through easy-to-use tools such as the graphical DMN modeler, Excel and Flow, highlighting that DMN is an execution language as well as a requirements language. Bruce has also just published a new book, DMN Cookbook, co-authored with Edson Tirelli of Red Hat, on getting started with DMN business implementations using lightweight stateless decision services called via REST APIs.
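Here is a rough Python equivalent of that pipeline for anyone who wants to play with the idea without Excel or a decision service: smooth the daily prices with a moving average, then apply simple decision-table-style rules to label the trend. The window, thresholds and labels are invented, not Bruce's actual model.

```python
# Rough Python equivalent of the approach (the original used Excel for the
# smoothing and a DMN decision table in the Trisotech cloud). Thresholds and
# labels are invented for illustration.

def moving_average(prices: list, window: int = 5) -> list:
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def classify_trend(smoothed: list) -> str:
    """Decision-table-style classification of the smoothed series."""
    recent_change = (smoothed[-1] - smoothed[0]) / smoothed[0]
    if recent_change > 0.05:
        return "uptrend: candidate buy"
    elif recent_change < -0.05:
        return "downtrend: avoid"
    else:
        return "sideways: wait"

prices = [100, 101, 99, 102, 104, 106, 107, 109, 111, 114]
print(classify_trend(moving_average(prices)))  # uptrend: candidate buy
```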

Smarter Contracts with DMN, Red Hat

Edson Tirelli of Red Hat, Bruce Silver’s co-author on the above-mentioned DMN Cookbook, finished this section of DMN presentations with a combination of blockchain and DMN, where DMN is used to define the business language for calculations within a smart contract. His demo showed a smart land registry case, specifically a transaction for selling a property involving a seller, a buyer, and a settlement service created in DMN that calculates taxes and insurance, with the purchase being executed using cryptocurrency. He mentioned Vanessa Bridge’s demo from earlier today, which showed using BPMN to define smart contract flows; this adds another dimension to the same problem, and there’s likely no reason why you wouldn’t use them all together in the right situation. Edson said that he was inspired, in part, by this post on smart contracts by Paul Lachance, in which Lachance said “a visual model such as a BPMN and/or DMN diagram could be used to generate the contract source code via a process-engine”. He used Ethereum for the blockchain smart contract and the Ether cryptocurrency, Trisotech for the DMN models, and Drools for the rules execution. All in all, not such a far-fetched idea.
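As a purely hypothetical sketch of the settlement calculation that the DMN model handled in the demo, here is the same idea in Python: compute transfer tax and insurance on a sale price and return the total due. The rates are made up; in the demo this logic lived in DMN and was invoked from the Ethereum smart contract.

```python
# Hypothetical settlement calculation for a property sale: tax and insurance on
# the sale price, plus the total due. Rates are invented for illustration.

def settlement(sale_price: float, tax_rate: float = 0.02, insurance_rate: float = 0.005) -> dict:
    tax = sale_price * tax_rate
    insurance = sale_price * insurance_rate
    return {
        "sale_price": sale_price,
        "transfer_tax": tax,
        "title_insurance": insurance,
        "total_due_from_buyer": sale_price + tax + insurance,
    }

print(settlement(350_000))
# {'sale_price': 350000, 'transfer_tax': 7000.0, 'title_insurance': 1750.0,
#  'total_due_from_buyer': 358750.0}
```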

I’m still catching flak for suggesting the now-ubiquitous Ignite style for presentations here at bpmNEXT; my next lobbying effort will be around restricting the maximum number of words per slide. 🙂

Release webinar: @CamundaBPM 7.8

I listened in on the Camunda 7.8 release webinar this morning – they issue product releases every six months like clockwork – to hear about the new features and upgrades from CEO Jakob Freund and VP of engineering Daniel Meyer.

Camunda BPM stack, community versus enterprise

They’re obviously getting a broader audience for these release webinars than just their current customers and open source community members, and started with a bit about the company, the product stack and their clients. We heard about a recent case study presented at their first San Francisco community day: 24 Hour Fitness is using Camunda process and decision management for high volume real-time orchestration of their core business processes. With over 190 processes in production, executing 20 million BPMN and 18 million DMN instances per day, this is clearly an enterprise-strength application; they are using the Camunda Enterprise Edition rather than the Community Edition for the additional features and SLA-based support, but the underlying engine and much of the tooling is identical between the products.

The key new features and updates are as follows:

  • Workflow engine performance improvements. A new batch mode allows 3-4 times more process instances to be executed per minute on several of the supported databases. This is based on grouping database operations for the same database table (including both operational and audit tables), then doing a single round-trip call between the Camunda server and the database server to execute the batch of inserts, updates and deletes (see the sketch after this list).
  • Cockpit batch operations. It’s now possible to do bulk operations for suspending/activating and modifying running processes instances, and restarting completed process instances. Process instances can be selected by process definition name and by more complex search and filtering operations such as instance variable values, then a batch command issued to suspend, restart, modify or delete instances. A new feature also allows all instances that are at a specific task to be dragged to a new task directly in the process model, whereas this was only possible with single instances before; this can be used either to move the instances to a new task to correct for an error condition or changed process flow, or to restart instances that are sitting at the final end node.
  • More Cockpit features. In addition to the batch operations, Cockpit also now has faster BPMN model rendering (from 8 seconds down to 2 seconds), ability to delete process definitions, and a number of other administrative functions.
  • Spring Boot Starter. Originally created as a community extension in 2015 (with significant contributions from community members Jan Galinski and Oliver Steinhauer), Camunda adopted this project into the main code base to create an officially-supported version of the Camunda Spring Boot Starter, documented here.
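Here is a loose Python illustration of the batching idea from the first item above (not Camunda's implementation): statements destined for the same table are grouped and sent in a single batched call, with sqlite3's executemany() standing in for the grouped round trip; the table and column names are simplified.

```python
# Loose illustration of the batching idea (not Camunda's code): instead of one
# INSERT per event, group statements per target table and flush them in a
# single batched call.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE act_hi_actinst (instance_id TEXT, activity TEXT)")

# Events collected from many instances, grouped by target table before flushing.
pending_rows = {
    "act_hi_actinst": [
        ("proc-1", "Review order"),
        ("proc-2", "Review order"),
        ("proc-3", "Approve order"),
    ],
}

for table, rows in pending_rows.items():
    # One batched statement per table instead of one statement per row.
    conn.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM act_hi_actinst").fetchone()[0])  # 3
```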

The first two updates are focused squarely on improving performance and administration for high volume operations, likely driven by clients such as 24 Hour Fitness, and will serve Camunda well as they push into more core enterprise business processes. The Spring Boot integration positions them well for deploying BPM services in a microservice architecture.


Good summary of the new features in 7.8, and a great Spring Boot coding demo by Meyer, in spite of his grumbling about having to do it on Windows for the webinar. 🙂

The webinar will be available for replay soon; check their website for availability. You can also see their release blog post that links to the release notes and describes many of the things that I saw today in the webinar.

Disclaimer: Camunda has been, but is not currently, a client. They did not provide any incentive to attend and write about this webinar, and these are my own opinions. That’s always the case for what I write here, but it’s good to make it explicit every once in a while.

Financial decisions in DMN with @JanPurchase

Trisotech and their partner Lux Magi held a webinar today on the role of decision modeling and management in financial services firms. Jan Purchase of Lux Magi, co-author (with James Taylor) of Real-World Decision Modeling with DMN, gave us a look at why decision management is important for financial services. One of the key places for applying decision management is in compliance, which is all about decision-making: assessing risks, applying regulations, sharing data, and ensuring that rules are applied in a uniform manner. There are a lot of other areas where decision management can be applied, and potentially automated where there is a high volume/speed of transactions with a non-zero cost of errors. Decision management lets you make decisions explicit: it separates them from other business software to increase transparency and agility, and makes it easier for business people to understand what decisions are being applied and how they link to overall business goals. In particular, if decisions are automated with a decision management system, business people can quickly make changes to decision-making when compliance regulations change, with much smaller IT involvement than would be required to modify legacy business systems.

There is a great deal of value in modeling decisions even if they are embedded within business systems and won’t be automated using a decision management system: decision models provide a way for business people to specify how systems should behave based on business data. Luckily, there is now a standard for decision modeling: Decision Model and Notation (DMN). This notation allows a decision to be modeled as a Decision Requirements Diagram (DRD) of the sub-decisions and knowledge sources that are required to reach that decision, and the possible paths to take in order to reach the decision. Within each of the decision nodes in the DRD, a definition of the decision can be specified using a decision table or the Friendly Enough Expression Language (FEEL), which may then be linked to an automated decision management system.
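As a minimal illustration of that DRD structure in plain Python (not DMN itself), here is a top-level compliance decision that depends on two sub-decisions, each of which could be backed by a decision table or FEEL expression in a real model; the categories and thresholds are invented.

```python
# Minimal illustration of the DRD idea: a top decision that depends on two
# sub-decisions, mirroring the requirements structure of a DMN model.
# Thresholds and categories are invented for illustration.

def customer_risk_rating(country_risk: str, pep_flag: bool) -> str:
    """Sub-decision 1: customer risk."""
    if pep_flag or country_risk == "high":
        return "high"
    return "standard"

def transaction_risk_rating(amount: float) -> str:
    """Sub-decision 2: transaction risk."""
    return "high" if amount > 10_000 else "standard"

def review_required(country_risk: str, pep_flag: bool, amount: float) -> bool:
    """Top decision: combines the two sub-decisions, as a DRD would show."""
    return "high" in (customer_risk_rating(country_risk, pep_flag),
                      transaction_risk_rating(amount))

print(review_required("low", False, 2_500))   # False
print(review_required("high", False, 2_500))  # True
```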

We then saw what a decision model looks like in Trisotech’s DMN Modeler, which allows for a standard DRD to be created, then augmented with additional information such as decision makers and owners. Purchase walked us through a number of the features of DMN as well as specific features of Trisotech’s tool, including analysis of decisions relative to Bruce Silver’s Method and Style best practices, and decision animation.


If you know a bit about DMN already but want to understand some of the practical aspects of working with it in financial services, I assume that a replay of the webinar will be available at the original registration link or the Lux Magi event page.