Camunda Community Day: @CamundaBPM technical sessions

I’m a few weeks late completing my report on the Camunda Community Day. The first part covered the community contributions and sessions, while this second part is about Camunda showing new things that could be used by the community developers in the audience.

First up was Vladimirs Katusenoks, core developer on BPMN.io, with a presentation on bpmn-js: how it works, and how to extend it with custom functionality such as adding colour to BPMN diagrams, which is a permitted extension to BPMN XML. His live coding presentation showed changing the colour of a shape background, either statically in code for the element class or by adding a colour picker to an individual element’s context palette; this was based on the bpmn-js core BPMN functionality, using bpmn-moddle to read/write the metamodel and diagram-js to render it. There are a number of other bpmn-js examples on GitHub.
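As a concrete illustration (my own sketch, not Vladimirs’ exact code): bpmn-js persists colours into the diagram interchange section of the BPMN XML, and assuming the bpmn.io “biocolor” extension namespace that bpmn-moddle writes, a coloured shape looks roughly like this:

    <!-- Sketch of a coloured shape in BPMN DI, assuming the bpmn.io
         "biocolor" extension namespace written via bpmn-moddle -->
    <bpmndi:BPMNDiagram
        xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
        xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
        xmlns:bioc="http://bpmn.io/schema/bpmn/biocolor/1.0">
      <bpmndi:BPMNPlane bpmnElement="Process_1">
        <bpmndi:BPMNShape bpmnElement="Task_1"
                          bioc:fill="#66ccff" bioc:stroke="#0066aa">
          <dc:Bounds x="100" y="80" width="100" height="80"/>
        </bpmndi:BPMNShape>
      </bpmndi:BPMNPlane>
    </bpmndi:BPMNDiagram>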

Next, Felix Müller discussed KPI management, expanding on his August blog post on the topic. KPI management is based on quantitative indicators for process cycle-time improvement, including cycle time and overdue time, plus definitions of the time period, unit of measure and calculation method. In Camunda, KPIs are defined in the Modeler, then monitored in Cockpit. He showed how to use the concept of element templates (which extend core definitions) to create custom fields on the collaboration object (process) or individual tasks, e.g., KPI unit (hours, days, minutes) and KPI threshold (number). In Cockpit, this appears as a new tab for KPI Overview, showing a list of individual instances with target/current/average duration, plus an indicator of overdue status of the instance and any contained tasks; there is also a decorator bubble on the top right of a task in the process model to show the number of overdue instances on the aggregate model, or overdue status as a check mark or exclamation mark on individual models. The Cockpit modifications were done by creating a plug-in to display KPI statistics, which queries and calculates on the fly – a potential performance problem that might be improved through pre-aggregation of statistics. He also demonstrated how to modify this basic KPI model to include an expected duration as well as a maximum duration. A good start, although I think there’s a lot more that’s needed here.
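For context, element templates are plain JSON files picked up by the Modeler; a hedged sketch of what a KPI template along these lines might look like (the property names here are my own invention, not Felix’s):

    [
      {
        "name": "KPI User Task",
        "id": "com.example.KpiUserTask",
        "appliesTo": [ "bpmn:UserTask" ],
        "properties": [
          {
            "label": "KPI unit",
            "type": "Dropdown",
            "choices": [
              { "name": "minutes", "value": "minutes" },
              { "name": "hours", "value": "hours" },
              { "name": "days", "value": "days" }
            ],
            "binding": { "type": "camunda:property", "name": "kpiUnit" }
          },
          {
            "label": "KPI threshold",
            "type": "String",
            "binding": { "type": "camunda:property", "name": "kpiThreshold" }
          }
        ]
      }
    ]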

Thorsten Lindhauer, a Camunda core BPM developer, discussed how to contribute to the Camunda open source community, both at camunda.org (engine and desktop modeler, the same as the commercial product) and bpmn.io (JS tools). Possible contributions include answering questions on the forums; logging error reports; documenting ideas for new functionality; and working on code. Code contributions typically start with a forum discussion about the planned new functionality, then a decision on whether it will be core code (held to higher quality standards, since it will become part of the commercial product and will eventually be maintained by Camunda) or a community extension; this is followed by ongoing development, merge and release cycles. Camunda is very supportive of community contributions, even if they don’t become part of the core product: community involvement is critical to the health of any open source project.

The last presentation of the community day was Daniel Meyer discussing the product roadmap. The next release, 7.6, will be on November 30 – they have a strict twice-yearly release cycle. This release includes updates to DMN, CMMN, BPMN, rolling updates, Cockpit features, and UI/UX in the web apps; I have captured a few notes here, but see the linked roadmap for a more complete and accurate description, and the online documentation as it is rolled out.

  • DMN:
    • Simpler decision table editing, with drop-down lists of comparison/range operators instead of having to remember FEEL or JUEL syntax
    • Ability to add list of selection values (advanced mode still exists for full flexibility)
    • Decisions with literal expressions
    • DMN engine performance 4-6x faster
    • Support for decision requirements diagrams/graphs (DRD/DRG) that can link decision tables; visualization in Modeler and Cockpit is not there yet, but the structures are supported – in my experience, this is typical of Camunda, which builds and releases the engine capabilities early and follows with the visualization, allowing for a quicker start for executable diagrams
  • CMMN:
    • Modeler now completely models CMMN including technical attributes such as listeners
    • Cockpit (visualization still incomplete although we saw a brief view) will allow linking models of same or different types
    • Engine feature and functionality improvements
  • Rolling updates allow the Camunda process engine to be updated without a shutdown: backwards compatibility of the database schema is guaranteed so that the database can be updated first, then the engine updates are rolled out by taking each engine offline individually and letting the load balancer reroute sessions.
  • BPMN:
    • BPMN conditional event supported
    • Improved modeling, including labels, collapsing/expanding subprocesses to switch between view types, and field injections in the properties panel.
  • Cockpit:
    • More flexible/granular human task monitoring
    • New welcome page with links to apps (Cockpit, Tasklist, Admin), user profile, and frequent links
    • Batch operations (cancel, suspend, etc.) based on batch action capability built for instance migration
    • CMMN and DMN DRD visualization

Daniel discussed some other minor improvements based on customer feedback, plus plans for 2017, including a web modeler for collaborative BPMN, CMMN and DMN modeling via a SaaS offering and a future on-premise version. They finished the day with a poll and community feedback to establish priorities for future versions.

I stayed on for the second day, which is actually a separate conference: BPMCon for Camunda’s enterprise (commercial) customers. Rather, I stayed on for Neil Ward-Dutton’s keynote, then ducked out for most of the rest of the day, which was in German. Neil’s keynote included results from workshops that he has done with executives on digital transformation, and how BPM can be used to create the bridges between the diverse parts of a digital business (internal to external, automated to people-centric), while tracking and coordinating the work that flows between the different areas.

Disclaimer: Camunda paid my travel expenses to attend both conference days. I was not compensated in any way for attending or for writing this post, and the opinions here are my own.

Camunda Community Day: community contributions

Two years ago, I attended Camunda’s open source community day, and gave the opening keynote at their enterprise user conference the following day. I really enjoyed my experience at the open source day, and jumped at the chance to attend again this year – and to visit Berlin.

The first day was the community day, where users of Camunda’s open source software version (primarily developers) talk about what they’re doing with it, plus some of the contributions that the community is making to the project and updates from Camunda on new features on the horizon. To break this up a bit – since I’m already a week after the conference and want to get something out there – I’ll cover the community sessions in this post, then the Camunda technical sessions and a bit about the enterprise conference in a later post.

The first presentation was by Oliver Hock of Videa Project Services, demonstrating robot control using a LEGO Mindstorms robot to solve a Rubik’s Cube. He showed how they used BPMN to define movements and decision tables to determine the move logic, then automated the solution using Camunda BPM. Although you may never want to build a robot to solve a Rubik’s Cube, there are a lot of other devices out there that, like the Mindstorms robot, are controlled via Java APIs; Hock’s design showed how these Java-enabled devices can make use of higher-level modeling constructs such as BPMN and decision tables.

Next up was Jan Galinski of Holisticon to show the Spring Boot community code extension – an example of how the community of Camunda open source users gives back to the open source project for everyone’s benefit. Spring Boot is a microservices framework allowing for fast deployment of web applications with a minimal amount of overhead; the Spring Boot starter extension to Camunda allows for using Camunda without a Java application server, essentially providing Camunda apps as microservices. The extension, consisting of about 5,000 lines of code, has been developed over two years with 10 contributors, including both community and Camunda contributors. Galinski showed a live coding demo of replacing a JBoss server with the Spring Boot starter in a Camunda application to show how this works; he has also written a post on the Camunda community site on the 1.3.0 version of Camunda BPM Spring Boot for more technical details. Although granular process apps such as this are easier from a devops perspective in terms of deployment and scalability, the challenge is that there is no single point of entry for an end user to look at a worklist (for example). We saw some methods for dealing with this, where a workload service collects information from individual process services with the help of the Camunda BPM Reactor plugin and aggregates them; a federated task list is under development to bring together tasks from multiple process servers into a single list, with a simple completion form. Galinski walked through the general architecture for this, and noted that they are working on making this an official extension. Update: Jan Galinski pointed out in the comments that it was Simon Zambrovski (also of Holisticon) who did the portion of the presentation on cloud, universal tasklist and event processing — I missed the transition and his name in my hasty note-taking.
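To make the “Camunda without an application server” point concrete, here’s a minimal sketch (my own, based on the starter’s documented usage, not Galinski’s demo code) of a process app booted as a plain Java application:

    // Minimal sketch: assumes the camunda-bpm-spring-boot-starter(-webapp)
    // dependency is on the classpath; auto-deployment (enabled by default)
    // deploys BPMN files found on the classpath to the embedded engine.
    package com.example.workflow;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class WorkflowApplication {

        public static void main(String[] args) {
            // Starts an embedded Camunda engine plus web apps and REST API,
            // with no JBoss or other Java application server required
            SpringApplication.run(WorkflowApplication.class, args);
        }
    }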

Jarl Friis of the Danish tax authority (SKAT) presented their use of the Camunda decision engine: they are using only the decision services, not the BPM capabilities, which likely makes them unusual as a Camunda customer. There are a couple of applications for them: the first is to raise data quality in financial reporting to the IRS (for FATCA requirements), where they receive data from Danish financial institutions and have to process it into a specific XML format to send to the IRS. Although many of the data cleansing and transformation rules are in the XML schema definitions, some are not amenable to that format and are being defined in DMN decision tables instead. As this has rolled out, they have seen that decision tables give them an easier way to respond to annual rule changes, although their business people are not yet trained to make changes to the decision tables. That has resulted in developers having to make the decision table changes and test the results, which is one of the challenges that they have had to deal with: some of the developer test frameworks replicated the original decision table logic in code, which effectively tested the decision table implementation rather than the business logic. That test framework, of course, no longer worked when the decision table was changed, and Friis’ message to the audience was that organizations have to deal with the challenges of ownership and responsibility for rules, as well as rules testing.
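One way around that trap, sketched below under my own assumptions (the decision key, file name and variables are all hypothetical, not SKAT’s), is to have tests exercise the deployed decision table itself through Camunda’s standalone DMN engine rather than re-coding its logic:

    // Hedged sketch: evaluate the real DMN table in a test instead of
    // re-implementing its logic; "dataQualityCheck" is a hypothetical key.
    import java.io.InputStream;
    import java.util.HashMap;
    import java.util.Map;

    import org.camunda.bpm.dmn.engine.DmnDecision;
    import org.camunda.bpm.dmn.engine.DmnDecisionTableResult;
    import org.camunda.bpm.dmn.engine.DmnEngine;
    import org.camunda.bpm.dmn.engine.DmnEngineConfiguration;

    public class DecisionTableTest {

        public static void main(String[] args) {
            DmnEngine dmnEngine = DmnEngineConfiguration
                .createDefaultDmnEngineConfiguration()
                .buildEngine();

            // Parse the same DMN file that will be deployed to production
            InputStream dmnFile = DecisionTableTest.class
                .getResourceAsStream("/dataQualityCheck.dmn");
            DmnDecision decision =
                dmnEngine.parseDecision("dataQualityCheck", dmnFile);

            Map<String, Object> variables = new HashMap<>();
            variables.put("accountType", "depository");

            // Evaluate against the live table, so tests survive rule changes
            DmnDecisionTableResult result =
                dmnEngine.evaluateDecisionTable(decision, variables);
            Object outcome = result.getSingleResult().getSingleEntry();
            System.out.println(outcome);
        }
    }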

Niall Deehan of Camunda gave a great presentation on modeling anti-patterns: snaking models that are often used to fit models onto a single sheet of paper (instead, use a model with the happy path running down the centre from left to right); inappropriate use of BPMN versus CMMN (e.g., voting scenarios); inappropriate use of BPMN versus process engine or Cockpit capabilities (e.g., a service call with error exceptions for null pointer, bad response and service down); and too many listeners on tasks (which masks problems and pushes process logic into code, based on the concept that the analysts’ model should not be changed). He discussed some best practices for consistency: define the symbol set to be used by your analysts, and lock down the modeler to remove elements that you don’t want people using; create and maintain your own best practices documentation; use model templates for commonly used activities; and provide proper training. I would love to see his presentation captured for replay: it was engaging and informative.

The last community presentation was Martin Schimak of plexiti, showing three community extensions used for automated testing of BPMN and CMMN models. Assert checks and sets the status of tasks in order to drive a process instance through a test scenario. Process Test Coverage visualizes test process paths and checks the process model coverage ratio, as covered by individual test methods and entire test classes (e.g., using mockito). Assert Scenario is for writing robust test suites for process models; this was not covered in Schimak’s demo due to time constraints, but you can read more about it on his blog.
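To give a flavour of Assert, here’s a hedged sketch based on the extension’s published API; the process key “invoice” and task id “approveInvoice” are invented for illustration:

    // Hedged sketch of a camunda-bpm-assert test driving an instance
    // through a scenario; "invoice" and "approveInvoice" are hypothetical.
    import static org.camunda.bpm.engine.test.assertions.ProcessEngineTests.assertThat;
    import static org.camunda.bpm.engine.test.assertions.ProcessEngineTests.complete;
    import static org.camunda.bpm.engine.test.assertions.ProcessEngineTests.runtimeService;
    import static org.camunda.bpm.engine.test.assertions.ProcessEngineTests.task;

    import org.camunda.bpm.engine.runtime.ProcessInstance;
    import org.camunda.bpm.engine.test.Deployment;
    import org.camunda.bpm.engine.test.ProcessEngineRule;
    import org.junit.Rule;
    import org.junit.Test;

    public class InvoiceProcessTest {

        @Rule
        public ProcessEngineRule processEngineRule = new ProcessEngineRule();

        @Test
        @Deployment(resources = "invoice.bpmn")
        public void shouldApproveInvoice() {
            ProcessInstance pi =
                runtimeService().startProcessInstanceByKey("invoice");

            // Check where the instance is waiting, then push it along
            assertThat(pi).isWaitingAt("approveInvoice");
            complete(task());

            assertThat(pi).isEnded();
        }
    }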

Before we started on the Camunda technical presentations, the community award was presented by Camunda as a reward for contributions to community extensions: this went to Jan Galinski of Holisticon. It’s really encouraging to see the level of engagement between Camunda and their open source community: Camunda obviously realizes that the community is an important contributor to the success of the enterprise version of the software, and of the company as a whole, and treats them as trusted partners.

Disclaimer: Camunda paid my travel expenses to attend both conference days. I was not compensated in any way for attending or for writing this post, and the opinions here are my own.

bpmNEXT 2016 demos: Oracle, OpenRules and Sapiens DECISION

This afternoon’s first demo session shifts the focus to decision management and DMN.

Decision Modeling Service – Alvin To, Oracle

Oracle Process Cloud as an alternative to their Business Rules, implementing the DMN standard and the FEEL expression language. Exposes decisions as services that can be called from a BPMN process. Create a space (container) to contain all related decision models, then create a DMN decision model in that space. Create test data records in the space, which will be deleted before final deployment. Define decisions using expressions, decision tables, if-then-else constructs and functions. Demo example was a loyalty program, where discounts and points accumulation were decided based on program tier and customer age. The decisions can be manually executed using the test data, and the rules changed and saved to immediately change the decision logic. A second demo example was an order approval decision, where an order number could be fed into the decision and an approval decision returned, including looping through all of the line items in the order and making decisions at that level as well as an overall decision based on the subdecisions. Once created, expose the decisions or subdecisions as services to be called from external systems, such as a step in a BPMN model (or presumably any other application). Good way to introduce standard DMN decision modeling into any application without having an on-premise decision management system.
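For reference, a decision table like the loyalty one demoed would serialize in standard DMN 1.1 XML roughly as follows (my sketch of the generic standard format; Oracle’s internal representation may differ, and the names and values are invented):

    <!-- Generic DMN 1.1 sketch of a loyalty-discount decision table;
         empty input entries match any value -->
    <definitions xmlns="http://www.omg.org/spec/DMN/20151101/dmn.xsd"
                 id="loyalty" name="Loyalty Program"
                 namespace="http://example.com/dmn">
      <decision id="discount" name="Loyalty Discount">
        <decisionTable hitPolicy="FIRST">
          <input label="Program Tier">
            <inputExpression typeRef="string"><text>tier</text></inputExpression>
          </input>
          <input label="Customer Age">
            <inputExpression typeRef="integer"><text>age</text></inputExpression>
          </input>
          <output label="Discount" name="discount" typeRef="double"/>
          <rule>
            <inputEntry><text>"GOLD"</text></inputEntry>
            <inputEntry><text>&gt;= 65</text></inputEntry>
            <outputEntry><text>0.15</text></outputEntry>
          </rule>
          <rule>
            <inputEntry><text></text></inputEntry>
            <inputEntry><text></text></inputEntry>
            <outputEntry><text>0.05</text></outputEntry>
          </rule>
        </decisionTable>
      </decision>
    </definitions>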

Dynamic Decision Models: Activation/Deactivation of Business Rules in Real Time – Jacob Feldman, OpenRules

What-If Analyzer for decision modeling, for optimization, to show conflicts between rules, and to enable/disable rules dynamically. Interface shows glossary of decision variables, and a list of business rules with a checkbox to activate/deactivate each. Deactivating rules using the checkboxes updates the values of the decision results to find a desired solution, and can find minimum and maximum values for specified decision variables that will still yield the same decision result. The demo example was a loan approval calculation, where several rules were disabled in order to have the decision result of “approved”, then a maximum value generated for accumulated debt that would still give an “approved” result. Second example was how to build a good burger, optimizing cost for specific health and taste standards by selecting different rules and optimizing the resulting sets of decision variables. Third example was a scheduling problem, optimizing activities when building a house in order to maintain precedence and resulting in the earliest possible move-in date, working within budget and schedule constraints. Interesting analysis tool for gaining a deep understanding of how your rules/decisions interact, far beyond what can be done using decision tables, especially for goal-seeking optimization problems. All open source.

The Dirty Secret in Process and Decision Management: Integration is Difficult – Larry Goldberg, Sapiens DECISION

Data virtualization to create in-memory logical units of data related to specific business entities. Demo started with a decision model for an insurance policy renewal, with input variables included for each decision and subdecision. Acquiring the data for those input variables can require a great deal of import/export and mapping from source systems containing that data; their InfoHub creates the data model and allows setup of the integration with external sources by connecting data sources and defining mapping and transformation between source and destination data fields. When deployed to the InfoHub server, web service interfaces are created to allow calling from any application; at runtime, InfoHub ensures that the logical unit of data required for a decision is maintained in memory to improve performance and reduce implementation complexity of the calling application. There are various synchronization strategies to update their logical units when the source data changes — effectively, a really smart caching scheme that synchronizes only the data that is required for decisions.

bpmNEXT 2016 demo session: Signavio and Princeton Blue

Second demo round, and the last for this first day of bpmNEXT 2016.

Process Intelligence – Sven Wagner-Boysen, Signavio

Signavio allows creating a BPMN model with definitions of KPIs for the process, such as backlog size and end-to-end cycle time. The demo today was their process intelligence application, which allows a process model to be uploaded as well as an activity log of historical process instance data from an operational system — either a BPMS or some other system such as an ERP or CRM system — in CSV format. Since the process model is already known (in theory), this doesn’t do process mining to derive the model, but rather aggregates the instance data and creates a dashboard that shows the problem areas relative to the KPIs defined in the process model. Drilling down into a particular problem area shows some aggregate statistics as well as the individual instance data. Hovering over an instance shows the trace overlaid on the defined process model, that is, the path that instance took as it executed. There’s an interesting feature to show instances that deviate from the process model, typically by skipping or repeating steps where there is no explicit path in the process model to allow that. This is similar in nature to what SAP demonstrated in the previous session, although it uses imported process log data rather than a direct connection to the history data. Given that Signavio can model DMN integrated with BPMN, future versions of this could include intelligence around decisions as well as processes; this is a first version with some limitations.
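For illustration, an activity log of this kind is typically one row per task execution, keyed by a case/instance id so that traces can be reassembled and aggregated; a hedged sketch with invented column names and values:

    caseId,activity,startTime,endTime,resource
    4711,Check Invoice,2016-03-01T09:14:00,2016-03-01T09:31:00,clerk-04
    4711,Approve Invoice,2016-03-01T10:02:00,2016-03-01T10:05:00,manager-02
    4712,Check Invoice,2016-03-01T09:20:00,2016-03-02T16:45:00,clerk-07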

Leveraging Cognitive Computing and Decision Management to Deliver Actionable Customer Insight – Pramod Sachdeva, Princeton Blue

Sentiment analysis of unstructured social media data, creating a dashboard of escalations and activities integrated with internal customer data. It uses Watson for much of the analysis and IBM ODM to apply rules for escalation; future enhancements may add IBM BPM to automatically spawn action/escalation processes. It includes a history of sentiment for the individual, tied to service requests that responded to social media activity. There are other social listening and sentiment analysis tools that have been around for a while, but they mostly just drive dashboards and visualizations; the goal here is to apply decisions about escalations, and trigger automated actions based on the results. Interesting work, but this was not a demo up to the standards of bpmNEXT: it was only static screenshots and some additional PowerPoint slides after the Ignite portion, effectively just an extended presentation.

Positioning Business Modeling panel at bpmNEXT

We had a panel of Clay Richardson of Forrester, Kramer Reeves of Sapiens and Denis Gagne of Trisotech, moderated by Bruce Silver, discussing the current state of business modeling in the face of digital transformation, where we need to consider modeling processes, cases, content, decisions, data and events in an integrated fashion rather than as separate activities. The emergence of the CMMN and DMN standards, joining BPMN, is driving the emergence of modeling platforms that not only include all three of these, but provide seamless integration between them in the modeling environment: a decision task in a BPMN or CMMN model links directly to the DMN model that represents that decision; a predefined process snippet in a CMMN model links directly to the BPMN model, and an ad hoc task in a BPMN model links directly to the CMMN model. The resulting models may be translated to (or even created in) a low-code executable environment, or may be purely for the purposes of understanding and optimizing the business.

Some of the points covered on the panel:

  • The people creating these models are often in a business architecture role if they are being created top down, although bottom-up modeling is often done by business analysts embedded within business areas. There is a large increase in interest in modeling within architecture groups.
  • One of the challenges is how to justify the time required to create these models. A potential positioning is that business models are essential to capturing knowledge and understanding the business even if they are not directly executable, and as organizations’ use of modeling matures and gains visibility with executives, it will be easier to justify without having to show an immediate tangible ROI. Executable models are easier to justify since they are an integrated part of an application development lifecycle.
  • Models may be non-executable because they model across multiple implementation systems, or are used to model activities in systems that do not have modeling capabilities, such as many ERP, CRM and other core operational systems, or are at higher levels of abstraction. These models have strategic value in understanding complexity and interrelationships.
  • Models may be initiated using a model derived from process/data mining to reduce the time required to get started.
  • Modeling vendors aren’t competing against each other; they’re competing against old methods of text-based business requirements.
  • Many models are persistent, not created just for a specific point in time and discarded after use.

A panel including two vendors and an analyst made for some lively conversation, and not a small amount of finger-pointing. 🙂

Bruce Silver Now Stylish With DMN As Well As BPMN

I thought that Bruce Silver’s blog had been quiet for a while: turns out that he moved to a new, more representative domain name, and my feed reader wasn’t updating from there. He’s rebranding his business, including his blog, under Method & Style, mirroring the title of his popular book and training BPMN Method and Style, and now his new book and training options for DMN: DMN Method and Style: The Practitioner’s Guide to Decision Modeling with Business Rules.

His blog has a ton of new content on DMN, starting with a great piece that compares the path of the DMN standard with that of BPMN, which is considerably more mature. He discusses the five key elements of DMN, then goes into each of those in detail in the next five posts: Decision Requirements Diagrams, Decision Tables, FEEL (a new expression language developed for DMN), Boxed Expressions and the Metamodel and Schema. It’s really interesting to read his analysis comparing the evolution of the two standards: there was a time when everyone thought that BPMN was just about the visual notation, but to make it really useful, the interchange format and execution semantics have to come along at some point. Still, it’s useful to get started in DMN now with DRDs and decision tables, since that at least makes the decision models explicit instead of being buried in text requirements.

Once you’ve brushed up on his posts covering the five key elements, you can also read about the conformance levels that vendors can choose to implement, and what didn’t make it into DMN 1.1, which is the first real version of the standard.

He doesn’t pull any punches in his discussion, and is not very complimentary about some aspects of the standard and how some vendors choose to implement it. Just as he is with BPMN. 🙂

Day 2 Keynote at BPMCM15

Second day at the BPM and Case Management summit in DC, and our morning keynote started with Jim Sinur — former Gartner BPM analyst — discussing opportunities in BPM and case management. He pointed out the proven benefits of process and case management, in terms of improving revenue, costs, time to market, innovation and visibility, while paving a path to digital transformation. However, these tried-and-true ROI measures aren’t enough on their own these days: we also need to consider customer loyalty, IoT, disruptive companies and business models, and in general, maintaining competitive differentiation in whatever way necessary to thrive in the emerging marketplace. In order to accommodate this, as well as attract good workers, it’s necessary to break the specialist mindset and allow people to become knowledge workers. I gave a workshop last week at the IRM BPM conference on the future of work, and I agree that this is a key part of it: more of the routine work is being automated, leaving the knowledge work for the people in the process; this requires a work environment that allows people to do the right thing at the right time to achieve a goal, not just work at a pre-defined task in a pre-defined way. Sinur cited a number of examples of processes that are leveraging emerging technologies, including knowledge workers’ workbenches that incorporate smart automated agents and predictive analytics, and IoT applications in healthcare and farming. The idea is to create goal-driven and proactive “swarming” processes that figure out on their own how to accomplish a goal through both human and automated intelligence, then assemble the resources to do it. Instead of pre-defining processes, you provide goals, constraints, analytics and contexts; the agents — including people, services, bots and sensors — create each process instance on the fly to best meet the situation. Although his case studies included a number of other technologies, he finished with a comment on how BPM and case management can be used to coordinate and orchestrate these processes as we move to a new world of digital transformation of the customer experience.

Next up was Tom Debevoise, now with Signavio to help promote their recently-released DMN modeler; we had a sneak peek of the DMN modeler at bpmNEXT. He talked about three levels of decisions — strategic (e.g., should we change our business model), tactical (e.g., which customers to target) and operational (e.g., which discount to apply to this transaction) — and how these tend to be embedded within process models and business application logic, rather than externalized into decision models where they can be explicitly managed. Most organizations manage their decisions, both human and automated, very poorly, resulting in inconsistent or just plain wrong decisions being made. In other words, our business decisions are at the same point now as business processes were a decade or more ago, before BPM systems became widespread, and the path to improving this is to consider decision management as a discipline as well as the systems to model and automate decisions. We now have a decision modeling standard, DMN 1.0; this is expected to drive the adoption of decision modeling in organizations in the same way that BPMN did for process modeling. He proposed a decision management lifecycle similar to a BPM lifecycle, starting with decision discovery that allows modeling using the DMN-standard elements of a decision, input data, knowledge sources, information requirements, authority requirements and knowledge requirements. He wrapped up with the linkage between process and decision models, particularly using the Signavio BPMN and DMN modelers: how decisions that are defined external to a process can be used to assign process activity participants, decide on next steps, select the process pathway, define data access control, or detect and respond to events. We saw yesterday how Trisotech’s tools combine BPMN, CMMN and DMN, and today how Signavio combines BPMN and DMN; as more process modeling vendors expand to include decision modeling, we are going to see more implementations of these modeling standards integrated.

The last speaker in the keynote was Lloyd Dugan, on how business architecture and BPM work together, in response to a paper that he wrote last year with Neal McWhorter. Although dense (I recommend checking out the paper at the link), his presentation discussed some of the issues with reconciling business architecture and BPM, such as reconciling value stream, balanced scorecard and other BA models with activities within a process model. He reviewed a number of definitions and model types, cutting a wide swath through pretty much everything even remotely related to process and architecture, and highlighting some of the failures of mapping enterprise architecture frameworks to BPMN. He finished with a spectrum from the business model perspective (what the business is doing) to the operational model perspective (how the business is doing it), and how the business architecture and BPM viewpoints differ, but can still both use BPMN as a modeling language. Pretty sure of two things from this: 1) I missed a lot of the detail, and 2) Dugan has never heard that you’re supposed to have fewer than 500 words on each PowerPoint slide.

BPMN, CMMN and DMN with @denisgagne at BPMCM15

Last session of day 1 of the BPM and Case Management Summit 2015 in DC, and Denis Gagne of Trisotech is up to talk about the three big standards: the Business Process Model and Notation (BPMN), the Case Management Model and Notation (CMMN), and the Decision Model and Notation (DMN). BPMN has been around for a few years and is well-established — pretty much every business process modeling and automation vendor uses BPMN in some form in their process modelers, and it is OMG’s most-adopted standard — but CMMN and DMN are much newer and less widespread in the market. There are a few vendors offering CMMN modelers, and even fewer offering DMN. There are two major benefits to standards such as BPMN, CMMN and DMN, in addition to the obvious benefit of providing an unambiguous format for modeling processes, cases and decisions: they can be used to create models that can be interchanged between different vendors’ products; and they provide a common and readily-transferable “language” that is learned by analysts. This interchangeability, both of models and skills, means that organizations don’t need to be quite so worried about which modeling tool they use, or the people that they hire to use it. Denis was at the Model Interchange Working Group (MIWG) OMG meeting in Berlin last week, where they showed all types of interchange for BPMN; with luck, we’ll be seeing the same activities for the other standards as they become widely adopted.

There are some grey areas about when to use BPMN versus CMMN, since both are (sort of) process-based. However, the main focus in BPMN is on activities within processes, whereas CMMN focuses on events that impact cases. He showed a chart comparing different facets of the three standards:

BPMN           CMMN                            DMN
Processes      Cases                           Decisions
Activities     Events                          Rules
Transitional   Contextual                      Applied
Data           Information                     Knowledge
Procedural     Declarative                     Functional
Token          Event Condition Action (ECA)    First Order Logic (FOL)

The interesting part (at least to me) comes when we look at the bridges between these standards: in BPMN, there is a business rule task that can call a decision in DMN; in CMMN, there is a process task that can call a process defined in BPMN. Trisotech’s version of all of these modelers (not yet in the standards, but count on Denis to get them in there) also provides for a case task type in BPMN that can call a CMMN case, and a decision task in CMMN that can call a DMN decision. There are some patterns to watch for when modeling that might indicate that you should be using another model type:

  • In BPMN, if you have a lot of gateways expressing business logic, then consider moving the gateway logic to DMN
  • In BPMN, if you have a lot of events, especially boundary events, then consider encapsulating that portion into a CMMN case
  • In BPMN, if you have a lot of ad hoc subprocesses, then consider using CMMN to allow for greater specification of the ad hoc activities
  • In CMMN, if you have a lot of task interdependencies, consider using BPMN to replace the temporal dependencies with flow diagrams

The recognition and refactoring of these patterns is pretty critical for using the right model type, and is likely a place where a more trained technical analytical eye might be able to suggest improvements to models created by a less-technical analyst who isn’t familiar with all of the model types, or with how to think about this sort of decomposition and linking.

He demonstrated integration between the three model types using the Trisotech BPMN, CMMN and DMN modelers, where a decision task in the BPMN modeler can link directly to a decision within a model in the DMN modeler, and a case task in BPMN can link directly to a case model in the CMMN modeler. Nice integration, although it remains to be seen what analyst skill level is required to model across all three types, or how to coordinate different analysts who might each be modeling in only one of the three model types, with the resulting loosely coupled models having different authors.
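Under the covers, the BPMN side of such a link is just a business rule task that references the decision’s id. As one concrete (vendor-specific) serialization, Camunda uses a camunda:decisionRef extension attribute; Trisotech uses its own extension attributes, and the ids below are invented:

    <!-- Sketch: BPMN business rule task referencing a DMN decision by id,
         using Camunda's extension attributes as one concrete example -->
    <bpmn:process id="PolicyRenewal"
        xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL"
        xmlns:camunda="http://camunda.org/schema/1.0/bpmn">
      <bpmn:businessRuleTask id="DetermineDiscount"
                             name="Determine Discount"
                             camunda:decisionRef="discount"
                             camunda:resultVariable="discount"/>
    </bpmn:process>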

Disclosure: I’m doing some internal work with Trisotech, which means that I have quite a bit of knowledge about their products, although I have not been compensated in any way for writing about them here on my blog.

bpmNEXT 2015 Day 2 Demos: Sapiens Decision, Signavio

We finished the morning demo sessions with two on the theme of decision modeling and management.

Sapiens: How to Manage Business Logic

Michael Grohs highlighted the OMG release of the Decision Model and Notation (DMN) standard, and how the decision model is really a business logic model. However, business rule management systems are typically technical solutions, and don’t do much for business users and analysts trying to model their decision logic and rules based on their policies and procedures. Decision-aware processes extract declarative knowledge from process models, greatly simplifying the process models and moving the declarative information to a model format more suitable to business logic, such as a decision table. BPMS and DMS are complementary, and can be combined to create a complete model of the business process. He provided a demo of their decision modeling and repository tooling, which starts with the definition of a community space that shares a glossary, attributes and models, and has governance workflows for decision model approval and deployment. The glossary allows for the definition of fact types, including multiple synonyms to allow different stakeholders to use their own terminology. The decision models are made up of rule families that capture the business logic, with a visual syntax that indicates the rules and conditions that make up a particular decision. This can be expanded into a full decision table style that shows the if-then-else logic using the business terms. Different instances of decisions and rule sets can be created — in his demo example, the base logic for insurance policy renewals versus that for a hurricane-prone state such as Florida — and visually compared in the graphical or tabular view, with changes highlighted or listed in detail in a report. Rule sets can be validated to highlight conflicts and missing information, then exported in a variety of formats for importing into a DMS for execution.

Signavio: Business Decision Management

Gero Decker talked about their collaborative process design and SAP upgrade tools as an introduction, but mainly addressed decision modeling and how they are embracing the DMN standard: modeling decisions, inputs and knowledge sources, then linking that to a decision activity in a BPMN model. DMN provides a graphical model form, and also allows for decision tables for detailed steps. Like Sapiens, Signavio does only decision modeling, not execution, and exports in standard formats for importing to a DMS such as Drools for execution. They are releasing the Signavio Decision Manager in a few weeks, and he gave us a preview demo of modeling and testing rules integrated with their process modeling environment. Similar to the modeling that we saw from Comindware earlier this morning, Signavio can be used to model higher-level enterprise architecture constructs such as value chains, plus full BPMN models for specific capabilities within those models; he used a BPMN model as a jumping-off point for demonstrating decision modeling by creating a business rule task. From that point, you can specify a decision table directly in situ, or choose to create a DMN model, which launches the DMN modeler with the top-level question/answer in the DMN model linked to the business rule activity from the BPMN model. The DMN model can be built out graphically, data objects defined and rules added with decision tables, and sub-decisions added as required. The DMN modeler can make use of the existing glossary in the Signavio environment for data objects and attributes. The decision tables can be validated to detect conflicts, and can export test cases in a spreadsheet format to drive manual or automated testing. They are also doing some work on detecting complex decision logic within BPMN models, with the goal of refactoring the models to externalize the decision logic into DMN models where it makes the BPMN model unnecessarily complex.