bpmNEXT 2018: Complex Modeling with MID GmbH, Signavio and IYCON

The final session of the first day of bpmNEXT 2018 was focused on advanced modeling techniques.

Designing the Data-Driven Company, MID GmbH

Elmar Nathe of MID GmbH presented on their enterprise decision maps, which provide an aggregated visualization of strategic, tactical and operational decisions together with business events. They provide a variety of modeling tools, but see decisions as key to understanding how organizations are driven by data and events. This is clearly a rich decision modeling environment, with PMML support for incorporating predictive models and other data science outputs, plus links to other model types: ERDs that show which data contributes to which decision model, as well as business process models. Overall, it's much more of an enterprise architecture approach to model-driven design, one that can incorporate the work of data scientists.

Using Customer Journeys to Connect Theory with Reality, Signavio

Till Reiter and Enrico Teterra of Signavio started with a great example of an Ignite presentation, with few words, lots of graphics and a bit of humor, discussing their new notation for modeling an outside-in view of the customer journey rather than just having an undifferentiated “customer” swimlane in a BPMN diagram. The demo walked through their customer journey mapping tool, and how their collaboration hub overlays on it to allow information about each component of the journey map to be discussed amongst process modeling users. The journey map contains a lot of information about KPIs and other process metrics in a form most consumable by process owners and modelers, but also has a notebook/dashboard view for analysts to diagnose problems with the process and identify potential resolution actions. This includes a variety of analysis tools, including process discovery, where process mining techniques are applied to determine which paths in the process model may be contributing to specific problems such as long cycle times; these results are then overlaid on the process model to assist with root cause analysis. Although their product does a good job of combining customer journey maps, process models and process analysis, this was more of a walkthrough of a set of pre-calculated dashboard screens than an actual demo — a far cry from the experimental features that Gero Decker showed off in their demo at the first bpmNEXT.

Discovering the Organizational DNA, IYCON and Knowledge Consultants

The final presentation of this section was from Jude Chagas Pereira of IYCON and Frank Kowalkowski of Knowledge Consultants, presenting IYCON's Afterspyre modeling tool for creating a catalog of complex business objects, their attributes and their linkages to create organizational DNA diagrams. Ranking these with machine learning algorithms for semantic and sentiment analysis allows identification of process improvement opportunities. They have a number of standard business analysis techniques built in, and robust analytics focused on problem solving. The demo walked through their catalog, drilling down into the “Strategy DNA” section and the “Technology Solutions” subsection to show an enumeration of the platforms currently in place together with attributes such as technology risk and obsolescence, which can be used to rank technology upgrade plans. Relationships between business objects can be auto-detected based on existing data. Levels including Objectives, Key Processes, Technology Solutions, Database Technology and Datacenter, and their interrelationships, are mapped into a DNA diagram and an alluvial diagram, starting at any point in the catalog and drilling down a specific number of levels as selected by the modeling analyst. These diagrams can then be refined further based on factors such as scaling the individual markers based on actual performance. They showed sentiment analysis for a hotel's ranking on a review site, which included extracting specific phrases that related to certain sentiments. They also demonstrated a two-model comparison between two different companies to determine their overlapping and unique processes: a good indicator of the level of difficulty of a merger/acquisition (or even divestiture). They finished up with affinity modeling, of the type used by Amazon to tell you which books were bought by other people who also bought the book you're looking at: easy to do in matrix form with a small data set, but computationally intensive once you get into non-trivial amounts of data; a minimal sketch of the underlying co-occurrence matrix follows below. Affinity modeling is most commonly used in marketing to analyze buying habits and offer people something that they are likely to buy, even if they didn't plan to buy it at first — this sort of “would you like fries with that” technique can increase purchase value by 30-40%. Related to that is correlation modeling, which can be used as a first step toward determining causation. Altogether, an impressive semantic data-driven analytics tool for modeling a lot of different organizational characteristics.
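
To make the matrix form concrete: affinity starts from co-occurrence counts over transactions. Here's a minimal Java sketch (in no way IYCON's implementation; all names and data are illustrative) of counting which items are bought together:

```java
import java.util.*;

public class AffinitySketch {
    // Count how often each pair of items appears together in the same basket.
    // With n distinct items this is an n x n matrix: fine for small catalogs,
    // but the quadratic growth is why affinity mining gets computationally
    // expensive on non-trivial data sets.
    public static Map<String, Map<String, Integer>> coOccurrence(List<Set<String>> baskets) {
        Map<String, Map<String, Integer>> counts = new HashMap<>();
        for (Set<String> basket : baskets) {
            for (String a : basket) {
                for (String b : basket) {
                    if (!a.equals(b)) {
                        counts.computeIfAbsent(a, k -> new HashMap<>())
                              .merge(b, 1, Integer::sum);
                    }
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Set<String>> baskets = List.of(
                Set.of("book-A", "book-B"),
                Set.of("book-A", "book-B", "book-C"),
                Set.of("book-B", "book-C"));
        // People who bought book-A also bought: {book-B=2, book-C=1}
        System.out.println(coOccurrence(baskets).get("book-A"));
    }
}
```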

That’s it for day one; if everyone else is as overloaded with information as I am, we’re all ready for tonight’s wine tasting! Check the Twitter stream for opinions and photos from other attendees.

bpmNEXT 2018: All DMN all the time, with Trisotech, Bruce Silver Associates and Red Hat

First session of the afternoon on the first day of bpmNEXT 2018, and this entire section is on DMN (Decision Model and Notation) and the requirement for decision automation based on DMN.

Decision as a Service (DaaS): The DMN Platform Revolution, Trisotech

Denis Gagne of Trisotech, who knows as much about DMN and other related standards as anyone around, started off the session with his ideas on the need for decision automation driven by requirements such as GDPR. He walked through their suite of decision-related products that can be used to create decision services to be consumed by other applications, as well as their conformance to the DMN standards. His demo showed a decision model for determining the best price to offer a rental vehicle customer, and walked through the capabilities of their platform with this model: DMN style check, import/export, execution, team collaboration, and governance through versioning. He also showed how decision models can be reused, so that elements from one model can be used in another model. Then, he showed how to take portions of the model and define them as a service using a visual wrapper, much like a subprocess wrapper visualization in BPMN, where the relationship lines that cross the service boundary become the inputs and outputs to the service. Cool. The service can then be deployed as an executable service using (in his demo) the Red Hat platform; from there, he tested its execution from a generated HTML form, generated the REST API or OpenAPI interface code, ran predefined test cases based on the DMN TCK, promoted the service from test to production, and published it to an API publisher platform such as WSO2 for public consumption. The execution environment includes debugging and audit logs, providing traceability on the decision services.
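
For a sense of what consuming such a decision service looks like, here's a minimal sketch of a REST call using the JDK's HTTP client; the endpoint URL, authentication scheme and JSON contract are placeholder assumptions, since the actual shape depends on the platform and deployment:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DecisionServiceClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical inputs for a rental-price decision service.
        String payload = "{\"loyaltyTier\": \"gold\", \"rentalDays\": 5, \"vehicleClass\": \"compact\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://decisions.example.com/rental-price/v1/evaluate")) // placeholder URL
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer <token>") // most platforms require some auth scheme
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Decision outputs come back as JSON, e.g. {"offeredPrice": 42.50}
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```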

Timing the Stock Market with DMN, Bruce Silver Associates

Bruce Silver, also a huge contributor to the BPMN and DMN standards, author of the BPMN Method & Style books and now DMN Method & Style, presented an application for buying a stock at the right time based on price patterns. For investors who time the market based on pricing, the best way to do this is to look at daily min/max trends and fit them to one of several base model types. Bruce figured that this could be done with a decision table applied to a manipulated version of the data, and automated it for a range of stocks using a one-year history, processing in Excel, and decision services in the Trisotech cloud. This is a practical example of non-programmers using decision services in a low-code environment to do something useful. His demo showed us the decision model for doing this, then the data processing (smoothing) done in Excel. However, for an application that you want to run every day, you're probably not going to want to do the manual import/export of data, so he showed how to automate/orchestrate this with Microsoft Flow, which can still use the Excel sheet for data manipulation but automates the data import, executes the decision service, and publishes the results back to the same Excel file. A good demonstration of the democratization of creating decisioning applications through easy-to-use tools such as the graphical DMN modeler, Excel and Flow, highlighting that DMN is an execution language as well as a requirements language. Bruce has also just published a new book, DMN Cookbook, co-authored with Edson Tirelli of Red Hat, on getting started with DMN business implementations using lightweight stateless decision services called via REST APIs.
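
Bruce's actual smoothing and decision table live in Excel and the Trisotech cloud, but the general shape of the logic is easy to sketch: smooth the price series, then classify the trend with decision-table-style rules. A minimal Java illustration with made-up thresholds, not his actual model:

```java
import java.util.Arrays;

public class TrendClassifier {
    // Simple moving average: the kind of smoothing done in Excel.
    static double[] smooth(double[] closes, int window) {
        double[] out = new double[closes.length - window + 1];
        for (int i = 0; i < out.length; i++) {
            out[i] = Arrays.stream(closes, i, i + window).average().orElse(Double.NaN);
        }
        return out;
    }

    // Decision-table-style rules on the smoothed series (illustrative thresholds).
    static String classify(double[] smoothed) {
        double first = smoothed[0], last = smoothed[smoothed.length - 1];
        double change = (last - first) / first;
        if (change > 0.05) return "uptrend";    // rule 1
        if (change < -0.05) return "downtrend"; // rule 2
        return "sideways";                      // rule 3 (default)
    }

    public static void main(String[] args) {
        double[] closes = {100, 101, 99, 103, 105, 107, 106, 110};
        System.out.println(classify(smooth(closes, 3))); // prints "uptrend"
    }
}
```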

Smarter Contracts with DMN, Red Hat

Edson Tirelli of Red Hat, Bruce Silver's co-author on the above-mentioned DMN Cookbook, finished this section of DMN presentations with a combination of blockchain and DMN, where DMN is used to define the business language for calculations within a smart contract. His demo showed a smart land registry case, specifically a transaction for selling a property involving a seller, a buyer and a settlement service created in DMN that calculates taxes and insurance, with the purchase being executed using cryptocurrency. He mentioned Vanessa Bridge's demo from earlier today, which showed using BPMN to define smart contract flows; this adds another dimension to the same problem, and there's likely no reason why you wouldn't use them all together given the right situation. Edson said that he was inspired, in part, by this post on smart contracts by Paul Lachance, in which Lachance said “a visual model such as a BPMN and/or DMN diagram could be used to generate the contract source code via a process-engine”. He used Ethereum for the blockchain smart contract and the Ether cryptocurrency, Trisotech for the DMN models, and Drools for the rules execution. All in all, not such a far-fetched idea.
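
Since Drools handled the rules execution, the settlement calculation would be evaluated through the Drools (Kie) DMN API, roughly like the sketch below; the model namespace, name and input names are placeholders, not Edson's actual model:

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.dmn.api.core.*;

public class SettlementDecision {
    public static void main(String[] args) {
        // Load DMN models found on the classpath (e.g., settlement.dmn in resources).
        KieContainer container = KieServices.Factory.get().getKieClasspathContainer();
        DMNRuntime runtime = container.newKieSession().getKieRuntime(DMNRuntime.class);

        // Namespace and model name are placeholders; they must match the .dmn file.
        DMNModel model = runtime.getModel("http://example.com/dmn", "LandRegistrySettlement");

        DMNContext ctx = runtime.newContext();
        ctx.set("Purchase Price", 350_000); // illustrative input names
        ctx.set("Province", "Ontario");

        DMNResult result = runtime.evaluateAll(model, ctx);
        // e.g., a "Taxes and Insurance" decision computed by the model
        result.getDecisionResults().forEach(d ->
                System.out.println(d.getDecisionName() + " = " + d.getResult()));
    }
}
```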

I’m still catching flak for suggesting the now-ubiquitous Ignite style for presentations here at bpmNEXT; my next lobbying effort will be around restricting the maximum number of words per slide. 🙂

Release webinar: @CamundaBPM 7.8

I listened in on the Camunda 7.8 release webinar this morning – they issue product releases every six months like clockwork – to hear about the new features and upgrades from CEO Jakob Freund and VP of engineering Daniel Meyer.

Camunda BPM stack, community versus enterprise

They're obviously getting a broader audience for these release webinars than just their current customers and open source community members, and started with a bit about the company, the product stack and their clients. We heard about a recent case study presented at their first San Francisco community day: 24 Hour Fitness is using Camunda process and decision management for high volume real-time orchestration of their core business processes. With over 190 processes in production, executing 20 million BPMN and 18 million DMN instances per day, this is clearly an enterprise-strength application; they are using the Camunda Enterprise Edition rather than the Community Edition for the additional features and SLA-based support, but the underlying engine and much of the tooling is identical between the products.

The key new features and updates are as follows:

  • Workflow engine performance improvements. A new batch mode allows 3-4 times more process instances to be executed per minute on several of the supported databases. This is based on grouping database operations for the same database table (including both operational and audit tables), then doing a single round-trip call between the Camunda server and the database server to execute the batch of inserts, updates and deletes; see the JDBC-style sketch after this list.
  • Cockpit batch operations. It's now possible to do bulk operations for suspending/activating and modifying running process instances, and restarting completed process instances. Process instances can be selected by process definition name or by more complex search and filtering operations such as instance variable values, then a batch command issued to suspend, restart, modify or delete the selected instances. A new feature also allows all instances that are waiting at a specific task to be dragged to a new task directly in the process model, whereas this was previously possible only for single instances; this can be used either to move the instances to a new task to correct for an error condition or a changed process flow, or to restart instances that are sitting at the final end node.
  • More Cockpit features. In addition to the batch operations, Cockpit also now has faster BPMN model rendering (from 8 seconds down to 2 seconds), ability to delete process definitions, and a number of other administrative functions.
  • Spring Boot Starter. Originally created as a community extension in 2015, with significant contributions from community members Jan Galinski and Oliver Steinhauer, this project has been adopted into the main code base to create an officially supported version of the Camunda Spring Boot Starter, documented here.
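
On the batch-mode point from the first bullet: the underlying idea is the same as standard JDBC statement batching, where statements for the same table are grouped and flushed in a single round trip. A generic illustration, not Camunda's actual internals; it assumes an H2 driver on the classpath, and the table name only mimics Camunda's naming style:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "")) {
            conn.setAutoCommit(false);
            conn.createStatement().execute(
                    "CREATE TABLE act_hi_op_log (id VARCHAR(64), operation VARCHAR(64))");

            // Group inserts for the same table into one batch...
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO act_hi_op_log (id, operation) VALUES (?, ?)")) {
                for (int i = 0; i < 1000; i++) {
                    ps.setString(1, "instance-" + i);
                    ps.setString(2, "CREATE");
                    ps.addBatch();
                }
                // ...then send them to the database in a single round trip,
                // instead of 1000 separate network calls.
                ps.executeBatch();
            }
            conn.commit();
        }
    }
}
```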

The first two updates are focused squarely on improving performance and administration for high-volume operations, likely driven by clients such as 24 Hour Fitness; this will serve Camunda well as they push into more core enterprise business processes. The Spring Boot integration positions them well for deploying BPM services in a microservice architecture.
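
To give a sense of what the Spring Boot starter buys you, a minimal Camunda Spring Boot application looks roughly like this; dependency coordinates and optional annotations have shifted across starter versions, so treat it as the general shape rather than version-exact code:

```java
// Requires the camunda-bpm-spring-boot-starter-webapp dependency on the classpath;
// a BPMN file (e.g., src/main/resources/sample.bpmn) is auto-deployed on startup.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class CamundaApplication {
    public static void main(String[] args) {
        // Starts an embedded web server with the Camunda engine and web apps
        // (Cockpit, Tasklist, Admin); no separate Java application server needed.
        SpringApplication.run(CamundaApplication.class, args);
    }
}
```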

Camunda BPM 7.8

Good summary of the new features in 7.8, and a great Spring Boot coding demo by Meyer, in spite of his grumbling about having to do it on Windows for the webinar. 🙂

The webinar will be available for replay soon; check their website for availability. You can also see their release blog post that links to the release notes and describes many of the things that I saw today in the webinar.

Disclaimer: Camunda has been, but is not currently, a client. They did not provide any incentive to attend and write about this webinar, and these are my own opinions. That’s always the case for what I write here, but it’s good to make it explicit every once in a while.

Financial decisions in DMN with @JanPurchase

Trisotech and their partner Lux Magi held a webinar today on the role of decision modeling and management in financial services firms. Jan Purchase of Lux Magi, co-author (with James Taylor) of Real-World Decision Modeling with DMN, gave us a look at why decision management is important for financial services. One of the key places for applying decision management is in compliance, which is all about decision-making: assessing risks, applying regulations, sharing data, and ensuring that rules are applied in a uniform manner. There are a lot of other areas where decision management can be applied, and potentially automated, wherever there is a high volume/speed of transactions with a non-zero cost of errors. Decision management lets you make decisions explicit: it separates them from other business software to increase transparency and agility, and makes it easier for business people to understand what decisions are being applied and how they link to overall business goals. In particular, if decisions are automated with a decision management system, business people can quickly make changes to decision-making when compliance regulations change, with much less IT involvement than would be required to modify legacy business systems.

There is a great deal of value in modeling decisions even if they are embedded within business systems and won’t be automated using a decision management system: decision models provide a way for business people to specify how systems should behave based on business data. Luckily, there is now a standard for decision modeling: Decision Model and Notation (DMN). This notation allows a decision to be modeled as a Decision Requirements Diagram (DRD) of the sub-decisions and knowledge sources that are required to reach that decision, and the possible paths to take in order to reach the decision. Within each of the decision nodes in the DRD, a definition of the decision can be specified using a decision table or the Friendly Enough Expression Language (FEEL), which may then be linked to an automated decision management system.

We then saw what a decision model looks like in Trisotech’s DMN Modeler, which allows for a standard DRD to be created, then augmented with additional information such as decision makers and owners. Purchase walked us through a number of the features of DMN as well as specific features of Trisotech’s tool, including analysis of decisions relative to Bruce Silver’s Method and Style best practices, and decision animation.

Lux Magi/Trisotech DMN 2017-10

If you know a bit about DMN already but want to understand some of the practical aspects of working with it in financial services, I assume that a replay of the webinar will be available at the original registration link or the Lux Magi event page.

Camunda Community Day: @CamundaBPM technical sessions

I’m a few weeks late completing my report on the Camunda Community Day. The first part was on the community contributions and sessions, while the second half documented here is about Camunda showing new things that could be used by the community developers in the audience.

First up was Vladimirs Katusenoks, core developer on BPMN.io, with a presentation on bpmn-js: how it works, and how to extend it with custom functionality such as adding colour to BPMN diagrams, which is a permitted extension to BPMN XML. His live coding presentation showed changing the colour of a shape background, either statically in code for the element class or by adding a colour picker to an individual element's context palette; this was built on the bpmn-js core BPMN functionality, using bpmn-moddle to read/write the metamodel and diagram-js to render it. There are a number of other bpmn-js examples on Github.

Next, Felix Müller discussed KPI management, expanding on his August blog post on the topic. KPI management is based on quantitative indicators for process improvement, including cycle time and overdue time, plus definitions of the time period, unit of measure and calculation method. In Camunda, KPIs are defined in the Modeler, then monitored in Cockpit. He showed how to use the concept of element templates (which extend core definitions) to create custom fields on a collaboration object (process) or individual tasks, e.g., KPI unit (hours, days, minutes) and KPI threshold (number). In Cockpit, this appears as a new KPI Overview tab, showing a list of individual instances with target/current/average duration, plus an indicator of the overdue status of the instance and any contained tasks; there is also a decorator bubble on the top right of a task in the process model to show the number of overdue instances on the aggregate model, or overdue status as a check mark or exclamation on individual models. The Cockpit modifications were done by creating a plug-in to display KPI statistics, which queries and calculates on the fly – a potential performance problem that might be improved through pre-aggregation of statistics. He also demonstrated how to modify this basic KPI model to include an expected duration as well as a maximum duration. A good start, although I think there's a lot more that's needed here.

Thorben Lindhauer, a Camunda core BPM developer, discussed how to contribute to the Camunda open source community, both at camunda.org (engine and desktop modeler, same as the commercial product) and bpmn.io (JS tools). Possible contributions include answering questions on forums; logging error reports; documenting ideas for new functionality; and working on code. Code contributions typically start with a forum discussion about planned new functionality, then a decision is made on whether it will be core code (held to higher quality standards since it will become part of the commercial product, and will eventually be maintained by Camunda) or a community extension; this is followed by ongoing development, merge and release cycles. Camunda is very supportive of community contributions, even if they don't become part of the core product: community involvement is critical to the health of any open source project.

The last presentation of the community day was Daniel Meyer discussing the product roadmap. The next release, 7.6, will be on November 30 – they have a strict twice-yearly release cycle. This release includes updates to DMN, CMMN, BPMN, rolling updates, Cockpit features, and UI/UX in web apps; I have captured a few notes here but see the linked roadmap for a more complete and accurate description and the online documentation as it is rolled out.

  • DMN:
    • Simpler decision table editing with drop-down lists of comparison/range operators instead of having to remember FEEL or JUEL syntax
    • Ability to add list of selection values (advanced mode still exists for full flexibility)
    • Decisions with literal expressions
    • DMN engine performance 4-6x faster
    • Support for decision requirements diagrams/graphs (DRD/DRG) that can link decision tables; visualization in Modeler and Cockpit are not there yet but the structures are supported – in my experience, this is typical of Camunda, which builds and releases the engine capabilities early then follows with the visualization, allowing for a quicker start for executable diagrams
  • CMMN:
    • Modeler now completely models CMMN including technical attributes such as listeners
    • Cockpit (visualization still incomplete although we saw a brief view) will allow linking models of same or different types
    • Engine feature and functionality improvements
  • Rolling updates allow the Camunda process engine to be updated without a shutdown: guaranteed backwards compatibility of the database schema allows the database to be updated first, then updates are rolled out across engines by taking each one offline individually and letting the load balancer reroute sessions.
  • BPMN:
    • BPMN conditional event supported
    • Improved modeling including labels, collapsing/expanding subprocesses to switch between view types, and field injections in property panel.
  • Cockpit:
    • More flexible/granular human task monitoring
    • New welcome page with links to apps (Cockpit, Tasklist, Admin), user profile, and frequent links
    • Batch operations (cancel, suspend, etc.) based on batch action capability built for instance migration
    • CMMN and DMN DRD visualization

Daniel discussed some other minor improvements based on customer feedback, plus plans for 2017, including a web modeler for collaborative BPMN, CMMN and DMN modeling via a SaaS offering and a future on-premise version. They finished the day with a poll and community feedback to establish priorities for future versions.

I stayed on for the second day, which is actually a separate conference: BPMCon for Camunda's enterprise (commercial) customers. Rather, I stayed on for Neil Ward-Dutton's keynote, then ducked out for most of the rest of the day, which was in German. Neil's keynote included results from workshops that he has done with executives on digital transformation, and how BPM can be used to create bridges between the diverse parts of a digital business (internal to external, automated to people-centric), while tracking and coordinating the work that flows between the different areas.

Disclaimer: Camunda paid my travel expenses to attend both conference days. I was not compensated in any way for attending or for writing this post, and the opinions here are my own.

Camunda Community Day: community contributions

Two years ago, I attended Camunda’s open source community day, and gave the opening keynote at their enterprise user conference the following day. I really enjoyed my experience at the open source day, and jumped at the chance to attend again this year – and to visit Berlin.

The first day was the community day, where users of Camunda’s open source software version (primarily developers) talk about what they’re doing with it, plus some of the contributions that the community is making to the project and updates from Camunda on new features on the horizon. To break this up a bit – since I’m already a week after the conference and want to get something out there – I’ll cover the community sessions in this post, then the Camunda technical sessions and a bit about the enterprise conference in a later post.

The first presentation was by Oliver Hock of Videa Project Services, demonstrating robot control using a LEGO Mindstorms robot to solve a Rubik's Cube. He showed how they used BPMN to define movements and decision tables to determine the move logic, then automated the solution using Camunda BPM. Although you may never want to build a robot to solve a Rubik's Cube, there are a lot of other devices out there that, like the Mindstorms robot, are controlled via Java APIs; Hock's design showed how these Java-enabled devices can make use of higher-level modeling constructs such as BPMN and decision tables.
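
The pattern Hock described is the standard Camunda one: a BPMN service task delegates to Java code, which in turn drives the device. A minimal sketch of such a delegate; the robot API here is entirely hypothetical, standing in for something like the leJOS Mindstorms API:

```java
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

// Wired to a BPMN service task, e.g. camunda:class="com.example.RotateFaceDelegate".
public class RotateFaceDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        // Read the move that the decision table selected earlier in the process.
        String face = (String) execution.getVariable("face");
        int quarterTurns = (int) execution.getVariable("quarterTurns");

        // Hypothetical stand-in for a real device API.
        RobotArm.rotate(face, quarterTurns);
    }

    static class RobotArm {
        static void rotate(String face, int quarterTurns) {
            System.out.printf("Rotating %s face by %d quarter turns%n", face, quarterTurns);
        }
    }
}
```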

Next up was Jan Galinski of Holisticon to show the Spring Boot community code extension – an example of how the community of Camunda open source users gives back to the open source project for everyone's benefit. Spring Boot is a microservices framework allowing for fast deployment of web applications with a minimal amount of overhead; the Spring Boot starter extension to Camunda allows for using Camunda without a Java application server, essentially providing Camunda apps as microservices. The extension, consisting of about 5,000 lines of code, has been developed over two years with 10 contributors, including both community and Camunda contributors. Galinski showed a live coding demo of replacing a JBoss server with the Spring Boot starter in a Camunda application to show how this works; he has also written a post on the Camunda community site on the 1.3.0 version of Camunda BPM Spring Boot for more technical details. Although granular process apps such as this are easier from a devops perspective in terms of deployment and scalability, the challenge is that there is no single point of entry for an end user to look at a worklist (for example). We saw some methods for dealing with this, where a workload service collects information from individual process services with the help of the Camunda BPM Reactor plugin and aggregates them; a federated task list is under development to bring together tasks from multiple process servers into a single list, with a simple completion form. Galinski walked through the general architecture for this, and noted that they are working on making this an official extension. Update: Jan Galinski pointed out in the comments that it was Simon Zambrovski (also of Holisticon) who did the portion of the presentation on cloud, universal tasklist and event processing — I missed the transition and his name in my hasty note-taking.

Jarl Friis of the Danish tax authority (SKAT) presented their use of the Camunda decision engine: they are using only decision services, not the BPM capabilities, which likely makes them unusual as a Camunda customer. There are a couple of applications for them: the first is raising data quality in financial reporting to the IRS (for FATCA requirements), where they receive data from Danish financial institutions and have to process it into a specific XML format to send to the IRS. Although many of the data cleansing and transformation rules are in the XML schema definitions, some are not amenable to that format and are being defined in DMN decision tables instead. As this has rolled out, they have found that decision tables give them an easier way to respond to annual rule changes, although their business people are not yet trained to make changes to the decision tables. That has resulted in developers having to make the decision table changes and test the results, which is one of the challenges that they have had to deal with: some of the developer test frameworks replicated the original decision table logic in code, which effectively tested the decision table implementation rather than the business logic. That test framework, of course, no longer worked when the decision table was changed, and Friis' message to the audience was that organizations have to deal with challenges of ownership and responsibility for rules, as well as rules testing.
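
Friis' testing point is worth making concrete: a healthy test feeds inputs to the real decision table and asserts the expected business outcome, rather than re-encoding the rules. A sketch using the standalone Camunda DMN engine, with a hypothetical decision file, key and variables:

```java
import java.io.InputStream;
import java.util.Map;
import org.camunda.bpm.dmn.engine.DmnDecision;
import org.camunda.bpm.dmn.engine.DmnDecisionTableResult;
import org.camunda.bpm.dmn.engine.DmnEngine;
import org.camunda.bpm.dmn.engine.DmnEngineConfiguration;

public class DecisionTableTest {
    public static void main(String[] args) {
        DmnEngine engine = DmnEngineConfiguration
                .createDefaultDmnEngineConfiguration()
                .buildEngine();

        // Parse and evaluate the real table under test; never re-encode its rules.
        InputStream dmnFile = DecisionTableTest.class
                .getResourceAsStream("/fatca-classification.dmn"); // hypothetical file and key
        DmnDecision decision = engine.parseDecision("classifyAccount", dmnFile);

        DmnDecisionTableResult result = engine.evaluateDecisionTable(
                decision, Map.of("accountType", "depository", "balanceUsd", 75_000));

        // Assert the expected business outcome for this input combination;
        // the test only breaks when the expected behaviour really changes.
        String outcome = result.getSingleResult().getSingleEntry();
        if (!"reportable".equals(outcome)) {
            throw new AssertionError("expected 'reportable' but got " + outcome);
        }
    }
}
```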

Niall Deehan of Camunda gave a great presentation on modeling anti-patterns: snaking models that are often used to fit a model onto a single sheet of paper (instead, model the happy path down the centre from left to right); inappropriate use of BPMN versus CMMN (e.g., voting scenarios); inappropriate use of BPMN versus process engine or Cockpit capabilities (e.g., a service call with error exceptions for null pointer, bad response, service down); and too many listeners on tasks (which masks problems and pushes process logic into code, based on the concept that the analysts' model should not be changed). He discussed some best practices for consistency: define the symbol set to be used by your analysts and lock down the modeler to remove elements that you don't want people using; create and maintain your own best practices documentation; use model templates for commonly used activities; and provide proper training. I would love to see his presentation captured for replay: it was engaging and informative.

The last community presentation was Martin Schimak of plexiti, showing three community extensions used for automating the testing of BPMN and CMMN models. Assert checks and sets the status of tasks in order to drive a process instance through a test scenario. Process Test Coverage visualizes the process paths exercised by tests and checks the process model coverage ratio, as covered by individual test methods and entire test classes (e.g., using mockito). Assert Scenario is for writing robust test suites for process models; this was not covered in Schimak's demo due to time constraints, but you can read more about it on his blog.
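
For a flavour of how these extensions read in practice, here's a minimal camunda-bpm-assert-style JUnit test; the process key, task id and BPMN file are placeholder assumptions:

```java
import org.camunda.bpm.engine.runtime.ProcessInstance;
import org.camunda.bpm.engine.test.Deployment;
import org.camunda.bpm.engine.test.ProcessEngineRule;
import org.junit.Rule;
import org.junit.Test;

import static org.camunda.bpm.engine.test.assertions.ProcessEngineTests.*;

public class OrderProcessTest {

    @Rule
    public ProcessEngineRule processEngineRule = new ProcessEngineRule();

    @Test
    @Deployment(resources = "order-process.bpmn") // hypothetical model
    public void happyPathReachesApprovalAndCompletes() {
        ProcessInstance instance = runtimeService()
                .startProcessInstanceByKey("order-process");

        // Drive the instance through the scenario: check where it's waiting,
        // then complete the task to move it along.
        assertThat(instance).isWaitingAt("approveTask");
        complete(task());
        assertThat(instance).isEnded();
    }
}
```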

Before we started on the Camunda technical presentations, the community award was presented by Camunda as a reward for contributions on extensions: this went to Jan Galinski from Holisticon. It’s really encouraging to see the level of engagement between Camunda and their open source community: Camunda obviously realizes that the community is an important contributor to the success of the enterprise version of the software and the company altogether, and treats them as trusted partners.

Disclaimer: Camunda paid my travel expenses to attend both conference days. I was not compensated in any way for attending or for writing this post, and the opinions here are my own.

bpmNEXT 2016 demos: Oracle, OpenRules and Sapiens DECISION

This afternoon’s first demo session shifts the focus to decision management and DMN.

Decision Modeling Service – Alvin To, Oracle

Oracle Process Cloud as an alternative to their Business Rules, implementing the DMN standard and the FEEL expression language. Exposes decisions as services that can be called from a BPMN process. Create a space (container) to contain all related decision models, then create a DMN decision model in that space. Create test data records in the space, which will be deleted before final deployment. Define decisions using expressions, decision tables, if-then-else constructs and functions. Demo example was a loyalty program, where discounts and points accumulation were decided based on program tier and customer age. The decisions can be manually executed using the test data, and the rules changed and saved to immediately change the decision logic. A second demo example was an order approval decision, where an order number could be fed into the decision and an approval decision returned, including looping through all of the line items in the order and making decisions at that level as well as an overall decision based on the subdecisions. Once created, expose the decisions or subdecisions as services to be called from external systems, such as a step in a BPMN model (or presumably any other application). Good way to introduce standard DMN decision modeling into any application without having an on-premise decision management system.

Dynamic Decision Models: Activation/Deactivation of Business Rules in Real Time – Jacob Feldman, OpenRules

What-If Analyzer for decision modeling, for optimization, to show conflicts between rules, and to enable/disable rules dynamically. Interface shows glossary of decision variables, and a list of business rules with a checkbox to activate/deactivate each. Deactivating rules using the checkboxes updates the values of the decision results to find a desired solution, and can find minimum and maximum values for specified decision variables that will still yield the same decision result. The demo example was a loan approval calculation, where several rules were disabled in order to have the decision result of “approved”, then a maximum value generated for accumulated debt that would still give an “approved” result. Second example was how to build a good burger, optimizing cost for specific health and taste standards by selecting different rules and optimizing the resulting sets of decision variables. Third example was a scheduling problem, optimizing activities when building a house in order to maintain precedence and resulting in the earliest possible move-in date, working within budget and schedule constraints. Interesting analysis tool for gaining a deep understanding of how your rules/decisions interact, far beyond what can be done using decision tables, especially for goal-seeking optimization problems. All open source.

The Dirty Secret in Process and Decision Management: Integration is Difficult – Larry Goldberg, Sapiens DECISION

Data virtualization to create in-memory logical units of data related to specific business entities. Demo started with a decision model for an insurance policy renewal, with input variables included for each decision and subdecision. Acquiring the data for those input variables can require a great deal of import/export and mapping from the source systems containing that data; their InfoHub creates the data model and allows setup of the integration with external sources by connecting data sources and defining mapping and transformation between source and destination data fields. When deployed to the InfoHub server, web service interfaces are created to allow calling from any application; at runtime, InfoHub ensures that the logical unit of data required for a decision is maintained in memory to improve performance and reduce the implementation complexity of the calling application. There are various synchronization strategies to update the logical units when the source data changes — effectively, a really smart caching scheme that synchronizes only the data that is required for decisions.

bpmNEXT 2016 demo session: Signavio and Princeton Blue

Second demo round, and the last for this first day of bpmNEXT 2016.

Process Intelligence – Sven Wagner-Boysen, Signavio

Signavio allows creating a BPMN model with definitions of KPIs for the process, such as backlog size and end-to-end cycle time. The demo today was their process intelligence application, which allows a process model to be uploaded along with an activity log of historical process instance data from an operational system — either a BPMS or some other system such as an ERP or CRM system — in CSV format. Since the process model is already known (in theory), this doesn't do process mining to derive the model, but rather aggregates the instance data and creates a dashboard that shows the problem areas relative to the KPIs defined in the process model. Drilling down into a particular problem area shows some aggregate statistics as well as the individual instance data. Hovering over an instance shows its trace overlaid on the defined process model, that is, which path that instance took as it executed. There's an interesting feature to show instances that deviate from the process model, typically by skipping or repeating steps where there is no explicit path in the process model to allow that. This is similar in nature to what SAP demonstrated in the previous session, although it uses imported process log data rather than a direct connection to the history data. Given that Signavio can model DMN integrated with BPMN, future versions of this could include intelligence around decisions as well as processes; this is a first version with some limitations.
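
That deviation feature boils down to checking each observed transition in an instance trace against the sequence flows that the model allows; anything not in the model shows up as a skipped or repeated step. A minimal sketch (not Signavio's implementation):

```java
import java.util.*;

public class DeviationCheck {
    // Flag trace transitions that have no corresponding sequence flow in the model.
    static List<String> deviations(Map<String, Set<String>> flows, List<String> trace) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < trace.size() - 1; i++) {
            String from = trace.get(i), to = trace.get(i + 1);
            if (!flows.getOrDefault(from, Set.of()).contains(to)) {
                out.add(from + " -> " + to); // a skipped or repeated step
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> flows = Map.of(
                "Receive Order", Set.of("Check Credit"),
                "Check Credit", Set.of("Ship Goods"));
        // This instance skipped the credit check:
        System.out.println(deviations(flows, List.of("Receive Order", "Ship Goods")));
    }
}
```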

Leveraging Cognitive Computing and Decision Management to Deliver Actionable Customer Insight – Pramod Sachdeva, Princeton Blue

Sentiment analysis of unstructured social media data, creating a dashboard of escalations and activities integrated with internal customer data. Uses Watson for much of the analysis, IBM ODM to apply rules for escalation, and future enhancements may add IBM BPM to automatically spawn action/escalation processes. Includes a history of sentiment for the individual, tied to service requests that responded to social media activity. There are other social listening and sentiment analysis tools that have been around for a while, but they mostly just drive dashboards and visualizations; the goal here is to apply decisions about escalations, and trigger automated actions based on the results. Interesting work, but this was not a demo up to the standards of bpmNEXT: it was only static screenshots and some additional PowerPoint slides after the Ignite portion, effectively just an extended presentation.

Positioning Business Modeling panel at bpmNEXT

We had a panel of Clay Richardson of Forrester, Kramer Reeves of Sapiens and Denis Gagne of Trisotech, moderated by Bruce Silver, discussing the current state of business modeling in the face of digital transformation, where we need to consider modeling processes, cases, content, decisions, data and events in an integrated fashion rather than as separate activities. The emergence of the CMMN and DMN standards, joining BPMN, is driving the emergence of modeling platforms that not only include all three of these, but provide seamless integration between them in the modeling environment: a decision task in a BPMN or CMMN model links directly to the DMN model that represents that decision; a predefined process snippet in a CMMN model links directly to the BPMN model, and an ad hoc task in a BPMN model links directly to the CMMN model. The resulting models may be translated to (or even created in) a low-code executable environment, or may be purely for the purposes of understanding and optimizing the business.

Some of the points covered on the panel:

  • The people creating these models are often in a business architecture role if they are being created top down, although bottom-up modeling is often done by business analysts embedded within business areas. There is a large increase in interest in modeling within architecture groups.
  • One of the challenges is how to justify the time required to create these models. A potential positioning is that business models are essential to capturing knowledge and understanding the business even if they are not directly executable, and as organizations’ use of modeling matures and gains visibility with executives, it will be easier to justify without having to show an immediate tangible ROI. Executable models are easier to justify since they are an integrated part of an application development lifecycle.
  • Models may be non-executable because they model across multiple implementation systems, or are used to model activities in systems that do not have modeling capabilities, such as many ERP, CRM and other core operational systems, or are at higher levels of abstraction. These models have strategic value in understanding complexity and interrelationships.
  • Models may be initiated using a model derived from process/data mining to reduce the time required to get started.
  • Modeling vendors aren’t competing against each other, they’re competing against old methods of text-based business requirements.
  • Many models are persistent, not created just for a specific point in time and discarded after use.

A panel including two vendors and an analyst made for some lively conversation, and not a small amount of finger-pointing. 🙂

Bruce Silver Now Stylish With DMN As Well As BPMN

I thought that Bruce Silver's blog had been quiet for a while: turns out that he moved to a new, more representative domain name, and my feed reader wasn't updating from there. He's rebranding his business, including his blog, under Method & Style, mirroring the title of his popular book and training BPMN Method and Style, and now his new book and training options for DMN: DMN Method and Style: The Practitioner's Guide to Decision Modeling with Business Rules.

His blog has a ton of new content on DMN, starting with a great piece that compares the path of the DMN standard with that of BPMN, which is considerably more mature. He discusses the five key elements of DMN, then goes into each of those in detail in the next five posts: Decision Requirements Diagrams, Decision Tables, FEEL (a new expression language developed for DMN), Boxed Expressions and the Metamodel and Schema. It’s really interesting to read his analysis comparing the evolution of the two standards: there was a time when everyone thought that BPMN was just about the visual notation, but to make it really useful, the interchange format and execution semantics have to come along at some point. Still, it’s useful to get started in DMN now with DRDs and decision tables, since that at least makes the decision models explicit instead of being buried in text requirements.

Once you've brushed up on his posts covering the five key elements, you can also read about the conformance levels that vendors can choose to implement, and what didn't make it into DMN 1.1, which is the first real version of the standard.

He doesn't pull any punches in his discussion, and is not very complimentary about some aspects of the standard and how some vendors choose to implement it. Just as he is with BPMN. 🙂