In the second half of the morning, we started with James Taylor of Decision Management Solutions showing how to use decision modeling for simpler, smarter, more agile processes. He showed what a process model looks like in the absence of externalized decisions and rules: it’s a mess of gateways and branches that basically creates a decision tree in BPMN. A cleaner solution is to externalize the decisions so that they are called as a business rules activity from the process model, but the usual challenge is that the decision logic is opaque from the viewpoint of the process modeler. James demonstrated how the DecisionsFirst modeler can be used to model decisions using the Decision Model and Notation standard, then link a read-only view of that to a process model (which he created in Signavio) so that the process modeler can see the logic behind the decision as if it were a callable subprocess. He stepped through the notation within a decision called from a loan origination process, then took us into the full DecisionsFirst modeler to add another decision to the diagram. The interesting thing about decision modeling, which is exploited in the tool, is that it is based on firmer notions of reusability of data sources, decisions and other objects than we see in process models: although reusability can definitely exist in process models, the modeling tools often don’t support it well. DecisionsFirst isn’t a rules/decision engine itself: it’s a modeling environment where decisions are assembled from the rules and decisions in other environments, including external engines, spreadsheet-based decision tables, or knowledge sources describing the decision. It also allows linking to the processes from which it is invoked, objectives and organizational context; since this is a collaborative authoring environment, it can also include comments from other designers.
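To make the contrast concrete, here is a minimal Python sketch of the idea of externalizing a decision from the process flow; it is not DecisionsFirst, Signavio or DMN runtime code, and the thresholds, field names and outcomes are invented for illustration. The rules live in a decision-table-like structure, and the process calls the decision as a single step instead of encoding it as a tree of gateways.

```python
# Hypothetical sketch: externalizing a loan-origination decision from process flow.
# Thresholds, field names and the Applicant structure are invented for illustration;
# this is not DecisionsFirst, Signavio or DMN runtime code.

from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int
    debt_to_income: float  # ratio, e.g. 0.35

# Decision logic kept outside the process model, roughly like a DMN decision table:
# each row is (condition, outcome), evaluated top to bottom ("first hit" style).
LOAN_DECISION_TABLE = [
    (lambda a: a.credit_score >= 700 and a.debt_to_income < 0.35, "approve"),
    (lambda a: a.credit_score >= 620,                             "refer"),
    (lambda a: True,                                              "decline"),
]

def decide_loan(applicant: Applicant) -> str:
    """Externalized decision: the process calls this as a single business-rule step."""
    for condition, outcome in LOAN_DECISION_TABLE:
        if condition(applicant):
            return outcome
    return "decline"

def loan_origination_process(applicant: Applicant) -> str:
    """The process stays a simple flow; no mess of gateways encoding the rules."""
    outcome = decide_loan(applicant)  # single decision task
    if outcome == "approve":
        return "open account"
    if outcome == "refer":
        return "route to underwriter"
    return "send decline letter"

print(loan_origination_process(Applicant(credit_score=710, debt_to_income=0.3)))
```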
François Chevresson-Aubain and Aurélien Pupier of Bonitasoft were up next to show how to build flexibility into deployed processes through a few simple but powerful features. First, collaboration tasks can be added at runtime, so that a user in a pre-defined step who needs to involve other users can do so even if collaboration wasn’t built into the model at that point. Second, process model parameters can be changed by an administrator at runtime, which affects all running processes based on that model: the situation demonstrated was changing an external service connector after the service call failed, then replaying the tasks that failed on that call. Both of these features address dynamic environments where the situation at runtime may differ from that at design time, and show how to adjust both manual and automated tasks to accommodate those differences.
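As a rough sketch of the second idea only (this is generic pseudocode-style Python, not Bonitasoft’s actual API; the parameter names and endpoints are invented), a process-level parameter is repointed at runtime and the tasks that failed against the old value are replayed:

```python
# Generic illustration (not Bonitasoft's API): an administrator changes a
# process-level connector parameter at runtime, affecting all running
# instances, and the tasks that failed against the old value are replayed.

process_parameters = {"credit_service_url": "https://old-service.example.com"}
failed_tasks = []

def call_credit_service(task_id: str) -> bool:
    url = process_parameters["credit_service_url"]
    ok = not url.startswith("https://old-service")  # pretend the old endpoint is down
    if not ok:
        failed_tasks.append(task_id)
    return ok

# Tasks fail against the broken endpoint...
call_credit_service("task-1")
call_credit_service("task-2")

# ...the administrator repoints the connector for all running instances...
process_parameters["credit_service_url"] = "https://new-service.example.com"

# ...and replays only the failed tasks.
for task_id in list(failed_tasks):
    if call_credit_service(task_id):
        failed_tasks.remove(task_id)

print("remaining failures:", failed_tasks)  # expected: []
```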
We finished the morning with Robert Shapiro of Process Analytica on improving resource utilization and productivity using his Optima workbench. Optima is a tool for a serious analyst – likely someone with a statistical or data science background – to import a process model and runtime data, set optimization parameters (e.g., reduce resource idleness without unduly impacting cycle time), simulate the process, analyze the results, and determine how best to allocate resources in order to optimize relative to those parameters. Although it is a complex environment, it provides a lot of visualization of the analytics and optimization; Robert actually encourages “eyeballing” the charts and playing around with parameters to fine-tune the process, although he has a great deal more experience at that than the average user. There are a number of analytical tools that can be applied to the data, such as critical path modeling, and financial parameters for optimizing revenues and costs. It can also do quite a bit of process mining based on event log inputs in XES format, including deriving a BPMN process model and data correlation from the event logs; this type of detailed offline analysis could be applied to the data captured and visualized through an intelligent business operations dashboard for advanced process optimization.
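To give a flavor of the kind of analysis involved (this is not Optima code; real XES is XML, and the record fields and workday assumption here are purely illustrative), a simplified event log can be reduced to per-resource busy time as a rough utilization signal:

```python
# Hypothetical sketch of resource-utilization analysis from event-log data.
# Not Optima code; fields, timestamps and the available-time assumption are invented.

from collections import defaultdict
from datetime import datetime

# Each record: (case id, activity, resource, start, end) -- assumed structure.
events = [
    ("c1", "review",  "alice", datetime(2015, 4, 1, 9, 0),  datetime(2015, 4, 1, 9, 30)),
    ("c2", "review",  "alice", datetime(2015, 4, 1, 10, 0), datetime(2015, 4, 1, 10, 45)),
    ("c1", "approve", "bob",   datetime(2015, 4, 1, 9, 40), datetime(2015, 4, 1, 9, 50)),
]

WORKDAY_MINUTES = 8 * 60  # assumed available time per resource

busy = defaultdict(float)
for _case, _activity, resource, start, end in events:
    busy[resource] += (end - start).total_seconds() / 60.0

for resource, minutes in busy.items():
    utilization = minutes / WORKDAY_MINUTES
    print(f"{resource}: {minutes:.0f} busy minutes, utilization = {utilization:.0%}")
```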
We have one more short session after lunch, then best in show voting before bpmNEXT wraps up for another year.
One of the worst recommendations I have ever heard in IT is to split process models and rules into two different engines and environments. What is completely ignored is the fact that both the process engine and the rule engine require data, and they require them ABSOLUTELY IN SYNC. That means modeling the data twice, update locks, process-to-rule locks, and overall much harder maintenance. CLEAN in the sense of a separation of Boolean logic from flow diagrams is a theoretical aspect that has nothing to do with real life and with the real world of people performing processes. In most cases it is not documented anywhere which changes to which rules impact which business decisions in the process environment, and vice versa. If it is documented, the effort to do so is immense. You can’t just change a rule, because you have to test whether that decision has a positive impact in all cases.
Hi Max, I agree that separating rules and process engines and models creates a lot of problems, although if you’re going to use BPMN (or any other process flow diagram) to model processes, it’s difficult to model rules in the same diagrammatic model. Also, you could be calling the same rules from another non-process application, so sometimes it makes sense to separate out the models and engines. Given that the two are separate (as they often are), James’ demonstration of how to move easily between the two model types is interesting. You can see the video of his demo here.
So why would you be using BPMN? Other tools provide rules within the process environment, while they are still not coded into the flow model. They can still be called and used from outside the process environment, but they are tightly linked to the same data model and follow the same change management mechanism. Just because someone else might use the same rule doesn’t mean that you want it to be the SAME rule. There are actually only a few situations where the same rule is used generically. In most cases there is a complex rule set that is called and has to be run top-down, which controls which rules are applicable for which data and situation.
If a rule is defined and linked into the process then that is a lot easier. There is no sense in actually separating the engines; it is a stopgap solution to fulfill a business need with limited technology.
It is really not only about the modeling but about controlling the runtime data. With separate engines, the application passes a data set (or pointer) to the rule engine, which then determines if and what rules have to be executed on this data. That is a lot more complex than saying: ‘validate this rule at this point in the flow, on the data in the flow’. There is no need for modeling twice and passing data; writing the rule actually offers the data used in the process and ensures that only valid data are being used.
Hi, could you provide a link to the Optima workbench? Thx
Hi Greg, I’m not sure that Optima is released yet; Robert may have been showing a preliminary/beta version of it. You can contact him via the Process Analytica link above to ask for more information.
Greg. You can reach me at: [email protected] or [email protected]. I am happy to set up a demo of Optima and discuss pricing. Regards, Robert Shapiro.
@Max – I think you are mistaking rules for decisions. It’s a common error and one that gets people into trouble – partly because rules get everywhere and partly because real decisions can involve so many rules.
All process environments should support rules, as processes contain rules – no question. And when these rules are tightly coupled with the process – managed and versioned with it – they should be managed as part of the process. Separating these rules into a separate engine makes no sense, and they should not be modeled in a separate decision model either.
But when I have a REAL business decision I am solving a different problem. Now I have a decision that involves hundreds or thousands of rules that might be invoked outside of a process context (or at least in several process contexts). And the way I make this decision will change when my process does not (the pricing decision vs. an order-to-cash process). Well, now managing that decision outside the process makes perfect sense.
In addition well designed decision services don’t update the data – that’s handled by the process – they just make decisions: they decide what SHOULD be done and tell the process so the process can DO it. No confusion, no locking problems, no state management. The process assembles the data, it asks for a decision, and then it acts on it.
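A minimal sketch of that pattern, with purely illustrative names, fields and thresholds (not taken from any of the products discussed): the decision service is a pure function that returns what should be done, and the process is the only place where state is changed.

```python
# Illustrative sketch of a stateless decision service called from a process.
# All names and thresholds are invented; no product API is implied.

from typing import Mapping

def pricing_decision(order: Mapping) -> dict:
    """Stateless decision: no database writes, no locks, no side effects."""
    discount = 0.10 if order["customer_tier"] == "gold" else 0.0
    return {"discount": discount, "approval_needed": order["amount"] > 10_000}

def order_process(order: dict) -> dict:
    # The process assembles the data, asks for a decision, then DOES the work.
    decision = pricing_decision(order)
    order["price"] = order["amount"] * (1 - decision["discount"])
    order["status"] = "pending approval" if decision["approval_needed"] else "confirmed"
    return order  # state changes happen here, in the process, not in the decision

print(order_process({"customer_tier": "gold", "amount": 12_000}))
```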
Talking as though all decisions involve only a couple of rules has “nothing to do with real life” and is not helpful. Try out an example with a REAL decision, one that is based on multiple pieces of legislation and complex risk analysis, one with hundreds of rules. If I followed your advice for that kind of decision then my simple process would become nightmarishly complex.
The complexity of decision-making in the real-world is PRECISELY why decision modeling (and business rules management systems) work.
Thanks for the reply. I am not mixing up anything.
If one tries to code rules with gateways and branches, then yes, a rule engine is most likely the simpler choice for coding decision logic. If, however, the process engine has an embedded rule engine, then this argument is no longer valid. Decisions can be just as complex in their number of rules and can be managed independently of any of the processes. They can be simpler, because the process engine provides the data and transactional context. Rules and processes use the same data definition and the same data set. Natural language capability allows non-technical people to write rules.
Decisions can be a single rule or many. Decisions always require a context. If you encode decisions into standalone rule sets that have to be valid in every situation that they are being called in then they become by default very complex. So it is the principle of segregating them that creates the complexity of controlling the decision context. If decisions (made up of n rules) are connected to a process then the context is provided and the decision can often be handled with a single rule.
Clearly there can be decisions that are complex by their nature, and they might require a fairly powerful rule engine. But I see more of a problem in using the RIGHT data at decision time: they have to be the same data as the ones used by the process, and they might even require a transactional lock.
Decisions never happen standalone but always in a process context. Segregating responsibility for decisions and processes into different tools, coded by different people with different responsibilities, creates a management overhead that no tool can reduce. An integrated process and rules engine allows logical separation of processes and decisions but keeps the business responsibility in one hand. BREs have to be coded, tested and validated by experts and can’t be handled by business people. That alone creates a rigidity for business processes that should be avoided, but it does go hand in hand with the rigidity of BPMN flow diagrams.
There are two schools of thought. The old one believes that a business can be encoded into processes and decisions; the more modern one knows that this kills the ability of a business to serve customers and to innovate.
@Max My apologies, I thought you had watched the demonstration I gave and therefore understood my position better than appears to be the case. I don’t care where the rules execute – my focus is on how to specify them and how to manage them. I don’t believe specifying a long list of rules – in a (mythical) natural language or other format – or BPMN are good notations for specifying decisions. That’s why I showed how a logic model of decision making ties a process model to a rules-based implementation.
Decisions do always require a context but it does not therefore follow (as you fallaciously argue) that one must code all rules to handle all situations. It’s perfectly sensible to design decisions (and therefore rules) that work for a defined context and only use them in those contexts. No process has to handle every situation – we define different processes with shared sub-processes or activities – and this is how decisions and decision modeling work too. Complex decisions are complex for valid business reasons, not simply because they have been “segregated” and to pretend otherwise is to ignore reality.
Of course, if you have designed a decision and implemented it, it can be invoked with a single rule in the process – that’s the point. The point is to remove the decision-making complexity from the process and replace it with a model. Two models that can be validated against each other, each focused on the particular concerns of its own part of the problem.
My experience is that BRMSs can be and are managed by business owners when the decision-making being implemented is the focus, not a mindless list of rules. In the same way, business people can only realistically manage reasonably well-designed processes: design them poorly and both processes and decisions require IT to manage. Yet processes and decisions are not the same: they change at different paces, they are owned by different business users, and they have differing governance and management issues. When there is not, in fact, “one hand” making changes, stuffing everything into the same software architecture is a poor solution.
We clearly disagree so I will leave you with one closing statement: There are indeed two schools of thought involved here. A modern one where a separation of concerns and models are a better way to describe complex systems that consist of multiple moving parts and an old-school mentality that one language should suffice for everything.
Yes, we disagree, and that’s where progress comes from. No problem at all.
One can’t design a model and not care how it can be practically executed. You might have no data to use your model on. This is how IT disasters are created.
What BPMN and BRMS produce are overly complicated architectures that have no resemblance to real-world complexity and its adaptive dynamics. Complex systems such as a business and the economy can’t be described and most certainly can’t be controlled. IT straitjackets are the consequence.
A single system makes sense when it merges design time and runtime to provide top-down and bottom-up transparency and empowers the people running the business. Process and decision automation is for those who still think that cost cutting produces a more competitive business. But it is easy … Time will tell who is right. Thanks for the discussion. Thanks Sandy for the forum.