bpmNEXT 2018: Bonitasoft, Know Process

We’re in the home stretch here at bpmNEXT 2018: day 3 has only a couple of shorter demo sessions and a few related talks before we break early to head home.

When Artificial Intelligence meets Process-Based Applications, Bonitasoft

Nicolas Chabanoles and Nathalie Cotte from Bonitasoft presented on their integration of AI with process applications, specifically using predictive analytics to automate decisions and make recommendations. They use an extension of process mining to examine case data and activity times in order to predict, for example, if a specific case will finish on time; in the future, they hope to be able to accurately predict the end time for individual cases for better feedback to internal users and customers. The demo was a loan origination application built on Bonita BPM, which was fairly standard, with the process mining and machine learning coming in through how the processes are monitored. Log data is polled from the BPM system into an Elasticsearch database, then machine learning is applied to the instance data; configuration of the machine learning is based (at this point) only on the specification of an expected completion time for each instance type to build the prediction model. At that point, predictions can be made for in-flight instances as to whether each one will complete on time, or its probability of completing on time for those predicted to be late — for example, if key documents are missing, or the loan officer is not responding quickly enough to review requests. The loan officer is shown which tasks are likely to be causing the late prediction, and completing those tasks will change the prediction for that case. Priority for cases can be set dynamically based on the prediction, so that cases more likely to be late are set to higher priority in order to be worked earlier. Future plans are to include more business data and human resource data, which could be used to explicitly assign late cases to individual users. The use of process mining algorithms, rather than simpler prediction techniques, will allow suggestions on state transitions (i.e., which path to take) in addition to just setting instance priority.
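
As a rough illustration of the kind of prediction described here (historical instances labelled by whether they met an expected completion time, then a model scoring in-flight cases), a minimal sketch might look like the following; the field names and the use of scikit-learn are my assumptions, not Bonitasoft's implementation.

```python
# Hypothetical sketch: predicting on-time completion for in-flight cases.
# Field names and the scikit-learn model are illustrative assumptions,
# not Bonitasoft's actual implementation.
from sklearn.ensemble import RandomForestClassifier

def to_features(case):
    # Derive simple numeric features from a case record.
    return [
        case["elapsed_hours"],        # time since the case started
        case["open_tasks"],           # tasks not yet completed
        case["missing_documents"],    # key documents still outstanding
    ]

def train_on_time_model(completed_cases, expected_hours):
    # Label each historical case by whether it met the expected completion time.
    X = [to_features(c) for c in completed_cases]
    y = [1 if c["total_hours"] <= expected_hours else 0 for c in completed_cases]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model

def predict_on_time(model, in_flight_cases):
    # Return the probability that each running case will finish on time.
    return {
        c["case_id"]: model.predict_proba([to_features(c)])[0][1]
        for c in in_flight_cases
    }
```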

Understanding Your Models and What They Are Trying To Tell You, KnowProcess

Tim Stephenson of KnowProcess spoke about models and standards, particularly applied to their main use case of marketing automation and customer onboarding. Their ModelMinder application ingests BPMN, CMMN and DMN models, and can be used to search the models for activities, resources and other model components, as well as identify and understand extensions such as calling a REST service from a BPMN service task. The demo showed a KnowProcess repository initially through the search interface; searching for “loan” or “send memo” returned links to models with those terms, and the model (process, case or decision) can be displayed directly in their viewer with the location of the search term highlighted. The repository can be stored as files, or a process engine can be indexed directly. He also showed an interface to Slack that uses a ModelMinder bot to handle natural language requests about certain model types and content, such as which resources do the work specified in the models or which models call a specific subprocess, providing a link directly back to the models in the KnowProcess repository. Finishing up the demo, he showed how the model search and reuse is attached to a CRM application, so that a marketing person sees the models as functions that can be executed directly within their environment.
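
To give a sense of what indexing and searching model content might involve, here is a minimal sketch that indexes element names from BPMN files; the XML handling and index structure are my assumptions, not KnowProcess's implementation.

```python
# Hypothetical sketch of indexing model elements for search. Purely
# illustrative; not how ModelMinder actually stores or searches models.
import xml.etree.ElementTree as ET
from collections import defaultdict

def index_bpmn_model(path, index=None):
    # Map each word in an element name to the models/elements that contain it.
    index = index if index is not None else defaultdict(list)
    root = ET.parse(path).getroot()
    for elem in root.iter():
        name = elem.get("name")
        if name:
            for word in name.lower().split():
                index[word].append((path, elem.get("id"), name))
    return index

def search(index, term):
    # Return (model file, element id, element name) hits for a search term.
    return index.get(term.lower(), [])
```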

Instead of a third demo, we had a more free-ranging discussion that had started yesterday during one of the Q&As about a standardized modeling language for RPA, led by Max Young from Capital BPM and with contributions from a number of others in the audience (including me). It’s a good starting point, but there’s obviously still a lot of work to do in this direction, starting with getting some of the major RPA vendors on board with standardization efforts. The emerging ideas seem to center around defining a grammar for the activities that occur in RPA (e.g., extract data from an Excel file, write data to a certain location in an application screen), then an event and flow language to piece together those primitives that might look something like BPMN or CMMN. I see this as similar to the issue of defining page flows, which are often done as a black box function performed within a human activity in a BPMN flow: exposing and standardizing that black box is what we’re talking about. This discussion is a prime example of what makes bpmNEXT great, and keeps me coming back year after year.

bpmNEXT 2018: All about bots with Cognitive Technology, PMG.net, Flowable

We’re into the afternoon of day 2 of bpmNEXT 2018, with another demo session.

RPA Enablement: Focus on Long-Term Value and Continuous Process Improvement, Cognitive Technology

Massimiliano Delsante of Cognitive Technology presented their myInvenio product for analyzing processes to determine where gaps exist and create models for closing those gaps through RPA task automation. The demo started with loading historical process data for process mining, which created a process model from the data together with activity resources, counts and other metrics; that model was then checked for conformance against a reference model to determine the frequency and performance of conformant and non-conformant cases. The process discovery model can be transformed into a BPMN model and its performance simulated. With a baseline data set of all manual activities, the system identified the cost of each activity, helping to identify which activities would result in the greatest savings if automated, and fed the actual resource usage data into the simulation scenario; adjusting the required resources by specifying the number of RPA robots that could be deployed at specific tasks allows a what-if simulation of process performance with an RPA implementation. An analytics dashboard provides visualization of the original process discovery and the simulated changes, with performance trends over time. Predictive analytics can be applied to running processes to, for example, predict which cases will not meet their deadlines, along with some root cause analysis of the problems. Doing this analysis requires that you have information about the cost of the RPA robots, as well as being able to identify which tasks could be automated with RPA. Good integration of process discovery, simulation, analysis and ongoing monitoring.
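
A what-if calculation of the sort described, ranking activities by the savings expected if they were automated with RPA robots, could be sketched roughly as follows; the cost figures and field names are purely illustrative assumptions, not myInvenio's model.

```python
# Hypothetical what-if calculation: rank activities by estimated savings
# if automated with RPA. Costs and field names are illustrative assumptions.
def rank_automation_candidates(activities, robot_cost_per_execution=0.05):
    candidates = []
    for a in activities:
        manual_cost = a["executions"] * a["avg_minutes"] * a["cost_per_minute"]
        automated_cost = a["executions"] * robot_cost_per_execution
        candidates.append({
            "activity": a["name"],
            "estimated_savings": manual_cost - automated_cost,
        })
    # Highest savings first: the best candidates for RPA automation.
    return sorted(candidates, key=lambda c: c["estimated_savings"], reverse=True)
```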

Integration is Still Cool, and Core in your BPM Strategy, PMG.net

Ben Alexander from PMG.net focused on integration within BPM as a key element for driving innovation by increasing the speed of application development: integrating services for RPA, ML, AI, IoT, blockchain, chatbots and whatever other hot new technologies can be brought together in a low-code environment such as PMG. His demo showed a vendor onboarding application, adding a function/subprocess for assessing probability of vendor approval using machine learning by calling AzureML, user task assignment using Slack integration or SMS/phone support through a Twilio connector, and RPA bot invocation using a generic REST API. Nice demo of how to put all of these third-party services together using a BPM platform as the main application development and orchestration engine.
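
Invoking an RPA bot through a generic REST API, as in the demo, might look something like this minimal sketch; the endpoint, payload and authentication scheme are invented for illustration and are not PMG's actual API.

```python
# Hypothetical sketch of invoking an RPA bot through a generic REST API.
# The endpoint, payload shape and authentication are illustrative only.
import requests

def start_rpa_bot(base_url, api_key, bot_name, payload):
    response = requests.post(
        f"{base_url}/bots/{bot_name}/executions",
        json=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()
    # Return the execution id so the process can poll for completion later.
    return response.json()["execution_id"]
```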

Making Process Personal, Flowable

Paul Holmes-Higgin and Micha Keiner from Flowable presented on their Engage product for customer engagement via chat, using chatbots to augment rather than replace human chat, and modeling the chatbot behavior using standard modeling tools. In particular, they have found that a conversation can be modeled as a case with dynamic injection of processes, with the ability to bring intelligence into conversations, and the added benefit of the chat being completely audited. The demo was around the use case of a high-wealth banking client talking to their relationship manager using chat, with simultaneous views of both the client and relationship manager UI in the Flowable Engage chat interface. The client mentioned that she moved to a new home, and the RM initiated the address change process by starting a new case right in the chat via a context-sensitive digital assistant. This provided advice to the RM about address change regulatory rules, and provided a form in situ to collect the address data. The case then progresses through a combination of chat messages for collaboration between the human participants, forms filled directly in the chat window, and confirmation by the client via chat, where they are presented with the information to be updated. Potential issues, such as compliance regulations due to a country move, are raised to the RM, and related processes execute behind the scenes that include a compliance officer via a more standard task inbox interface. Once the compliance process completes, the RM is informed via the chat interface. Behind the scenes, there’s a standard address change BPMN diagram, where the chat interface is integrated through service activities. They also showed replacing the human compliance decision with a decision table that was created (and manually edited if necessary) based on a decision tree generated by machine learning on 200,000 historical address change cases; rerunning the scenario skipped the compliance officer step and approved the change instantaneously. Other chat-automated tasks that the RM can invoke include setting reminders, retrieving customer information and more using natural language processing, as well as other types of more structured cases and processes. Great demo, and an excellent look at the future of chat interfaces in process and case management.
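
The decision-table-from-machine-learning step could, under some assumptions, be sketched as training a shallow decision tree on historical cases and exporting its rules for review; the feature names and the scikit-learn approach below are my illustrations, not Flowable's pipeline.

```python
# Hypothetical sketch: derive reviewable rules from historical address change
# cases by training a shallow decision tree. Feature names are assumptions;
# Flowable's actual pipeline and rule format will differ.
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["country_changed", "high_risk_country", "account_value", "years_as_client"]

def derive_compliance_rules(historical_cases):
    X = [[c[f] for f in FEATURES] for c in historical_cases]
    y = [c["compliance_approved"] for c in historical_cases]
    # Keep the tree shallow so the rules stay readable and editable.
    tree = DecisionTreeClassifier(max_depth=4)
    tree.fit(X, y)
    # Human-readable rules that an analyst could review and edit before
    # deploying them as a decision table.
    return export_text(tree, feature_names=FEATURES)
```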

bpmNEXT 2018: Complex Modeling with MID GmbH, Signavio and IYCON

The final session of the first day of bpmNEXT 2018 was focused on advanced modeling techniques.

Designing the Data-Driven Company, MID GmbH

Elmar Nathe of MID GmbH presented on their enterprise decision maps, which provide an aggregated visualization of strategic, tactical and operational decisions together with business events. They provide a variety of modeling tools, but see decisions as key to understanding how organizations are driven by data and events. It’s clearly a rich decision modeling environment, including support for PMML for incorporating predictive models and other analyses from data scientists, plus links to other model types such as ERDs that can show what data contributes to which decision model, and business process models. Much more of an enterprise architecture approach to model-driven design that can incorporate the work of data scientists.

Using Customer Journeys to Connect Theory with Reality, Signavio

Till Reiter and Enrico Teterra of Signavio started with a great example of an Ignite presentation, with few words, lots of graphics and a bit of humor, discussing their new notation for modeling an outside-in view of the customer journey rather than just having an undifferentiated “customer” swimlane in a BPMN diagram. The demo walked through their customer journey mapping tool, and how their collaboration hub overlays on that to allow information about each component of the journey map to be discussed amongst process modeling users. The journey map contains a lot of information about KPIs and other process metrics in a form most consumable by process owners and modelers, but also has a notebook/dashboard view for analysts to determine problems with the process and identify potential resolution actions. This includes a variety of analysis tools including process discovery, where process mining techniques are applied to determine which paths in the process model may be contributing to specific problems such as cycle time, then overlaid on the process model to assist with root cause analysis. Although their product does a good job of combining CJMs, process models and process analysis, this was more of a walkthrough of a set of pre-calculated dashboard screens than an actual demo — a far cry from the experimental features that Gero Decker showed off in their demo at the first bpmNEXT.

Discovering the Organizational DNA, IYCON and Knowledge Consultants

The final presentation of this section was with Jude Chagas Pereira of IYCON and Frank Kowalkowski of Knowledge Consultants presenting IYCON’s Afterspyre modeling tool for creating a catalog of complex business objects, their attributes and their linkages to create organizational DNA diagrams. Ranking these with machine learning algorithms for semantic and sentiment analysis allows identification of process improvement opportunities. They have a number of standard business analysis techniques built in, and robust analytics focused on problem solving. The demo walked through their catalog, drilling down into the “Strategy DNA” section and the “Technology Solutions” subsection to show an enumeration of the platforms currently in place together with attributes such as technology risk and obsolescence, which can be used to rank technology upgrade plans. Relationships between business objects can be auto-detected based on existing data. Levels including Objectives, Key Processes, Technology Solutions, Database Technology and Datacenter, and their interrelationships, are mapped into a DNA diagram and an alluvial diagram, starting at any point in the catalog and drilling down a specific number of levels as selected by the modeling analyst. These diagrams can then be refined further based on factors such as scaling the individual markers based on actual performance. They showed sentiment analysis for a hotel’s ranking on a review site, which included extracting specific phrases that related to certain sentiments. They also demonstrated a two-model comparison, which compared the models for two different companies to determine the overlapping and unique processes; a good indicator of the level of difficulty of a merger/acquisition (or even divestiture). They finished up with affinity modeling, of the type used by Amazon to tell you what other books were bought by people who also bought the book you’re looking at: easy to do in matrix form with a small data set, but computationally intensive once you get into non-trivial amounts of data. Affinity modeling is most commonly used in marketing to analyze buying habits and offer people something that they are likely to buy, even if it’s not what they planned to buy at first — this sort of “would you like fries with that” technique can increase purchase value by 30-40%. Related to that is correlation modeling, which can be used as a first step toward determining causation. Impressive semantic data-driven analytics tool for modeling a lot of different organizational characteristics.
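
The affinity modeling they described, counting which items appear together across transactions, can be sketched in a few lines for small data sets; this is my illustrative version, and as noted above it becomes computationally expensive at scale.

```python
# Hypothetical sketch of simple affinity modeling: count how often pairs of
# items appear together across transactions. Fine for small data sets; real
# implementations need smarter data structures at scale.
from collections import Counter
from itertools import combinations

def affinity_counts(transactions):
    # transactions: iterable of item lists, e.g. [["book A", "book B"], ...]
    pair_counts = Counter()
    for items in transactions:
        for a, b in combinations(sorted(set(items)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def also_bought(pair_counts, item, top_n=3):
    # Items most often purchased together with the given item.
    related = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            related[b] = n
        elif b == item:
            related[a] = n
    return related.most_common(top_n)
```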

That’s it for day one; if everyone else is as overloaded with information as I am, we’re all ready for tonight’s wine tasting! Check the Twitter stream for opinions and photos from other attendees.

Prepping for OPEXWeek presentation on customer journey mapping – share your ideas!

I’m headed off to OPEX Week in Orlando later this month, where I’ll give a presentation on customer journey mapping and how it results in process improvement as well as customer satisfaction/value. Although customer journey mapping is commonly used to talk about user experience/navigation on customer-facing websites, I want to look at the bigger picture of what we used to call “outside-in processes”, where internal processes are turned on their head to show the process from the customer’s point of view. Once you start thinking about what the customer is trying to accomplish, it can completely change how you perform and set priorities on the internal work, as well as changing the user experience presented to the customer.

I’m preparing a few slides to guide the presentation, and if you have any good stories to share, feel free to let me know by commenting on this post or tweeting to me.

I’m also sitting on a panel the following day on low code and BPM, which I’ve recently written a paper on (sponsored by TIBCO).

Presenting at OPEXWeek in January: customer journey mapping and lowcode

I’ll be on stage for a couple of speaking slots at the OPEX Week Business Transformation Summit 2018 in Orlando the week of January 22nd:

  • Tuesday afternoon, I’ll lead a breakout session in the customer-centric transformation track on increasing customer satisfaction through customer journey mapping and process improvement.
  • Wednesday morning, I’ll be on a panel in the RPA track on how low-code platforms are transforming BPM.

I was last at OPEX Week in 2012, when it was still called PEX Week (for Process Excellence Network) – I was on a BPM blogger panel that time around – and it will be interesting to see how it’s changed since then. Looks like a lot more automation technology in the current version, with the expectation that digital transformation isn’t going to come about just by modeling your business.

If you’re going to be there, look me up at one of my sessions or around the conference on Tuesday and Wednesday.

TIBCO Nimbus for regulatory compliance at Bank of Montreal

It’s the first afternoon of breakout sessions at TIBCO NOW 2016, and Alex Kurm from Bank of Montreal is presenting how the bank has used Nimbus for process documentation, to serve the goals of regulatory compliance and process transformation. They are one of the largest Nimbus users, and Kurm leads a team of process experts deploying Nimbus across the enterprise as part of their in-house process excellence strategy.

He provided a good overview of regulatory and compliance requirements: to quote his slide, you need to have “evidence of robust, documented standard processes to ensure compliance to risk and regulatory requirements” as a minimum. Overlaid on that, there’s an evolving set of consumer demands, moving from traditional in-person, telephone and ATM banking to web and mobile platforms. As a Canadian resident, I can attest that our banks haven’t been as responsive as desired to customer needs in the past; their focus is on operational risk and security.

BMO’s process centre of excellence maintains a knowledge hub of process best practices (including how to use Nimbus in their environment), leads and supports process-related projects, and heads up governance of all process efforts. They have about 16 people in the CoE, then process specialists out in the business areas; they have even internalized the Nimbus training. Although there are a variety of tools being used for process models in the bank, they selected Nimbus because of its business-understandable notation, the ability to put all process content in one place, the built-in governance and control over the content (key for auditors to be able to review), and the direct link between process architecture and process maps.

They started on Nimbus 3 years ago with about 20 process authors working on a couple of opportunistic projects; this quickly ramped up to 300 authors by the next year, and they now have more than 500 authors (including business analysts and project managers as well as process specialists), although only about 160 are active in any given month since this work is often project-based. There are 1,800 end users looking at Nimbus maps each month, with the largest number of users and the highest number of distinct initiatives in the highly regulated area of capital markets. They organize their 20,000 Nimbus maps by core business capability, such as onboarding, then drill down into the business area; they’re looking at ways of improving that to allow content to be found by any search path. They’re also adding Spotfire to be able to interrogate the content and find non-compliant and high-risk maps for review by the CoE.

Their key use cases are:

  • Process documentation for use as a high-level procedural guide
  • A guide for compliance auditors to verify that specific checks and balances are being done
  • Requirements gathering prior to automation (they are also an ActiveMatrix BPM customer), and as ongoing documentation of the automated process

Nimbus is now a core part of their process transformation and risk mitigation strategies; interestingly, the only resistance came from other “process gurus” in the bank who had their own favorite modeling tools.

Good case study of the benefit of process documentation (even in the absence of process automation) in highly-regulated industries.

bpmNEXT 2016 demo session: Signavio and Princeton Blue

Second demo round, and the last for this first day of bpmNEXT 2016.

Process Intelligence – Sven Wagner-Boysen, Signavio

Signavio allows creating a BPMN model with definitions of KPIs for the process, such as backlog size and end-to-end cycle time. The demo today was their process intelligence application, which allows a process model to be uploaded as well as an activity log of historical process instance data from an operational system — either a BPMS or some other system such as an ERP or CRM system — in CSV format. Since the process model is already known (in theory), this doesn’t do process mining to derive the model, but rather aggregates the instance data and creates a dashboard that shows the problem areas relative to the KPIs defined in the process model. Drilling down into a particular problem area shows some aggregate statistics as well as the individual instance data. Hovering over an instance shows the trace overlaid on the defined process model, that is, the path that instance took as it executed. There’s an interesting feature to show instances that deviate from the process model, typically by skipping or repeating steps where there is no explicit path in the process model to allow that. This is similar in nature to what SAP demonstrated in the previous session, although it uses imported process log data rather than a direct connection to the history data. Given that Signavio can model DMN integrated with BPMN, future versions of this could include intelligence around decisions as well as processes; this is a first version with some limitations.
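
A rough sketch of the kind of processing involved, computing cycle times from a CSV activity log and flagging traces that deviate from the modelled sequence, might look like this; the column names and the simple subsequence check are my assumptions, not Signavio's implementation.

```python
# Hypothetical sketch: compute cycle time per instance from a CSV activity log
# and flag traces that deviate from the modelled sequence. Column names and
# the simple ordering check are illustrative assumptions.
import csv
from collections import defaultdict
from datetime import datetime

def load_traces(csv_path):
    # Group (timestamp, activity) events by case id, sorted by time.
    traces = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            traces[row["case_id"]].append(
                (datetime.fromisoformat(row["timestamp"]), row["activity"])
            )
    return {cid: sorted(events) for cid, events in traces.items()}

def cycle_time_hours(events):
    # End-to-end cycle time for one instance.
    return (events[-1][0] - events[0][0]).total_seconds() / 3600

def deviates(events, modelled_sequence):
    # A trace deviates if its activities are not a subsequence of the model,
    # e.g. a step was skipped or repeated where the model has no such path.
    it = iter(modelled_sequence)
    return not all(any(a == m for m in it) for _, a in events)
```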

Leveraging Cognitive Computing and Decision Management to Deliver Actionable Customer Insight – Pramod Sachdeva, Princeton Blue

Pramod Sachdeva demonstrated sentiment analysis of unstructured social media data, creating a dashboard of escalations and activities integrated with internal customer data. It uses Watson for much of the analysis and IBM ODM to apply rules for escalation; future enhancements may add IBM BPM to automatically spawn action/escalation processes. It includes a history of sentiment for the individual, tied to service requests that responded to social media activity. There are other social listening and sentiment analysis tools that have been around for a while, but they mostly just drive dashboards and visualizations; the goal here is to apply decisions about escalations and trigger automated actions based on the results. Interesting work, but this was not a demo up to the standards of bpmNEXT: it was only static screenshots and some additional PowerPoint slides after the Ignite portion, effectively just an extended presentation.
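
Applying escalation rules to sentiment output, rather than just driving a dashboard, could be sketched roughly as below; the thresholds, fields and actions are invented for illustration (the demo itself used Watson for analysis and IBM ODM for the rules).

```python
# Hypothetical sketch of rule-driven escalation on sentiment analysis output.
# Thresholds, field names and actions are illustrative assumptions.
def escalation_action(post):
    # post: {"customer_id", "sentiment": -1.0..1.0, "followers", "open_requests"}
    if post["sentiment"] < -0.7 and post["followers"] > 10000:
        return "escalate-to-social-team"       # high-visibility negative post
    if post["sentiment"] < -0.5 and post["open_requests"] > 0:
        return "attach-to-open-service-request"
    if post["sentiment"] < -0.5:
        return "create-service-request"
    return "log-only"
```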

bpmNEXT 2016 demo session: 8020 and SAP

My panel done — which probably set some sort of record by including exactly half of the female attendees at the conference — we’re on to the bpmNEXT demo session: each is 5 minutes of Ignite-style presentation, 20 minutes of demo, and 5 minutes for Q&A. For the demos, I’ll just try to capture some of the high points of each, and I highly recommend that you check out the video of the presentations when they are published after the conference.

Process Design & Automation for a New Economy – Ian Ramsay, 8020 BPM

A simplified, list-based process designer that defines a list of real-world business entities (e.g., application), a list of states unique to each entity (e.g., approved), lists of individuals and groups, and lists of stages and tasks associated with each stage. Each new process has a list of start events that happen when a process is instantiated, one or more tasks in the middle, then a list of end events that define when the process is done. Dragging from the lists of entities, states, groups, individuals, stages and tasks onto the process model creates the underlying flow and events, building a more comprehensive process model behind the scenes. This allows a business specialist to create a process model without understanding process modeling or even simple flowcharting, just by identifying the relationships between the different states of a business entity, the stages of a business process, and the people involved. Removing an entity from a process modifies the model to remove that entity while keeping the model syntactically correct. An interesting alternative to BPMN-style process modeling (from someone who helped create the BPMN standard) in which the process model is a byproduct of entity-state modeling.
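
To illustrate the idea of a process model emerging as a byproduct of lists rather than a drawn diagram, here is a hypothetical sketch of such a list-based definition; the structure and the derivation are my assumptions, not 8020's actual representation.

```python
# Hypothetical sketch of a list-based process definition in the spirit of the
# 8020 approach: entities, states, stages and tasks are plain lists, and a
# flow is derived from them rather than drawn. Purely illustrative.
process = {
    "entity": "application",
    "states": ["received", "under review", "approved", "rejected"],
    "stages": {
        "intake": ["capture applicant data"],
        "assessment": ["verify documents", "score application"],
        "decision": ["approve or reject"],
    },
    "start_events": ["application received"],
    "end_events": ["application approved", "application rejected"],
}

def derive_flow(process):
    # Build a simple ordered flow from the stage/task lists; a real tool would
    # generate a full process model (gateways, events) behind the scenes.
    flow = list(process["start_events"])
    for stage, tasks in process["stages"].items():
        flow.extend(f"{stage}: {t}" for t in tasks)
    flow.extend(process["end_events"])
    return flow
```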

Process Intelligence for the Digital Age: Combining Intelligent Insights with Process Mining – Tarun Kamal Khiani and Joachim Meyer, SAP, and Bastian Nominacher, Celonis

Combining SAP’s Operational Process Intelligence analytics and dashboard (which was shown in last year’s bpmNEXT as well as some other briefings that I’ve documented) with Celonis’ process mining. Drilling down on a trouble item from the OPInt dashboard, such as late completion of a specific process type, to determine the root cause of the problem; this includes actionable insights, that is, being able to trigger an operational activity to fix the problem. That allows a case-by-case problem resolution, but adding in the Celonis HANA-based process mining capability allows past process instance data to be mined and analyzed. Adjusting the view on the mined data allows outliers and exceptions to be identified, transforming the straight-through process model to a full model of the instance data. For root cause analysis, this involved filtering down to only processes that took longer than a specific number of days to complete, then manually identifying the portions of the model where the lag times or certain activities may be causing the overly-long cycle time. Similar to other process mining tools, but nicely integrated with SAP S4 processes via the in-memory HANA data mart: no export or preprocessing of the process instance history log, since the process mining is applied directly to the realtime data. This has the potential to be taken further by looking at doing realtime recommendations based on the process mining data and some predictive modeling, although that’s just my opinion.
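
The root cause analysis step, filtering to cases over a duration threshold and then looking for the transitions where time accumulates, might be sketched like this; the data shapes are illustrative assumptions rather than the Celonis/HANA implementation.

```python
# Hypothetical sketch of the root-cause filtering described above: keep only
# cases that ran longer than a threshold, then aggregate time spent between
# consecutive activities to see where the lag accumulates.
from collections import defaultdict

def slow_cases(traces, threshold_days):
    # traces: {case_id: [(timestamp, activity), ...]} sorted by timestamp
    return {
        cid: events for cid, events in traces.items()
        if (events[-1][0] - events[0][0]).days > threshold_days
    }

def transition_lag(traces):
    # Average hours spent on each activity-to-activity transition,
    # slowest transitions first.
    totals, counts = defaultdict(float), defaultdict(int)
    for events in traces.values():
        for (t1, a1), (t2, a2) in zip(events, events[1:]):
            totals[(a1, a2)] += (t2 - t1).total_seconds() / 3600
            counts[(a1, a2)] += 1
    return sorted(
        ((k, totals[k] / counts[k]) for k in totals),
        key=lambda kv: kv[1],
        reverse=True,
    )
```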

Good start to the demos with some new ideas on modeling and realtime process mining.

Positioning Business Modeling panel at bpmNEXT

We had a panel of Clay Richardson of Forrester, Kramer Reeves of Sapiens and Denis Gagne of Trisotech, moderated by Bruce Silver, discussing the current state of business modeling in the face of digital transformation, where we need to consider modeling processes, cases, content, decisions, data and events in an integrated fashion rather than as separate activities. The emergence of the CMMN and DMN standards, joining BPMN, is driving the emergence of modeling platforms that not only include all three of these, but provide seamless integration between them in the modeling environment: a decision task in a BPMN or CMMN model links directly to the DMN model that represents that decision; a predefined process snippet in a CMMN model links directly to the BPMN model, and an ad hoc task in a BPMN model links directly to the CMMN model. The resulting models may be translated to (or even created in) a low-code executable environment, or may be purely for the purposes of understanding and optimizing the business.

Some of the points covered on the panel:

  • The people creating these models are often in a business architecture role if they are being created top down, although bottom-up modeling is often done by business analysts embedded within business areas. There is a large increase in interest in modeling within architecture groups.
  • One of the challenges is how to justify the time required to create these models. A potential positioning is that business models are essential to capturing knowledge and understanding the business even if they are not directly executable, and as organizations’ use of modeling matures and gains visibility with executives, it will be easier to justify without having to show an immediate tangible ROI. Executable models are easier to justify since they are an integrated part of an application development lifecycle.
  • Models may be non-executable because they model across multiple implementation systems, or are used to model activities in systems that do not have modeling capabilities, such as many ERP, CRM and other core operational systems, or are at higher levels of abstraction. These models have strategic value in understanding complexity and interrelationships.
  • Models may be initiated using a model derived from process/data mining to reduce the time required to get started.
  • Modeling vendors aren’t competing against each other, they’re competing against old methods of text-based business requirements.
  • Many models are persistent, not created just for a specific point in time and discarded after use.

A panel including two vendors and an analyst made for some lively conversation, and not a small amount of finger-pointing. 🙂

HP Consulting’s Standards-Driven Requirements Method at BPMCM15

Tim Price from HP’s enterprise transformation consulting group presented in the last slot of day 2 of the BPM and case management summit (and what will be my last session, since I’m not staying for the workshops tomorrow) with a discussion on how to improve requirements management by applying standards. There are a lot of potential problems with requirements: inconsistency, not meeting the actual needs, not designed for change, and especially the short-term thinking of treating requirements as project rather than architecture assets. Price is pretty up-front about how you can’t take a “garden variety” business analyst and have them create BPMN diagrams without training, and that 50% of business analysts are unable to create lasting and valuable requirements.

Although I haven’t done any quantitative studies on this, I would tend to agree that the term “business analyst” covers a wide variety of skill levels, and you can’t just assume that anyone with that title can create reusable requirements models and assets. This becomes especially important when you move past written requirements — that need the written language skills that many BAs do have — to event-driven BPMN and other models; the main issue is that these models are actually code, albeit visual code, that may be beyond the technical analysis capabilities of most BAs.

Getting back to Price’s presentation, he established traceability as key to requirements: between BPMN or UML process models and UML use cases, for example, or upwards from processes to capabilities. Data needs to be modeled at the same time as processes, and processes should be modeled as soon as the high-level use case is defined. You can’t always create a one-to-one relationship between different types of elements: an atomic BPMN activity may translate to a use case (system or human), to more than one use case, or to only a portion of a use case; lanes and pools may translate to use case actors, but not necessarily; events may represent states and implied state transitions, although also not necessarily. Use prose for descriptions, but not for control flow: that’s what process models are for, with the prose just explaining the process model. Develop the use case and process models first, then write text to explain whatever is not obvious in the diagrams.
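
A trivial sketch of what machine-checkable traceability between BPMN activities and use cases could look like follows; the structures are my assumptions, intended only to show that the mapping need not be one-to-one.

```python
# Hypothetical sketch of a traceability check between BPMN activities and UML
# use cases; the mapping need not be one-to-one, as noted above. Model and
# repository structures are illustrative assumptions.
def untraced_activities(bpmn_activities, traceability):
    # traceability: {activity_id: [use_case_id, ...]}; an activity may map to
    # several use cases, and several activities may share one use case.
    return [a for a in bpmn_activities if not traceability.get(a)]

activities = ["review-application", "assess-risk", "notify-applicant"]
trace = {"review-application": ["UC-01"], "assess-risk": ["UC-02", "UC-03"]}
print(untraced_activities(activities, trace))  # ['notify-applicant']
```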

He walked through a case study of a government welfare and benefits organization that went through multiple failed implementations, which were traced back to poor requirements: structural problems, consistency issues, and designs embedded in the specification. Price and his team spent 12 months getting their analysts back on track by establishing standards for creating requirements — with a few of the analysts not making the transition — that led to CMMI recognition of their new techniques. Another case study applied BPMN process models and UML use cases for a code modernization process: basically, their SDLC was the process being improved. A third case study used BPMN to document as-is and to-be processes, then use case models with complete traceability from the to-be processes to the use cases, with UML domain class models being developed in parallel.

The lessons learned from HP’s experiences:

  • Apply existing standards consistently, including BPMN, CMMN, DMN, UML
  • Use graph-structured languages for structure and logic, and prose for description
  • Use repository-based modeling tools to allow for reusability and collaboration
  • Be concise, be precise, be consistent
  • Create requirements models that are architecture assets, not just short-term project assets

Some good lessons for requirements analysis; although this was developed for complex, more waterfall-y SDLCs, some of these can definitely be adapted for more agile implementations.