Microservices meets case management: my post on the @Alfresco blog

Image lifted from my post on Alfresco’s blog, charmingly named “analyst_meme1”

I wrote a post on a microservices approach to intelligent case management applications for Alfresco, which they’ve published on their blog. It covers the convergence of three key factors: the business requirement to support case management paradigms for knowledge work; the operational drive to increase automation and ensure compliance; and the technology platform trend to adopt microservices.

It’s a pretty long read: I originally wrote it as a 3-4 page paper to cover the scope of the issues, with case management examples in insurance claims, citizen services, and customer onboarding. My conclusion:

Moving from a monolithic application to microservices architecture makes good sense for many business systems today; for intelligent case management, where no one supplier can provide a good solution for all of the required capabilities, it’s essential.

Before you ask:

  • Yes, I was paid for it, which is why it’s there and not here.
  • No, it’s not about Alfresco products, it’s technology/business analysis.

Wrapping up OpenText Enterprise World 2019

It’s the last day of OpenText Enterprise World for this year. I started the day attending one of the developer labs, where I created a JavaScript app using OT2 services, then attended a couple of AppWorks-related sessions: Duke Energy’s transition from Metastorm to AppWorks, and using AppWorks for process/case and content integration in the public sector. I also got to meet the adorable Great Dane that was here as part of the Paws for a Break program: she’s a cross between a Harlequin and a Merle in color, so they call her a Merlequin.

Mark Barrenechea was back to close the conference with a quick recap: 3,500 attendees, OpenText Cloud Edition, Google partnership, ethical supply chains, and the talk by Sir Tim Berners-Lee. Plus Berners-Lee’s quote of the real reason that the web was created: cat videos!

In addition to the announcements that we heard during the week, Barrenechea also told us about their new partnership with MasterCard to provide integrated payment services in B2B supply chains, and had two MasterCard Enterprise Partnership executives on stage to talk more about it.

The closing ceremonies finished off with another very special guest: singer, songwriter and activist Peter Gabriel. I was familiar with his music career — having had the pleasure to see him live in concert in the past — but didn’t realize the extent of his human rights activism. He talked about his start and career in music, and some of the ways that he’s woven human rights into his career, from writing the timeless anti-apartheid hit about Stephen Biko to starting the WOMAD festival. He’s been involved in the creation of an inter-species internet, and showed a video of a bonobo composing music with him.

Then his band joined him and he played a set! Amazing finish to the week.

OpenText Enterprise World 2019: AppWorks roadmap and technical deep dive

I had an afternoon with AppWorks at OpenText Enterprise World: a roadmap session followed by a technical deep dive. AppWorks is their low-code tool that includes process management, case management, and access to content and other information, supported across mobile and desktop platforms. It contains a number of pre-packaged components, and a technical developer can create new components that can be accessed as services from the AppWorks environment. They’ve recently made it into the top-right corner of the Forrester Wave for [deep] digital process automation platforms, with case management and content integration listed among their strongest features, as well as Magellan’s AI and analytics, and the OpenText Cloud deployment platform.

The current release has focused on improving end-user flexibility and developer ease-of-use, but also on integration capabilities with the large portfolio of other OpenText tools and products. New developer features include an expression editor and a mobile-first design paradigm, plus there’s an upcoming framework for end-user UI customization in terms of themes and custom forms. Runtime performance has been improved by making applications into true single-page applications.

There are four applications built on the current on-premises AppWorks: Core for Legal, Core for Quality Management, Contract Center and People Center. These are all some combination of content (from the different content services platforms available) plus case or process management, customized for a vertical application. I didn’t hear a commitment to migrate these to the cloud, but there’s no reason that this won’t happen.

Some interesting future plans were discussed, such as how AppWorks will be used as a low-code development tool for OT2 applications. They have a containerized version of AppWorks available as a developer preview, a stepping stone to next year’s cloud edition. There was a mention of RPA, although no clear direction at present: they can integrate with third-party RPA tools now and may be mulling over whether to build or buy their own capability. There’s also the potential to build process intelligence/mining and reporting functionality based on their Magellan machine learning and analytics. There were a lot of questions from the audience, such as whether they will support GitHub for source code control (probably, but not yet scheduled) and whether better REST support is coming.

Nick King, the director of product management for AppWorks, took us through a technical session that was primarily an extended live demonstration of creating a complex application in AppWorks. Although the initial part of creating the layout and forms is pretty accessible to non-technical people, the creation of BPMN diagrams, web service integration, and case lifecycle workflows are clearly much more technical; even the use of expressions in the forms definition is starting to get pretty technical. Also, based on the naming of components visible at various points, there is still a lot of the legacy Cordys infrastructure under the covers of AppWorks; I can’t believe it’s been 12 years since I first saw Cordys (and thought it was pretty cool).

There are a lot of nice things that just happen without configuration, much less coding, such as the linkages between components within a UI layout. Basically, if an application contains a number of different building blocks such as properties, forms and lifecycle workflows, those components are automatically wired together when assembled on a single page layout. Navigation breadcrumbs and action buttons are generated automatically, and changes in one component can cause updates to other components without a screen refresh.

OpenText, like every other low-code application development vendor, will likely continue to struggle with the question of what a non-technical business analyst versus a technical developer does within a low-code environment. As a Java developer at one of my enterprise clients said recently upon seeing a low-code environment, “That’s nice…but we’ll never use it.” I hope that they’re wrong, but fear that they’re right. To address that, it is possible to use the AppWorks environment to write “pro-code” (technical lower-level code) to create services that can be added to a low-code application, or to create an app with a completely different look and feel than is possible using AppWorks low-code. If you were going to create a full-on BPMN process model, or make calls to Magellan for sentiment analysis, it would be more of a pro-code application.

OpenText Enterprise World 2019 day 2: technology keynote

We started day 2 of OpenText Enterprise World with a technology keynote by Muhi Majzoub, EVP of Engineering. He opened with a list of their major releases over the last year. He highlighted the upcoming shift to cloud-first containerized deployments of the next generation of their Release 16 that we heard about in Mark Barrenechea’s keynote yesterday, and described the new applications that they have created on the OT2 platform.

We heard about and saw a demo of their Core for Federated Compliance, which allows for federated records and retention management across CMS Core, Content Suite and Documentum repositories, with future potential to connect to other (including non-OpenText) repositories. I’m still pondering the question of when they might force customers to migrate off some of the older platforms, but in the meantime, the content compliance and disposition can be managed in a consolidated manner.

Next was a demo of Documentum D2 integrated with SAP — this already existed for their other content products but this was a direct request from customers — allowing content imported into D2 in support of transactions such as purchase orders to be viewed as related documents by an SAP user in a Smart View. They have a strong partnership with SAP, providing enterprise-scale content management as a service on the SAP cloud, integrated with SAP S/4HANA and other applications. They are providing content management as OT2-based microservices, allowing content to be integrated anywhere in the SAP product stack.

AppWorks also made an appearance: this is OpenText’s low-code application development platform that also includes their process management capabilities. They have new interfaces for developers and users, including better mobile applications. No demo, however; given that I missed my pre-conference briefing, I’ll have to wait until later today for that.

Majzoub walked through the updates of many of the other products in their portfolio: EnCase, customer experience management, AI, analytics, eDocs, Business Network and more. They have such a vast portfolio that there are probably few analysts or customers here that are interested in all of them, but there are many customers that use multiple OpenText products in concert.

He finished up with more on OT2, positioning it as a platform and repository of services for building applications in any of their product areas. These services can be consumed by any application development environment, whether their AppWorks low-code platform or more technical development tools such as Java. An interesting point made in yesterday’s keynote challenges the idea of non-technical users as “citizen developers”: they see low-code as something that is used by [semi-]technical developers to build applications. The reality of low-code may finally be emerging.

They are featuring six new cloud-based applications built on OT2 that are available to customers now: Core for Capital Projects, Core for Supplier Exchange, Core Enhances Integration with CSP, Core Capture, Core for SAP SuccessFactors, and Core Experience Insights. We saw a demo that included the Capital Projects and Supplier Exchange applications, where information was shared and integrated between a project manager on a project and a supplier providing documentation on proposed components. The Capital Projects application includes analytics dashboards to track progress on deliverables and issues.

Good start to the day, although I’m looking forward to more of a technical drill-down on AppWorks and OT2.

OpenText Enterprise World 2019 day 1 keynote

OpenText is holding their global Enterprise World back in Toronto for the third year in a row (meaning that they’ll probably move on to another city for next year — please not Vegas) and I’m here for a couple of days for briefings with the product teams and to sit in on some of the sessions.

I attended a session earlier on connecting content and process that was mostly market research presented by analysts John Mancini and Connie Moore — some interesting points from both of them — before going to the opening keynote with CEO/CTO Mark Barrenechea and a few guests including Sir Tim Berners-Lee.

Barrenechea started with some information about where OpenText is now, including their well-ranked positions in analyst rankings for content services platforms (Content Services), supply chain commerce networks (Business Network) and digital process automation (AppWorks). He believes that we’re “beyond digital”, with a focus on information rather than automation. He announced cloud-first versions of their products coming in April 2020, although some products will also be available on premises. Their OT2 Cloud Platform will be sold on a service model; I’m not sure if it’s a full microservices implementation, but it sounds like it’s at least moving in that direction. They’ve also announced a new partnership with Google, with Google Cloud being their preferred platform for customers and the integration of Google services (such as machine learning) into OpenText EIM; this is on a similar scale to what we’ve seen between Alfresco and Amazon AWS.

The keynote finished with a talk by Sir Tim Berners-Lee, inventor of the World Wide Web, on how the web started, how it’s now used and abused, and what we all can do to make it better.

What’s hot this summer? @Camunda Day NYC 2019

Robert Gimbel of Camunda

I popped down to a steamy New York today for the half-day local Camunda Day, which was a good opportunity to see an update on their customer-facing messaging and also hear from some of their customers. It was a packed agenda, starting with Robert Gimbel (Chief Revenue Officer) on best practices for successful Camunda projects. Since he’s in charge of sales, some amount of this was about why to choose the enterprise edition over the community edition, but there were lots of good insights for all types of customers, many of them applicable to other BPM products. Although he characterized the community edition as suited to lower complexity and business criticality, I know there are Camunda customers using the open source version on mission-critical processes; however, these organizations have made a larger developer commitment to have in-house experts who can diagnose and fix problems as required.

Gimbel outlined the four major types of projects, which are similar to those that I’ve seen with most enterprise clients:

  • Automating manual work
  • Migrating processes from other systems, whether a legacy BPMS, an embedded workflow within another system, or a homegrown workflow system
  • Adding process management to a software product that has no (or inflexible) workflows, such as an accounts payable system
  • Providing a centralized workflow infrastructure as part of a digital automation platform, which is what I talked about in my bpmNEXT keynote

They are seeing a typical project timeline of 3-9 months from initiation to go-live, with the understanding that the initial deployment will continue to be analyzed and improved in an agile manner. He walked through the typical critical success factors for projects, which include “BPMN and DMN proficiency for all participants”: something that is not universally accepted by many BPM vendors and practitioners. I happen to agree that there is a lot of benefit in everyone involved learning some subset of BPMN and DMN; it’s a matter of what that subset is and how it’s used.

We had a demo by Joe Pappas, a New York-based senior technical consultant, which walked us through using Cawemo (now free!) for collaborative modeling by the business, then importing, augmenting, deploying and managing an application that included both a BPMN and a DMN model. He showed how to detect and resolve problems in operational systems, and finished with building new reports and dashboards to display process analytics.

John Fontaine, Capital One

The first half of the morning finished with a presentation from John Fontaine, Master Software Engineer at Capital One (a Camunda customer), on organizing a Camunda hackathon. As an aside, this is a great topic for a customer who can’t talk directly about their BPM implementation due to privacy or intellectual property concerns. They had a 2-day event with 42 developers in 6 teams, plus product and process owners/managers — the latter are a bit less common as hackathon participants, but everyone was expected to work collaboratively and have fun.

Capital One started with a problem brief in terms of the business case and required technical elements, and a specific judging rubric for evaluating the projects. Since many of the participants were relatively new to Camunda and BPMN, they included some playful uses of BPMN such as the agenda. The first morning was spent on ideation and solution selection, with the afternoon spent creating the BPMN models and creating application wireframes. On the second day, the morning was spent on completing the coding and preparing their demo, with the afternoon for the team demos.

Fontaine finished up with lessons learned across all aspects of the hackathon, from logistics and staffing to attendee recruiting and organization, agenda pacing and milestones, judging, and resource materials such as code samples. Their goal was not to create applications ready for deployment, but a couple of the teams created applications that have become a trigger for ongoing projects.

After the break, we heard from Bernd Ruecker, co-founder of Camunda and now in the role of developer evangelist, on workflow automation in a microservices architecture. He has been writing and speaking on this topic for a while now, including some key points that run counter to many BPM vendors’ views of microservices, and even counter to some of Camunda’s previous views:

  • Every workflow must be owned by one microservice, and workflows live inside service boundaries. This means no monolithic (end-to-end) BPMN models for execution, although the business will likely create higher-level non-executable models that show an end-to-end view.
  • Event-driven architecture for passing information between services in a decoupled manner, although it’s necessary to keep a vision of an overall flow to avoid unexpected emergent behaviors. This can still be accomplished with messaging, but you need to think about some degree of coupling by emitting commands rather than just events: a balance of orchestration and choreography.
  • Microservices are, by their nature, distributed systems; however, there is usually a need for some amount of stateful orchestration, such as is provided by a BPM engine.

From Bernd Ruecker’s blog post

Ruecker talked about the different ways of communicating — a message/event bus versus REST-ish command-type events between services versus using a BPM engine as a work distributor for external services — noting that it’s possible to do good microservices architecture with any of these methods. He notes that the last scenario (using a BPM engine as the overall service orchestrator) is not necessarily best practice; he is looking more at using the engine at a lower granularity, where a BPM engine is encapsulated in each service that requires it. Check out his blog post on microservices workflow automation for more details.
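The balance of orchestration and choreography described above can be illustrated with a toy sketch: each service owns its own workflow state and communicates over a shared bus, emitting a command (directed at a known next service) rather than a bare event. This is not Camunda code; all service, topic and field names here are invented for illustration.

```python
# Toy sketch: workflows live inside service boundaries; services talk over a bus.
from collections import defaultdict

class Bus:
    """Minimal in-process message bus standing in for an event/command bus."""
    def __init__(self):
        self.handlers = defaultdict(list)
        self.log = []  # record of everything published, for inspection

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, payload):
        self.log.append((topic, payload))
        for handler in self.handlers[topic]:
            handler(payload)

class OrderService:
    """Owns its own order workflow; no end-to-end model spans services."""
    def __init__(self, bus):
        self.bus = bus
        bus.subscribe("order.placed", self.on_order_placed)

    def on_order_placed(self, order):
        # Internal workflow step completes, then we emit a *command* to a
        # known next service (orchestration) rather than only a bare event
        # (choreography), keeping some intentional coupling in the flow.
        self.bus.publish("payment.collect", {"order_id": order["id"]})

class PaymentService:
    def __init__(self, bus):
        self.collected = []
        bus.subscribe("payment.collect", self.on_collect)

    def on_collect(self, command):
        self.collected.append(command["order_id"])

bus = Bus()
orders, payments = OrderService(bus), PaymentService(bus)
bus.publish("order.placed", {"id": 42})
print(payments.collected)  # [42]
```

Each service could equally embed its own small workflow engine behind the handler; the point is that the cross-service flow is carried by messages, not by one monolithic executable model.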

The (half) day finished with Frederic Meier, Camunda’s head of sales for North America, in conversation with Michael Kirven, VP of IT Business Solutions at People’s United Bank about their Camunda implementation in lending, insurance, wealth management and other business applications. They opened it up to the audience of mostly financial services customers to talk about their use cases, which included esoteric scenarios such as video processing (passing a video through transcoding and other services), and more mainstream examples such as a multi-account closure. This gave an opportunity for prospects and less-experienced customers to ask questions of the battle-hardened veterans who have deployed multiple Camunda applications.

Great content, and definitely worthwhile for the 40-50 people in attendance.

bpmNEXT 2019 wrapup: coverage from others plus my keynote video

Finally getting around to going through all of the other coverage of bpmNEXT, and reviewing the video of my keynote.

This is the first time that I’ve presented these concepts in this presentation format, and I definitely have ideas about how to make this clearer: there are some good use cases to include in more detail, plus counter-use cases where a microservices approach doesn’t fit.

All of the presentation videos are now available online; check out the entire playlist here.

Kris Verlaenen from Red Hat, in addition to presenting his own session on automating human-centric processes with machine learning, posted his impressions in five posts. He also went back and updated them with the videos of each session:

  • Day 1, covering the two keynotes by Nathaniel Palmer and Jim Sinur, and the initial demo session by Appian.
  • Day 1 Part 2, covering demo sessions by BP Logix, Minit, Cognitive Technology, Kissflow, Wizly and IBM.
  • Day 2, covering my keynote, demo sessions by Trisotech and Method & Style, and a panel on decision services and machine learning.
  • Day 2 Part 2, covering demo sessions by Bonitasoft, Signavio and Flowable, plus a panel on the value proposition of intelligent automation.
  • Day 3, covering demo sessions by Serco, Fujitsu, Red Hat (his own presentation) and SAP, wrapping up with the discussion on the DMN TCK.

Great coverage, since he and I sometimes see different things in the demo and it’s good to read someone else’s views.

Keith Swenson wrote a summary post for the three keynotes including some detailed criticisms of my keynote; I’ll definitely be reviewing these for improving the presentation and reworking how I present some of the concepts. He also wrote a post about the DMN TCK (technical compatibility kit) efforts, now three years in, and some of the success that they’re seeing in helping to standardize the use of DMN.

Another great year of bpmNEXT.

bpmNEXT 2019 demo: intelligent BPM by @SAP plus DMN TCK working group

ML, Conversational UX, and Intelligence in BPM, with Andre Hofeditz and Seshadri Sreeniva of SAP plus DMN TCK update

We’re at the end of bpmNEXT for another year, and we have one last demo. Seshadri showed a demo of their intelligent BPM for an employee onboarding process (integrated with SuccessFactors), where the process can vary widely depending on level, location and other characteristics. This exposes the pre-defined business processes in SuccessFactors, with configuration tools for customizing the process by adding and modifying building blocks to create a process variant for a special case. Decisions involved in the processes can also be configured, as well as dashboards for viewing the processes in flight.

Extension workflows can be created by selecting a standard process “recipe” from a SuccessFactors library, then configuring it for the specific use; he showed an example of adding an equipment provisioning extension as a service task to one of the top-level process models.

He demonstrated a voice-controlled chatbot interface for interacting with processes, allowing a manager to ask what’s happening for them today, and get back information on the new employee onboardings in progress, expected delays, and a link to his task inbox. Tasks can be displayed in the chat interface, and approvals accepted via voice or typed chat. The chatbot uses AI to determine the intent of the input and provide a precise and accurate response, and uses ML to provide predictions on the time required to complete processes that are in flight when asked about completion times and possible delays. The chatbot can also make decision table-based recommendations, such as creating an IT ticket to assign roles to the new employee and find a desk location.

He showed the interface for designing and training the bot capabilities, where a designer can create a new conversational AI skill based on conditions, triggers and actions to take. This is currently a lab preview, but will be rolled out as part of their cloud platform workflow (not unique to the SuccessFactors environment) in the coming months.

Decision Model and Notation Technology Compatibility Kit update with Keith Swenson

We finished off bpmNEXT 2019 with an update on the DMN TCK, that is, the set of tools provided for free for vendors to test their implementations of DMN. The TCK provides DMN 1.2 models plus sets of input data and expected results; a runner app calls the vendor engine, compares the results and exports them as a CSV file to show compliance. In the three years since this was kicked off, there are eight vendors showing results and over 1000 test cases, with another vendor about to join the list and add another 600 test cases. The test cases are determined through manual examination of the standard specification, so this represents a significant amount of work to create a robust set of compliance tests. The TCK group is not creating the standard, but testing it; however, Keith identified some opportunities for the TCK to be more proactive in defining some things, such as error handling behavior, that the revision task force (RTF) at OMG is unlikely to address in the near term. He also pointed out that there are many more vendors claiming DMN compatibility than have demonstrated that compatibility with the TCK.
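The runner’s compare-and-export cycle is easy to picture. Here’s a toy sketch in the spirit of the TCK runner, not the actual tool (the real runner consumes DMN models and a richer result format); the engine, model and test-case names below are all made up:

```python
# Toy TCK-style runner: run test cases against an engine, compare to the
# expected results, and export a compliance report as CSV.
import csv
import io

def run_tck(engine, test_cases):
    """engine: callable(model, inputs) -> result dict; returns a CSV string."""
    rows = []
    for tc in test_cases:
        actual = engine(tc["model"], tc["inputs"])
        rows.append({
            "test": tc["name"],
            "result": "SUCCESS" if actual == tc["expected"] else "FAILURE",
        })
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["test", "result"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

# Stand-in "vendor engine": a decision that approves when amount < 1000.
def toy_engine(model, inputs):
    return {"approved": inputs["amount"] < 1000}

cases = [
    {"name": "0001-approve", "model": "loan.dmn",
     "inputs": {"amount": 500}, "expected": {"approved": True}},
    {"name": "0002-decline", "model": "loan.dmn",
     "inputs": {"amount": 5000}, "expected": {"approved": False}},
]
report = run_tck(toy_engine, cases)
print(report)
```

The real value of the TCK is in the curated test cases, not the runner mechanics; the runner simply makes the pass/fail evidence reproducible and publishable.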

That’s it for bpmNEXT 2019 – it always feels like it’s over too soon, yet I leave with my brain stuffed full of so many good ideas. We’ve done the wrapup survey and are heading off to lunch, but the results for Best in Show won’t come out until I’m already on my way to the airport.

bpmNEXT 2019 demos focused on creating smarter processes: decisions, RPA, emergent processes and machine learning with Serco, @FujitsuAmerica and @RedHat

A Well-Mixed Cocktail: Blending Decision and RPA Technologies in 1st Gen Design Patterns, with Lloyd Dugan of Serco

Lloyd showed a scenario of using decision management to determine whether a step could be done by an RPA robot or a human operator, then modeling the RPA “operator” as a role (performer) for a specific task and dynamically assigning work – this is instead of refactoring the BPMS process to include specific RPA robot service tasks. This is drawn from an actual case study that uses Sapiens for decision management and Appian for case/process management, with Kapow for RPA. The focus here is on the work assignment decisioning, since the real-world scenario is managing work for thousands of heads-down users, and the redirection of work to RPA can have huge overall cost savings and efficiency improvements even for small tasks such as logging in to the multiple systems required for a user to do work. The RPA flow was created, in part, via the procedural documentation wiki that is provided to train and guide users, and if the robot can’t work a task through to completion then it is passed off to a human operator. The “demo” was actually a pre-recorded screen video, so it was more like a presentation with a few dynamic bits, but it gave insight into how DM and RPA can be added to an existing complex process in a BPMS to improve efficiency and intelligence. Using this method, work can gradually be carved off and performed by robots (either completely or partially) without significantly refactoring the BPMS process for specific robot tasks.
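The work-assignment decisioning amounts to a decision table evaluated per work item: first matching rule wins, with a human operator as the default performer. This is a hypothetical sketch, not the actual Sapiens/Appian implementation; the task names and rules are invented:

```python
# Hypothetical decision table for assigning work to a robot or a human.
# The process model stays untouched; only the performer decision changes.
def assign_performer(item):
    # Ordered rules, first match wins (all names are illustrative).
    rules = [
        (lambda i: i["task"] == "system-login", "rpa-robot"),
        (lambda i: i["task"] == "data-entry" and i["complexity"] == "low",
         "rpa-robot"),
        (lambda i: True, "human-operator"),  # default rule: humans do the rest
    ]
    for condition, performer in rules:
        if condition(item):
            return performer

work_items = [
    {"task": "system-login", "complexity": "low"},
    {"task": "data-entry", "complexity": "low"},
    {"task": "claims-adjudication", "complexity": "high"},
]
assignments = [assign_performer(i) for i in work_items]
print(assignments)  # ['rpa-robot', 'rpa-robot', 'human-operator']
```

Carving off more work for robots then means adding rules to the table, not refactoring the process model; the robot-fails-then-human fallback described above would be another assignment pass on the returned work item.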

Emergent Synthetic Process, with Keith Swenson of Fujitsu

Keith’s demo is based on the premise that although business processes can appear simple on the surface when you look at the original clean model, the reality is considerably messier. Instead of predefining a process and forcing workers to follow it in order, he showed defining service descriptions as tasks with their required participants and predecessor tasks. From that, processes can be synthesized at any point during execution to meet the requirements of the remaining tasks; this means that any given process instance may have the tasks in a different order and still be compliant. He showed a use case of a travel authorization process from within Fujitsu, where a travel request automatically generates an initial process – all processes are a straight-through series of steps – but any changes to the parameters of the request may modify the model. This is all based on satisfying the conditions defined by the dependency graph (e.g., the departmental manager requires that the employee’s manager approve before they can approve it), starting with the end point and chaining backwards through the graph to create the series of steps that have to be performed. Different divisions had different rules around their processes: for example, the Mexico group did not have departmental levels, so did not have one of the levels of approval. Adding a step to a process is a matter of adding it as a prerequisite for another task; the new step will then be added to the process and the underlying dependency graph. As an instance executes, the completed tasks become fixed as history, but the future tasks can change if there are changes to the task dependencies or participants. This methodology allows multiple stakeholders to define and change service descriptions without a single process owner controlling the end-to-end process orchestration, and lets new and in-flight processes generate the optimal path forward.
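The backward chaining that Keith described (start at the end point, walk the dependency graph, and emit the steps in a satisfying order) can be sketched in a few lines. Task names and rules below are invented for illustration, not Fujitsu’s actual model:

```python
# Sketch of synthesizing a straight-through process from per-task
# prerequisite declarations, chaining backwards from the goal.
def synthesize(goal, prerequisites):
    """prerequisites: task -> list of tasks that must complete first."""
    ordered, seen = [], set()

    def visit(task):
        if task in seen:
            return
        seen.add(task)
        for pre in prerequisites.get(task, []):
            visit(pre)  # resolve everything this task depends on first
        ordered.append(task)

    visit(goal)
    return ordered

# One division's rules: departmental approval requires manager approval first.
deps = {
    "book-travel": ["dept-manager-approval"],
    "dept-manager-approval": ["manager-approval"],
    "manager-approval": ["submit-request"],
}
print(synthesize("book-travel", deps))
# ['submit-request', 'manager-approval', 'dept-manager-approval', 'book-travel']

# A division without departmental levels simply declares fewer prerequisites,
# and the synthesized process drops that approval step automatically.
deps_no_dept = {
    "book-travel": ["manager-approval"],
    "manager-approval": ["submit-request"],
}
print(synthesize("book-travel", deps_no_dept))
# ['submit-request', 'manager-approval', 'book-travel']
```

Re-running the synthesis mid-instance, with completed tasks pinned as history, is what lets in-flight processes adapt when dependencies or participants change.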

Automating Human-Centric Processes with Machine Learning, with Kris Verlaenen of Red Hat

Kris demonstrated working towards an automated process using machine learning (a random forest model) in small incremental steps: first augmenting data, then recommending the next step, and finally learning from what happened in order to potentially automate a task. The scenario was provisioning a new laptop inside an organization through their IT department, including approval, ordering and deployment to the employee. He started with the initial manual process for the first part of this – order by employee, quote provided by vendor, and approval by manager – and looked at how ML could monitor this process over many execution instances, then start providing recommendations to the manager on whether to approve a purchase or not based on parameters such as the requester and the laptop brand. Very consistent history will result in high confidence levels for the recommendation, although more realistic history may have lower confidence levels; the manager can be presented with the confidence level and the parameters on which it was based along with the recommendation itself. In case management scenarios with dynamic task creation, the ML can also make recommendations about creating tasks at a certain stage, such as creating a new task to notify the legal department when the employee is in a certain country. Eventually, this can lead to recommendations about how to change the initial process/case model to encode that knowledge as new rules and activities, such as adding ad hoc tasks for the tasks that were being added manually, triggered based on new rules detected in the historical instances. Kris finished with the caveat that machine learning algorithms can be biased by the training data and may not learn the correct behavior; this is why they look at using ML to assist users before incorporating this learned behavior into the pre-defined process or case models.
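The demo used a random forest model; as a much simpler stand-in, this sketch derives a recommendation and a confidence level from the historical approval frequency for instances with the same parameters. All field values are invented, and this is not the Red Hat implementation — just an illustration of recommendation-with-confidence:

```python
# Simplified stand-in for the random forest: recommend an approval decision
# with a confidence level based on matching historical instances.
from collections import Counter

def recommend(history, requester, brand):
    outcomes = Counter(
        h["approved"] for h in history
        if h["requester"] == requester and h["brand"] == brand
    )
    total = sum(outcomes.values())
    if total == 0:
        return None, 0.0  # no matching history: no recommendation
    decision, count = outcomes.most_common(1)[0]
    return decision, count / total  # confidence = fraction of matching history

# Mostly-consistent history yields a recommendation with moderate confidence.
history = [
    {"requester": "engineer", "brand": "Lenovo", "approved": True},
    {"requester": "engineer", "brand": "Lenovo", "approved": True},
    {"requester": "engineer", "brand": "Lenovo", "approved": True},
    {"requester": "engineer", "brand": "Lenovo", "approved": False},
]
decision, confidence = recommend(history, "engineer", "Lenovo")
print(decision, confidence)  # True 0.75
```

As in the demo, the manager would see both the recommendation and the confidence, and only once confidence is consistently high would it make sense to encode the behavior as a rule in the process or case model.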

bpmNEXT 2019 demos: microservices, robots and intentional processes with @Bonitasoft @Signavio and @Flowable

BPM, Serverless and Microservices: Innovative Scaling on the Cloud with Philippe Laumay and Thomas Bouffard of Bonitasoft

Turns out that my microservices talk this morning was a good lead-in to a few different presentations: Bonitasoft has moved to a serverless microservices architecture, and discussed the pros and cons of this approach. Their key reason was scalability, especially where platform load is unpredictable. The demo showed an example of starting a new case (process instance) in a monolithic model under no load conditions, then the same with a simulated load, where the user response in the new case was significantly degraded. They then demoed the same scenario but scaling the BPM engine by deploying it multiple times in componentized “pods” in Kubernetes, where Kubernetes can automatically scale up further as load increases. This time, the user experience on the loaded system was considerably faster. This isn’t a pure microservices approach in that they are scaling a common BPM engine (hence a shared database, even if there are multiple process servers), not embedding the engine within the microservices, but it does allow for easy scaling of the shared server platform. This requires cluster management for communicating between the pods and keeping state in sync. The final step of the demo was to externalize the execution completely to AWS Lambda by creating a BPM Lambda function for a serverless execution.

Performance Management for Robots, with Mark McGregor and Alessandro Manzi of Signavio

Just like human performers, robots in an RPA scenario need to have their performance monitored and managed: they need the right skills and training, and if they aren’t performing as expected, they should be replaced. Signavio does this by using their Process Intelligence (process mining) product to discover potential bottleneck tasks where RPA could be applied, and to create a baseline for the pre-RPA processes. Having identified tasks that could be automated using robots, Alessandro demonstrated how they could simulate scenarios with and without robots, including cost and time. All of the simulation results can be exported as an Excel sheet for further visualization and analysis, although their dashboard tools provide a good view of the results. Once robots have been deployed, they can use process mining again to compare actual results against the earlier analysis, as well as to see performance trends. In the demo, we saw that robots on different tasks (potentially from different vendors) could have different performance results, with some requiring replacement, upgrading or removal. He finished with a demo of their “Lights-On” view that combines process modeling and mining, where traffic lights linked to the mining performance analysis are displayed in place in the model in order to make changes more easily.
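The core of a with/without-robot comparison is simple arithmetic: roll up per-case time and cost for each process variant across the expected case volume. A rough sketch in that spirit (the actual Signavio simulator is far richer; the step durations and costs here are hypothetical):

```python
def simulate(volume, steps):
    """Roll up total time and cost for a sequential process over a case
    volume. Each step is a (minutes_per_case, cost_per_case) pair.
    Illustrative only; real simulators model queues, resources, etc."""
    total_minutes = sum(m for m, _ in steps) * volume
    total_cost = sum(c for _, c in steps) * volume
    return total_minutes, total_cost

# Hypothetical scenario: a robot replaces a 12-minute manual check
manual  = [(5, 2.0), (12, 8.0), (3, 1.5)]
robotic = [(5, 2.0), (1, 0.5), (3, 1.5)]
print(simulate(1000, manual))   # (20000, 11500.0)
print(simulate(1000, robotic))  # (9000, 4000.0)
```

Comparing the two roll-ups is what makes the RPA business case; re-mining the deployed process then checks whether the robots actually deliver those numbers.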

The Case of the Intentional Process, with Paul Holmes-Higgin and Micha Kiener of Flowable

The last demo of the day was Flowable showing how they combined trigger, sentry, declarative and stage concepts from CMMN with microprocesses (process fragments) to contain chatbot processes. Essentially, they’re using a CMMN case folder and stages as intelligent containers for small chatbot processes; this allows, for example, separation and coordination of multiple chatbot roles when dealing with a multi-product client, such as a client that does both business banking and personal investing with the same bank. The chat needs to switch context in order to provide the required separation of information between business and personal accounts. “Intents” as identified by the chatbot AI are handled as inbound signals to the CMMN stages, firing off the associated process fragment for the correct chatbot role. The process fragment can then drive the chatbot to walk the client through a process for the requested service, such as KYC and signing a waiver for onboarding with a new investment category, in a context-sensitive manner that is aware of the customer scenario and what has happened already. The chatbot processes can even hand the chat over to a human financial advisor or other customer support person, who would see the chat history and be able to continue the conversation in a manner that is seamless to the client. The digital assistant is still there for the advisor, and can detect their intentions and privately offer to kick off processes for them, such as preparing a proposal for the client, or prevent messages that may violate privacy or regulatory compliance. The advisor’s task list contains tasks that may be the result of conversations such as this, but will also include internally created and assigned tasks.
The advisor can also provide a QR code to the client via chat that will link to a WhatsApp (or other messaging platform) version of the conversation: less capable than the full Flowable chat interface since it’s limited to text, but preferred by some clients. If the client changes context, in this case switching from private banking questions to a business banking request, the chatbot can switch seamlessly to responding to that request, although the advisor’s view would show separate private and business banking cases for regulatory reasons. Watch the video when it comes out for a great discussion at the end on using CMMN stages in combination with BPMN for reacting to events and context switching. It appears that chatbots have officially moved from “toy” to “useful”, and CMMN just got real.
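The intent-to-fragment wiring described above boils down to a dispatch keyed by the client's current context (stage) and the detected intent, with a human handover as the fallback. A minimal sketch, with all fragment and intent names invented for illustration (Flowable's actual signal/sentry mechanics are richer than a lookup table):

```python
# Registered process fragments per (context, intent); hypothetical names
FRAGMENTS = {
    ("personal", "open_investment"): "kyc_and_waiver_fragment",
    ("personal", "ask_balance"): "balance_lookup_fragment",
    ("business", "ask_balance"): "business_balance_fragment",
}

def route_intent(context, intent):
    """Fire the process fragment registered for this intent in the client's
    current context; fall back to handing the chat to a human advisor."""
    return FRAGMENTS.get((context, intent), "handover_to_advisor")

print(route_intent("personal", "open_investment"))  # kyc_and_waiver_fragment
print(route_intent("business", "ask_balance"))      # business_balance_fragment
print(route_intent("business", "file_complaint"))   # handover_to_advisor
```

Keying the dispatch on context as well as intent is what gives the regulatory separation described in the demo: the same intent from the same client lands in a different case depending on whether they're in their personal or business banking context.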