Cake

My first Sacher Torte in Vienna, 2007. Yes, if you buy a whole one it comes in a fancy box.

Lately, I’ve been thinking about cake. Not (just) because I’m headed to Vienna, home of the incomparable Sacher Torte, nor because I’ll be celebrating my birthday while attending the BPM2019 academic research conference there. No, I’ve been thinking about technical architectural layer cake models.

In 2014, an impossibly long time ago in computer-years, I wrote a paper about what one of the analyst firms was then calling Smart Process Applications (SPA). The idea is that a vendor would provide a SPA platform, then the vendor, customer or third parties would create applications using this platform — not necessarily using low-code tooling, but at least using an integrated set of tools layered on top of the customer’s infrastructure and core business systems. Instances of these applications — the actual SPAs — could then be deployed by semi-technical analysts who just needed to configure the SPA with the specifics of the business function. The paper that I wrote was sponsored by Kofax, but many other vendors provided (and still provide) similar functionality.

Layer cake diagram from my 2014 white paper on Smart Process Application platforms.

The SPA platforms included a number of integrated components to be used when creating applications: process management (BPM), content capture and management (ECM), event handling, decision management (DM), collaboration, analytics, and user experience.

The concept (or at least the name) of SPA platforms has now morphed into “digital transformation”, “digital automation” or “digital business” platforms, but the premise is the same: you buy a monolithic platform from a vendor that sits on top of your core business systems, then you build applications on top of that to deploy to your business units. The tooling offered by the platform is now more likely to include a low-code development environment, which means that the applications built on the platform may not need a separate “configure and deploy” layer above them as in the SPA diagram here. Or this same model could be used, with non-low-code applications developed in the layer above the platform, then low-code configuration and deployment of those just as in the SPA model. Due to pressure from analysts, many BPMS platforms became these all-in-one platforms under the guise of iBPMS, but some ended up with a set of tools with uneven capabilities: great functionality for their core strengths (BPM, etc.), but weaker functionality in areas that they had to partner to include or hastily build in order to be included in the analyst rankings.

The monolithic vendor platform model is great for a lot of businesses that are not in the business of software development, but some very large organizations (or small software companies) want to create their own platform layer out of best-of-breed components. For example, they may want to pick BPM and DM from one vendor, ECM from multiple others, collaboration and user experience from still another, plus event handling and analytics using open source tools. In the SPA diagram above, that turns the dark blue platform layer into “Build” rather than “Buy”, although the impact is much the same for the developers who are building the applications on top of the platform. This is the core of what I’m going to be presenting at CamundaCon next month in Berlin, with some ideas on how the market divides between monolithic and best-of-breed platforms, and how to make a best-of-breed approach work (since that’s the focus of this particular audience).

And yes, there will be cake, or at least some updated technical architectural layer cake models.

September in Europe: @BPMConf in Vienna, @Camunda in Berlin, @DecisionCAMP in Bolzano

Many people vacation in Europe in September once the holiday-making families are back home. Personally, I like to cram in a few conferences between sightseeing.

Brandenburger Tor in Berlin

Primarily, my trip is to present a keynote at CamundaCon in Berlin on September 12-13. Last time that I attended, it was one day for Camunda open source users, followed by one day for commercial customers, the latter of which was mostly in German (Ich spreche nur Deutsch, wenn Google mir hilft — I only speak German when Google helps me). Since then, they’ve combined the programs into a two-day conference that includes keynotes and tracks that appeal across the board; lucky for me, it’s all in English. I’m speaking on the morning of the first day, but plan to stay for most of the conference to hear some updates from Camunda and their customers, and blog about the sessions. Also, I can’t miss the Thursday night BBQ!

Staatsoper in Vienna

Once I had agreed to be in Berlin, I realized that the international academic BPM conference is the previous week in Vienna. I attended my first one in Milan in 2008, then Ulm in 2009, Hoboken in 2010, Clermont-Ferrand in 2011 (where I had the honor of keynoting) and Tallinn in 2012, before I fell off the wagon and have missed every one since then. This year, however, I’ll be back to check out the latest BPM-related research, see workshop presentations, and attend presentations across a number of technical and management tracks.

Waltherplatz in Bolzano

Then I saw a tweet about DecisionCAMP being held in Bolzano the week after CamundaCon, and a few tweets later, I was signed up to attend. Although I’m not focused on decision management, it’s part of what I consult on and write about, and this is a great chance to hear about some of the new trends and best practices.

Look me up if you’re going to be at any of these three conferences, or want to meet up nearby.

OpenText Enterprise World 2019 day 2: technology keynote

We started day 2 of OpenText Enterprise World with a technology keynote by Muhi Majzoub, EVP of Engineering. He opened with a list of their major releases over the last year. He highlighted the upcoming shift to cloud-first containerized deployments of the next generation of their Release 16 that we heard about in Mark Barrenechea’s keynote yesterday, and described the new applications that they have created on the OT2 platform.

We heard about and saw a demo of their Core for Federated Compliance, which allows for federated records and retention management across CMS Core, Content Suite and Documentum repositories, with future potential to connect to other (including non-OpenText) repositories. I’m still pondering the question of when they might force customers to migrate off some of the older platforms, but in the meantime, the content compliance and disposition can be managed in a consolidated manner.

Next was a demo of Documentum D2 integrated with SAP — this already existed for their other content products, but the D2 integration was a direct request from customers — allowing content imported into D2 in support of transactions such as purchase orders to be viewed as related documents from a Smart View by an SAP user. They have a strong partnership with SAP, providing enterprise-scale content management as a service on the SAP cloud, integrated with SAP S/4HANA and other applications. They are providing content management as OT2-based microservices, allowing content to be integrated anywhere in the SAP product stack.

AppWorks also made an appearance: this is OpenText’s low-code application development platform that also includes their process management capabilities. They have new interfaces for developers and users, including better mobile applications. No demo, however; given that I missed my pre-conference briefing, I’ll have to wait until later today for that.

Majzoub walked through the updates of many of the other products in their portfolio: EnCase, customer experience management, AI, analytics, eDocs, Business Network and more. They have such a vast portfolio that there are probably few analysts or customers here that are interested in all of them, but there are many customers that use multiple OpenText products in concert.

He finished up with more on OT2, positioning it as a platform and repository of services for building applications in any of their product areas. These services can be consumed by any application development environment, whether their AppWorks low-code platform or more technical development tools such as Java. An interesting point made in yesterday’s keynote challenges the idea of non-technical users as “citizen developers”: they see low-code as something that is used by [semi-]technical developers to build applications. The reality of low-code may finally be emerging.

They are featuring six new cloud-based applications built on OT2 that are available to customers now: Core for Capital Projects, Core for Supplier Exchange, Core Enhances Integration with CSP, Core Capture, Core for SAP SuccessFactors, and Core Experience Insights. We saw a demo that included the Capital Projects and Supplier Exchange applications, where information was shared and integrated between a project manager on a project and a supplier providing documentation on proposed components. The Capital Projects application includes analytics dashboards to track progress on deliverables and issues.

Good start to the day, although I’m looking forward to more of a technical drill-down on AppWorks and OT2.

OpenText Enterprise World 2019 day 1 keynote

OpenText is holding their global Enterprise World back in Toronto for the third year in a row (meaning that they’ll probably move on to another city for next year — please not Vegas) and I’m here for a couple of days for briefings with the product teams and to sit in on some of the sessions.

I attended a session earlier on connecting content and process that was mostly market research presented by analysts John Mancini and Connie Moore — some interesting points from both of them — before going to the opening keynote with CEO/CTO Mark Barrenechea and a few guests including Sir Tim Berners-Lee.

Barrenechea started with some information about where OpenText is at now, including their well-ranked positions in analyst rankings for content services platforms (Content Services), supply chain commerce networks (Business Network) and digital process automation (AppWorks). He believes that we’re “beyond digital”, with a focus on information rather than automation. He announced cloud-first versions of their products coming in April 2020, although some products will also be available on premise. Their OT2 Cloud Platform will be sold on a service model; I’m not sure if it’s a full microservice implementation, but it sounds like it’s at least moving in that direction. They’ve also announced a new partnership with Google, with Google Cloud being their preferred platform for customers and the integration of Google Services (such as machine learning) into OpenText EIM; this is on a similar scale to what we’ve seen between Alfresco and Amazon AWS.

The keynote finished with a talk by Sir Tim Berners-Lee, inventor of the World Wide Web, on how the web started, how it’s now used and abused, and what we all can do to make it better.

What’s hot this summer? @Camunda Day NYC 2019

Robert Gimbel of Camunda

I popped down to a steamy New York today for the half-day local Camunda Day, which was a good opportunity to see an update on their customer-facing messaging and also hear from some of their customers. It was a packed agenda, starting with Robert Gimbel (Chief Revenue Officer) on best practices for successful Camunda projects. Since he’s in charge of sales, some amount of this was about why to choose the enterprise edition over the community edition, but there were lots of good insights for all types of customers, even applicable to other BPM products. Although he characterized the community edition as suited to lower complexity and business criticality, I know there are Camunda customers using the open source version on mission-critical processes; however, these organizations have made a larger developer commitment to have in-house experts who can diagnose and fix problems as required.

Gimbel outlined the four major types of projects, which are similar to those that I’ve seen with most enterprise clients:

  • Automate manual work
  • Migrate processes from other systems, whether legacy BPMS, an embedded workflow within another system, or a homegrown workflow system
  • Add process management to a software product that has no (or inflexible) workflows, such as an accounts payable system
  • Provide a centralized workflow infrastructure as part of a digital automation platform, which is what I talked about in my bpmNEXT keynote

They are seeing a typical project timeline of 3-9 months from initiation to go-live, with the understanding that the initial deployment will continue to be analyzed and improved in an agile manner. He walked through the typical critical success factors for projects, which include “BPMN and DMN proficiency for all participants”: something that is not universally accepted by many BPM vendors and practitioners. I happen to agree that there is a lot of benefit in everyone involved learning some subset of BPMN and DMN; it’s a matter of what that subset is and how it’s used.

We had a demo by Joe Pappas, a New York-based senior technical consultant, which walked us through using Cawemo (now free!) for collaborative modeling by the business, then importing, augmenting, deploying and managing an application that included both a BPMN and a DMN model. He showed how to detect and resolve problems in operational systems, and finished with building new reports and dashboards to display process analytics.
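
For anyone who hasn’t seen what that import-and-deploy step looks like in code, here’s a minimal sketch against the Camunda 7 Java API; the model file names, process key and variables are hypothetical stand-ins for whatever was modeled in Cawemo.

```java
import java.util.Map;

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.runtime.ProcessInstance;

public class DeployAndRun {
    public static void main(String[] args) {
        // Boot the engine from camunda.cfg.xml on the classpath
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();

        // Deploy the BPMN model exported from the modeler together with its DMN decision
        // (file names are hypothetical)
        engine.getRepositoryService().createDeployment()
            .name("order-handling")
            .addClasspathResource("order-handling.bpmn")
            .addClasspathResource("order-routing.dmn")
            .deploy();

        // Start an instance by the process definition key declared in the BPMN file
        ProcessInstance instance = engine.getRuntimeService()
            .startProcessInstanceByKey("orderHandling",
                Map.of("orderValue", 1200, "customerType", "retail"));

        System.out.println("Started instance " + instance.getId());
    }
}
```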

John Fontaine, Capital One

The first half of the morning finished with a presentation from John Fontaine, Master Software Engineer at Capital One (a Camunda customer), on organizing a Camunda hackathon. As an aside, this is a great topic for getting a customer involved who can’t talk directly about their BPM implementation due to privacy or intellectual property concerns. They had a 2-day event with 42 developers in 6 teams, plus product and process owners/managers — the latter of which are a bit less common as hackathon participants, but everyone was expected to work collaboratively and have fun.

Capital One started with a problem brief in terms of the business case and required technical elements, and a specific judging rubric for evaluating the projects. Since many of the participants were relatively new to Camunda and BPMN, they included some playful uses of BPMN such as the agenda. The first morning was spent on ideation and solution selection, with the afternoon spent creating the BPMN models and application wireframes. On the second day, the morning was spent on completing the coding and preparing their demo, with the afternoon for the team demos.

Fontaine finished up with lessons learned across all aspects of the hackathon, from logistics and staffing to attendee recruiting and organization, agenda pacing and milestones, judging, and resource materials such as code samples. Their goal was not to create applications ready for deployment, but a couple of the teams created applications that have become a trigger for ongoing projects.

After the break, we heard from Bernd Ruecker, co-founder of Camunda and now in the role of developer evangelist, on workflow automation in a microservices architecture. He has been writing and speaking on this topic for a while now, including some key points that run counter to many BPM vendors’ views of microservices, and even counter to some of Camunda’s previous views:

  • Every workflow must be owned by one microservice, and workflows live inside service boundaries. This means no monolithic (end-to-end) BPMN models for execution, although the business likely creates higher-level non-executable models that show an end-to-end view.
  • Event-driven architecture for passing information between services in a decoupled manner, although it’s necessary to keep a vision of an overall flow to avoid unexpected emergent behaviors. This can still be accomplished with messaging, but you need to think about some degree of coupling by emitting commands rather than just events: a balance of orchestration and choreography (a rough sketch of the command/event distinction follows this list).
  • Microservices are, by their nature, distributed systems; however, there is usually a need for some amount of stateful orchestration, such as is provided by a BPM engine.
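
To make the command-versus-event distinction concrete, here is a rough sketch using Kafka as the messaging layer; the topic names, payloads and broker address are invented for illustration and not taken from the talk.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderMessages {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Event: announces a fact; no assumption about who reacts (choreography)
            producer.send(new ProducerRecord<>("order-events", "order-42",
                "{\"type\":\"OrderPlaced\",\"orderId\":\"42\"}"));

            // Command: directed at a specific service, expressing intent (orchestration)
            producer.send(new ProducerRecord<>("payment-commands", "order-42",
                "{\"type\":\"RetrievePayment\",\"orderId\":\"42\",\"amount\":99.90}"));
        }
    }
}
```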

From Bernd Ruecker’s blog post

Ruecker talked about the different ways of communication — message/event bus versus REST-ish command-type events between services versus using a BPM engine as a work distributor for external services — with the note that it’s possible to do good microservices architecture with any of these methods. He notes that the last scenario (using a BPM engine as the overall service orchestrator) is not necessarily best practice; he is looking more at the use of the engine at a lower granularity, where there is a BPM engine encapsulated in each service that requires it. Check out his blog post on microservices workflow automation for more details.
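
The “engine as work distributor” option maps to Camunda’s external task pattern, where the engine queues work on a topic and a worker inside the owning service polls for it. Here’s a minimal sketch with the Java external task client, assuming a local engine at the default REST endpoint and a hypothetical “charge-payment” topic.

```java
import org.camunda.bpm.client.ExternalTaskClient;

public class PaymentWorker {
    public static void main(String[] args) {
        // The worker polls the engine over REST; the engine stays the work distributor,
        // while the business logic lives inside this (micro)service.
        ExternalTaskClient client = ExternalTaskClient.create()
            .baseUrl("http://localhost:8080/engine-rest")   // assumed local engine
            .asyncResponseTimeout(10_000)                    // long polling
            .build();

        client.subscribe("charge-payment")                   // hypothetical topic name
            .lockDuration(30_000)
            .handler((task, taskService) -> {
                String orderId = task.getVariable("orderId");
                // ... call the payment service for this order here ...
                taskService.complete(task);
            })
            .open();
    }
}
```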

The (half) day finished with Frederic Meier, Camunda’s head of sales for North America, in conversation with Michael Kirven, VP of IT Business Solutions at People’s United Bank about their Camunda implementation in lending, insurance, wealth management and other business applications. They opened it up to the audience of mostly financial services customers to talk about their use cases, which included esoteric scenarios such as video processing (passing a video through transcoding and other services), and more mainstream examples such as a multi-account closure. This gave an opportunity for prospects and less-experienced customers to ask questions of the battle-hardened veterans who have deployed multiple Camunda applications.

Great content, and definitely worthwhile for the 40-50 people in attendance.

bpmNEXT 2019 wrapup: coverage from others plus my keynote video

Finally getting around to going through all of the other coverage of bpmNEXT, and reviewing the video of my keynote.

This is the first time that I’ve presented these concepts in this presentation format, and I definitely have ideas about how to make this clearer: there are some good use cases to include in more detail, plus counter-use cases where a microservices approach doesn’t fit.

All of the presentation videos are now available online; check out the entire playlist here.

Kris Verlaenen from Red Hat, in addition to presenting his own session on automating human-centric processes with machine learning, posted his impressions in five posts. He also went back and updated them with the videos of each session:

  • Day 1, covering the two keynotes by Nathaniel Palmer and Jim Sinur, and the initial demo session by Appian.
  • Day 1 Part 2, covering demo sessions by BP Logix, Minit, Cognitive Technology, Kissflow, Wizly and IBM.
  • Day 2, covering my keynote, demo sessions by Trisotech and Method & Style, and a panel on decision services and machine learning.
  • Day 2 Part 2, covering demo sessions by Bonitasoft, Signavio and Flowable, plus a panel on the value proposition of intelligent automation.
  • Day 3, covering demo sessions by Serco, Fujitsu, Red Hat (his own presentation) and SAP, wrapping up with the discussion on the DMN TCK.

Great coverage, since he and I sometimes see different things in the demo and it’s good to read someone else’s views.

Keith Swenson wrote a summary post for the three keynotes including some detailed criticisms of my keynote; I’ll definitely be reviewing these for improving the presentation and reworking how I present some of the concepts. He also wrote a post about the DMN TCK (technology compatibility kit) efforts, now three years in, and some of the success that they’re seeing in helping to standardize the use of DMN.

Another great year of bpmNEXT.

bpmNEXT 2019 demo: intelligent BPM by @SAP plus DMN TCK working group

ML, Conversational UX, and Intelligence in BPM, with Andre Hofeditz and Seshadri Sreeniva of SAP plus DMN TCK update

We’re at the end of bpmNEXT for another year, and we have one last demo. Seshadri showed a demo of their intelligent BPM for an employee onboarding process (integrated with SuccessFactors), where the process can vary widely depending on level, location and other characteristics. This exposes the pre-defined business processes in SuccessFactors, with configuration tools for customizing the process by adding and modifying building blocks to create a process variant for a special case. Decisions involved in the processes can also be configured, as well as dashboards for viewing the processes in flight. Extension workflows can be created by selecting a standard process “recipe” from a SuccessFactors library, then configuring it for the specific use; he showed an example of adding an equipment provisioning extension as a service task to one of the top-level process models.

He demonstrated a voice-controlled chatbot interface for interacting with processes, allowing a manager to ask what’s happening for them today, and get back information on the new employee onboardings in progress, expected delays, and a link to their task inbox. Tasks can be displayed in the chat interface, and approvals accepted via voice or typed chat. The chatbot uses AI to determine the intent of the input and provide a precise and accurate response, and uses ML to provide predictions on the time required to complete processes that are in flight when asked about completion times and possible delays. The chatbot can also make decision table-based recommendations, such as creating an IT ticket to assign roles to the new employee and find a desk location.

He showed the interface for designing and training the bot capabilities, where a designer can create a new conversational AI skill based on conditions, triggers and actions to take. This is currently a lab preview, but will be rolled out as part of their cloud platform workflow (not unique to the SuccessFactors environment) in the coming months.

Decision Model and Notation Technology Compatibility Kit update with Keith Swenson

We finished off bpmNEXT 2019 with an update on the DMN TCK, that is, the set of tools provided for free for vendors to test their implementation of DMN. The TCK provides DMN 1.2 models plus sets of input data and expected results; a runner app calls the vendor engine, compares the results and exports them as a CSV file to show compliance. In the three years since this was kicked off, there are eight vendors showing results and over 1000 test cases, with another vendor about to join the list and add another 600 test cases. The test cases are determined through manual examination of the standard specification, so it represents a significant amount of work to create this robust set of compliance tests. The TCK group is not creating the standard, but testing it; however, Keith identified some opportunities for the TCK to be more proactive in defining some things, such as error handling behavior, that the revision task force (RTF) at OMG is unlikely to address in the near term. He also pointed out that there are many more vendors claiming DMN compatibility than have demonstrated that compatibility with the TCK.
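
The runner itself is conceptually simple. The sketch below is not the real TCK code, just the shape of it: a vendor-specific adapter evaluates each decision, and the runner compares actual against expected results and writes a compliance CSV. The interface, record fields and CSV columns are all illustrative.

```java
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class TckStyleRunner {

    /** Stand-in for the vendor-specific adapter that a real runner would plug in. */
    interface DmnEngine {
        Object evaluate(Path dmnModel, String decisionName, Map<String, Object> inputs);
    }

    /** One test case: a model, a decision to evaluate, the inputs, and the expected result. */
    record TestCase(Path model, String decision, Map<String, Object> inputs, Object expected) {}

    static void run(DmnEngine engine, List<TestCase> cases, Path csvOut) throws Exception {
        try (PrintWriter csv = new PrintWriter(Files.newBufferedWriter(csvOut))) {
            csv.println("model,decision,result");
            for (TestCase tc : cases) {
                String result;
                try {
                    Object actual = engine.evaluate(tc.model(), tc.decision(), tc.inputs());
                    result = Objects.equals(actual, tc.expected()) ? "SUCCESS" : "FAILURE";
                } catch (Exception e) {
                    result = "ERROR";  // e.g. an unsupported FEEL construct in the model
                }
                csv.printf("%s,%s,%s%n", tc.model(), tc.decision(), result);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Dummy engine that always returns "APPROVED", just to exercise the runner
        DmnEngine dummy = (model, decision, inputs) -> "APPROVED";
        run(dummy,
            List.of(new TestCase(Path.of("loan-approval.dmn"), "approval",
                Map.of("score", 720), "APPROVED")),
            Path.of("results.csv"));
    }
}
```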

That’s it for bpmNEXT 2019 – always feels like it’s over too soon, yet I leave with my brain stuffed full of so many good ideas. We’ve done the wrapup survey and are heading off to lunch, but the results on Best in Show won’t come out until I’m already on my way to the airport.

bpmNEXT 2019 demos focused on creating smarter processes: decisions, RPA, emergent processes and machine learning with Serco, @FujitsuAmerica and @RedHat

A Well-Mixed Cocktail: Blending Decision and RPA Technologies in 1st Gen Design Patterns, with Lloyd Dugan of Serco

Lloyd showed a scenario of using decision management to determine whether a step could be done by RPA or a human operator, then modeling the RPA “operator” as a role (performer) for a specific task and dynamically assigning work – instead of refactoring the BPMS process to include specific RPA robot service tasks. This was drawn from an actual case study that uses Sapiens for decision management and Appian for case/process management, with Kapow for RPA. The focus here is on the work assignment decisioning, since the real-world scenario is managing work for thousands of heads-down users, and the redirection of work to RPA can yield huge overall cost savings and efficiency improvements even for small tasks such as logging in to the multiple systems required for a user to do work. The RPA flow was created, in part, from the procedural documentation wiki that is provided to train and guide users, and if the robot can’t work a task through to completion then it is passed off to a human operator. The “demo” was actually a pre-recorded screen video, so more like a presentation with a few dynamic bits, but it gave an insight into how DM and RPA can be added to an existing complex process in a BPMS to improve efficiency and intelligence. Using this method, work can gradually be carved off and performed by robots (either completely or partially) without significantly refactoring the BPMS process for specific robot tasks.
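
The work-assignment decisioning is the interesting part of the pattern. As a toy stand-in for the decision service he described (in the real case study this is an externalized Sapiens decision, not hard-coded Java), the routing logic looks something like the sketch below; the task attributes and thresholds are invented for illustration.

```java
import java.util.Map;

public class WorkAssignmentDecision {

    enum Performer { RPA_ROBOT, HUMAN }

    /**
     * Route a task to the robot "performer" only when it matches what the bot has
     * been trained on; otherwise it goes to a human operator. A real implementation
     * would externalize this as a decision table rather than code.
     */
    static Performer assign(Map<String, Object> task) {
        boolean knownTaskType = "system-login".equals(task.get("taskType"))
                || "data-entry".equals(task.get("taskType"));
        boolean structuredInput = Boolean.TRUE.equals(task.get("structuredInput"));
        boolean withinBotCapacity = (int) task.get("queueDepth") < 500;   // invented threshold

        return (knownTaskType && structuredInput && withinBotCapacity)
                ? Performer.RPA_ROBOT
                : Performer.HUMAN;
    }

    public static void main(String[] args) {
        System.out.println(assign(Map.of(
            "taskType", "system-login", "structuredInput", true, "queueDepth", 120)));
        // -> RPA_ROBOT
    }
}
```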

Emergent Synthetic Process, with Keith Swenson of Fujitsu

Keith’s demo is based on the premise that although business processes can appear to be simple on the surface when you look at that original clean model, the reality is considerably messier. Instead of predefining a process and forcing workers to follow it in order, he showed defining service descriptions as tasks with their required participants and predecessor tasks. From that, processes can be synthesized at any point during execution that meet the requirements of the remaining tasks; this means that any given process instance may have the tasks in a different order and still be compliant. He showed a use case of a travel authorization process from within Fujitsu, where a travel request automatically generates an initial process – all processes are a straight-through series of steps – but any changes to the parameters of the request may modify the model. This is all based on satisfying the conditions defined by the dependency graph (e.g., the departmental manager requires that the manager approve before they can approve it), starting with the end point and chaining backwards through the graph to create the series of steps that have to be performed. Different divisions had different rules around their processes; specifically, the Mexico group did not have departmental levels, so did not have one of the levels of approval. Adding a step to a process is a matter of adding it as a prerequisite for another task; the new step will then be added to the process and the underlying dependency graph. As an instance executes, the completed tasks become fixed as history, but the future tasks can change if there are changes to the task dependencies or participants. This methodology allows multiple stakeholders to define and change service descriptions without having a single process owner controlling the end-to-end process orchestration, and allows new and in-flight processes to generate the optimal path forward.
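
The backward chaining he described is essentially a walk over the dependency graph, starting at the goal and pulling in prerequisites until a compliant ordering falls out. The sketch below illustrates the idea (it is not Fujitsu’s implementation, and the travel-authorization task names are invented).

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SyntheticProcess {

    /**
     * Chain backwards from the goal task through its prerequisites, then emit the
     * tasks in an order that satisfies every dependency. This is a plain depth-first
     * topological sort over the reachable subgraph; no cycle detection in this sketch.
     */
    static List<String> synthesize(String goal, Map<String, List<String>> prerequisites) {
        Set<String> visited = new LinkedHashSet<>();
        List<String> ordered = new ArrayList<>();
        visit(goal, prerequisites, visited, ordered);
        return ordered;
    }

    private static void visit(String task, Map<String, List<String>> prereqs,
                              Set<String> visited, List<String> ordered) {
        if (!visited.add(task)) {
            return;                                      // already handled (shared prerequisite)
        }
        for (String dep : prereqs.getOrDefault(task, List.of())) {
            visit(dep, prereqs, visited, ordered);       // prerequisites come first
        }
        ordered.add(task);
    }

    public static void main(String[] args) {
        // Hypothetical travel-authorization dependencies, loosely after the demo
        Map<String, List<String>> deps = Map.of(
            "book-travel", List.of("divisional-approval"),
            "divisional-approval", List.of("departmental-approval"),
            "departmental-approval", List.of("submit-request"));
        System.out.println(synthesize("book-travel", deps));
        // -> [submit-request, departmental-approval, divisional-approval, book-travel]
    }
}
```

Removing a level of approval, as in the Mexico group example, is just a matter of changing the prerequisite entries; the synthesized order adapts without anyone editing an end-to-end model.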

Automating Human-Centric Processes with Machine Learning, with Kris Verlaenen of Red Hat

Kris demonstrated working towards an automated process using machine learning (a random forest model) in small incremental steps: first augmenting data, then recommending the next step, and finally learning from what happened in order to potentially automate a task. The scenario was provisioning a new laptop inside an organization through their IT department, including approval, ordering and deployment to the employee. He started with the initial manual process for the first part of this – order by employee, quote provided by vendor, and approval by manager – and looked at how ML could monitor this process over many execution instances, then start providing recommendations to the manager on whether to approve a purchase based on parameters such as the requester and the laptop brand. Very consistent history will result in high confidence levels for the recommendation, although more realistic history may have lower confidence levels; the manager can be presented with the confidence level and the parameters on which it was based, along with the recommendation itself. In case management scenarios with dynamic task creation, the ML can also make recommendations about creating tasks at a certain stage, such as creating a new task to notify the legal department when the employee is in a certain country. Eventually, this can lead to recommendations about how to change the initial process/case model to encode that knowledge as new rules and activities, such as adding ad hoc tasks for the tasks that were being added manually, triggered based on new rules detected in the historical instances. Kris finished with the caveat that machine learning algorithms can be biased by the training data and may not learn the correct behavior; this is why they look at using ML to assist users before incorporating this learned behavior into the pre-defined process or case models.
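
To make the recommendation-with-confidence idea concrete, the sketch below is a deliberately naive stand-in (a frequency count over matching historical instances) rather than the random forest model used in the demo, but it produces the same shape of output: a recommendation plus a confidence level that reflects how consistent the history is. The names and fields are invented.

```java
import java.util.List;

public class ApprovalRecommender {

    record PastDecision(String requester, String laptopBrand, boolean approved) {}
    record Recommendation(boolean approve, double confidence) {}

    /**
     * Toy stand-in for the trained model: look at historical instances with the same
     * parameters and report the majority outcome plus how lopsided the history is.
     * A real model generalizes across parameters instead of requiring exact matches.
     */
    static Recommendation recommend(List<PastDecision> history,
                                    String requester, String laptopBrand) {
        List<PastDecision> similar = history.stream()
            .filter(d -> d.requester().equals(requester) && d.laptopBrand().equals(laptopBrand))
            .toList();
        if (similar.isEmpty()) {
            return new Recommendation(false, 0.0);   // nothing to go on
        }
        long approvals = similar.stream().filter(PastDecision::approved).count();
        double approvalRate = (double) approvals / similar.size();
        boolean approve = approvalRate >= 0.5;
        double confidence = approve ? approvalRate : 1.0 - approvalRate;
        return new Recommendation(approve, confidence);
    }

    public static void main(String[] args) {
        List<PastDecision> history = List.of(
            new PastDecision("alice", "thinkpad", true),
            new PastDecision("alice", "thinkpad", true),
            new PastDecision("alice", "thinkpad", false));
        System.out.println(recommend(history, "alice", "thinkpad"));
        // -> approve=true with confidence of roughly 0.67
    }
}
```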

bpmNEXT 2019 demos: microservices, robots and intentional processes with @Bonitasoft @Signavio and @Flowable

BPM, Serverless and Microservices: Innovative Scaling on the Cloud with Philippe Laumay and Thomas Bouffard of Bonitasoft

Turns out that my microservices talk this morning was a good lead-in to a few different presentations: Bonitasoft has moved to a serverless microservices architecture, and they presented the pros and cons of this approach. Their key reason was scalability, especially where platform load is unpredictable. The demo showed an example of starting a new case (process instance) in a monolithic model under no load conditions, then the same with a simulated load, where the user response in the new case was significantly degraded. They then demoed the same scenario but scaling the BPM engine by deploying it multiple times in componentized “pods” in Kubernetes, where Kubernetes can automatically scale up further as load increases. This time, the user experience on the loaded system was considerably faster. This isn’t a pure microservices approach in that they are scaling a common BPM engine (hence a shared database even if there are multiple process servers), not embedding the engine within the microservices, but it does allow for easy scaling of the shared server platform. This requires cluster management for communicating between the pods and keeping state in sync. The final step of the demo was to externalize the execution completely to AWS Lambda by creating a BPM Lambda function for serverless execution.

Performance Management for Robots, with Mark McGregor and Alessandro Manzi of Signavio

Just like human performers, robots in an RPA scenario need to have their performance monitored and managed: they need the right skills and training, and if they aren’t performing as expected, they should be replaced. Signavio does this by using their Process Intelligence (process mining) to discover potential bottleneck tasks to apply RPA and create a baseline for the pre-RPA processes. By identifying tasks that could be automated using robots, Alessandro demonstrated how they could simulate scenarios with and without robots that include cost and time. All of the simulation results can be exported as an Excel sheet for further visualization and analysis, although their dashboard tools provide a good view of the results. Once robots have been deployed, they can use process mining again to compare against the earlier analysis results as well as seeing the performance trends. In the demo, we saw that the robots at different tasks (potentially from different vendors) could have different performance results, with some requiring either replacement, upgrading or removal. He finished with a demo of their “Lights-On” view that combines process modeling and mining, where traffic lights linked to the mining performance analysis are displayed in place in the model in order to make changes more easily.

The Case of the Intentional Process, with Paul Holmes-Higgin and Micha Kiener of Flowable

The last demo of the day was Flowable showing how they combined trigger, sentry, declarative and stage concepts from CMMN with microprocesses (process fragments) to contain chatbot processes. Essentially, they’re using a CMMN case folder and stages as intelligent containers for small chatbot processes; this allows, for example, separation and coordination of multiple chatbot roles when dealing with a multi-product client, such as a banking client that does both business banking and personal investments with the bank. The chat needs to switch context in order to provide the required separation of information between business and personal accounts. “Intents” as identified by the chatbot AI are handled as inbound signals to the CMMN stages, firing off the associated process fragment for the correct chatbot role. The process fragment can then drive the chatbot to walk the client through a process for the requested service, such as KYC and signing a waiver for onboarding with a new investment category, in a context-sensitive manner that is aware of the customer scenario and what has happened already.

The chatbot processes can even hand the chat over to a human financial advisor or other customer support person, who would see the chat history and be able to continue the conversation in a manner that is seamless to the client. The digital assistant is still there for the advisor, and can detect their intentions and privately offer to kick off processes for them, such as preparing a proposal for the client, or prevent messages that may violate privacy or regulatory compliance. The advisor’s task list contains tasks that may be the result of conversations such as this, but will also include internally created and assigned tasks. The advisor can also provide a QR code to the client via chat that links to a WhatsApp (or other messaging platform) version of the conversation: less capable than the full Flowable chat interface since it’s limited to text, but preferred by some clients. If the client changes context, in this case switching from private banking questions to a business banking request, the chatbot can switch seamlessly to responding to that request, although the advisor’s view would show separate private and business banking cases for regulatory reasons.

Watch the video when it comes out for a great discussion at the end on using CMMN stages in combination with BPMN for reacting to events and context switching. It appears that chatbots have officially moved from “toy” to “useful”, and CMMN just got real.

bpmNEXT 2019 demos: Appian

Usually I blog about the demos in groups, but Malcolm Ross of Appian was the lone demo between the panel and lunch, so he gets his own post.

As a reminder, demos are a five-minute Ignite-style presentation (20 slides with an auto-advance every 15 seconds) followed by a live demo and Q&A. Malcolm had a lot to say, however, so he had five minutes of slides followed by another four minutes of talk in front of a looping video before he started the actual demo.

Malcolm’s demo is on realigning BPM in the age of intelligent automation, in the context of different automation technologies (RPA, AI, BPM, integration) that are being sold as separate solutions into organizations. Not surprisingly, he positions BPM as the core technology and integration platform, but they also OEM Blue Prism’s RPA into their product suite and can integrate with many other web services to take part in the automation. He demonstrated an invoice processing application: he uploaded an invoice PDF, the data was captured by an RPA bot, and BPM handled exceptions when the bot couldn’t complete its task, as well as providing overall monitoring of the process including the bot tasks. He walked through some of their design-time experience that is focused on integration, showing how connections to services from Blue Prism, Automation Anywhere, AWS machine learning, Google NLP and others can be used to create integration points that can then be called from their BPM processes. Good use case of using BPM and RPA together – they are much more complementary than competitive – by allowing RPA tasks to be orchestrated and monitored as part of a larger BPM process. He also had a great analogy when asked about deciding when to use RPA versus BPM: RPA is like a pain reliever that provides temporary relief, while BPM (and SOA) is like an antibiotic that cures the underlying problem.