bpmNEXT 2018: Bonitasoft, KnowProcess

We’re in the home stretch here at bpmNEXT 2018, day 3 has only a couple of shorter demo sessions and a few related talks before we break early to head home.

When Artificial Intelligence meets Process-Based Applications, Bonitasoft

Nicolas Chabanoles and Nathalie Cotte from Bonitasoft presented on their integration of AI with process applications, specifically predictive analytics for automating decisions and making recommendations. They use an extension of process mining to examine case data and activity times in order to predict, for example, whether a specific case will finish on time; in the future, they hope to be able to accurately predict the end time for individual cases for better feedback to internal users and customers. The demo was a loan origination application built on Bonita BPM, which was fairly standard, with the process mining and machine learning coming in with how the processes are monitored. Log data is polled from the BPM system into an Elasticsearch database, then machine learning is applied to the instance data; configuring the machine learning currently requires only specifying an expected completion time for each instance type, which is used to build the prediction model. At that point, predictions can be made for in-flight instances as to whether each one will complete on time, and for those predicted to be late, the probability of on-time completion — for example, if key documents are missing, or the loan officer is not responding quickly enough to review requests. The loan officer is shown which tasks are likely to be causing the late prediction, and completing those tasks will change the prediction for that case. Priority for cases can be set dynamically based on the prediction, so that cases more likely to be late are set to higher priority in order to be worked earlier. Future plans are to include more business data and human resource data, which could be used to explicitly assign late cases to individual users. The use of process mining algorithms, rather than simpler prediction techniques, will allow suggestions on state transitions (i.e., which path to take) in addition to just setting instance priority.
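To make the idea concrete, here’s a minimal sketch of that kind of on-time prediction, assuming completed cases have been reduced to a few numeric features; the feature names and the use of scikit-learn are my own illustration, not Bonitasoft’s implementation.

```python
# Hypothetical sketch: train a classifier on completed cases to predict
# whether in-flight cases will finish on time. Feature names are invented.
from sklearn.ensemble import RandomForestClassifier

# Each historical case: [elapsed_hours, docs_missing, officer_response_hours]
X_train = [
    [12.0, 0, 2.0],
    [48.0, 2, 24.0],
    [20.0, 1, 6.0],
    [72.0, 3, 30.0],
]
y_train = [1, 0, 1, 0]  # 1 = finished within the expected completion time

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score an in-flight case and bump its priority if it looks late
in_flight = [[36.0, 2, 18.0]]
p_on_time = model.predict_proba(in_flight)[0][1]
priority = "high" if p_on_time < 0.5 else "normal"
print(f"P(on time) = {p_on_time:.2f}; priority set to {priority}")
```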

Understanding Your Models and What They Are Trying To Tell You, KnowProcess

Tim Stephenson of KnowProcess spoke about models and standards, particularly applied to their main use case of marketing automation and customer onboarding. Their ModelMinder application ingests BPMN, CMMN and DMN models, and can be used to search the models for activities, resources and other model components, as well as to identify and understand extensions such as calling a REST service from a BPMN service task. The demo showed a KnowProcess repository initially through the search interface; searching for “loan” or “send memo” returned links to models containing those terms, and the model (process, case or decision) can be displayed directly in their viewer with the location of the search term highlighted. The repository can be stored as files, or a live engine can be indexed directly. He also showed an interface to Slack that uses a model-minder bot to handle natural language requests about certain model types and content, such as which resources perform the work specified in the models, or which models call a specific subprocess, providing a link directly back to the models in the KnowProcess repository. Finishing up the demo, he showed how the model search and reuse is attached to a CRM application, so that a marketing person sees the models as functions that can be executed directly within their environment.
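As a rough illustration of that kind of model search, here’s a sketch that scans a BPMN file for flow elements whose names contain a term; the file name is hypothetical, and this is my own simplification rather than how ModelMinder is built.

```python
# Illustrative sketch of searching a BPMN model for a term.
import xml.etree.ElementTree as ET

BPMN_NS = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"

def find_elements(bpmn_file: str, term: str):
    """Return (id, name, element type) for every element whose name contains term."""
    root = ET.parse(bpmn_file).getroot()
    hits = []
    for elem in root.iter():
        name = elem.get("name", "")
        if term.lower() in name.lower():
            element_type = elem.tag.replace(BPMN_NS, "")
            hits.append((elem.get("id"), name, element_type))
    return hits

# Hypothetical file name for illustration
for hit in find_elements("loan_origination.bpmn", "send memo"):
    print(hit)  # e.g. ('Task_1', 'Send Memo to Customer', 'sendTask')
```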

Instead of a third demo, we had a more free-ranging discussion that had started yesterday during one of the Q&As about a standardized modeling language for RPA, led by Max Young from Capital BPM and with contributions from a number of others in the audience (including me). Good starting point, but there’s obviously still a lot of work to do in this direction, starting with getting some of the major RPA vendors on board with standardization efforts. The emerging ideas seem to center around defining a grammar for the activities that occur in RPA (e.g., extract data from an Excel file, write data to a certain location in an application screen), then an event and flow language to piece together those primitives that might look something like BPMN or CMMN. I see this as similar to the issue of defining page flows, which are often done as a black box function that is performed within a human activity in a BPMN flow: exposing and standardizing that black box is what we’re talking about. This discussion is a prime example of what makes bpmNEXT great, and keeps me coming back year after year.
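Purely as a speculative sketch of what such a grammar might look like — this reflects no actual vendor notation — the primitives could be data objects with a verb, a target locator and an optional value, sequenced by a simple flow:

```python
# Speculative sketch of an RPA "grammar": a vocabulary of primitive actions
# plus a simple flow to sequence them. All names here are invented.
from dataclasses import dataclass

@dataclass
class Action:
    verb: str        # e.g. "read", "write", "click"
    target: str      # application + locator, e.g. "excel:invoices.xlsx:Sheet1!B2"
    value: str = ""  # payload for write-style actions; may reference context

flow = [
    Action("read", "excel:invoices.xlsx:Sheet1!B2"),
    Action("write", "erp:InvoiceForm#amount", value="{last_read}"),
    Action("click", "erp:InvoiceForm#submit"),
]

def run(flow):
    context = {"last_read": ""}
    for step in flow:
        if step.verb == "read":
            context["last_read"] = f"<value from {step.target}>"  # stubbed
        resolved = step.value.format(**context) if step.value else ""
        print(step.verb, step.target, resolved)

run(flow)
```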

bpmNEXT 2018: Intelligence and robots with ITESOFT, K2, BeeckerCo

We’re finishing up day 2 of bpmNEXT with a last section of demos.

Robotics, Customer Interactions and BPM, ITESOFT

Francois Bonnet from ITESOFT presented on customer interactions and automation, and the use of BPMN-driven robots to guide customer experience. In a first for bpmNEXT, the demo included an actual physical human-shaped robot (3D-printed from an open source project) that can do voice recognition, text-to-speech, video capture, movement tracking and facial recognition. The robot’s actions were driven by a BPMN process model, with activities such as searching for humans, recognizing faces, speaking phrases, processing input and making branching decisions. The process model was shown simultaneously, with the execution path updated in real time as it moved through the process; robot actions appeared as service activities. The scenario was the robot interacting with a customer in a mobile phone shop, recognizing the customer or training a new facial recognition, asking what service is required, then stepping through acquiring a new phone and plan. He walked through how the BPMN model was used, with both synchronous and asynchronous services for controlling the robot and invoking functions such as classifier training, and human activities for interacting with the customer. Interesting use of BPMN as a driver for real robot actions, showing integration of recognition, RPA, AI, image capture and business services such as customer enrolment and customer ID validation.
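As a thought experiment on what “BPMN driving a robot” might look like under the hood, here’s a tiny dispatcher that maps service task IDs to robot functions; the task names and handlers are entirely my own invention, not ITESOFT’s design.

```python
# Hypothetical sketch: the process engine reaches a service activity and
# dispatches to a robot action by task ID. All names are invented.
def speak(phrase: str):
    print(f"robot says: {phrase}")

def search_for_humans(ctx: dict):
    print("robot: scanning the room for a face...")

def recognize_face(ctx: dict):
    print("robot: matching face against known customers...")

HANDLERS = {
    "SearchForHumans": search_for_humans,
    "RecognizeFace": recognize_face,
    "GreetCustomer": lambda ctx: speak(f"Hello {ctx.get('customer', 'there')}!"),
}

def execute_service_task(task_id: str, context: dict):
    """Called by the process engine when it reaches a robot service activity."""
    HANDLERS[task_id](context)

execute_service_task("GreetCustomer", {"customer": "Alice"})
```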

The Future of Voice in Business Process Automation, K2

Brandon Brown from K2 looked at a more focused use case for voice recognition, and at approaches to voice-first design that go beyond speech-to-text by adding cognitive services through commodity AI services from Google, Amazon and Microsoft. Their goal is to make AI more accessible through low/no-code application builders like K2, creating voice recognition applications such as chatbots. He demonstrated a chatbot on a mobile phone that could not just recognize the words that he spoke, but recognize the intent of the interaction and request additional data: essentially a replacement for filling out a form. This might be a complete interaction, or just an assist for starting a more involved process based on the original voice input. He switched over to a computer browser interface to show more of the capabilities, including sentiment analysis based on form input that could adjust the priority of a task or impact process execution. From within their designer environment, cognitive text analytics such as sentiment analysis can be invoked as a step in a process using their Smart Objects, which are effectively wrappers around one or more services and data mapping actions that allow less-technical process designers to include cognitive services in their process applications. Good demo of integrating voice-related cognitive services into processes, showing how third-party services make this much more accessible at any level of developer skill.
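For illustration, here’s roughly what invoking a commodity sentiment service from a process step and routing on the result might look like; the endpoint, response shape and threshold are hypothetical stand-ins, not K2’s Smart Object implementation.

```python
# Illustrative only: call a sentiment API from a process step and adjust
# task routing on the result. The endpoint and fields are hypothetical.
import requests

def sentiment_score(text: str) -> float:
    resp = requests.post(
        "https://example-cognitive-service/sentiment",  # hypothetical endpoint
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["score"]  # assume 0.0 (negative) .. 1.0 (positive)

def route_task(form_comment: str) -> str:
    score = sentiment_score(form_comment)
    # Escalate clearly unhappy customers to a high-priority queue
    return "high-priority-queue" if score < 0.3 else "standard-queue"
```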

State Machine Applied to Corporate Loans Process, BeeckerCo

Fernando Leibowich Beker from BeeckerCo finished up the day with a presentation on their process app suite BeBOP, built on IBM BPM/ODM and focused on financial services customers, followed by a “demo” of mostly prerecorded screencams. Their app generates state tables for processes using ODM business rules, then allows business users to change the state table in order to drive the process execution. The demo showed a typical IBM BPM application for processing a loan origination, but the steps are defined as ad hoc tasks rather than as part of a process flow; instead, execution is driven by the state table, which determines which task to execute in which order, and the only real flow is to check the state table, then either invoke the next task or complete the process. Table-driven processes aren’t a new concept — we’ve been doing this since the early days of workflow — although using an ODM decision table to manage the state transition table is an interesting twist. This does put me in mind of the joke I used to tell when I first started giving process-focused presentations at the Business Rules Forum, about how a process person would model an entire decision tree in BPMN, while a rules person would have a single BPMN node that called a decision tree to execute all of the process logic: just because you can do something using a certain method doesn’t mean that you should do it.
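The table-driven pattern is easy to sketch. In the demo the table lived in an ODM decision table that business users could edit; in this minimal version, assuming invented task names, it’s just a dict consulted by a trivial loop.

```python
# Minimal sketch of a state-table-driven process: the "flow" only consults
# the table to pick the next ad hoc task. Task names are invented.
STATE_TABLE = {
    "START": "collect_documents",
    "collect_documents": "credit_check",
    "credit_check": "underwriter_review",
    "underwriter_review": "END",
}

def run_case():
    state = "START"
    while (next_task := STATE_TABLE[state]) != "END":
        print(f"executing task: {next_task}")
        state = next_task
    print("case complete")

run_case()
# Reordering the process = editing STATE_TABLE, with no change to the "flow"
```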

We’re done with day 2; tomorrow is only a half-day of sessions with the awards after lunch (which I’ll probably have to monitor remotely since I’ll be headed for the airport by mid-afternoon).

bpmNEXT 2018: All about bots with Cognitive Technology, PMG.net, Flowable

We’re into the afternoon of day 2 of bpmNEXT 2018, with another demo section.

RPA Enablement: Focus on Long-Term Value and Continuous Process Improvement, Cognitive Technology

Massimiliano Delsante of Cognitive Technology presented their myInvenio product for analyzing processes to determine where gaps exist and to create models for closing those gaps through RPA task automation. The demo started with loading historical process data for process mining, which created a process model from the data together with activity resources, counts and other metrics, then compared the model for conformance with a reference model to determine the frequency and performance of conformant and non-conformant cases. The process discovery model can be transformed to a BPMN model and its performance simulated. With a baseline data set of all manual activities, the system identified the cost of each activity, showing which activities would yield the greatest savings if automated, and fed the actual resource usage into the simulation scenario; adjusting the resources by specifying the number of RPA robots that could be deployed at specific tasks allows a what-if simulation of the process performance with an RPA implementation. An analytics dashboard provides visualization of the original process discovery and the simulated changes, with performance trends over time. Predictive analytics can be applied to running processes to, for example, predict which cases will not meet their deadlines, along with some root cause analysis of the problems. Doing this analysis requires information about the cost of the RPA robots as well as the ability to identify which tasks could be automated with RPA. Good integration of process discovery, simulation, analysis and ongoing monitoring.
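The savings analysis itself boils down to simple arithmetic over the mined activity metrics; here’s a back-of-envelope sketch with invented numbers and activity names, not myInvenio’s actual calculation.

```python
# Rank activities by potential automation savings, given per-activity
# execution counts and unit costs mined from an event log. Data is invented.
activities = {
    # name:               (executions, minutes_each, cost_per_hour)
    "enter_invoice_data": (12000, 6, 40),
    "approve_invoice":    (12000, 2, 80),
    "resolve_mismatch":   (1800, 25, 40),
}

def annual_cost(executions, minutes, rate):
    return executions * minutes / 60 * rate

ranked = sorted(
    ((name, annual_cost(*vals)) for name, vals in activities.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, cost in ranked:
    print(f"{name}: ${cost:,.0f}/year if fully manual")
```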

Integration is Still Cool, and Core in your BPM Strategy, PMG.net

Ben Alexander from PMG.net focused on integration within BPM as a key element for driving innovation by increasing the speed of application development: services for RPA, ML, AI, IoT, blockchain, chatbots and whatever other hot new technologies come along can be brought together in a low-code environment such as PMG. His demo showed a vendor onboarding application that added a function/subprocess for assessing the probability of vendor approval using machine learning (calling AzureML), user task assignment via Slack integration, SMS/phone support through a Twilio connector, and RPA bot invocation using a generic REST API. Nice demo of how to put all of these third-party services together using a BPM platform as the main application development and orchestration engine.
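The “generic REST API” part is the simplest of these integrations to picture; a sketch might look like this, with the URL, payload and response fields all hypothetical.

```python
# Sketch of kicking off an RPA bot from a process step via a generic REST
# API; the endpoint and response shape are hypothetical.
import requests

def start_bot(bot_name: str, payload: dict) -> str:
    resp = requests.post(
        f"https://rpa.example.com/api/bots/{bot_name}/runs",  # hypothetical
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]  # poll this id later for completion

# Usage from a process step (illustrative):
# run_id = start_bot("vendor-onboarding", {"vendor_id": "V-1234"})
```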

Making Process Personal, Flowable

Paul Holmes-Higgin and Micha Keiner from Flowable presented on their Engage product for customer engagement via chat, using chatbots to augment rather than replace human chat, and modeling the chatbot behavior using standard modeling tools. In particular, they have found that a conversation can be modeled as a case with dynamic injection of processes, with the ability to bring intelligence into conversations, and the added benefit of the chat being completely audited. The demo was around the use case of a high-wealth banking client talking to their relationship manager using chat, with simultaneous views of both the client and relationship manager UI in the Flowable Engage chat interface. The client mentioned that she had moved to a new home, and the RM initiated the change address process by starting a new case right in the chat, by invoking a context-sensitive digital assistant. This provided advice to the RM about address change regulatory rules, and provided a form in situ to collect the address data. The case then progresses through a combination of chat messages for collaboration between the human participants, forms filled directly in the chat window, and confirmations from the client via chat, with the information to be updated presented directly in the conversation. Potential issues, such as compliance regulations triggered by a country move, are raised to the RM, and related processes execute behind the scenes, involving a compliance officer via a more standard task inbox interface. Once the compliance process completes, the RM is informed via the chat interface. Behind the scenes, there’s a standard address change BPMN diagram, where the chat interface is integrated through service activities. They also showed replacing the human compliance decision with a decision table that was created (and can be manually edited if necessary) based on a decision tree generated by machine learning on 200,000 historical address change cases; rerunning the scenario skipped the compliance officer step and approved the change instantaneously. Other chat automated tasks that the RM can invoke include setting reminders, retrieving customer information and more using natural language processing, as well as other types of more structured cases and processes. Great demo, and an excellent look at the future of chat interfaces in process and case management.
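The decision-table-from-machine-learning step is worth a sketch: a decision tree trained on historical outcomes can be exported as human-readable rules, reviewed (and hand-edited) before becoming a decision table in the process. The features and data below are invented, and scikit-learn is my own choice of tool, not necessarily Flowable’s.

```python
# Sketch of deriving reviewable rules from historical case outcomes.
# Features and labels are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [country_changed, high_risk_country, years_as_client]
X = [[0, 0, 5], [1, 0, 10], [1, 1, 2], [0, 0, 1], [1, 1, 8]]
y = [1, 1, 0, 1, 0]  # 1 = auto-approve, 0 = route to compliance officer

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The printed rules can be reviewed and hand-edited before deployment
# as a DMN-style decision table.
print(export_text(tree, feature_names=[
    "country_changed", "high_risk_country", "years_as_client"]))
```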

bpmNEXT 2018 day 2 keynote with @NathanielPalmer

Nathaniel Palmer kicked off day 2 of bpmNEXT 2018 with his ever-prescient views on the next five years of BPM. Bots, decisions and automation are key, with the three R’s (robots, rules and relationships) defining BPM in the years to come. More and more, commercial transactions (or services that form part of those transactions) will happen on servers outside your organization, and often outside of your control; robots and intelligent agents will be doing a big part of that work. He also believes that we’re seeing the beginning of the death of smartphones, to be replaced with other devices and other interfaces such as conversational UI and wearable technology. This is going to radically change how apps have to be designed, and will leave a lot of companies scrambling to catch up with this change as people move more of their interactions off smartphones and laptops. Although more conservative organizations — including government agencies — will continue to support the least common denominator in interaction style (probably email and traditional websites), commercial organizations don’t have that luxury, and need to rethink sooner. He envisions that your fastest-growing competitors will have fewer employees than robots, although some interesting news out of Tesla this week may indicate that it’s premature to replace some human functions.

He spoke about how this will refine application architecture to four tiers: a client tier unique to each platform, a separate delivery tier that optimizes delivery for the platforms, an aggregation tier that integrates services and data, and a services tier that pulls data from both internal and external sources. This creates an abstraction between what a task is and how it is performed, and even whether it is automated or performed by a person. Decision as a service for both commercial and government services will become a primary delivery model, allowing decisions (and the automation enabled by them) to be easily plugged into applications; this will require more of a business-first, model-driven approach rather than having decisions built in code by developers.
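A toy illustration of decision-as-a-service, assuming the decision logic sits behind a plain HTTP endpoint; Flask, the route and the eligibility rule are all my own illustrative choices.

```python
# Toy decision-as-a-service endpoint: any application can plug the decision
# in over HTTP. The rule shown is invented for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/decisions/loan-eligibility", methods=["POST"])
def loan_eligibility():
    applicant = request.get_json()
    # In practice this rule set would be modeled by business users, not coded
    approved = applicant["credit_score"] >= 650 and applicant["dti"] < 0.4
    return jsonify({"approved": approved})

if __name__ == "__main__":
    app.run(port=8080)
```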

His Future-Proof BPM architecture — what others are calling a digital transformation platform — brings together a variety of capabilities that can be provided by many vendors or other organizations, and fed by events. In fact, the core capabilities (automation, machine learning, decision management, workflow management) also generate events that feed back into the data flooding into these processes. BPM platforms have the ability to become the orchestrating platforms for this, which is possibly why many of the BPMS vendors are rebranding as low-code application development environments, but be aware of fundamental differences in the underlying architecture: do they support modularity and microservices, or are they just lifting and shifting to monolithic containers in the cloud?

Finishing up, he returned to the concept that intelligent agents can act autonomously in complex transactions, and this will be becoming more common over the next few years. Interestingly, an interview that I did for a European publication is being translated into German, and the translator emailed me this morning to tell me that they needed to change some of my comments on automating loan transactions since that’s not permitted in Germany. My response: not yet, but it will be. We all need to be prepared for a more automated future.

Great audience discussion at the end on how this architecture is manifesting, how to model/represent some of these automation concepts, the role of a smarter event bus, the future of the word “bot” and more. Max Young from Capital BPM took over to discuss the development of a grammar for RPA, with an invitation for the brain trust in the room to start thinking about this in more detail. RPA vendors are creating their own notations, but a vendor-agnostic standard would go a long way towards helping business people to directly specify automation.

Since they’re pumping out the video on the same day as the presentations, check the bpmNEXT YouTube channel later for a replay of Nathaniel’s presentation.

bpmNEXT 2018: Here’s to the oddballs, with ConsenSys, XMPro and BPLogix

And we’re off with the demo sessions!

Secure, Private Decentralized Business Processes for Blockchains, ConsenSys

Vanessa Bridge of ConsenSys spoke about using BPMN diagrams to create smart contracts and other blockchain applications, while also including privacy, security and other necessary elements: essentially, using BPM to enable Ethereum-based smart contracts (rather than using blockchain as a ledger for BPM transactions and other BPM-blockchain scenarios that I’ve seen in the past). She demonstrated using Camunda BPM for a token sale application, and for a boardroom voting application. For each of the applications, she used BPMN to model the process, particularly the use of BPMN timers to track and control the smart contract process — something that’s not native to blockchain itself. Encryption and other steps were called as services from the BPMN diagram, and the results of each contract were stored in the blockchain. Good use of BPM and blockchain together in a less-expected manner.
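For a flavor of what a BPMN service activity calling an Ethereum contract could look like, here’s a hedged sketch using web3.py; the contract address, ABI and function are placeholders, and this is a guess at the shape of such an integration rather than a reproduction of the demo.

```python
# Hypothetical sketch: a BPMN service task invoking an Ethereum smart
# contract via web3.py. Address, ABI and function name are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # local dev node

VOTING_ABI = [{
    "name": "vote",
    "type": "function",
    "stateMutability": "nonpayable",
    "inputs": [{"name": "choice", "type": "uint256"}],
    "outputs": [],
}]

contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder
    abi=VOTING_ABI,
)

def cast_vote_service_task(voter_account: str, choice: int):
    """Invoked from a BPMN service activity; the result is recorded on chain."""
    tx_hash = contract.functions.vote(choice).transact({"from": voter_account})
    return w3.eth.wait_for_transaction_receipt(tx_hash)
```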

Turn IoT Technology into Operational Capability, XMPro

Pieter van Schalkwyk of XMPro looked at the challenges of operationalizing IoT, with a virtual flood of data from sensors and machines that needs to be integrated into standard business workflows. This involves turning big data into smart data via stream processing before passing it on to the business processes in order to achieve business outcomes. XMPro provides smart listeners and agents that connect the data to the business processes, forming the glue between realtime data and resultant actions. His demo showed data being collected from a fan on a cooling tower, bringing in data from the sensor logs and comparing it to manufacturer’s information and historical information in order to predict whether the fan is likely to fail, create a maintenance work order, and even optimize maintenance schedules. They can integrate with a large library of action agents, including their own BPM platform or other communication and collaboration platforms such as Slack. They provide a lot of control over their listener agents, which can be used for any type of big data, not just industrial device data, and can integrate prediction models for failure likelihood and remaining useful life. He showed their BPM platform being used downstream from the analytical processing, where the internet of things can interact with the internet of people to make additional decisions in the context of additional information such as 3D drawings. Great example of how to filter through hundreds of millions of data points in streaming mode to find the few combinations that require action to be taken. He threw out a comment at the end that this could be used for non-industrial applications, possibly for GDPR data, which definitely made me think about content analytics on content as it’s captured in order to pick out which events might trigger a downstream process, such as a regulatory process.
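Conceptually, the listener agent is a filter over the stream; a minimal sketch, with invented thresholds and fields, might look like this.

```python
# Minimal sketch of the listener-agent idea: reduce a raw sensor stream to
# the few readings that warrant action. Thresholds and fields are invented.
VIBRATION_LIMIT = 7.0  # mm/s, per a (hypothetical) manufacturer spec

def create_work_order(reading: dict):
    # Downstream this would start a maintenance process in the BPM platform
    print(f"work order: fan {reading['asset_id']} vibration {reading['vibration']}")

def on_reading(reading: dict):
    """Called for every message from the cooling-tower fan sensor."""
    if reading["vibration"] > VIBRATION_LIMIT:
        create_work_order(reading)

on_reading({"asset_id": "CT-FAN-03", "vibration": 8.2})
```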

Business Milestones as Configuration, BPLogix

Scott Menter and Joby O’Brien of BPLogix finished up this section on new BPM ideas with their approach to goal orientation in BPM, which is milestone-based and requires understanding the current state of a case before deciding how to move forward. Their Process Director BPM platform is not BPMN-based, but rather an event-based platform where events are used to determine milestones and drive the process forward: much more of a case management view, usually visualized as a project management-style Gantt chart rather than a flow model. They demonstrated the concept of app events, where changes in the state of any of a number of things — form fields, activities, document attachments, etc. — can record a journal entry that uses business semantics and process instance data. This allows events from different parts of the platform to be brought together in a single case journal that shows the significant activity within the case, but also to act as triggers for other events, such as determining case completion. The journal can be configured to show only certain types of events for specific users — for example, if they’re only interested in events related to outgoing correspondence — and also becomes a case collaboration discussion. Users can select events within the journal and add their own notes, such as taking responsibility for a meeting request. They also showed how machine learning and rules can be used for dynamically changing data; although shown as interactions between fields on forms, this can also be used to generate new app events. Good question from the audience on how to get customers to think about their work in terms of events rather than procedures; case management proponents will say that business people inherently think about events and state changes rather than process. Interesting representation of creating a selective journal based on business semantics, rather than just logging everything and expecting users to wade through it for the interesting bits.
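To make the app-event journal idea concrete, here’s a small sketch of a per-case journal with filtered views; the event types and wording are illustrative, not BPLogix’s actual semantics.

```python
# Sketch of the app-event journal idea: state changes anywhere in the
# platform append business-readable entries to a per-case journal.
from datetime import datetime, timezone

class CaseJournal:
    def __init__(self):
        self.entries = []

    def record(self, event_type: str, message: str, user: str = "system"):
        self.entries.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "type": event_type,
            "user": user,
            "message": message,
        })

    def view(self, event_types: set):
        """Filtered view, e.g. only outgoing correspondence for some users."""
        return [e for e in self.entries if e["type"] in event_types]

journal = CaseJournal()
journal.record("correspondence", "Offer letter sent to applicant")
journal.record("milestone", "Credit check complete")
print(journal.view({"correspondence"}))
```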

We’re off to lunch. I’m a bit out of practice at live-blogging, but hope that I captured some of the flavor of what’s going on here. Back with more this afternoon!

Obligatory futurist keynote at AIIM18 with @MikeWalsh

We’re at the final day of the AIIM 2018 conference, and the morning keynote is with Mike Walsh, talking about business transformation and what you need to think about as you’re moving forward. He noted that businesses don’t need to worry about millennials; they need to worry about 8-year-olds: these days 90% of all 2-year-olds (in the US) know how to use a smart device, making them the truly born-digital generation. What will they expect from the companies of the future?

Machine learning allows us to customize experiences for every user and every consumer, based on analysis of content and data. Consumers will expect organizations to predict their needs before they can even voice them themselves. In order to do that, organizations need to become algorithmic businesses: to be business machines rather than just have business models. Voice interaction is becoming ubiquitous, with smart devices listening to us most (or all) of the time and using that to gather more data on us. Face recognition will become your de facto password, which is great if you’re unlocking your iPhone X, but maybe not so great if you don’t like public surveillance that can track your every move. Apps are becoming nagging persuaders, telling us to move more, drink more water, or attend this morning’s keynote. Like migratory birds that can sense magnetic north, we are living in a soup of smart data that guides us. Those persuasive recommendations become better at predicting our needs, and more personalized.

Although he started by saying that we don’t need to worry about millennials, 20 minutes into his presentation Walsh is admonishing us to let the youngest members of our team “do stuff rather than just get coffee”. It’s been a while since I worked in a regular office, but do people still have younger people get coffee for them?

He pointed out that rigid processes are not good, but that we need to be performance-driven rather than process-driven: making good decisions in ambiguous conditions in order to solve new problems for customers. Find people who are energized by unknowns to drive your innovation — this advice is definitely more important than considering the age of the person involved. Bring people together in the physical realm (no more work from home) if you want the ideas to spark. Take a look at your corporate culture, and gather data about how your own teams work in order to understand how employees use information and work with each other. If possible, use data and AI as the input when designing new products for customers. He recommended a next action of quantifying what high performance looks like in your organization, then work with high performers to understand how they work and collaborate.

He discussed the myth of the simple relationship between automation and employment, and how automating a task does not, in general, put people out of work, but just changes what their job is. People working together with the automation make for more streamlined (automated) standard processes, with the people focused on the things that they’re best at: handling exceptions, building relationships, making complex decisions, and innovating through the lens of combining human complexity with computational thinking.

In summary, the new AI era means that digital leaders need to make data a strategic focus, get smart about decisions, and design work rather than doing it. Review decisions made in your organization, and decide which are best made using human insight, and which are better to automate — either way, these could become a competitive differentiator.

Invasion of the bots: intelligent healthcare applications at @UnitedHealthGrp

Dan Abdul, VP of technology at UnitedHealth Group (a large US healthcare company), presented at AIIM 2018 on driving intelligent information in US healthcare, and how a variety of AI and machine learning technologies are contributing: bots that answer your questions in an online chat, Amazon’s Alexa telling you the best clinic to go to, and image recognition that detects cancer in a scan before most radiologists could. The US has an extremely expensive healthcare system, much of that cost driven by in-patient services in hospitals, yet a number of initiatives (telemedicine, home healthcare, etc.) do little to reduce the hospital visits and the related costs. Intelligent information can help reduce some of those costs through early detection of problems that are easily treatable before they become serious enough to require hospital care, and through prediction of other conditions, such as homelessness, that often result in a greater need for healthcare services. These intelligent technologies are not intended to replace healthcare practitioners, but to assist them by processing more information faster than a person can, surfacing insights that might otherwise be missed.

Abdul and his team have built a smart healthcare suite of applications that are based on a broad foundation of data sources: he sees the data as being key, since you can’t look for patterns or detect early symptoms without the data on which to apply the intelligent algorithms. With aggregate data from a wider population and specific data for a patient, intelligent healthcare can provide much more personalized, targeted recommendations for each individual. They’ve made a number of meaningful breakthroughs in applying AI technologies to healthcare services, such as identifying gaps in care based on treatment codes, and doing real-time monitoring and intervention via IoT devices such as fitness trackers.

These ideas are not unique to healthcare, of course; personalized recommendations based on a combination of a specific consumer’s data plus trends from aggregate population data can be applied to anything from social services to preventative equipment maintenance.

AIIM18 keynote with @jmancini77: it’s all about digital transformation

I haven’t been to the AIIM conference since the early to mid 90s; I stopped when I started to focus more on process than content (and it was very content-centric then), then stayed away when the conference was sold off, then started looking at it again when it reinvented itself a few years ago. These days, you can’t talk about content without process, so there’s a lot of content-oriented process here as well as AI, governance and a lot of other related topics.

I arrived yesterday just in time for a couple of late-afternoon sessions: one presentation on digital workplaces by Stephen Ludlow of OpenText that hit a number of topics that I’ve been working on with clients lately, then a roundtable on AI and content hosted by Carl Hillier of ABBYY. This morning, I attended the keynote where John Mancini discussed digital transformation and a report released today by AIIM. He put a lot of emphasis on AI and machine learning technologies; specifically, how they can help us to change our business models and accelerate transformation.

We’re in a different business and technology environment these days, and a recent survey by AIIM shows that a lot of people think that their business is being (or is about to be) disrupted, and that digital transformation is an important part of dealing with that. However, very few of them are more than a bit of the way towards their 2020 goals for transformation. In other words, people get that this is important, but just aren’t able to change as fast as is required. Mancini attributed this in part to the escalating complexity and chaos that we see in information management, where — like Alice — we are running hard just to stay in place. Given the increasing transparency of organizations’ operations, either voluntarily or through online customer opinions, staying in the same place isn’t good enough. One contributor to this is the number of content management systems that the average organization has (hint: it’s more than one) plus all of the other places where data and content reside, forcing workers to scramble around looking for information. Most companies don’t want to have a single monolithic source of content, but do want a federated way to find things when they need them: in part, this fits in with the relabelling of enterprise content management (ECM) as “Content Services” (Gartner’s term) or “Intelligent Information Management” (AIIM’s term), although I feel that’s a bit of unnecessary hand-waving that just distracts from the real issues of how companies deal with their content.

He went through some other key findings from their report on what technologies companies are looking at and what priority they’re giving them; it looks like it’s worth a read. He wrapped up with a few of his own opinions, including the challenge that we need to consider content AND data, not content OR data: the distinction between structured and unstructured information is breaking down, in part because of the nature of natively-digital content, and in part because of AI technologies that quickly turn what we think of as content into data.

There’s a full slate of sessions today, stay tuned.

Data-driven deviations with @maxhumber of @borrowell at BigDataTO

Any session at a non-process conference with the word “process” in the title gets my attention, and I’m here to see Max Humber of Borrowell discuss how data-driven deviations allow you to make changes while maintaining the integrity of legacy enterprise processes. Borrowell is a fintech company focused on lending applications: free credit score monitoring, and low-interest personal loans for debt consolidation or reducing credit card debt. They partner with existing financial institutions such as Equifax and CIBC to provide the underlying credit monitoring and lending capabilities, with Borrowell providing a technology layer that’s more than just a pretty face: they use a lot of information sources to create very accurate risk models for automated loan adjudication. As Borrowell’s deep learning platforms learn more about individual and aggregate customer behaviour, their risk models and adjudication platform become more accurate, reducing the risk of loan defaults while fine-tuning loan rates to optimize the risk/reward curve.

Great application of AI/ML technology to financial services, which sorely need some automated intelligence applied to many of their legacy processes.