AI and BPM: my article for @Bonitasoft on making processes more intelligent

Part of my work as an industry analyst is to write papers and articles (and present webinars), sponsored by vendors, on topics that will be of interest to their clients as well as a broader audience. I typically don’t talk about the sponsor’s products or give them any sort of promotion; it’s intended to be educational thought leadership that will help their clients and prospects to understand the complex technology environment that we work in.

I’ve recently written an article on AI and BPM for Bonitasoft that started from a discussion we had after I contributed articles on adding intelligent technologies to process management to a couple of books, as well as writing here on my blog and giving a few presentations on the topic. From the intro of the article:

In 2016, I was asked to contribute to the Workflow Management Coalition’s book “Best Practices for Knowledge Workers.” My section, “Beyond Checklists”, called for more intelligent adaptive case management to drive innovation while maintaining operational efficiency. By the next year, they published “Intelligent Adaptability,” and I contributed a section called “Machine Intelligence and Automation in ACM [Adaptive Case Management] and BPM” that carried those ideas further. Another year on, it’s time to take a look at how the crossover between BPM and artificial intelligence (AI) — indeed, between BPM and a wide range of intelligent technologies — is progressing.

I go on to cover the specific technologies involved and what types of business innovation we can expect from more intelligent processes. You can read the entire article on Bonitasoft’s website, on their LinkedIn feed and on their Medium channel. If you prefer to read it in French, it’s also on the Decideo.fr industry news site, and apparently there’s a Spanish version in the works too.

Summer BPM reading, with dashes of AI, RPA, low-code and digital transformation

Summer always sees a bit of a slowdown in my billable work, which gives me an opportunity to catch up on reading and research across BPM and related fields. I’m often asked which blogs and other websites I read regularly to keep on top of trends and participate in discussions, so here are some general guidelines for getting through a lot of material in a short time.

First, to effectively surf the tsunami of information, I use two primary tools:

  • An RSS reader (Feedly) with a hand-curated list of related sites. In general, if a site doesn’t have an RSS feed, then I’m probably not reading it regularly. Furthermore, if it doesn’t have a full feed – that is, one that shows the entire text of the article rather than a summary in the feed reader – it drops to a secondary list that I only read occasionally (or never). This lets me browse quickly through articles directly in Feedly and see which have something interesting to read or share without having to open the links directly.
  • Twitter, with a hand-curated list of digital transformation-related Twitter users, both individuals and companies. This is a great way to find new sources of information, which I can then add to Feedly for ongoing consumption. I usually use the Tweetdeck interface to keep an eye on my list plus notifications, but rarely review my full unfiltered Twitter feed. That Twitter list is also included in the content of my Paper.li “Digital Transformation Daily”, and I’ve just restarted tweeting the daily link.

Second, the content needs to be good to stay on my lists. I curate both of these lists manually, constantly adding and culling the contents to improve the quality of my reading material. If your blog posts are mostly promotional rather than informative, I remove them from Feedly; if you tweet too much about politics or your dog, you’ll get bumped off the DX list, although probably not unfollowed.

Third, I like to share interesting things on Twitter, and use Buffer to queue these up during my morning reading so that they’re spread out over the course of the day rather than all in a clump. To save things for a more detailed review later as part of ongoing research, I use Pocket to manually bookmark items (it also syncs to my mobile devices for offline reading), and an IFTTT script to save all links that I tweet into a Google sheet.
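
For the curious, here’s a rough sketch of one way to wire up that last step: IFTTT’s Maker webhooks accept a simple JSON payload, which a recipe can then append to a Google sheet. The event name and key below are placeholders for your own IFTTT configuration, and this is just an illustration, not my actual setup.

```python
import requests

# Placeholders: substitute your own IFTTT Maker webhook key and event name
IFTTT_KEY = "your-ifttt-maker-key"
EVENT = "link_tweeted"

def save_link(url: str, comment: str = "") -> None:
    # Fire an IFTTT Maker event; a linked recipe appends it to a Google sheet.
    # Maker webhooks pass up to three values (value1..value3) to the recipe.
    endpoint = f"https://maker.ifttt.com/trigger/{EVENT}/with/key/{IFTTT_KEY}"
    requests.post(endpoint, json={"value1": url, "value2": comment}, timeout=10)

save_link("https://example.com/article", "interesting take on BPM and AI")
```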

You can take a look at what I share frequently through Twitter to get an idea of the sources that I think have value; in general, I directly @mention the source in the tweet to help promote their content. Tweeting a link to an article – and especially inclusion in the auto-curated Paper.li Digital Transformation Daily – is not an endorsement: I’ll add my own opinion in the tweet about what I found interesting in the article.

Time to kick back, enjoy the nice weather, and read a good blog!

AlfrescoDay 2018: digital business platform and a whole lot of AWS

I attended Alfresco’s analyst day and a customer day in New York in late March, and due to some travel and project work, I’m only finding time to publish my notes now. Usually I do that while I’m at the conference, but part of the first day was under NDA, so I needed to think about how to combine the two days of information.

The typical Alfresco customer is still very content-centric, in spite of the robust Alfresco Process Services (formerly Activiti) offering that is part of their platform: many of the key success stories presented at the conference were based on content implementations and migrations from ECM competitors such as Documentum. In a way, this is reminiscent of the FileNet conferences of 20 years ago, when I was talking about process but almost all of the customers were only interested in content management. What moves this into a very modern discussion, however, is the focus on Alfresco’s cloud offerings, especially on Amazon AWS.

First, though, we had a fascinating keynote by Sangeet Paul Choudary — and received a copy of his book Platform Scale: How an emerging business model helps startups build large empires with minimum investment — on how business models are shifting to platforms, and how this is disrupting many traditional businesses. He explained how supply-side economies of scale, machine learning and network effects are allowing online platforms like Amazon to impact real-world industries such as logistics. Traditional businesses in telecom, financial services, healthcare and many other verticals are discovering that without a customer-centric platform approach, rather than a product approach, they can’t compete with the newer entrants into the market that build platforms, gather customer data and make service-based partnerships through open innovation. Open business models are particularly important, as is striking the right balance between an open ecosystem and maintaining control over the platform through key control points. He finished up with a digital transformation roadmap: gaining efficiencies through digitization; then using data collected in the first stage while integrating flows across the enterprise to create one view of the ecosystem; and finally externalizing and harnessing value flows in the ecosystem. This last stage, externalization, is particularly critical, since opening the wrong control points can kill your business or stifle open growth.

This was a perfect lead-in to Chris Wiborg’s (Alfresco’s VP of product marketing) presentation on Alfresco’s partnership with Amazon and the tight integration of many AWS services into the Alfresco platform: leveraging Amazon’s open platform to build Alfresco’s platform. This partnership has given this conference in particular a strong focus on cloud content management, and we are hearing more about their digital business platform that is made up of content, process and governance services. Wiborg started off talking about the journey from (content) digitization to digital business (process and content) to digital transformation (radically improving performance or reach), and how it’s not that easy to do this, particularly with existing systems that favor on-premise monolithic approaches. A (micro-) service approach on cloud platforms changes the game, allowing you to build and modify faster, and deploy quickly on a secure elastic infrastructure. This is what Alfresco is now offering, through the combination of open source software, integration of AWS services to expand their portfolio of capabilities, and an automated DevOps lifecycle.

This brings a focus back to process, since their digital business platform is often sold process-first to enable cross-departmental flows. In many cases, process and content are managed by different groups within large companies, and digital transformation needs to cut across both islands of functionality and islands of technology.

They are promoting the idea that differentiation is built and not bought, with the pendulum swinging back from buy toward build for the portions of your IT that contribute to your competitive differentiation. In today’s world, for many businesses, that’s more than just customer-facing systems, but digs deep into operational systems as well. In businesses that have a large digital footprint, I agree with this, but have to caution that this mindset makes it much too easy to go down the rabbit hole of building bespoke systems — or having someone build them for you — for standard, non-differentiating operations such as payroll systems.

Alfresco has gone all-in with AWS. It’s not just a matter of shoving a monolithic code base into a Docker container and running it on EC2, which is how many vendors claim AWS support: Alfresco has a much more integrated microservices approach that provides the opportunity to use many different AWS services as part of an Alfresco implementation in the AWS Cloud. This allows you to build more innovative solutions faster, but can also greatly reduce your infrastructure costs by moving content repositories to the cloud. They have split out services such as Amazon S3 (and soon Glacier) for storage, RDS/Aurora for database services, SNS for notification, security services, networking services, IoT via Alexa, and Rekognition for AI. Basically, a big part of their move to microservices (and extended capabilities) comes from externalizing functions to take advantage of Amazon-offered services. They’re also not tied to their own content services in the cloud, but can provide direct connections to other cloud content services, including Box, SharePoint and Google Drive.
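
To make the storage piece concrete, here’s a minimal sketch (not Alfresco’s actual integration code) of how a content repository might persist an item to S3 using boto3, assuming AWS credentials are already configured in the environment:

```python
import boto3

s3 = boto3.client("s3")

def store_content(bucket: str, doc_id: str, data: bytes, mime_type: str) -> None:
    # The key layout under "contentstore/" is an assumption for illustration,
    # not Alfresco's actual repository structure
    s3.put_object(
        Bucket=bucket,
        Key=f"contentstore/{doc_id}",
        Body=data,
        ContentType=mime_type,
    )

store_content("my-content-bucket", "doc-0001", b"hello world", "text/plain")
```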

We heard from Tarik Makota, an AWS solution architect from Amazon, about how Amazon doesn’t really talk about private versus public cloud for enterprise clients. They can provide the same level of security as any managed hosting company, including private connections between their data centers and your on-premise systems. Unlike other managed hosting companies, however, Amazon is really good at near-instantaneous elasticity — both expanding and contracting — and provides a host of other services within that environment that are directly consumed by Alfresco and your applications, such as Amazon Aurora on RDS, a variety of AI services, and serverless step functions. Alfresco Content Services and Process Services are both available as AWS QuickStarts, allowing for full production deployment in a highly-available, highly-redundant environment in the geographic region of your choice in about 45 minutes.
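
Since QuickStarts are packaged CloudFormation templates, that kind of deployment can also be scripted. Here’s a hedged sketch with boto3; the template URL and parameters are placeholders, not the actual Alfresco QuickStart values:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Placeholder template URL and parameters, for illustration only
cfn.create_stack(
    StackName="alfresco-content-services",
    TemplateURL="https://s3.amazonaws.com/example-bucket/quickstart-template.yaml",
    Parameters=[
        {"ParameterKey": "KeyPairName", "ParameterValue": "my-keypair"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # QuickStarts typically create IAM roles
)
```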

Quite a bit of food for thought over the two days, including their insights into common use cases for Alfresco and AI in content recognition and classification, and some of their development best practices for ensuring reusability across process and content applications built on a flexible modern architecture. Although Alfresco’s view of process is still quite content-centric (naturally), I’m interested to see where they take the entire digital business platform in the future.

Also great to see a month later that Bernadette Nixon, whom we met as Chief Revenue Officer at the event, has moved up to the CEO position. Congrats!

bpmNEXT 2018: Bonitasoft, KnowProcess

We’re in the home stretch here at bpmNEXT 2018: day 3 has only a couple of shorter demo sessions and a few related talks before we break early to head home.

When Artificial Intelligence meets Process-Based Applications, Bonitasoft

Nicolas Chabanoles and Nathalie Cotte from Bonitasoft presented on their integration of AI with process applications, specifically predictive analytics for automating decisions and making recommendations. They use an extension of process mining to examine case data and activity times in order to predict, for example, whether a specific case will finish on time; in the future, they hope to be able to accurately predict the end time for individual cases for better feedback to internal users and customers. The demo was a loan origination application built on Bonita BPM, which was fairly standard, with the process mining and machine learning coming in with how the processes are monitored. Log data is polled from the BPM system into an Elasticsearch database, then machine learning is applied to the instance data; configuration of the machine learning is based (at this point) only on the specification of an expected completion time for each instance type to build the prediction model. At that point, predictions can be made for in-flight instances as to whether each one will complete on time, and the probability of completing on time for those predicted to be late — for example, if key documents are missing, or the loan officer is not responding quickly enough to review requests. The loan officer is shown which tasks are likely to be causing the late prediction, and completing those tasks will change the prediction for that case. Priority for cases can be set dynamically based on the prediction, so that cases more likely to be late are set to higher priority in order to be worked earlier. Future plans are to include more business data and human resource data, which could be used to explicitly assign late cases to individual users. The use of process mining algorithms, rather than simpler prediction techniques, will allow suggestions on state transitions (i.e., which path to take) in addition to just setting instance priority.
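
As an illustrative sketch (not Bonitasoft’s implementation), the core of that kind of prediction could look like the following: train a classifier on simple features from completed cases, then score in-flight cases for their probability of completing on time. The feature names here are assumptions for illustration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Toy training data from completed (closed) cases
history = pd.DataFrame({
    "docs_missing":     [0, 2, 1, 0, 3, 0],
    "open_tasks":       [1, 4, 2, 1, 5, 2],
    "elapsed_hours":    [10, 40, 25, 8, 60, 12],
    "finished_on_time": [1, 0, 1, 1, 0, 1],   # label from case history
})

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(history.drop(columns="finished_on_time"), history["finished_on_time"])

# Probability that an in-flight case (2 missing docs, 3 open tasks, 30 hours
# elapsed) completes on time; a low value could bump the case priority
in_flight = pd.DataFrame([{"docs_missing": 2, "open_tasks": 3, "elapsed_hours": 30}])
print(model.predict_proba(in_flight)[0][1])
```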

Understanding Your Models and What They Are Trying To Tell You, KnowProcess

Tim Stephenson of KnowProcess spoke about models and standards, particularly applied to their main use case of marketing automation and customer onboarding. Their ModelMinder application ingests BPMN, CMMN and DMN models, and can be used to search the models for activities, resources and other model components, as well as identify and understand extensions such as calling a REST service from a BPMN service task. The demo showed a KnowProcess repository initially through the search interface; searching for “loan” or “send memo” returned links to models containing those terms, and the model (process, case or decision) can be displayed directly in their viewer with the location of the search term highlighted. The repository can be stored as files, or an engine can be directly indexed. He also showed an interface to Slack that uses a ModelMinder bot to handle natural language requests for certain model types and content, such as which resources do the work specified in the models, or which models call a specific subprocess, providing a link directly back to the models in the KnowProcess repository. Finishing up the demo, he showed how the model search and reuse is attached to a CRM application, so that a marketing person sees the models as functions that can be executed directly within their environment.
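
For a rough idea of the kind of indexing involved (this is not KnowProcess’s code), here’s a sketch that pulls activity names out of a BPMN 2.0 file, using the standard OMG namespace, so they can be matched against search terms like “send memo”:

```python
import xml.etree.ElementTree as ET

# Standard BPMN 2.0 model namespace
BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def index_activities(path: str) -> list[str]:
    # Collect the names of tasks of various types from one BPMN file
    root = ET.parse(path).getroot()
    names = []
    for tag in ("task", "userTask", "serviceTask", "callActivity"):
        for el in root.iter(f"{{{BPMN_NS}}}{tag}"):
            if el.get("name"):
                names.append(el.get("name"))
    return names

def search(term: str, path: str) -> list[str]:
    return [n for n in index_activities(path) if term.lower() in n.lower()]
```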

Instead of a third demo, we had a more free-ranging discussion that had started yesterday during one of the Q&As, about a standardized modeling language for RPA, led by Max Young from Capital BPM and with contributions from a number of others in the audience (including me). It’s a good starting point, but there’s obviously still a lot of work to do in this direction, starting with getting some of the major RPA vendors on board with standardization efforts. The emerging ideas seem to center around defining a grammar for the activities that occur in RPA (e.g., extract data from an Excel file, write data to a certain location in an application screen), then an event and flow language to piece together those primitives that might look something like BPMN or CMMN. I see this as similar to the issue of defining page flows, which are often done as a black box function that is performed within a human activity in a BPMN flow: exposing and standardizing that black box is what we’re talking about. This discussion is a prime example of what makes bpmNEXT great, and keeps me coming back year after year.
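
To make that concrete, here is a purely hypothetical illustration of such a grammar: a small vocabulary of RPA primitives plus a sequencing construct. None of this reflects any vendor’s actual notation or an agreed standard.

```python
from dataclasses import dataclass

# Hypothetical RPA primitives: a grammar would standardize a vocabulary like this
@dataclass
class ExtractFromExcel:
    file: str
    cell_range: str

@dataclass
class WriteToScreenField:
    application: str
    field: str
    value_from: str   # reference to a previously extracted value

@dataclass
class Sequence:
    # A flow construct to piece primitives together; a real standard might
    # instead reuse BPMN- or CMMN-style events and flows here
    steps: list

bot = Sequence(steps=[
    ExtractFromExcel(file="loans.xlsx", cell_range="B2"),
    WriteToScreenField(application="LoanSystem", field="amount",
                       value_from="loans.xlsx!B2"),
])
```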

bpmNEXT 2018: Intelligence and robots with ITESOFT, K2, BeeckerCo

We’re finishing up day 2 of bpmNEXT with a last section of demos.

Robotics, Customer Interactions and BPM, ITESOFT

Francois Bonnet from ITESOFT presented on customer interactions and automation, and the use of BPMN-driven robots to guide customer experience. In a first for bpmNEXT, the demo included an actual physical human-shaped robot (3D-printed from an open source project) that can do voice recognition, text to speech, video capture, movement tracking and facial recognition. The robot’s actions were driven by a BPMN process model, with activities such as searching for humans, recognizing faces, speaking phrases, processing input and making branching decisions. The process model was shown simultaneously, with the execution path updated in real time as it moved through the process, with robot actions shown as service activities. The scenario was the robot interacting with a customer in a mobile phone shop, recognizing a returning customer or training the facial recognition for a new one, asking what service is required, then stepping through acquiring a new phone and plan. He walked through how the BPMN model was used, with both synchronous and asynchronous services for controlling the robot and invoking functions such as classifier training, and human activities for interacting with the customer. Interesting use of BPMN as a driver for real robot actions, showing integration of recognition, RPA, AI, image capture and business services such as customer enrolment and customer ID validation.

The Future of Voice in Business Process Automation, K2

Brandon Brown from K2 looked at a more focused use case for voice recognition, and some approaches to voice-first design that go beyond speech-to-text by adding cognitive services through commodity AI services from Google, Amazon and Microsoft. Their goal is to make AI more accessible through low/no-code application builders like K2, creating voice recognition applications such as chatbots. He demonstrated a chatbot on a mobile phone that was able to not just recognize the words that he spoke, but recognize the intent of the interaction and request additional data: essentially a replacement for filling out a form. This might be a complete interaction, or just an assist for starting a more involved process based on the original voice input. He switched over to a computer browser interface to show more of the capabilities, including sentiment analysis based on form input that could adjust the priority of a task or impact process execution. From within their designer environment, cognitive text analytics such as sentiment analysis can be invoked as a step in a process using their Smart Objects, which are effectively wrappers around one or more services and data mapping actions, allowing less-technical process designers to include cognitive services in their process applications. Good demo of integrating voice-related cognitive services into processes, showing how third-party services make this much more accessible to any level of developer skill.
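
The demo used commodity cloud AI services for the text analytics; as a local stand-in for those, this sketch scores sentiment with NLTK’s VADER analyzer and maps it to a task priority, the way a process step might react to negative form input:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

def priority_from_text(text: str) -> str:
    # compound score ranges from -1 (very negative) to +1 (very positive);
    # the -0.3 cutoff is an arbitrary value for illustration
    score = analyzer.polarity_scores(text)["compound"]
    return "high" if score < -0.3 else "normal"

print(priority_from_text("This is the third time my order has arrived broken!"))
```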

State Machine Applied to Corporate Loans Process, BeeckerCo

Fernando Leibowich Beker from BeeckerCo finished up the day with a presentation on BeBOP, their process app suite based on IBM BPM/ODM and aimed at financial services customers, followed by a “demo” of mostly prerecorded screencams. Their app generates state tables for processes using ODM business rules, then allows business users to change the state table in order to drive the process execution. The demo showed a typical IBM BPM application for processing a loan origination, but the steps are defined as ad hoc tasks, so they are not part of a process flow; instead, execution is driven by the state table, which determines which task to execute in which order, and the only real flow is to check the state table, then either invoke the next task or complete the process. Table-driven processes aren’t a new concept — we’ve been doing this since the early days of workflow — although using an ODM decision table to manage the state transition table is an interesting twist. This does put me in mind of the joke I used to tell when I first started giving process-focused presentations at the Business Rules Forum, about how a process person would model an entire decision tree in BPMN, while a rules person would have a single BPMN node that called a decision tree to execute all of the process logic: just because you can do something using a certain method doesn’t mean that you should.
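
Table-driven execution is easy to picture in miniature. Here’s a minimal sketch of the general idea (nothing like the actual IBM BPM/ODM implementation): a state table, rather than a flow model, decides the next task for a case.

```python
# Hypothetical state transition table: current state -> (task to run, next state)
STATE_TABLE = {
    "new":            ("collect_documents", "docs_collected"),
    "docs_collected": ("credit_check", "checked"),
    "checked":        ("approve_loan", "done"),
}

def run_case(state: str = "new") -> None:
    # The only real "flow": look up the state, run the task, move to next state
    while state != "done":
        task, state = STATE_TABLE[state]
        print(f"executing task: {task}")

run_case()
```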

We’re done with day 2; tomorrow is only a half-day of sessions with the awards after lunch (which I’ll probably have to monitor remotely since I’ll be headed for the airport by mid-afternoon).

bpmNEXT 2018: All about bots with Cognitive Technology, PMG.net, Flowable

We’re into the afternoon of day 2 of bpmNEXT 2018, with another demo section.

RPA Enablement: Focus on Long-Term Value and Continuous Process Improvement, Cognitive Technology

Massimiliano Delsante of Cognitive Technology presented their myInvenio product for analyzing processes to determine where gaps exist and create models for closing those gaps through RPA task automation. The demo started with loading historical process data for process mining, which created a process model from the data together with activity resources, counts and other metrics; the model was then compared for conformance with a reference model to determine the frequency and performance of conformant and non-conformant cases. The process discovery model can be transformed to a BPMN model and its performance simulated. With a baseline data set of all manual activities, the system identified the cost of each activity, helping to identify which activities would yield the greatest savings if automated, and fed the actual resource usage data into the simulation scenario; adjusting the resources required, by specifying the number of RPA robots that could be deployed at specific tasks, allows a what-if simulation of process performance with an RPA implementation. An analytics dashboard provides visualization of the original process discovery and the simulated changes, with performance trends over time. Predictive analytics can be applied to running processes to, for example, predict which cases will not meet their deadlines, along with some root cause analysis for the problems. Doing this analysis requires that you have information about the cost of the RPA robots as well as being able to identify which tasks could be automated with RPA. Good integration of process discovery, simulation, analysis and ongoing monitoring.
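
As a simplified sketch of the first step in that pipeline (not myInvenio’s algorithm), process discovery starts by extracting the directly-follows relation from an event log; the counts on those edges become the skeleton of the discovered model:

```python
from collections import Counter
import pandas as pd

# Toy event log: one row per activity execution
log = pd.DataFrame({
    "case":     ["c1", "c1", "c1", "c2", "c2"],
    "activity": ["receive", "review", "approve", "receive", "approve"],
    "timestamp": pd.to_datetime([
        "2018-04-01 09:00", "2018-04-01 10:00", "2018-04-01 12:00",
        "2018-04-02 09:00", "2018-04-02 09:30"]),
})

edges = Counter()
for _, case in log.sort_values("timestamp").groupby("case"):
    acts = case["activity"].tolist()
    edges.update(zip(acts, acts[1:]))   # directly-follows pairs within a case

print(edges)   # e.g. ('receive', 'review'): 1, ('receive', 'approve'): 1, ...
```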

Integration is Still Cool, and Core in your BPM Strategy, PMG.net

Ben Alexander from PMG.net focused on integration within BPM as a key element for driving innovation by increasing the speed of application development: services for RPA, ML, AI, IoT, blockchain, chatbots and whatever other hot new technologies come along can be brought together in a low-code environment such as PMG. His demo showed a vendor onboarding application, adding a machine learning function/subprocess for assessing the probability of vendor approval by calling AzureML, user task assignment using Slack integration, SMS/phone support through a Twilio connector, and RPA bot invocation using a generic REST API. Nice demo of how to put all of these third-party services together using a BPM platform as the main application development and orchestration engine.
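
The same pattern is easy to sketch outside any particular platform. Here’s a hedged example (all URLs are placeholders, and this is not PMG’s code) of chaining an ML scoring service, a Slack incoming webhook and a generic RPA REST API with plain HTTP calls:

```python
import requests

# 1. Score the vendor with a hosted ML endpoint (placeholder URL)
score = requests.post("https://example-ml-service.net/score",
                      json={"vendor": "Acme Corp"}, timeout=10).json()

# 2. Notify a reviewer over a Slack incoming webhook (placeholder URL)
requests.post("https://hooks.slack.com/services/T000/B000/XXXX",
              json={"text": f"Vendor approval likelihood: {score}"})

# 3. Kick off an RPA bot through a generic REST API (placeholder URL)
requests.post("https://rpa.example.com/api/v1/jobs",
              json={"bot": "vendor-onboarding", "input": {"vendor": "Acme Corp"}})
```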

Making Process Personal, Flowable

Paul Holmes-Higgin and Micha Keiner from Flowable presented on their Engage product for customer engagement via chat, using chatbots to augment rather than replace human chat, and modeling the chatbot behavior using standard modeling tools. In particular, they have found that a conversation can be modeled as a case with dynamic injection of processes, with the ability to bring intelligence into conversations, and the added benefit of the chat being completely audited. The demo was around the use case of a high-wealth banking client talking to their relationship manager using chat, with simultaneous views of both the client and relationship manager UI in the Flowable Engage chat interface. The client mentioned that she had moved to a new home, and the RM initiated the change-of-address process by starting a new case right in the chat by invoking a context-sensitive digital assistant. This provided advice to the RM about address change regulatory rules, and provided a form in situ to collect the address data. The case then progressed through a combination of chat messages for collaboration between the human participants, forms filled directly in the chat window, and confirmation by the client via chat after presenting her with the information to be updated. Potential issues, such as compliance regulations due to a country move, are raised to the RM, and related processes that involve a compliance officer via a more standard task inbox interface execute behind the scenes. Once the compliance process completes, the RM is informed via the chat interface. Behind the scenes, there’s a standard address change BPMN diagram, where the chat interface is integrated through service activities. They also showed replacing the human compliance decision with a decision table that was created (and can be manually edited if necessary) based on a decision tree generated by machine learning on 200,000 historical address change cases; rerunning the scenario skipped the compliance officer step and approved the change instantaneously. Other chat automated tasks that the RM can invoke include setting reminders, retrieving customer information and more using natural language processing, as well as other types of more structured cases and processes. Great demo, and an excellent look at the future of chat interfaces in process and case management.
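
A sketch of the decision-table idea from the demo (not Flowable’s implementation): train a decision tree on historical outcomes, then export human-readable rules that could seed an editable decision table. The feature names are assumptions for illustration.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-in for the 200,000 historical address change cases
cases = pd.DataFrame({
    "country_changed":         [0, 0, 1, 1, 0, 1],
    "high_risk_country":       [0, 0, 0, 1, 0, 1],
    "needs_compliance_review": [0, 0, 0, 1, 0, 1],   # historical outcome
})

tree = DecisionTreeClassifier(max_depth=3).fit(
    cases[["country_changed", "high_risk_country"]],
    cases["needs_compliance_review"])

# Human-readable rules that a business user could review and hand-edit
print(export_text(tree, feature_names=["country_changed", "high_risk_country"]))
```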

bpmNEXT 2018 day 2 keynote with @NathanielPalmer

Nathaniel Palmer kicked off day 2 of bpmNEXT 2018 with his ever-prescient views on the next five years of BPM. Bots, decisions and automation are key, with the three R’s (robots, rules and relationships) defining BPM in the years to come. More and more, commercial transactions (or services that form part of those transactions) will happen on servers outside your organization, and often outside of your control; robots and intelligent agents will be doing a big part of that work. He also believes that we’re seeing the beginning of the death of smartphones, to be replaced with other devices and other interfaces such as conversational UI and wearable technology. This is going to radically change how apps have to be designed, and will leave a lot of companies scrambling to catch up with this change as people move more of their interactions off smartphones and laptops. Although more conservative organizations — including government agencies — will continue to support the least common denominator in interaction style (probably email and traditional websites), commercial organizations don’t have that luxury, and need to rethink sooner. He envisions that your fastest-growing competitors will have fewer employees than robots, although some interesting news out of Tesla this week may indicate that it’s premature to replace some human functions.

He spoke about how this will refine application architecture into four tiers: a client tier unique to each platform, a separate delivery tier that optimizes delivery for the platforms, an aggregation tier that integrates services and data, and a services tier that pulls data from both internal and external sources. This creates an abstraction between what a task is and how it is performed, and even whether it is automated or performed by a person. Decision as a service for both commercial and government services will become a primary delivery model, allowing decisions (and the automation enabled by them) to be easily plugged into applications; this will require more of a business-first, model-driven approach rather than having decisions built in code by developers.

His Future-Proof BPM architecture — what others are calling a digital transformation platform — brings together a variety of capabilities that can be provided by many vendors or other organizations, and fed by events. In fact, the core capabilities (automation, machine learning, decision management, workflow management) also generate events that feed back into the data flooding into these processes. BPM platforms have the ability to become the orchestrating platforms for this, which is possibly why many of the BPMS vendors are rebranding as low-code application development environments, but be aware of fundamental differences in the underlying architecture: do they support modularity and microservices, or are they just lifting and shifting to monolithic containers in the cloud?

Finishing up, he returned to the concept that intelligent agents can act autonomously in complex transactions, and this will become more common over the next few years. Interestingly, an interview that I did for a European publication is being translated into German, and the translator emailed me this morning to tell me that they needed to change some of my comments on automating loan transactions, since that’s not permitted in Germany. My response: not yet, but it will be. We all need to be prepared for a more automated future.

Great audience discussion at the end on how this architecture is manifesting, how to model/represent some of these automation concepts, the role of a smarter event bus, the future of the word “bot” and more. Max Young from Capital BPM took over to discuss the development of a grammar for RPA, with an invitation for the brain trust in the room to start thinking about this in more detail. RPA vendors are creating their own notations, but a vendor-agnostic standard would go a long way towards helping business people to directly specify automation.

Since they’re pumping out the video on the same day as the presentations, check the bpmNEXT YouTube channel later for a replay of Nathaniel’s presentation.

bpmNEXT 2018: Here’s to the oddballs, with ConsenSys, XMPro and BPLogix

And we’re off with the demo sessions!

Secure, Private Decentralized Business Processes for Blockchains, ConsenSys

Vanessa Bridge of ConsenSys spoke about using BPMN diagrams to create smart contracts and other blockchain applications, while also including privacy, security and other necessary elements: essentially, using BPM to enable Ethereum-based smart contracts (rather than using blockchain as a ledger for BPM transactions and other BPM-blockchain scenarios that I’ve seen in the past). She demonstrated using Camunda BPM for a token sale application, and for a boardroom voting application. For each of the applications, she used BPMN to model the process, particularly the use of BPMN timers to track and control the smart contract process — something that’s not native to blockchain itself. Encryption and other steps were called as services from the BPMN diagram, and the results of each contract were stored in the blockchain. Good use of BPM and blockchain together in a less-expected manner.
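
For a sense of the pattern (this is not the demo code), a BPMN service task could delegate to a function like the one below, which records a vote in an Ethereum smart contract using web3.py; the node URL, contract address, ABI and account are all placeholders:

```python
from web3 import Web3  # pip install web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder node URL

VOTING_ABI: list = []  # the compiled contract's ABI would be supplied here
contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder address
    abi=VOTING_ABI,
)

def cast_vote(proposal_id: int, voter_account: str) -> str:
    # Send the vote as a transaction; the BPMN service task would store the
    # returned transaction hash back in the process instance
    tx_hash = contract.functions.vote(proposal_id).transact({"from": voter_account})
    return tx_hash.hex()
```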

Turn IoT Technology into Operational Capability, XMPro

Pieter van Schalkwyk of XMPro looked at the challenges of operationalizing IoT, with a virtual flood of data from sensors and machines that needs to be integrated into standard business workflows. This involves turning big data into smart data via stream processing before passing it on to the business processes in order to achieve business outcomes. XMPro provides smart listeners and agents that connect the data to the business processes, forming the glue between realtime data and resultant actions. His demo showed data being collected from a fan on a cooling tower, bringing in data from the sensor logs and comparing it to the manufacturer’s information and historical information in order to predict if the fan is likely to fail, create a maintenance work order, and even optimize maintenance schedules. They can integrate with a large library of action agents, including their own BPM platform or other communication and collaboration platforms such as Slack. They provide a lot of control over their listener agents, which can be used for any type of big data, not just industrial device data, and integrate complex and accurate prediction models for failure likelihood and remaining useful life. He showed their BPM platform being used downstream from the analytical processing, where the internet of things can interact with the internet of people to make additional decisions in the context of additional information such as 3D drawings. Great example of how to filter through hundreds of millions of data points in streaming mode to find the few combinations that require action to be taken. He threw out a comment at the end that this could be used for non-industrial applications, possibly for GDPR data, which definitely made me think about content analytics on content as it’s captured in order to pick out which events might trigger a downstream process, such as a regulatory process.
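
A toy version of that listener pattern (nothing like XMPro’s engine, and the vibration threshold is an assumed value): watch a rolling window of sensor readings and emit an action only when the trend crosses a limit.

```python
from statistics import mean

WINDOW, THRESHOLD = 10, 7.5   # window size and assumed vibration limit

def monitor(readings):
    # Keep a rolling window; yield an action when the windowed average
    # exceeds the threshold, discarding the millions of normal readings
    window = []
    for value in readings:
        window.append(value)
        if len(window) > WINDOW:
            window.pop(0)
        if len(window) == WINDOW and mean(window) > THRESHOLD:
            yield f"create maintenance work order (avg={mean(window):.2f})"

for action in monitor([6.8, 7.0, 7.2, 7.4, 7.6, 7.8, 8.0, 8.1, 8.2, 8.3, 8.4]):
    print(action)
```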

Business Milestones as Configuration, BPLogix

Scott Menter and Joby O’Brien of BPLogix finished up this section on new BPM ideas with their approach to goal orientation in BPM, which is milestone-based and requires understanding the current state of a case before deciding how to move forward. Their Process Director BPM is not BPMN-based, but rather an event-based platform where events are used to determine milestones and drive the process forward: much more of a case management view, usually visualized as a project management-style Gantt chart rather than a flow model. They demonstrated the concept of app events, where changes in state of any of a number of things — form fields, activities, document attachments, etc. — can record a journal entry that uses business semantics and process instance data. This allows events from different parts of the platform to be brought together in a single case journal that shows the significant activity within the case, but also to be triggers for other events such as determining case completion. The journal can be configured to show only certain types of events for specific users — for example, if they’re only interested in events related to outgoing correspondence — and also becomes a case collaboration discussion. Users can select events within the journal and add their own notes, such as taking responsibility for a meeting request. They also showed how machine learning and rules can be used for dynamically changing data; although shown as interactions between fields on forms, this can also be used to generate new app events. Good question from the audience on how to get customers to think about their work in terms of events rather than procedures; case management proponents will say that business people inherently think about events/state changes rather than process. Interesting representation of creating a selective journal based on business semantics rather than just logging everything and expecting users to wade through it for the interesting bits.
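
An illustrative sketch of the journal concept (not Process Director’s actual model): app events from anywhere in the platform land in one case journal, which can then be filtered by event type per user.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AppEvent:
    case_id: str
    kind: str          # e.g. "correspondence", "field_change", "milestone"
    description: str
    at: datetime = field(default_factory=datetime.utcnow)

journal: list[AppEvent] = []

def record(event: AppEvent) -> None:
    journal.append(event)
    # a real system would also evaluate milestone/completion triggers here

def view(case_id: str, kinds: set[str]) -> list[AppEvent]:
    # A user who only cares about outgoing correspondence sees just that
    return [e for e in journal if e.case_id == case_id and e.kind in kinds]
```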

We’re off to lunch. I’m a bit out of practice at live-blogging, but hope that I captured some of the flavor of what’s going on here. Back with more this afternoon!

Obligatory futurist keynote at AIIM18 with @MikeWalsh

We’re at the final day of the AIIM 2018 conference, and the morning keynote is with Mike Walsh, talking about business transformation and what you need to think about as you’re moving forward. He noted that businesses don’t need to worry about millennials, they need to worry about 8-year-olds: these days, 90% of all 2-year-olds (in the US) know how to use a smart device, making them the truly born-digital generation. What will they expect from the companies of the future?

Machine learning allows us to customize experiences for every user and every consumer, based on analysis of content and data. Consumers will expect organizations to predict their needs before they can even voice them themselves. In order to do that, organizations need to become algorithmic businesses: be business machines rather than have business models. Voice interaction is becoming ubiquitous, with smart devices listening to us most (all) of the time and using that to gather more data on us. Face recognition will become your de facto password, which is great if you’re unlocking your iPhone X, but maybe not so great if you don’t like public surveillance that can track your every move. Apps are becoming nagging persuaders, telling us to move more, drink more water, or attend this morning’s keynote. Like migratory birds that can sense magnetic north, we are living in a soup of smart data that guides us. Those persuasive recommendations become better at predicting our needs, and more personalized.

Although he started by saying that we don’t need to worry about millennials, 20 minutes into his presentation Walsh is admonishing us to let the youngest members of our team “do stuff rather than just get coffee”. It’s been a while since I worked in a regular office, but do people still have younger people get coffee for them?

He pointed out that rigid processes are not good, but that we need to be performance-driven rather than process-driven: making good decisions in ambiguous conditions in order to solve new problems for customers. Find people who are energized by unknowns to drive your innovation — this advice is definitely more important than considering the age of the person involved. Bring people together in the physical realm (no more work from home) if you want the ideas to spark. Take a look at your corporate culture, and gather data about how your own teams work in order to understand how employees use information and work with each other. If possible, use data and AI as the input when designing new products for customers. He recommended a next action of quantifying what high performance looks like in your organization, then working with high performers to understand how they work and collaborate.

He discussed the myth of a simple relationship between automation and employment, and how automating a task does not, in general, put people out of work, but just changes what their job is. People working together with the automation make for more streamlined (automated) standard processes, with the people focused on the things that they’re best at: handling exceptions, building relationships, making complex decisions, and innovating through the lens of combining human complexity with computational thinking.

In summary, the new AI era means that digital leaders need to make data a strategic focus, get smart about decisions, and design work rather than doing it. Review the decisions made in your organization, and decide which are best made using human insight and which are better automated — either way, these could become a competitive differentiator.

Invasion of the bots: intelligent healthcare applications at @UnitedHealthGrp

Dan Abdul, VP of technology at UnitedHealth Group (a large US healthcare company), presented at AIIM 2018 on driving intelligent information in US healthcare, and how a variety of AI and machine learning technologies are adding to that: bots that answer your questions in an online chat, Amazon’s Alexa telling you the best clinic to go to, and image recognition that detects cancer in a scan before most radiologists can. The US has an extremely expensive healthcare system, much of that caused by in-patient services in hospitals, yet a number of initiatives (telemedicine, home healthcare, etc.) do little to reduce hospital visits and the related costs. Intelligent information can help reduce some of those costs through early detection of problems that are easily treatable before they become serious enough to require hospital care, and prediction of other conditions, such as homelessness, that often result in a greater need for healthcare services. These intelligent technologies are not intended to replace healthcare practitioners, but to assist them by processing more information faster than a person can, and surfacing insights that might otherwise be missed.

Abdul and his team have built a smart healthcare suite of applications that are based on a broad foundation of data sources: he sees the data as being key, since you can’t look for patterns or detect early symptoms without the data on which to apply the intelligent algorithms. With aggregate data from a wider population and specific data for a patient, intelligent healthcare can provide much more personalized, targeted recommendations for each individual. They’ve made a number of meaningful breakthroughs in applying AI technologies to healthcare services, such as identifying gaps in care based on treatment codes, and doing real-time monitoring and intervention via IoT devices such as fitness trackers.

These ideas are not unique to healthcare, of course; personalized recommendations based on a combination of a specific consumer’s data plus trends from aggregate population data can be applied to anything from social services to preventative equipment maintenance.