Focus on Insurance Processes: Product Innovation While Managing Risk and Costs – my upcoming webinar with @Signavio

I know, I just announced a banking webinar with Signavio on February 25; this is another one with an insurance focus on March 10 at 1pm ET. From the webinar description:

With customer churn rates approaching 25% in some insurance sectors, insurers are attempting to increase customer retention by offering innovative products that better address today’s market. The ability to create and support innovative products has become a top-level goal for many insurance company executives, and requires agile and automated end-to-end processes for a personalized customer journey.

Similar to the banking webinar, the focus is on management-level concerns, and I’ll look at some use cases around insurance product innovation and claims.

Head on over to the landing page to sign up for the webinar. If you’re unable to attend, you’ll receive a link to the replay.

My post on the @Trisotech blog: Designing Processes for Agility

In my continuing series over on the Trisotech blog, I’ve just published a post on issues to consider when designing processes for agility, especially the tradeoffs between what goes in a process model versus a decision model. I’ve spent quite a bit of time thinking about this in the past when working with enterprise customers on their processes, and a conversation after a conference presentation last year really brought some of the ideas into focus. From that post:

Assuming that you’re using model-driven process and decision management systems (BPMN and DMN, respectively) for design and implementation, you might think that it doesn’t really matter how you design your applications, since either could be quickly changed to accommodate changing business needs. It’s true that model-driven process and decision management systems give you agility in that they can be changed relatively quickly with little or no coding, then tested and redeployed in a matter of hours or days. But your design choices can impact understandability of the models as well as agility of the resulting application, and it’s important to have both.

Head on over to their blog to read the entire post.

If you have comments or questions about the post, feel free to engage in the comments on this post, since Trisotech doesn’t allow commenting on their blog.

Focus on Banking Processes: Improve Revenue, Costs and Compliance – my upcoming webinar with @Signavio

I’ll be presenting two webinars sponsored by Signavio in the coming weeks, starting with one on banking processes on February 25 at 1pm ET. In this first webinar, I’ll be looking not just at operational improvements, but at the (executive) management-level concerns of improving revenue, controlling costs and maintaining compliance. From the webinar description:

Today’s retail banks face more challenges than ever before: in addition to competing with each other, they are competing with fintech startups that provide alternatives to traditional banking products and methods. The concerns in the executive suite continue to focus on revenue, costs and compliance, but those top-level goals are more than just numbers. Revenue is tied closely to customer satisfaction and wallet share, with today’s customers expecting personalized banking products and modern omnichannel experiences.

You can sign up for the webinar here. This will be a concise 35 minutes plus Q&A, and I’ll include some use case examples from client onboarding and KYC in retail banking.

ARIS Elements: the cloud “starter edition” for process design

I decided not to get up at 4am (Eastern time) earlier this week to watch the ARIS Elements launch webinar presented by ARIS senior product manager Tom Thaler, but Software AG has published it here — no registration required — and I had a quick look at the replay, as well as checking out the ARIS Elements website, which is already live.

Creating a model in ARIS Elements, showing the seven supported model types

As seen in the webinar, model creation allows you to create seven different types of models: process landscape, BPMN process, event-driven process (EPC), organizational chart, system diagram, data model, and structuring model. It does not include DMN or CMMN models; DMN is in ARIS Advanced and Enterprise editions.

Thaler demonstrated creating a BPMN model, which is similar to many of the other cloud-based modelers, although the extent of their BPMN coverage (for some of the more esoteric event types, for example) isn’t clear. What they do provide that is unique, however, is analysis-focused information for different steps, such as RACI responsibility assignments that link directly to an organizational chart. BPMN models are checked for validity, even though these are probably not expected to be directly-executable models. Once a model is created, it can be previewed and then published (unless the database has been set for auto-publication). In addition to the visual model, the preview/published versions of BPMN models show a summary tabular view of the process steps, with roles, input, output and IT systems for each. The RACI chart is also generated from the values entered in the process model.

A process landscape/map model can be created to show groups of processes (in the demo, the top level groups were management, core and supporting processes); these can in turn be nested groups of processes for more complex areas such as HR.

A user can set specific models as favorites, which will then appear on their home page for easy access. There is a hierarchical view of the repository by model type.

There are fairly standard user management features to add new users and assign permissions, although this edition does not provide single sign-on.

There are a number of video tutorials available to show how to create different model types and manage settings, and a free trial if you want to get started quickly.

There were a number of good questions in the Q&A (starting at around 38 minutes into the webinar) that exposed some of the other features and limitations of ARIS Elements. Many of these were obviously from people who are currently ARIS users and are looking to see if Elements fits into their plans:

  • Commenting by viewers is not supported
  • BPMN models can be imported
  • There is only one database (support for multiple databases to separate business units is a feature in ARIS Advanced/Enterprise)
  • Upgrading to a more expensive version would allow all models that were already created to be migrated
  • There is no automation of model review cycles (or any other workflow control), such as having a model reviewed by one or more others before publication; this would have to be done manually
  • There is no document storage (supporting documents can be stored directly in ARIS Advanced/Enterprise)
  • There is no process comparison (available in higher level versions)
  • Migrating from an ARIS on-premise edition to Elements could result in data loss since not all of the model types and features are supported, and is not recommended
  • There are a small number of pre-defined reports available for immediate use, but no report customization

If you look at the pricing page, which also shows a feature comparison chart, you’ll see that ARIS Elements is considered the low-end edition of their cloud process modeling product suite. It’s fairly limited (up to 20 users, one database, other limitations) and is priced at 100€ (about US$110) per designer user per month and 50€ per 10 viewer users; that seems somewhat high, but they offer a broader range of model types than competing process modeling tools, and include a shared repository for collaborative designing and viewing.

ARIS Elements is being positioned in an interesting space: it’s more than just process modeling, but less than the more complete enterprise architecture modeling that you’ll find in ARIS Advanced/Enterprise and competitive EA modeling products. It’s being targeted at “beginners”, although arguably beginners would not be creating a lot of these model types (although they might be viewing them). Possibly they’ve had feedback that the Advanced version is just a bit too complex for many situations, and they are attempting to hit the part of the market that doesn’t need the full capabilities; or they are offering Elements as a starting point, with the goal of migrating many of these customers to the Advanced/Enterprise editions as soon as they run up against the limitations.

Process is cool (again), and the coolest kid on the block is process mining

I first saw process mining software in 2008, when Fujitsu was showing off their process discovery software/services package, plus an interesting presentation by Anne Rozinat from that year’s academic BPM conference, where she tied together concepts of process mining and simulation without really using the terms process mining or discovery. Rozinat went on to form Fluxicon, which developed one of the earliest process mining products and really opened up the market, and she spent time with me early on, providing much of my process mining education. Fast forward 10+ years, and process mining is finally a hot topic: I’m seeing it from a few mining-only companies (Celonis), as part of a suite from process modeling companies (Signavio), and even within a larger process automation suite (Software AG). Eindhoven University of Technology, arguably the birthplace of process mining, even offers a free process mining course which is quite comprehensive and covers usage as well as many of the underlying algorithms — I did the course and found it offered some great insights and a few challenges.

Today, Celonis hosted a webinar, featuring Rob Koplowitz of Forrester in conversation with Celonis’ CMO Anthony Deighton, on the role of process improvement in improving digital operations. Koplowitz started with some results from a Forrester survey showing that digital transformation is now the primary driver of process improvement initiatives, and the importance of process mining in that transformation. Process mining continues its traditional role in process discovery and conformance checking, but also has a role in process enhancement and guidance. Luckily for those of us who focus on process, process is now cool (again).

Unlike analytics that only cover the part of a process automated in a BPMS, process mining allows you to capture information from any system and trace the entire customer journey, across multiple systems and forms of interaction. Process discovery using a process mining tool (like Celonis) lets you take all of that data and create consolidated process models, highlighting problem areas such as wait states and rework. It’s also a great way to find compliance problems, since you’re looking at how the processes actually work rather than how they were designed to work.
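If you want a feel for what that discovery step involves, open-source tooling makes it easy to experiment; here’s a minimal sketch using the pm4py library on a hypothetical XES event log export. This is illustrative only, and not how Celonis implements their product.

```python
# Minimal process discovery sketch using the open-source pm4py library.
# "event_log.xes" is a hypothetical export with standard case ID, activity
# and timestamp attributes; this is not how Celonis implements discovery.
import pm4py

# Load the event log extracted from the source systems (ERP, CRM, BPMS, ...)
log = pm4py.read_xes("event_log.xes")

# Discover a directly-follows graph: activities as nodes, handoffs as edges,
# annotated with frequencies: a quick way to spot rework loops and wait states
dfg, start_activities, end_activities = pm4py.discover_dfg(log)
pm4py.view_dfg(dfg, start_activities, end_activities)

# Discover a Petri net, then replay the actual cases against it to find
# non-conformant behavior (how the process really runs vs. how it was designed)
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
diagnostics = pm4py.conformance_diagnostics_token_based_replay(
    log, net, initial_marking, final_marking)
print(sum(1 for d in diagnostics if not d["trace_is_fit"]), "non-conformant cases")
```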

Koplowitz had some interesting insights and advice in his presentation, not the least of which was to engage business experts to drive change and automation, not just technologists, and use process analytics (including process mining) as a guide to where problems lie and what should/could be automated. He showed how process mining fits into the bigger scope of process improvement, contributing to the discovery and analysis stages that are a necessary precursor to reengineering and automation.

Good discussion on the webinar, and there will probably be a replay available if you head to the landing page.

bpmNEXT 2018: Bonitasoft, KnowProcess

We’re in the home stretch here at bpmNEXT 2018: day 3 has only a couple of shorter demo sessions and a few related talks before we break early to head home.

When Artificial Intelligence meets Process-Based Applications, Bonitasoft

Nicolas Chabanoles and Nathalie Cotte from Bonitasoft presented on their integration of AI with process applications, specifically predictive analytics for automating decisions and making recommendations. They use an extension of process mining to examine case data and activity times in order to predict, for example, whether a specific case will finish on time; in the future, they hope to be able to accurately predict the end time for individual cases for better feedback to internal users and customers. The demo was a loan origination application built on Bonita BPM, which was fairly standard, with the process mining and machine learning coming into play in how the processes are monitored. Log data is polled from the BPM system into an Elasticsearch database, then machine learning is applied to the instance data; configuration of the machine learning is based (at this point) only on specifying an expected completion time for each instance type to build the prediction model. At that point, predictions can be made for in-flight instances as to whether each one will complete on time, or its probability of completing on time for those predicted to be late — for example, if key documents are missing, or the loan officer is not responding quickly enough to review requests. The loan officer is shown which tasks are likely to be causing the late prediction, and completing those tasks will change the prediction for that case. Priority for cases can be set dynamically based on the prediction, so that cases more likely to be late are set to higher priority in order to be worked earlier. Future plans are to include more business data and human resource data, which could be used to explicitly assign late cases to individual users. The use of process mining algorithms, rather than simpler prediction techniques, will allow suggestions on state transitions (i.e., which path to take) in addition to just setting instance priority.
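To make the prediction step more concrete, here’s a rough sketch of the general technique using scikit-learn: train a classifier on features of completed cases, then score in-flight cases for their probability of finishing on time. This is my own illustration with hypothetical features and made-up data, not Bonitasoft’s actual implementation.

```python
# Illustrative predictive process monitoring sketch; the features and data are
# made up, and this is not Bonitasoft's actual implementation.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-case features extracted from completed cases in the process logs
completed = pd.DataFrame({
    "elapsed_hours":     [12, 80, 30, 150, 45, 20],
    "open_tasks":        [1, 4, 2, 5, 2, 1],
    "missing_documents": [0, 2, 0, 3, 1, 0],
    "on_time":           [1, 0, 1, 0, 1, 1],   # label: finished within expected time?
})

model = GradientBoostingClassifier().fit(
    completed.drop(columns="on_time"), completed["on_time"])

# Score an in-flight loan application: probability that it completes on time.
# A low probability could raise its priority or flag blocking tasks to the loan officer.
in_flight = pd.DataFrame([{"elapsed_hours": 60, "open_tasks": 3, "missing_documents": 1}])
print("P(on time):", model.predict_proba(in_flight)[0][1])
```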

Understanding Your Models and What They Are Trying To Tell You, KnowProcess

Tim Stephenson of KnowProcess spoke about models and standards, particularly as applied to their main use case of marketing automation and customer onboarding. Their ModelMinder application ingests BPMN, CMMN and DMN models, and can be used to search the models for activities, resources and other model components, as well as to identify and understand extensions such as calling a REST service from a BPMN service task. The demo showed a KnowProcess repository, initially through the search interface: searching for “loan” or “send memo” returned links to models containing those terms, and the model (process, case or decision) can be displayed directly in their viewer with the location of the search term highlighted. The repository can be stored as files, or an engine can be indexed directly. He also showed an interface to Slack that uses a ModelMinder bot to handle natural language requests about certain model types and content, such as which resources perform the work specified in the models, or which models call a specific subprocess, providing a link directly back to the models in the KnowProcess repository. Finishing up the demo, he showed how the model search and reuse is attached to a CRM application, so that a marketing person sees the models as functions that can be executed directly within their environment.
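To give a flavor of what indexing and searching model content involves, here’s a toy sketch of the general idea (my own illustration, not ModelMinder’s code) that scans a BPMN file for tasks whose names contain a search term; the file name is hypothetical.

```python
# Toy model-search sketch: find BPMN tasks whose names contain a search term.
# My own illustration of the general idea, not KnowProcess ModelMinder's code.
import xml.etree.ElementTree as ET

BPMN_NS = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"
TASK_TAGS = ("task", "userTask", "serviceTask", "scriptTask", "sendTask", "receiveTask")

def search_bpmn(path, term):
    """Return (id, name) pairs of tasks whose name contains the search term."""
    root = ET.parse(path).getroot()
    matches = []
    for tag in TASK_TAGS:
        for task in root.iter(BPMN_NS + tag):
            name = task.get("name", "")
            if term.lower() in name.lower():
                matches.append((task.get("id"), name))
    return matches

# Hypothetical model file; a repository indexer would loop over all stored models
print(search_bpmn("loan_origination.bpmn", "send memo"))
```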

Instead of a third demo, we had a more free-ranging discussion, which had started yesterday during one of the Q&As, about a standardized modeling language for RPA, led by Max Young from Capital BPM and with contributions from a number of others in the audience (including me). It’s a good starting point, but there’s obviously still a lot of work to do in this direction, starting with getting some of the major RPA vendors on board with standardization efforts. The emerging ideas seem to center around defining a grammar for the activities that occur in RPA (e.g., extract data from an Excel file, write data to a certain location in an application screen), then an event and flow language to piece together those primitives that might look something like BPMN or CMMN; a rough sketch of what such primitives might look like is below. I see this as similar to the issue of defining page flows, which are often done as a black box function performed within a human activity in a BPMN flow: exposing and standardizing that black box is what we’re talking about. This discussion is a prime example of what makes bpmNEXT great, and keeps me coming back year after year.
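For illustration only, and emphatically not a proposed standard, here is the kind of thing I imagine when we talk about a grammar of RPA primitives that a BPMN-like flow language could then sequence; all of the names and fields are hypothetical.

```python
# Hypothetical sketch of RPA activity primitives; purely illustrative,
# not a proposed standard and not any vendor's actual format.
from dataclasses import dataclass, field

@dataclass
class ExtractFromExcel:
    file: str
    sheet: str
    cell_range: str

@dataclass
class WriteToScreenField:
    application: str
    screen: str
    field_name: str
    value_from: str   # reference to a previously extracted value

@dataclass
class BotTask:
    name: str
    # ordered primitives; an event and flow language would sequence these
    steps: list = field(default_factory=list)

copy_invoice = BotTask(
    name="Copy invoice total into ERP",
    steps=[
        ExtractFromExcel(file="invoices.xlsx", sheet="March", cell_range="D2"),
        WriteToScreenField(application="ERP", screen="InvoiceEntry",
                           field_name="total", value_from="D2"),
    ],
)
print(copy_invoice)
```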

bpmNEXT 2018: All about bots with Cognitive Technology, PMG.net, Flowable

We’re into the afternoon of day 2 of bpmNEXT 2018, with another demo section.

RPA Enablement: Focus on Long-Term Value and Continuous Process Improvement, Cognitive Technology

Massimiliano Delsante of Cognitive Technology presented their myInvenio product for analyzing processes to determine where gaps exist, and for creating models to close those gaps through RPA task automation. The demo started with loading historical process data for process mining, which created a process model from the data together with activity resources, counts and other metrics, then compared the model for conformance with a reference model to determine the frequency and performance of conformant and non-conformant cases. The process discovery model can be transformed to a BPMN model and its performance simulated. With a baseline data set of all manual activities, the system identified the cost of each activity, helping to identify which activities would result in the greatest savings if automated, and fed the actual resource usage data into the simulation scenario; adjusting the resources required by specifying the number of RPA robots that could be deployed at specific tasks allows a what-if simulation of the process performance with an RPA implementation. An analytics dashboard provides visualization of the original process discovery and the simulated changes, with performance trends over time. Predictive analytics can be applied to running processes to, for example, predict which cases will not meet their deadlines, along with some root cause analysis of the problems. Doing this analysis requires that you have information about the cost of the RPA robots, as well as being able to identify which tasks could be automated with RPA. Good integration of process discovery, simulation, analysis and ongoing monitoring.
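At its core, the what-if part of this analysis comes down to comparing the cost of performing an activity manually against the cost of the robots that would replace it; here’s a simplified sketch of that arithmetic with made-up figures (my own illustration, not myInvenio’s simulation engine).

```python
# Simplified what-if savings calculation: which activities are worth automating?
# Made-up figures and my own illustration, not myInvenio's simulation engine.
activities = [
    # (name, cases per month, minutes per case, fully loaded cost per hour)
    ("Enter vendor data",  1200, 12, 45.0),
    ("Validate documents",  900, 20, 45.0),
    ("Send status emails", 1500,  5, 45.0),
]
robot_cost_per_month = 1500.0   # assumed monthly cost of one RPA robot licence

for name, cases, minutes, hourly in activities:
    manual_cost = cases * minutes / 60 * hourly
    saving = manual_cost - robot_cost_per_month
    print(f"{name}: manual ${manual_cost:,.0f}/month, "
          f"net saving if automated ${saving:,.0f}/month")
```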

Integration is Still Cool, and Core in your BPM Strategy, PMG.net

Ben Alexander from PMG.net focused on integration within BPM as a key element for driving innovation by increasing the speed of application development: integrating services for RPA, ML, AI, IoT, blockchain, chatbots and whatever other hot new technologies can be brought together in a low-code environment such as PMG. His demo showed a vendor onboarding application, adding a function/subprocess for assessing the probability of vendor approval using machine learning by calling AzureML, user task assignment using Slack integration or SMS/phone support through a Twilio connector, and RPA bot invocation using a generic REST API. Nice demo of how to put all of these third-party services together using a BPM platform as the main application development and orchestration engine.
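For a sense of what the generic REST style of integration looks like from the developer’s side, here’s a minimal sketch using Python’s requests library; the endpoint, payload and token are hypothetical, not PMG’s or any RPA vendor’s actual API.

```python
# Minimal sketch of invoking an RPA bot through a generic REST API.
# The endpoint, payload and token are hypothetical, not PMG's or any vendor's API.
import requests

RPA_ENDPOINT = "https://rpa.example.com/api/v1/jobs"   # hypothetical endpoint
API_TOKEN = "replace-with-a-real-token"

def start_bot(bot_name, parameters):
    """Submit a bot job and return the service's response (e.g. a job ID to poll)."""
    response = requests.post(
        RPA_ENDPOINT,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"bot": bot_name, "parameters": parameters},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

job = start_bot("vendor-onboarding-data-entry", {"vendorId": "V-1042"})
print("Started RPA job:", job)
```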

Making Process Personal, Flowable

Paul Holmes-Higgin and Micha Keiner from Flowable presented on their Engage product for customer engagement via chat, using chatbots to augment rather than replace human chat, and modeling the chatbot behavior using standard modeling tools. In particular, they have found that a conversation can be modeled as a case with dynamic injection of processes, with the ability to bring intelligence into conversations, and the added benefit of the chat being completely audited. The demo was around the use case of a high-net-worth banking client talking to their relationship manager using chat, with simultaneous views of both the client and relationship manager UI in the Flowable Engage chat interface. The client mentioned that she had moved to a new home, and the RM initiated the change-of-address process by starting a new case right in the chat, by invoking a context-sensitive digital assistant. This provided advice to the RM about address change regulatory rules, and provided a form in situ to collect the address data. The case then progressed through a combination of chat messages for collaboration between the human participants, forms filled in directly in the chat window, and confirmation by the client via chat after presenting them with the information to be updated. Potential issues, such as compliance regulations due to a country move, are raised to the RM, and related processes execute behind the scenes that include a compliance officer via a more standard task inbox interface. Once the compliance process completes, the RM is informed via the chat interface. Behind the scenes, there’s a standard address change BPMN diagram, where the chat interface is integrated through service activities. They also showed replacing the human compliance decision with a decision table that was created (and could be manually edited if necessary) based on a decision tree generated by machine learning on 200,000 historical address change cases; rerunning the scenario skipped the compliance officer step and approved the change instantaneously. Other chat automated tasks that the RM can invoke include setting reminders, retrieving customer information and more using natural language processing, as well as other types of more structured cases and processes. Great demo, and an excellent look at the future of chat interfaces in process and case management.
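The decision table generated by machine learning is a pattern worth calling out on its own: train a decision tree on historical outcomes, then export it as human-readable rules that a compliance team can review and edit before deploying it as a decision table. Here’s a rough sketch of that idea with hypothetical features and made-up data (not Flowable’s implementation).

```python
# Sketch of deriving reviewable decision rules from historical case outcomes.
# Hypothetical features and made-up data; not Flowable's implementation.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

history = pd.DataFrame({
    "country_changed":   [0, 0, 1, 1, 0, 1],
    "high_risk_country": [0, 0, 0, 1, 0, 1],
    "pep_flag":          [0, 1, 0, 0, 0, 1],   # politically exposed person
    "approved":          [1, 0, 1, 0, 1, 0],   # historical compliance decision
})

X = history.drop(columns="approved")
tree = DecisionTreeClassifier(max_depth=3).fit(X, history["approved"])

# Export the tree as rules a compliance officer can review, adjust,
# and then capture as a DMN-style decision table
print(export_text(tree, feature_names=list(X.columns)))
```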

bpmNEXT 2018: Complex Modeling with MID GmbH, Signavio and IYCON

The final session of the first day of bpmNEXT 2018 was focused on advanced modeling techniques.

Designing the Data-Driven Company, MID GmbH

Elmar Nathe of MID GmbH presented on their enterprise decision maps, which provide an aggregated visualization of strategic, tactical and operational decisions together with business events. They provide a variety of modeling tools, but see decisions as key to understanding how organizations are driven by data and events. It’s clearly a rich decision modeling environment, with support for PMML to incorporate predictive models and other data scientist analysis tools, plus links to other model types, such as ERDs that can show what data contributes to which decision model, and business process models. Much more of an enterprise architecture approach to model-driven design that can incorporate the work of data scientists.

Using Customer Journeys to Connect Theory with Reality, Signavio

Till Reiter and Enrico Teterra of Signavio started with a great example of an Ignite presentation, with few words, lots of graphics and a bit of humor, discussing their new notation for modeling an outside-in view of the customer journey rather than just having an undifferentiated “customer” swimlane in a BPMN diagram. The demo walked through their customer journey mapping tool, and how their collaboration hub overlays on it to allow information about each component of the journey map to be discussed amongst process modeling users. The journey map contains a lot of information about KPIs and other process metrics in a form most consumable by process owners and modelers, but also has a notebook/dashboard view for analysts to identify problems with the process and potential resolution actions. This includes a variety of analysis tools, including process discovery, where process mining techniques are applied to determine which paths in the process model may be contributing to specific problems such as cycle time, then overlaying this on the process model to assist with root cause analysis. Although their product does a good job of combining CJMs, process models and process analysis, this was more of a walkthrough of a set of pre-calculated dashboard screens than an actual demo — a far cry from the experimental features that Gero Decker showed off in their demo at the first bpmNEXT.

Discovering the Organizational DNA, IYCON and Knowledge Consultants

The final presentation of this section featured Jude Chagas Pereira of IYCON and Frank Kowalkowski of Knowledge Consultants, presenting IYCON’s Afterspyre modeling tool for creating a catalog of complex business objects, their attributes and their linkages in order to create organizational DNA diagrams. Ranking these with machine learning algorithms for semantic and sentiment analysis allows identification of process improvement opportunities. They have a number of standard business analysis techniques built in, and robust analytics focused on problem solving. The demo walked through their catalog, drilling down into the “Strategy DNA” section and the “Technology Solutions” subsection to show an enumeration of the platforms currently in place, together with attributes such as technology risk and obsolescence, which can be used to rank technology upgrade plans. Relationships between business objects can be auto-detected based on existing data. Levels including Objectives, Key Processes, Technology Solutions, Database Technology and Datacenter, and their interrelationships, are mapped into a DNA diagram and an alluvial diagram, starting at any point in the catalog and drilling down a specific number of levels as selected by the modeling analyst. These diagrams can then be refined further based on factors such as scaling the individual markers by actual performance. They showed sentiment analysis for a hotel’s ranking on a review site, which included extracting specific phrases related to certain sentiments. They also demonstrated a two-model comparison, comparing the models for two different companies to determine the overlapping and unique processes: a good indicator of the level of difficulty of a merger/acquisition (or even a divestiture). They finished up with affinity modeling, of the type used by Amazon when they show you what other books were bought by people who also bought the book you’re looking at: easy to do in matrix form with a small data set, but computationally intensive once you get into non-trivial amounts of data. Affinity modeling is most commonly used in marketing to analyze buying habits and offer people something that they are likely to buy, even if it’s something they didn’t plan to buy at first — this sort of “would you like fries with that” technique can increase purchase value by 30-40%. Related to that is correlation modeling, which can be used as a first step for determining causation. Impressive semantic data-driven analytics tool for modeling a lot of different organizational characteristics.
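The affinity calculation itself is easy to see in matrix form; here’s a tiny sketch with made-up purchase data (my own illustration, not Afterspyre’s implementation), which also shows why it gets computationally expensive: the co-occurrence matrix grows with the square of the number of items in the catalog.

```python
# Tiny affinity modeling sketch: item co-occurrence from purchase baskets.
# Made-up data and my own illustration, not IYCON Afterspyre's implementation.
import numpy as np

items = ["book_a", "book_b", "book_c", "book_d"]
# One row per customer basket, one column per item (1 = purchased)
baskets = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
])

# Co-occurrence counts: how often each pair of items appears in the same basket.
# The matrix is items x items, so it grows quadratically with catalog size.
cooccurrence = baskets.T @ baskets
np.fill_diagonal(cooccurrence, 0)

bought = items.index("book_a")
best_match = items[int(np.argmax(cooccurrence[bought]))]
print(f"Customers who bought {items[bought]} also often bought {best_match}")
```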

That’s it for day one; if everyone else is as overloaded with information as I am, we’re all ready for tonight’s wine tasting! Check the Twitter stream for opinions and photos from other attendees.

Prepping for OPEXWeek presentation on customer journey mapping – share your ideas!

I’m headed off to OPEX Week in Orlando later this month, where I’ll give a presentation on customer journey mapping and how it results in process improvement as well as customer satisfaction/value. Although customer journey mapping is commonly used to talk about user experience/navigation on customer-facing websites, I want to look at the bigger picture of what we used to call “outside-in processes”, where internal processes are turned on their head to show the process from the customer’s point of view. Once you start thinking about what the customer is trying to accomplish, it can completely change how you perform and set priorities on the internal work, as well as changing the user experience presented to the customer.

I’m preparing a few slides to guide the presentation, and if you have any good stories to share, feel free to let me know by commenting on this post or tweeting to me.

I’m also sitting on a panel the following day on low code and BPM, which I’ve recently written a paper on (sponsored by TIBCO).

Presenting at OPEXWeek in January: customer journey mapping and lowcode

I’ll be on stage for a couple of speaking slots at the OPEX Week Business Transformation Summit 2018 in Orlando the week of January 22nd:

  • Tuesday afternoon, I’ll lead a breakout session in the customer-centric transformation track on increasing customer satisfaction through customer journey mapping and process improvement.
  • Wednesday morning, I’ll be on a panel in the RPA track on how low-code platforms are transforming BPM.

I was last at OPEX Week in 2012, when it was still called PEX Week (for Process Excellence Network) – I was on a BPM blogger panel that time around – and it will be interesting to see how it’s changed since then. Looks like a lot more automation technology in the current version, with the expectation that digital transformation isn’t going to come about just by modeling your business.

If you’re going to be there, look me up at one of my sessions or around the conference on Tuesday and Wednesday.