bpmNEXT 2018: Complex Modeling with MID GmbH, Signavio and IYCON

The final session of the first day of bpmNEXT 2018 was focused on advanced modeling techniques.

Designing the Data-Driven Company, MID GmbH

Elmar Nathe of MID GmbH presented their enterprise decision maps, which provide an aggregated visualization of strategic, tactical and operational decisions together with business events. They offer a variety of modeling tools, but see decisions as key to understanding how organizations are driven by data and events. It’s clearly a rich decision modeling environment, with support for PMML for incorporating predictive models and other data scientist analysis tools, plus links to other model types such as ERDs that can show which data contributes to which decision model, and to business process models. Much more of an enterprise architecture approach to model-driven design, one that can incorporate the work of data scientists.

Using Customer Journeys to Connect Theory with Reality, Signavio

Till Reiter and Enrico Teterra of Signavio started with a great example of an Ignite presentation, with few words, lots of graphics and a bit of humor, discussing their new notation for modeling an outside-in view of the customer journey rather than just having an undifferentiated “customer” swimlane in a BPMN diagram. The demo walked through their customer journey mapping tool, and how their collaboration hub overlays on it to allow information about each component of the journey map to be discussed amongst process modeling users. The journey map contains a lot of information about KPIs and other process metrics in a form most consumable by process owners and modelers, but also has a notebook/dashboard view for analysts to determine problems with the process and identify potential resolution actions. This includes a variety of analysis tools including process discovery, where process mining techniques are applied to determine which paths in the process model may be contributing to specific problems such as cycle time, then overlaid on the process model to assist with root cause analysis. Although their product does a good job of combining CJMs, process models and process analysis, this was more of a walkthrough of a set of pre-calculated dashboard screens than an actual demo — a far cry from the experimental features that Gero Decker showed off in their demo at the first bpmNEXT.

Discovering the Organizational DNA, IYCON and Knowledge Consultants

The final presentation of this section was from Jude Chagas Pereira of IYCON and Frank Kowalkowski of Knowledge Consultants, presenting IYCON’s Afterspyre modeling tool for creating a catalog of complex business objects, their attributes and their linkages to create organizational DNA diagrams. Ranking these with machine learning algorithms for semantic and sentiment analysis allows identification of process improvement opportunities. They have a number of standard business analysis techniques built in, and robust analytics focused on problem solving. The demo walked through their catalog, drilling down into the “Strategy DNA” section and the “Technology Solutions” subsection to show an enumeration of the platforms currently in place together with attributes such as technology risk and obsolescence, which can be used to rank technology upgrade plans. Relationships between business objects can be auto-detected based on existing data. Levels including Objectives, Key Processes, Technology Solutions, Database Technology and Datacenter, and their interrelationships, are mapped into a DNA diagram and an alluvial diagram, starting at any point in the catalog and drilling down a specific number of levels as selected by the modeling analyst. These diagrams can then be refined further based on factors such as scaling the individual markers based on actual performance. They showed sentiment analysis for a hotel’s ranking on a review site, which included extracting specific phrases that related to certain sentiments. They also demonstrated a two-model comparison, which compared the models for two different companies to determine the overlapping and unique processes: a good indicator of the level of difficulty of a merger/acquisition (or even divestiture). They finished up with affinity modeling, of the type used by Amazon to tell you which books were bought by other people who also bought the book you’re looking at: easy to do in matrix form with a small data set, but computationally intensive once you get into non-trivial amounts of data. Affinity modeling is most commonly used in marketing to analyze buying habits and offer people something that they are likely to buy, even if it’s something they didn’t plan to buy at first — this sort of “would you like fries with that” technique can increase purchase value by 30-40%. Related to that is correlation modeling, which can be used as a first step in determining causation. An impressive semantic, data-driven analytics tool for modeling a lot of different organizational characteristics.
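
The affinity calculation they described is easy to picture in code. The sketch below is not IYCON’s implementation; it’s a minimal illustration of pairwise co-occurrence counting over hypothetical order data, the kind of computation that stays trivial on a small data set but grows expensive as the number of items and transactions increases.

```python
from collections import Counter
from itertools import combinations

# Hypothetical order data: each order is the set of items bought together.
orders = [
    {"book_a", "book_b"},
    {"book_a", "book_c"},
    {"book_a", "book_b", "book_d"},
    {"book_b", "book_d"},
]

# Count how often each pair of items appears in the same order.
pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

def also_bought(item, top_n=3):
    """Rank the items most often bought together with `item`."""
    related = Counter()
    for (a, b), count in pair_counts.items():
        if item == a:
            related[b] += count
        elif item == b:
            related[a] += count
    return related.most_common(top_n)

print(also_bought("book_a"))  # e.g. [('book_b', 2), ('book_c', 1), ('book_d', 1)]
```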

That’s it for day one; if everyone else is as overloaded with information as I am, we’re all ready for tonight’s wine tasting! Check the Twitter stream for opinions and photos from other attendees.

Vega Unity 7: productizing ECM/BPM systems integration for better user experience and legacy modernization

I recently had the chance to catch up with some of my former FileNet colleagues, David Lewis and Brian Gour, who are now at Vega Solutions and walked me through their Unity 7 product release. Having founded and run a boutique ECM and BPM services firm in the past, I have a soft spot for the small companies who add value to commercial products by building integration layers and vertical solutions to do the things that those products don’t do (or don’t do very well).

Vega focuses on enterprise content and process automation, primarily for financial and government clients. They have some international offices – likely development shops, based on the locations – and about 150 consultants working on customer projects. They are partners with both IBM and Alfresco for ECM and BPM products for use in their consulting engagements. Like many boutique services firms, Vega has developed products in the course of their consulting engagements that can be used independently by customers, built on the underlying partner technology plus their own integration software:

  • Vega Interchange, which takes one of their core competencies in content migration and creates an ETL platform for moving content and processes between any of a number of systems including Documentum, Alfresco, OpenText, four flavors of IBM, and shared folders on file systems. Content migration is typically pretty complex by the time you consider metadata and permissions mappings, but they also handle case data and process instances, which is rarely tackled in migration scenarios (most just recommend that you keep the old system alive long enough for all instances to complete, or do a manual migration). Having helped a lot of companies think about moving their content and process management systems to another platform, I know that this is one of those things that sounds mundane but is actually difficult to do well.
  • Vega Unity, billed as a digital transformation platform; we spent most of our time talking about Unity 7, their latest release, which I’ll cover in more detail below.
  • Vertical solutions for insurance (underwriting, claims, financial operations), government (case management, compliance) and banking (onboarding, loan origination and servicing, wealth management, card dispute resolution).

Unity 7 is an integration and application development tool that links third-party content and process systems, adding a consistent user experience layer and consolidated analytics. Vega doesn’t provide any of the back-end systems, although they partner with a couple of the vendors, but provides tools to take that heterogeneous desktop environment and turn it into a single user interface. This has significant value in simplifying the user environment, since workers only need to learn one system and some of the inter-system integration is automated behind the scenes, but it’s also of benefit when replacing one or more of the underlying technologies due to legacy modernization, or technology consolidation after a corporate acquisition. This is what systems integrators have been doing for a long time, but Unity makes it into a product that also leverages the deep system knowledge that they have from their Interchange product. Vega can add Unity to simplify an existing environment, or come in on a net-new ECM/BPM implementation that uses one of their partner technologies plus their application development/integration layer. The primary use cases are federated enterprise content search (where content is indexed in the Unity Intelligence engine, including semantic searches), case management applications, and legacy modernization, where a new front end is created on legacy systems to allow them to be swapped out without changing the user environment.

Unity is all about rapid development that includes case-based applications, content management, data and analytics. As we walked through the product and sample applications, there was definitely a strong whiff of FileNet P8 in here (a system that I used to be very familiar with) since the sample was built with IBM Case Manager under the covers, but some nice additions in terms of unified interface and analytics.

Their claim is that the Unity Case Manager would look the same regardless of the underlying technology, which would definitely make it easier to swap out or federate content, case and process management systems behind the scenes. In the sample shown, since IBM Case Manager was primary, the case view was derived directly from IBM CM case data with the main document list from IBM FileNet P8, while the “Other Documents” tab showed related documents from Alfresco. Dynamic foldering can combine content from different systems into common folders to reduce this visual dichotomy. There are role-based views based on the user profile that provide access to data from multiple systems – including CRM and others in addition to ECM and BPM – and federate it into business objects that can include records, virtual folder structures and related objects such as people or claims. Individual user credentials can be passed to the underlying systems, or shared credentials can be used in connectors for retrieving unrestricted information. Search templates, system connectors and a variety of properties are set in a configuration console, making it straightforward to set up and modify standard operations; since this is an XML-based declarative environment, these configuration changes deploy immediately. The ability to make different types of configuration changes is role-based, meaning that some business users can be permitted to make changes to the shared user interface if desired.

Unity Intelligence adds a layer of visual analytics that aggregates data from the underlying systems and other sources; however, this isn’t just visualization, but can be used to filter work and take action on cases directly via action popup menus or by opening cases directly from the analytics interface. They’re using open source tools such as SOLR (search), Lucene (information retrieval) and D3 (visualization) to good effect: I saw a demo of a Sankey diagram representing the workflow through cases based on realtime data that provided a sort of process mining view of work in progress, and allowed selecting dates for past views of work, including completed cases. For case management, in which processes are semi-structured (at best), this won’t necessarily show process anomalies, but can show service interruptions and opportunities for process improvement and standardization.
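
As an illustration of what feeds a diagram like that (this is not Vega’s implementation; the case history layout and field names below are my own assumptions), the Sankey links are essentially aggregated transition counts between consecutive activities in each case:

```python
from collections import Counter

# Hypothetical case history: case id mapped to its ordered list of activities.
cases = {
    "case-001": ["Intake", "Review", "Approve", "Close"],
    "case-002": ["Intake", "Review", "Reject", "Close"],
    "case-003": ["Intake", "Review", "Approve", "Close"],
}

# Count transitions between consecutive activities across all cases.
transitions = Counter()
for activities in cases.values():
    for source, target in zip(activities, activities[1:]):
        transitions[(source, target)] += 1

# A Sankey renderer (e.g. D3) typically wants source/target/value link records.
links = [
    {"source": s, "target": t, "value": v}
    for (s, t), v in transitions.items()
]
for link in links:
    print(link)
```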

They’ve published a video showing more about Unity 7 Intelligence, as well as one showing Unity Semantics for creating pivot tables for faceted search on content repositories.

OpenSpan at Pegaworld 2016: RPA meets BPM

Less than two months ago, Pega announced their acquisition of OpenSpan, a software vendor in the robotic process automation (RPA) market. That wasn’t my first exposure to OpenSpan, however: I looked at them eight years ago in the context of mashups. Here at PegaWorld 2016, we’re getting a first peek at the unified roadmap on how Pega and OpenSpan will fit together. Also, a whole new mess of acronyms.

I’m at the OpenSpan session at Pegaworld 2016, although some of these notes date from the analyst briefing back in April. Today’s presentation featured Anna Convery of Pega (formerly OpenSpan); Robin Gomez, Director of Operational Intelligence at Radial (a BPO), providing an introduction to RPA; and Girish Arora, Senior Information Officer at AIG, on their use of OpenSpan.

Back in the 1990s, a lot of us who were doing integration of BPM systems into enterprises used “screen scraping” to push commands to and pull data from the screens of legacy systems; since the legacy systems didn’t support any sort of API calls, our apps had to pretend to be a human worker to allow us to automate integration between systems and even hide those ugly screens. Gomez covered a good history of this, including some terms that I had hoped to never see again (I’m looking at you, HLLAPI). RPA is like the younger, much smarter offspring of screen scraping: it still pushes and pulls commands and data, automating desktop activities by simulating user interaction, but it’s now event-driven, and incorporates rules and machine learning.

As with BPM and other process automation, Gomez talked about how the goal of RPA is to automate repeatable tasks, reduce error rates, improve standardization, reduce the requirement for knowledge of multiple systems, shorten worker onboarding time, and create straight-through processes. At Radial, they were looking for the combination of robotic desktop automation (RDA), which provides personal robots to assist workers with repetitive tasks, and RPA, which completely replaces the worker on an unattended desktop. I’m not sure if every vendor makes a distinction between what OpenSpan calls RDA and RPA; it’s really the same technology, although some additional monitoring and virtualization bits are required for the headless version.

OpenSpan provides the usual RPA desktop automation capabilities, but also includes the (somewhat creepy) ability to track and analyze worker behavior: basically, what they’re typing into which application in what context, presented in their Opportunity Finder. This information can be mined for patterns in order to understand how people do their jobs — much the way that process mining works, but based on user interactions rather than system log files — and to automate the parts that are done the same way each time. This can be an end in itself, or a stepping stone to replacing the desktop apps entirely, providing interim relief while a full Pega BPM/CRM implementation is being developed, for example. Furthermore, the analytics about user activities on the desktop can feed into requirements for any replacement initiative, covering both the general flow and an analysis of the decisions made based on the data presented.
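
To make the Opportunity Finder idea concrete, here is a hedged sketch of the general pattern-mining approach (not OpenSpan’s code, and the event format is assumed): slide a window over each worker’s stream of desktop interaction events and count which short sequences recur most often, since those are the candidates for automation.

```python
from collections import Counter

# Hypothetical interaction events per worker session: (application, action) pairs.
sessions = [
    [("CRM", "open_account"), ("Billing", "copy_balance"), ("CRM", "paste_note"), ("CRM", "save")],
    [("CRM", "open_account"), ("Billing", "copy_balance"), ("CRM", "paste_note"), ("Email", "send")],
    [("CRM", "open_account"), ("Billing", "copy_balance"), ("CRM", "paste_note"), ("CRM", "save")],
]

def frequent_sequences(sessions, window=3):
    """Count every consecutive run of `window` events across all sessions."""
    counts = Counter()
    for events in sessions:
        for i in range(len(events) - window + 1):
            counts[tuple(events[i:i + window])] += 1
    return counts

# The most frequent short sequences are the repetitive work worth automating.
for seq, count in frequent_sequences(sessions).most_common(2):
    print(count, "x", " -> ".join(f"{app}:{action}" for app, action in seq))
```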

OpenSpan and Pega aren’t (exactly) competitive technologies: OpenSpan can be used for desktop automation where replacement is not an option, or as a quick fix while performing desktop process discovery to accelerate a full Pega desktop replacement project. OpenSpan paves the cowpaths, while a Pega implementation is usually a more fundamental innovation that may not be warranted in all situations. I can also imagine scenarios where a current Pega customer uses OpenSpan to automate the interaction between Pega and legacy applications that still exist on the desktop. From a Pega sales standpoint, OpenSpan may also act as the camel’s nose in the tent to get into net-new clients.

There is a wide variety of use cases, some of them saving just a few minutes but applicable to thousands of workers (e.g., logging in to multiple systems each morning), others replacing a significant portion of knowledge work for a smaller number of workers (e.g., financial reconciliations). Arora talked about what they have done at AIG, in the context of processes that require a mix of human-required and fully automatable steps; he sees their opportunity as moving from RDA (where people are still involved, gaining 10-20% in efficiency) to RPA (fully automated, gaining 40-50% efficiency). Of course, they could just swap out their legacy systems for something that was built this century, but that’s just too difficult to change — expensive, risky and time-consuming — so they are filling in the automation gaps using OpenSpan. They have RDA running on every desktop to assist workers with a variety of tasks ranging from simple to complex, and want to start moving some of those to RPA to roll out unattended automation.

OpenSpan is typically deployed without automation to start gathering user analytics, with initial automation of manual procedures within a few weeks. As Pega cognitive technologies are added to OpenSpan, it should be possible for the RPA processes to continue to recognize patterns and recommend optimizations to a worker’s flow, becoming a sort of virtual personal assistant. I look forward to seeing some of that as OpenSpan is integrated into the Pega technology family.

OpenSpan is Windows-only .NET technology, with no plans to change that at the time of our original analyst briefing in April. We’ll see.

bpmNEXT 2016 demo session: Signavio and Princeton Blue

Second demo round, and the last for this first day of bpmNEXT 2016.

Process Intelligence – Sven Wagner-Boysen, Signavio

Signavio allows creating a BPMN model with definitions of KPIs for the process, such as backlog size and end-to-end cycle time. The demo today was their process intelligence application, which allows a process model to be uploaded along with an activity log of historical process instance data from an operational system — either a BPMS or some other system such as an ERP or CRM system — in CSV format. Since the process model is already known (in theory), this doesn’t do process mining to derive the model, but rather aggregates the instance data and creates a dashboard that shows the problem areas relative to the KPIs defined in the process model. Drilling down into a particular problem area shows some aggregate statistics as well as the individual instance data. Hovering over an instance shows its trace overlaid on the defined process model, that is, the path that the instance took as it executed. There’s an interesting feature to show instances that deviate from the process model, typically by skipping or repeating steps where there is no explicit path in the process model to allow that. This is similar in nature to what SAP demonstrated in the previous session, although it uses imported process log data rather than a direct connection to the history data. Given that Signavio can model DMN integrated with BPMN, future versions of this could include intelligence around decisions as well as processes; this is a first version with some limitations.
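
A minimal sketch of the deviation check described here, assuming a simplified data layout rather than Signavio’s actual one: given the transitions allowed by the uploaded process model, each imported instance trace can be scanned for steps that repeat or transitions that skip over part of the model.

```python
# Transitions allowed by the (already known) process model.
allowed = {
    ("Submit", "Check"),
    ("Check", "Approve"),
    ("Check", "Reject"),
    ("Approve", "Notify"),
    ("Reject", "Notify"),
}

def deviations(trace):
    """Return the transitions in an instance trace that the model doesn't allow,
    e.g. a skipped step ('Submit' -> 'Approve') or a repeat ('Check' -> 'Check')."""
    return [
        (a, b)
        for a, b in zip(trace, trace[1:])
        if (a, b) not in allowed
    ]

# Instance traces imported from a CSV activity log (hypothetical).
traces = {
    "instance-17": ["Submit", "Check", "Approve", "Notify"],            # conforms
    "instance-23": ["Submit", "Approve", "Notify"],                     # skipped 'Check'
    "instance-31": ["Submit", "Check", "Check", "Reject", "Notify"],    # repeated 'Check'
}

for instance, trace in traces.items():
    print(instance, deviations(trace) or "conforms")
```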

Leveraging Cognitive Computing and Decision Management to Deliver Actionable Customer Insight – Pramod Sachdeva, Princeton Blue

This demo showed sentiment analysis of unstructured social media data, creating a dashboard of escalations and activities integrated with internal customer data. It uses Watson for much of the analysis and IBM ODM to apply rules for escalation; future enhancements may add IBM BPM to automatically spawn action/escalation processes. It includes a history of sentiment for the individual, tied to service requests that responded to social media activity. There are other social listening and sentiment analysis tools that have been around for a while, but they mostly just drive dashboards and visualizations; the goal here is to apply decisions about escalations, and trigger automated actions based on the results. Interesting work, but this was not a demo up to the standards of bpmNEXT: it was only static screenshots and some additional PowerPoint slides after the Ignite portion, effectively just an extended presentation.

bpmNEXT 2016 demo session: 8020 and SAP

My panel done — which probably set some sort of record for containing exactly 50% of all of the female attendees at the conference — we’re on to the bpmNEXT demo session: each is 5 minutes of Ignite-style presentation, 20 minutes of demo, and 5 minutes for Q&A. For the demos, I’ll just try to capture some of the high points of each, and I highly recommend that you check out the videos of the presentations when they are published after the conference.

Process Design & Automation for a New Economy – Ian Ramsay, 8020 BPM

A simplified, list-based process designer that defines a list of real-world business entities (e.g., application), a list of states unique to each entity (e.g., approved), lists of individuals and groups, and lists of stages and tasks associated with each stage. Each new process has a list of start events that occur when a process is instantiated, one or more tasks in the middle, then a list of end events that define when the process is done. Dragging from the lists of entities, states, groups, individuals, stages and tasks onto the process model creates the underlying flow and events, building a more comprehensive process model behind the scenes. This allows a business specialist to create a process model without understanding process modeling or even simple flowcharting, just by identifying the relationships between the different states of a business entity, the stages of a business process, and the people involved. Removing an entity from a process modifies the model to remove that entity while keeping the model syntactically correct. An interesting alternative to BPMN-style process modeling, from someone who helped create the BPMN standard, where the process model is a byproduct of entity-state modeling.

Process Intelligence for the Digital Age: Combining Intelligent Insights with Process Mining – Tarun Kamal Khiani and Joachim Meyer, SAP, and Bastian Nominacher, Celonis

Combining SAP’s Operational Process Intelligence analytics and dashboard (which was shown at last year’s bpmNEXT as well as in some other briefings that I’ve documented) with Celonis’ process mining. Drilling down on a trouble item from the OPInt dashboard, such as late completion of a specific process type, determines the root cause of the problem; this includes actionable insights, that is, being able to trigger an operational activity to fix the problem. That allows case-by-case problem resolution, but adding in the Celonis HANA-based process mining capability allows past process instance data to be mined and analyzed. Adjusting the view on the mined data allows outliers and exceptions to be identified, transforming the straight-through process model into a full model of the instance data. For root cause analysis, this involved filtering down to only the processes that took longer than a specific number of days to complete, then manually identifying the portions of the model where the lag times of certain activities may be causing the overly long cycle time. Similar to other process mining tools, but nicely integrated with SAP S/4HANA processes via the in-memory HANA data mart: there’s no export or preprocessing of the process instance history log, since process mining is applied directly to the realtime data. This has the potential to be taken further by doing realtime recommendations based on the process mining data and some predictive modeling, although that’s just my opinion.
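
The filtering step is easy to picture outside of HANA; the sketch below uses made-up instance data rather than the Celonis implementation, keeping only instances whose end-to-end cycle time exceeds a threshold and then totalling where the time went per activity.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical instance history rows: (instance_id, activity, start, end).
rows = [
    ("i1", "Create Order", "2016-04-01 09:00", "2016-04-01 09:10"),
    ("i1", "Credit Check", "2016-04-01 09:10", "2016-04-06 15:00"),
    ("i1", "Ship",         "2016-04-06 15:00", "2016-04-06 16:00"),
    ("i2", "Create Order", "2016-04-02 10:00", "2016-04-02 10:05"),
    ("i2", "Credit Check", "2016-04-02 10:05", "2016-04-02 11:00"),
    ("i2", "Ship",         "2016-04-02 11:00", "2016-04-02 12:00"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

# Group rows by instance.
instances = defaultdict(list)
for instance_id, activity, start, end in rows:
    instances[instance_id].append((activity, parse(start), parse(end)))

# Keep only instances whose end-to-end cycle time is at least THRESHOLD_DAYS.
THRESHOLD_DAYS = 3
slow = {
    iid: steps for iid, steps in instances.items()
    if (max(e for _, _, e in steps) - min(s for _, s, _ in steps)).days >= THRESHOLD_DAYS
}

# For the slow instances, total up hours spent per activity to spot the laggards.
lag = defaultdict(float)
for steps in slow.values():
    for activity, start, end in steps:
        lag[activity] += (end - start).total_seconds() / 3600.0

print(sorted(lag.items(), key=lambda kv: -kv[1]))  # Credit Check dominates in this toy data
```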

Good start to the demos with some new ideas on modeling and realtime process mining.

Wearable Workflow by @wareFLO at BPMCM15

Charles Webster gave a breakout session on wearable workflow, looking at some practical examples of combining wearables — smart glasses, watches and even socks — with enterprise processes, allowing people wearing these devices to have device events integrated directly into their work without having to break to consult a computer (or at least a device that self-identifies as a computer). Webster is a doctor, and has a lot of great case studies in healthcare, such as detecting when a healthcare worker hasn’t washed their hands before approaching a patient by instrumenting the soap dispenser and the worker. Interestingly, the technology for the hand hygiene project came from smart dog collars, and we’re now seeing devices such as Intel’s Curie that are making this much more accessible by combining sensors and connectivity as we commercialize the internet of things (IoT).

He was an early adopter of Google Glass, and talked to us about the experience of having a wearable integrated into his lifestyle, such as for voice-controlled email and photography, plus some of his ideas for Google Glass in healthcare workflows, where electronic health records (EHR) and other device information can be integrated with work patterns. Google Glass, however, was not a commercial success, since it was too bulky and geeky-looking, and required frequent recharging if you used it a lot. It needs more miniaturization to be considered a possibility for most people, but that’s a matter of time, and probably a short amount of time, especially if these devices are integrated directly into eyeglass frames, which likely have a lot of unused volume that could be filled with electronic components.

Webster talked about a university curriculum for healthcare technology and IoT that he designed, which would include the following courses:

  • Wearable human factors and workflow ergonomics
  • Data and process mining wearable data, since wearables generate so much more interesting data that needs to be analyzed and correlated
  • Designing and prototyping wearable products

He is working on a prototype for a 3D-printed, Arduino-based wearable interactive robot, MrRIMP, intended to be used by pediatric healthcare professionals to amuse and distract their young patients during medical examinations and procedures. He showed us a video of a demo of him and MrRIMP interacting, and the different versions that he’s created. Great ideas about IoT, wearables and healthcare.

bpmNEXT 2015 Day 2 Demos: Kofax, IBM, Process Analytica

Our first afternoon demo session included two mobile presentations and one on analytics, hitting a couple of the hot buttons of today’s BPM.

Kofax: Integrating Mobile Capture and Mobile Signature for Better Multichannel Customer Engagement Processes

John Reynolds highlighted the difficulty of automating processes that involve customers if you can’t link the real world — in the form of paper documents and signatures — with your digital processes. Kofax started in document scanning, and they’ve expanded their repertoire to include all manner of capture that can make processes more automated and faster to complete. Smartphones become intelligent scanners and signature capture devices, reducing the latency in capturing information from customers. John demonstrated the Kofax Mobile Capture app, both natively and embedded within a custom application, using physical documents and his iPhone: it captures images of a financial statement, a utility bill and a driver’s license, then pre-processes them on the device to remove irregularities that might impact automated character recognition, and thresholds them to binary images to reduce the data transmission size. These can then be directly injected into a customer onboarding process, with both the scanned image and the extracted data included, for automated or manual validation of the documents to continue the process. He showed the back-end tool used to train the recognition engine by manually identifying the data fields on sample images, which can accept a variety of formats for the same type of document, e.g., driver’s licenses from different states. This is done by a business person who understands the documents, not by developers. Similarly, you can also use their Kapow Design Studio to train the system on how to extract information from a website (John was having the demo from hell, and his Kapow license had expired) by marking the information on the screen and walking through the required steps to extract the required data fields. They take on a small part of the process automation, mostly around the capture of information for front-end processes such as customer onboarding, but are seeing many implementations moving toward an “app” model of several smaller applications and processes being used for an end-to-end process, rather than a single monolithic process application.
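
The on-device preprocessing John described, cleaning up the image and thresholding it to binary before transmission, is conceptually similar to standard image operations. Here’s a minimal sketch using OpenCV rather than Kofax’s mobile SDK, with placeholder file names:

```python
import cv2

# Load a captured document photo (placeholder file name).
image = cv2.imread("captured_statement.jpg")

# Convert to grayscale and smooth slightly to reduce sensor noise.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Otsu's method picks the black/white threshold automatically, producing the
# kind of binary image that is both smaller to transmit and friendlier to OCR.
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("captured_statement_binary.png", binary)
print("binary image size:", binary.shape)
```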

IBM: Mobile Case Management and Capture in Insurance

Mike Marin and Jonathan Lee continued on the mobile theme, stressing that mobile is no longer optional for customer-facing and remote worker functionality. They demonstrated IBM Case Manager for an insurance example, showing how mobile functionality can be used to enhance the claims process through mobile capture, content management and case handling. Unlike the Kofax scenario, where the customer uses the mobile app, this is a mobile app for a knowledge worker, the claims adjuster, who may need a richer informational context and more functionality, such as document type classification, than a customer would use. They captured the (printed and filled) claims form and a photo of the vehicle involved in the claim using a smartphone, then showed the more complete case view on a tablet, with more case data and related tasks. The supervisor view shows related cases plus a case visualizer that presents a timeline view of the case. They finished with a look at the new IBM mobile UI design concepts, which presented a more modern mobile interface style, including a high-level card view and smoother transitions between information and functions.

Process Analytica: Process Discovery and Analytics in Healthcare Systems

Robert Shapiro shifted the topic to process mining/discovery and analytics, specifically in healthcare applications. He started with a view of process mining, simulation and other analytical techniques, and how to integrate with different types of healthcare systems via their history logs. Looking at their existing processes based on the history data, missed KPIs and root causes can be identified, and potential solutions derived and compared in a systematic and analytic manner. Using their Optima process analytics workbench, he demonstrated importing and analyzing an event log to create a BPMN model based on the history of events: this is a complete model that includes interrupting and non-interrupting boundary events, and split and merge gateways based on the patterns of events, with probabilistic weights and/or decision logic calculated for the splitting gateways. Keeping in mind that the log events come from systems that have no explicit process model, the automatic derivation of the boundary events and gateways and their characteristics provides a significant step in process improvement efforts, and can be further analyzed using their simulation capabilities. Most of the advanced analysis and model derivation (e.g., for gateway and boundary conditions) is dependent on capturing data value changes in the event logs, not just activity transitions; this is an important distinction since many event logs don’t capture that information.
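
One piece of that derivation, assigning probabilistic weights to a splitting gateway, can be sketched generically (this is an illustration of the idea, not Optima’s algorithm): for each activity, count which activity directly follows it across the traces and turn the counts into branch probabilities.

```python
from collections import Counter, defaultdict

# Hypothetical traces extracted from a healthcare system's history log.
traces = [
    ["Triage", "Lab Test", "Discharge"],
    ["Triage", "Lab Test", "Admit"],
    ["Triage", "Discharge"],
    ["Triage", "Lab Test", "Admit"],
]

# Count which activity directly follows each activity.
successors = defaultdict(Counter)
for trace in traces:
    for current, nxt in zip(trace, trace[1:]):
        successors[current][nxt] += 1

# Convert counts to branch probabilities: an activity with more than one
# successor corresponds to a gateway whose outgoing paths get these weights.
for activity, counts in successors.items():
    total = sum(counts.values())
    branches = {nxt: round(count / total, 2) for nxt, count in counts.items()}
    if len(branches) > 1:
        print(f"gateway after {activity}: {branches}")
```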

bpmNEXT 2015 Day 1: The Business of BPM

I can’t believe it’s already the third year of bpmNEXT, my favorite BPM conference, organized by Nathaniel Palmer and Bruce Silver. It’s a place to meet up with other BPM industry experts and hear about some of the new things that are coming up in the industry: a meeting of peers, including CEOs and CTOs from smaller BPM companies, BPM architects and product management experts from larger vendors, industry analysts and more. The goal is a non-partisan friendly meeting of the minds rather than a competitive arena, and it’s great to see a lot of familiar faces here, plus some new faces of people who I only know online or through phone calls.

Hanging with Denis and Jakob

We’re at the lovely Canary Hotel in Santa Barbara, and will have the chance for a wine tasting with some of the local wineries tonight: Slone Vineyards, Happy Canyon, Grassini, Au Bon Climat, and Margerum. But first, we have some work to do.

This year, we started with an optional half day program on the business of BPM, including keynotes and a panel, before kicking off the usual DEMO-style presentations. Because of the large volume of great content, I’ll just publish summaries at the break points; all of the presentations will be available online after the conference (as they were in 2014 and 2013) if you want to learn more.

BPM 2020: Outlook for the Next Five Years

Bruce Silver kicked off the conference and summarized the themes and presenters here at bpmNEXT:

  • Breaking old barriers: between BPM and (business and enterprise) architecture, which will be covered in presentations by Comindware and Trisotech; between process modeling and decision modeling, with Sapiens and Signavio presentations; and between BPM and case management, with Camunda, Safira, Cryo, Kofax and IBM presentations.
  • Expanding BPM horizons: the internet of things, with presentations from SAP and W4; cognitive computing and expert systems, with BP3, Fujitsu, IBM and Living Systems; and resourcing optimization with process mining, from Process Analytica.
  • Reaffirming core values: business empowerment, covered by Omny.link and Oracle; and embracing continual change, with Bonitasoft.

Hearing Bruce talk about the future of BPM in the context of the presentations to be given here over the next couple of days makes you realize just how much thought goes into the bpmNEXT program, and into selecting presenters that provide maximum value. If this fascinates you, you should consider being here next year, as an attendee or a presenter.

Nathaniel Palmer then gave us his view of what BPM will look like in five years: data-driven, goal-oriented, adaptive and with intelligent automation, so that processes understand, evolve and self-optimize to meet the work context and requirements. He sees the key challenges as the integration of rules, relationships and robots into processes and operations, including breaking down the artificial barrier that exists between the modeling and automation of rules and processes. Today’s consumers — and business people — expect to interact with services through their mobile devices, and are starting to include the quality of mobile services as a primary decision criterion. Although we are primarily doing that via our phones and tablets now, there are also devices such as Amazon Echo, which lowers the threshold to interaction (and therefore to purchasing) by being a dedicated, voice-controlled gateway to Amazon; Jibo, a home-automation “robot” that aims to become a personal assistant for your home, interfacing with rather than automating tasks; and wearables that can notify and accept instructions.

Today, most BPM is deployed as a three-tier, MVC-type architecture that presents tasks via a worklist/inbox metaphor; Nathaniel thinks that we need to re-envision this as a four-tier architecture: a client tier native to each platform, a delivery tier that optimizes delivery for the platform, an aggregation tier that integrates services and data, and a services tier that provides the services (which is, arguably, the same as the bottom two tiers of a standard three-tier architecture). Tasks are machine-discoverable for automated integration and actions, and designed by context rather than procedure. Key enablers for this include standards such as BPAF, and techniques for automated analysis including process mining.

Reinventing BPM for the Age of the Customer

Clay Richardson of Forrester — marking what I think is the first participation by a large analyst firm at bpmNEXT — presented some of Forrester’s research on how organizations are retooling to improve customer experience. Although still critical for automation and information management, BPM has evolved to support customer engagement, especially via mobile applications and innovation. 42% of the customers they surveyed consider it either critical or high priority to reengineer business processes for mobile, meaning that this is no longer about just putting a mobile interface on an existing product, but about reworking these processes to leverage things such as events generated by sensors and devices, providing a much richer informational context for processes. Digital transformation provides new opportunities for using BPM to drive rapid customer-centric innovation: digitizing the customer lifecycle and end-to-end experiences, as well as quickly integrating services behind the scenes. Many companies are now using customer journey maps to connect the dots between process changes and customer experience, using design thinking paradigms.

We saw Forrester’s BPM TechRadar — similar to Gartner’s Hype Cycle — showing the key technologies related to BPM, and where they are on their maturity curves: BPM suites, business rules, process modeling and document capture are all at or past their peak, whereas predictive analytics, social collaboration, low-code platforms and dynamic case management are still climbing. Forrester sees BPM platforms as moving towards more customer-centricity, being used to create customer-facing applications in addition to automated integration and internal human-centric workflow. There’s also an interesting focus on the low-code application development platform market, as some BPM vendors reposition their products as process-centric application development — targeting both traditional technical developers and less technical citizen developers — rather than as BPMS.

We’re off on a break now, but will be back to finish the Business of BPM program with a panel and a keynote before we start on the demo program this afternoon.

Process Intelligence at KofaxTransform

It’s after lunch on the second (and last) day of Kofax Transform, and the bar for keeping my attention in a session has gone up somewhat. To that end, I’m in a session with Scott Opitz and Rich Rabin from the Kofax Altosoft division, but not sure it’s going to meet that bar, since Opitz started out by stating that what the TotalAgility (KTA) sessions call process is much more complex than what they call process, and I’m a bit more on KTA’s side of this definition.

Altosoft process intelligence is really about the simpler milestone-based process monitoring of operational intelligence, with the processes being executed across multiple systems, more like SAP Operational Process Intelligence (based on HANA) or IBM Business Monitor; you rarely have all of your process milestones in a single system, and even if you do, that system may not have adequate operational intelligence capabilities. Instead, operational intelligence systems pick up the breadcrumbs left by the processes — such as events, database records or log files — and provide an analytics layer, usually after importing that data into a dedicated analytics datamart.

There are really two main things to measure with process intelligence: performance and quality/compliance. To get there, however, you need to know what the process is supposed to look like in order to measure patterns of behavior. Altosoft’s process intelligence does what they call “swimlane analysis” — looking at which tasks are done in which order, a form of process mining discovery algorithm, since there is no a priori process model — to identify operational patterns and derive a process model from runtime data, showing the most common/expected paths as well as the outliers. This isn’t just process mining as an analysis tool: it then shows the live process monitoring data points against those models, and provides some good interactive filtering capabilities, allowing you to find missing steps that may indicate that the task wasn’t performed or (more likely for steps with manual logging) that the task was not documented.
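
The “which tasks are done in which order” analysis can be pictured as simple variant counting over those breadcrumbs. This is a hedged sketch with an assumed event layout, not Altosoft’s algorithm:

```python
from collections import Counter, defaultdict

# Hypothetical breadcrumbs from multiple systems: (case_id, timestamp, task).
events = [
    ("A1", "2015-03-01T09:00", "Receive"),
    ("A1", "2015-03-01T09:30", "Validate"),
    ("A1", "2015-03-01T10:00", "Pay"),
    ("A2", "2015-03-01T09:05", "Receive"),
    ("A2", "2015-03-01T09:40", "Pay"),          # 'Validate' missing or undocumented
    ("A3", "2015-03-02T11:00", "Receive"),
    ("A3", "2015-03-02T11:20", "Validate"),
    ("A3", "2015-03-02T11:45", "Pay"),
]

# Reassemble each case's task sequence in time order.
cases = defaultdict(list)
for case_id, timestamp, task in sorted(events, key=lambda e: (e[0], e[1])):
    cases[case_id].append(task)

# Count the distinct orderings: the most common ones are the expected paths,
# the rare ones are the outliers (possibly missing or undocumented steps).
variants = Counter(tuple(seq) for seq in cases.values())
for variant, count in variants.most_common():
    print(count, "x", " -> ".join(variant))
```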

Since the Insight platform is a complete BI environment, this information can also be combined with more traditional BI analytics and dashboards, providing real-time alerts as well as historical analysis. They also have ways to use a predefined process model and measure against that; this then becomes more of a conformance analysis to see how closely the actual runtime data matches the a priori model.

TotalAgility Product Update At KofaxTransform

In a breakout session at Kofax Transform, Dermot McCauley gave us an update on the TotalAgility product vision and strategy. He described five vital communities impacted by their product innovation: information all-stars who ensure that the right information is seen by the right people at the right time, performance improvers focused on operational excellence, customer obsessives who focus on customer satisfaction, visionary leaders who challenge the status quo, and change agents using technology thought-leadership to drive business value. I think that this is a great way to think about product vision, and Dermot stated that he spends his time thinking about how to serve these five communities and help them to achieve their goals.

TotalAgility is positioned to be the link between systems of engagement and systems of record, making that first mile of customer engagement faster, simpler, more efficient, and more customer-friendly. It includes four key components: multichannel capture and output, adaptive process management, embedded actionable analytics, and collaboration. Note that some of this represents product vision rather than released product, but it gives you an idea of where they are and what they’re planning.

Multichannel capture and output includes scanning in all forms, plus capture from electronic formats including documents, forms and even social media, with a goal of being able to ingest information of any type and in any format. On the processing and output side, their recent acquisitions fill in the gaps with e-signature and signature verification, and outbound correspondence management.

Adaptive process management includes pre-defined routine workflows and ad hoc collaboration, plus goal-based and analytics-driven adaptive processes. These can be automated intelligent processes, or richer context used when presenting tasks to a knowledge worker.

Embedded actionable analytics are focused on the process at hand, driving next-best-action decisions or recommendations, and detecting and predicting patterns within processes.

Collaboration includes identifying suitable and available collaborators, and supporting unanticipated participants.

The goal is to provide a platform for building smart process applications (SPAs), both for Kofax with their Mortgage Agility and other SPAs, and for partners to create their own vertical solutions. McCauley walked through how Kofax AP Agility uses the TotalAgility platform for AP processing with ERP integration, procurement, invoice capture and actionable analytics; then Mortgage Agility, which brings in newer capabilities of the platform such as e-signature and customer correspondence management, with a focus on customer engagement as well as internal efficiencies.

He walked through deployment options of on-premise (including multi-tenancy on-premise for a BPO or shared service center) and Microsoft Azure public cloud (multi-tenant or own instance), and touched on the integration into and usage of Kapow and e-signatures in the TotalAgility platform. They’re also working on bringing more of the analytics into TotalAgility to allow for predictions, pattern detection, recommendations and other analytics-based processing.

Going forward, they have four main innovation themes:

  • Platform optimization for better performance
  • Portfolio product integrations for a harmonized design time and runtime
  • Pervasive mobility
  • Context-aware analytics

He showed some specific examples that could be developed in the future as part of the core platform, including real-time information extraction during document capture on a mobile device, and process improvement analytics for lightweight process mining; the audience favorite (from a show of hands) was the real-time extraction during mobile capture.