Integrating your enterprise content with cloud business applications? I wrote a paper on that!

Just because there’s a land rush towards SaaS business applications like Salesforce, it doesn’t mean that all of your content and data are going to be housed on that platform. In reality, you have a combination of cloud applications, cloud content that may apply across several applications, and on-premise content; users end up searching in multiple places for information in order to complete a single transaction.

In this paper, sponsored by Intellective (whose product bridges enterprise content and data with SaaS business applications), I wrote about some of the architecture and design issues that you need to consider when you’re linking these systems together. Here’s the introduction:

Software-as-a-service (SaaS) solutions provide significant utility and value for standard business applications, including customer relationship management (CRM), enterprise resource planning (ERP), supply chain management (SCM), human resources (HR), accounting, insurance claims management, and email. These “systems of engagement” provide a modern and agile user experience that guides workers through actions and enables collaboration. However, they rarely replace the core “systems of record”, and don’t provide the range of content services required by most organizations.

This creates an issue when, for example, a customer service worker’s primary environment is Salesforce CRM, but for every Salesforce activity they may also need to access multiple systems of record to update customer files, view regulatory documentation or initiate line-of-business (LOB) processes not supported in Salesforce. The worker spends too much time looking for information, risks missing relevant content in their searches, and may forget to update the same information in multiple systems.

The solution is to integrate enterprise content from the systems of record – data, process and documents – directly with the primary user-facing system of engagement, such that the worker sees a single integrated view of everything required to complete the task at hand. The worker completes their work more efficiently and accurately because they’re not wasting time searching for information; data is automatically updated between systems, reducing data entry effort and errors.

Head on over to get the full paper (registration required).

Summer BPM reading, with dashes of AI, RPA, low-code and digital transformation

Summer always sees a bit of a slowdown in my billable work, which gives me an opportunity to catch up on reading and research in BPM and related fields. I’m often asked which blogs and other websites I read regularly to keep on top of trends and participate in discussions; here are some general guidelines for getting through a lot of material in a short time.

First, to effectively surf the tsunami of information, I use two primary tools:

  • An RSS reader (Feedly) with a hand-curated list of related sites. In general, if a site doesn’t have an RSS feed, then I’m probably not reading it regularly. Furthermore, if it doesn’t have a full feed – that is, one that shows the entire text of the article rather than a summary in the feed reader – it drops to a secondary list that I only read occasionally (or never). This lets me browse quickly through articles directly in Feedly and see which ones have something interesting to read or share, without having to open the links directly.
  • Twitter, with a hand-curated list of digital transformation-related Twitter users, both individuals and companies. This is a great way to find new sources of information, which I can then add to Feedly for ongoing consumption. I usually use the Tweetdeck interface to keep an eye on my list plus notifications, but rarely review my full unfiltered Twitter feed. That Twitter list is also included in the content of my Paper.li “Digital Transformation Daily”, and I’ve just restarted tweeting the daily link.

Second, the content needs to be good to stay on my lists. I curate both of these lists manually, constantly adding and culling the contents to improve the quality of my reading material. If your blog posts are mostly promotional rather than informative, your feed gets removed from Feedly; if you tweet too much about politics or your dog, you’ll get bumped off the DX list, although probably not unfollowed.

Third, I like to share interesting things on Twitter, and use Buffer to queue these up during my morning reading so that they’re spread out over the course of the day rather than all in a clump. To save things for a more detailed review later as part of ongoing research, I use Pocket to manually bookmark items, which also syncs to my mobile devices for offline reading, and an IFTTT script to save all links that I tweet into a Google sheet.

You can take a look at what I share frequently through Twitter to get an idea of the sources that I think have value; in general, I directly @mention the source in the tweet to help promote their content. Tweeting a link to an article – and especially its inclusion in the auto-curated Paper.li Digital Transformation Daily – is not an endorsement: I’ll add my own opinion in the tweet about what I found interesting in the article.

Time to kick back, enjoy the nice weather, and read a good blog!

AlfrescoDay 2018: digital business platform and a whole lot of AWS

I attended Alfresco’s analyst day and a customer day in New York in late March, and due to some travel and project work, I’m just finding time to publish my notes now. Usually I do that while I’m at the conference, but part of the first day was under NDA, so I needed to think about how to combine the two days of information.

The typical Alfresco customer is still very content-centric, in spite of the robust Alfresco Process Services (formerly Activiti) offering that is part of their platform: many of the key success stories presented at the conference were based on content implementations and migrations from ECM competitors such as Documentum. In a way, this is reminiscent of the FileNet conferences of 20 years ago, when I was talking about process but almost all of the customers were only interested in content management. What moves this into a very modern discussion, however, is the focus on Alfresco’s cloud offerings, especially on Amazon AWS.

First, though, we had a fascinating keynote by Sangeet Paul Choudary — and received a copy of his book Platform Scale: How an emerging business model helps startups build large empires with minimum investment — on how business models are shifting to platforms, and how this is disrupting many traditional businesses. He explained how supply-side economies of scale, machine learning and network effects are allowing online platforms like Amazon to impact real-world industries such as logistics. Traditional businesses in telecom, financial services, healthcare and many other verticals are discovering that without a customer-centric platform approach, as opposed to a product approach, they can’t compete with the newer entrants into the market that build platforms, gather customer data and make service-based partnerships through open innovation. Open business models are particularly important, as is striking the right balance between an open ecosystem and maintaining control over the platform through key control points. He finished up with a digital transformation roadmap: gaining efficiencies through digitization; then using data collected in the first stage while integrating flows across the enterprise to create one view of the ecosystem; and finally externalizing and harnessing value flows in the ecosystem. This last stage, externalization, is particularly critical, since opening the wrong control points can kill your business or stifle open growth.

This was a perfect lead-in to Chris Wiborg’s (Alfresco’s VP of product marketing) presentation on Alfresco’s partnership with Amazon and the tight integration of many AWS services into the Alfresco platform: leveraging Amazon’s open platform to build Alfresco’s platform. This partnership has given this conference in particular a strong focus on cloud content management, and we are hearing more about their digital business platform that is made up of content, process and governance services. Wiborg started off talking about the journey from (content) digitization to digital business (process and content) to digital transformation (radically improving performance or reach), and how it’s not that easy to do, particularly with existing systems that favor on-premise monolithic approaches. A (micro-)service approach on cloud platforms changes the game, allowing you to build and modify faster, and deploy quickly on a secure elastic infrastructure. This is what Alfresco is now offering, through the combination of open source software, integration of AWS services to expand their portfolio of capabilities, and an automated DevOps lifecycle.

This brings a focus back to process, since their digital business platform is often sold process-first to enable cross-departmental flows. In many cases, process and content are managed by different groups within large companies, and digital transformation needs to cut across both islands of functionality and islands of technology.

They are promoting the idea that differentiation is built, not bought, with the pendulum swinging back from buy toward build for the portions of your IT that contribute to your competitive differentiation. In today’s world, for many businesses, that means more than just customer-facing systems; it digs deep into operational systems as well. For businesses that have a large digital footprint, I agree with this, but have to caution that this mindset makes it much too easy to go down the rabbit hole of building bespoke systems — or having someone build them for you — for standard, non-differentiating operations such as payroll.

Alfresco has gone all-in with AWS. It’s not just a matter of shoving a monolithic code base into a Docker container and running it on EC2, which is how many vendors claim AWS support: Alfresco has a much more integrated microservices approach that provides the opportunity to use many different AWS services as part of an Alfresco implementation in the AWS Cloud. This allows you to build more innovative solutions faster, but can also greatly reduce your infrastructure costs by moving content repositories to the cloud. They have split out capabilities such as storage (Amazon S3, and soon Glacier), database (RDS/Aurora), notification (SNS), security, networking, IoT (via Alexa) and AI (Rekognition). Basically, a big part of their move to microservices (and extending capabilities) is externalizing functions to take advantage of Amazon-offered services. They’re also not tied to their own content services in the cloud, but can provide direct connections to other cloud content services, including Box, SharePoint and Google Drive.

We heard from Tarik Makota, an AWS solution architect from Amazon, about how Amazon doesn’t really talk about private versus public cloud for enterprise clients. They can provide the same level of security as any managed hosting company, including private connections between their data centers and your on-premise systems. Unlike other managed hosting companies, however, Amazon is really good at near-instantaneous elasticity — both expanding and contracting — and provides a host of other services within that environment that are directly consumed by Alfresco and your applications, such as Aurora on Amazon RDS, a variety of AI services, and serverless Step Functions. Alfresco Content Services and Process Services are both available as AWS QuickStarts, allowing for full production deployment in a highly-available, highly-redundant environment in the geographic region of your choice in about 45 minutes.

Quite a bit of food for thought over the two days, including their insights into common use cases for Alfresco and AI in content recognition and classification, and some of their development best practices for ensuring reusability across process and content applications built on a flexible modern architecture. Although Alfresco’s view of process is still quite content-centric (naturally), I’m interested to see where they take the entire digital business platform in the future.

It was also great to see, a month later, that Bernadette Nixon, who we met as Chief Revenue Officer at the event, has moved up to the CEO position. Congrats!

All of the bpmNEXT video coverage

I was scrolling through some of my unread RSS feeds and saw Kris Verlaenen’s posts about last month’s bpmNEXT conference: like me, he was live-blogging the event. However, he also went back and added the video of each presentation to his posts – nice touch!

His posts:

You can also go to the bpmNEXT YouTube channel and see all of the videos including those from previous years, and read my coverage of the event here.

Upcoming webinar on digital transformation in financial services featuring @BPMdotcom and @ABBYY_USA – and my white paper

Something strange about receiving an email about an upcoming webinar, featuring two people who I know well…

…then scrolling down to see that ABBYY is featuring the paper that I wrote for them as follow-on bonus material!

Nathaniel Palmer and Carl Hillier are both intelligent speakers with long histories in the industry; tune in to hear them talk about the role that content capture and content analytics play in digital transformation.

Low-Code webinar with @TIBCO – new ways for business and IT to develop and innovate together

I’m back at the webinars this Thursday (April 26), with the first of two parts in a series on low-code and how it enables business and IT to work better together. Together with Roger King and Nicolas Marzin of TIBCO, we’re doing another one of our free-ranging “fireside chat” discussions, such as the one we did on case management last November. This time, we dig into more of the technical and governance issues of how low-code application development platforms are used across organizations by both business developers and IT.

You can sign up for the webinar here.

I’m also putting the finishing touches on a white paper that goes into more of these concepts in depth. Sign up for the webinar and you’ll get a link to the paper afterwards.

bpmNEXT 2018: Last session with a Red Hat demo, Serco presentation and DMN TCK review

We’re on the final session of bpmNEXT 2018 — it’s been an amazing three days with great demos and wonderful conversations.

Exploiting Cloud Infrastructure for Efficient Business Process Execution, Red Hat

Kris Verlaenen, project lead for jBPM at Red Hat, presented on cloud BPM infrastructure, specifically for execution and monitoring. Cloud makes BPM lightweight, scalable, embeddable and able to take advantage of the larger cloud app ecosystem. They are introducing some new cloud infrastructure, including a controller for managing server deployments, a smart router for delegating and aggregating requests from applications to servers, and monitoring that aggregates process statistics across servers and containers. The demo showed using Red Hat’s OpenShift container application platform (actually MiniShift running on his laptop) to create a new environment and deploy an IT hardware ordering BPM application. He walked through using the application to create a new order and see the milestone-based monitoring of the order, then the hardware provider’s view of their steps in the process to provide information and advance the process to the next stage. The process engine and monitoring engine can be deployed in different containers on different hardware, in any combination of cloud providers and on-premise infrastructure. Applications and servers can be bundled into a single immutable image for easy provisioning — more of a microservices style — or can be deployed independently. Multiple versions of the same application can be deployed, allowing current instances to play out in the original version while new instances use the most recent version, or other strategies that would allow new instances of any version to be created, while monitoring can aggregate instance data from all versions in all containers.
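
To make the deployment model a bit more concrete: once a BPM application is deployed in a container like this, external applications typically talk to it over REST. Here’s a minimal Python sketch of starting a process instance, assuming a jBPM 7-style KIE Server API; the server URL, credentials, container ID, process ID and variables are all placeholders for illustration.

```python
import requests

# Placeholder values: in an OpenShift deployment these would come from
# the exposed route and the KJAR container deployed to the KIE Server.
KIE_SERVER = "http://localhost:8080/kie-server/services/rest/server"
CONTAINER_ID = "it-orders_1.0.0"        # hypothetical container ID
PROCESS_ID = "itorders.order-hardware"  # hypothetical process ID

# Start a new process instance; a jBPM 7-style KIE Server returns the
# new process instance ID as the response body.
resp = requests.post(
    f"{KIE_SERVER}/containers/{CONTAINER_ID}/processes/{PROCESS_ID}/instances",
    json={"requestor": "krisv", "item": "laptop"},  # process variables
    auth=("kieserver", "kieserver1!"),              # placeholder credentials
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
print("Started process instance", resp.json())
```

In the architecture described above, the smart router would sit in front of calls like this one, delegating the request to whichever server container hosts the right version of the application.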

Kris is also live-blogging the conference; check out his posts. He has gone back and included the video of each presentation as they are released (something that I didn’t do, for page load performance reasons), as well as providing his commentary on each presentation.

Dynamic Work Assignment, Serco

Lloyd Dugan of Serco has the unenviable position of being the last presenter of the conference, although he gave a presentation on a dynamic work assignment implementation rather than an actual demo (with a quick view of the simple process model in the Trisotech animator near the end, plus an animation of the work assignment in action). His company is a call center business process outsourcer, where knowledge workers use a case management application implemented in BPMN, driven by events such as inbound calls and documents, as well as timers. Real-time work prioritization and assignment is necessary because of SLAs around inbound calls, and the task management model is moving from work being selected (and potentially cherry-picked) by workers to work being pushed to them. Tasks are scored and assigned using decision models that consider task type and SLAs, plus worker eligibility based on each individual’s skills and training. Although work assignment products exist, this implementation is specifically for the complex rules of US Affordable Care Act administration, which requires a combination of decision tables, database table-driven rules, and lower-level coding to provide the right combination of flexibility and performance.
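
The score-and-push pattern he described reduces to a few steps: filter the task pool to what a worker is eligible for, score each task on type and SLA urgency, and push the top-scoring task. Here’s a minimal Python sketch of that idea; the weights, field names and scoring formula are all invented for illustration, not taken from Serco’s implementation (which, as noted, lives in decision tables and rules).

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Task:
    task_type: str          # e.g., "inbound_call", "document_review"
    sla_deadline: datetime  # when the SLA would be breached
    required_skills: set    # skills/training needed to work this task

@dataclass
class Worker:
    skills: set

# Hypothetical task-type weights; a real implementation would keep
# these in decision tables so the business can tune them.
TYPE_WEIGHT = {"inbound_call": 100, "document_review": 10}

def score(task: Task, now: datetime) -> float:
    # Urgency grows as the SLA deadline approaches.
    minutes_left = max((task.sla_deadline - now).total_seconds() / 60, 1.0)
    return TYPE_WEIGHT.get(task.task_type, 1) / minutes_left

def next_task(worker: Worker, task_pool: list, now: datetime):
    # Push assignment: only tasks the worker is eligible for are
    # considered, and the highest-scoring one is assigned -- no
    # cherry-picking from a shared queue.
    eligible = [t for t in task_pool if t.required_skills <= worker.skills]
    return max(eligible, key=lambda t: score(t, now), default=None)
```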

DMN TCK (Technical Compatibility Kit) Working Group

Keith Swenson of Fujitsu (but presenting here in his role with the DMN standards group) started the idea of a set of standardized DMN technical compatibility tests based on conversations at bpmNEXT in 2016, and he presented today on where they’re at with the TCK. Basically, the TCK provides a way for DMN vendors to demonstrate their compliance with the standard by providing a set of DMN models, input data, and expected results, testing decision tables, boxed expressions and FEEL. Vendors who can demonstrate that they pass all of the TCK tests are listed on a GitHub site along with information about individual test results, providing a way for DMN customers to assess the compliance level of vendors. Keith wrote an update on this last September that provides a good summary up to that point, and in today’s presentation he walked through some of the additional things that they’ve done, including identifying sections of the DMN specification that require clarification or additions due to ambiguity that can lead to different implementations. DMN 1.2 is coming out this year, which will require a new set of tests specifically for that version while maintaining the previous version’s tests; they are also trying to improve testing of error cases and introduce more real-world decision models. If you create and use DMN models, make a DMN-compliant decision management product, or are otherwise interested in the DMN TCK, you can find out here how to get involved in the working group.
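
Conceptually, each TCK test is just a triple: a DMN model, input data, and the expected results, run against a vendor’s engine. The actual TCK distributes these as DMN and XML files with its own runner; the Python sketch below is only meant to show the shape of the idea, with the engine invocation stubbed out since every vendor’s API differs (the sample test is modeled loosely on the TCK’s simplest published case).

```python
# Each test case pairs a model and inputs with the results that the
# model must produce; a vendor passes by matching on every case.
TEST_CASES = [
    # (model file, decision name, inputs, expected result)
    ("0001-input-data-string.dmn", "Greeting Message",
     {"Full Name": "John Doe"}, "Hello John Doe"),
]

def evaluate(model_file, decision_name, inputs):
    """Stub: invoke the vendor's DMN engine here. Every engine has its
    own invocation API, which is exactly why the TCK ships models and
    data rather than code."""
    raise NotImplementedError

def run_tck():
    passed = 0
    for model, decision, inputs, expected in TEST_CASES:
        try:
            ok = evaluate(model, decision, inputs) == expected
        except Exception:
            ok = False  # errors count as failures for that case
        passed += ok
    print(f"{passed}/{len(TEST_CASES)} tests passed")
```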

That’s it for bpmNEXT 2018. There will be voting for the best in show and some wrapup after lunch, but we’re pretty much done for this year. Another amazing year that makes me proud to be a part of this community.

bpmNEXT 2018: Bonitasoft, KnowProcess

We’re in the home stretch here at bpmNEXT 2018, day 3 has only a couple of shorter demo sessions and a few related talks before we break early to head home.

When Artificial Intelligence meets Process-Based Applications, Bonitasoft

Nicolas Chabanoles and Nathalie Cotte from Bonitasoft presented on their integration of AI with process applications, specifically predictive analytics for automating decisions and making recommendations. They use an extension of process mining to examine case data and activity times in order to predict, for example, whether a specific case will finish on time; in the future, they hope to be able to accurately predict the end time for individual cases for better feedback to internal users and customers. The demo was a loan origination application built on Bonita BPM, which was fairly standard, with the process mining and machine learning coming in through how the processes are monitored. Log data is polled from the BPM system into an Elasticsearch database, then machine learning is applied to the instance data; configuration of the machine learning is based (at this point) only on the specification of an expected completion time for each instance type to build the prediction model. At that point, predictions can be made for in-flight instances as to whether each one will complete on time, including the probability of on-time completion for those predicted to be late — for example, when key documents are missing, or the loan officer is not responding quickly enough to review requests. The loan officer is shown which tasks are likely to be causing the late prediction, and completing those tasks will change the prediction for that case. Priority for cases can be set dynamically based on the prediction, so that cases more likely to be late are set to higher priority in order to be worked earlier. Future plans are to include more business data and human resource data, which could be used to explicitly assign late cases to individual users. The use of process mining algorithms, rather than simpler prediction techniques, will allow suggestions on state transitions (i.e., which path to take) in addition to just setting instance priority.
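
Stripped of the process mining specifics, the prediction step described here is a standard supervised-learning setup: label completed instances as on-time or late, extract features from the logs, train a classifier, and score in-flight instances. Here’s a toy scikit-learn sketch of that shape; the features, data and model choice are all invented for illustration and don’t reflect Bonitasoft’s actual implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented per-instance features: elapsed minutes so far, number of
# open tasks, count of missing-document events.
X_train = np.array([[120, 2, 0], [400, 5, 3], [90, 1, 0], [350, 4, 2]])
y_train = np.array([1, 0, 1, 0])  # 1 = completed on time, 0 = late

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score an in-flight case: probability that it finishes on time.
in_flight = np.array([[300, 4, 1]])
p_on_time = model.predict_proba(in_flight)[0][1]

# Dynamic prioritization, as in the demo: likely-late cases get
# bumped up so they're worked earlier.
priority = "high" if p_on_time < 0.5 else "normal"
print(f"P(on time) = {p_on_time:.0%}, priority = {priority}")
```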

Understanding Your Models and What They Are Trying To Tell You, KnowProcess

Tim Stephenson of KnowProcess spoke about models and standards, particularly applied to their main use case of marketing automation and customer onboarding. Their ModelMinder application ingests BPMN, CMMN and DMN models, and can be used to search the models for activities, resources and other model components, as well as identify and understand extensions such as calling a REST service from a BPMN service task. The demo showed a KnowProcess repository initially through the search interface; searching for “loan” or “send memo” returned links to models with those terms; the model (process, case or decision) can be displayed directly in their viewer with the location of the search term highlighted. The repository can be stored as files or an engine can be directly indexed. He also showed an interface to Slack that uses a model-minder bot that can handle natural language requests for certain model types and content such as which resources do the work as specified in the models or those that call a specific subprocess, providing a link directly back to the models in the KnowProcess repository. Finishing up the demo, he showed how the model search and reuse is attached to a CRM application, so that a marketing person sees the models as functions that can be executed directly within their environment.

Instead of a third demo, we had a more free-ranging discussion about a standardized modeling language for RPA, continuing a conversation that started during one of yesterday’s Q&As; it was led by Max Young from Capital BPM, with contributions from a number of others in the audience (including me). It’s a good starting point, but there’s obviously still a lot of work to do in this direction, starting with getting some of the major RPA vendors on board with standardization efforts. The emerging ideas seem to center around defining a grammar for the activities that occur in RPA (e.g., extract data from an Excel file, write data to a certain location in an application screen), then an event and flow language to piece together those primitives that might look something like BPMN or CMMN. I see this as similar to the issue of defining page flows, which are often done as a black-box function performed within a human activity in a BPMN flow: exposing and standardizing that black box is what we’re talking about. This discussion is a prime example of what makes bpmNEXT great, and keeps me coming back year after year.
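
To make the grammar idea concrete, here’s a purely speculative Python sketch of what such an interchange format might look like: a list of typed primitives with parameters, plus a trivial interpreter that resolves data passed between steps. None of this reflects any actual vendor format or standards work; the primitive names and ${var} substitution syntax are invented.

```python
# A speculative RPA "script" as data: each step names a primitive
# from the shared grammar plus its parameters.
script = [
    {"op": "excel.read_cell", "file": "claims.xlsx", "cell": "B2",
     "store_as": "claim_id"},
    {"op": "app.set_field", "screen": "ClaimEntry", "field": "ID",
     "value": "${claim_id}"},
    {"op": "app.click", "screen": "ClaimEntry", "button": "Submit"},
]

def substitute(value, context):
    # Resolve ${var} references against previously stored results.
    if isinstance(value, str) and value.startswith("${") and value.endswith("}"):
        return context[value[2:-1]]
    return value

def run(script, handlers):
    """Interpret the script; 'handlers' maps each primitive name to a
    function -- that binding is where vendor implementations would
    differ, while the script itself stays portable."""
    context = {}
    for step in script:
        params = {k: substitute(v, context)
                  for k, v in step.items() if k not in ("op", "store_as")}
        result = handlers[step["op"]](**params)
        if "store_as" in step:
            context[step["store_as"]] = result
```

The event and flow language on top of this (branching, events, exceptions) is where something BPMN- or CMMN-like would come in; a flat list like this is only the degenerate straight-through case.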

bpmNEXT 2018: Intelligence and robots with ITESOFT, K2, BeeckerCo

We’re finishing up day 2 of bpmNEXT with a last section of demos.

Robotics, Customer Interactions and BPM, ITESOFT

Francois Bonnet from ITESOFT presented on customer interactions and automation, and the use of BPMN-driven robots to guide customer experience. In a first for bpmNEXT, the demo included an actual physical human-shaped robot (which was 3D-printed from an open source project) that can do voice recognition, text to speech, video capture, movement tracking and facial recognition. The robot’s actions were driven by a BPMN process model, with activities such as searching for humans, recognizing faces, speaking phrases, processing input and making branching decisions. The process model was shown simultaneously, with the execution path updated in real time as it moved through the process, with robot actions shown as service activities. The scenario was the robot interacting with a customer in a mobile phone shop, recognizing the customer or training a new facial recognition, asking what service is required, then stepping through acquiring a new phone and plan. He walked through how the BPMN model was used, with both synchronous and asynchronous services for controlling the robot and invoking functions such as classifier training, and human activities for interacting with the customer. Interesting use of BPMN as a driver for real robot actions, showing integration of recognition, RPA, AI, image capture and business services such as customer enrolment and customer ID validation.

The Future of Voice in Business Process Automation, K2

Brandon Brown from K2 looked at a more focused use case for voice recognition, and some approaches to voice-first design that go beyond speech-to-text by adding cognitive services through commodity AI services from Google, Amazon and Microsoft. Their goal is to make AI more accessible through low/no-code application builders like K2, creating voice recognition applications such as chatbots. He demonstrated a chatbot on a mobile phone that was able to not just recognize the words that he spoke, but recognize the intent of the interaction and request additional data: essentially a replacement for filling out a form. This might be a complete interaction, or just an assist for starting a more involved process based on the original voice input. He switched over to a computer browser interface to show more of the capabilities, including sentiment analysis based on form input that could adjust the priority of a task or impact process execution. From within their designer environment, cognitive text analytics such as sentiment analysis can be invoked as a step in a process using their Smart Objects, which are effectively wrappers around one or more services and data mapping actions, allowing less-technical process designers to include cognitive services in their process applications. Good demo of integrating voice-related cognitive services into processes, showing how third-party services make this much more accessible to developers of any skill level.
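
The sentiment-to-priority pattern is simple to express in code. A minimal Python sketch, with the sentiment call stubbed out to stand in for whichever commodity AI service is wrapped behind a platform construct like K2’s Smart Objects; the thresholds and score range are invented for illustration.

```python
def analyze_sentiment(text: str) -> float:
    """Stub for a commodity sentiment API (Google, Amazon and Microsoft
    all offer one); assume a score in [-1, 1], where negative values
    mean negative sentiment. Invented signature for illustration."""
    raise NotImplementedError

def task_priority(form_text: str) -> str:
    # Mirror the demo's idea: angry or frustrated form submissions
    # get escalated before process execution continues.
    score = analyze_sentiment(form_text)
    if score < -0.5:
        return "urgent"
    if score < 0.0:
        return "high"
    return "normal"
```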

State Machine Applied to Corporate Loans Process, BeeckerCo

Fernando Leibowich Beker from BeeckerCo finished up the day with a presentation on BeBOP, their process app suite based on IBM BPM/ODM and focused on financial services customers, followed by a “demo” of mostly prerecorded screencams. Their app generates state tables for processes using ODM business rules, then allows business users to change the state table in order to drive process execution. The demo showed a typical IBM BPM application for processing a loan origination, but the steps are defined as ad hoc tasks, so they are not part of a process flow; instead, execution is driven by the state table, which determines which task to execute in which order, and the only real flow is to check the state table, then either invoke the next task or complete the process. Table-driven processes aren’t a new concept — we’ve been doing this since the early days of workflow — although using an ODM decision table to manage the state transition table is an interesting twist. This puts me in mind of the joke I used to tell when I first started giving process-focused presentations at the Business Rules Forum: a process person would model an entire decision tree in BPMN, while a rules person would have a single BPMN node that called a decision tree to execute all of the process logic. Just because you can do something using a certain method doesn’t mean that you should.
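
For anyone who hasn’t built one of these, the table-driven pattern is easy to sketch: the “process” collapses into a loop that looks up the next task in a state transition table, which is exactly the part that business users can edit. A minimal Python sketch, with an in-memory dict standing in for the ODM decision table and invented loan-origination states.

```python
# State transition table: (current state, task outcome) -> next task.
# In the demo this lived in an ODM decision table that business users
# could edit; a plain dict stands in here, with invented states.
TRANSITIONS = {
    ("start", None): "collect_documents",
    ("collect_documents", "complete"): "credit_check",
    ("credit_check", "pass"): "approve_loan",
    ("credit_check", "fail"): "reject_loan",
    ("approve_loan", "done"): None,  # None = process complete
    ("reject_loan", "done"): None,
}

def run_process(execute_task):
    """The only real 'flow': consult the table, invoke the next ad hoc
    task, and repeat until the table says the process is complete."""
    state, outcome = "start", None
    while True:
        next_task = TRANSITIONS.get((state, outcome))
        if next_task is None:  # complete (or no matching rule)
            break
        outcome = execute_task(next_task)  # returns the task's outcome
        state = next_task
```

Changing the routing then becomes a data change to the table rather than a model change, which is both the appeal of the approach and the danger that the joke above is getting at.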

We’re done with day 2; tomorrow is only a half-day of sessions with the awards after lunch (which I’ll probably have to monitor remotely since I’ll be headed for the airport by mid-afternoon).