I’ve been remiss with blogging the past couple of months, mostly because I’ve been involved in several pretty cool projects that have been keeping me busy. As I mentioned in yesterday’s post, I recently wrote a paper for Flowable about end-to-end automation and the business model transformation that it enables.
I’ve been working on a video series for a process mining startup, Futuroot, which specializes in process intelligence for SAP systems. We’re doing these as conversational videos between me and a couple of people from the Futuroot team, each video about 20 minutes of free-ranging conversation. In the first episode, I talk with Rajee Bhattacharyya, Futuroot’s Chief Innovation Officer, and Anand Argade, their Director of Product Development. Here’s a short teaser from the video:
I recently created a paper for Flowable on end-to-end automation, including a look at how the Gartner “hyperautomation” term fits into the picture. End-to-end automation is really about enabling business model transformation, not just making the same widgets a little bit faster, and I walk through some of the steps and technologies that are required.
Check it out on the Flowable site at the link above (registration required).
I’m doing a live webinar next week, Thursday March 10 at 11am Eastern, together with Kramer Reeves of Work-Relay. If you’re in BPM, you probably know who Kramer is — he’s been in the industry almost as long as I have, with a career ranging from startups to IBM — and may know that he’s now CEO of Work-Relay. They’re a small BPM vendor that specializes in integrating with Salesforce to handle all the “other” processes that are not part of Salesforce; they also have an interesting Gantt-chart-ish timeline view of processes that helps with less rigid, milestone-driven goals.
Kramer and I will be having a 20-minute freeform discussion about removing the surprises, obstacles and barriers — namely, points of potential failure — from your business operations. We will be covering the current state of intelligent process automation, how we got here, and how process automation products need to work within a broader business operations context.
We’re keeping it short and won’t have time for Q&A, but I’m always open for questions here or on other social platforms. You’ll also be able to watch the video on demand at some point after we broadcast live next week.
Now that I’m starting to produce more video — see my short videos on the Trisotech blog and the citizen developer series on Bizagi — I’ve been combing through my portfolio of previous interviews and presentations, and it’s been a real blast from the past. These stretch back to my days at FileNet (2000-2001, or what I refer to as “the longest 16 months of my life”) where I did a lot of public conference presentations and internal educational courses on the emerging field of BPM, but most of the pre-YouTube content has been lost to time.
I’ve created a playlist on my YouTube channel of all of the ones that I can find, and I’ll add new content of my own on the main video page. Click Subscribe over there to be notified of new videos when I publish them.
Here’s the earliest video that I can find of an interview, talking about TIBCO’s first release of ActiveMatrix BPM at the 2010 TIBCO conference. This was recorded and published by (now retired) Den Howlett. I discussed the trend of BPM suites moving to an all-in-one application development environment, a trend that swept through most of the mainstream vendors over the ensuing years and is still popular with many of them.
By the way, the review that I wrote of AMX BPM a few months later, after a few more in-depth briefings, is still one of the most-read posts on this blog.
If there’s something that the last 1.5 years has taught me, it’s that speaking for online conferences and even recorded video can be almost as much fun as giving a presentation in person. I give partial credit for that observation to Denis Gagne of Trisotech, who encouraged me to turn my guest posts on their blog into short videos. He also provided some great feedback on making a video that is more like me just chatting about a topic that I’m interested in, and less like a formal presentation.
In addition to these videos, I’m working with Bizagi to publish a series of eight short video interviews about citizen development, and I’ll be keynoting with a summary of those topics at their Catalyst conference on October 14.
In case you miss listening to me blather on about process while waving my hands around to illustrate a point, here’s a video that Daniel Rayner recorded for his Process Pioneers YouTube channel of a conversation that we had recently.
We chatted about challenges that organizations face when implementing BPM (both documentation/management and full automation projects), and some of the wins that are possible. The “pivot on a dime” quote was about how having insights into your business processes, and possibly also automation of those processes, allows you to change your business model and direction quickly in the face of disruption.
Back in May, I did a webinar with ASG Technologies on the importance and handling of (unstructured) content within processes. Almost every complex customer-facing process contains some amount of unstructured content, and it’s usually critical to the successful completion of one or more processes. But if you’re going to have unstructured content attached to your processes, you need to be concerned about governance of that content to ensure that people have the right amount of information to complete a step, but not so much that it violates the customer’s privacy. If everything is in a well-behaved content management system, that governance is an easier task — although still often mishandled — but when you start adding in network file shares and direct process instance attachments, it gets a lot tougher.
I also wrote a white paper for them on the topic, and I just noticed that it’s been published at this link (registration required). From the abstract of the paper:
Process automation typically provides control over what specific tasks and structured data are available to each participant in the process, but the content that drives and supports the process must also be served up to participants when necessary for completing a task. This requires governance policies that control who can access what content at each point in a process, based on security rules, privacy laws and the specific participant’s access clearance.
In this paper, we examine what is required for a governance-first approach to content within customer-facing processes, and how to find the “Goldilocks balance” of just the right amount of information available to the right people at the right time.
Back in 2008, I started attending the annual academic research BPM conference, which was in Milan that year. I’m not an academic, but this wasn’t just an excuse for a week in Europe: the presentations I saw there generated so many ideas about the direction that the industry would/should take. Coincidentally, 2008 was also the first year that I saw process mining offered as a product: I had a demo with Keith Swenson of Fujitsu showing me their process discovery product/service in June, then saw Anne Rozinat’s presentation at the academic conference in September (she was still at Eindhoven University then, but went on to create Fluxicon and their process mining tool).
Over the years, I met a lot of people at this conference who accepted me as a bit of a curiosity; I brought the conference some amount of publicity through my blog posts, and pushed a lot of software vendors to start showing up to see the wild and wonderful ideas on display. They even invited me to give a keynote in 2011 on the changing nature of work. Two of the people who I met along the way, Marlon Dumas of University of Tartu and Marcello La Rosa of University of Melbourne, went on to form their own process mining company, Apromore.
I’ve recently written a white paper for Apromore to help demystify the use of process mining alongside more traditional process modeling techniques by business analysts. From the introduction:
Process modeling and process mining are complementary, not competitive, techniques: a business analyst needs both in their toolkit. Process mining provides exact models of the system-based portions of processes, while manual modeling and analysis captures human activities, documents informal procedures, and identifies the many ways that people “work around” systems.
I’m definitely a process person, but my start in the business was through document-driven imaging and workflow systems. It’s important to keep in mind, no matter where you lie on the spectrum of interest between process and content, that they are often intertwined: unstructured content may be a driver for process, or be the product of a process. Processes sometimes exist only to manage the content, and sometimes content only exists as supporting documentation for a process. A few years ago, I wrote about several of the process/content use cases that I see in practice for the Alfresco blog.
One thing I didn’t cover at that time is the use of processes (and rules) to govern access to content: although a good content management system will let the right person see the right content, not all unstructured content is stored in a content management system, much less a good one. Even if content is in a content management system, it may not be appropriate to just let everyone root around in there to find whatever documents they might want to see. Access to content is often contextual, that is, when someone is acting in a certain role and performing a certain task, they should see specific content. In another context, they might see different content. This is even more important when you open up your processes and content to external participants, including customers and business partners.
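To make the idea of contextual access a bit more concrete, here’s a minimal sketch (my own illustration, not code from any particular content management or process product) of a check that filters which documents a participant sees based on the role they’re acting in and the task at hand; all of the class, role and document names are hypothetical.

```java
import java.util.List;
import java.util.Set;

// Illustrative only: a toy contextual access check, not taken from any
// particular content management or BPM product. All names are hypothetical.
public class ContextualContentAccess {

    // Who is asking: the user, the role they are currently acting in,
    // and the process task they are performing.
    record AccessContext(String userId, String role, String taskName) {}

    // A piece of unstructured content, tagged with the roles allowed to see it
    // and the process tasks for which it is relevant.
    record ContentItem(String documentId, Set<String> allowedRoles, Set<String> relevantTasks) {}

    // Return only the documents this participant should see in this context:
    // the same user, acting in a different role or working on a different
    // task, may see a different subset.
    static List<ContentItem> visibleContent(AccessContext ctx, List<ContentItem> candidates) {
        return candidates.stream()
                .filter(item -> item.allowedRoles().contains(ctx.role()))
                .filter(item -> item.relevantTasks().contains(ctx.taskName()))
                .toList();
    }

    public static void main(String[] args) {
        var ctx = new AccessContext("u123", "claims-adjuster", "Review claim");
        var docs = List.of(
                new ContentItem("claim-form",
                        Set.of("claims-adjuster", "underwriter"),
                        Set.of("Review claim", "Assess risk")),
                new ContentItem("medical-report",
                        Set.of("underwriter"),
                        Set.of("Assess risk")));
        // Prints only the claim form: the medical report is neither allowed for
        // this role nor relevant to this task.
        System.out.println(visibleContent(ctx, docs));
    }
}
```

In a real implementation these rules would come from a policy engine or the content system’s own security model rather than being hard-coded, but the point stands: visibility is decided per role and per task, not per repository.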
I’ve had the chance to talk about some of these ideas in more detail in a couple of places. First, my most recent guest post on the Trisotech blog is called “In financial services, process rules content”, and looks at how this can work in financial applications such as insurance underwriting:
There are a lot of laggards [which have] a somewhat disorganized collection of content related to and created by processes, stored on multiple internal systems, with little or no internal access control, and no external access. In fact, I would say that in every insurance and financial operation that I’ve visited as a consultant, I’ve seen some variation of this lack of content governance, and the very real impacts on operational performance as well as privacy concerns. This is definitely a situation where process can come to the rescue for getting control over access to unstructured content.
Secondly, I’m presenting a webinar and writing a short paper for ASG Technologies on content governance in customer-facing processes. The webinar will be on May 5th, with the white paper available as a follow-on shortly after that. You can register here to attend. Hope to see you there!
I had a quick briefing with Daniel Meyer, CTO of Camunda, about today’s release. With this new version 7.15, they are rebranding from Camunda BPM to Camunda Platform (although most customers just refer to the product as “Camunda” since they really bundle everything in one package). This follows the lead of other vendors who have distanced themselves from the BPM (business process management) moniker, in part because what the platforms do is more than just process management, and in part because BPM is starting to be considered an outdated term. We’ve seen the analysts struggle with naming the space, or even defining it in the same way, with terms like “digital process automation”, “hyperautomation” and “digitalization” being bandied about.
An interesting pivot for Camunda in this release is their new support for low-code developers — which they distinguish as having a more technical background than citizen developers — after years of primarily serving the needs of professional technical (“pro-code”) developers. The environment for pro-code developers won’t change, but the new version enables more collaboration between low-code and pro-code developers within the platform, through a number of new features:
Create a catalog of reusable workers (integrations) and RPA bots that can be integrated into process models using templates. This allows pro-code developers to create the reusable components, while low-code developers consume those components by adding them to process models for execution (a minimal worker sketch follows this list). RPA integration is driving some amount of this need for collaboration, since low-code developers are usually the ones on the front end of RPA initiatives in terms of determining and training bot functionality, but previously may have had more difficulty integrating those bots into process orchestrations. Camunda is extending their RPA Bridge to add Automation Anywhere integration to their existing UiPath integration, which gives them coverage of a significant portion of the RPA market. I covered a bit of their RPA Bridge architecture and their overall view on RPA in one of my posts from their October 2020 CamundaCon. I expect that we will soon see Blue Prism integration to round out the main commercial RPA products, and possibly an open source alternative to appeal to their community customers.
DMN support, including DRD and decision tables, in their Cawemo collaborative modeler. This is a good way to get the citizen developers and business analysts involved in modeling decisions as well as processes.
A form builder. Now, I’m pretty sure I’ve heard Jakob Freund claim that they would never do this, but there it is: a graphical form designer for creating a rudimentary UI without writing code. This is just a preliminary release, only supporting text input fields, so isn’t going to win any UI design awards. However, it’s available in the open source and commercial versions as well as accessible as a library in bpmn.io, and will allow a low-code developer to do end-to-end development: create process and decision models, and create reusable “starter” UIs for attaching to start events and user activities. When this form builder gets a bit more robust in the next version, it may be a decent operational prototyping tool, and possibly even make it into production for some simple situations.
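To give a sense of what one of those reusable workers looks like in practice: in Camunda 7, a pro-code developer would typically implement it as an external task worker, which a low-code developer can then drop into a process model via an element template. Here’s a minimal sketch using the Java external task client; the topic name, variables and endpoint are my own placeholders, not anything from the 7.15 announcement.

```java
import java.util.Map;
import org.camunda.bpm.client.ExternalTaskClient;

// A minimal external task worker; topic name, variables and endpoint are
// made up for illustration. A pro-code developer publishes workers like this,
// and a low-code developer references them from a process model via a template.
public class DocumentClassifierWorker {

    public static void main(String[] args) {
        ExternalTaskClient client = ExternalTaskClient.create()
                .baseUrl("http://localhost:8080/engine-rest") // Camunda REST endpoint (assumed local)
                .asyncResponseTimeout(10000)                  // long polling for new tasks
                .build();

        // Subscribe to the (hypothetical) "classify-document" topic: every
        // external service task in any process model bound to this topic
        // will be handled by this worker.
        client.subscribe("classify-document")
                .lockDuration(30000)
                .handler((task, taskService) -> {
                    String documentId = task.getVariable("documentId");
                    // ... call the actual classification logic here ...
                    String documentType = "invoice"; // placeholder result
                    taskService.complete(task, Map.of("documentType", documentType));
                })
                .open();
    }
}
```

The worker long-polls the engine for tasks on its topic, does its work, and completes each task with result variables; any number of process models can bind service tasks to that topic without the modeler ever touching this code.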
They’ve also added some nice enhancements to Optimize, their monitoring and analytics tool, and have bundled it into the core commercial product. Optimize was first released mid-2017 and is now used by about half of their customers. Basically, it pumps the operational data exhaust out of the BPM engine database and into an Elasticsearch environment; with the advent of Optimize 3.0 last year, they could also collect tracking events from other (non-Camunda) systems into the same environment, allowing end-to-end processes to be tracked across multiple systems. The new version of Optimize, now part of Camunda Platform 7.15, adds some new visualizations and filtering for problem identification and tracking.
Overall, there are some important things in this release, although it might appear to be just a collection of capabilities that many of the all-in-one low-code platforms have had all along. It’s not really in Camunda’s DNA to become a proprietary all-in-one application development platform like Appian or IBM BPM, or even make low-code a primary target, since they have a robust customer base of technical developers. However, these new capabilities create an important bridge between low-code developers who have a better understanding of the business needs, and pro-code developers with the technical chops to create robust systems. It also provides a base for Camunda customers who want to build their own low-code environment for internal application development: a reasonably common scenario in large companies that just can’t fit their development needs into a proprietary application development platform.