Bruce Silver points out that it’s been 10 years since the finalization of BPMN 2.0, the standard notation that we use for modeling (and sometimes executing) business processes. The standard wasn’t published until some time after that, and there have been revisions over the years, but BPMN 2.0 was the start of a wave of standardization in the BPMS market, since it included not just the notation but an XML serialization format that made model interchange between products possible. There’s always been some amount of controversy swirling around BPMN: some consider it too difficult for non-technical people to understand, some consider it too restrictive for technical implementations, and more. I believe that a subset of BPMN is a good tool for process modeling by business people, and that it’s a powerful process implementation tool for developers, but I agree that it’s not the only tool you should have in your toolbox.
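For anyone who has only ever seen the diagrams, here’s a minimal sketch of what that interchange format looks like. This is a hand-written BPMN 2.0 XML fragment (real modeling tools emit much more, including the diagram layout elements), parsed with Python’s standard library to walk the sequence flows:

```python
import xml.etree.ElementTree as ET

# Hand-written, minimal BPMN 2.0 fragment for illustration only; real tools
# also serialize diagram interchange (layout) elements and many attributes.
BPMN_XML = """<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             targetNamespace="http://example.com/claims">
  <process id="claims" isExecutable="true">
    <startEvent id="start"/>
    <userTask id="review" name="Review claim"/>
    <endEvent id="end"/>
    <sequenceFlow id="f1" sourceRef="start" targetRef="review"/>
    <sequenceFlow id="f2" sourceRef="review" targetRef="end"/>
  </process>
</definitions>"""

NS = {"bpmn": "http://www.omg.org/spec/BPMN/20100524/MODEL"}
root = ET.fromstring(BPMN_XML)

# Walk the sequence flows to recover the control flow of the process.
for flow in root.findall(".//bpmn:sequenceFlow", NS):
    print(flow.get("sourceRef"), "->", flow.get("targetRef"))
```

Because every compliant tool reads and writes this same schema, a model built in one product can (at least in theory) be opened in another; that’s the interchange that BPMN 2.0 enabled.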
Bruce’s post takes us back to some basic definitions of a process, and explains why BPMN is a good fit for modeling processes. He also covers some of the standard’s shortcomings:
The ability to understand the process behavior in detail simply by inspecting the diagram, unfortunately, was not top of mind in the BPMN 2.0 task force in 2010. They were solely focused on making the diagrams executable on an automation engine. But to most BPMN users today, precise description based on the diagram alone is much more important.
To help with adoption of the standard, Bruce developed conventions for its use, published in his BPMN Method & Style books and training. Some modeling vendors have even incorporated these conventions into their products, so that you can validate your models against them.
Regardless of whether you love or hate BPMN, it’s impossible to deny the impact that it has had on the BPMS market.
This post by Charity Majors of Honeycomb popped up in my feed today, and really resonated with me, given our somewhat inbred world of process automation. She is talking about the need to move between software development teams in order to keep building skills, even if it means that you move from a “comfortable” position as the project expert to a newbie role:
There is a world of distance between being expert in this system and being an actual expert in your chosen craft. The second is seniority; the first is merely .. familiarity
I see this a lot with people becoming technical experts in a particular vendor product, when it’s really a matter of familiarity with the product rather than superior skill in application development or even process automation technology. Being dedicated to a single product means that you think about solving problems in the context of that product, not about how process automation problems in general could be solved with a wider variety of technology. Dedication to a single product may make you a better technician, but it does not make you a senior engineer/architect.
Majors uses a great analogy of escalators: becoming an expert on one project (or product) is like riding one long escalator. When you get to the top, you either plateau, or move laterally and start another escalator ride from its bottom up to the next level. Considering this with vendor products in our area, this would be like building expertise in IBM BPM for a couple of years, then moving to building Bizagi expertise for a couple of years, then moving to Camunda for a couple of years. At the end of this, you would have an incredibly broad knowledge of how to solve process automation problems on a variety of different platforms, which makes you much more capable of making the types of decisions required at the senior architecture and design level.
This broader knowledge base also reduces risk: if one vendor product falls out of favor in the market, you can shift to others that are already in your portfolio. More importantly, because you already understand how a number of different products work, it’s easier to take on a completely new product. Even if that means starting at the bottom of another escalator.
Process automation has emerged as a linchpin for digital transformation, powering innovation across a company. Process automation is equally sought after to improve an organization’s top line as well as its bottom line – helping to improve customer service, lower costs and drive business growth.
I’m definitely on board with this statement. Companies that are most likely to emerge successfully from the current disruption are taking a hard look at their business processes, and considering how to include more intelligent automation.
The report is based on the results of a survey that they commissioned, which included 400 IT decision makers in the US and Europe. Almost all of those interviewed (97%) agreed that process automation is vital to digital transformation, and I was encouraged that half of the current initiatives are focused on growth rather than just efficiency or firefighting. As I’ve been saying for a while, efficiency and productivity are table stakes: you have to consider those, but you’re not going to get the biggest benefit until you start looking at what intelligent automation can do for top-line growth and customer satisfaction.
The survey included a few questions on the impact of the pandemic, with 80% of respondents saying that they are doing more automation because of remote work and (I assume) fewer workers in some cases. This is not unexpected, with 68% reporting that key business processes had breakdowns due to remote work, and most companies are working harder on automation initiatives in order to survive the current disruption.
When I write or present about the type of digital transformation that the pandemic is forcing on firms in order to survive, I usually use examples from financial services and insurance, since that’s where I do most of my consulting. However, we see examples all around us as consumers, as every business of every size struggles to transform to an online model to be able to continue providing us with goods and services. And once both the consumers and the businesses see the benefits of doing some (not all) transactions online, there will be no going back to the old way of doing things.
I recently moved, and just completed the closing on the sale of my previous home. It’s been quite a while since I last did this, but it was always (and I believe still was until a few months ago) a very paper-driven, personal-service type of transaction. This time was much easier, and almost all online; in fact, I’ve never even met anyone from my lawyer’s office face-to-face, I didn’t use a document courier, and I only saw my real estate agent in person once. All documents were digitally signed, and I had a video call with my lawyer to walk through the documents and verify that it was me doing the signing. I downloaded the signed documents directly, although the law office would have been happy to charge me to print and mail a copy. To hand over the keys, my real estate agent just left their lockbox (which contained the keys for other agents to do showings) and gave the code to my lawyer to pass on to the other party once the deal was closed. Payments were all done as electronic transfers.
My lawyer’s firm is obviously still straddling the two paradigms, and provided the option to deliver paper documents, payments and keys by courier (in fact, I had to remind them to remove the courier fee from their standard invoice). However, they no longer offer in-person meetings: it has to be a video call. Yes, you can still sign physical documents and courier them back and forth, but that’s going to add a couple of days to the process and is more cumbersome than signing them digitally. Soon, I expect to see pricing from law firms that strongly encourages their clients to do everything digitally, since handling paper documents costs the firm more and can create health risks for their employees.
Having gone through a real estate closing once from the comfort of my own home, I am left with one question: why would we ever go back to the old way of doing this? I understand that there are consumers who won’t or can’t adapt to new online methods of doing business with organizations, but those are becoming fewer every day. That’s not because the millennial demographic is taking over, but because people of all ages are learning that some of the online methods are better for them as well as for the companies that they deal with.
Generalizing from my personal anecdote, this is happening in many businesses now: they are making the move to online business models in response to the pandemic, then finding that for many operations, this is a much better way of doing things. Along the way, they may also be automating some processes or eliminating manual tasks, like my lawyer’s office eliminating the document handling steps that they used to perform. That’s not just more efficient for the company, but better for the clients.
As you adjust your business to compensate for the pandemic, design your customer-facing processes so that, wherever possible, it’s easier for your customers to do things online than the old way. That will almost always be more efficient for your business, and can greatly improve customer satisfaction. This does not mean that you don’t need people in your organization, or that your customers can’t talk to someone when required: automating processes and tasks means that you’re freeing up people to focus on resolving problems and improving customer communications, rather than performing routine tasks.
As one of my neighbourhood graffiti artists so eloquently put it, “6 feet apart but close 2 my ❤”.
The last time that I was on a plane was mid-February, when I attended the OpenText analyst summit in Boston. For people even paying attention to the virus that was sweeping through China and spreading to other Asian countries, it seemed like a faraway problem that wasn’t going to impact us. How wrong we were. Eight months later, many businesses have completely changed their products, their markets and their workforce, much of this with the aid of technology that automates processes and supply chains, and enables remote work.
By early April, OpenText had already moved their European regional conference online, and this week, I’m attending the virtual version of their annual OpenText World conference, in a completely different world than in February. Similar to many other vendors that I cover (and have attended virtual conferences for in the past several months), OpenText’s broad portfolio of enterprise automation products has the opportunity to make gains during this time. The conference opened with a keynote from CEO Mark Barrenechea, “Time to Rethink Business”, highlighting that we are undergoing a fundamental technological (and societal) disruption, and small adjustments to how businesses work aren’t going to cut it. Instead of the overused term “new normal”, Barrenechea spoke about “new equilibrium”: how our business models and work methods are achieving a stable state that is fundamentally different than what it was prior to 2020. I’ve presented about a lot of these same issues, but I really like his equilibrium analogy with the idea that the landscape has changed, and our ball has rolled downhill to a new location.
He announced OpenText Cloud Edition (CE) 20.4, which includes five domain-oriented cloud platforms focused on content, business network, experience, security and development. All of these are based on the same basic platform and architecture, allowing them to be updated on a quarterly basis.
The Content Cloud provides the single source of truth across the organization (via information federation), enables collaboration, automates processes and provides information governance and security.
The Business Network Cloud deals directly with the management and automation of supply chains, which has increased in importance exponentially in these past several months of supply chain disruption. OpenText has used this time to expand the platform in terms of partners, API integrations and other capabilities. Although this is not my usual area of interest, it’s impossible to ignore the role of platforms such as the Business Network Cloud in making end-to-end processes more agile and resilient.
The Experience Cloud is their customer communications platform, including omnichannel customer engagement tools and AI-driven insights.
The Security and Protection Cloud provides a collection of security-related capabilities, from backup to endpoint protection to digital forensics. This is another product class that has become incredibly important with so many organizations shifting to work from home, since protecting information and transactions is critical regardless of where the worker happens to be working.
The Developer Cloud is a new bundling/labelling of their software development (including low-code) tools and APIs, with 32 services across eight groupings: capture, storage, analysis, automation, search, integration, communication and security. The OpenText products that I’ve covered in the past mostly live here: process automation, low-code application development, and case management.
Barrenechea finished with their Voyager program, which appears to be an enthusiastic rebranding of their training programs.
Next up was a prerecorded AppWorks strategy and roadmap with Nic Carter and Nick King from OpenText product management. It was fortunate that this was prerecorded (as much as I feel it decreases the energy of the presentation and doesn’t allow for live Q&A) since the keynote ran overtime, and the AppWorks session could be started when I was ready. Which raises the question of why it was “scheduled” to start at a specific time. I do like the fact that OpenText puts the presentation slides in the broadcast platform with the session, so if I miss something it’s easy to skip back a slide or two on my local copy.
Process Suite (based on the Cordys-heritage product) was rolled into the AppWorks branding starting in 2018, and the platform and UI consolidated with the low-code environment between then and now. The sweet spot for their low-code process-centric applications is around case management, such as service requests, although the process engine is capable of supporting a wide range of application styles and developer skill levels.
They walked through a number of developer and end-user feature enhancements in the 20.4 version, then covered new automation features. This includes enhanced content and Brava viewer integration, but more significantly, their RPA service. They’re not creating/acquiring their own RPA tool, or focusing on just one tool, but have created a service that enables connectors to any RPA product. Their first connector is for UiPath, and they have more on the roadmap; this is a rollout very similar to what we saw at CamundaCon and Bizagi Catalyst a few weeks ago. By release 21.2 (mid-2021), they will have an open source RPA connector so that anyone can build a connector to their RPA product of choice if it’s not provided directly by OpenText.
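To make that pattern concrete, here’s a rough sketch of how a vendor-neutral RPA connector layer typically works. All of the names here are hypothetical and my own illustration, not OpenText’s actual service API: the point is that the process engine codes against one interface, and each RPA product gets its own adapter behind it.

```python
from abc import ABC, abstractmethod

class RpaConnector(ABC):
    """Hypothetical common contract that each product-specific connector implements."""

    @abstractmethod
    def start_job(self, bot_name: str, payload: dict) -> str:
        """Launch a bot and return a job id."""

    @abstractmethod
    def get_status(self, job_id: str) -> str:
        """Return 'running', 'completed' or 'failed'."""

class UiPathConnector(RpaConnector):
    """Adapter for one specific RPA product (stubbed for illustration)."""

    def __init__(self, orchestrator_url: str, api_key: str):
        self.orchestrator_url = orchestrator_url
        self.api_key = api_key

    def start_job(self, bot_name: str, payload: dict) -> str:
        # Real code would call the UiPath Orchestrator REST API here.
        return "job-123"

    def get_status(self, job_id: str) -> str:
        # Real code would poll the job status via the API.
        return "completed"

# The process engine only ever sees the RpaConnector interface, so supporting
# another RPA product means writing one more adapter, not changing the engine.
def run_rpa_step(connector: RpaConnector, bot_name: str, payload: dict) -> str:
    job_id = connector.start_job(bot_name, payload)
    return connector.get_status(job_id)
```

An open source connector, as promised for release 21.2, would presumably let customers implement this kind of interface themselves for products that OpenText doesn’t cover directly.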
There are some AppWorks demos and discussion later, but they’re in the “Demos On Demand” category so I’m not sure if they’re live or “live”.
I checked out the content services keynote with Stephen Ludlow, SVP of product management; there’s a lot of overlap between their content, process, AI and appdev messages, so it’s important to see how they approach it from all directions. His message is that content and process are tightly linked in terms of their business usage (even if on different systems), and business users should be able to see content in the context of business processes. They integrate with and complement a number of mainstream platforms, including Microsoft Office/Teams, SAP, Salesforce and SuccessFactors. They also provide digital signature capabilities, allowing an external party to digitally sign a document that is stored in an OpenText content server.
An interesting industry event that was not discussed was the recent acquisition of Alfresco by Hyland. Alfresco bragged about the Documentum customers that they were moving onto Alfresco on AWS, and now OpenText may be trying to reclaim some of that market by offering support services for Alfresco customers and providing an OpenText-branded version of Alfresco Community Edition, unfortunately via a private fork. In the 2019 Forrester Wave for ECM, OpenText takes the lead spot, Microsoft and Hyland are some ways back but still in the leaders category, and Alfresco is right on the border between leaders and strong performers. Clearly, Hyland believes that acquiring Alfresco will allow it to push further up into OpenText’s territory, and OpenText is coming out swinging.
I’m finding it a bit difficult to navigate the agenda: there’s no way to browse the entire agenda by time; instead, you need to know which product category you’re interested in to see what’s coming up in a time-based format. That’s probably best for customers who only have one or two of their products and would just search in those areas, but for someone like me who is interested in a broader swath of topics, I’m sure that I’m missing some things.
That’s it for me for today, although I may try to tune in later for Poppy Crum’s keynote. I’ll be back tomorrow for Muhi Majzoub’s innovation keynote and a few other sessions.
IBM made an interesting announcement earlier this month: they are spinning off their Managed Infrastructure Services (that is, when they run your old-school data center on their premises) to a separate company, leaving the hybrid cloud services in the mothership. This will let them really call themselves a cloud company; to quote the press release, “IBM will move from a company with more than half of its revenues in services to one with a majority in high-value cloud software and solutions”. Also, and this is only my guess, it opens the door to completely selling off the managed infrastructure services NewCo.
Hat tip to Bloor Research for posting about this, and for their comment that IBM’s hybrid cloud “isn’t quite cloud”.
I’ve been writing and presenting a lot over the past several months about the disruption that the pandemic has brought to many aspects of business, and how successful businesses are harnessing technology to respond to that disruption. In short, the ones that use the technology to become more flexible are much more likely to be coming out of this as a success.
I usually work with financial services clients on their technology strategy and execution, but this story caught my eye: it’s about how farmers are embracing Zoom calls and much more to make their operations work better. To quote the article, “the pandemic has sped up the adoption of technology in the agricultural industry as farmers spend more time with digital tools and programs and less time having face-to-face meetings”, which is exactly what’s happening in many other industries. If you thought small family farms were low-tech, think again: the farmer interviewed here uses his iPhone to monitor conditions in his fields, market his products, and track weather predictions from wherever he is. And starting this year, due to social distancing protocols, he orders his seed and supplies online, and uses Zoom to talk to experts about problems that arise during the growing season.
He thinks it’s working out well, which probably means that he’ll continue to work this way in the future. This is a theme that I’m hearing in many other types of businesses: once they’ve had to use technology and reorganize their business to accommodate the current disruption, they’re probably not going back to the old way of doing things.
There is definitely a big lesson here for businesses of any size: failure to innovate is going to cause failure, period.
I’ve been writing some guest posts over on the Trisotech blog, but haven’t mentioned them here for a while. Here’s a recap of what I’ve posted over there the past few months:
In May, I wrote about designing loosely-coupled processes to reduce fragility. I had previously written about Conway’s Law and the problems of functional silos within an organization, but then the pandemic disruption hit and I wanted to highlight how we can avoid the sorts of cascading supply chain process failures that we saw early on. A big part of this is not having tightly-coupled end-to-end processes, but instead separating out different parts of the process so that they can be changed and scaled independently of each other while still forming part of an overall process.
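As a highly simplified illustration of the idea (my own sketch, not from the original post), the snippet below decouples two process segments with a message queue: the upstream process hands off an event and moves on, and the downstream process consumes events at its own pace, so either side can be changed or scaled without touching the other.

```python
import queue
import threading

# Shared queue standing in for a real message broker between process segments.
order_events: "queue.Queue[dict]" = queue.Queue()

def order_process(order_id: str) -> None:
    # ... capture and validate the order ...
    order_events.put({"order_id": order_id, "status": "accepted"})
    # Done: the order process never calls fulfillment directly.

def fulfillment_worker() -> None:
    while True:
        event = order_events.get()
        # ... pick, pack and ship; a slowdown here doesn't block order intake ...
        print(f"fulfilling {event['order_id']}")
        order_events.task_done()

threading.Thread(target=fulfillment_worker, daemon=True).start()
order_process("ORD-42")
order_events.join()  # wait until the handoff has been consumed
```

In a real deployment the queue would be a durable broker rather than an in-process queue, but the principle is the same: the interface between the segments is the event, not a direct call.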
In July, I helped to organize the DecisionCAMP conference, and wrote about the BPMN-CMMN-DMN “triple crown”: not just the mechanics of how the three standards work together, but why you would choose one over the other in a specific design situation. There are particular challenges with the skill sets of business analysts who are expected to model organizations using these standards, since they will tend to favour the one that they’re most familiar with regardless of its suitability to the task at hand. There are also challenges for the understandability of multi-model representations, which require a business operations reader to be able to see how this BPMN diagram, that CMMN model and this other DMN definition all fit together.
In August, I focused on better process analysis using analytical techniques, namely process mining, and gave a quick intro to process mining for those who haven’t seen it in action. For several months now, we haven’t been able to do a lot of business “as is” analysis through job shadowing and interviews, and I put forward the idea that this is the time for business analysts to start learning about process mining as another tool in their kit of analysis techniques.
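For those who haven’t seen process mining in action, the core idea fits in a few lines: take an event log (one row per event, with a case id, activity name and timestamp) and reconstruct the paths that cases actually took through the process. The toy sketch below is my own illustration; real tools, and libraries such as pm4py, go much further, discovering process models and checking conformance against them.

```python
from collections import Counter, defaultdict

# Toy event log: (case id, activity, timestamp), as exported from a business system.
event_log = [
    ("case1", "Receive claim", "2020-08-01"),
    ("case1", "Assess claim",  "2020-08-02"),
    ("case1", "Pay claim",     "2020-08-03"),
    ("case2", "Receive claim", "2020-08-01"),
    ("case2", "Reject claim",  "2020-08-04"),
    ("case3", "Receive claim", "2020-08-02"),
    ("case3", "Assess claim",  "2020-08-03"),
    ("case3", "Pay claim",     "2020-08-05"),
]

# Group events into per-case traces, ordered by timestamp.
traces = defaultdict(list)
for case_id, activity, timestamp in sorted(event_log, key=lambda e: (e[0], e[2])):
    traces[case_id].append(activity)

# Count how often each distinct path (variant) through the process occurs.
variants = Counter(tuple(trace) for trace in traces.values())
for variant, count in variants.most_common():
    print(f"{count}x: {' -> '.join(variant)}")
```

Run against a real log with thousands of cases, this kind of analysis surfaces the “as is” process, including the exception paths that nobody mentions in interviews.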
In early September, I wrote about another problem that can arise due to the current trend towards distributed (work from home) processes: business email compromise fraud, and how to foil it with better processes. I don’t usually write about cybersecurity topics, but I have my own in-home specialist, and this topic overlapped nicely with my process focus and the need for different types of compliance checks to be built in.
Disclosure: Trisotech is my customer, and I am compensated for writing posts for publication on their site. However, they have no editorial control or input into the topics that I write about there, and no input into what I write here on my own blog.
By the time we got to day 3 of the virtual Bizagi Catalyst 2020, Bizagi had given up on their event streaming platform and just published all of the pre-recorded presentations for on-demand viewing. We were supposed to do a live wrap-up at the end of the day with Rob Koplowitz of Forrester Research, Bizagi CEO Gustavo Gómez and myself, moderated by Bizagi’s Senior Director of Product Marketing Rachel Brennan, so we went ahead and recorded that yesterday. It’s now up on the on-demand page, check it out:
This was my first time speaking at — or attending! — Bizagi Catalyst, and I’m looking forward to more of them in the future. Hopefully somewhere more exciting than my own home office.
Interesting analysis and visualization of 25 years of Gartner hype cycles by Mark Mine, Director of the Technology Innovation Group at the Walt Disney Studios:
As Cory Doctorow pointed out in his post (where I first saw this), “His key insight is that while the Gartner Hype Cycle isn’t much of a predictive tool, it’s a fantastic historical record: looking back on it sheds a lot of insight on how we felt about technology and what those feelings meant for each technology’s future.”
Keep this in mind when you’re looking at the next hype cycle: although Gartner may be intending to predict (and even drive) the future of technology, they’re not all that accurate. However, the history of the data is a fascinating look into technological culture.