I had a quick briefing with Daniel Meyer, CTO of Camunda, about today’s release. With this new version 7.15, they are rebranding from Camunda BPM to Camunda Platform (although most customers just refer to the product as “Camunda” since they really bundle everything in one package). This follows the lead of other vendors who have distanced themselves from the BPM (business process management) moniker, in part because what the platforms do is more than just process management, and in part because BPM is starting to be considered an outdated term. We’ve seen the analysts struggle with naming the space, or even defining it in the same way, with terms like “digital process automation”, “hyperautomation” and “digitalization” being bandied about.
An interesting pivot for Camunda in this release is their new support for low-code developers — which they distinguish as having a more technical background than citizen developers — after years of primarily serving the needs of professional technical (“pro-code”) developers. The environment for pro-code developers won’t change, but now it will be possible for more collaboration between low-code and pro-code developers within the platform with a number of new features:
Create a catalog of reusable workers (integrations) and RPA bots that can be integrated into process models using templates. This allows pro-code developers to create the reusable components, while low-code developers consume those components by adding them to process models for execution. RPA integration is driving some amount of this need for collaboration, since low-code developers are usually the ones on the front-end of RPA initiatives in terms of determining and training bot functionality, but previously may have had more difficulty integrating those into process orchestrations. Camunda is extending their RPA Bridge to add Automation Anywhere integration to their existing UiPath integration, which gives them coverage of a significant portion of the RPA market. I covered a bit of their RPA Bridge architecture and their overall view on RPA in one of my posts from their October 2020 CamundaCon. I expect that we will soon see Blue Prism integration to round out the main commercial RPA products, and possibly an open source alternative to appeal to their community customers.
DMN support, including DRD and decision tables, in their Cawemo collaborative modeler. This is a good way to get the citizen developers and business analysts involved in modeling decisions as well as processes.
A form builder. Now, I’m pretty sure I’ve heard Jakob Freund claim that they would never do this, but there it is: a graphical form designer for creating a rudimentary UI without writing code. This is just a preliminary release, only supporting text input fields, so isn’t going to win any UI design awards. However, it’s available in the open source and commercial versions as well as accessible as a library in bpmn.io, and will allow a low-code developer to do end-to-end development: create process and decision models, and create reusable “starter” UIs for attaching to start events and user activities. When this form builder gets a bit more robust in the next version, it may be a decent operational prototyping tool, and possibly even make it into production for some simple situations.
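Reusable workers like those in the new catalog are typically exposed to modelers via element templates in the Camunda Modeler: a JSON descriptor that pre-configures a service task so the low-code developer only fills in business parameters. A minimal hypothetical example (the worker name, topic and parameters are illustrative, not from the release):

```json
{
  "name": "Send Email Worker",
  "id": "com.example.SendEmailWorker",
  "appliesTo": ["bpmn:ServiceTask"],
  "properties": [
    {
      "label": "Implementation",
      "type": "Hidden",
      "value": "external",
      "binding": { "type": "property", "name": "camunda:type" }
    },
    {
      "label": "Topic",
      "type": "Hidden",
      "value": "send-email",
      "binding": { "type": "property", "name": "camunda:topic" }
    },
    {
      "label": "Recipient",
      "type": "String",
      "binding": { "type": "camunda:inputParameter", "name": "recipient" }
    }
  ]
}
```

Applying a template like this in the Modeler hides the wiring (external task type and topic) and presents only the "Recipient" field, which is exactly the pro-code/low-code division of labor described above.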
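On the DMN side, what Cawemo users draw as a decision table is serialized as standard DMN XML underneath, so models remain portable to the engine. A minimal hypothetical table (namespaces omitted; the inputs and values are illustrative):

```xml
<!-- Hypothetical DMN decision table excerpt; ids and values are illustrative -->
<decision id="approveDiscount" name="Approve Discount">
  <decisionTable hitPolicy="UNIQUE">
    <input label="Customer Type">
      <inputExpression typeRef="string"><text>customerType</text></inputExpression>
    </input>
    <output label="Discount" name="discount" typeRef="number"/>
    <rule>
      <inputEntry><text>"gold"</text></inputEntry>
      <outputEntry><text>0.10</text></outputEntry>
    </rule>
    <rule>
      <inputEntry><text>"standard"</text></inputEntry>
      <outputEntry><text>0</text></outputEntry>
    </rule>
  </decisionTable>
</decision>
```

The business analyst sees only the table of rules; the XML is what gets versioned and deployed alongside the BPMN models.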
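The forms produced by the new builder are defined as a JSON schema rendered by the form-js library in bpmn.io, which is why the same form definition works in both the open source and commercial stacks. A hypothetical minimal form using only the text inputs supported in this preliminary release (field keys and labels are illustrative):

```json
{
  "type": "default",
  "components": [
    { "type": "textfield", "key": "customerName", "label": "Customer name" },
    { "type": "textfield", "key": "orderId", "label": "Order ID" }
  ]
}
```

The `key` values map to process variables, which is what lets these "starter" UIs attach directly to start events and user tasks without custom code.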
They’ve also added some nice enhancements to Optimize, their monitoring and analytics tool, and have bundled it into the core commercial product. Optimize was first released mid-2017 and is now used by about half of their customers. Basically, it pumps the operational data exhaust out of the BPM engine database and into an elastic search environment; with the advent of Optimize 3.0 last year, they could also collect tracking events from other (non-Camunda) systems into the same environment, allowing end-to-end processes to be tracked across multiple systems. The new version of Optimize, now part of Camunda Platform 7.15, adds some new visualizations and filtering for problem identification and tracking.
Overall, there are some important things in this release, although it might appear to be just a collection of capabilities that many of the all-in-one low-code platforms have had all along. It’s not really in Camunda’s DNA to become a proprietary all-in-one application development platform like Appian or IBM BPM, or even make low-code a primary target, since they have a robust customer base of technical developers. However, these new capabilities create an important bridge between low-code developers who have a better understanding of the business needs, and pro-code developers with the technical chops to create robust systems. It also provides a base for Camunda customers who want to build their own low-code environment for internal application development: a reasonably common scenario in large companies that just can’t fit their development needs into a proprietary application development platform.
The last time that I was on a plane was mid-February, when I attended the OpenText analyst summit in Boston. For people even paying attention to the virus that was sweeping through China and spreading to other Asian countries, it seemed like a faraway problem that wasn’t going to impact us. How wrong we were. Eight months later, many businesses have completely changed their products, their markets and their workforce, much of this with the aid of technology that automates processes and supply chains, and enables remote work.
By early April, OpenText had already moved their European regional conference online, and this week, I’m attending the virtual version of their annual OpenText World conference, in a completely different world than in February. Similar to many other vendors that I cover (and have attended virtual conferences for in the past several months), OpenText’s broad portfolio of enterprise automation products has the opportunity to make gains during this time. The conference opened with a keynote from CEO Mark Barrenechea, “Time to Rethink Business”, highlighting that we are undergoing a fundamental technological (and societal) disruption, and small adjustments to how businesses work aren’t going to cut it. Instead of the overused term “new normal”, Barrenechea spoke about “new equilibrium”: how our business models and work methods are achieving a stable state that is fundamentally different than what it was prior to 2020. I’ve presented about a lot of these same issues, but I really like his equilibrium analogy with the idea that the landscape has changed, and our ball has rolled downhill to a new location.
He announced OpenText Cloud Edition (CE) 20.4, which includes five domain-oriented cloud platforms focused on content, business network, experience, security and development. All of these are based on the same basic platform and architecture, allowing them to be updated on a quarterly basis.
The Content Cloud provides the single source of truth across the organization (via information federation), enables collaboration, automates processes and provides information governance and security.
The Business Network Cloud deals directly with the management and automation of supply chains, which has increased in importance exponentially in these past several months of supply chain disruption. OpenText has used this time to expand the platform in terms of partners, API integrations and other capabilities. Although this is not my usual area of interest, it’s impossible to ignore the role of platforms such as the Business Network Cloud in making end-to-end processes more agile and resilient.
The Experience Cloud is their customer communications platform, including omnichannel customer engagement tools and AI-driven insights.
The Security and Protection Cloud provides a collection of security-related capabilities, from backup to endpoint protection to digital forensics. This is another product class that has become incredibly important with so many organizations shifting to work from home, since protecting information and transactions is critical regardless of where the worker happens to be working.
The Developer Cloud is a new bundling/labelling of their software development (including low-code) tools and APIs, with 32 services across eight groupings including capture, storage, analysis, automation, search, integration, communications and security. The OpenText products that I’ve covered in the past mostly live here: process automation, low-code application development, and case management.
Barrenechea finished with their Voyager program, which appears to be an enthusiastic rebranding of their training programs.
Next up was a prerecorded AppWorks strategy and roadmap with Nic Carter and Nick King from OpenText product management. It was fortunate that this was prerecorded (as much as I feel it decreases the energy of the presentation and doesn’t allow for live Q&A) since the keynote ran overtime, and the AppWorks session could be started when I was ready. Which begs the question why it was “scheduled” to start at a specific time. I do like the fact that OpenText puts the presentation slides in the broadcast platform with the session, so if I miss something it’s easy to skip back a slide or two on my local copy.
Process Suite (based on the Cordys-heritage product) was rolled into the AppWorks branding starting in 2018, and the platform and UI consolidated with the low-code environment between then and now. The sweet spot for their low-code process-centric applications is around case management, such as service requests, although the process engine is capable of supporting a wide range of application styles and developer skill levels.
They walked through a number of developer and end-user feature enhancements in the 20.4 version, then covered new automation features. This includes enhanced content and Brava viewer integration, but more significantly, their RPA service. They’re not creating/acquiring their own RPA tool, or just focusing on one tool, but have created a service that enables connectors to any RPA product. Their first connector is for UiPath and they have more on the roadmap — very similar rollout to what we saw at CamundaCon and Bizagi Catalyst a few weeks ago. By release 21.2 (mid-2021), they will have an open source RPA connector so that anyone can build a connector to their RPA of choice if it’s not provided directly by OpenText.
There are some AppWorks demos and discussion later, but they’re in the “Demos On Demand” category so I’m not sure if they’re live or “live”.
I checked out the content service keynote with Stephen Ludlow, SVP of product management; there’s a lot of overlap between their content, process, AI and appdev messages, so it’s important to see how they approach it from all directions. His message is that content and process are tightly linked in terms of their business usage (even if on different systems), and business users should be able to see content in the context of business processes. They integrate with and complement a number of mainstream platforms, including Microsoft Office/Teams, SAP, Salesforce and SuccessFactors. They provide digital signature capabilities, allowing an external party to digitally sign a document that is stored in an OpenText content server.
An interesting industry event that was not discussed was the recent acquisition of Alfresco by Hyland. Alfresco bragged about the Documentum customers that they were moving onto Alfresco on AWS, and now OpenText may be trying to reclaim some of that market by offering support services for Alfresco customers and providing an OpenText-branded version of Alfresco Community Edition, unfortunately via a private fork. In the 2019 Forrester Wave for ECM, OpenText takes the lead spot, Microsoft and Hyland are some ways back but still in the leaders category, and Alfresco is right on the border between leaders and strong performers. Clearly, Hyland believes that acquiring Alfresco will allow it to push further up into OpenText’s territory, and OpenText is coming out swinging.
I’m finding it a bit difficult to navigate the agenda, since there’s no way to browse the entire agenda by time; instead, you need to know which product category you’re interested in to see what’s coming up in a time-based format. That’s probably best for customers who only have one or two of their products and would just search in those areas, but for someone like me who is interested in a broader swath of topics, I’m sure that I’m missing some things.
That’s it for me for today, although I may try to tune in later for Poppy Crum‘s keynote. I’ll be back tomorrow for Muhi Majzoub’s innovation keynote and a few other sessions.
The second day of the Appian World 2020 virtual conference started with CTO Michael Beckley, who immediately set me straight on something that I assumed yesterday: at least some of the keynotes were pre-recorded, not live. So their statement on their website, that keynotes are “live” from 10am-noon, and other references to “live” keynotes just mean that they are being broadcast at that time, not being broadcast live. Since there’s no interaction with the audience during keynotes it’s difficult to tell, and the content of most keynotes has been well done in any case. This may have been a special case for Beckley, since he has health conditions that make him higher risk, although this was still recorded in the Appian auditorium where there would have been some number of support staff.
Beckley went into more detail on the COVID-19 apps that they have developed, with a highlight on their latest Workforce Safety and Readiness that helps to manage how workers return to a workplace. He walked through the employee view of the app, where they can record their own health check information, plus the HR manager view that allows them to set the questions, policies and information that will be seen by the employees. They’ve put this together pretty quickly using their own low-code platform, and are offering it at a reasonable price to their customers.
Next up was a customer presentation by Michael Woolley, Principal of IT Retail Systems at The Vanguard Group. They’re a huge wealth management firm spread over several countries, and they’re building Appian applications including ones that will be used by 6,000 employees. It appears that they are replacing their legacy workflow system of 20 years, which has hundreds of workflows. [I think the legacy system may be an IBM FileNet system, since I have a memory of doing some work for Vanguard over 20 years ago to develop requirements and technical design for just such a system – flashback!] They wanted to move to a modern low-code cloud platform, and although their standard workflow is pretty straightforward financial services transactional flows, they are incorporating business rules as well as BPM and case management, and RPA for interacting with legacy line of business systems. They are also planning to include AI/ML within the case management stages. He discussed their basic architecture as well as their development organization, and finished with some best practices for large projects such as this: it’s a multi-year program that covers many different workflows, so isn’t a greenfield application and has complex migration considerations.
Deputy CTO Malcolm Ross returned to follow on from his talk yesterday, when he talked about AI and RPA, to discuss how they’re improving low-code development. He showed some pretty cool AI-augmented development that they are releasing in June, which looks at the design of a process as you’re building it, and recommends the next node that you will want to add based on the current content and goals of the process. I’m definitely interested in seeing where they go with this. He had a number of detailed product updates, including cloud security, details on testing/deployment cycles for application packages, and administrative tools such as (system) Health Check. They continue to push new features into their SAIL user interface layer, making it easier for developers to create new experiences on any form factor — one of the strikes against most low-code platforms is that their UI development is not as flexible as customers require, and Appian is definitely raising the bar on what’s possible. He finished up with their multi-channel communication add-ons, which allow the use of tools such as Twilio directly within an Appian application.
The final presentation of the morning keynote was Kristie Grinnell, Global CIO and Chief Supply Chain Officer at General Dynamics Information Technology with a presentation on how they are using Appian to help manage their 30,000 employees spread over 400 customer locations. They are a government contractor, and have to manage all things around being an outsourced IT company, such as assigning people to customer projects, timesheet adjustments and invoicing, while maintaining compliance and auditability. She spoke about some of their specific Appian applications that they have developed, and the benefits: an employee pay adjustment request application (to adjust people’s pay for when they work more hours than they were paid for) reduced backlog from three weeks to three days, and reduced errors. They also developed an international travel approval app (likely not getting used much these days), since most of their employees have a high security clearance and specific risks need to be managed during travel, which reduced the approval time from days to hours. Most of their applications to date have been administrative, but they are keen to look at how applying AI/ML to their existing data can help them to make better decisions in the future.
CMO Kevin Spurway and Malcolm Ross closed the keynotes with announcements of their awards to partners, resellers, app market contributors, and hackathon winners. On an optimistic note, Spurway announced that next year’s Appian World will be in San Diego, April 11-14, 2021. Here’s hoping.
This is the end of my Appian World 2020 coverage — some good information in the keynotes. As noted yesterday, the breakout session format isn’t sufficiently compelling to have me spend time there, but if you’re an Appian customer, you’ll probably find things of interest.
Another week, another virtual event! Appian World is happening two days this week, and will be available on-demand after. This has a huge number of sessions on several parallel tracks, which are pre-recorded, with keynotes in advance (not clear if the keynotes are actually live, or pre-recorded). From their site:
Keynote sessions are live from 10:00 AM – 12:00 PM EDT on May 12th and 13th. All breakout sessions will become available on-demand at 12:00 PM EDT on their scheduled day, immediately following the live keynote. Speakers will be available from 12:00 PM – 3:00 PM EDT for live Q&A on their scheduled session day.
They’re using the INXPO platform, and apparently using every possible feature. Here’s a bit of navigation help:
There’s a Lobby tab with a video greeting from Appian CMO Kevin Spurway. It has links to the agenda, solutions showcase and lounge, which is a bit superfluous since those are all top-level tabs visible at all times.
The Agenda tab lists the sessions for today, including the keynote (for some reason it showed as Live from 8:30am although the keynotes didn’t start until 10am), then all of the breakout sessions for the day, which you can dip into in any order since they are all pre-recorded and are made available at the same time.
The Sessions tab is where you can drill down and watch any of the sessions once they become available, but you can also do this directly from the Agenda tab. Sessions has them organized into tracks, such as Full Stack Automation Theater and Low-Code Development Theater.
The Solutions Showcase tab is a virtual exhibit hall, with booths for partners and a pavilion of Appian product booths. These can have a combination of pre-recorded video, documents to download, and links to chat with them. It’s a bit overwhelming, although I suppose people will go through some of the virtual booths after the sessions, since the sessions run only 10-3 each day. I suppose that many of these partners signed on for Appian World before it moved to a virtual event, so Appian needed to provide a way for them to show their offerings.
The Lounge tab is a single-threaded chat for all attendees. Not a great forum for discussion: as I’ve mentioned on all of the other virtual conference coverage in the past couple of weeks, a separate discussion platform like Slack that allows for multi-threaded discussions where audience members can both lead and participate in discussions with each other is much, much better for audience engagement.
The Games tab has results for some games that they’re running — this is common at conferences, such as how many people send out tweets with the conference hashtag, or getting your ID scanned by a certain number of booths, but not something that adds value for my conference experience.
The keynote speakers appeared on a stage in Appian’s own auditorium, empty (except supposedly for each other and production staff). CEO Matt Calkins was up first, and talked about how the world has changed in 2020, and how their low-code application development can help with the changes that are being forced on organizations by the pandemic. He talked about the applications that they have built in the past couple of months: a COVID-19 workforce tracking app, a loan coordination app that uses AI and RPA for automation, and a workforce safety & readiness app that manages how businesses bring their workforce back to the workplace. They have made these free or low-cost for their customers for the near term.
His theme for the keynote is automation: using human and digital workers, including RPA and AI, to get the best results. He mentioned BPM as part of their toolbox, and focused on the idea that the goal is to automate, and the specific tool doesn’t matter. They bought an RPA company and have rebranded it as AppianRPA: it’s cloud-native and Java-based, which is different from many other RPA products, but is more appealing to the CIO-level buyer for organization-wide implementations. They are pushing an open agenda, where they can interact with other RPA products and cloud platforms: certainly as a BPM vendor, interaction with other automation tools is part of the fabric.
They have a few new things that I haven’t seen in briefings (to be fair, I think I’ve dropped off their analyst relations radar). Their “Automation Planner” can make recommendations for what type of automation to use for any particular task. Calkins also spoke about their intelligent document processing (IDP), which addresses what they believe is one of the biggest automation challenges that companies have today.
The Appian platform offers full-stack automation — workflow, case management, RPA, AI, business rules — with a “data anywhere” philosophy of integrating with systems to allow processing data in place, and their low-code development for which they have become known. If you’re a fan of the all-in-one proprietary platform, Appian is definitely one of the main contenders. They have a number of vertical solutions now, and are starting to offer standardized all-inclusive subscription pricing for different sizes of installations that removes a lot of uncertainty about total cost of ownership. He also highlighted some of the vertical applications created by partners PWC, Accenture and KPMG.
I always like hearing Calkins talk (or chatting with him in person), because he’s smart and well-spoken, and ties together a lot of complex ideas well. He covered a lot of information about Appian products, direction, customers and partners in a 30-minute keynote, and it’s definitely worth watching the replay.
Next up was a “stories from the front line of COVID-19” panel featuring Darin Cline, EVP of Operations of Bank of the West (part of BNP Paribas), and Darren Blake, COO of Bexley Health Neighbourhood Care in the UK National Health Service, moderated by Appian co-founder Mark Wilson. This was done remotely rather than in their studio, with each of the three having an Appian World backdrop: a great branding idea that was similar to what Celonis did with their remote speakers at Celosphere, although each person’s backdrop also had their own company’s logo — nice touch.
Blake talked about how they saw the wave of COVID-19 coming based on data that they were seeing from around the world, and put plans in place to leverage their existing Appian-based staff tracker to implement emergency measures around staff management and redeployment. They support home-based services as well as their patients’ visits to medical facilities, and had to manage staff and patient visits for non-COVID ailments as well as COVID responses and even dedicated COVID testing clinics without risking cross-contamination. Cline talked about how they needed to change their operations to allow people to continue accessing banking services even with lockdowns that happened in their home state of California. He said this disruption has pushed them to become a much more agile organization, both in business and IT departments: this is one of those things that likely is never going back to how it was pre-COVID. He credited their use of Appian for low-code development as part of this, and said that they are now taking advantage of it as never before. Blake echoed that they also have become much more agile, creating and deploying new capabilities in their systems in a matter of a few days: the vision of all low-code, but rarely the reality.
Interesting to hear these customer stories, where they stepped up and started doing agile development in the low-code platform that they were already using, listening to the voice of the customer in cooperation with their business people, executives and implementation partners such as Appian. So many things that companies said were just not possible actually are: fast low-code implementation, work from home, and other changes that are here to stay. These are organizations that are going to hit the ground running as the pandemic recedes — as Blake points out, this is going to be with us for at least two years until a vaccine is created, and will have multiple waves — since they have experienced a digital revolution that has fundamentally changed how they work.
Great customer panel: often these are a bit dry and unfocused, but this one was fascinating since they’ve had a bit of time to track how the pandemic has impacted their business and how they’ve been able to respond to it. In both cases, this is the new normal: Cline explicitly said that they are never going back to having so many people in their offices again, since both their distributed workforce and their customers have embraced online interactions.
Next up was deputy CTO Malcolm Ross (who I fondly remember as providing my first Appian demo in 2006) with a product update. He showed a demo that included integration of RPA, AI, IDP, Salesforce and SAP within the low-code BPM framework that ties it all together. It’s been a while since I’ve had an Appian briefing, and there’s some nice new functionality for how integrations are created and configured with a minimum of coding. They have built-in integrations (i.e., “no code”) to many different other systems. Their AI is powered by Google’s AI services, and includes all of the capabilities that you would find there, bundled into Appian’s environment. This “Appian AI” is at the core of their IDP offering, which does classification and data extraction on unstructured documents, to map into structured data: they have a packaged use case that they provide with their product that includes manual correction when AI classification/extraction doesn’t have a sufficient level of confidence. Because there’s AI/ML behind IDP, it will become smarter as human operators correct the errors.
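Appian doesn’t publish the internals of this human-in-the-loop correction, but the underlying pattern is straightforward confidence-based routing. A hedged Python sketch of the idea (the threshold value and field names are my own illustrative assumptions, not Appian’s API):

```python
# Hypothetical sketch of confidence-based routing in an IDP pipeline:
# extractions whose confidence falls below a threshold go to a human
# review queue; the rest are processed straight through. The threshold
# and the document fields here are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.85

def route(extraction_results):
    """Split extraction results into auto-processed and human-review lists."""
    auto, review = [], []
    for doc in extraction_results:
        if doc["confidence"] >= CONFIDENCE_THRESHOLD:
            auto.append(doc)
        else:
            review.append(doc)
    return auto, review
```

The ML feedback loop mentioned above comes from feeding the human corrections on the `review` queue back into model training, so over time fewer documents fall below the threshold.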
He went through a demo of their RPA, including how the bots can interact with other Appian automation components such as IDP. There is, as expected, another orchestration (process) model within RPA that shows the screen/task flow: it would be good if they could look at converging this modeling format with their BPM modeling, even though it would be a simple subset. Regardless, a lot of interesting capabilities here for management of robotic resources. If you’re an existing Appian customer, you’re probably already looking at their RPA. Even if you’re already using another RPA product, Appian’s Robotic Workforce Manager allows you to manage Blue Prism, Automation Anywhere and UiPath bots as well as AppianRPA bots.
The last part of the morning keynotes was a panel featuring Austan Goolsbee, Former Chairman of President Obama’s White House Council of Economic Advisers, and Arthur Laffer, Economist and Member of President Reagan’s Economic Policy Advisory Board, moderated by Matt Calkins. This was billed as a “left versus right” economists’ discussion on how to reopen the (US) economy, and quickly lost my interest: it’s not that I’m not interested in the topic, but I prefer to find a much wider set of opinions than these two Americans who turned it into a political debate, flinging around phrases such as “Libertarian ideal”. Not really a good fit as part of a keynote at a tech vendor conference. I think this really highlights some of the differences between in-person and virtual conferences: the virtual tech conferences should stick to their products and customers, and drop the “thought leaders” from unrelated areas. The celebrity speakers have a slight appeal to some attendees in person, but not so much in the virtual forum even if they are live conversations. IBM Think had a couple of high profile speakers that I skipped, since I can just go and watch their TED Talk or YouTube channel, and they didn’t really fit into the flow of the conference.
The remaining three hours of day 1 were (pre-recorded) breakout sessions available simultaneously on demand, with live Q&A with the speakers for the entire period. This allows them to have a large number of sessions — an overwhelming 30+ of them — but I expect that engagement for each specific session will be relatively low. It’s not clear if the Q&A with the speaker is private or if you would share the same Q&A with other people who happened to be looking at that session at the same time; even if it were shared, the session starts when you pop in, so everyone would be at a different point in the presentation and probably asking different questions. It looks like a similar lineup of breakout sessions will be available tomorrow for the afternoon portion, following another keynote.
I poked into a couple of the breakout sessions, but they’re just a video that starts playing from the beginning when you enter, no way to engage with other audience members, and no motivation to watch at a particular time. I sent a question for one speaker off into the void, but never saw a response. Some of them are really short (I saw one that was 8 minutes) and others are longer (Neil Ward-Dutton‘s session was 36 minutes) but there’s no way to know how long each one is without starting it. This is a good way to push out a lot of content simultaneously, but there’s extremely low audience engagement. I was also invited to a “Canada Drop-In Centre” via Google Meet; I’m not that interested in any sort of Canadian-specific experience, and a broader-based engagement (like Slack) would have been a better choice, possibly with separate channels for regions but also session discussions and Q&A. They also don’t seem to be providing slide decks for any of the presentations, which I like to have to remind me of what was said (or to skip back if I missed something).
This was originally planned as an in-person conference, and Appian had to pivot on relatively short notice. They did a great job with the keynotes, including a few of the Appian speakers appearing (appropriately distanced) in their own auditorium. The breakout sessions didn’t really grab me: too many, all pre-recorded, and you’re basically an audience of one when you’re in any of them, with little or no interactivity. Better as a set of on-demand training/content videos rather than true breakout sessions, and I’m sure there’s a lot of good content here for Appian customers or prospects to dig deeper into product capabilities but these could be packaged as a permanent library of content rather than a “conference”. The key for virtual conferences seems to be keeping it a bit simpler, with more timely and live sessions from one or two tracks only.
I’ll be back for tomorrow’s keynotes, and will have a look through the breakout sessions to see if there’s anything that I want to watch right now as opposed to just looking it up later.
The ability to build apps quickly is a cornerstone in our industry of model-driven development and low-code, and it’s encouraging to see some good offerings on the table already in response to our current situation.
Appian was first out of the blocks with a COVID-19 Response Management application for collecting and managing employee health status, travel history and more in a HIPAA-compliant cloud. You can read about it on their blog, and sign up for it online. Their blog post says that it’s free to any enterprise or government agency, although the signup page says that it’s free to organizations with over 1,000 employees — not sure which is accurate, since the latter seems to exclude organizations with fewer than 1,000 employees. It’s free only for six months at this point.
Pegasystems followed closely behind with a COVID-19 Employee Safety and Business Continuity Tracker, which seems to have similar functionality to the Appian application. It’s an accelerator, so you download it and configure it for your own needs, a familiar process if you’re an existing Pega customer — which you will have to be, because it’s only available to Pega customers. The page linked above has a link to get the app from the Pega Marketplace, where it will be free through December 31, 2020.
As a founding member of OMG’s BPM+ Health community, Trisotech has been involved in developing shareable clinical pathways for other medical conditions (using visual models in BPMN, CMMN and/or DMN), and I imagine that these new tools might be the first bits of new shareable clinical pathways targeted at COVID-19, possibly packaged as consumable microservices. You can click on the tools and try them out without any type of registration or preparation: they ask a series of questions and provide an assessment based on the underlying business rules, and you can also upload files containing data and download the results.
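The question-and-answer assessment described above works much like a DMN decision table evaluated against the respondent’s answers. Here’s a minimal sketch of that pattern, using a “first hit” policy over ordered rules; the questions, conditions and outcomes are hypothetical illustrations, not Trisotech’s actual clinical-pathway logic.

```python
# Sketch of a questionnaire-driven assessment, loosely in the style of a DMN
# decision table with a "first hit" policy: rules are evaluated in order, and
# the first matching rule determines the outcome. All rules below are made up
# for illustration only.

def assess(answers: dict) -> str:
    """Evaluate ordered rules against questionnaire answers; first match wins."""
    rules = [
        # (condition over answers, resulting assessment)
        (lambda a: a["fever"] and a["shortness_of_breath"], "seek urgent care"),
        (lambda a: a["fever"] or a["recent_travel"],        "self-isolate and monitor"),
        (lambda a: True,                                    "no action needed"),  # default rule
    ]
    for condition, outcome in rules:
        if condition(answers):
            return outcome

# Example: one respondent's answers
print(assess({"fever": True, "shortness_of_breath": False, "recent_travel": False}))
# → self-isolate and monitor
```

Packaging this kind of logic as a microservice is then just a matter of putting an HTTP endpoint in front of the decision function, which is presumably how consumable clinical pathways would be delivered.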
My personal view is that making these apps available to non-customers is sure to be a benefit: they’ll get a chance to work with your company’s platform, and you’ll gain some goodwill all around.
Lately, I’ve been thinking about cake. Not (just) because I’m headed to Vienna, home of the incomparable Sacher Torte, nor because I’ll be celebrating my birthday while attending the BPM2019 academic research conference while there. No, I’ve been thinking about technical architectural layer cake models.
In 2014, an impossibly long time ago in computer-years, I wrote a paper about what one of the analyst firms was then calling Smart Process Applications (SPA). The idea is that a vendor would provide a SPA platform, then the vendor, customer or third parties would create applications using this platform — not necessarily using low-code tooling, but at least using an integrated set of tools layered on top of the customer’s infrastructure and core business systems. Instances of these applications — the actual SPAs — could then be deployed by semi-technical analysts who just needed to configure the SPA with the specifics of the business function. The paper that I wrote was sponsored by Kofax, but many other vendors provided (and still provide) similar functionality.
The SPA platforms included a number of integrated components to be used when creating applications: process management (BPM), content capture and management (ECM), event handling, decision management (DM), collaboration, analytics, and user experience.
The concept (or at least the name) of SPA platforms has now morphed into “digital transformation”, “digital automation” or “digital business” platforms, but the premise is the same: you buy a monolithic platform from a vendor that sits on top of your core business systems, then build applications on top of that to deploy to your business units. The tooling offered by the platform is now more likely to include a low-code development environment, which means that the applications built on the platform may not need a separate “configure and deploy” layer above them as in the SPA diagram here. Or this same model could be used, with non-low-code applications developed in the layer above the platform, then low-code configuration and deployment of those just as in the SPA model. Due to pressure from analysts, many BPMS platforms became these all-in-one platforms under the guise of iBPMS, but some ended up with a set of tools of uneven capability: great functionality in their core strengths (BPM, etc.), but weaker in the functionality that they had to partner for or hastily build in order to be included in the analyst rankings.
The monolithic vendor platform model is great for a lot of businesses that are not in the business of software development, but some very large organizations (or small software companies) want to create their own platform layer out of best-of-breed components. For example, they may want to pick BPM and DM from one vendor, ECM from multiple others, collaboration and user experience from still another, plus event handling and analytics using open source tools. In the SPA diagram above, that turns the dark blue platform layer into “Build” rather than “Buy”, although the impact is much the same for the developers who are building the applications on top of the platform. This is the core of what I’m going to be presenting at CamundaCon next month in Berlin, with some ideas on how the market divides between monolithic and best-of-breed platforms, and how to make a best-of-breed approach work (since that’s the focus of this particular audience).
And yes, there will be cake, or at least some updated technical architectural layer cake models.
It made me think of my standard routine when I’m walking through a business operations area and want to pinpoint where the existing systems aren’t doing what the workers really need them to do: I look for the spreadsheets and email. These are the best indicators of shadow IT at work, where someone in the business area creates an application that is not sanctioned or supported by IT, usually because IT is too busy to “do it right”. Instead of being accessed from a validated source, data is copied to a spreadsheet, where scripts perform calculations using business logic that was probably valid at the point it was written, but hasn’t been updated since the person who wrote it left the company. Multiple copies of the spreadsheet (or a link to an unprotected copy on a shared drive) are forwarded to people via email, but there’s no way to track who has it or what they’ve done with it. If the data in the source system changes, the spreadsheet and all of its copies stay the same unless manually updated.
Don’t get me wrong: I love spreadsheets. I once claimed that you could take away every other tool on my desktop and I could reproduce them all in Excel. Spreadsheets and email fill the gaps between brittle legacy systems, but they aren’t a great solution. That’s where low-code platforms fit really well: they let semi-technical business analysts (or semi-business technical analysts) create applications that can access realtime business data, assign and track tasks, and integrate other capabilities such as decision management and analytics.
I gave a keynote at bpmNEXT this year about creating your own digital automation platform using a BPMS and other technology components, which is what many large enterprises are doing. However, there are many other companies — and even departments within those large companies — for which a low-code platform fills an important gap. I’ll be doing a modified version of that presentation at this year’s CamundaCon in Berlin, and I’m putting together a bit of a chart on how to decide when to build your own platform and when to use a monolithic low-code platform for building business applications. Just don’t use spreadsheets and email.
I had an afternoon with AppWorks at OpenText Enterprise World: a roadmap session followed by a technical deep dive. AppWorks is their low-code tool that includes process management, case management, and access to content and other information, supported across mobile and desktop platforms. It contains a number of pre-packaged components, and a technical developer can create new components that can be accessed as services from the AppWorks environment. They’ve recently made it into the top-right corner of the Forrester Wave for [deep] digital process automation platforms, with case management and content integration cited among their strongest features, as well as Magellan’s AI and analytics, and the OpenText Cloud deployment platform.
The current release has focused on improving end-user flexibility and developer ease-of-use, but also on integration capabilities with the large portfolio of other OpenText tools and products. There are new developer features such as an expression editor and a mobile-first design paradigm, plus an upcoming framework for end-user UI customization in terms of themes and custom forms. Runtime performance has been improved by making applications into true single-page applications.
There are four applications built on the current on-premise AppWorks: Core for Legal, Core for Quality Management, Contract Center and People Center. These are all some combination of content (from the different content services platforms available) plus case or process management, customized for a vertical application. I didn’t hear a commitment to migrate these to the cloud, but there’s no reason to think that this won’t happen.
There are some interesting future plans, such as using AppWorks as a low-code development tool for OT2 applications. They have a containerized version of AppWorks available as a developer preview as a stepping stone to next year’s cloud edition. There was a mention of RPA, although no clear direction at present: they can integrate with third-party RPA tools now, and may be mulling over whether to build or buy their own capability. There’s also the potential to build process intelligence/mining and reporting functionality based on their Magellan machine learning and analytics. There were a lot of questions from the audience, such as whether they will support GitHub for source code control (probably, but not yet scheduled) and whether there will be better REST support.
Nick King, the director of product management for AppWorks, took us through a technical session that was primarily an extended live demonstration of creating a complex application in AppWorks. Although the initial part of creating the layout and forms is pretty accessible to non-technical people, the creation of BPMN diagrams, web service integration, and case lifecycle workflows are clearly much more technical; even the use of expressions in the forms definition is starting to get pretty technical. Also, based on the naming of components visible at various points, there is still a lot of the legacy Cordys infrastructure under the covers of AppWorks; I can’t believe it’s been 12 years since I first saw Cordys (and thought it was pretty cool).
There are a lot of nice things that just happen without configuration, much less coding, such as the linkages between components within a UI layout. Basically, if an application contains a number of different building blocks such as properties, forms and lifecycle workflows, those components are automatically wired together when assembled on a single page layout. Navigation breadcrumbs and action buttons are generated automatically, and changes in one component can cause updates to other components without a screen refresh.
OpenText, like every other low-code application development vendor, will likely continue to struggle with the issue of what a non-technical business analyst versus a technical developer does within a low-code environment. As a Java developer at one of my enterprise clients said recently upon seeing a low-code environment, “That’s nice…but we’ll never use it.” I hope that they’re wrong, but fear that they’re right. To address that, it is possible to use the AppWorks environment to write “pro-code” (technical lower-level code) to create services that can be added to a low-code application, or to create an app with a completely different look and feel than is possible using AppWorks low-code. If you were going to build a full-on BPMN process model, or make calls to Magellan for sentiment analysis, it would be more of a pro-code application.
I’ve been quiet here for a while – the result of having too much real work, I suppose – but wanted to highlight a webinar that I’ll be doing on December 13th with TrackVia and one of their customers, First Guaranty Mortgage Corporation, on automating back office processes:
With between 300 and 800 back-office processes to monitor and manage, it’s no wonder financial services leaders look to automate error-prone manual processes. Yet IT resources are scarce and reserved for only the most strategic projects. Join Sandy Kemsley, industry analyst; Pete Khanna, CEO of TrackVia; and Sarah Batangan, COO of First Guaranty Mortgage Corporation, for an interactive discussion about how financial services are digitizing the back office to unlock great economic value — with little to no IT resources.
During this webinar, you’ll learn about:
Identifying business-critical processes that need to be faster
Key requirements for automating back office processes
Role of low-code workflow solutions in automating processes
Results achieved by automating back office processes
I had a great discussion with Pete Khanna, CEO of TrackVia, while sitting on a panel with him back in January at OPEX Week, and we’ve been planning to do this webinar ever since then. The idea is that this is more of a conversational format: I’ll do a bit of context-setting up front, then it will become more of a free-flowing discussion between Sarah Batangan (COO of First Guaranty), Pete and myself based around the topics shown above.
Summer always sees a bit of a slowdown in my billable work, which gives me an opportunity to catch up on reading and research across BPM and other related fields. I’m often asked which blogs and other websites I read regularly to keep on top of trends and participate in discussions, and here are some general guidelines for getting through a lot of material in a short time.
First, to effectively surf the tsunami of information, I use two primary tools:
An RSS reader (Feedly) with a hand-curated list of related sites. In general, if a site doesn’t have an RSS feed, then I’m probably not reading it regularly. Furthermore, if it doesn’t have a full feed – that is, one that shows the entire text of the article rather than a summary in the feed reader – it drops to a secondary list that I only read occasionally (or never). This lets me browse quickly through articles directly in Feedly and see which has something interesting to read or share without having to open the links directly.
Twitter, with a hand-curated list of digital transformation-related Twitter users, both individuals and companies. This is a great way to find new sources of information, which I can then add to Feedly for ongoing consumption. I usually use the Tweetdeck interface to keep an eye on my list plus notifications, but rarely review my full unfiltered Twitter feed. That Twitter list is also included in the content of my Paper.li “Digital Transformation Daily”, and I’ve just restarted tweeting the daily link.
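The full-feed versus summary-feed triage described above can be automated: an RSS item that carries a content:encoded element (or a long description) is probably full text, while a short description-only item is just a teaser. A minimal sketch, with the length threshold and sample XML as my own illustrative assumptions:

```python
# Sketch of "full feed vs summary feed" triage: treat a feed as full-text if
# its items carry a content:encoded element, or a description long enough to
# plausibly be the whole article. The 500-character threshold is an arbitrary
# assumption, not a standard.
import xml.etree.ElementTree as ET

CONTENT_ENCODED = "{http://purl.org/rss/1.0/modules/content/}encoded"

def is_full_feed(feed_xml: str, min_chars: int = 500) -> bool:
    """Return True if any item in the feed appears to carry full article text."""
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        body = item.findtext(CONTENT_ENCODED) or item.findtext("description") or ""
        if item.find(CONTENT_ENCODED) is not None or len(body) >= min_chars:
            return True
    return False

sample = """<rss xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel><item>
    <title>Post</title>
    <description>Short teaser only...</description>
  </item></channel>
</rss>"""
print(is_full_feed(sample))  # → False: summary-only, so it goes on the secondary list
```

In practice a feed reader like Feedly makes this distinction visible without any scripting, but the same check is handy if you’re curating a large source list programmatically.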
Second, the content needs to be good to stay on my lists. I curate both of these lists manually, constantly adding and culling the contents to improve the quality of my reading material. If your blog posts are mostly promotional rather than informative, I remove them from Feedly; if you tweet too much about politics or your dog, you’ll get bumped off the DX list, although probably not unfollowed.
Third, I like to share interesting things on Twitter, and use Buffer to queue these up during my morning reading so that they’re spread out over the course of the day rather than all in a clump. To save things for a more detailed review later as part of ongoing research, I use Pocket to manually bookmark items, which also syncs to my mobile devices for offline reading, and an IFTTT script to save all links that I tweet into a Google sheet.
You can take a look at what I share frequently through Twitter to get an idea of the sources that I think have value; in general, I directly @mention the source in the tweet to help promote their content. Tweeting a link to an article – and especially inclusion in the auto-curated Paper.li Digital Transformation Daily – is not an endorsement: I’ll add my own opinion in the tweet about what I found interesting in the article.
Time to kick back, enjoy the nice weather, and read a good blog!