Cake

My first Sacher Torte in Vienna, 2007. Yes, if you buy a whole one it comes in a fancy box.

Lately, I’ve been thinking about cake. Not (just) because I’m headed to Vienna, home of the incomparable Sacher Torte, nor because I’ll be celebrating my birthday while attending the BPM2019 academic research conference there. No, I’ve been thinking about technical architectural layer cake models.

In 2014, an impossibly long time ago in computer-years, I wrote a paper about what one of the analyst firms was then calling Smart Process Applications (SPA). The idea is that a vendor would provide a SPA platform, then the vendor, customer or third parties would create applications using this platform — not necessarily using low-code tooling, but at least using an integrated set of tools layered on top of the customer’s infrastructure and core business systems. Instances of these applications — the actual SPAs — could then be deployed by semi-technical analysts who just needed to configure the SPA with the specifics of the business function. The paper that I wrote was sponsored by Kofax, but many other vendors provided (and still provide) similar functionality.

Layer cake diagram from my 2014 white paper on Smart Process Application platforms.

The SPA platforms included a number of integrated components to be used when creating applications: process management (BPM), content capture and management (ECM), event handling, decision management (DM), collaboration, analytics, and user experience.

The concept (or at least the name) of SPA platforms has now morphed into “digital transformation”, “digital automation” or “digital business” platforms, but the premise is the same: you buy a monolithic platform from a vendor that sits on top of your core business systems, then you build applications on top of that to deploy to your business units. The tooling offered by the platform is now more likely to include a low-code development environment, which means that the applications built on the platform may not need a separate “configure and deploy” layer above them as in the SPA diagram here. Alternatively, the same model could be used, with non-low-code applications developed in the layer above the platform, then low-code configuration and deployment of those just as in the SPA model. Due to pressure from analysts, many BPMS platforms became these all-in-one platforms under the guise of iBPMS, but some ended up with a set of tools with uneven capabilities: great functionality for their core strengths (BPM, etc.) but weaker in functionality that they had to partner for or hastily build in order to be included in the analyst rankings.

The monolithic vendor platform model is great for a lot of businesses that are not in the business of software development, but some very large organizations (or small software companies) want to create their own platform layer out of best-of-breed components. For example, they may want to pick BPM and DM from one vendor, ECM from multiple others, collaboration and user experience from still another, plus event handling and analytics using open source tools. In the SPA diagram above, that turns the dark blue platform layer into “Build” rather than “Buy”, although the impact is much the same for the developers who are building the applications on top of the platform. This is the core of what I’m going to be presenting at CamundaCon next month in Berlin, with some ideas on how the market divides between monolithic and best-of-breed platforms, and how to make a best-of-breed approach work (since that’s the focus of this particular audience).
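
To make the “build” option a bit more concrete, here’s a minimal sketch (mine, not any particular vendor’s architecture, and with hypothetical adapter names) of how a home-grown platform layer can hide best-of-breed components behind common interfaces, so that application developers above the platform never code directly to a vendor API:

```python
from abc import ABC, abstractmethod

# Common interfaces that the application layer codes against,
# regardless of which vendor product sits behind each one.
class ProcessEngine(ABC):
    @abstractmethod
    def start_process(self, definition_id: str, variables: dict) -> str: ...

class ContentStore(ABC):
    @abstractmethod
    def store(self, document: bytes, metadata: dict) -> str: ...

class DecisionService(ABC):
    @abstractmethod
    def evaluate(self, decision_id: str, inputs: dict) -> dict: ...

# Hypothetical adapters, one per best-of-breed component; each would
# wrap the real product's API (REST calls, SDK, etc.).
class CamundaProcessAdapter(ProcessEngine):
    def start_process(self, definition_id, variables):
        return "process-instance-42"   # stand-in for a real API call

class AlfrescoContentAdapter(ContentStore):
    def store(self, document, metadata):
        return "document-7"            # stand-in for a real API call

class OpenSourceDecisionAdapter(DecisionService):
    def evaluate(self, decision_id, inputs):
        return {"result": "approve"}   # stand-in for a real API call

# The "platform" is just the composed set of adapters; applications in
# the layer above receive this object and never see vendor specifics.
class Platform:
    def __init__(self, process: ProcessEngine, content: ContentStore,
                 decisions: DecisionService):
        self.process = process
        self.content = content
        self.decisions = decisions

platform = Platform(CamundaProcessAdapter(), AlfrescoContentAdapter(),
                    OpenSourceDecisionAdapter())
doc_id = platform.content.store(b"claim form", {"type": "claim"})
instance = platform.process.start_process("claims-intake", {"doc": doc_id})
```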

And yes, there will be cake, or at least some updated technical architectural layer cake models.

Spreadsheets and email

I had a laugh at the xkcd comic from a few days ago:

Spreadsheets

It made me think of my standard routine when I’m walking through a business operations area and want to pinpoint where the existing systems aren’t doing what the workers really need them to do: I look for the spreadsheets and email. These are the best indicators of shadow IT at work, where someone in the business area creates an application that is not sanctioned or supported by IT, usually because IT is too busy to “do it right”. Instead of accessing data from a validated source, it’s being copied to a spreadsheet, where scripts perform calculations using business logic that was probably valid at the point when it was written but hasn’t been updated since that person left the company. Multiple copies of the spreadsheet (or a link to an unprotected copy on a shared drive) are forwarded to people via email, but there’s no way to track who has it or what they’ve done with it. If the data in the source system changes, the spreadsheet and all of its copies stay the same unless manually updated.

Don’t get me wrong: I love spreadsheets. I once claimed that you could take away every other tool on my desktop and I could reproduce them all in Excel. Spreadsheets and email fill the gaps between brittle legacy systems, but they aren’t a great solution. That’s where low-code platforms fit really well: they let semi-technical business analysts (or semi-business technical analysts) create applications that can access realtime business data, assign and track tasks, and integrate other capabilities such as decision management and analytics.

I gave a keynote at bpmNEXT this year about creating your own digital automation platform using a BPMS and other technology components, which is what many large enterprises are doing. However, there are many other companies — and even departments within those large companies — for which a low-code platform fills an important gap. I’ll be doing a modified version of that presentation at this year’s CamundaCon in Berlin, and I’m putting together a bit of a chart on how to decide when to build your own platform and when to use a monolithic low-code platform for building business applications. Just don’t use spreadsheets and email.

OpenText Enterprise World 2019: AppWorks roadmap and technical deep dive

I had an afternoon with AppWorks at OpenText Enterprise World: a roadmap session followed by a technical deep dive. AppWorks is their low-code tool that includes process management, case management, and access to content and other information, supported across mobile and desktop platforms. It contains a number of pre-packaged components, and a technical developer can create new components that can be accessed as services from the AppWorks environment. They’ve recently made it into the top-right corner of the Forrester Wave for [deep] digital process automation platforms, with case management and content integration cited among their strongest features, along with Magellan’s AI and analytics, and the OpenText Cloud deployment platform.

The current release has focused on improving end-user flexibility and developer ease-of-use, but also on integration capabilities with the large portfolio of other OpenText tools and products. New developer features include an expression editor and a mobile-first design paradigm, and there’s an upcoming framework for end-user UI customization in terms of themes and custom forms. Runtime performance has been improved by making applications into true single-page applications.

There are four applications built on the current on-premise AppWorks: Core for Legal, Core for Quality Management, Contract Center and People Center. These are all some combination of content (from the different content services platforms available) plus case or process management, customized for a vertical application. I didn’t hear a commitment to migrate these to the cloud, but there’s no reason to think that it won’t happen.

There are some interesting future plans, such as using AppWorks as a low-code development tool for OT2 applications. They have a containerized version of AppWorks available as a developer preview as a stepping stone to next year’s cloud edition. There was a mention of RPA, although no clear direction at present: they can integrate with third-party RPA tools now, and may be mulling over whether to build or buy their own capability. There’s also the potential to build process intelligence/mining and reporting functionality based on their Magellan machine learning and analytics. There were a lot of questions from the audience, such as whether they will be supporting GitHub for source code control (probably, but not yet scheduled) and whether there will be better REST support.

Nick King, the director of product management for AppWorks, took us through a technical session that was primarily an extended live demonstration of creating a complex application in AppWorks. Although the initial part of creating the layout and forms is pretty accessible to non-technical people, the creation of BPMN diagrams, web service integration and case lifecycle workflows is clearly much more technical; even the use of expressions in the forms definition starts to get pretty technical. Also, based on the naming of components visible at various points, there is still a lot of the legacy Cordys infrastructure under the covers of AppWorks; I can’t believe it’s been 12 years since I first saw Cordys (and thought it was pretty cool).

There are a lot of nice things that just happen without configuration, much less coding, such as the linkages between components within a UI layout. Basically, if an application contains a number of different building blocks such as properties, forms and lifecycle workflows, those components are automatically wired together when assembled on a single page layout. Navigation breadcrumbs and action buttons are generated automatically, and changes in one component can cause updates to other components without a screen refresh.

OpenText, like every other low-code application development vendor, will likely continue to struggle with the issue of what a non-technical business analyst versus a technical developer does within a low-code environment. As a Java developer at one of my enterprise clients said recently upon seeing a low-code environment, “That’s nice…but we’ll never use it.” I hope that they’re wrong, but fear that they’re right. To address that, it is possible to use the AppWorks environment to write “pro-code” (technical lower-level code) to create services that can be added to a low-code application, or to create an app with a completely different look and feel than is possible using AppWorks low-code. If you were going to build a full-on BPMN process model, or make calls to Magellan for sentiment analysis, it would be more of a pro-code application.

Webinar: Unlocking Back Office Value by Automating Processes

I’ve been quiet here for a while – the result of having too much real work, I suppose – but wanted to highlight a webinar that I’ll be doing on December 13th with TrackVia and one of their customers, First Guaranty Mortgage Corporation, on automating back office processes:

With between 300 and 800 back-office processes to monitor and manage, it’s no wonder financial services leaders look to automate error-prone manual processes. Yet IT resources are scarce and reserved for only the most strategic projects. Join Sandy Kemsley, industry analyst, Pete Khanna, CEO of TrackVia, and Sarah Batangan, COO of First Guaranty Mortgage Corporation, for an interactive discussion about how financial services firms are digitizing the back office to unlock great economic value — with little to no IT resources.

During this webinar, you’ll learn about:

  • Identifying business-critical processes that need to be faster
  • Key requirements for automating back office processes
  • Role of low-code workflow solutions in automating processes
  • Results achieved by automating back office processes

I had a great discussion with Pete Khanna, CEO of TrackVia, while sitting on a panel with him back in January at OPEX Week, and we’ve been planning to do this webinar ever since then. The idea is that this is more of a conversational format: I’ll do a bit of context-setting up front, then it will become more of a free-flowing discussion between Sarah Batangan (COO of First Guaranty), Pete and myself based around the topics shown above.

You can register for the webinar here.

Summer BPM reading, with dashes of AI, RPA, low-code and digital transformation

Summer always sees a bit of a slowdown in my billable work, which gives me an opportunity to catch up on reading and research across BPM and other related fields. I’m often asked which blogs and other websites I read regularly to keep on top of trends and participate in discussions, so here are some general guidelines for getting through a lot of material in a short time.

First, to effectively surf the tsunami of information, I use two primary tools:

  • An RSS reader (Feedly) with a hand-curated list of related sites. In general, if a site doesn’t have an RSS feed, then I’m probably not reading it regularly. Furthermore, if it doesn’t have a full feed – that is, one that shows the entire text of the article rather than a summary in the feed reader – it drops to a secondary list that I only read occasionally (or never). This lets me browse quickly through articles directly in Feedly and see which ones have something interesting to read or share, without having to open the links directly.
  • Twitter, with a hand-curated list of digital transformation-related Twitter users, both individuals and companies. This is a great way to find new sources of information, which I can then add to Feedly for ongoing consumption. I usually use the Tweetdeck interface to keep an eye on my list plus notifications, but rarely review my full unfiltered Twitter feed. That Twitter list is also included in the content of my Paper.li “Digital Transformation Daily”, and I’ve just restarted tweeting the daily link.

Second, the content needs to be good to stay on my lists. I curate both of these lists manually, constantly adding and culling the contents to improve the quality of my reading material. If your blog posts are mostly promotional rather than informative, I remove them from Feedly; if you tweet too much about politics or your dog, you’ll get bumped off the DX list, although probably not unfollowed.

Third, I like to share interesting things on Twitter, and use Buffer to queue these up during my morning reading so that they’re spread out over the course of the day rather than all in a clump. To save things for a more detailed review later as part of ongoing research, I use Pocket to manually bookmark items, which also syncs to my mobile devices for offline reading, and an IFTTT script to save all links that I tweet into a Google sheet.
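
The IFTTT applet itself is just a trigger-action pair (new tweet by me, add a row to the spreadsheet), but the bookkeeping is minimal if you ever wanted to roll your own. Here’s a rough Python sketch of the same idea, with a local CSV file standing in for the Google sheet and the tweet source left abstract, since the details depend on your Twitter API access:

```python
import csv
import re
from datetime import datetime, timezone

URL_PATTERN = re.compile(r"https?://\S+")

def log_tweeted_links(tweet_text: str, log_path: str = "tweeted_links.csv") -> None:
    """Append every link found in a tweet to a CSV log, one row per link."""
    links = URL_PATTERN.findall(tweet_text)
    if not links:
        return
    timestamp = datetime.now(timezone.utc).isoformat()
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for link in links:
            writer.writerow([timestamp, link, tweet_text])

# Called once per outgoing tweet, however you hook into them.
log_tweeted_links("Good read on low-code governance https://example.com/post")
```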

You can take a look at what I share frequently through Twitter to get an idea of the sources that I think have value; in general, I directly @mention the source in the tweet to help promote their content. Tweeting a link to an article – and especially inclusion in the auto-curated Paper.li Digital Transformation Daily – is not an endorsement: I’ll add my own opinion in the tweet about what I found interesting in the article.

Time to kick back, enjoy the nice weather, and read a good blog!

Low-Code webinar with @TIBCO – new ways for business and IT to develop and innovate together

I’m back at the webinars this Thursday (April 26), with the first of two parts in a series on low-code and how it enables business and IT to work better together. Together with Roger King and Nicolas Marzin of TIBCO, we’re doing another one of our free-ranging “fireside chat” discussions, such as the one we did on case management last November. This time, we dig into more of the technical and governance issues of how low-code application development platforms are used across organizations by both business developers and IT.

You can sign up for the webinar here.

I’m also putting the finishing touches on a white paper that goes into more of these concepts in depth. Sign up for the webinar and you’ll get a link to the paper afterwards.

Vega Unity 7: productizing ECM/BPM systems integration for better user experience and legacy modernization

I recently had the chance to catch up with some of my former FileNet colleagues, David Lewis and Brian Gour, who are now at Vega Solutions and walked me through their Unity 7 product release. Having founded and run a boutique ECM and BPM services firm in the past, I have a soft spot for the small companies who add value to commercial products by building integration layers and vertical solutions to do the things that those products don’t do (or don’t do very well).

Vega focuses on enterprise content and process automation, primarily for financial and government clients. They have some international offices – likely development shops, based on the locations – and about 150 consultants working on customer projects. They are partners with both IBM and Alfresco for ECM and BPM products for use in their consulting engagements. Like many boutique services firms, Vega has developed products in the course of their consulting engagements that can be used independently by customers, built on the underlying partner technology plus their own integration software:

  • Vega Interchange, which takes one of their core competencies in content migration and creates an ETL platform for moving content and processes between any of a number of systems including Documentum, Alfresco, OpenText, four flavors of IBM, and shared folders on file systems. Content migration is typically pretty complex by the time you consider metadata and permissions mappings, but they also handle case data and process instances, which is rarely tackled in migration scenarios (most just recommend that you keep the old system alive long enough for all instances to complete, or do a manual migration); there’s a sketch of the mapping problem after this list. Having helped a lot of companies think about moving their content and process management systems to another platform, I know that this is one of those things that sounds mundane but is actually difficult to do well.
  • Vega Unity, billed as a digital transformation platform; we spent most of our time talking about Unity 7, their latest release, which I’ll cover in more detail below.
  • Vertical solutions for insurance (underwriting, claims, financial operations), government (case management, compliance) and banking (onboarding, loan origination and servicing, wealth management, card dispute resolution).
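
To give a feel for why the migration mapping is harder than it sounds, here’s a minimal sketch (my own illustration with hypothetical field names, not Vega’s code) of the metadata and permission translation at the heart of any content ETL: every source property and ACL level has to be mapped into the target system’s vocabulary, and anything unmapped has to be surfaced rather than silently dropped.

```python
# Hypothetical illustration of migration mapping, not Vega's actual API.
METADATA_MAP = {        # source property -> target property
    "DocumentTitle": "title",
    "AccountNo": "account_number",
    "DocClass": "content_type",
}
PERMISSION_MAP = {      # source ACL level -> target role
    "FULL_CONTROL": "manager",
    "MODIFY": "editor",
    "VIEW": "reader",
}

def map_document(source_doc: dict) -> tuple[dict, list[str]]:
    """Translate one document's metadata and ACL; report anything unmapped."""
    unmapped = []
    target = {"metadata": {}, "permissions": {}}
    for key, value in source_doc.get("metadata", {}).items():
        if key in METADATA_MAP:
            target["metadata"][METADATA_MAP[key]] = value
        else:
            unmapped.append(f"metadata:{key}")
    for principal, level in source_doc.get("acl", {}).items():
        if level in PERMISSION_MAP:
            target["permissions"][principal] = PERMISSION_MAP[level]
        else:
            unmapped.append(f"acl:{principal}:{level}")
    return target, unmapped

doc = {"metadata": {"DocumentTitle": "Q3 claim", "LegacyFlag": "Y"},
       "acl": {"jsmith": "MODIFY"}}
mapped, missing = map_document(doc)   # missing == ["metadata:LegacyFlag"]
```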

Unity 7 is an integration and application development tool that links third-party content and process systems, adding a consistent user experience layer and consolidated analytics. Vega doesn’t provide any of the back-end systems, although they partner with a couple of the vendors, but provides tools to take that heterogeneous desktop environment and turn it into a single user interface. This has significant value in simplifying the user environment, since users only need to learn one system and some of the inter-system integration is automated behind the scenes, but it’s also of benefit when replacing one or more of the underlying technologies for legacy modernization, or consolidating technology after a corporate acquisition. This is what systems integrators have been doing for a long time, but Unity makes it into a product that also leverages the deep system knowledge that Vega has from their Interchange product. Vega can add Unity to simplify an existing environment, or come in on a net-new ECM/BPM implementation that uses one of their partner technologies plus their application development/integration layer. The primary use cases are federated enterprise content search (where content is indexed in the Unity Intelligence engine, including semantic searches), case management applications, and legacy modernization, where a new front end on legacy systems allows those systems to be swapped out without changing the user environment.

Unity is all about rapid development that includes case-based applications, content management, data and analytics. As we walked through the product and sample applications, there was definitely a strong whiff of FileNet P8 in here (a system that I used to be very familiar with) since the sample was built with IBM Case Manager under the covers, but some nice additions in terms of unified interface and analytics.

Their claim is that the Unity Case Manager would look the same regardless of the underlying technology, which would definitely make it easier to swap out or federate content, case and process management systems behind the scenes. In the sample shown, since IBM Case Manager was primary, the case view was derived directly from IBM CM case data with the main document list from IBM FileNet P8, while the “Other Documents” tab showed related documents from Alfresco. Dynamic foldering can combine content from different systems into common folders to reduce this visual dichotomy. There are role-based views based on the user profile that provide access to data from multiple systems – including CRM and others in addition to ECM and BPM – and federate it into business objects that can include records, virtual folder structures and related objects such as people or claims. Individual user credentials can be passed to the underlying systems, or shared credentials can be used in connectors for retrieving unrestricted information. Search templates, system connectors and a variety of properties are set in a configuration console, making it straightforward to set up and modify standard operations; since this is an XML-based declarative environment, these configuration changes deploy immediately. The ability to make different types of configuration changes is role-based, meaning that some business users can be permitted to make changes to the shared user interface if desired.
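
Since the environment is declarative, a configuration change is just an edited document that the runtime re-reads. As a toy sketch of that pattern (my invention for illustration, not Vega’s actual configuration schema), connector definitions can be parsed from XML and instantiated without redeploying any code:

```python
import xml.etree.ElementTree as ET

CONFIG = """
<unity-config>
  <connector name="primary-ecm" type="filenet" url="https://ecm.example.com"/>
  <connector name="other-docs" type="alfresco" url="https://alf.example.com"/>
  <searchTemplate name="claims" scope="primary-ecm,other-docs"/>
</unity-config>
"""

def load_connectors(xml_text: str) -> dict[str, dict]:
    """Parse declarative connector definitions; re-run whenever the config changes."""
    root = ET.fromstring(xml_text)
    return {c.get("name"): {"type": c.get("type"), "url": c.get("url")}
            for c in root.findall("connector")}

connectors = load_connectors(CONFIG)
# {'primary-ecm': {'type': 'filenet', ...}, 'other-docs': {'type': 'alfresco', ...}}
```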

Unity Intelligence adds a layer of visual analytics that aggregates data from the underlying systems and other sources; however, this isn’t just visualization, but can be used to filter work and take action on cases directly via action popup menus or by opening cases directly from the analytics interface. They’re using open source tools such as Solr (search), Lucene (information retrieval) and D3 (visualization) to good effect: I saw a demo of a Sankey diagram representing the workflow through cases based on realtime data, which provided a sort of process mining view of work in progress, and allowed selecting dates for past views of work, including completed cases. For case management, in which processes are semi-structured (at best), this won’t necessarily show process anomalies, but can show service interruptions and opportunities for process improvement and standardization.
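
Behind a diagram like that, the data is essentially aggregated transition counts. As a rough sketch (assuming a simple event log of case-state changes; nothing here is specific to Unity), the weighted links for a Sankey view can be built like this:

```python
from collections import Counter

# Hypothetical case event log: (case_id, from_state, to_state)
events = [
    ("c1", "Intake", "Review"), ("c1", "Review", "Approve"),
    ("c2", "Intake", "Review"), ("c2", "Review", "Escalate"),
    ("c3", "Intake", "Review"), ("c3", "Review", "Approve"),
]

def sankey_links(events) -> list[dict]:
    """Aggregate state transitions into weighted links for a Sankey diagram."""
    counts = Counter((src, dst) for _, src, dst in events)
    return [{"source": src, "target": dst, "value": n}
            for (src, dst), n in counts.items()]

links = sankey_links(events)
# e.g. {'source': 'Intake', 'target': 'Review', 'value': 3}, plus the
# Review -> Approve (2) and Review -> Escalate (1) links
```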

They’ve published a video showing more about Unity 7 Intelligence, as well as one showing Unity Semantics for creating pivot tables for faceted search on content repositories.

A Perfect Combination: Low Code and Case Management

The paper that I wrote on low code and case management has just been published – consider it a Christmas gift! It’s sponsored by TIBCO, and you can find it here (registration required).

This is an accompaniment to the webinar that I did recently with Roger King and Nicolas Marzin, which is available for replay on demand.

What’s in a name? BPM and DPA

The term “business process management” (BPM) has always been a bit problematic because it means two things: the operations management practice of discovering, modeling and improving business processes, which may have no technology involved whatsoever; and the suite of technologies associated with automating processes. I’ve often heard – and sometimes participated in – arguments on the distinction between BPM-the-discipline and BPM-the-technology. Many people use “BPMS” (BPM system or suite) to define the technology while reserving “BPM” for the discipline, but that’s not sufficiently universal to avoid confusion.

Gartner iBPMS in 2011

To compound the confusion, the components of a BPMS have grown from completely process-focused modeling and execution to more complete application development suites that may include decision management, analytics, content management and much more. Gartner relabelled this market “iBPMS” starting around 2011, when they realized that BPM suites were doing much more than just BPM:

The intelligent business process management suite (iBPMS) market is the natural evolution of the earlier BPMS market, adding more capabilities for greater intelligence within business processes. Capabilities such as validation (process simulation, including “what if”) and verification (logical compliance), optimization, and the ability to gain insight into process performance have been included in many BPMS offerings for several years. iBPMSs have added enhanced support for human collaboration such as integration with social media, mobile-enabled process tasks, streaming analytics and real-time decision management.

The term iBPMS makes it sound like what we were doing before wasn’t intelligent, which clearly is not the case, but it also made it obvious that we needed a different name to describe these technologies that we’re using to automate our business functions.

Since then, we’ve moved through a number of different names and acronyms in an attempt to describe these systems. For the more case-oriented systems (with little or no predefined process), we have “case management” (confused with the non-technical term used in social sciences and healthcare), which is sometimes abbreviated as CM (confused with the abbreviation for content management, which is also abbreviated as ECM but has now been rebranded as content services), plus the variations of advanced or adaptive case management (ACM) and dynamic case management (DCM). Although there are differences between case management and BPM, there are also a lot of similarities, and the distinction in products is sometimes a bit fuzzy. However, using the term “process” causes a certain amount of angst amongst the case managementerati.

This year, Forrester started using the term “digital process automation” (DPA), which is pretty much what Gartner is calling iBPMS. Forrester’s use of DPA seems to have been slightly preceded by the term “digital business automation”. Although “digital” and “automation” are a bit redundant in this context – we’re not going to do analog mechanical automation of most businesses – I think that the use of “business” rather than “process” is a much better fit. However, due to Forrester’s recent DPA wave report, vendors are leaping onto the DPA bandwagon, so we might be stuck with it for a while.

In their February 2017 report, “Traditional BPM Gives Way To Digital Process Automation”, Forrester describes why this shift is necessary without actually describing the differences between [i]BPM[S] and DPA; instead, the shift seems to be coming about because organizations took what should have been model-driven development (aka low-code) BPMS and used it in waterfall development environments, thereby turning what should have been agile into legacy. In other words, they seem to be hoping that changing the name of the class of tools will change how organizations use the tools. Call me a cynic, but I’m not completely hopeful about that.

I’m not arguing that the current low-code, process/case-centric platforms that combine a full suite of business automation tools aren’t a step forward from yesterday’s BPM platforms in terms of enabling automation as a part of digital transformation. But what is going to change within customer organizations to prevent them from undermining the inherent rapid application development capabilities by enforcing antiquated software development lifecycle methods?

Bonus reading: check back on my review of a Gartner presentation from 2006 on the future of BPM, which looked forward as far as 2017! They were correct that the primary value of BPM moved from productivity to visibility to innovation, and I correctly predicted that their predictions would happen much faster than they expected.