Vega Unity 7: productizing ECM/BPM systems integration for better user experience and legacy modernization

I recently had the chance to catch up with some of my former FileNet colleagues, David Lewis and Brian Gour, who are now at Vega Solutions and walked me through their Unity 7 product release. Having founded and run a boutique ECM and BPM services firm in the past, I have a soft spot for the small companies who add value to commercial products by building integration layers and vertical solutions to do the things that those products don’t do (or don’t do very well).

Vega focuses on enterprise content and process automation, primarily for financial and government clients. They have some international offices – likely development shops, based on the locations – and about 150 consultants working on customer projects. They are partners with both IBM and Alfresco for ECM and BPM products for use in their consulting engagements. Like many boutique services firms, Vega has developed products in the course of their consulting engagements that can be used independently by customers, built on the underlying partner technology plus their own integration software:

  • Vega Interchange, which takes one of their core competencies in content migration and creates an ETL platform for moving content and processes between any of a number of systems including Documentum, Alfresco, OpenText, four flavors of IBM, and shared folders on file systems. Content migration is typically pretty complex by the time you consider metadata and permissions mappings, but they also handle case data and process instances, which is rarely tackled in migration scenarios (most just recommend that you keep the old system alive long enough for all instances to complete, or do manual migration). Having helped a lot of companies think about moving their content and process management systems to another platform, I know that this is one of those things that sounds mundane but is actually difficult to do well (see the toy sketch after this list).
  • Vega Unity, billed as a digital transformation platform; we spent most of our time talking about Unity 7, their latest release, which I’ll cover in more detail below.
  • Vertical solutions for insurance (underwriting, claims, financial operations), government (case management, compliance) and banking (onboarding, loan origination and servicing, wealth management, card dispute resolution).
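
To give a feel for why "just map the metadata" is harder than it sounds, here's the toy Python sketch referenced in the Interchange bullet above: the kind of property and permission translation a migration tool has to apply to every document. All property names and the permission table are my own invention, loosely styled after a Documentum-to-Alfresco move; a real migration also deals with type conversion, missing values and repeating attributes:

```python
# Hypothetical mapping from Documentum-style properties to Alfresco-style
# properties; names are illustrative only.
PROPERTY_MAP = {
    "object_name": "cm:name",
    "r_creation_date": "cm:created",
    "authors": "cm:author",
}

# Hypothetical translation from source ACL permit levels to target roles.
PERMISSION_MAP = {
    "READ": "Consumer",
    "WRITE": "Editor",
    "DELETE": "Collaborator",
}

def map_document(source_doc: dict) -> dict:
    """Translate one document's metadata and ACL to the target model."""
    properties = {PROPERTY_MAP[k]: v
                  for k, v in source_doc["properties"].items()
                  if k in PROPERTY_MAP}
    acl = [(user, PERMISSION_MAP.get(level, "Consumer"))
           for user, level in source_doc["acl"]]
    return {"properties": properties, "acl": acl}

doc = {"properties": {"object_name": "claim-4711.pdf",
                      "r_creation_date": "2017-11-02"},
       "acl": [("jsmith", "WRITE")]}
print(map_document(doc))
```

Multiply that by millions of documents, the edge cases, and in-flight process instances, and the difficulty becomes clear.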

Unity 7 is an integration and application development tool that links third-party content and process systems, adding a consistent user experience layer and consolidated analytics. Vega doesn't provide any of the back-end systems – although they partner with a couple of the vendors – but provides tools to take that heterogeneous desktop environment and turn it into a single user interface. This has significant value in simplifying the user environment, since users only need to learn one system and some of the inter-system integration is automated behind the scenes; it also makes it easier to replace one or more of the underlying technologies during legacy modernization, or technology consolidation following a corporate acquisition. This is what systems integrators have been doing for a long time, but Unity turns it into a product that also leverages the deep system knowledge gained from their Interchange product. Vega can add Unity to simplify an existing environment, or come in on a net-new ECM/BPM implementation that uses one of their partner technologies plus their application development/integration layer. The primary use cases are federated enterprise content search (where content is indexed in the Unity Intelligence engine, including semantic searches), case management applications, and legacy modernization: putting a new front end on legacy systems so that they can be swapped out later without changing the user environment.

Unity is all about rapid development that includes case-based applications, content management, data and analytics. As we walked through the product and sample applications, there was definitely a strong whiff of FileNet P8 in here (a system that I used to be very familiar with), since the sample was built with IBM Case Manager under the covers, but with some nice additions in terms of unified interface and analytics.

Their claim is that the Unity Case Manager would look the same regardless of the underlying technology, which would definitely make it easier to swap out or federate content, case and process management systems behind the scenes. In the sample shown, since IBM Case Manager was primary, the case view was derived directly from IBM CM case data with the main document list from IBM FileNet P8, while the “Other Documents” tab showed related documents from Alfresco. Dynamic foldering can combine content from different systems into common folders to reduce this visual dichotomy. There are role-based views based on the user profile that provide access to data from multiple systems – including CRM and others in addition to ECM and BPM – and federate it into business objects that can include records, virtual folder structures and related objects such as people or claims. Individual user credentials can be passed to the underlying systems, or shared credentials can be used in connectors for retrieving unrestricted information. Search templates, system connectors and a variety of properties are set in a configuration console, making it straightforward to set up and modify standard operations; since this is an XML-based declarative environment, these configuration changes deploy immediately. The ability to make different types of configuration changes is role-based, meaning that some business users can be permitted to make changes to the shared user interface if desired.
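
Unity's search layer is built on open source Solr (more on that below). Purely to make the idea of a search template concrete, here's a minimal Python sketch of a federated query against Solr's standard select API; the core name and fields such as source_system are my own invention, not Unity's actual schema:

```python
import requests

# Hypothetical Solr core holding content indexed from multiple repositories.
SOLR_URL = "http://localhost:8983/solr/unity_content/select"

params = {
    "q": "claim_number:C-2017-4711",              # filled in by a search template
    "fq": "source_system:(filenet OR alfresco)",  # federate across repositories
    "fl": "id,title,source_system",               # fields to return
    "rows": 20,
    "wt": "json",
}
docs = requests.get(SOLR_URL, params=params).json()["response"]["docs"]
for d in docs:
    print(d["source_system"], d["id"], d.get("title"))
```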

Unity Intelligence adds a layer of visual analytics that aggregates data from the underlying systems and other sources; this isn't just visualization, however, but can be used to filter work and take action on cases directly via action popup menus, or to open cases directly from the analytics interface. They're using open source tools such as SOLR (search), Lucene (information retrieval) and D3 (visualization) to good effect: I saw a demo of a Sankey diagram representing the workflow through cases based on real-time data, which provided a sort of process mining view of work in progress and allowed selecting dates for past views of work, including completed cases. For case management, in which processes are semi-structured (at best), this won't necessarily show process anomalies, but it can show service interruptions and opportunities for process improvement and standardization.
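
Vega built their Sankey view with D3, but the underlying idea is straightforward: aggregate step-to-step transition counts from case history events, then feed them to a Sankey layout. A toy Python version using plotly rather than D3, with invented step names and counts:

```python
import plotly.graph_objects as go

# Hypothetical transition counts aggregated from case audit events:
# how many cases moved from each work step to the next.
steps = ["Intake", "Review", "Investigation", "Approval", "Closed"]
links = {(0, 1): 120, (1, 2): 45, (1, 3): 70, (2, 3): 40, (3, 4): 105}

fig = go.Figure(go.Sankey(
    node=dict(label=steps),
    link=dict(
        source=[s for s, _ in links],  # index of the step a case came from
        target=[t for _, t in links],  # index of the step it moved to
        value=list(links.values()),    # number of cases on that path
    ),
))
fig.show()
```

Recomputing the counts over a selected date range gives exactly the kind of historical views described above.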

They’ve published a video showing more about Unity 7 Intelligence, as well as one showing Unity Semantics for creating pivot tables for faceted search on content repositories.
Vega Unity 7 - December 2017

ITESOFT | W4 Secure Capture and Process Automation digital business platform

It’s been three years since I looked at ITESOFT | W4’s BPMN+ product, which was prior to W4’s acquisition by ITESOFT. At that time, I had just seen W4 for the first time at bpmNEXT 2014, and had this to say about it:

For the last demo of this session, Jean-Loup Comeliau of W4 demonstrated their BPMN+ product, which provides model-driven development using BPMN 2, UML 2, CMIS and other standards to generate web-based process applications without generating code: the engine interprets and executes the models directly. The BPMN modeling is pretty standard compared to other process modeling tools, but they also allow UML modeling of the data objects within the process model; I see this in more complete stack tools such as TIBCO’s, but this is less common from the smaller BPM vendors. Resources can be assigned to user tasks using various rules, and user interface forms are generated based on the activities and data models, and can be modified if required. The entire application is deployed as a web application. The data-centricity is key, since if the models change, the interface and application will automatically update to match. There is definitely a strong message here on the role of standards, and how we need more than just BPMN if we’re going to have fully model-driven application development.

A couple of weeks ago, I spoke with Laurent Hénault and François Bonnet (the latter of whom I met when he demoed at bpmNEXT in 2015 and 2016) about what’s happened with their product since then. From their company beginnings over 30 years ago in document capture and workflow, they have expanded their platform capabilities and relabelled it as digital process automation, since it goes beyond BPM technology – a trend I’m seeing with many other BPM vendors. It’s not clear how many of their 650+ customers are using many of the capabilities of the new platform versus just their traditional imaging and workflow functions, but they seem to be expanding on the original capabilities rather than replacing them, which will make transitioning customers easier.

The new platform, Secure Capture and Process Automation (SCPA), provides capabilities for capture, business automation (process, content and decisions), analytics and collaborative modeling, and adds some nice extras in the areas of document recognition, fraud detection and computer-aided process design. Using the three technology pillars of omni-channel capture, process automation and document fraud detection, they offer several solutions, including eContract for paperless customer purchase contracts (with automatic fraud detection on documents uploaded by the customer) and the cloud-based Streamline for Invoices for automated invoice processing.

Their eContract solution provides online forms with e-signature, document capture, creation of an eIDAS-compliant contract and other services required to complete a complex purchase contract bundled into a single digital case. The example shown was an online used car purchase with the car loan offered as part of the contract process: by bundling all components of the contract and the loan into a single online transaction, they were able to double the purchase close rate. Their document fraud detection comes into play here, using graphometric handwriting analysis and content verification to detect if a document uploaded by a potential customer has been falsified or modified. Many different types of documents can be analyzed for potential fraud based on content: government ID, tax forms, pay slips, bank information, and public utility invoices may contain information in multiple formats (e.g., plain text plus encoded barcode); other documents such as medical records often contain publicly-available information such as the practitioner’s registration ID. They have a paper available with more information on combating incoming document fraud.
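
To make the content-verification idea concrete: when the same data appears in a document twice – once as printed text, once encoded in a barcode – a mismatch between the two is a strong tamper signal. Here's a toy Python sketch of that cross-check; the payload format and field names are my own invention, not ITESOFT's implementation:

```python
def parse_barcode_payload(payload: str) -> dict:
    """Assume a 'key=value;key=value' string already decoded from the barcode."""
    return dict(item.split("=", 1) for item in payload.split(";") if item)

def cross_check(ocr_fields: dict, barcode_payload: str) -> list:
    """Return the fields where printed (OCR'd) text and barcode data disagree."""
    encoded = parse_barcode_payload(barcode_payload)
    return [k for k in ocr_fields.keys() & encoded.keys()
            if ocr_fields[k].strip() != encoded[k].strip()]

# A pay slip where the printed net pay doesn't match the encoded value:
ocr = {"net_pay": "2450.00", "employee_id": "E-1024"}
print(cross_check(ocr, "net_pay=1450.00;employee_id=E-1024"))  # ['net_pay']
```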

Their invoice processing solution also relies heavily on understanding certain types of documents: 650,000 different supplier invoice types are recognized, and they maintain a shared supplier database in their cloud capture environment to allow these formats to be added and modified for use by all of their invoice processing customers. There’s also a learning environment to capture new invoice types as they occur. Keep in mind that the heavy lifting in invoice processing is all around interpreting the vendor invoice: once you have that sorted out, the rest of the process of interacting with the A/P system is straightforward, and the payment of most invoices that relate to a purchase order can be fully automated. Streamline for Invoices won the Accounts Payable/Invoicing product of the year at the 2017 Document Manager Awards.

After a discussion of their solutions and some case studies, we dug into a more technical demo. A few highlights:

  • The Web Modeler provides a fully BPMN-compliant collaborative process modeling environment, with synchronous model changes and a persistent discussion thread between users. This is a standalone business analyst tool, and the model must be exported as a BPMN file for import to the engine for execution, so there’s no round-tripping. A cool feature is the ability to scroll back through the history of changes to the model by dragging a timeline slider: each changed snapshot is shown with the specific author.
  • Once the business analyst’s process model has been imported into the BPMN+ Composer tool, the full application can be designed: data model, full process model, low code forms-based user experience, and custom code (if required). This allows a more complex BPMN model to be integrated into a low code application – something that isn’t allowed by many of the low code platforms that provide only simple linear flows – as well as developer code for “beyond the norm” integration such as external portals.
  • Supervisor dashboards provide human task monitoring, including task assignment rules and skills matrix that can be changed in real time, and performance statistics.

The applications developed with their tools generally fall into the case management category, although they are document/data-driven rather than CMMN-based. Like many BPM vendors, they are finding that there is not the same level of customer demand for CMMN as there was for BPMN, and that data-driven case management paradigms are often more understandable to business people.

They’ve OEM’d some of the components (the capture OCR, which is from ABBYY, and the web modeler from another French company) but put them together into a seamless offering. The platform is built on a standard Java stack; some of the components can be scaled independently and containerized (using Microsoft Azure), allowing customers to choose which data should exist on which private and public cloud infrastructure.

ITESOFT | W4 SCPA 2017-12 briefing

They also showed some of the features that they demoed at bpmNEXT 2017 (which I unfortunately missed): process guidance and correction that goes beyond just BPMN validation to attempt to add data elements, missing tasks, missing pathways and more; a Gantt-type timeline model of a process (which I’ve seen in BP Logix for years, but which is sadly absent from many products) to show expected completion times and bottlenecks; and the same visualization directly in a live instance, auto-updating as tasks are completed within the instance. I’m not sure if these features are fully available in the commercial product, but they show some interesting views on providing automated assistance to process modeling.


Column 2 wrapup for 2017

As the year draws to an end, I’m taking a look at what I wrote here this year, and what you were reading.

I had fewer posts this year since I curtailed a lot of my conference travel, but still managed to publish 40 posts. I covered a few conferences – Big Data Toronto, OpenText Enterprise World, ABBYY Technology Summit, TIBCO NOW (as an uninvited gate-crasher) and some local AIIM seminars – and a variety of technology topics including BPM (or DPA/digital business as the terminology changes), low code, RPA, case management, decision management and capture.

Inexplicably, the two most read posts this year were one from 2007 on policies, procedures, processes and rules, and one from 2011 on BPMS pricing transparency. The most popular posts that were written this year were from OpenText Enterprise World, plus the page that I published listing the books and journals to which I’ve contributed.

Although US-based readers are the largest group by far, there was also a lot of traffic from India, Canada, Germany, UK and Australia, with many other countries contributing smaller amounts of traffic.

I also made some technical improvements: the site is now more secure via https, and uses Cloudflare to enforce security and fend off some of the spam bots that were killing performance, which has resulted in the use of CAPTCHAs for some IP ranges and countries.

Thanks to all of you for reading and commenting this year, and I look forward to engaging in 2018.

Happy New Year!

A Perfect Combination: Low Code and Case Management

The paper that I wrote on low code and case management has just been published – consider it a Christmas gift! It’s sponsored by TIBCO, and you can find it here (registration required).

This is an accompaniment to the webinar that I did recently with Roger King and Nicolas Marzin, which is available for replay on demand.

What’s in a name? BPM and DPA

The term “business process management” (BPM) has always been a bit problematic because it means two things: the operations management practice of discovering, modeling and improving business processes, which may have no technology involved whatsoever; and the suite of technologies associated with automating processes. I’ve often heard – and sometimes participated in – arguments on the distinction between BPM-the-discipline and BPM-the-technology. Many people use “BPMS” (BPM system or suite) to define the technology while reserving “BPM” for the discipline, but that’s not sufficiently universal to avoid confusion.

To compound the confusion, the components of a BPMS have grown from completely process-focused modeling and execution to more complete application development suites that may include decision management, analytics, content management and much more. Gartner relabelled this market “iBPMS” starting around 2011 when they realized that BPM suites were doing much more than just BPM:

The intelligent business process management suite (iBPMS) market is the natural evolution of the earlier BPMS market, adding more capabilities for greater intelligence within business processes. Capabilities such as validation (process simulation, including “what if”) and verification (logical compliance), optimization, and the ability to gain insight into process performance have been included in many BPMS offerings for several years. iBPMSs have added enhanced support for human collaboration such as integration with social media, mobile-enabled process tasks, streaming analytics and real-time decision management.

The term iBPMS makes it sound like what we were doing before wasn’t intelligent, which clearly is not the case, but it also made it obvious that we needed a different name to describe these technologies that we’re using to automate our business functions.

Since then, we’ve moved through a number of different names and acronyms in an attempt to describe these systems. For the more case-oriented (with little or no predefined processes), we have “case management” (confused with the non-technical term used in social sciences and healthcare), sometimes abbreviated as CM (confused with the abbreviation for content management, which is also abbreviated as ECM but has now been rebranded as content services), plus the variations of advanced or adaptive case management (ACM) and dynamic case management (DCM). Although there are differences between case management and BPM, there are also a lot of similarities, and the distinction between products is sometimes a bit fuzzy. However, using the term “process” causes a certain amount of angst amongst the case managementerati.

This year, Forrester started using the term “digital process automation” (DPA), which is pretty much what Gartner is calling iBPMS. Forrester’s use of DPA seems to have been slightly preceded by the term “digital business automation”. Although “digital” and “automation” are a bit redundant in this context – we’re not going to do analog mechanical automation of most businesses – I think that the use of “business” rather than “process” is a much better fit. However, due to Forrester’s recent DPA wave report, vendors are leaping onto the DPA bandwagon, so we might be stuck there for a while.

In their February 2017 report, “Traditional BPM Gives Way To Digital Process Automation”, Forrester describes why this shift is necessary without actually describing the differences between [i]BPM[S] and DPA; instead, this seems to be coming about because organizations took what should have been model-driven development (aka low-code) BPMS and used it in waterfall development environments, thereby turning what should have been agile into legacy. In other words, they seem to be hoping that changing the name of the class of tools will change how organizations use the tools. Call me a cynic, but I’m not completely hopeful about that.

I’m not arguing that the current low code, process/case-centric platforms that combine a full suite of business automation tools aren’t a step forward from yesterday’s BPM platforms in terms of enabling automation as a part of digital transformation. But what is going to change within customer organizations to prevent them from undermining the inherent rapid application development capabilities by enforcing antiquated software development lifecycle methods?

Bonus reading: check back on my review of a Gartner presentation from 2006 on the future of BPM, which looked forward as far as 2017! They were correct that the primary value of BPM moved from productivity to visibility to innovation, and I correctly predicted that their predictions would happen much faster than they expected.

Tune in for the 2017 WfMC Global Awards for Excellence in BPM and Workflow

I had the privilege this year of judging some of the entries for WfMC’s Global Awards for Excellence in BPM and Workflow, and next Tuesday the 12 winners will be announced in a webinar. Tune in to hear the results from Nathaniel Palmer and Keith Swenson, as well as a presentation on industry trends from Connie Moore of Digital Clarity Group.

Presenting at OPEXWeek in January: customer journey mapping and lowcode

I’ll be on stage for a couple of speaking slots at the OPEX Week Business Transformation Summit 2018 in Orlando the week of January 22nd:

  • Tuesday afternoon, I’ll lead a breakout session in the customer-centric transformation track on increasing customer satisfaction through customer journey mapping and process improvement.
  • Wednesday morning, I’ll be on a panel in the RPA track on how low-code platforms are transforming BPM.

I was last at OPEX Week in 2012, when it was still called PEX Week (for Process Excellence Network) – I was on a BPM blogger panel that time around – and it will be interesting to see how it’s changed since then. Looks like a lot more automation technology in the current version, with the expectation that digital transformation isn’t going to come about just by modeling your business.

If you’re going to be there, look me up at one of my sessions or around the conference on Tuesday and Wednesday.

Release webinar: @CamundaBPM 7.8

I listened in on the Camunda 7.8 release webinar this morning – they issue product releases every six months like clockwork – to hear about the new features and upgrades from CEO Jakob Freund and VP of engineering Daniel Meyer.

They’re obviously getting a broader audience for these release webinars than just their current customers and open source community members, and started with a bit about the company, the product stack and their clients. We heard about a recent case study presented at their first San Francisco community day: 24 Hour Fitness is using Camunda process and decision management for high volume real-time orchestration of their core business processes. With over 190 processes in production, executing 20 million BPMN and 18 million DMN instances per day, this is clearly an enterprise-strength application; they are using the Camunda Enterprise Edition rather than the Community Edition for the additional features and SLA-based support, but the underlying engine and much of the tooling is identical between the products.

The key new features and updates are as follows:

  • Workflow engine performance improvements. A new batch mode allows 3-4 times more process instances to be executed per minute on several of the supported databases. This is based on grouping database operations for the same database table (including both operational and audit tables), then doing a single round-trip call between the Camunda server and the database server to execute the batch of inserts, updates and deletes (see the sketch after this list).
  • Cockpit batch operations. It’s now possible to do bulk operations for suspending/activating and modifying running process instances, and restarting completed process instances. Process instances can be selected by process definition name and by more complex search and filtering operations, such as instance variable values, then a batch command issued to suspend, restart, modify or delete those instances. A new feature also allows all instances that are at a specific task to be dragged to a new task directly in the process model, whereas this was only possible with single instances before; this can be used either to move the instances to a new task to correct for an error condition or a changed process flow, or to restart instances that are sitting at the final end node.
  • More Cockpit features. In addition to the batch operations, Cockpit also now has faster BPMN model rendering (from 8 seconds down to 2 seconds), ability to delete process definitions, and a number of other administrative functions.
  • Spring Boot Starter. Originally created as a community extension in 2015 (with significant contributions from community members Jan Galinski and Oliver Steinhauer), Camunda adopted this project into the main code base to create an officially-supported version of the Camunda Spring Boot Starter, documented here.
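
To illustrate the batching idea from the first bullet: instead of paying one round-trip per row, statements against the same table are grouped and sent in a single call. Camunda implements this in Java against its own schema; this is just a toy Python/sqlite3 sketch with an invented, Camunda-style history table:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE act_hi_audit (instance_id TEXT, event TEXT)")
rows = [(f"proc-{i}", "task-completed") for i in range(100_000)]

# Row-at-a-time: one statement (and, on a client/server database,
# one network round-trip) per row.
t0 = time.perf_counter()
for r in rows:
    conn.execute("INSERT INTO act_hi_audit VALUES (?, ?)", r)
print("per-row: %.3fs" % (time.perf_counter() - t0))

# Batched: all inserts for the same table submitted in a single call.
t0 = time.perf_counter()
conn.executemany("INSERT INTO act_hi_audit VALUES (?, ?)", rows)
print("batched: %.3fs" % (time.perf_counter() - t0))
```

Even without a network in the way the batched version is noticeably faster; with a real database server, eliminating per-row round-trips is where the 3-4x gain comes from.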

The first two updates are focused squarely on improving performance and administration for high-volume operations – likely driven by clients such as 24 Hour Fitness – and will serve Camunda well as they push into more core enterprise business processes. The Spring Boot integration positions them well for deploying BPM services in a microservice architecture.
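
On the microservices point: because the engine exposes a full REST API, other services can start and drive processes without embedding the engine in their own stack. A minimal sketch against Camunda's standard REST API – the process key and variable are invented, and it assumes an engine running at the default local endpoint:

```python
import requests

# Start an instance of a (hypothetical) deployed process by its key.
resp = requests.post(
    "http://localhost:8080/engine-rest/process-definition/key/invoice/start",
    json={"variables": {"amount": {"value": 1200, "type": "Integer"}}},
)
resp.raise_for_status()
print("started instance:", resp.json()["id"])
```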

Camunda BPM 7.8

Good summary of the new features in 7.8, and a great Spring Boot coding demo by Meyer, in spite of his grumbling about having to do it on Windows for the webinar. :)

The webinar will be available for replay soon; check their website for availability. You can also see their release blog post that links to the release notes and describes many of the things that I saw today in the webinar.

Disclaimer: Camunda has been, but is not currently, a client. They did not provide any incentive to attend and write about this webinar, and these are my own opinions. That’s always the case for what I write here, but it’s good to make it explicit every once in a while.

Fun times with low code and case management

I recently held a webinar on low code and case management, along with Roger King and Nicolas Marzin of TIBCO (TIBCO sponsored the webinar). We tossed aside the usual webinar presentation style and had a free-ranging conversation over 45 minutes, with Nicolas giving a quick demo of TIBCO’s Live Apps in the middle.

Although preparing for a webinar like this takes just as long as a standard presentation, it’s a lot more fun to participate. I also think it’s more engaging for the audience, even though there’s not as much visual material; I created some slides with a few points on the topics that we planned to cover, including some fun graphics. I couldn’t resist including a visual pun about long tail applications. :)

You can find the playback here if you missed it, or want to watch again. If you watched it live, there was a problem with the audio for the first couple of slides. Since it was mostly me giving some introductory remarks and a quick overview of case management and low code, we just re-recorded that few minutes and fixed the on-demand version.

I’m finishing up a white paper for TIBCO on case management and low code, stressing that not only is low code the way to go for building case management applications, but that a case management paradigm is the best fit for low code applications. We should have that in publication shortly, so stay tuned. If you attended the webinar, you should receive a link to the paper when it’s published.

Machine learning in ABBYY FlexiCapture

Chip VonBurg, senior solutions architect at ABBYY, gave us a look at machine learning in FlexiCapture 12. This is my last session for ABBYY Technology Summit 2017; there’s a roadmap session after this to close the conference, but I have to catch a plane.

He started with a basic definition of machine learning: a method of data analysis that automates analytical model building, allowing computers to find insights in data and execute logic without being explicitly programmed for where to look or what to do. It’s based on pattern recognition and computational statistics, and it’s popping up in areas such as biology, search and recommendations (e.g., Netflix), and spam detection. Machine learning is an iterative process that uses sample data and one or more machine learning algorithms: the training data set is used by the algorithm to build an analytical model, which is then applied to attempt to analyze or classify new data. Feedback on the correctness of the model for the new data is fed back to refine the learning and therefore the model. In many cases, users don’t even know that they’re providing feedback to train machine learning: every time you click “Spam” on a message in Gmail (or “Not Spam” for something that was improperly classified), or thumbs up/down for a movie in Netflix, you’re providing feedback to their machine learning models.
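
That feedback loop maps directly onto standard tooling. As a toy illustration only (FlexiCapture’s internals aren’t public), here’s the spam example in Python with scikit-learn: train an initial Naive Bayes model, classify a new message, then fold a user’s correction back in incrementally:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny invented training set: messages and their labels.
train_texts = ["win a free prize now", "meeting agenda attached",
               "cheap loans click here", "quarterly report draft"]
train_labels = ["spam", "ham", "spam", "ham"]

vec = CountVectorizer()
model = MultinomialNB()
model.fit(vec.fit_transform(train_texts), train_labels)

# Classify a new message with the current model...
new_text = ["free meeting prize"]
print(model.predict(vec.transform(new_text)))

# ...and when a user corrects the classification (clicks "Not Spam"),
# fold that feedback back into the model incrementally.
model.partial_fit(vec.transform(new_text), ["ham"])
```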

He walked us through several different algorithms and their specific applicability: Naive Bayes, Support Vector Machine (SVM), and deep learning; then a bit about machine learning scenarios including supervised, unsupervised and reinforcement learning. In FlexiCapture, machine learning can be used to sort documents into categories (classification), and for training on field-level recognition. The reason that this is important for ABBYY customers (partners and end customers) is that it radically compresses the time to develop the recognition rules required for any capture project, which typically consumes most of the development time. For example, instead of just training a capture application for the most common documents since that’s all you have time for, it can be trained for all document types, then the model will continue to self-improve as verification users correct errors made by the algorithm.

Although VonBurg was unsure if the machine learning capabilities are available yet in the SDK — he works in the FlexiCapture application team, which is based on the same technology stack but runs independently — the session on robotic information capture yesterday seems to indicate that it is in the SDK, or will be very soon.