Vega Unity 7: productizing ECM/BPM systems integration for better user experience and legacy modernization

I recently had the chance to catch up with some of my former FileNet colleagues, David Lewis and Brian Gour, who are now at Vega Solutions and walked me through their Unity 7 product release. Having founded and run a boutique ECM and BPM services firm in the past, I have a soft spot for small companies that add value to commercial products by building integration layers and vertical solutions to do the things that those products don’t do (or don’t do very well).

Vega focuses on enterprise content and process automation, primarily for financial and government clients. They have some international offices – likely development shops, based on the locations – and about 150 consultants working on customer projects. They partner with both IBM and Alfresco for the ECM and BPM products used in their consulting engagements. Like many boutique services firms, Vega has developed products in the course of those engagements that can be used independently by customers, built on the underlying partner technology plus their own integration software:

  • Vega Interchange, which takes one of their core competencies in content migration and creates an ETL platform for moving content and processes between any of a number of systems including Documentum, Alfresco, OpenText, four flavors of IBM, and shared folders on file systems. Content migration is typically pretty complex by the time you consider metadata and permissions mappings, but they also handle case data and process instances, which is rarely tackled in migration scenarios (most just recommend that you keep the old system alive long enough for all instances to complete, or do a manual migration); a hypothetical sketch of what these mappings involve follows this list. Having helped a lot of companies think about moving their content and process management systems to another platform, I know that this is one of those things that sounds mundane but is actually difficult to do well.
  • Vega Unity, billed as a digital transformation platform; we spent most of our time talking about Unity 7, their latest release, which I’ll cover in more detail below.
  • Vertical solutions for insurance (underwriting, claims, financial operations), government (case management, compliance) and banking (onboarding, loan origination and servicing, wealth management, card dispute resolution).
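
To make the metadata and permissions mapping problem concrete, here’s a minimal sketch of the kind of mapping structures a migration job has to carry. Everything here is invented for illustration – these are not Vega Interchange’s actual types or APIs:

```typescript
// Hypothetical sketch of a content-migration mapping, to illustrate why
// metadata and permission translation is the hard part of ECM migration.
// None of these types come from Vega Interchange; they are illustrative only.

interface MetadataMapping {
  sourceProperty: string;                  // property name in the source repository
  targetProperty: string;                  // property name in the target repository
  transform?: (value: unknown) => unknown; // optional value conversion
}

interface PermissionMapping {
  sourcePrincipal: string;                 // e.g. a Documentum group
  targetPrincipal: string;                 // e.g. an Alfresco authority
  rights: Array<"read" | "write" | "delete">;
}

interface MigrationJob {
  sourceSystem: "documentum" | "opentext" | "filenet" | "alfresco" | "filesystem";
  targetSystem: "alfresco" | "filenet";
  metadata: MetadataMapping[];
  permissions: PermissionMapping[];
  migrateProcessInstances: boolean;        // the rarely-tackled part: in-flight work
}

// Example: map a (hypothetical) Documentum custom attribute onto an Alfresco
// property, converting a numeric status code into a label on the way through.
const job: MigrationJob = {
  sourceSystem: "documentum",
  targetSystem: "alfresco",
  metadata: [
    { sourceProperty: "dm_status_code", targetProperty: "acme:status",
      transform: (v) => (v === 1 ? "active" : "closed") },
  ],
  permissions: [
    { sourcePrincipal: "dm_underwriters", targetPrincipal: "GROUP_UNDERWRITERS",
      rights: ["read", "write"] },
  ],
  migrateProcessInstances: true,
};
```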

Unity 7 is an integration and application development tool that links third-party content and process systems, adding a consistent user experience layer and consolidated analytics. Vega doesn’t provide any of the back-end systems – although they partner with a couple of the vendors – but provides tools to take that heterogeneous desktop environment and turn it into a single user interface. This has significant value in simplifying the user environment, since users only need to learn one system and some of the inter-system integration is automated behind the scenes, but it also helps when replacing one or more of the underlying technologies, whether for legacy modernization or for technology consolidation after a corporate acquisition. This is what systems integrators have been doing for a long time, but Unity makes it into a product that also leverages the deep system knowledge gained from their Interchange product. Vega can add Unity to simplify an existing environment, or come in on a net-new ECM/BPM implementation that uses one of their partner technologies plus their application development/integration layer. The primary use cases are federated enterprise content search (where content is indexed in the Unity Intelligence engine, including semantic searches), case management applications, and legacy modernization, where a new front end on legacy systems allows those systems to be swapped out without changing the user environment.
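
As a rough illustration of the federated search use case, here’s a hedged sketch of how a query might fan out across repository connectors and merge results into a single ranked list; the connector interface and result shape are my assumptions, not Vega’s actual API:

```typescript
// Hypothetical sketch of federated content search across repositories.
// The connector and result types are invented for illustration; they are
// not Vega Unity's actual API.

interface SearchResult {
  title: string;
  repository: "filenet" | "alfresco" | "opentext";
  documentId: string;
  score: number;
}

interface RepositoryConnector {
  search(query: string): Promise<SearchResult[]>;
}

// Fan the query out to every configured back-end, then merge and rank the
// results so the user sees a single result list regardless of source.
async function federatedSearch(
  connectors: RepositoryConnector[],
  query: string
): Promise<SearchResult[]> {
  const perRepo = await Promise.all(connectors.map((c) => c.search(query)));
  return perRepo.flat().sort((a, b) => b.score - a.score);
}
```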

Unity is all about rapid development that includes case-based applications, content management, data and analytics. As we walked through the product and sample applications, there was definitely a strong whiff of FileNet P8 in here (a system that I used to be very familiar with), since the sample was built with IBM Case Manager under the covers, but with some nice additions in terms of unified interface and analytics.

Their claim is that the Unity Case Manager would look the same regardless of the underlying technology, which would definitely make it easier to swap out or federate content, case and process management systems behind the scenes. In the sample shown, since IBM Case Manager was primary, the case view was derived directly from IBM CM case data with the main document list from IBM FileNet P8, while the “Other Documents” tab showed related documents from Alfresco. Dynamic foldering can combine content from different systems into common folders to reduce this visual dichotomy. There are role-based views based on the user profile that provide access to data from multiple systems – including CRM and others in addition to ECM and BPM – and federate it into business objects that can include records, virtual folder structures and related objects such as people or claims. Individual user credentials can be passed to the underlying systems, or shared credentials can be used in connectors for retrieving unrestricted information. Search templates, system connectors and a variety of properties are set in a configuration console, making it straightforward to set up and modify standard operations; since this is an XML-based declarative environment, these configuration changes deploy immediately. The ability to make different types of configuration changes is role-based, meaning that some business users can be permitted to make changes to the shared user interface if desired.
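
Vega describes the configuration environment as XML-based and declarative; purely as an illustration (expressed in TypeScript rather than their actual XML schema), a search template definition might carry something like the following, including the credential modes and role-based edit permissions described above:

```typescript
// Illustrative stand-in for a declarative search-template/connector
// definition; field names and values are invented, not Vega's schema.

interface ConnectorConfig {
  system: "filenet" | "alfresco" | "crm";
  // Pass the user's own credentials through, or use a shared service
  // account for retrieving unrestricted information, as described above.
  credentialMode: "passthrough" | "shared";
  sharedAccount?: string;
}

interface SearchTemplate {
  name: string;
  connectors: ConnectorConfig[];
  properties: Record<string, string>;  // display and mapping properties
  editableByRoles: string[];           // which roles may modify this template
}

const claimsSearch: SearchTemplate = {
  name: "claims-by-policy",
  connectors: [
    { system: "filenet", credentialMode: "passthrough" },
    { system: "alfresco", credentialMode: "shared", sharedAccount: "svc-unity" },
  ],
  properties: { sortBy: "claimDate", resultLimit: "50" },
  editableByRoles: ["admin", "claims-supervisor"],
};
```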

Unity Intelligence adds a layer of visual analytics that aggregates data from the underlying systems and other sources; however, this isn’t just visualization, but can be used to filter work and take action on cases directly via action popup menus, or to open cases directly from the analytics interface. They’re using open source tools such as Solr (search), Lucene (information retrieval) and D3 (visualization) to good effect: I saw a demo of a Sankey diagram representing the workflow through cases based on real-time data, which provided a sort of process mining view of work in progress and allowed selecting dates for past views of work, including completed cases. For case management, in which processes are semi-structured (at best), this won’t necessarily show process anomalies, but it can show service interruptions and opportunities for process improvement and standardization.
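
For a sense of how such a Sankey view can be built on the open source stack they mentioned, here’s a minimal sketch using the d3-sankey package; the node names and case counts are invented sample data, and rendering the computed layout to SVG is omitted for brevity:

```typescript
// Sketch of a Sankey layout for case flow, using the open source d3-sankey
// package (npm: d3-sankey, types from @types/d3-sankey). Sample data only.
import { sankey } from "d3-sankey";

const layout = sankey<{ name: string }, {}>()
  .nodeWidth(15)
  .nodePadding(10)
  .extent([[0, 0], [800, 400]]);

// Each link's value is the number of cases that flowed from one work step
// to the next, e.g. derived from real-time case history data.
const graph = layout({
  nodes: [{ name: "Intake" }, { name: "Review" }, { name: "Closed" }],
  links: [
    { source: 0, target: 1, value: 120 },
    { source: 1, target: 2, value: 95 },
  ],
});

// The layout assigns coordinates (x0/x1/y0/y1) to each node and a width to
// each link; d3-sankey's sankeyLinkHorizontal() generates the link paths.
console.log(graph.nodes.map((n) => [n.name, n.x0, n.y0]));
```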

They’ve published a video showing more about Unity 7 Intelligence, as well as one showing Unity Semantics for creating pivot tables for faceted search on content repositories.


A Perfect Combination: Low Code and Case Management

The paper that I wrote on low code and case management has just been published – consider it a Christmas gift! It’s sponsored by TIBCO, and you can find it here (registration required).

This is an accompaniment to the webinar that I did recently with Roger King and Nicolas Marzin, which is available for replay on demand.

Fun times with low code and case management

I recently held a webinar on low code and case management, along with Roger King and Nicolas Marzin of TIBCO (TIBCO sponsored the webinar). We tossed aside the usual webinar presentation style and had a free-ranging conversation over 45 minutes, with Nicolas giving a quick demo of TIBCO’s Live Apps in the middle.

Although preparing for a webinar like this takes just as long as a standard presentation, it’s a lot more fun to participate. I also think it’s more engaging for the audience, even though there’s not as much visual material; I created some slides with a few points on the topics that we planned to cover, including some fun graphics. I couldn’t resist including a visual pun about long tail applications.

You can find the playback here if you missed it, or want to watch again. If you watched it live, there was a problem with the audio for the first couple of slides. Since it was mostly me giving some introductory remarks and a quick overview of case management and low code, we just re-recorded that few minutes and fixed the on-demand version.

I’m finishing up a white paper for TIBCO on case management and low code, stressing that not only is low code the way to go for building case management applications, but that a case management paradigm is the best fit for low code applications. We should have that in publication shortly, so stay tuned. If you attended the webinar, you should receive a link to the paper when it’s published.

Low code and case management discussion with @TIBCO

I’m speaking on a webinar sponsored by TIBCO on November 9th, along with Roger King (TIBCO’s senior director of product management and strategy, and Austin Powers impressionist extraordinaire) and Nicolas Marzin (TIBCO’s director of strategic enablement group). From their registration page:

Supercharge your digital transformation – When low code meets case management

While digital transformation is likely on your company’s agenda, the demand for ever more enterprise apps is not slowing down. How can you both transform and meet this development need?

Process-centric applications that run your business involve content, events, decisions, and automation. Knowledge workers benefit from environments that integrate all of these capabilities in a case management paradigm, which combines automation with human reasoning. And new low-code development platforms will let your business users configure their own case management apps to meet their situational or strategic needs.

This is not a structured presentation or TIBCO demo: instead, I’ll kick off with a couple of level-setting slides on case management and low code platforms, then lead a discussion with Roger and Nicolas on a variety of issues facing us with low code and case management. Some of the things on my list of potential topics:

  • What are the business and technology drivers pushing us towards low code?
  • How do we reconcile citizen developers’ situational applications with a broader architecture and design vision?
  • How are low code platforms and their developers supported by a center of excellence without squashing innovation?
  • How do we handle governance of low code apps to make sure that they don’t do anything that might negatively impact privacy or performance?
  • What sort of organizational links do we need to bring together microservices developers and low code platforms?

If you have some other topics that you’d like to hear us discuss, please add them as comments below and I’ll try to work them in. Sign up for the webinar at the registration link above and join in on November 9th.

I’m also working on a couple of white papers for them on case management and low code, which is coming up in pretty much every business and technical discussion that I have these days. At least one of those papers will be available by the time of the webinar, with the other available shortly after.

Insurance case management: SoluSoft and OpenText

It’s the last session of the last morning at OpenText Enterprise World 2017 — so this might be my last post from here if I skip out on the one session that I have bookmarked for late this afternoon — and I’m sitting in on Mike Kremer of OpenText and Kiran Thakrar of SoluSoft showing SoluSoft’s Active Client Management for Insurance, built on OpenText’s Process Suite and case management. SoluSoft originally built this capability on earlier OpenText products (Global 360), but have moved to the new low-code platform. Their app can be used out of the box, or can be configured to suit a particular environment.

The goal of Active Client Management for Insurance is to provide a 360 view of the client, including data from a variety of sources (typically systems of record for policy administration or claims), content from any repository, open tasks and pending actions, checklists and ad hoc notes. It includes the entire customer lifecycle, from onboarding and underwriting, through policy administration and claims; basically, it’s user work management and CRM in one.

The solution is built on the core of Process Suite, using the full entity-modeling, AppWorks-style low-code development environment. It also includes process intelligence for analytics, Capture Center for document capture, and StreamServe for customer communication management. On top of these OpenText building blocks, SoluSoft has built a client management solution accelerator that (I believe) they can use for a variety of vertical applications; below the OpenText layer is a service bus integration to line-of-business systems. For insurance, they’ve created a number of business processes and request types corresponding to different parts of the business, such as processing a new application, amending a policy, or initiating a claim; each of these can be configured for the specific customer’s terminology, or disabled if they don’t require specific functions. It’s not completely clear, however, how much of the functionality of other insurance systems might be replaced by this rather than augmented: clearly, the core policy administration system stays as the system of record, but an underwriting or claims system workflow might be replaced by this functionality. Having done this a few times with clients that use systems such as Guidewire, I have to say that this is a non-trivial architectural exercise to decide what parts of the flow happen where, and how to properly interact with other systems.

At the heart is a generic capture-driven workflow: scan, capture, index, data entry, process, approve, review, fulfill. The names of these can be aliased for different vertical applications — their example is that “processing” becomes “underwriting” — and steps can be skipped for a specific request type. Actions that can be performed at any of these work steps are configured using checklists. Ad hoc processes can be attached to steps in this master flow, either a single-step task or a more complex flow, and be executed before, after or in parallel to the pre-defined work step. Ad hoc processes can be created at runtime, and secondary request processes created for certain case types. The ability to make any of these configuration changes is restricted by role security. Relationships between clients, policies, brokers, claims, etc. are managed using folders for customers, policies and advisers, driven by entity modeling in Process Suite (AppWorks Low Code); this ability to establish relationships between all of these types of entities is critical for providing the complete view of the customer. They also have integrated iHub analytics for showing case statistics and workload analysis, as well as more complex analysis of risk or profitability for specific customer groups or policy types.
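
As a sketch of how that aliasing and step-skipping could work (illustrative only, not SoluSoft’s actual implementation – the type names and request type are my invention):

```typescript
// Illustrative sketch of the generic capture-driven workflow described
// above, with per-vertical aliasing of step names and the ability to skip
// steps for a given request type. Not SoluSoft's actual model.

const masterFlow = [
  "scan", "capture", "index", "dataEntry", "process",
  "approve", "review", "fulfill",
] as const;

type Step = (typeof masterFlow)[number];

interface RequestTypeConfig {
  name: string;                           // e.g. "new application", "claim"
  aliases: Partial<Record<Step, string>>; // vertical-specific step names
  skip: Step[];                           // steps not used by this request type
}

function effectiveFlow(cfg: RequestTypeConfig): string[] {
  return masterFlow
    .filter((step) => !cfg.skip.includes(step))
    .map((step) => cfg.aliases[step] ?? step);
}

// For insurance, "process" is presented as "underwriting", and the review
// step is skipped for this (hypothetical) straight-through request type.
const amendPolicy: RequestTypeConfig = {
  name: "amend policy",
  aliases: { process: "underwriting" },
  skip: ["review"],
};

console.log(effectiveFlow(amendPolicy));
// -> ["scan", "capture", "index", "dataEntry", "underwriting", "approve", "fulfill"]
```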


Although SoluSoft built some of this in custom code, a lot of the application is built directly in the OpenText low-code development environment provided by Process Suite. This means that it’s fast to configure or even do some basic customizations, with the caveats that I mentioned earlier about deciding where some parts of the workflow should happen when you have existing LOB systems that already do that. It also provides them with native mobile support through AppWorks, rather than having to build a separate mobile application.

We saw the version focused on insurance, but they also have flavors for pensions, financial services, government, healthcare and education. However, it appears that there is an existing legacy of the Global 360-based application, and it’s not clear how long it will take for this new AppWorks version to make its way into the market.

Case management in insurance

I recently wrote a paper on how case management technology can be used in insurance claims processing, sponsored by DST (but not about their products specifically). From the paper overview:

Claims processing is a core business capability within every insurance company, yet it is often one of the most inefficient and risky processes. From the initial communication that launches the claim to the final resolution, the end-to-end claims process is complex and strictly regulated, requiring highly-skilled claims examiners to perform many of the activities to adjudicate the claim.

Managing a manual, paper-intensive claims processing operation is a delicate balance between risk and efficiency, with most organizations choosing to decrease risk at the cost of lower operational efficiency. For example, skilled examiners may perform rote tasks to avoid the risk of handing off work to less-experienced colleagues; or manual tracking and logging of claims activities may have to be done by each worker to ensure a proper audit trail.

A Dynamic Case Management (DCM) system provides an integrated and automated claims processing environment that can improve claim resolution time and customer satisfaction, while improving compliance and efficiency.

You can download it from DST’s site (registration required).

Camunda Community Day: @CamundaBPM technical sessions

I’m a few weeks late completing my report on the Camunda Community Day. The first part covered the community contributions and sessions, while this second half covers Camunda showing new things that the community developers in the audience could use.

First up was Vladimirs Katusenoks, core developer on bpmn.io, with a presentation on bpmn-js: how it works, and how to extend it with custom functionality such as adding colour to BPMN diagrams, which is a permitted extension to BPMN XML. His live coding presentation showed changing the colour of a shape background, either statically in code for the element class or by adding a colour picker to an individual element’s context palette; this was based on the bpmn-js core BPMN functionality, using bpmn-moddle to read/write the metamodel and diagram-js to render it. There are a number of other bpmn-js examples on GitHub.
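
To give a flavour of the approach, here’s my own minimal sketch (not Vladimirs’ demo code) of colouring an element via bpmn-js’s modeling service; the element ID and colours are sample values, and the promise-based importXML shown here is from newer bpmn-js releases:

```typescript
// Minimal sketch of colouring a BPMN element with bpmn-js. Sample values
// only; assumes an HTML element with id "canvas" exists on the page.
import BpmnModeler from "bpmn-js/lib/Modeler";

async function highlightTask(xml: string): Promise<void> {
  const modeler = new BpmnModeler({ container: "#canvas" });
  await modeler.importXML(xml);

  // Look up the shape, then write fill/stroke colours through the modeling
  // service; bpmn-js persists them as the colour extension to BPMN XML.
  const elementRegistry = modeler.get("elementRegistry");
  const modeling = modeler.get("modeling");
  const task = elementRegistry.get("Task_1");
  modeling.setColor([task], { fill: "#ffe0b2", stroke: "#fb8c00" });
}
```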

Next, Felix Müller discussed KPI management, expanding on his August blog post on the topic. KPI management is based on quantitative indicators for process cycle-time improvement, such as cycle time and overdue time, plus definitions of the time period, unit of measure and calculation method. In Camunda, KPIs are defined in the Modeler, then monitored in Cockpit. He showed how to use the concept of element templates (that extend core definitions) to create custom fields on a collaboration object (process) or individual tasks, e.g., KPI unit (hours, days, minutes) and KPI threshold (number). In Cockpit, this appears as a new tab for KPI Overview, showing a list of individual instances and target/current/average duration, plus an indicator of overdue status of the instance and any contained tasks; there is also a decorator bubble on the top right of the task on the process model to show the number of overdue instances on the aggregate model, or overdue status as a check mark or exclamation mark on individual models. The Cockpit modifications were done by creating a plug-in to display KPI statistics, which queries and calculates on the fly – a potential performance problem that might be improved through pre-aggregation of statistics. He also demonstrated how to modify this basic KPI model to include an expected duration as well as a maximum duration. A good start, although I think there’s a lot more that’s needed here.
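
Element templates are defined as JSON for the Camunda Modeler; here’s a hedged sketch (expressed as a TypeScript literal) of what a KPI template along those lines might look like. The kpiUnit/kpiThreshold property names are my invention, but the overall structure follows the documented element-template format:

```typescript
// Sketch of a Camunda Modeler element template adding KPI fields to a task.
// The specific property names are assumptions, not Felix's actual template.
const kpiElementTemplate = {
  name: "Task with KPI",
  id: "com.example.TaskWithKpi",   // hypothetical template id
  appliesTo: ["bpmn:Task"],
  properties: [
    {
      label: "KPI unit",
      type: "Dropdown",
      choices: [
        { name: "minutes", value: "minutes" },
        { name: "hours", value: "hours" },
        { name: "days", value: "days" },
      ],
      binding: { type: "camunda:property", name: "kpiUnit" },
    },
    {
      label: "KPI threshold",
      type: "String",
      binding: { type: "camunda:property", name: "kpiThreshold" },
    },
  ],
};
```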

Thorben Lindhauer, a Camunda core BPM developer, discussed how to contribute to the Camunda open source community, both at camunda.org (engine and desktop modeler, same as the commercial product) and bpmn.io (JS tools). Possible contributions include answering questions on forums; logging error reports; documenting ideas for new functionality; and working on code. Code contributions typically start by having a forum discussion about planned new functionality, then a decision is made on whether it will be core code (which has higher quality standards, since it will become part of the commercial product and will eventually be maintained by Camunda) or a community extension; this is followed by ongoing development, merge and release cycles. Camunda is very supportive of community contributions, even if they don’t become part of the core product: community involvement is critical to the health of any open source project.

The last presentation of the community day was Daniel Meyer discussing the product roadmap. The next release, 7.6, will be on November 30 – they have a strict twice-yearly release cycle. This release includes updates to DMN, CMMN, BPMN, rolling updates, Cockpit features, and UI/UX in web apps; I have captured a few notes here but see the linked roadmap for a more complete and accurate description and the online documentation as it is rolled out.

  • DMN:
    • Simpler decision table editing with drop-down lists of comparison/range operators, instead of having to remember FEEL or JUEL syntax
    • Ability to add list of selection values (advanced mode still exists for full flexibility)
    • Decisions with literal expressions
    • DMN engine performance 4-6x faster
    • Support for decision requirements diagrams/graphs (DRD/DRG) that can link decision tables; visualization in Modeler and Cockpit isn’t there yet, but the structures are supported – in my experience, this is typical of Camunda, which builds and releases the engine capabilities early, then follows with the visualization, allowing a quicker start for executable diagrams (a minimal sketch of invoking a deployed decision via the REST API follows this list)
  • CMMN:
    • Modeler now completely models CMMN including technical attributes such as listeners
    • Cockpit (visualization still incomplete although we saw a brief view) will allow linking models of same or different types
    • Engine feature and functionality improvements
  • Rolling updates allow the Camunda process engine to be updated without a shutdown: guaranteed backwards compatibility of the database schema allows the database to be updated first, then engine updates are rolled out by taking each engine offline individually and letting the load balancer reroute sessions.
  • BPMN:
    • BPMN conditional event supported
    • Improved modeling, including labels, collapsing/expanding subprocesses to switch between view types, and field injections in the properties panel.
  • Cockpit:
    • More flexible/granular human task monitoring
    • New welcome page with links to apps (Cockpit, Tasklist, Admin), user profile, and frequent links
    • Batch operations (cancel, suspend, etc.) based on batch action capability built for instance migration
    • CMMN and DMN DRD visualization
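
As mentioned in the DMN notes above, here’s a minimal sketch of evaluating a deployed decision table through Camunda’s REST API; the endpoint is part of Camunda 7’s documented API, but the decision key and input variables here are invented sample values:

```typescript
// Evaluate a deployed DMN decision table via the Camunda 7 REST API.
// The decision key "approve-claim" and the variables are sample values.
async function evaluateDecision(): Promise<void> {
  const response = await fetch(
    "http://localhost:8080/engine-rest/decision-definition/key/approve-claim/evaluate",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        variables: {
          claimAmount: { value: 12000, type: "Integer" },
          policyType: { value: "auto", type: "String" },
        },
      }),
    }
  );

  // Each entry in the returned array is one matched rule's output values.
  const results = await response.json();
  console.log(results);
}
```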

Daniel discussed some other minor improvements based on customer feedback, plus plans for 2017, including a web modeler for collaborative BPMN, CMMN and DMN modeling via a SaaS offering and a future on-premise version. They finished the day with a poll and community feedback to establish priorities for future versions.

I stayed on for the second day, which is actually a separate conference: BPMCon for Camunda’s enterprise (commercial) customers. Rather, I stayed on for Neil Ward-Dutton’s keynote, then ducked out for most of the rest of the day, which was in German. Neil’s keynote included results from workshops that he has done with executives on digital transformation, and how BPM can be used to create the bridges between the diverse parts of a digital business (internal to external, automated to people-centric), while tracking and coordinating the work that flows between the different areas.

Disclaimer: Camunda paid my travel expenses to attend both conference days. I was not compensated in any way for attending or for writing this post, and the opinions here are my own.

Take Mike Marin’s CMMN survey: learn something and help CMMN research

CMMN diagram from OMG CMMN 1.0 specification document

Mike Marin, who had a hand in creating FileNet’s ECM platform and continued the work at IBM as chief architect on their Case Manager product, is taking a bit of time away from IBM to complete his PhD. He’s doing research into complexity metrics for the Case Management Model and Notation standard, and he really needs people to complete a survey to gather the data for his empirical research. The entire thing will take 45-60 minutes, and can be completed in multiple sessions; 30-40 minutes of that is an optional tutorial, which you can skip if you’re already familiar with CMMN.

Here’s his invitation to participate (feel free to share with your process and case modeling friends):

We are conducting research on the Case Management Modeling and Notation (CMMN) specification and need your help. You don’t need to be familiar with CMMN to participate, but you should have some basic understanding of process technology or graphical modeling (for example: software modeling, data modeling, object modeling, process modeling, etc.), as CMMN is a new modeling notation. Participation is voluntary and no identifiable personal information will be collected.

You will learn more about CMMN with the tutorial; and you will gain some experience and appreciation for CMMN by evaluating two models in the survey. This exercise should take about 45 to 60 minutes to complete; but it can be done in multiple sessions. The tutorial is optional and it should take 30 to 40 minutes. The survey should take 15 to 20 minutes. You can consider the survey a quiz on the tutorial.

As an appreciation for your collaboration, we will donate $6 (six dollars) to a charity of your choice and we will provide you with early results of the survey.

You can use the following URL to take the tutorial and survey. The first page provides more information on the project.

http://cmmn.limequery.org/index.php/338792?lang=en

He wrote a more detailed description of the research over on BPTrends.

Mike’s a former colleague and a friend, but I’m not asking just because of that: he’s also a Distinguished Engineer at IBM and a contributor to standards and technology that make a huge impact in our field. Help him out, take the survey, and it will help us all out in the long run.

Now available: Best Practices for Knowledge Workers

I couldn’t make it to the BPM and Case Management Summit in DC this week, but it includes the launch of the new book, Best Practices for Knowledge Workers: Innovation in Adaptive Case Management, for which I wrote the foreword.

In that section, which I called “Beyond Checklists”, I looked at the ways that we are making ACM smarter, using rules, analytics, machine learning and other technologies. That doesn’t mean that ACM will become less reliant on the knowledge workers that work cases; rather, these technologies support them through recommendations and selective automation.

I also covered the ongoing challenges of collaboration within ACM, particularly the issue of encouraging collaboration through social metrics that align the actions of individuals with corporate goals.

You’ll find chapters by many luminaries in the field of BPM and ACM, including some real-world case studies of ACM in action.

Pega 7 roadmap at Pegaworld 2016

I finished up Pegaworld 2016 at a panel of Pega technology executives who provided the vision and roadmap for CRM and Pega 7. Don Schuerman moderated the panel, which included Bill Baggott, Kerim Akgonul, Maarten Keijzer, Mark Replogle and Steve Bixby.

Topics covered:

  • OpenSpan’s RPA and RDA products are the most recent technology acquisition; by next year, its workforce intelligence capability will be a key differentiator for Pega, helping their customers analyze user behavior and detect patterns for potential improvement and automation
  • Questions about Javascript “flavor of the month” technologies come up a lot in competitive situations; they are keeping an eye on what’s sticking in the marketplace and what they can learn from it, selectively including technologies and capabilities where it can add value and fits into their framework
  • Digital channels are now outpacing call center channels in the market, and they are enabling chat and other social capabilities for customer interaction but need to consider how to integrate the text-based social channels into a seamless experience
  • Developing with Pega 7 models isolates the applications from the underlying cloud and/or containerization platforms, so that new platforms and configurations can be put in place without changing the apps
  • New data visualizations based on improved analytics are evolving, providing opportunities for discovery of business practices
  • A simpler modeling environment allows non-technical designers to create and configure apps without accessing the Designer Studio; at this point, technical developers are likely needed for some capabilities
  • They are looking at newer data management technologies, e.g., NoSQL and blockchain, to see how they might fit into Pega’s technology stack; no commitments, but interesting to hear the discussion of the use of blockchain for avoiding conflicts and deadlocks in active-active-active scenarios without having to build it into applications explicitly
  • Some new technology components, developed by Pega or third parties, may be available via Pega Exchange as add-on apps rather than built into the product stack directly
  • They hope to be able to offer more product trials in the future, which can be downloaded (or accessed directly in the cloud) for people to check out the capabilities
  • DevOps is seen as more of a cultural than a technology shift, where releases are sped up since code isn’t just thrown over the wall from development to deployment, but remains the responsibility of the dev team; a container strategy is an important component of having this run smoothly
  • UX design is critical in creating an appropriate customer experience, but also in supporting developers who are developing apps; Pega Express is regularly fine-tuned to provide an optimal modeling/design experience, and Pega Design provides insight into their design thinking initiatives
  • All of the data sources, including IoT events, contribute to analytics and machine learning, and therefore to the constant improvement of Next Best Action recommendations
  • They would like to be able to offer more predictive and intelligent customer service capabilities; lots of “wish list” ideas from the entire panel

This is the last session on day 2; tomorrow is all training courses and I’ll be heading home. It’s been a pretty full couple of days and always good to see what Pega’s up to.