Bridging the bimodal IT divide

I wrote a paper a few months back on bimodal IT: a somewhat controversial subject, since many feel that IT should not be bimodal. My position is that it already is – with a division between “heavy” IT development and lighter-weight citizen development – and we need to deal with what’s there using a range of development environments, including low-code BPMS. From the opening section of the paper:

The concept of bimodal IT – having two different application development streams with different techniques and goals – isn’t new. Many organizations have development groups that are not part of the standard IT development structure, including developers embedded within business units creating applications that IT couldn’t deliver quickly enough, and skunkworks development groups prototyping new ideas.

In many cases, this split didn’t occur by design, but out of situational necessity when mainstream IT development groups were unable to service the needs of the business, especially transformational business units created specifically to drive innovation. In the past few years, however, analysts have been positioning this split as a strategic action to boost innovation. By 2013, McKinsey & Company was proposing “greenfield IT” – technology managed independently of the legacy application development and infrastructure projects that typically consume most of the CIO’s attention and budget – as a way to innovate more effectively. They found a correlation between innovation and corporate growth, and saw greenfield IT as a way to achieve that innovation. By the end of 2014, the term “bimodal IT” was becoming popular, with Mode 1 being the traditional application development cycle focused on stability, well suited to infrastructure and legacy maintenance, and Mode 2, focused on agility and innovation, similar to McKinsey’s greenfield IT.

Read on by downloading the paper from Software AG’s site; right now, it looks like registration isn’t required.

AIIM Toronto keynote with @jmancini77

I’m at the AIIM Toronto seminar today — I pretty much attend anything that is in my backyard and looks interesting — and John Mancini of AIIM is opening the day with a talk on business processes. Yes, Mr. Column 1 is talking about Column 2, if you get the Zachman reference. This is actually pretty significant: content management isn’t just about content, just as process management isn’t just about process, but both need to overlap and work together. I had a call with Mancini yesterday in advance of my keynote at ABBYY’s conference next month, and we spent 30 minutes talking about how disruption in capture technologies has changed all business processes. Today, in his keynote, he talked about disruptive business processes that have transformed many industries.

He gave us a look at people, process and technology against the rise (and sometimes fall) of different technology platforms: document management and workflow; enterprise content management; mobile and cloud. There are a lot of issues as we move from one type of platform to another: moving to a cloud SaaS offering, for example, drives the move from perimeter-based security to asset-based security. He showed a case study for financial processes within organizations — such as accounts payable and receivable — with both a tactical dimension of getting things done and a strategic side of building a bridge to digital transformation. Most businesses (especially traditional ones) operate at a slim profit margin, making it necessary to look at ways to reduce costs: not through small, incremental improvements, but through more transformational means. For financial processes, in many cases this means getting rid of manual data capture and manipulation: no more manual data entry, no more analysis via spreadsheets. And cost reduction isn’t the key driver behind transforming financial business processes any more: it’s the need for better business analytics. Done right, these analytics provide real-time insight into your business that provides a strategic competitive differentiator: the ability to predict and react to changing business conditions.

Mancini finished by allowing today’s sponsors, with booths around the room, to introduce themselves: Precision Content, AIIM, Box, Panasonic, SystemWare, ABBYY, SourceHOV, and Lexmark (Kofax). I’ll be here for the rest of the morning, and look forward to hearing from some of the sponsors and their customers here today.

Camunda Community Day: @CamundaBPM technical sessions

I’m a few weeks late completing my report on the Camunda Community Day. The first part was on the community contributions and sessions, while the second half documented here is about Camunda showing new things that could be used by the community developers in the audience.

First up was Vladimirs Katusenoks, core developer on BPMN.io, with a presentation on bpmn-js: how it works, and how to extend it with custom functionality such as adding colour to BPMN diagrams, which is a permitted extension to BPMN XML. His live coding presentation showed changing the colour of a shape background, either statically in code for the element class or by adding a colour picker to an individual element context palette; this was based on the bpmn-js core BPMN functionality, using bpmn-moddle to read/write into the metamodel and diagram-js to render it. There are a number of other bpmn-js examples on GitHub.

Next, Felix Müller discussed KPI management, expanding on his August blog post on the topic. KPI management is based on quantitative indicators for process cycle-time improvement, including cycle time and overdue time, plus definitions of the time period, unit of measure and calculation method. In Camunda, KPIs are defined in the Modeler, then monitored in Cockpit. He showed how to use the concept of element templates (which extend core definitions) to create custom fields on the collaboration object (process) or on individual tasks, e.g., KPI unit (hours, days, minutes) and KPI threshold (number). In Cockpit, this appears as a new KPI Overview tab, showing a list of individual instances and target/current/average duration, plus an indicator of the overdue status of the instance and any contained tasks; there is also a decorator bubble on the top right of the task on the process model showing the number of overdue instances on the aggregate model, or overdue status as a check mark or exclamation on individual models. The Cockpit modifications were done by creating a plug-in to display KPI statistics, which queries and calculates on the fly – a potential performance problem that might be improved through pre-aggregation of statistics. He also demonstrated how to modify this basic KPI model to include an expected duration as well as a maximum duration. A good start, although I think there’s a lot more needed here.
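The kind of on-the-fly calculation that such a Cockpit KPI plugin performs can be sketched roughly as follows (a minimal illustration in plain Python with hypothetical field names, not Camunda’s actual plugin API): given per-instance durations and a configured threshold, compute the average duration and flag overdue instances.

```python
from dataclasses import dataclass

@dataclass
class InstanceStats:
    instance_id: str
    duration_hours: float

def kpi_overview(instances, threshold_hours):
    """Aggregate per-instance durations against a KPI threshold, the way
    a Cockpit-style KPI plugin might compute statistics on the fly."""
    overdue = [i for i in instances if i.duration_hours > threshold_hours]
    avg = (sum(i.duration_hours for i in instances) / len(instances)
           if instances else 0.0)
    return {
        "average_duration": avg,
        "overdue_count": len(overdue),
        "overdue_ids": [i.instance_id for i in overdue],
    }
```

For example, two instances of 5 and 12 hours against an 8-hour threshold yield an average of 8.5 hours with one overdue instance; pre-aggregating these sums rather than recomputing them per page view is exactly the performance improvement suggested above.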

Thorben Lindhauer, a Camunda core BPM developer, discussed how to contribute to the Camunda open source community, both at camunda.org (engine and desktop modeler, same as the commercial product) and bpmn.io (JS tools). Possible contributions include answering questions on forums; logging error reports; documenting ideas for new functionality; and working on code. Code contributions typically start with a forum discussion about the planned new functionality; then a decision is made on whether it will be core code (held to higher quality standards, since it will become part of the commercial product and will eventually be maintained by Camunda) or a community extension; this is followed by ongoing development, merge and release cycles. Camunda is very supportive of community contributions, even if they don’t become part of the core product: community involvement is critical to the health of any open source project.

The last presentation of the community day was Daniel Meyer discussing the product roadmap. The next release, 7.6, will be on November 30 – they have a strict twice-yearly release cycle. This release includes updates to DMN, CMMN, BPMN, rolling updates, Cockpit features, and UI/UX in web apps; I have captured a few notes here but see the linked roadmap for a more complete and accurate description and the online documentation as it is rolled out.

  • DMN:
    • Simpler decision table editing with drop-down lists of comparison/range operators instead of having to remember FEEL or Juel syntax
    • Ability to add list of selection values (advanced mode still exists for full flexibility)
    • Decisions with literal expressions
    • DMN engine performance 4-6x faster
    • Support for decision requirements diagrams/graphs (DRD/DRG) that can link decision tables; visualization in Modeler and Cockpit is not there yet, but the structures are supported – in my experience, this is typical of Camunda, which builds and releases the engine capabilities early then follows with the visualization, allowing for a quicker start for executable diagrams
  • CMMN:
    • Modeler now completely models CMMN including technical attributes such as listeners
    • Cockpit (visualization still incomplete although we saw a brief view) will allow linking models of same or different types
    • Engine feature and functionality improvements
  • Rolling updates allow the Camunda process engine to be updated without shutdown: guaranteed backwards compatibility of the database schema allows the database to be updated first, then the engines are updated in rolling fashion by taking each offline individually while the load balancer reroutes sessions.
  • BPMN:
    • BPMN conditional event supported
    • Improved modeling, including labels, collapsing/expanding subprocesses to switch between view types, and field injections in the property panel.
  • Cockpit:
    • More flexible/granular human task monitoring
    • New welcome page with links to apps (Cockpit, Tasklist, Admin), user profile, and frequent links
    • Batch operations (cancel, suspend, etc.) based on batch action capability built for instance migration
    • CMMN and DMN DRD visualization
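The simpler operator-based decision table editing in the DMN list above can be illustrated with a rough sketch (plain Python, not the Camunda DMN engine or FEEL): each cell pairs an operator picked from a drop-down with an operand, and the table is evaluated with a first-match hit policy, so the rule author never writes expression syntax by hand.

```python
def cell_matches(operator, operand, value):
    """Evaluate one decision table cell: a comparison or range operator
    chosen from a drop-down list, applied to the input value."""
    if operator == "<":
        return value < operand
    if operator == ">=":
        return value >= operand
    if operator == "[..]":            # closed range; operand is (low, high)
        low, high = operand
        return low <= value <= high
    raise ValueError(f"unsupported operator: {operator}")

def evaluate_table(rules, value):
    """First-match hit policy: return the output of the first rule
    whose condition matches the input value."""
    for operator, operand, output in rules:
        if cell_matches(operator, operand, value):
            return output
    return None

# e.g. a discount decision keyed on order amount
rules = [
    ("<", 100, "none"),
    ("[..]", (100, 500), "standard"),
    (">=", 500, "premium"),
]
```

With this shape, adding a list of selection values for the output column is just another constrained drop-down, which is the low-code direction the Modeler changes are heading in.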

Daniel discussed some other minor improvements based on customer feedback, plus plans for 2017, including a web modeler for collaborative BPMN, CMMN and DMN modeling via a SaaS offering and a future on-premise version. They finished the day with a poll and community feedback to establish priorities for future versions.

I stayed on for the second day, which is actually a separate conference: BPMCon for Camunda’s enterprise (commercial) customers. Rather, I stayed on for Neil Ward-Dutton’s keynote, then ducked out for most of the rest of the day, which was in German. Neil’s keynote included results from workshops that he has done with executives on digital transformation, and how BPM can be used to create the bridges between the diverse parts of a digital business (internal to external, automated to people-centric), while tracking and coordinating the work that flows between the different areas.

Disclaimer: Camunda paid my travel expenses to attend both conference days. I was not compensated in any way for attending or for writing this post, and the opinions here are my own.

Camunda Community Day: community contributions

Two years ago, I attended Camunda’s open source community day, and gave the opening keynote at their enterprise user conference the following day. I really enjoyed my experience at the open source day, and jumped at the chance to attend again this year – and to visit Berlin.

The first day was the community day, where users of Camunda’s open source software version (primarily developers) talk about what they’re doing with it, plus some of the contributions that the community is making to the project and updates from Camunda on new features on the horizon. To break this up a bit – since I’m already a week after the conference and want to get something out there – I’ll cover the community sessions in this post, then the Camunda technical sessions and a bit about the enterprise conference in a later post.

The first presentation was by Oliver Hock of Videa Project Services, demonstrating robot control using a LEGO Mindstorms robot to solve a Rubik’s Cube. He showed how they used BPMN to define movements and decision tables to determine the move logic, then automated the solution using Camunda BPM. Although you may never want to build a robot to solve a Rubik’s Cube, there are a lot of other devices out there that, like the Mindstorms robot, are controlled via Java APIs; Hock’s design showed how these Java-enabled devices can make use of higher-level modeling constructs such as BPMN and decision tables.

Next up was Jan Galinski of Holisticon to show the Spring Boot community code extension – an example of how the community of Camunda open source users gives back to the open source project for everyone’s benefit. Spring Boot is a microservices framework allowing for fast deployment of web applications with a minimal amount of overhead; the Spring Boot starter extension to Camunda allows for using Camunda without a Java application server, essentially providing Camunda apps as microservices. The extension, consisting of about 5,000 lines of code, has been developed over two years with 10 contributors, including both community and Camunda contributors. Galinski showed a live coding demo of replacing a JBoss server with the Spring Boot starter in a Camunda application to show how this works; he has also written a post on the Camunda community site on the 1.3.0 version of Camunda BPM Spring Boot with more technical details. Although granular process apps such as this are easier from a devops perspective in terms of deployment and scalability, the challenge is that there is no single point of entry for an end user to look at a worklist (for example). We saw some methods for dealing with this, where a workload service collects information from individual process services with the help of the Camunda BPM Reactor plugin and aggregates them; a federated task list is under development to bring together tasks from multiple process servers into a single list, with a simple completion form. Galinski walked through the general architecture for this, and noted that they are working on making this an official extension. Update: Jan Galinski pointed out in the comments that it was Simon Zambrovski (also of Holisticon) who did the portion of the presentation on cloud, universal tasklist and event processing – I missed the transition and his name in my hasty note-taking.
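The federated task list idea can be sketched roughly like this (a hypothetical illustration with made-up field names; the real extension works against Camunda’s task APIs): a workload service fetches tasks from each process service, tags them with their origin so completion can be routed back, and merges them into one list.

```python
def federated_tasklist(services):
    """Merge tasks from several process services into a single worklist,
    tagging each task with its source service so that task completion
    can be routed back to the right engine."""
    merged = []
    for name, fetch in services.items():
        for task in fetch():
            merged.append({**task, "source": name})
    merged.sort(key=lambda t: t["created"])  # oldest work first
    return merged

# two stand-in process services, each exposing its own task list
services = {
    "orders": lambda: [{"id": "t1", "created": 2}],
    "claims": lambda: [{"id": "t2", "created": 1}],
}
merged = federated_tasklist(services)
```

In a real deployment the lambdas would be REST calls to each process server, and the aggregation would be event-driven (via something like the Reactor plugin) rather than polled on demand.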

Jarl Friis of the Danish tax authority (SKAT) presented their use of the Camunda decision engine: they are using only decision services, not the BPM capabilities, which likely makes them unusual as a Camunda customer. There are a couple of applications for them: the first is raising data quality in financial reporting to the IRS (for FATCA requirements), where they receive data from Danish financial institutions and have to process it into a specific XML format to send to the IRS. Although many of the data cleansing and transformation rules are in the XML schema definitions, some are not amenable to that format and are being defined in DMN decision tables instead. As this has rolled out, they have seen that decision tables give them an easier way to respond to annual rule changes, although their business people are not yet trained to make changes to the decision tables. That has resulted in developers having to make the decision table changes and test the results, which is one of the challenges that they have had to deal with: some of the developer test frameworks replicated the original decision table logic in code, which effectively tested the decision table implementation rather than the business logic. That test framework, of course, no longer worked when the decision table was changed, and Friis’ message to the audience was that organizations have to deal with the challenges of ownership and responsibility for rules, as well as rules testing.

Niall Deehan of Camunda gave a great presentation on modeling anti-patterns: snaking models that are often used to fit models onto a single sheet of paper (instead, use a model with the happy path running down the centre from left to right); inappropriate use of BPMN versus CMMN (e.g., voting scenarios); inappropriate use of BPMN versus process engine or Cockpit capabilities (e.g., service calls with error exceptions for null pointer, bad response, or service down); and too many listeners on tasks (which masks problems and pushes process logic into code, based on the concept that the analysts’ model should not be changed). He discussed some best practices for consistency: define the symbol set to be used by your analysts and lock down the modeler to remove elements that you don’t want people using; create and maintain your own best practices documentation; use model templates for commonly used activities; and provide proper training. I would love to see his presentation captured for replay: it was engaging and informative.

The last community presentation was Martin Schimak of plexiti, showing three community extensions used for automated testing of BPMN and CMMN models. Assert checks and sets the status of tasks in order to drive a process instance through a test scenario. Process Test Coverage visualizes test process paths and checks the process model coverage ratio, as covered by individual test methods and entire test classes (e.g., using mockito). Assert Scenario is for writing robust test suites for process models; this was not covered in Schimak’s demo due to time constraints, but you can read more about it on his blog.

Before we started on the Camunda technical presentations, the community award was presented by Camunda as a reward for contributions on extensions: this went to Jan Galinski from Holisticon. It’s really encouraging to see the level of engagement between Camunda and their open source community: Camunda obviously realizes that the community is an important contributor to the success of the enterprise version of the software and the company altogether, and treats them as trusted partners.

Disclaimer: Camunda paid my travel expenses to attend both conference days. I was not compensated in any way for attending or for writing this post, and the opinions here are my own.

Ten years of social BPM

Ten years ago today, I gave my first public presentation on social BPM, “Web 2.0 and BPM”, at the now-defunct BPMG Process 2006 conference in London:

The ideas from consumer social software that I proposed be integrated into BPM — software as a service, user-created processes, collaboration during process design and runtime, lightweight integration models — are all now ubiquitous. I pushed the social business concept further by publicly posting my presentation on Slideshare, a move that other consultants and analysts considered heretical since I was “giving away” my intellectual property. How times have changed.

The attendees likely thought my ideas were a bit crazy; luckily, the BPMS vendors and the big analyst firms started to see the light a few years later. 🙂

Pega 7 roadmap at Pegaworld 2016

I finished up Pegaworld 2016 at a panel of Pega technology executives who provided the vision and roadmap for CRM and Pega 7. Don Schuerman moderated the panel, which included Bill Baggott, Kerim Akgonul, Maarten Keijzer, Mark Replogle and Steve Bixby.

Topics covered:

  • OpenSpan RPA and RDA is the most recent technology acquisition; by next year, workforce intelligence will be a key differentiator for Pega, helping their customers analyze user behavior and detect patterns for potential improvement and automation
  • Questions about Javascript “flavor of the month” technologies come up a lot in competitive situations; they are keeping an eye on what’s sticking in the marketplace and what they can learn from it, selectively including technologies and capabilities where it can add value and fits into their framework
  • Digital channels are now outpacing call center channels in the market, and they are enabling chat and other social capabilities for customer interaction but need to consider how to integrate the text-based social channels into a seamless experience
  • Developing with Pega 7 models isolates the applications from the underlying cloud and/or containerization platforms, so that new platforms and configurations can be put in place without changing the apps
  • New data visualizations based on improved analytics are evolving, providing opportunities for discovery of business practices
  • A simpler modeling environment allows non-technical designers to create and configure apps without accessing the Designer Studio; at this point, technical developers are likely needed for some capabilities
  • They are looking at newer data management technologies, e.g., NoSQL and blockchain, to see how they might fit into Pega’s technology stack; no commitments, but interesting to hear the discussion of the use of blockchain for avoiding conflicts and deadlocks in active-active-active scenarios without having to build it into applications explicitly
  • Some new technology components, developed by Pega or third parties, may be available via Pega Exchange as add-on apps rather than built into the product stack directly
  • They hope to be able to offer more product trials in the future, which can be downloaded (or accessed directly in the cloud) for people to check out the capabilities
  • DevOps is seen as more of a cultural than technology shift, where releases are sped up since code isn’t just thrown over the wall from development to deployment, but remains the responsibility of the dev team; a container strategy is an important component of having this run smoothly
  • UX design is critical in creating an appropriate customer experience, but also in supporting developers who are developing apps; Pega Express is regularly fine-tuned to provide an optimal modeling/design experience, and Pega Design provides insight into their design thinking initiatives
  • All of the data sources, including IoT events, contribute to analytics and machine learning, and therefore to the constant improvement of Next Best Action recommendations
  • They would like to be able to offer more predictive and intelligent customer service capabilities; lots of “wish list” ideas from the entire panel

This is the last session on day 2; tomorrow is all training courses and I’ll be heading home. It’s been a pretty full couple of days and always good to see what Pega’s up to.

OpenSpan at Pegaworld 2016: RPA meets BPM

Less than two months ago, Pega announced their acquisition of OpenSpan, a software vendor in the robotic process automation (RPA) market. That wasn’t my first exposure to OpenSpan, however: I looked at them eight years ago in the context of mashups. Here at PegaWorld 2016, we’re getting a first peek at the unified roadmap on how Pega and OpenSpan will fit together. Also, a whole new mess of acronyms.

I’m at the OpenSpan session at Pegaworld 2016, although some of these notes date from the time of the analyst briefing back in April. Today’s presentation featured Anna Convery of Pega (formerly OpenSpan); Robin Gomez, Director of Operational Intelligence at Radial (a BPO), providing an introduction to RPA; and Girish Arora, Senior Information Officer at AIG, on their use of OpenSpan.

Back in the 1990s, a lot of us who were doing integration of BPM systems into enterprises used “screen scraping” to push commands to and pull data from the screens of legacy systems; since the legacy systems didn’t support any sort of API calls, our apps had to pretend to be a human worker to allow us to automate integration between systems and even hide those ugly screens. Gomez covered a good history of this, including some terms that I had hoped to never see again (I’m looking at you, HLLAPI). RPA is like the younger, much smarter offspring of screen scraping: it still pushes and pulls commands and data, automating desktop activities by simulating user interaction, but it’s now event-driven, incorporating rules and machine learning.

As with BPM and other process automation, Gomez talked about how the goal of RPA is to automate repeatable tasks, reduce error rates, improve standardization, reduce the requirement for knowledge about multiple systems, shorten worker onboarding time, and create a straight-through process. At Radial, they were looking for the combination of robotic desktop automation (RDA), which provides personal robots to assist workers with repetitive tasks, and RPA, which completely replaces the worker on an unattended desktop. I’m not sure if every vendor makes a distinction between what OpenSpan calls RDA and RPA; it’s really the same technology, although there are some additional monitoring and virtualization bits required for the headless version.

OpenSpan provides the usual RPA desktop automation capabilities, but also includes the (somewhat creepy) ability to track and analyze worker behavior: basically, what they’re typing into what application in what context, and present it in their Opportunity Finder. This information can be mined for patterns in order to understand how people do their job — much the way that process mining works, but based on user interactions rather than system log files — and automate the parts that are done the same way each time. This can be an end in itself, or a stepping stone to a replacement of the desktop apps entirely, providing interim relief while a full Pega BPM/CRM implementation is being developed, for example. Furthermore, the analytics about the user activities on the desktop can feed into requirements for any replacement initiative, both the general flow as well as an analysis of the decisions made based on what data was presented.
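The pattern-mining idea (much like process mining, but over user interactions rather than system logs) can be illustrated with a toy sketch, which is emphatically not OpenSpan’s actual algorithm: count how often the same fixed-length sequence of desktop actions recurs in an event log, and surface the frequent ones as automation candidates.

```python
from collections import Counter

def frequent_sequences(events, length=3, min_count=2):
    """Find repeated fixed-length action sequences in a desktop event
    log: a toy version of mining user-interaction patterns to identify
    automation candidates."""
    grams = Counter(
        tuple(events[i:i + length]) for i in range(len(events) - length + 1)
    )
    return [(seq, n) for seq, n in grams.most_common() if n >= min_count]

# hypothetical interaction log captured from a worker's desktop
log = ["open_crm", "copy_id", "paste_erp",
       "open_crm", "copy_id", "paste_erp",
       "check_email"]
```

Here the copy-from-CRM, paste-into-ERP sequence shows up repeatedly and would be flagged, while the one-off email check would not; a real product would also weight by time spent and number of affected workers.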

OpenSpan and Pega aren’t (exactly) competitive technologies: OpenSpan can be used for desktop automation where replacement is not an option, or can be used as a quick fix while performing desktop process discovery to accelerate a full Pega desktop replacement project. OpenSpan paves the cowpaths, while a Pega implementation is usually a more fundamental innovation that may not be warranted in all situations. I can also imagine scenarios where a current Pega customer uses OpenSpan to automate the interaction between Pega and legacy applications that still exist on the desktop. From a Pega sales standpoint, OpenSpan may also act as the camel’s nose in the tent to get into net new clients.

There are a wide variety of use cases, some of them saving just a few minutes but applicable to thousands of workers (e.g., logging in to multiple systems each morning), others replacing a significant portion of knowledge work for a smaller number of workers (e.g., financial reconciliations). Arora talked about what they have done at AIG, in the context of processes that require a mix of human-required and fully automatable steps; he sees their opportunity as moving from RDA (where people are still involved, gaining 10-20% in efficiency) to RPA (fully automated, gaining 40-50% efficiency). Of course, they could just swap out their legacy systems for something that was built this century, but that’s just too difficult to change — expensive, risky and time-consuming — so they are filling in the automation gaps using OpenSpan. They have RDA running on every desktop to assist workers with a variety of tasks ranging from simple to complex, and want to start moving some of those to RPA to roll out unattended automation.

OpenSpan is typically deployed without automation to start gathering user analytics, with initial automation of manual procedures within a few weeks. As Pega cognitive technologies are added to OpenSpan, it should be possible for the RPA processes to continue to recognize patterns and recommend optimizations to a worker’s flow, becoming a sort of virtual personal assistant. I look forward to seeing some of that as OpenSpan is integrated into the Pega technology family.

OpenSpan is Windows-only .NET technology, with no plans to change that at the time of our original analyst briefing in April. We’ll see.

Camunda BPM 7.5: CMMN, BPMN element templates, and more

I attended an analyst briefing earlier today with Jakob Freund, CEO of Camunda, on the latest release of their product, Camunda BPM 7.5. This includes both the open source version available for free download, and the commercial version with the same code base plus a few additional features and professional support. Camunda won the “Best in Show” award at the recent bpmNEXT conference, where they demonstrated combining DMN with BPMN and CMMN; the addition of the DMN decision modeling standard to BPMN and CMMN modeling environments is starting to catch on, and Camunda has been at the front edge of the wave to push that beyond modeling into implementation.

They are managing to keep their semi-annual release schedule since they forked from Activiti in 2013: version 7.0 (September 2013) reworked the engine for scalability, redid the REST API and integrated their Cockpit administration module; 7.1 (March 2014) focused on completing the stack with performance improvements and more BPMN 2.0 support; 7.2 (November 2014) added CMMN execution support and a new tasklist; 7.3 (May 2015) added process instance modification, improved authorization models and tasklist plugins; and 7.4 (November 2015) debuted the free downloadable Camunda Modeler based on the bpmn.io web application, and added DMN modeling and execution. Pretty impressive progression of features for a small company with only 18 core developers, and they maintain their focus on providing developer-friendly BPM rather than a user-oriented low-code environment. They support their enterprise editions for 18 months; of their 85-90 enterprise customers, 25-30% are on 7.4 with most of the rest on 7.3. I suspect that a customer base of mostly developers means that customers are accustomed to the cycle of regression testing and upgrades, and far fewer lag behind on old versions than would be common with products aimed at a less technical market.

Today’s 7.5 release marches forward with improvements in CMMN and BPMN modeling, migration of process instances, performance and user interface.

Oddly (to some), Camunda has included Case Management Model & Notation (CMMN) execution in their engine since version 7.2, but has only just added CMMN modeling in 7.5: previously, you would have used another CMMN-compliant modeler such as Trisotech’s, then imported the model into Camunda. Of course, that’s how modeling standards are supposed to work, but a bit awkward. Their implementation of the CMMN modeler isn’t fully complete; they are still missing certain connector types and some of the properties required to link to the execution engine, so you might want to wait for the next version if you’re modeling executable CMMN. They’re seeing a pretty modest adoption rate for CMMN amongst their customers; the messaging from BPMS vendors in general is causing a lot of confusion, since some claim that CMMN isn’t necessary (“just use ad hoc tasks in BPMN”), others claim it’s required but have incomplete implementations, and some think that CMMN and BPMN should just be merged.

On the BPMN modeling side, Camunda BPM 7.5 includes “element templates”, which are configurable BPMN elements to create additional functionality such as a “send email” activity. Although it looks like Camunda will only create a few of these as samples, this is really a framework for their customers who want to enable low-code development or encapsulate certain process-based functionality: a more technical developer creates a JSON file that acts as an extension to the modeler, plus a Java class to be invoked when it is executed; a less technical developer/analyst can then add an element of that type in the modeler and configure it using the properties without writing code. The examples I saw were for sending email and tweets from an activity; the JSON hooked the modeler so that if a standard BPMN Send Task was created, there was an option to make it an Email Task or Tweet Task, then specify the payload in the additional properties that appeared in the modeler (including passed variables). Although many vendors provide a similar functionality by offering things such as Send Email tasks directly in their modeling palettes, this appears to be a more standards-based approach that also allows developers to create their own extensions to standard activities. A telco customer is using Camunda BPM to create their own customized environments for in-house citizen developers, and element templates can significantly add to that functionality.
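For illustration, here is a hedged sketch of what such an element template descriptor might look like (the field names and bindings are approximate; consult Camunda’s element template schema for the real format): the JSON declares which BPMN element type it applies to, and which properties the less technical developer fills in through the modeler’s properties panel.

```json
[
  {
    "name": "Email Task",
    "id": "com.example.EmailTask",
    "appliesTo": ["bpmn:SendTask"],
    "properties": [
      {
        "label": "To",
        "type": "String",
        "binding": { "type": "camunda:inputParameter", "name": "to" }
      },
      {
        "label": "Implementation",
        "type": "Hidden",
        "value": "com.example.SendEmailDelegate",
        "binding": { "type": "property", "name": "camunda:class" }
      }
    ]
  }
]
```

The hidden property wires the element to the Java class written by the more technical developer, while the visible ones become simple form fields in the modeler.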

The process instance migration feature, which is a plugin to the enterprise Cockpit administration module but also available to open source customers via the underlying REST and Java APIs, helps to solve the problem of what to do with long-running processes when the process model changes. A number of vendors have solutions for this, some semi-automated and some completely manual. Camunda’s take on it is to compare the existing process model (with live instances) against the new model, attempt to match current steps to new steps automatically, then allow any step to be manually remapped onto a different step in the new model. Once that migration plan is defined, all running instances can be migrated at once, or only a filtered (or hand-selected) subset. Assuming that the models are fairly similar, such as the addition or deletion of a few steps, this should work well; if there are a lot of topology changes, it might be more of a challenge, since it may be necessary to roll back instance property values if instances are migrated to an earlier step in the process.
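For open source customers working at the REST API level, a migration plan is just a JSON document posted to the engine; the sketch below shows the general shape of such a payload (per Camunda’s migration API), with hypothetical process definition ids and activity ids:

```json
{
  "migrationPlan": {
    "sourceProcessDefinitionId": "invoice:1:a1b2c3",
    "targetProcessDefinitionId": "invoice:2:d4e5f6",
    "instructions": [
      { "sourceActivityIds": ["approveInvoice"], "targetActivityIds": ["approveInvoice"] },
      { "sourceActivityIds": ["reviewInvoice"], "targetActivityIds": ["reviewPayment"] }
    ]
  },
  "processInstanceQuery": { "processDefinitionId": "invoice:1:a1b2c3" }
}
```

Each instruction maps a step in the old model to a step in the new one (the first is an automatic match, the second a manual remapping); the query selects which running instances to migrate, which is how the filtered-subset migration works.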

They have also improved multi-tenancy capabilities for customers who use Camunda BPM as a component within a SaaS platform, primarily by adding tenant identifier fields to their database tables. If those customers’ customers – the SaaS users – log in to Cockpit or a similar admin UI, they will only see their own instances and related objects, without the developers having to create a custom restricted view of the database.

They’ve released a simple process instance duration report that provides a visual interface as well as downloadable data. There’s not a lot here, but I assume this means that they are starting to build out a more robust and accessible reporting platform to play catch-up with other vendors.

Lastly, I saw their implementation of external task handling, another improvement based on customer requests. You can see more of the technical details here and here: instead of a system task calling an external task asynchronously then waiting for a response, this creates a queue that an external service can poll for work. There are some advantages to this method of external task handling, including easier support for different environments: for example, it’s easier to call a Camunda REST API from a .NET client than to put a REST API on top of .NET; or to call a cloud-based Camunda server from behind a firewall than to let Camunda call through your firewall. It also provides isolation from any scaling issues of the external task handlers, and avoids service call timeouts.
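A minimal sketch of such a polling worker, assuming Camunda’s documented fetch-and-lock external task REST endpoints; the engine URL, worker id and topic name are hypothetical:

```python
import json
import time
import urllib.request

ENGINE = "http://localhost:8080/engine-rest"  # hypothetical engine URL


def build_fetch_payload(worker_id, topic, lock_ms=30000, max_tasks=10):
    """Request body for POST /external-task/fetchAndLock."""
    return {
        "workerId": worker_id,
        "maxTasks": max_tasks,
        "topics": [{"topicName": topic, "lockDuration": lock_ms}],
    }


def post(path, payload):
    """POST a JSON payload to the engine and return the parsed response."""
    req = urllib.request.Request(
        ENGINE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        return json.loads(body) if body else None


def poll_forever(worker_id="worker-1", topic="sendEmail"):
    """Repeatedly fetch and lock queued tasks, do the work, complete them."""
    while True:
        tasks = post("/external-task/fetchAndLock",
                     build_fetch_payload(worker_id, topic)) or []
        for task in tasks:
            # ... perform the actual work for this task here ...
            post("/external-task/%s/complete" % task["id"],
                 {"workerId": worker_id})
        if not tasks:
            time.sleep(5)  # back off when the queue is empty

# poll_forever() would run until interrupted, once an engine is reachable
```

Because the worker opens all connections itself, it can sit behind a firewall or in a .NET-adjacent environment and still pull work from a cloud-hosted engine, which is exactly the deployment advantage described above.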

Camunda BPM 7.5

There’s a public webinar tomorrow (June 1) covering this release; you can register for the English one here (11am Eastern time) and the German one here (10am Central European time).

ING Turkey’s journey to becoming a digital bank

I wanted to catch an ActiveMatrix BPM customer breakout session here at TIBCONOW 2016, so I sat in on Rahsan Kalci from ING Turkey talking about their transformation to a digital bank using BPM, co-presenting with a senior BPM architect from TIBCO, Raisa Mahomed. I’ve always thought of ING Bank as innovative, both through personal experience and from reading case studies about how they apply technology to business problems.

ING Turkey’s business problem four years ago was that customer-facing processes were taking too long, were inefficient and inconsistent, and weren’t fully documented, making them difficult for new users to learn. They decided to create a new operating model with AMX BPM at the core, supporting all of their business processes, and are in the midst of an operational transformation with 11 processes already implemented, and several others underway, ranging in complexity and customer engagement. They are building completely custom applications using the APIs rather than leveraging the out-of-the-box workspace tools, since they already had a robust user interface environment that they wanted to integrate with.

Throughput time on the now-standardized processes improved by 55%, providing greatly enhanced customer service that moved them from #6 to #3 in the market. From an operational cost standpoint, transactions per employee increased by 38%, allowing a 36% reduction in operational staff (72 FTEs). By using the workforce management capabilities in AMX BPM, they were able to determine parts of the process that could be near-shored (still in Turkey, but in less expensive locations than Istanbul), resulting in additional cost savings.

They have an overall strategy for which processes to implement in what order. They picked initial processes that were not customer-facing, but still important for their operations, and that could be done manually as a fall-back position. This allowed them to learn the tool and establish best practices, then start to consider processes that directly impact the customer journeys. Although they started with a team made up of both ING and TIBCO people, they are now working completely on their own to build new processes and roll out new applications. Their ultimate goal is to roll out BPM to all core processes, enhance their digital business with support for mobile internal and external users, and use Spotfire analytics more broadly in the back office to improve operational decision-making.

They were an early AMX BPM customer, starting on version 1.0 and now on 3.1, with plans for 4.1 in the near future. Their first process application took them 2 years, but that was a much broader implementation effort that built tools and infrastructure used by all later applications. They’ve had about 20 people working full time on the BPM projects for the past four years, a significant investment on their part but one that is obviously paying off for them.

ActiveMatrix BPM update at TIBCONOW

Roger King, head of BPM product management, gave us an update on ActiveMatrix BPM and Nimbus.

The most recent updates in AMX BPM have focused on data and case management. As we saw in the previous session on case management, their approach is data-centric with pre-defined process snippets that can be selected by the knowledge worker during execution.

As with most other BPMS platform vendors, they are positioning AMX BPM as an application development platform for digital business, including process, UX across multiple channels, and application building tools. Version 4.0, released last year, focused on rapid user experience application development, case management enhancements, and process data and integration enhancements. Previously, you had to be a hard-core coder and aficionado of their APIs to create process applications, but now the app dev is much more accessible with HTML5 UI components (worklist, case, etc.), CSS, JavaScript APIs, and AngularJS and Bootstrap support in addition to the more traditional Java and REST APIs. They’ve also included a number of sample applications to clone and configure, including both worklist and case styles. There is a complete app dev portal for administering and configuring applications, and the ability to change themes and languages, and define roles for the applications. Power developers can use their own favorite web app dev tool, or Business Studio can be used for the more integrated experience.

In their case management enhancements, they’ve added process-to-process global signalling with data, allowing any process to throw or catch global signals to allow for easy correlation between processes that are related based on their business data. In the case world, case data signals provide the same capability with the case object as the catching object rather than a process.

A new graphical mapper allows mapping between data objects, acting as a visual layer over their data transformation scripting.

Service processes are now supported, which are stateless processes for high-speed orchestration.

There is now graphical integration with external REST services, in the same way that you might do with WSDL for SOAP services, to make the integration more straightforward when calling from BPM.

AMX BPM 4.1 is a smaller release announced here at the conference, with the continued evolution of AMX BPM as an app dev platform, some new UI components for case management, and enhancements to the bundled apps for work management and case management styles as a quick-start for building new applications. There are some additional graphical mapper capabilities, and a dependency viewer between projects within Business Studio.

On the Nimbus side, the big new thing is Nimbus Maps, which is a stripped-down version of Nimbus that is intended to be more widely used for business transformation by the business users themselves, rather than by process experts. It includes a subset of the full Nimbus feature set focused just on diagramming and collaboration, at a much lower price point.

A flying tour through recent releases, making it very obvious that I’m overdue for a round of TIBCO briefings.

He next gave us a statement of direction on the product lines, including more self-service assessment, proof of concept and purchasing of products for faster selection and deployment. By the end of 2016, expect to see a new cloud-based business process execution and application development product from TIBCO, which will not be just a cloud layer on their existing products, but a new technology stack. It will be targeted at departmental self-service, with good-enough functionality at a reasonable price point to get people started in BPM, and part of TIBCO’s overall multi-tenant cloud ecosystem. The application composition environment will be case-centric, although it will allow processes to be defined with a simplified BPMN modeling syntax, all in a browser environment. There will be bundled applications that can be cloned and modified.


This is not intended to be a replacement for the enterprise products, but to serve a different market and different personas; regardless, I imagine that a lot of the innovation that they develop in this cloud product will end up back in the enterprise applications. The scaling for the cloud BPM offering will use Docker, which will allow for deployment to private cloud at some point in the future.

With the cloud pivot in progress, the enterprise product development will slow down a bit, but Nimbus will gain a new browser-based user experience.