On the folly of becoming the “product expert”

This post by Charity Majors of Honeycomb popped up in my feed today, and really resonated with me, given our somewhat inbred world of process automation. She is talking about the need to move between software development teams in order to keep building skills, even if it means moving from a “comfortable” position as the project expert to a newbie role:

There is a world of distance between being expert in this system and being an actual expert in your chosen craft. The second is seniority; the first is merely… familiarity.

I see this a lot with people becoming technical experts in a particular vendor product, when it’s really a matter of familiarity with the product rather than superior skill in application development or process automation technology in general. Being dedicated to a single product means that you think about solving problems in the context of that product, not about how process automation problems in general could be solved with a wider variety of technology. Dedication to a single product may make you a better technician, but it does not make you a senior engineer/architect.

Majors uses a great analogy of escalators: becoming an expert on one project (or product) is like riding one long escalator. When you get to the top, you either plateau, or move laterally and start another escalator ride from its bottom up to the next level. Considering this with vendor products in our area, this would be like building expertise in IBM BPM for a couple of years, then moving to building Bizagi expertise for a couple of years, then moving to Camunda for a couple of years. At the end of this, you would have an incredibly broad knowledge of how to deliver process automation projects on a variety of different platforms, which makes you much more capable of making the type of decisions required at the senior architecture and design level.

This broader knowledge base also reduces risk: if one vendor product falls out of favor in the market, you can shift to others that are already in your portfolio. More importantly, because you already understand how a number of different products work, it’s easier to take on a completely new product. Even if that means starting at the bottom of another escalator.

CamundaCon 2020.2 Day 1

I listened to Camunda CEO Jakob Freund‘s opening keynote from the virtual CamundaCon 2020.2 (the October edition), and he really hit it out of the park. I’ve known Jakob a long time and many of our ideas are aligned, so much in his keynote resonated with me. He used the phrase “reinvent [your business] or die”, whereas I’ve been using “modernize or perish”, with a focus not just on legacy systems and infrastructure, but also on legacy organizational culture. Not to hijack this post with a plug for another company, but I’m doing a keynote at the virtual Bizagi Catalyst next week on aligning intelligent automation with incentives and business outcomes, which looks at issues of legacy organizational culture as well as the technology around automation. Processes are, as he pointed out, the algorithms of an organization: they touch everything and are everywhere (even if you haven’t automated them), and a lot of digital-native companies are successful precisely because they have optimized those algorithms.

Jakob’s advice for achieving reinvention/modernization is to undertake a gradual transformation rather than a big bang approach that fails more often than it succeeds, and he positions Camunda (of course) as the bridge between the worlds of legacy and new technology. In my years of technology consulting on BPM implementations, I have also recommended a gradual approach: build bridges between new and old technology, then swap out the legacy bits as you develop or buy replacements. This is where, for example, you can use RPA to create stop-gap task automation with your existing legacy systems, then gradually replace the underlying legacy systems, or at least create APIs to replace the RPA bots.

The second opening keynote was with Marco Einacker and Christoph Anzer of Deutsche Telekom, discussing how they are combining Camunda for the process layer with RPA at the task layer. They started out using RPA for automating tasks and processes, ending up with more than 3,000 bots and an estimated €93 million in savings. It was a very decentralized approach, with bots initially created by business areas without IT involvement, but as they scaled up, they started to look for ways to centralize some of the ideas and technology. The first step was to identify the most important tasks to start with, namely those that were true pain points in the business (Einacker used the phrase “look for the shittiest, most painful process and start there”), not just the easy copy-paste applications. They also looked at how other smart technologies, such as OCR and AI, could be integrated to create completely unattended bots that add significant value.

The decentralized approach resulted in seven different RPA platforms and too much process automation happening in the RPA layer, which increased the amount of technical debt, so they adapted their strategy to consolidate RPA platforms and separate the process layer from the bot layer. In short, they are now using Camunda for process orchestration, and the RPA bots have become tasks that are orchestrated by the process engine. Gradually, they are (or will be) replacing the RPA bots with APIs, which moves the integration from the front end to the back end, making it more robust and less maintenance-heavy.
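This bot-to-API swap is easier to picture with a bit of code. Here’s a minimal sketch (my own illustration, not Deutsche Telekom’s actual implementation) using Camunda’s Java external task client: the BPMN service task only names a topic, so the worker behind that topic can go from triggering a bot to calling a back-end API without touching the process model. The topic name, URL and helper method are all assumptions.

```java
import org.camunda.bpm.client.ExternalTaskClient;

public class InvoiceTaskWorker {

    // Stand-in for the back-end API that eventually replaces the RPA bot
    static void enterInvoiceViaApi(String invoiceId) {
        System.out.println("Calling back-end API for invoice " + invoiceId);
    }

    public static void main(String[] args) {
        // Connect to the Camunda REST API (URL is an assumption for this sketch)
        ExternalTaskClient client = ExternalTaskClient.create()
                .baseUrl("http://localhost:8080/engine-rest")
                .asyncResponseTimeout(10000)
                .build();

        // The BPMN service task is typed as an external task with the
        // hypothetical topic "enter-invoice". Initially the handler could
        // trigger an RPA bot; later it calls the back-end API directly,
        // and the process model never changes.
        client.subscribe("enter-invoice")
                .lockDuration(60000)
                .handler((task, taskService) -> {
                    String invoiceId = task.getVariable("invoiceId");
                    enterInvoiceViaApi(invoiceId); // was: trigger the bot
                    taskService.complete(task);
                })
                .open();
    }
}
```

The point of the pattern is that the integration style is a deployment detail of the worker, not of the process: the orchestration layer stays stable while the task layer is modernized underneath it.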

I moved off to the business architecture track for a presentation by Srivatsan Vijayaraghavan of Intuit, which is using Camunda for three different use cases: their own internal processes, some customer-facing processes for interacting with Intuit, and — most interesting to me — enabling their customers to create their own workflows across different applications. Their QuickBooks customers are primarily small and mid-sized businesses that don’t have the skills to set up their own BPM system (although arguably they could use one of the many low-code process automation platforms to do at least part of this), which opened the opportunity for Intuit to offer a workflow solution based on Camunda but customizable by the individual customer organizations. Invoice approval was an obvious place to start, since Accounts Payable is a problem area in many companies; they then expanded to other approval types and integration with non-Intuit apps such as e-signature and CRM. Customers can even build their own workflows: a true workflow-as-a-service model, with pre-built templates for common workflows, integration with all Intuit services, and a simplified workflow designer.

Intuit customers don’t interact directly with Camunda services; Camunda is a separately hosted and abstracted service, and they’ve used Kafka messages and external task patterns to create the cut-out layer. They’ve created a wrapper around the modeling tools, so that customers use a simplified workflow designer instead of the BPMN designer to configure the process templates. There is an issue with a proliferation of process definitions as each customer creates their own version of, for example, an invoice approval workflow — he mentioned 70,000 process definitions — and they will likely need to do some sort of automated cleanup as the platform matures. Really interesting use case, and one that could be used by large companies that want their internal customers to be able to create/customize their own workflows.
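Intuit didn’t show code for the cut-out layer, but the shape of the pattern is familiar; here’s a minimal sketch, with hypothetical topic names and addresses, of how an external task worker can publish to Kafka so that the customer-facing applications never call Camunda directly.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.camunda.bpm.client.ExternalTaskClient;

public class ApprovalEventBridge {
    public static void main(String[] args) {
        // Kafka producer setup; broker address and topic names are assumptions
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // A worker subscribes to a hypothetical "notify-approver" topic and
        // publishes an event to Kafka; the customer-facing app consumes the
        // event and replies on its own channel, never calling Camunda itself.
        ExternalTaskClient.create()
                .baseUrl("http://camunda-internal:8080/engine-rest")
                .build()
                .subscribe("notify-approver")
                .lockDuration(30000)
                .handler((task, taskService) -> {
                    String invoiceId = task.getVariable("invoiceId");
                    producer.send(new ProducerRecord<>("approval-requests",
                            task.getProcessInstanceId(), invoiceId));
                    taskService.complete(task);
                })
                .open();
    }
}
```

The messaging layer is what makes Camunda a fully abstracted service here: the workflow engine can be upgraded, scaled or even replaced without the customer-facing applications noticing.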

The next presentation was by Stephen Donovan of Fidelity Investments and James Watson of Doculabs. I worked with Fidelity in 2018-19 to help create the architecture for their digital automation platform (in my other life, I’m a technical architecture/strategy consultant); it appears that they’re not up and running with anything yet, but they have been engaging the business units on thinking about digital transformation and how the features of the new Camunda-based platform can be leveraged when the time comes to migrate applications from their legacy workflow platform. This doesn’t seem to have advanced much since they talked about it at the April CamundaCon, although Donovan offered more detailed insights into how they are doing this.

At the April CamundaCon, I watched Patrick Millar’s presentation on using Camunda for blockchain ledger automation — or rather, I watched part of it: his internet died partway through and I missed the part about how they are using Camunda, so I’m back to see it now. The RiskStream Collaborative is a not-for-profit consortium collaborating on the use of blockchain in the insurance industry; their parent organization, The Institutes, provides risk management and insurance education and is guided by senior executives from the property and casualty industry. To copy from my original post, RiskStream is creating a distributed network platform, called Canopy, that allows their insurance company members to share data privately and securely, and participate in shared business processes. Whenever you have multiple insurance companies in an insurance process, like a claim for a multi-vehicle accident, having shared business processes — such as first notice of loss and proof of insurance — between the multiple insurers means that claims can be settled more quickly and at much lower cost.

I do a lot of work with insurance companies, as well as with BPM vendors to help them understand insurance operations, and this really resonates: the FNOL (first notice of loss) process for multi-party claims continues to be a problem in almost every company, and using enterprise blockchain to facilitate interactions between the multiple insurers makes a lot of sense. Note that they are not creating or replacing claims systems in any way; rather, they are connecting the multiple insurance companies, which would then integrate Canopy with their internal claims systems such as Guidewire.

Camunda is used in the control framework layer of Canopy to manage the flows within the applications, such as the FNOL application. The control framework is just one slice of the platform: there’s the core distributed ledger layer below that, where the blockchain data is persisted, and an integration layer above it to integrate with insurers’ claims systems as well as the identity and authorization registry.

There was a Gartner keynote, which gave me an opportunity to tidy up the writing and images for the rest of this post, then I tuned back in for Niall Deehan’s session on the Camunda Hackdays over on the community tech track, covering some of the interesting creations that came out of the recent virtual version. This drives home the point that Camunda is, at its heart, open source software that relies on a community of developers both within and outside Camunda to extend and enhance the core product. The examples presented here were all done by Camunda employees, although many of them are not part of the development team, but come from areas such as customer-facing consulting. These were pretty quick demos so I won’t go into detail, but the projects are all on GitHub.

If you’re a Camunda customer (open source or commercial) and you like one of these ideas, head on over to the related GitHub page and star it to show your interest.

There was a closing keynote by Capgemini; like the Gartner keynote, I felt that it wasn’t a great fit for the audience, but those are my only real criticisms of the conference so far.

Jakob Freund came back for a conversation with Mary Thengvall to recap the day. If you want to see the recorded videos of the live sessions, head over to the agenda page and click on Watch Now for any session.

There’s a lot of great stuff on the agenda for tomorrow, including CTO Daniel Meyer talking about their new RPA orchestration capabilities, and I’ll be back for that.

#PegaWorld iNspire 2020

PegaWorld, in shifting from an in-person to a virtual event, dropped down to a short 2.5 hours. The keynotes and many of the breakouts appeared to be mostly pre-recorded, hosted live by CTO Don Schuerman, who provided some welcome comic relief and moderated live Q&A with each of the speakers after their session.

The first session was a short keynote with CEO Alan Trefler. It’s been a while since I’ve had a briefing with Pega, and their message has shifted strongly to the combination of AI and case management as the core of their digital platform capabilities. Trefler also announced Pega Process Fabric, which allows the integration of multiple systems, not just from Pega but from other vendors as well.

Next up was SVP of Products Kerim Akgonul, discussing their low-code Pega Express approach and how it’s helping customers to stand up applications faster. We heard briefly from Anna Gleiss, Global IT Head of Master Data Management at Siemens, who talked about how they are leveraging Pega to ensure reusability and speed deployment across the 30 different applications that they’re running in the Pega Cloud. Akgonul continued with use cases for self-service — especially important with the explosion in customer service volumes in some industries due to the pandemic — and some of their customers, such as Aflac, who are using Pega to further their self-service efforts.

There was a keynote by Rich Gilbert, Chief Digital and Information Officer at Aflac, on the reinvention that they have gone through. There’s a lot of disruption in the insurance industry now, and they’ve been addressing this by creating a service-based operating model to deliver digital services as a collaboration between business and IT. They’ve been using Pega to help them with their key business drivers of settling claims faster and providing excellent customer service with offerings such as “Claims Guest Checkout”, which lets someone initiate a claim through self-service without knowing their policy number or logging in, and a Claims Status Tracker available on their mobile app or website. They’ve created a new customer service experience using a combination of live chat and virtual assistants, the latter of which is resolving 86% of inquiries without moving to a live agent.

Akgonul also provided a bit more information on the Process Fabric, which acts as a universal task manager for individual workers, with a work management dashboard for managers. There was no live Q&A at this point; it was deferred to a Tech Talk later in the agenda. In the interim was a one-hour block of breakouts that had one track of three live build sessions, plus a large number of short prerecorded sessions from Pega, partners and customers. I’m interested in more information on the Process Fabric, which I expect will come in the later Tech Talk, although I did grab some screenshots from Akgonul’s keynote.

The live build sessions seemed to be overloaded and there was a long delay getting into them, but once started, they were good-quality demos of building Pega applications. I came in partway through the first one, on low-code development using App Studio, and it was quite interactive, with a moderator dropping in occasionally with live questions, and eventually hurrying the presenter along to finish on time. I was only going to stay for a couple of minutes, but it was pretty engaging and I watched all of it. The next live demo was on data and integration, and built on the previous demo’s vehicle fleet manager use case to add data from a variety of back-end sources. The visuals were fun, too: the presenter’s demo filled most of the screen, with a bubble at the bottom right containing a video of the speaker, and a bubble popping in at the bottom left with the moderator when he had a question or comment. Questions from the audience helped to drive the presentation, making it very interactive. The third live demo was on user experience, which had a few connectivity issues so I’m not sure we saw the entire demo as planned, but it showed the creation of the user interface for the vehicle manager app using the Cosmos system, moving a lot of logic out of the UI and into the case model.

The final session was the Tech Talk on product vision and roadmap with Kerim Akgonul, moderated by Stephanie Louis, Senior Director of Pega’s Community and Developer Programs. He discussed Process Fabric, Project Phoenix, Cosmos and other new product releases in addition to fielding questions from social media and Pega’s online community. This was very interactive and engaging, much more so than his earlier keynote which seemed a bit stiff and over-rehearsed. More of this format, please.

In general, I didn’t find the prerecorded sessions to be very compelling. Conference organizers may think that prerecording sessions reduces risk, but it also reduces spontaneity and energy from the presenters, which is a lot of what makes live presentations work so well. The live Q&A interspersed with the keynotes was okay, and the live demos in the middle breakout section as well as the live Tech Talk were really good. PegaWorld also benefited from Pega’s own online community, which provided a more comprehensive discussion platform than the broadcast platform chat or Q&A. If you missed today’s event, you should be able to find all of the content on demand on the PegaWorld site within the next day or two.

Building Scalable Business Automation with Microservices – a paper I created for @Camunda

Last year, I did a few presentations for Camunda: a keynote at their main conference in Berlin, a webinar together with CEO Jakob Freund, and a presentation at their Camunda Day in Toronto, all on a similar theme of building a scalable digital automation platform using microservices and a BPMS.

I wrapped up my ideas for those presentations into a paper, which you can download from the Camunda website. Enjoy!

bpmNEXT 2018: Application Development with ProcessMaker, Capital BPM, Camunda

Next-Generation Backendless Workflow Orchestration API for ISVs, ProcessMaker

Brian Reale and Taylor Dondich from ProcessMaker presented their new ProcessMaker.io product, a BPMN 2.0 workflow microservice API in the cloud, targeted at ISVs that want to add process management capabilities to their vertical products. It is intended to solve the problem of software vendors who want customized workflow features without having to embed a full BPMS platform. They provide a simplified JavaScript process designer that ISVs can present to their end users, although a full BPMN designer could be used and the results imported into the environment, and there’s a simple task invocation interface that can be called from pretty much any language or environment via language-specific SDKs and generalized REST APIs. The demo showed creating a new environment, and walked through a Slack integration application where Slack becomes the task list user interface, and simple HTML forms are used as the task processing UI (which could be any UI environment). This is a developer tool, not an end-user or low-code tool; check out their GitHub for SDK and connector code as well as samples, and their own site for videos and descriptions of use cases. There was some pushback on the use of the term “microservice”; it’s really a lean, cloud-based BPM engine that provides fast, scalable, enterprise-grade workflow capabilities. Although I haven’t done any direct comparison, there’s at least some overlap with Camunda’s Zeebe.io offering.
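As a rough idea of what that task invocation might look like from an ISV’s own code, here’s a sketch using plain Java HTTP; the endpoint path, payload and token are hypothetical illustrations, not ProcessMaker.io’s actual API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WorkflowTrigger {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and payload, illustrative only: the idea is
        // that starting a process instance is a single authenticated REST call
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/v1/processes/refund-approval/instances"))
                .header("Authorization", "Bearer <api-token>")
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"orderId\": \"12345\", \"amount\": 99.95}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Instance started: " + response.body());
    }
}
```

That single-call surface area is the appeal for ISVs: workflow becomes just another API dependency rather than an embedded platform.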

CapBPM’s IQ no-code BPM development – Turning Ideas into Value, Capital BPM

Max Young from Capital BPM talked about their no-code code generator: a graphical environment that can import industry-standard models (including BPMN, but also IBM BPM’s application format), augment them with functions such as service calls and user interfaces, and export a BPM application in a number of different formats, including those that can be imported into BPMS vendors’ products, or open source code. The demo showed how they can start with an application template that includes process and data models, then have the tool use AI to suggest UI layouts and other application parameters. There are a number of analysis tools for simulating processes and visualizing interactions between components (such as between a process model and a decision model). He created a process application from scratch, defining data fields, allowing auto-layout to suggest a visual form that he then modified to add logic to fields, and defining a BPMN process model to create an application shell. He then exported to both IBM BPM and Camunda BPM, which deployed the application to each of those environments and created application dashboards. The goal of this product appears to be to allow a broader range of people to rapidly develop BPM apps without being trained in the specific target BPM tool, with the resulting application passed off to a development team that will maintain it in the long term. For low-code tools such as IBM BPM, that may not be a perfect use case, but for products that are targeted at developers, such as Camunda, it might be a better fit as a UI and application code generator.

Monitoring Transparency for High-Volume, Next-Generation Workflows, Camunda

Ryan Johnston of Camunda presented on their Zeebe.io product, which (like the new ProcessMaker.io offering discussed above) is a microservice orchestration engine; more specifically, he showed how to monitor Zeebe’s performance by pairing it with Camunda Optimize to create heatmaps and other reports. The demo was based on a stock market pairs trading arbitrage use case, where a third-party process detects arbitrage opportunities and sends a signal that instantiates a Zeebe process; this process calls services to calculate the risk, calculate the long/short positions, and execute the trade. Speed and volume are key, since rapidly changing market conditions could impact the effectiveness of the trade, hence the requirement for a high-performance engine like Zeebe, but also the need to monitor performance. The Zeebe Simple Monitor is the first of the administration tools being ported to this environment from the main Camunda product, providing a lighter-weight version of Cockpit. Camunda Optimize is used directly to view Zeebe performance, with the ability to create reports and assemble them into dashboards that show metrics such as flow node distribution (in pie chart, heatmap and tabular formats), process instance count, and raw process instance data. He also demonstrated alerts, which can notify (by email) when specific values hit certain milestones, such as process instance count exceeding a threshold. He finished with one of Camunda’s fun add-ons, a video game view of a process model that allows you to walk through a 3D representation and shoot to kill process instances. There was an interesting audience question on using Zeebe as a smart event bus in addition to standard process applications at high volume.
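For context on what one of those services looks like from the developer’s side, here’s a minimal job worker sketch using the Zeebe Java client; the job type, variable names and gateway address are my assumptions based on the use case described, and the client API has evolved considerably since the version demoed here.

```java
import java.util.Map;

import io.camunda.zeebe.client.ZeebeClient;

public class RiskCalculationWorker {
    public static void main(String[] args) throws InterruptedException {
        // Gateway address is an assumption; newer Zeebe clients use a
        // different package and API than the 2018-era version demoed here
        try (ZeebeClient client = ZeebeClient.newClientBuilder()
                .gatewayAddress("localhost:26500")
                .usePlaintext()
                .build()) {

            // Worker for a hypothetical "calculate-risk" service task in the
            // arbitrage process; Zeebe streams jobs to it as trades arrive
            client.newWorker()
                    .jobType("calculate-risk")
                    .handler((jobClient, job) -> {
                        double exposure = 0.0; // real risk calculation goes here
                        jobClient.newCompleteCommand(job.getKey())
                                .variables(Map.of("exposure", exposure))
                                .send()
                                .join();
                    })
                    .open();

            Thread.sleep(Long.MAX_VALUE); // keep the worker polling
        }
    }
}
```

Because workers like this are stateless and pull-based, they can be scaled horizontally to keep up with market volume, which is exactly the performance profile Optimize is being used to watch.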

Vega Unity 7: productizing ECM/BPM systems integration for better user experience and legacy modernization

I recently had the chance to catch up with some of my former FileNet colleagues, David Lewis and Brian Gour, who are now at Vega Solutions and walked me through their Unity 7 product release. Having founded and run a boutique ECM and BPM services firm in the past, I have a soft spot for the small companies who add value to commercial products by building integration layers and vertical solutions to do the things that those products don’t do (or don’t do very well).

Vega focuses on enterprise content and process automation, primarily for financial and government clients. They have some international offices – likely development shops, based on the locations – and about 150 consultants working on customer projects. They are partners with both IBM and Alfresco for ECM and BPM products for use in their consulting engagements. Like many boutique services firms, Vega has developed products in the course of their consulting engagements that can be used independently by customers, built on the underlying partner technology plus their own integration software:

  • Vega Interchange, which takes one of their core competencies, content migration, and creates an ETL platform for moving content and processes between any of a number of systems, including Documentum, Alfresco, OpenText, four flavors of IBM, and shared folders on file systems. Content migration is typically pretty complex by the time you consider metadata and permissions mappings, but they also handle case data and process instances, which is rarely tackled in migration scenarios (most vendors just recommend that you keep the old system alive long enough for all instances to complete, or do manual migration). Having helped a lot of companies think about moving their content and process management systems to another platform, I know that this is one of those things that sounds mundane but is actually difficult to do well.
  • Vega Unity, billed as a digital transformation platform; we spent most of our time talking about Unity 7, their latest release, which I’ll cover in more detail below.
  • Vertical solutions for insurance (underwriting, claims, financial operations), government (case management, compliance) and banking (onboarding, loan origination and servicing, wealth management, card dispute resolution).

Unity 7 is an integration and application development tool that links third-party content and process systems, adding a consistent user experience layer and consolidated analytics. Vega doesn’t provide any of the back-end systems, although they partner with a couple of the vendors, but provides tools to take that heterogeneous desktop environment and turn it into a single user interface. This has significant value in simplifying the user environment, since users only need to learn one system and some of the inter-system integration is automated behind the scenes, but it’s also of benefit when replacing one or more of the underlying technologies, whether for legacy modernization or for technology consolidation after a corporate acquisition. This is what systems integrators have been doing for a long time, but Unity makes it into a product that also leverages the deep system knowledge that Vega has from their Interchange product. Vega can add Unity to simplify an existing environment, or come in on a net-new ECM/BPM implementation that uses one of their partner technologies plus their application development/integration layer. The primary use cases are federated enterprise content search (where content is indexed in the Unity Intelligence engine, including semantic searches), case management applications, and legacy modernization, where a new front end is created on legacy systems to allow these to be swapped out without changing the user environment.

Unity is all about rapid development that includes case-based applications, content management, data and analytics. As we walked through the product and sample applications, there was definitely a strong whiff of FileNet P8 in here (a system that I used to be very familiar with) since the sample was built with IBM Case Manager under the covers, but some nice additions in terms of unified interface and analytics.

Their claim is that the Unity Case Manager looks the same regardless of the underlying technology, which would definitely make it easier to swap out or federate content, case and process management systems behind the scenes. In the sample shown, since IBM Case Manager was primary, the case view was derived directly from IBM CM case data with the main document list from IBM FileNet P8, while the “Other Documents” tab showed related documents from Alfresco. Dynamic foldering can combine content from different systems into common folders to reduce this visual dichotomy. There are role-based views based on the user profile that provide access to data from multiple systems – including CRM and others in addition to ECM and BPM – and federate it into business objects that can include records, virtual folder structures and related objects such as people or claims. Individual user credentials can be passed to the underlying systems, or shared credentials can be used in connectors for retrieving unrestricted information. Search templates, system connectors and a variety of properties are set in a configuration console, making it straightforward to set up and modify standard operations; since this is an XML-based declarative environment, these configuration changes deploy immediately. The ability to make different types of configuration changes is role-based, meaning that some business users can be permitted to make changes to the shared user interface if desired.

Unity Intelligence adds a layer of visual analytics that aggregates data from the underlying systems and other sources; this isn’t just visualization, however, but can be used to filter work and take action on cases directly via action popup menus, or to open cases directly from the analytics interface. They’re using open source tools such as Solr (search), Lucene (information retrieval) and D3 (visualization) to good effect: I saw a demo of a Sankey diagram representing the workflow through cases based on realtime data, which provided a sort of process mining view of work in progress, and allowed selecting dates for past views of work, including completed cases. For case management, in which processes are semi-structured (at best), this won’t necessarily show process anomalies, but it can show service interruptions and opportunities for process improvement and standardization.

They’ve published a video showing more about Unity 7 Intelligence, as well as one showing Unity Semantics for creating pivot tables for faceted search on content repositories.


Getting started with OpenText case management

I had a demo from Simon English at the OpenText Enterprise World expo earlier this week, and now he and Kelli Smith are giving a session on their dynamic case management offering. English started by defining case management:

  • Management of dynamic, unstructured processes
  • Processes are driven by events or human interactions to support faster, more accurate decisions
  • Decisions are tied to content and the case directs that content to the right conclusion

In their terms, a case is a transaction that is “opened” and “closed” over a period of time: resolve a problem, settle a claim, or fulfill a request. There may be many different types of participants required to complete the case, and a variety of content and data involved.

Similar to the approach of other vendors, OpenText equates “case management” with “vertical application development” to a certain extent, and getting to case handling quickly requires a blueprint to quick-start solution development. To that end, they provide an accelerator as part of Process Suite that includes a pre-defined case model and entities to provide a starting point for developing a case management application, particularly for incident management or service requests. Essentially, it’s a sample app/template, albeit a well-structured one that can easily be modified for actual solutions; they have no illusions that this is going to be an out-of-the-box solution for anyone, but rather a guide for people creating new case management applications so that they don’t need to start from scratch.

If you refer back to the more complete description of AppWorks Low Code that I gave in the previous post, they have defined entities, forms, layouts and a case lifecycle that fit a wide variety of request-style case management applications.

Smith then gave us a demonstration of People Center — similar to what we saw her do on the main stage on Tuesday — and discussed how they used the case management accelerator as a starting point for developing the People Center application. They used some parts of the template pretty much as is — such as the request creation form — but made it specific to HR management and extended the capabilities to suit, including a dashboard specific to each role. Checklists and options are specific to the HR application, but as discussed in previous posts, those will persist through an upgrade of the underlying People Center application.

She also walked us through the case management accelerator in the development environment, showing the fairly complete set of entities, forms, layouts, action bars, lists, relationships, rules, email templates, BPM processes, roles and other objects, as well as how easy it is to modify them for your own use. For any partners in the audience, or even customer developers, this will resonate as a method of quickly creating a fully-customized application based on the template that addresses a specific vertical functionality.

OpenText Process Suite becomes AppWorks Low Code

“What was formerly known as Process Suite is AppWorks Low Code, since it has always been an application development environment and we don’t want the focus to be on a single technology within it.”  – Dana Khoyi, architect of OpenText’s Process Suite

That pretty much sums up the biggest BPM positioning/branding announcement at OpenText Enterprise World 2017 this week. BPM is dead, long live low-code application development? Note that AppWorks is the name used for all OpenText developer tools: the technical developer APIs and access points, plus this low-code product, which is really a separate product.

Khoyi and Kelli Smith (who did the main stage People Center demo on Tuesday) led a session on the last day of Enterprise World to show how AppWorks is used to create applications, starting with defining composite entities (business objects made up of multiple pieces of data), then UI constructs including forms, dashboards and lists. Because process and content are built into the environment, there are easy building blocks for content lifecycle, activity flow and history. Declarative rules are supported — triggered on conditions, events or user actions — with the option of dropping out to a full process model for more complex flows and events. They also have a development framework for building customizable applications that persists customizations separately from the application and merges them at runtime, allowing a new version of the core application to be installed without discarding the previous customizations, although obviously you’d want to test, and some minor retrofits might be required.
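That customizations-merged-at-runtime idea is worth a tiny illustration. This is a generic sketch of the concept (not OpenText’s actual mechanism) using Java properties, where the base application supplies defaults and separately-stored customizations override them at load time:

```java
import java.io.StringReader;
import java.util.Properties;

public class MergedAppConfig {
    public static void main(String[] args) throws Exception {
        // Base application definition, standing in for the core app package
        Properties base = new Properties();
        base.load(new StringReader("form.title=Service Request\nlist.pageSize=20"));

        // Customer customizations kept as a separate artifact; loading them
        // over the base merges at runtime, so upgrading the base definition
        // does not discard the customizations
        Properties merged = new Properties(base); // base supplies defaults
        merged.load(new StringReader("form.title=HR Request"));

        System.out.println(merged.getProperty("form.title"));    // HR Request (customized)
        System.out.println(merged.getProperty("list.pageSize")); // 20 (inherited from base)
    }
}
```

The design consequence is the one Khoyi described: the vendor can ship a new base version, and only the deltas that customers actually changed need to be retested.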

Application development starts by defining the core entity for the application (think process or case instance class), then adding properties (data fields) and building blocks: forms to edit and display those properties (as well as built-in properties such as state); lists that can be worklists or reporting artifacts; and layouts, which are essentially the application UI screens and can include the previously-created forms plus actions, breadcrumbs, and related content. Data/content security and access/update conflicts are handled automatically on the forms/layouts based on underlying security definitions. Apps that are created can be published immediately to run; these can be moved as packages between testing and production environments, although it’s not clear that there’s any versioning or automation around that, so likely some manual governance is required.

Other building blocks that can be added to an application include:

  • A history log that maintains a complete audit trail of everything done during the instance including field-level data changes
  • A discussion for collaborative chat/comments on an instance
  • Content, which can be files/folders that are attached to the case instance using a local document store or another content store via a connector or CMIS, or a business workspace within Content Server (using Extended ECM), which stores the content in CS and allows access from either environment while syncing properties between them.
  • Email templates that provide a form letter email capability for inbound/outbound email associated with the case
  • Three ways of managing work:
    • Lifecycle, which is a state machine-oriented view (i.e., milestones and the actions required to move between states) for a simple case workflow (see the sketch after this list)
    • BPM, for a full drop to the BPMN editor for complex process flows
    • Action flow, which is a simple sequence flow
  • Mobile app creation
  • Entity relationships
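
The lifecycle option is essentially a state machine over the case entity. As a generic illustration of the milestone-and-action style it describes (my own sketch, not OpenText’s API, with hypothetical states for a service request case):

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.Set;

public class RequestLifecycle {

    // Hypothetical milestones for a service request case
    enum State { SUBMITTED, IN_REVIEW, APPROVED, REJECTED, CLOSED }

    // Actions allowed from each state; moving between milestones is only
    // possible along these defined transitions
    private static final Map<State, Set<State>> TRANSITIONS = new EnumMap<>(Map.of(
            State.SUBMITTED, Set.of(State.IN_REVIEW),
            State.IN_REVIEW, Set.of(State.APPROVED, State.REJECTED),
            State.APPROVED, Set.of(State.CLOSED),
            State.REJECTED, Set.of(State.CLOSED),
            State.CLOSED, Set.of()));

    private State current = State.SUBMITTED;

    public void moveTo(State next) {
        if (!TRANSITIONS.get(current).contains(next)) {
            throw new IllegalStateException(current + " -> " + next + " is not allowed");
        }
        current = next;
    }

    public static void main(String[] args) {
        RequestLifecycle request = new RequestLifecycle();
        request.moveTo(State.IN_REVIEW);
        request.moveTo(State.APPROVED);
        request.moveTo(State.CLOSED);
        System.out.println("Request reached its final milestone");
    }
}
```

For many request-style cases this is all the process logic needed, which is presumably why it sits alongside the full BPMN option rather than replacing it.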

There’s a lot of stuff in here, and we didn’t see it all in this short session, but it looks like a pretty robust environment for low-code development. Khoyi stated explicitly that this is becoming the development environment for all OpenText products, replacing the workflow capabilities in Content Server and Documentum.

Bridging the bimodal IT divide

I wrote a paper a few months back on bimodal IT: a somewhat controversial subject, since many feel that IT should not be bimodal. My position is that it already is – with a division between “heavy” IT development and lighter-weight citizen development – and we need to deal with what’s there with a range of development environments, including low-code BPMS. From the opening section of the paper:

The concept of bimodal IT – having two different application development streams with different techniques and goals – isn’t new. Many organizations have development groups that are not part of the standard IT development structure, including developers embedded within business units creating applications that IT couldn’t deliver quickly enough, and skunkworks development groups prototyping new ideas.

In many cases, this split didn’t occur by design, but out of situational necessity when mainstream IT development groups were unable to service the needs of the business, especially transformational business units created specifically to drive innovation. However, in the past few years, analysts have positioned this split as a strategic action to boost innovation. By 2013, McKinsey & Company was proposing “greenfield IT” – technology managed independently of the legacy application development and infrastructure projects that typically consume most of the CIO’s attention and budget – as a way to innovate more effectively. They found a correlation between innovation and corporate growth, and saw greenfield IT as a way to achieve that innovation. By the end of 2014, the term “bimodal IT” was becoming popular, with Mode 1 being the traditional application development cycle focused on stability, well suited to infrastructure and legacy maintenance, and Mode 2 focused on agility and innovation, similar to McKinsey’s greenfield IT.

Read on by downloading the paper from Software AG’s site; right now, it looks like registration isn’t required.

American Express digital transformation at Pegaworld 2016

Howard Johnson and Keith Weber from American Express talked about their digital transformation to accommodate their expanding market of corporate card services for global accounts, middle market and small businesses. Digital servicing using their @work portal was designed with customer engagement in mind, and developed using Agile methodologies for improved flexibility and time to market. They developed a set of guiding principles: it needed to be easy to use, scalable to any size of servicing customer, and proactive in providing assistance on managing cash flow and other non-transactional interactions. They also wanted consistency across channels, rather than their previous hodge-podge of processes and teams that varied by channel.


AmEx used to be a waterfall development shop — which enabled them to offshore a lot of the development work, but meant 10-16 month delivery times — but they have moved to small, agile teams with continuous delivery. Interesting when I think back to this morning’s keynote, where Gerald Chertavian of Year Up said that they were contacted by AmEx about providing trained Java/Pega developers to help them with re-onshoring their development teams; the AmEx presenter said that he had four of the Year Up people on his team and they were great. This is a pretty negative commentary on the effectiveness of outsourced, offshore development teams for agile and continuous delivery, which is considered essential for today’s market. AmEx is now hiring technical people for onshore development that is co-located with their business process experts, greatly reducing delivery times and improving quality.


Technology-wise, they have moved to an omni-channel platform that uses Pega case management, standardizing 65% of their processes while providing a single source of truth. This has resulted in faster development (lower cost per market and faster integration, with improved configurability) while enabling future capabilities including availability, analytics and a process API. On the business side, they’re looking at a lot of interesting capabilities for the future: big data-enabled insights, natural language search, pluggable widgets to extend the portal, and frequent releases to keep rolling new capabilities out to customers.

It sounds like they’re starting to use best practices from a technology design and development standpoint, and that’s really starting to pay off in customer experience. It will be interesting to see if other large organizations — with large, slow-moving offshore development shops — can learn the same lessons.