In that section, which I called “Beyond Checklists”, I looked at the ways that we are making ACM smarter, using rules, analytics, machine learning and other technologies. That doesn’t mean that ACM will become less reliant on the knowledge workers that work cases; rather, these technologies support them through recommendations and selective automation.
I also cover the ongoing challenges of collaboration within ACM, particularly the issue of encouraging collaboration through social metrics that align the actions of individuals with corporate goals.
You’ll find chapters by many luminaries in the field of BPM and ACM, including some real-world case studies of ACM in action.
I finished up Pegaworld 2016 at a panel of Pega technology executives who provided the vision and roadmap for CRM and Pega 7. Don Schuerman moderated the panel, which included Bill Baggott, Kerim Akgonul, Maarten Keijzer, Mark Replogle and Steve Bixby.
OpenSpan's RPA and RDA products are Pega's most recent technology acquisition; by next year, workforce intelligence will be a key differentiator, helping Pega's customers analyze user behavior and detect patterns for potential improvement and automation
Digital channels are now outpacing call center channels in the market; Pega is enabling chat and other social capabilities for customer interaction, but needs to consider how to integrate the text-based social channels into a seamless experience
Developing with Pega 7 models isolates the applications from the underlying cloud and/or containerization platforms, so that new platforms and configurations can be put in place without changing the apps
New data visualizations based on improved analytics are evolving, providing opportunities for discovery of business practices
A simpler modeling environment allows non-technical designers to create and configure apps without accessing the Designer Studio; at this point, technical developers are likely needed for some capabilities
They are looking at newer data management technologies, e.g., NoSQL and blockchain, to see how they might fit into Pega’s technology stack; no commitments, but interesting to hear the discussion of the use of blockchain for avoiding conflicts and deadlocks in active-active-active scenarios without having to build it into applications explicitly
Some new technology components, developed by Pega or third parties, may be available via Pega Exchange as add-on apps rather than built into the product stack directly
They hope to be able to offer more product trials in the future, which can be downloaded (or accessed directly in the cloud) for people to check out the capabilities
DevOps is seen as more of a cultural than technology shift, where releases are sped up since code isn’t just thrown over the wall from development to deployment, but remain the responsibility of the dev team; a container strategy is an important component of having this run smoothly
UX design is critical in creating an appropriate customer experience, but also in supporting developers who are developing apps; Pega Express is regularly fine-tuned to provide an optimal modeling/design experience, and Pega Design provides insight into their design thinking initiatives
All of the data sources, including IoT events, contribute to analytics and machine learning, and therefore to the constant improvement of Next Best Action recommendations
They would like to be able to offer more predictive and intelligent customer service capabilities; lots of “wish list” ideas from the entire panel
This is the last session on day 2; tomorrow is all training courses and I’ll be heading home. It’s been a pretty full couple of days and always good to see what Pega’s up to.
Howard Johnson and Keith Weber from American Express talked about their digital transformation to accommodate their expanding market of corporate card services for global accounts, middle market and small businesses. Digital servicing using their @work portal was designed with customer engagement in mind, and developed using Agile methodologies for improved flexibility and time to market. They developed a set of guiding principles: it needed to be easy to use, scalable to be able to manage any size of servicing customer, and proactive in providing assistance on managing cash flow and other non-transactional interactions. They also wanted consistency across channels, rather than their previous hodge-podge of processes and teams that varied by channel.
AmEx used to be a waterfall development shop — which enabled them to offshore a lot of the development work but meant 10-16 months delivery time — but have moved to small, agile teams with continuous delivery. Interesting when I think back to this morning’s keynote, where Gerald Chertavian of Year Up said that they were contacted by AmEx about providing trained Java/Pega developers to help them with re-onshoring their development teams; the AmEx presenter said that he had four of the Year Up people on his team and they were great. This is a pretty negative commentary on the effectiveness of outsourced, offshore development teams for agile and continuous delivery, which is considered essential for today’s market. AmEx is now hiring technical people for onshore development that is co-located with their business process experts, greatly reducing delivery times and improving quality.
Technology-wise, they have moved to an omni-channel platform that uses Pega case management, standardizing 65% of their processes while providing a single source of the truth. This has resulted in faster development (lower cost per market and integration time, with improved configurability) while enabling future capabilities including availability, analytics and a process API. On the business side, they’re looking at a lot of interesting capabilities for the future: big data-enabled insights, natural language search, pluggable widgets to extend the portal, and frequent releases to keep rolling this out to customers.
It sounds like they’re starting to use best practices from a technology design and development standpoint, and that’s really starting to pay off in customer experience. It will be interesting to see if other large organizations — with large, slow-moving offshore development shops — can learn the same lessons.
I attended a breakout panel on how the idea and usage of personal data are changing. It was moderated by Alan Marcus of the World Economic Forum (nice socks!), and included Richard Archdeacon of HP, Rob Walker from Pega and Matt Mobley from Merkle.
The focus is on customer data as it is maintained in an organization’s systems, and the regulations that now drive how that data is managed. The talk was organized around three key themes that are emerging from the global dialog: strengthening trust and accountability; understanding usage-based, individual-centric frameworks; and engaging the individual. Thoughts from the panel:
Once you have someone’s data, you remain responsible for it even as you pass it to other parties
Customer data management is now regulation-driven
It’s not enough to restrict values in a customer data set; it’s now possible to derive hidden values (such as gender or race) from other values, which can result in illegal targeting. How much effort should be put into anonymizing data when it can be so easily deanonymized?
Organizations need to inform customers of what data they have about them, and how it is being used
Consumers want the convenience offered by giving up their data more than they fear misuse of the data
The true currency of identity for organizations is an email address and one other piece of data, which can then be matched to a vast amount of data from other sources
The biggest consumer fear is a data privacy violation resulting from a security breach (about which there is a high level of hysteria), but possibly they should be more afraid of how the companies that they willingly give their data to are going to use it
Personal data includes data that you create, data that others create about you, and data that is inferred based on your activities
Many people maintain multiple identities on social media sites, curated differently for professional and personal audiences
Personal health data, including genetic data, has an additional set of concerns since it can impact individual healthcare options
Unresolved question of when personal data is no longer personal data, e.g., after a certain amount of aggregation and analysis occurs
Issues of consent (by customers to use their data) are becoming more prominent, and using data without consent will be counter to the regulations in most jurisdictions
Many smaller businesses will find it difficult to meet security compliance regulations; this may drive them to use cloud services where the provider assumes some degree of security responsibility
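The deanonymization point above is easy to demonstrate: joining a “de-identified” data set to a public one on a couple of quasi-identifiers is often enough to re-identify someone. A toy sketch, with entirely invented data:

```python
# "Anonymized" health records with the name removed, but quasi-identifiers
# (zip code, birth year) left in. All data here is invented for illustration.
anonymized = [
    {"zip": "02138", "birth_year": 1970, "diagnosis": "asthma"},
]

# A publicly available roster (e.g., a voter list) with the same quasi-identifiers.
public_roster = [
    {"name": "A. Smith", "zip": "02138", "birth_year": 1970},
    {"name": "B. Jones", "zip": "90210", "birth_year": 1985},
]

# Join on the quasi-identifiers: the "hidden" diagnosis is now linked to a name.
reidentified = [
    {**pub, **anon}
    for anon in anonymized
    for pub in public_roster
    if (pub["zip"], pub["birth_year"]) == (anon["zip"], anon["birth_year"])
]
```

With only two shared fields, the single anonymized record is linked back to a named individual, which is exactly why stripping obvious identifiers is not the same as anonymization.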
Food for thought. A lot of unresolved issues in personal data privacy and management.
Day 2 of Pegaworld 2016 – another full day on the schedule.
The keynote started with Gilles Leyrat, SVP of Customer and Partner Services at Cisco, discussing how they became a more digital operation in order to provide better customer service and save costs. Cisco equipment provides a huge part of the backbone of the internet, supporting digital transformation for many other organizations, but this was about how they are transforming themselves to keep pace with their customers as well as their competitors. They are using Pega to digitize their business by connecting people and technology, automating processes, and using data for real-time analytics and process change to support their 20,000-strong sales team and 2M orders per year.
Their digitization has three key goals: operational excellence, revenue growth, and “delightful” customer experience. Customer experience is seen as being crucial to revenue growth, with strong causal links showing up in research. He compared the old world — offshore customer service centers augmented by onshore specialists — with the new digital world, where digitization is a means to achieving their customer experience goal by simplifying, automating and using analytics. By reducing human touch in many standard processes, they are able to reduce wait time for customers while allowing workers to focus on interacting with customers to resolve problems: 93% of cases are now handled with zero touch, saving 2M hours of wait time per year and reducing order resolution time to 6 hours. The employee experience is improved through integrated workplaces and actionable intelligence that support their work patterns. He ended with the advice to understand what you’re trying to achieve, and to link your digital transformation initiatives to those goals.
Next was a panel on digital transformation moderated by Christopher Paquette, Digital Principal at McKinsey, including Alistair Currie, COO at ANZ Bank; Toine Straathof, EVP at Rabobank; Kevin Sullivan, SVP and Head of the Decision Sciences Group at Fifth Third Bank; and Nicole Gleason, Practice Lead for Business Intelligence & Analytics at Comet Global Consulting. A few notes from the panel (I mostly haven’t attributed to the specific speaker since the conversation was free-ranging):
Digital transformation is being driven by rapidly-changing customer expectations
Banking customers prefer mobile/online first, then ATM, then branch, then call center; this aligns well with operational costs but requires that the digital platforms be built out first
Moving internal stakeholders off their old methods and out of operational silos can be more difficult than dealing with regulators and other external parties
Making IT and business people responsible for results (e.g., a guiding business architecture) rather than dictating their exact path can lead to innovation and optimal solutions
Employee incentives need to be consistent across channels to lessen the competition across them
A lot of current digitization efforts are to bridge/hide the complexity of existing legacy systems rather than actual digital transformation
Alan Trefler returned to the stage to introduce the concepts of the fourth industrial revolution and workforce disruption; he sees what is happening now as a step change in how society works and how we interact with technology. We heard from Alan Marcus, Head of the Technology Agenda at the World Economic Forum, on this topic, and how new categories of jobs and the required skill sets will completely transform employment markets. Lots of opportunities, but also lots of disruption, in both first world and emerging markets. He covered a timeline of changes and their impacts, and stressed that skill sets are changing quickly: 35% of core skills will change by 2020. Companies need to expose workers to new roles and training, and particularly open doors to women in all roles. Creativity will become a core skill, even as AI technologies gain acceptance. Governments and education systems need to innovate to support the changing workforce. Organizations need to reinvent their HR to help employees to move into this brave new world.
The keynote finished with Gerald Chertavian, Founder and CEO at Year Up, an organization that helps low-income youth prepare for a professional job. There’s a social justice goal of helping young adults who have no college degree (and no path to get one) to become hireable talent through practical training and internships; but there’s also the side benefit of feeding skilled workers into the rapidly-changing technology-heavy employment market that Marcus discussed earlier. Year Up was contacted by American Express, who needed people trained in Java and Pega in order to re-onshore some of their development work; they created a curriculum targeted at those jobs and trained up a large number of people who then competed successfully for those jobs. Year Up is now in 18 cities across the US, working with large organizations to identify skills gaps and train people to suit the employment pipeline. They’re changing tens of thousands of lives by providing a start on the path to upward mobility, and feeding a need for companies to hire the right skills in order to transform in this fourth industrial revolution.
Less than two months ago, Pega announced their acquisition of OpenSpan, a software vendor in the robotic process automation (RPA) market. That wasn’t my first exposure to OpenSpan, however: I looked at them eight years ago in the context of mashups. Here at PegaWorld 2016, we’re getting a first peek at the unified roadmap on how Pega and OpenSpan will fit together. Also, a whole new mess of acronyms.
I’m at the OpenSpan session at Pegaworld 2016, although some of these notes date from the time of the analyst briefing back in April. Today’s presentation featured Anna Convery of Pega (formerly OpenSpan); Robin Gomez, Director of Operational Intelligence at Radial (a BPO) providing an introduction to RPA; and Girish Arora, Senior Information Officer at AIG, on their use of OpenSpan.
Back in the 1990’s, a lot of us who were doing integration of BPM systems into enterprises used “screen scraping” to push commands to and pull data from the screens of legacy systems; since the legacy systems didn’t support any sort of API calls, our apps had to pretend to be a human worker to allow us to automate integration between systems and even hide those ugly screens. Gomez covered a good history of this, including some terms that I had hoped to never see again (I’m looking at you, HLLAPI). RPA is like the younger, much smarter offspring of screen scraping: it still pushes and pulls commands and data, automating desktop activities by simulating user interaction, but it’s now event-driven, incorporating rules and machine learning.
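As a toy illustration of that event-driven, rules-based shape: the worker’s desktop generates events, and registered rules decide what to automate. Everything here is invented; real RPA tools hook actual UI events rather than plain dicts.

```python
# Minimal sketch of an event-driven automation loop: rules are registered
# against event types, and each matching rule emits an action to perform.
RULES = []

def rule(event_type):
    """Register a handler for a given (simulated) desktop event type."""
    def register(fn):
        RULES.append((event_type, fn))
        return fn
    return register

@rule("field_changed")
def copy_address(event):
    # e.g., mirror an address change into a second (hypothetical) system,
    # the way screen-scraping-era integrations once did by hand
    return {"action": "update", "system": "CRM-2", "value": event["value"]}

def dispatch(event):
    """Run every rule registered for this event's type."""
    return [fn(event) for etype, fn in RULES if etype == event["type"]]

actions = dispatch({"type": "field_changed", "field": "address", "value": "1 Main St"})
```

The machine-learning part sits on top of a loop like this: instead of hand-writing every rule, patterns mined from observed user behavior suggest which rules to create.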
As with BPM and other process automation, Gomez talked about how the goal of RPA is to automate repeatable tasks, reduce error rates, improve standardization, reduce the requirement for knowledge about multiple systems, shorten worker onboarding time, and create a straight-through process. At Radial, they were looking for the combination of robotic desktop automation (RDA) that provides personal robots to assist workers’ repetitive tasks, and RPA that completely replaces the worker on an unattended desktop. I’m not sure if every vendor makes a distinction between what OpenSpan calls RDA and RPA; it’s really the same technology, although there are some additional monitoring and virtualization bits required for the headless version.
OpenSpan provides the usual RPA desktop automation capabilities, but also includes the (somewhat creepy) ability to track and analyze worker behavior: basically, what they’re typing into what application in what context, and present it in their Opportunity Finder. This information can be mined for patterns in order to understand how people do their job — much the way that process mining works, but based on user interactions rather than system log files — and automate the parts that are done the same way each time. This can be an end in itself, or a stepping stone to a replacement of the desktop apps entirely, providing interim relief while a full Pega BPM/CRM implementation is being developed, for example. Furthermore, the analytics about the user activities on the desktop can feed into requirements for any replacement initiative, both the general flow as well as an analysis of the decisions made based on what data was presented.
OpenSpan and Pega aren’t (exactly) competitive technologies: OpenSpan can be used for desktop automation where replacement is not an option, or can be used as a quick fix while performing desktop process discovery to accelerate a full Pega desktop replacement project. OpenSpan paves the cowpaths, while a Pega implementation is usually a more fundamental innovation that may not be warranted in all situations. I can also imagine scenarios where a current Pega customer uses OpenSpan to automate the interaction between Pega and legacy applications that still exist on the desktop. From a Pega sales standpoint, OpenSpan may also act as the camel’s nose in the tent to get into net new clients.
There are a wide variety of use cases, some of them saving just a few minutes but applicable to thousands of workers (e.g., logging in to multiple systems each morning), others replacing a significant portion of knowledge work for a smaller number of workers (e.g., financial reconciliations). Arora talked about what they have done at AIG, in the context of processes that require a mix of human-required and fully automatable steps; he sees their opportunity as moving from RDA (where people are still involved, gaining 10-20% in efficiency) to RPA (fully automated, gaining 40-50% efficiency). Of course, they could just swap out their legacy systems for something that was built this century, but that’s just too difficult to change — expensive, risky and time-consuming — so they are filling in the automation gaps using OpenSpan. They have RDA running on every desktop to assist workers with a variety of tasks ranging from simple to complex, and want to start moving some of those to RPA to roll out unattended automation.
OpenSpan is typically deployed without automation to start gathering user analytics, with initial automation of manual procedures within a few weeks. As Pega cognitive technologies are added to OpenSpan, it should be possible for the RPA processes to continue to recognize patterns and recommend optimizations to a worker’s flow, becoming a sort of virtual personal assistant. I look forward to seeing some of that as OpenSpan is integrated into the Pega technology family.
OpenSpan is Windows-only .NET technology, with no plans to change that at the time of our original analyst briefing in April. We’ll see.
It seems like I was just here in Vegas at the MGM Grand…oh, wait, I *was* just here. Well, I’m back for Pegaworld 2016, and 4,000 of us congregated in the Grand Garden Arena for the opening keynote on the first day. If you’re watching from home, or want to catch a replay, there is a live stream of the keynotes that will likely feature an on-demand replay at some point.
Alan Trefler, Pega’s CEO, kicked things off by pointing out the shift from a focus on technology to a focus on the customer. Surveys show that although most companies think that they understand their customers, the customers don’t agree; companies need to undergo a serious amount of digital transformation in order to provide the level of service that today’s customers need, while still improving efficiencies to support that experience. One key to this is a model-driven technology environment that incorporates insights and actions, allowing the next best action to be provided at any given point depending on the current context, while supporting organizational evolution to allow constant change to meet the future demands. Model-driven environments let you create applications that are future-proof, since it is relatively quick to make changes to the models without changing a lot of code. Pega has a lot of new online training at the Pega Academy, a marketplace of third-party Pega applications at the Pega Exchange, and the continuing support of their Pega Express easy-to-use modeler; they continue to work on breaking free from their tech-heavy past to support more agile digital transformation. Pega recently sponsored an Economist report on digital transformation; you can grab that here.
Don Schuerman, Pega’s CTO, took over as MC for the event to introduce the other keynote speakers, but first announced a new partnership with Philips that links Pega’s care management package with Philips’ HealthSuite informatics and cloud platform for home healthcare. Jeroen Tas, CEO of Connected Care & Health Informatics at Philips presented more on this, specifically in the context of the inefficient and unevenly-distributed US healthcare system. He had a great chart that showed the drivers for healthcare transformation: from episodic to continuous, by orchestrating 24/7 care; from care provider to human-centric, by focusing on patient experience; from fragmented to connected, by connecting patients and caregivers; and from volume to value, by optimizing resources. Connected, personalized care links healthy living to disease prevention, and supports the proper diagnosis and treatment since healthcare providers all have access to a comprehensive set of the patient’s information. Lots of cool personal healthcare devices, such as ultrasound-as-a-service, where they will ship a device that can be plugged into a tablet to allow your GP to do scans that might normally be done by a specialist; continuous glucose meters and insulin regulation; and tools to monitor elderly patients’ medications. Care costs can be reduced by 26% and readmissions reduced by 52% through active monitoring in networked care delivery environments, such as by monitoring heart patients for precursors of a heart attack; this requires a combination of IoT, personal health data, data analytics and patient pathways provided by Philips and Pega. He ended up stating that it’s a great time to be in healthcare, and that there are huge benefits for patients as well as healthcare providers.
Although Tas didn’t discuss this aspect, there’s a huge amount of fear of connected healthcare information in user-pay healthcare systems: people are concerned that they will be refused coverage if their entire health history is known. Better informatics and analysis of healthcare information improves health and reduces overall healthcare costs, but it needs to be provided in an environment that doesn’t punish people for exposing their health data to everyone in the healthcare system.
We continued on the healthcare topic, moving to the insurance side with Birgit König, CEO of Allianz Health Germany. Since basic healthcare in Germany is provided by the state, health insurance is for additional services not covered by the basic plan, and for travelers while they are outside Germany. There is a lot of competition in the market, and customer experience for claims is becoming a competitive differentiator, especially with new younger customers. To accommodate this, Allianz is embracing a bimodal architecture approach, where back-end systems are maintained using traditional development techniques that focus on stability and risk, while front-end systems are more agile and innovative with shorter release cycles. I’ve just written a paper on bimodal IT and how it plays out in enterprises; not published yet, but completely aligned with what König discussed. Allianz is using Pega for more agile analytics and decisioning at the front end of their processes, while keeping their back-end systems stable. Innovation and fast development have been greatly aided by co-locating their development and business teams, not surprisingly.
The keynote finished with Kerim Akgonul, Pega’s SVP of Products, for a high-level product update. He started by looking at the alignment between internal business goals and the customer journey, spanning marketing, sales, customer service and operations. The Pega Customer Decision Hub sits at the middle of these four areas, linking information so that (for example), offers sent to customers are based on their past orders.
Marketing: A recent Forrester report stated that Pega Marketing yields an 8x return on marketing investment (ROMI) due to the next-best-action strategies and other smart uses of analytics. Marketers don’t need to be data scientists to create intelligent campaigns based on historical and real-time data, and send those to a targeted list based on filters including geolocation. We saw this in action, with a campaign created in front of us to target Pegaworld attendees who were actually in the arena, then sent out to the recipients via the conference mobile app.
Sales: The engagement map in the Pega Sales Automation app uses the Customer Decision Hub information to provide guidance that links products to opportunities for salespeople; we saw how the mobile sales automation app makes this information available and recommends contacts and actions, such as a follow-up contact or training offer. There are also some nice tools such as capturing a business card using the mobile camera and importing the contact information, merging it if a similar record is found.
Customer service: The Pega customer service dashboard shows individual customer timelines, but the big customer service news in this keynote is the OpenSpan acquisition that provides robotic process automation (RPA) to improve customer service environments. OpenSpan can monitor desktop work as it is performed, and identify opportunities for RPA based on repetitive actions. The new automation is set up by recording the actions that would be done by a worker, such as copying and pasting information between systems. The example was an address change, where a CSR would take a call from a customer then have to update three different systems with the same information by copying and pasting between applications. We saw the address change being recorded, then played back on a new transaction; this was also included as an RPA step in a Pega Express model, although I’m not sure if that was just to document the process as opposed to any automation driven from the BPM side.
Operations: The Pega Field Service application provides information for remote workers doing field support calls, reducing the time required to complete the service while documenting the results and tracking the workers. We saw a short video of Xerox using this in Europe for their photocopier service calls: the field engineer sees the customer’s equipment list, the inventory that he has with him, and other local field engineers who might have different skills or inventory to assist with his call. Xerox has reduced their service call time, improved field engineer productivity, and increased customer satisfaction.
Good mix of vision, technology and customer case studies. Check out the replay when it’s available.
I attended an analyst briefing earlier today with Jakob Freund, CEO of Camunda, on the latest release of their product, Camunda BPM 7.5. This includes both the open source version available for free download, and the commercial version with the same code base plus a few additional features and professional support. Camunda won the “Best in Show” award at the recent bpmNEXT conference, where they demonstrated combining DMN with BPMN and CMMN; the addition of the DMN decision modeling standard to BPMN and CMMN modeling environments is starting to catch on, and Camunda has been at the front edge of the wave to push that beyond modeling into implementation.
They are managing to keep their semi-annual release schedule since they forked from Activiti in 2013: version 7.0 (September 2013) reworked the engine for scalability, redid the REST API and integrated their Cockpit administration module; 7.1 (March 2014) focused on completing the stack with performance improvements and more BPMN 2.0 support; 7.2 (November 2014) added CMMN execution support and a new tasklist; 7.3 (May 2015) added process instance modification, improved authorization models and tasklist plugins; and 7.4 (November 2015) debuted the free downloadable Camunda Modeler based on the bpmn.io web application, and added DMN modeling and execution. Pretty impressive progression of features for a small company with only 18 core developers, and they maintain their focus on providing developer-friendly BPM rather than a user-oriented low-code environment. They support their enterprise editions for 18 months; of their 85-90 enterprise customers, 25-30% are on 7.4 with most of the rest on 7.3. I suspect that a customer base of mostly developers means that customers are accustomed to the cycle of regression testing and upgrades, and far fewer lag behind on old versions than would be common with products aimed at a less technical market.
Today’s 7.5 release marches forward with improvements in CMMN and BPMN modeling, migration of process instances, performance and user interface.
Oddly (to some), Camunda has included Case Management Model & Notation (CMMN) execution in their engine since version 7.2, but has only just added CMMN modeling in 7.5: previously, you would have used another CMMN-compliant modeler such as Trisotech’s, then imported the model into Camunda. Of course, that’s how modeling standards are supposed to work, but a bit awkward. Their implementation of the CMMN modeler isn’t fully complete; they are still missing certain connector types and some of the properties required to link to the execution engine, so you might want to wait for the next version if you’re modeling executable CMMN. They’re seeing a pretty modest adoption rate for CMMN amongst their customers; the messaging from BPMS vendors in general is causing a lot of confusion, since some claim that CMMN isn’t necessary (“just use ad hoc tasks in BPMN”), others claim it’s required but have incomplete implementations, and some think that CMMN and BPMN should just be merged.
On the BPMN modeling side, Camunda BPM 7.5 includes “element templates”, which are configurable BPMN elements to create additional functionality such as a “send email” activity. Although it looks like Camunda will only create a few of these as samples, this is really a framework for their customers who want to enable low-code development or encapsulate certain process-based functionality: a more technical developer creates a JSON file that acts as an extension to the modeler, plus a Java class to be invoked when it is executed; a less technical developer/analyst can then add an element of that type in the modeler and configure it using the properties without writing code. The examples I saw were for sending email and tweets from an activity; the JSON hooked the modeler so that if a standard BPMN Send Task was created, there was an option to make it an Email Task or Tweet Task, then specify the payload in the additional properties that appeared in the modeler (including passed variables). Although many vendors provide a similar functionality by offering things such as Send Email tasks directly in their modeling palettes, this appears to be a more standards-based approach that also allows developers to create their own extensions to standard activities. A telco customer is using Camunda BPM to create their own customized environments for in-house citizen developers, and element templates can significantly add to that functionality.
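To give a sense of what the more technical developer writes, an element template is a JSON descriptor along these general lines. The task name, id and properties below are invented for illustration, but follow the shape of Camunda’s element template format:

```json
{
  "name": "Email Task",
  "id": "com.example.EmailTask",
  "appliesTo": ["bpmn:SendTask"],
  "properties": [
    {
      "label": "To",
      "type": "String",
      "binding": { "type": "camunda:inputParameter", "name": "to" }
    },
    {
      "label": "Subject",
      "type": "String",
      "binding": { "type": "camunda:inputParameter", "name": "subject" }
    }
  ]
}
```

The less technical developer then just picks “Email Task” in the modeler and fills in the To and Subject properties; a matching Java class reads those input parameters at execution time.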
The process instance migration feature, which is a plugin to the enterprise Cockpit administration module but also available to open source customers via the underlying REST and Java APIs, helps to solve the problem of what to do with long-running processes when the process model changes. A number of vendors have solutions for this, some semi-automated and some completely manual. Camunda's approach is to compare the existing process model (with live instances) against the new model, attempt to match current steps to new steps automatically, then allow any step to be manually remapped onto a different step in the new model. Once that migration plan is defined, all running instances can be migrated at once, or only a filtered (or hand-selected) subset. Assuming that the models are fairly similar, such as the addition or deletion of a few steps, this should work well; if there are a lot of topology changes, it could be more of a challenge, since instance property values might need to be rolled back when instances are migrated to an earlier step in the process.
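For the REST route, migration is essentially a two-step conversation with the engine: generate (or hand-build) a migration plan, then execute it against selected instances. Here's a minimal Python sketch of the request payloads; the endpoint names in the comments come from Camunda's REST API, while the process definition IDs, activity IDs and instance IDs are placeholders.

```python
# Sketch of driving Camunda's process instance migration over REST.
# Endpoint paths follow the Camunda REST API; all IDs are placeholders.

def generate_plan_request(source_def_id, target_def_id):
    """Payload for POST /migration/generate: asks the engine to
    auto-map equal activities between the two process definitions."""
    return {
        "sourceProcessDefinitionId": source_def_id,
        "targetProcessDefinitionId": target_def_id,
    }

def execute_request(migration_plan, instance_ids):
    """Payload for POST /migration/execute: applies the (possibly
    hand-adjusted) plan to a chosen subset of running instances."""
    return {
        "migrationPlan": migration_plan,
        "processInstanceIds": instance_ids,
    }

# A generated plan comes back with automatic instruction mappings; any
# step can be remapped by editing its instructions before executing.
plan = {
    "sourceProcessDefinitionId": "invoice:1:abc",
    "targetProcessDefinitionId": "invoice:2:def",
    "instructions": [
        {"sourceActivityIds": ["approveInvoice"],
         "targetActivityIds": ["approveInvoice"]},
    ],
}
req = execute_request(plan, ["instance-1", "instance-2"])
```

The "migrate all at once or only a subset" choice from the Cockpit UI maps directly onto the `processInstanceIds` list in the execute payload.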
They have also improved multi-tenancy capabilities for customers who use Camunda BPM as a component within a SaaS platform, primarily by adding tenant identifier fields to their database tables. If those customers’ customers – the SaaS users – log in to Cockpit or a similar admin UI, they will only see their own instances and related objects, without the developers having to create a custom restricted view of the database.
They’ve released a simple process instance duration report that provides a visual interface as well as downloadable data. There’s not a lot here, but I assume this means that they are starting to build out a more robust and accessible reporting platform to play catch-up with other vendors.
Lastly, I saw their implementation of external task handling, another improvement based on customer requests. You can see more of the technical details here and here: instead of a system task calling an external task asynchronously and then waiting for a response, this creates a queue that an external service can poll for work. There are some advantages to this method of external task handling, including easier support for different environments: for example, it’s easier to call a Camunda REST API from a .NET client than to put a REST API on top of .NET; or to call a cloud-based Camunda server from behind a firewall than to let Camunda call through your firewall. It also provides isolation from any scaling issues of the external task handlers, and avoids service call timeouts.
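A minimal sketch of the payloads a polling worker would send, assuming Camunda's external task REST endpoints (fetch-and-lock, then complete); the worker and topic names are illustrative.

```python
# Sketch of the external task pattern: the worker polls Camunda's
# REST API for work instead of being called by the engine.
# Worker ID and topic name are illustrative.

def fetch_and_lock_request(worker_id, topic, lock_ms=10000, max_tasks=10):
    """Payload for POST /external-task/fetchAndLock: lock up to
    max_tasks tasks on the given topic for this worker."""
    return {
        "workerId": worker_id,
        "maxTasks": max_tasks,
        "topics": [{"topicName": topic, "lockDuration": lock_ms}],
    }

def complete_request(worker_id, variables=None):
    """Payload for POST /external-task/{id}/complete, sent once
    the worker has finished the work."""
    return {"workerId": worker_id, "variables": variables or {}}

# A .NET, Python or other non-Java worker simply loops:
# fetch-and-lock, do the work, complete (or report a failure).
req = fetch_and_lock_request("worker-42", "send-invoice")
```

Because the worker initiates every call, it can sit behind a firewall or scale independently, which is exactly the advantage described above.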
There’s a public webinar tomorrow (June 1) covering this release; you can register for the English one here (11am Eastern time) and the German one here (10am Central European time).
Michael O’Connell hosted the last general session for TIBCO NOW 2016, focusing on analytics customer stories with the help of five customers: State Street, Shell, Vestas, Monsanto and Western Digital. I’m not going to try to attribute specific comments to the customer representatives, just capture a few thoughts as they go by.
Spotfire is allowing self-service analytics to be pushed down to the business users
Typically, the analyses going on in a number of different solutions — from Excel to BI tools — can be consolidated onto a single analytics platform
Analytics is allowing the business to discover the true nature of their business, especially with outliers
Providing visual analytics to business changes the way that they use data and collaborate across the organization
The enterprise-class back-end and the good visualizations in Spotfire are helping it to win over both IT and business areas
Data and events are being generated faster and in greater volumes from more devices, making desktop analytics solutions impractical
Business users who are not data specialists can understand — and leverage — fairly complex analytical models when it concerns their own data
Analytics about manufacturing quality can be used to identify potential problems before they occur
We finished up with a brief presentation from Fred Ehlers, VP of IT at Norfolk Southern, about their use of TIBCO products to help manage their extensive railway operations. He talked about optimizing their intermodal terminals, where goods shipped in containers are moved between trains, trucks and ships; asset utilization, to ensure that empty cars are distributed to the right place at the right time for expected demand; and their customer service portal that shows an integrated view of a shipment lifecycle to give customers a more accurate, real-time view. As an old company, they have a lot of legacy systems, and used TIBCO to integrate them, centralizing operational events, data and business rules. For them, events can come from their physical assets (locomotives and railway sensors), legacy reporting systems, partner networks for assets not under their ownership, and external information including weather. On this, they build asset state models, and create applications that automatically correlate information and optimize operations. They now have one source of data and rules, and a reusable set of data and services to make application development faster. Their next steps are predictive maintenance, gathering information from locomotives, signal systems, switches and trackside defect detectors to identify problems prior to an equipment failure; and real-time visual analytics with alerts on potential problem areas. They also want to improve operational forecasting to support better allocation of resources, allowing them to divert traffic and take other measures to avoid service disruptions. Great case study that incorporates the two conference themes of interconnecting everything and augmenting intelligence.
We’re at the end of day 2, and the end of my blogging at TIBCO NOW; there are breakout sessions tomorrow but I’ll be on my way home. Some great new stuff in BPM and analytics, although far too many sessions going on at once to capture more than a fraction of what I wanted to see.
I wanted to catch an ActiveMatrix BPM customer breakout session here at TIBCONOW 2016, so sat in on Rahsan Kalci from ING Turkey talking about their transformation to a digital bank using BPM, co-presenting with a senior BPM architect from TIBCO, Raisa Mahomed. I’ve always thought of ING Bank as innovative, both through personal experience and from reading case studies about how they apply technology to business problems.
ING Turkey’s business problem four years ago was that customer-facing processes were taking too long, were inefficient and inconsistent, and weren’t fully documented, making them difficult for new users to learn. They decided to create a new operating model with AMX BPM at the core, supporting all of their business processes, and are in the midst of an operational transformation with 11 processes already implemented and several others underway, ranging in complexity and customer engagement. They are building completely custom applications using the APIs rather than leveraging the out-of-the-box workspace tools, since they already had a robust user interface environment that they wanted to integrate with.
Throughput time on the now-standardized processes improved by 55%, providing greatly enhanced customer service that moved them from #6 to #3 in the market. From an operational cost standpoint, transactions per employee increased by 38%, allowing them a 36% reduction in operational staff (72 FTE). By using the workforce management capabilities in AMX BPM, they were able to determine which parts of the process could be near-shored (still in Turkey, but in less expensive locations than Istanbul), resulting in additional cost savings.
They have an overall strategy for which processes to implement in which order. They picked initial processes that were not customer-facing, but still important for their operations, and able to be done manually as a fall-back position. This allowed them to learn the tool and establish best practices, then start to consider processes that directly impact the customer journeys. Although they started with a team made up of both ING and TIBCO people, they are now working completely on their own to build new processes and roll out new applications. Their ultimate goal is to roll out BPM to all core processes, enhance their digital business with support for mobile internal and external users, and use Spotfire analytics more broadly in the back office to improve operational decision-making.
They were an early AMX BPM customer, starting on version 1.0 and now on 3.1, with plans for 4.1 in the near future. Their first process application took them 2 years, but that was a much broader implementation effort that built tools and infrastructure used by all later applications. They’ve had about 20 people working full time on the BPM projects for the past four years, a significant investment on their part but one that is obviously paying off for them.