Integrating your enterprise content with cloud business applications? I wrote a paper on that!

Just because there’s a land rush towards SaaS business applications like Salesforce, it doesn’t mean that all of your content and data are going to be housed on that platform. In reality, you have a combination of cloud applications, cloud content that may apply across several applications, and on-premise content; users end up searching in multiple places for information in order to complete a single transaction.

In this paper, sponsored by Intellective (whose product bridges enterprise content and data with SaaS business applications), I wrote about some of the architecture and design issues that you need to consider when you’re linking these systems together. Here’s the introduction:

Software-as-a-service (SaaS) solutions provide significant utility and value for standard business applications, including customer relationship management (CRM), enterprise resource planning (ERP), supply chain management (SCM), human resources (HR), accounting, insurance claims management, and email. These “systems of engagement” provide a modern and agile user experience that guides workers through actions and enables collaboration. However, they rarely replace the core “systems of record”, and don’t provide the range of content services required by most organizations.

This creates an issue when, for example, a customer service worker’s primary environment is Salesforce CRM, but for every Salesforce activity they may also need to access multiple systems of record to update customer files, view regulatory documentation or initiate line-of-business (LOB) processes not supported in Salesforce. The worker spends too much time looking for information, risks missing relevant content in their searches, and may forget to update the same information in multiple systems.

The solution is to integrate enterprise content from the systems of record – data, process and documents – directly with the primary user-facing system of engagement, such that the worker sees a single integrated view of everything required to complete the task at hand. The worker completes their work more efficiently and accurately because they’re not wasting time searching for information; data is automatically updated between systems, reducing data entry effort and errors.

Head on over to get the full paper (registration required).

AlfrescoDay 2018: digital business platform and a whole lot of AWS

I attended Alfresco’s analyst day and a customer day in New York in late March, and due to some travel and project work, I’m just finding time to publish my notes now. Usually I do that while I’m at the conference, but part of the first day was under NDA, so I needed to think about how to combine the two days of information.

The typical Alfresco customer is still very content-centric, in spite of the robust Alfresco Process Services (formerly Activiti) offering that is part of their platform; many of the key success stories presented at the conference were based on content implementations and migrations from ECM competitors such as Documentum. In a way, this is reminiscent of the FileNet conferences of 20 years ago, when I was talking about process but almost all of the customers were only interested in content management. What moves this into a very modern discussion, however, is the focus on Alfresco’s cloud offerings, especially on Amazon AWS.

First, though, we had a fascinating keynote by Sangeet Paul Choudary — and received a copy of his book Platform Scale: How an emerging business model helps startups build large empires with minimum investment — on how business models are shifting to platforms, and how this is disrupting many traditional businesses. He explained how supply-side economies of scale, machine learning and network effects are allowing online platforms like Amazon to impact real-world industries such as logistics. Traditional businesses in telecom, financial services, healthcare and many other verticals are discovering that without a customer-centric platform approach, rather than a product approach, they can’t compete with the newer entrants into the market that build platforms, gather customer data and make service-based partnerships through open innovation. Open business models are particularly important, as is striking the right balance between an open ecosystem and maintaining control over the platform through key control points. He finished up with a digital transformation roadmap: gaining efficiencies through digitization; then using data collected in the first stage while integrating flows across the enterprise to create one view of the ecosystem; and finally externalizing and harnessing value flows in the ecosystem. This last stage, externalization, is particularly critical, since opening the wrong control points can kill your business or stifle open growth.

This was a perfect lead-in to the presentation by Chris Wiborg (Alfresco’s VP of product marketing) on Alfresco’s partnership with Amazon and the tight integration of many AWS services into the Alfresco platform: leveraging Amazon’s open platform to build Alfresco’s platform. This partnership has given this conference in particular a strong focus on cloud content management, and we are hearing more about their digital business platform that is made up of content, process and governance services. Wiborg started off talking about the journey from (content) digitization to digital business (process and content) to digital transformation (radically improving performance or reach), and how it’s not that easy to do this, particularly with existing systems that favor on-premise monolithic approaches. A (micro-)service approach on cloud platforms changes the game, allowing you to build and modify faster, and deploy quickly on a secure elastic infrastructure. This is what Alfresco is now offering, through the combination of open source software, integration of AWS services to expand their portfolio of capabilities, and an automated DevOps lifecycle.

This brings a focus back to process, since their digital business platform is often sold process-first to enable cross-departmental flows. In many cases, process and content are managed by different groups within large companies, and digital transformation needs to cut across both islands of functionality and islands of technology.

They are promoting the idea that differentiation is built and not bought, with the pendulum swinging back from buy toward build for the portions of your IT that contribute to your competitive differentiation. In today’s world, for many businesses, that’s more than just customer-facing systems; it digs deep into operational systems as well. In businesses that have a large digital footprint, I agree with this, but have to caution that this mindset makes it much too easy to go down the rabbit hole of building bespoke systems — or having someone build them for you — for standard, non-differentiating operations such as payroll systems.

Alfresco has gone all-in with AWS. It’s not just a matter of shoving a monolithic code base into a Docker container and running it on EC2, which is how many vendors claim AWS support: Alfresco has a much more integrated microservices approach that provides the opportunity to use many different AWS services as part of an Alfresco implementation in the AWS Cloud. This allows you to build more innovative solutions faster, but can also greatly reduce your infrastructure costs by moving content repositories to the cloud. They have split out services such as Amazon S3 (and soon Glacier) for storage, RDS/Aurora for database services, SNS for notifications, security services, networking services, IoT via Alexa, Rekognition for AI, and more. Basically, a big part of their move to microservices (and extending capabilities) is externalizing functions to take advantage of Amazon-offered services. They’re also not tied to their own content services in the cloud, but can provide direct connections to other cloud content services, including Box, SharePoint and Google Drive.
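
To make the externalization idea concrete, here’s a minimal sketch — not Alfresco’s actual code — of what a content service looks like when storage and notification are delegated to AWS services; the bucket name, topic ARN and node identifiers are placeholders:

import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

def store_content(local_path: str, node_id: str) -> None:
    """Upload a content binary to S3 and notify downstream services."""
    bucket = "example-content-store"             # hypothetical bucket name
    key = f"contentstore/{node_id}"
    s3.upload_file(local_path, bucket, key)      # storage externalized to S3

    sns.publish(                                 # notification externalized to SNS
        TopicArn="arn:aws:sns:us-east-1:123456789012:content-events",  # placeholder ARN
        Message=f'{{"event":"node-created","nodeId":"{node_id}","s3Key":"{key}"}}',
    )

# store_content("/tmp/claim-form.pdf", "workspace-SpacesStore-1234")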

We heard from Tarik Makota, an AWS solution architect from Amazon, about how Amazon doesn’t really talk about private versus public cloud for enterprise clients. They can provide the same level of security as any managed hosting company, including private connections between their data centers and your on-premise systems. Unlike other managed hosting companies, however, Amazon is really good at near-instantaneous elasticity — both expanding and contracting — and provides a host of other services within that environment that are directly consumed by Alfresco and your applications, such as Amazon Aurora on RDS, a variety of AI services, and serverless Step Functions. Alfresco Content Services and Process Services are both available as AWS QuickStarts, allowing for full production deployment in a highly-available, highly-redundant environment in the geographic region of your choice in about 45 minutes.
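
QuickStarts are CloudFormation-based, so as a rough illustration of what that deployment involves under the covers, here’s a hedged sketch of launching a stack with boto3; the template URL and parameter names are placeholders rather than the actual Alfresco QuickStart values:

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="alfresco-content-services-demo",
    TemplateURL="https://example-bucket.s3.amazonaws.com/alfresco-quickstart.template",  # placeholder
    Parameters=[
        {"ParameterKey": "KeyPairName", "ParameterValue": "my-keypair"},                 # hypothetical parameter
        {"ParameterKey": "AvailabilityZones", "ParameterValue": "us-east-1a,us-east-1b"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # QuickStart-style templates typically create IAM roles
)

# Block until the stack reaches CREATE_COMPLETE.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="alfresco-content-services-demo")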

Quite a bit of food for thought over the two days, including their insights into common use cases for Alfresco and AI in content recognition and classification, and some of their development best practices for ensuring reusability across process and content applications built on a flexible modern architecture. Although Alfresco’s view of process is still quite content-centric (naturally), I’m interested to see where they take the entire digital business platform in the future.

It was also great to see a month later that Bernadette Nixon, whom we met as the Chief Revenue Officer at the event, has moved up to the CEO position. Congrats!

bpmNEXT 2018: Last session with a Red Hat demo, Serco presentation and DMN TCK review

We’re on the final session of bpmNEXT 2018 — it’s been an amazing three days with great demos and wonderful conversations.

Exploiting Cloud Infrastructure for Efficient Business Process Execution, Red Hat

Kris Verlaenen, project lead for jBPM at Red Hat, presented on cloud BPM infrastructure, specifically for execution and monitoring. Cloud makes BPM lightweight, scalable, embeddable and able to take advantage of the larger cloud app ecosystem. They are introducing some new cloud infrastructure, including a controller for managing server deployments, a smart router for delegating and aggregating requests from applications to servers, and monitoring that aggregates process statistics across servers and containers. The demo showed using Red Hat’s OpenShift container application platform (actually MiniShift running on his laptop) to create a new environment and deploy an IT hardware ordering BPM application. He walked through using the application to create a new order and see the milestone-based monitoring of the order, then the hardware provider’s view of their steps in the process to provide information and advance the process to the next stage. The process engine and monitoring engine can be deployed in different containers on different hardware, in any combination of cloud providers and on-premise infrastructure. Applications and servers can be bundled into a single immutable image for easy provisioning — more of a microservices style — or can be deployed independently. Multiple versions of the same application can be deployed, allowing current instances to play out in the original version while new instances use the most recent version, or other strategies that would allow new instances of any version to be created, while monitoring can aggregate instance data from all versions in all containers.
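
As a sketch of what interacting with such a deployment looks like from an application, here’s a hypothetical call to start a process instance on a deployed jBPM/KIE server over REST; the host, container id, process id and credentials are all placeholders, and the path assumes the standard KIE Server REST layout:

import requests

KIE_SERVER = "http://it-orders-kieserver.example.local:8080/kie-server/services/rest/server"
CONTAINER_ID = "itorders_1.0.0"         # hypothetical deployed container (kjar)
PROCESS_ID = "itorders.orderhardware"   # hypothetical BPMN process id

resp = requests.post(
    f"{KIE_SERVER}/containers/{CONTAINER_ID}/processes/{PROCESS_ID}/instances",
    json={"requestor": "jdoe", "item": "laptop"},   # process variables
    auth=("kieserver", "kieserver1!"),              # placeholder credentials
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
print("Started process instance:", resp.json())    # response contains the new instance id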

Kris is also live-blogging the conference; check out his posts. He has gone back and included the video of each presentation as they are released (something that I didn’t do for page load performance reasons), as well as providing his commentary on each presentation.

Dynamic Work Assignment, Serco

Lloyd Dugan of Serco has the unenviable position of being the last presenter of the conference, although he gave a presentation on a dynamic work assignment implementation rather than an actual demo (with a quick view of the simple process model in the Trisotech animator near the end, plus an animation of the work assignment in action). His company is a call center business process outsourcer, where knowledge workers use a case management application implemented in BPMN, driven by events such as inbound calls and documents, as well as timers. Real-time work prioritization and assignment is necessary because of SLAs around inbound calls, and the task management model is moving from work being selected (and potentially cherry-picked) by workers to push assignments. Tasks are scored and assigned using decision models that include task type and SLAs, and worker eligibility based on each individual’s skills and training. Although work assignment products exist, this implementation is specifically for the complex rules around US Affordable Care Act administration, which requires a combination of decision tables, database table-driven rules, and lower-level coding to provide the right combination of flexibility and performance.
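
Serco’s actual implementation uses DMN decision tables, database-driven rules and custom code; the following simplified sketch, with invented weights and fields, just illustrates the scoring-and-eligibility idea of pushing the most urgent eligible work to workers:

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Task:
    task_id: str
    task_type: str                  # e.g. "inbound_call", "document_review"
    sla_deadline: datetime
    required_skills: set = field(default_factory=set)

@dataclass
class Worker:
    worker_id: str
    skills: set

TYPE_WEIGHT = {"inbound_call": 100, "document_review": 10}   # illustrative: calls outrank documents

def score(task: Task, now: datetime) -> float:
    """Score rises with task-type priority and SLA urgency."""
    minutes_to_sla = (task.sla_deadline - now).total_seconds() / 60
    return TYPE_WEIGHT.get(task.task_type, 1) + max(0, 60 - minutes_to_sla)

def assign(tasks, workers, now):
    """Push the highest-scoring tasks to eligible workers, one task per worker."""
    assignments = []
    for task in sorted(tasks, key=lambda t: score(t, now), reverse=True):
        eligible = [w for w in workers if task.required_skills <= w.skills]
        if eligible:
            worker = eligible[0]
            assignments.append((task.task_id, worker.worker_id))
            workers.remove(worker)
    return assignments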

DMN TCK (Technical Compatibility Kit) Working Group

Keith Swenson of Fujitsu (but presenting here in his role with the DMN standard) started the idea of a set of standardized DMN technical compatibility tests based on conversations at bpmNEXT in 2016, and he presented today on where they’re at with the TCK. Basically, the TCK provides a way for DMN vendors to demonstrate their compliance with the standard by providing a set of DMN models, input data, and expected results, testing decision tables, boxed expressions and FEEL. Vendors who can demonstrate that they pass all of the TCK tests are listed on a GitHub site along with information about individual test results, providing a way for DMN customers to assess the compliance level of vendors. Keith wrote an update on this last September that provides a good summary up to that point, and in today’s presentation he walked through some of the additional things that they’ve done, including identifying sections of the DMN specification that require clarifications or additions due to ambiguity that can lead to different implementations. DMN 1.2 is coming out this year, which will require a new set of tests specifically for that version while maintaining the previous version’s tests; they are also trying to improve testing of error cases and introduce more real-world decision models. If you create and use DMN models, make a DMN-compliant decision management product, or are otherwise interested in the DMN TCK, you can find out here how to get involved in the working group.
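
The real TCK stores its models and test cases as XML in the GitHub repository; the sketch below just shows the general shape of a vendor-side runner — evaluate each model with the supplied inputs and compare to the expected results — with the engine call stubbed out rather than bound to any particular DMN product:

def evaluate_dmn(model_path: str, inputs: dict) -> dict:
    """Placeholder for a vendor-specific DMN engine call."""
    raise NotImplementedError

def run_test_cases(test_cases):
    """Return (test id, passed) pairs for a list of TCK-style test cases."""
    results = []
    for case in test_cases:
        actual = evaluate_dmn(case["model"], case["inputs"])
        passed = all(actual.get(k) == v for k, v in case["expected"].items())
        results.append((case["id"], passed))
    return results

# Example test case in the spirit of the TCK (structure simplified from the XML files):
cases = [{
    "id": "0001-input-data-string",
    "model": "0001-input-data-string.dmn",
    "inputs": {"Full Name": "John Doe"},
    "expected": {"Greeting Message": "Hello John Doe"},
}]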

That’s it for bpmNEXT 2018. There will be voting for the best in show and some wrapup after lunch, but we’re pretty much done for this year. Another amazing year that makes me proud to be a part of this community.

Anarchy in Edmonton: no, it’s not hockey, it’s Google Drive

I’m in a breakout session at the AIIM 2018 conference, where Kristan Cook and Gina Smith-Guidi are talking about their work at the City of Edmonton in transitioning from network drives to Google Drive for their unstructured corporate information. Corporate Records and Information Management (CRIM) is part of the Office of the City Clerk, and is run a bit independently of IT in a semi-decentralized manner. They transitioned from Microsoft Office to Google Suite in 2013, and wanted to apply records management to what they were doing; at that time, there was nothing commercially available, so they hired a Google Apps developer to build it for them. They needed the usual records management requirements: lifecycle management, disposition and legal hold reporting, and tools to help users file in the correct location; on top of that, it had to be easy to use and relatively inexpensive. They also managed to reconcile over 2000 retention schedules into one master classification and retention schedule, something that elicited gasps from the audience here.

What they offer to the City departments is called COE Drive, which uses a functional classification at the top level — it just appears as a folder in Google Drive — with the “big bucket” method below that, where documents are filed within a subfolder that represents the retention classification. When you click New in Google Drive, there’s a custom popup that asks for the primary classification, the secondary classification/record series, and a subfolder within the secondary classification. This works for both uploaded files and newly-created Google Docs/Sheets files. Because these are implemented as folders in Google Drive, access permissions are applied so that users only see the classifications that apply to them when creating new documents. There’s also a simple customized view that can be rolled out to most users who only need to see certain classifications when browsing for documents. Users don’t need to know about retention schedules or records management, and can just work the way that they’ve been working with Google Drive for five years, with a small helper app to assist them in filing documents. They’re also integrating Google File Stream (the sync capability) for files that people work on locally on their desktop, to ensure that these are both backed up and stored as proper records if required.
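
The City’s helper was built with Google Apps development tools; purely as an illustration of the filing pattern — documents land inside the folder that represents their retention classification — here’s a sketch using the Drive v3 API from Python, with the folder IDs and credential setup as placeholders:

from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload
from google.oauth2 import service_account

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=["https://www.googleapis.com/auth/drive"])
drive = build("drive", "v3", credentials=creds)

# Hypothetical mapping from secondary classification to its Drive folder ID.
CLASSIFICATION_FOLDERS = {"FIN-05 Accounts Payable": "1AbCdEfGhIjKlMnOp"}

def file_document(local_path: str, name: str, classification: str) -> str:
    """Upload a document into the folder for its retention classification."""
    folder_id = CLASSIFICATION_FOLDERS[classification]
    media = MediaFileUpload(local_path)
    created = drive.files().create(
        body={"name": name, "parents": [folder_id]},   # the parent folder carries the classification
        media_body=media,
        fields="id",
    ).execute()
    return created["id"]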

The COE Drive is a single account drive, I assume so that documents added to the COE Drive have their ownership set to the COE Drive and are not subject to individual user changes. There’s not much metadata stored except for the date, business area and retention classification; in my experience with Google Drive, the search capabilities mean that you need much less explicit metadata.

It sounds as if most of the original work was done by a single developer, and now they have new functionality created by one student developer; on top of that, since it’s cloud-based, there’s no infrastructure cost for servers or software licences, just subscription costs for Google Apps. They keep development in-house both to reduce costs and to speed deployment. Compare the chart on the right with the cost and time for your usual content and records management project — there are no zeros missing, the original development cost was less than $50k (Canadian). That streamlined technology path has also inspired them to streamline their records management policies: changes to the retention schedule that used to require a year and five signatures can now be signed off by the City Clerk alone.

Lots of great discussion with the audience: public sector organizations are very interested in any solution where you can do robust content and records management using low-cost cloud-based tools, but many private sector companies are seeing the benefits as well. There was a question about whether they share their code: they don’t currently do that, but don’t have a philosophical problem with doing so — watch for their GitHub to pop up soon!

My guest post on the @Alfresco blog: BPM Cloud Architectures and Microservices

The second of the short articles that I wrote for Alfresco has been published on their blog, on BPM cloud architectures and microservices. I walk through the basics of cloud architectures (private, public and hybrid), containerization and microservices, and why this is relevant for BPM implementations. As I point out:

Not all BPM solutions are built for cloud-native architectures: a monolithic BPMS stuffed into a Docker container will not be able to leverage the advantages of modern cloud infrastructures.

Check out the full article on the Alfresco site.

Transforming Insurance with Cloud BPM: my guest post on the @Alfresco blog

I recently wrote three short articles for Alfresco, which they are publishing on their blog. The first one is about insurance and cloud BPM, looking at how new business models are enabled and customer-facing processes improved using a containerized cloud architecture and microservices. From the intro:

In this blog post, I plan to explore the role BPMS plays in integrating packaged software, custom-built systems, and external services into a seamless process that includes both internal and external participants. What if you need to include customers in your process without having to resort to email or manual reconciliation with an otherwise automated process? What if you need employees and partners to participate in processes regardless of their location, and from any device? What if some of the functions that you want to use, such as machine learning for auto-adjudication, industry comparative analytics on claims, or integration with partner portals, are available primarily in the public cloud?

Head over there to read more about my 4-step plan for insurance technology modernization, although the same can be applied in many other types of organizations. They also have a webinar coming up next week on legacy ECM modernization at Liberty Mutual; with some luck, Liberty Mutual will read my article and think about how cloud BPM can help modernize their processes too.

The other two posts that I wrote for them – one that dives more into BPM cloud architectures and microservices, and one that examines use cases for content in process applications – will be published over the next couple of months. Obviously, Alfresco paid me to write the content that is published on their site, although it’s educational and thought leadership in nature, not about their products.

On the Alfresco topic, I’ll likely be at Alfresco Day in New York on March 28, since they’re holding an analyst briefing there the day before.

OpenText Enterprise World 2017 day 2 keynote with @muhismajzoub

We had a brief analyst Q&A yesterday at OpenText Enterprise World 2017 with Mark Barrenechea (CEO/CTO), Muhi Majzoub (EVP of engineering) and Adam Howatson (CMO), and today we heard more from Majzoub in the morning keynote. He started with a review of the past year of product development — specific product enhancements and releases, more applications, and Magellan analytics suite — then moved on to the ongoing vision and strategy.

Dean Haacker, CTO of The PrivateBank, joined Majzoub to talk about their use of OpenText content products. They moved from an old version of Content Server to the current CS16, adding WEM integrated with CS for their intranet, Teleform for scanning, and ShinyDrive (OpenText’s partner of the year) for easy access to the content repository. The improved performance, capabilities and user experience are driving adoption within the bank; more of their employees are using the content capabilities for their everyday document needs, and as one measure of the success, their paper consumption has been reduced by 20%.

Majzoub continued with a discussion of their recent enhancements in their content products, and demoed their life sciences application built on Documentum D2. There’s a new UI for D2 and a D2 mobile app, plus Brava! widgets for building apps. They can deploy their content products (OTMM, Content Suite, D2 and eDocs) across a variety of OpenText Cloud configurations, from on-premise to hybrid to public cloud. Content in the cloud allows for external sharing and collaboration, and we saw a demo of this capability using OpenText Core, which is their personal/team cloud product. Edits made to an Office365 document by an external collaborator (or, presumably, made using a desktop app and saved back to Core) can be synchronized back into Content Suite.

Other products and demos that he covered:

  • A demo of Exstream for updating and publishing a customer communication asset, which can automatically push the communication to specific customers and platforms via email, document notifications in Core, or mobile notifications. It actually popped up in the notifications section of the Enterprise World app on my phone, so it worked as expected.
  • Their People Center HR app, which we saw demonstrated yesterday, built on AppWorks and Process Suite.
  • A demo of Extended ECM, which integrates content capabilities directly into other applications such as SAP, supporting both private and shared public cloud platforms for both internal and external participants.
  • Enhancements coming to Business Network, which is their collection of supply chain technologies, including B2B integration, fax, secure messaging and more; most interesting is the upcoming integration with Process Suite to merge internal and external processes.
  • A bit about the Covisint acquisition — not yet closed, so not too many details — for IoT and device messaging.
  • AppWorks is their low-code development environment that enables both desktop and mobile apps to be created quickly, while still supporting more advanced developers.
  • Applying machine-assisted discovery to information lakes formed by a variety of heterogeneous content sources for predictions and insights.
  • eDOCS InfoCenter for an improved portal-style UI (in case you haven’t been paying attention for the past few years, eDOCS is focused purely on legal applications, although it has functionality that overlaps with Content Suite and Documentum).

Majzoub finished with commitments for their next version — EP3 coming in October 2017 — covering enhancements across the full range of products, and the longer-term view of their roadmap of continuous innovation including their new hybrid platform, Project Banff. This new modern architecture will include a common RESTful services layer and an underlying integrated data model, and is already manifesting in AppWorks, People Center, Core, LEAP and Magellan. I’m assuming that some of their legacy products are not going to be migrated onto this new architecture.

 

I also attended the Process Suite product roadmap session yesterday as well as a number of demos at the expo, but decided to wait until later today when I’ve seen some of the other BPM-related sessions to write something up. There are some interesting changes coming — such as Process Suite becoming part of the AppWorks low-code application development environment — and I’m still getting a handle on how the underlying Cordys DNA of the product is being assimilated.

The last part of the keynote was a session on business creativity by Fredrik Härén — interesting!

Cloud ECM with @l_elwood @OpenText at AIIM Toronto Chapter

Lynn Elwood, VP of Cloud and Services Solutions at OpenText, presented on managing information in a cloud world at today’s AIIM chapter meeting in Toronto. This is of particular interest to Canadians, since most of the cloud service offerings that we see are in the US, and many companies are not comfortable with keeping their private data in a jurisdiction where it can be somewhat easily exposed to foreign government and intelligence agencies.

She used a building analogy to talk about cloud services:

  • Infrastructure as a service (IaaS) is like a piece of serviced land on which you need to build your own building and worry about your connections to services. If your water or electricity is off, you likely need to debug the problem yourself, although if you find that the problem is with the underlying services, you can go back to the service provider.
  • Platform as a service (PaaS) is like a mobile home park, where you are responsible for your own dwelling but not for the services, and there are shared services used by all residents.
  • Software as a service (SaaS) is like a condo building, where you own your own part of it, but it’s within a shared environment. SaaS by Gartner’s definition is multi-tenant, and that’s the analogy: you are at the whim, to a certain extent, of the building management in terms of service availability, but at a greatly reduced cost.
  • Dedicated, hosted or managed is like a private house on serviced land, where everything in the house is up to you to maintain. In this set of analogies, I’m not sure that there is a lot of distinction between this and IaaS.
  • On-premises is like a cottage, where you probably need to deal with a lot of the services yourself, such as water and septic systems. You can bring in someone to help, but it’s ultimately all your responsibility.
  • Hybrid is a combo of things — cloud to cloud, cloud to on-premise — such as owning a condo and driving to a cottage, where you have different levels of service at each location but they share information.
  • Managed services is like having a property manager, although it can be cloud or on-premise, to augment your own efforts (or that of your staff).

Regardless of the platform, anything that touches the cloud is going to have security considerations as well as performance/up-time SLAs if you want to consider it as part of your core business. From my experience, on-premise solutions can be just as insecure and unstable as any cloud offering, so it’s good to know what you’re comparing against when you are looking at cloud versus on-premise.

Most organizations require that their cloud provider have some level of certification of the facility (data centre), platform (infrastructure) and service (application). Elwood talked about the cloud standards that impact these, including ISO 27001, and SOC 1, 2 and 3.

A big concern is around applications in the cloud, namely SaaS such as Box or Salesforce. Although IT will be focused on whether the security of that application can be breached, business and information managers need to be concerned about what type of data is being stored in those applications and whether it potentially violates any privacy regulations. Take a good look at those SaaS EULAs — Elwood took us through some Apple and Google examples — and have your lawyers look at them as well if you’re deploying these solutions within the enterprise. You also need to look at data residency requirements (as I mentioned at the start): where the data resides, the sovereignty of the hosting company, the routing between you and the data even if the data resides in your own country, and the backup policies of the hosting company. The US Patriot Act allows the US government to access any data that passes through, is stored in, or is hosted by a company that is domiciled in the US; other countries are also adding similar laws. Although a company may have a data centre in your country, if they’re a US company, they probably default to storing/processing/backing up in the US: check out the Microsoft hosting and data processing agreement, for example, which specifies that your data will be hosted and/or processed in the US unless you explicitly request otherwise. There’s an additional issue: even if your data has the appropriate residency, if an employee is travelling to a restricted country and accesses the data remotely, you may be violating privacy regulations; not all applications have the ability to filter otherwise authenticated access based on IP address. If you add this to the ability of foreign governments to demand device passwords in order to enter a country, the information accessible via an employee’s computer — not just the information stored on it — is at risk for exposure.
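
As a minimal sketch of the kind of filtering she’s describing — refusing otherwise authenticated access based on where the request originates — here’s a hypothetical Flask-style check, with the GeoIP lookup and the restricted-country list left as placeholders since they depend on your geolocation database and data-residency policy:

from flask import Flask, request, abort

app = Flask(__name__)
RESTRICTED = {"XX", "YY"}   # placeholder ISO country codes from your data-residency policy

def country_for_ip(ip: str) -> str:
    """Placeholder for a GeoIP lookup (e.g. against a MaxMind-style database)."""
    raise NotImplementedError

@app.before_request
def enforce_data_residency():
    country = country_for_ip(request.remote_addr)
    if country in RESTRICTED:
        abort(403)   # authenticated or not, access is refused from this location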

Elwood showed a map of the information governance laws and regulations around the world, and it’s a horrifying mix of acronyms for data protection and privacy rules, regulated records retention, eDiscovery requirements, information integrity and authenticity, and reporting obligations. There’s a new EU regulation — the General Data Protection Regulation (GDPR) — that is going to be a game-changer, harmonizing laws across all 28 member nations and applying to any data collected about an EU citizen even outside the EU. The GDPR includes increased consent standards, stronger individual data rights, stronger breach notification, increased governance obligation, stronger recordkeeping requirements, and data transfer constraints. Interestingly, Canada is recognized as one of the countries that is deemed to have “adequate protection” for data transfer, along with Andorra, Argentina, the Faroe Islands, the Channel Islands (Guernsey and Jersey), Isle of Man, Israel, New Zealand, Switzerland and Uruguay. In my opinion, many companies aren’t even aware of the GDPR, much less complying with it, and this is going to be a big wake-up call. Your compliance teams need to be aware of the global landscape as it impacts your data usage and applications, whether in the cloud or on premise; companies can receive huge fines (up to 4% of annual revenue) for violating GDPR whether they are the “owner” of the data or just a data processor/host.

OpenText has a lot of GDPR information on their website that is not specific to their products if you want to read more. 

There are a lot of benefits to cloud when it comes to information management, and a lot of things to consider: agility to grow and change quickly; a services approach that requires partnering with the service provider; mobility capabilities offered by cloud platforms that may not be available for on premise; and analytics offered by cloud vendors within and across applications.

She finished up with a discussion on the top areas of concerns for the attendees: security, regulations, GDPR, data sovereignty, consumer applications, and others. Great discussion amongst the attendees, many of whom work in the Canadian financial services industry: as expected, the biggest concerns are about data residency and sovereignty. GDPR is seen as having the potential to level the regulatory playing field by making everyone comply; once the data centres and service providers start to comply, it will be much easier for most organizations to outsource that piece of their compliance by moving to cloud services. I think that cloud service providers are already doing a better job at security and availability than most on-premise systems, so once they crack the data residency and sovereignty problem there is little reason to have a private data centre. IT’s concern has mostly been around security and availability, but now is the time for information and compliance managers to get involved to ensure that privacy regulations are supported by these platforms.

There are Canadian companies using cloud services, even the big banks and government, although I am guessing that it’s for peripheral rather than core services. Although some are doing this “accidentally” as the only way to share information with external participants, it’s likely time for many companies to revisit their information management strategies to see if they can be more inclusive of properly vetted cloud solutions.

We did get a very brief review of OpenText and their offerings at the end, including their software solutions and their EIM cloud offerings under the OpenText Cloud banner. They are holding their Enterprise World user conference in Toronto this July, which is the first (but likely not the last) big software company to see the benefits of a non-US North American conference location.

Pegaworld 2016 Day 1 Keynote: Pega direction, Philips and Allianz

It seems like I was just here in Vegas at the MGM Grand…oh, wait, I *was* just here. Well, I’m back for Pegaworld 2016, and 4,000 of us congregated in the Grand Garden Arena for the opening keynote on the first day. If you’re watching from home, or want to catch a replay, there is a live stream of the keynotes that will likely feature an on-demand replay at some point.

Alan Trefler, Pega’s CEO, kicked things off by pointing out the shift from a focus on technology to a focus on the customer. Surveys show that although most companies think that they understand their customers, the customers don’t agree; companies need to undergo a serious amount of digital transformation in order to provide the level of service that today’s customers need, while still improving efficiencies to support that experience. One key to this is a model-driven technology environment that incorporates insights and actions, allowing the next best action to be provided at any given point depending on the current context, while supporting organizational evolution to allow constant change to meet the future demands. Model-driven environments let you create applications that are future-proof, since it is relatively quick to make changes to the models without changing a lot of code. Pega has a lot of new online training at the Pega Academy, a marketplace of third-party Pega applications at the Pega Exchange, and the continuing support of their Pega Express easy-to-use modeler; they continue to work on breaking free from their tech-heavy past to support more agile digital transformation. Pega recently sponsored an Economist report on digital transformation; you can grab that here.

Don Schuerman, Pega’s CTO, took over as MC for the event to introduce the other keynote speakers, but first announced a new partnership with Philips that links Pega’s care management package with Philips’ HealthSuite informatics and cloud platform for home healthcare. Jeroen Tas, CEO of Connected Care & Health Informatics at Philips, presented more on this, specifically in the context of the inefficient and unevenly-distributed US healthcare system. He had a great chart that showed the drivers for healthcare transformation: from episodic to continuous, by orchestrating 24/7 care; from care provider to human-centric, by focusing on patient experience; from fragmented to connected, by connecting patients and caregivers; and from volume to value, by optimizing resources. Connected, personalized care links healthy living to disease prevention, and supports proper diagnosis and treatment since healthcare providers all have access to a comprehensive set of the patient’s information. There are lots of cool personal healthcare devices, such as ultrasound-as-a-service, where they will ship a device that can be plugged into a tablet to allow your GP to do scans that might normally be done by a specialist; continuous glucose meters and insulin regulation; and tools to monitor elderly patients’ medications. Care costs can be reduced by 26% and readmissions by 52% through active monitoring in networked care delivery environments, such as by monitoring heart patients for precursors of a heart attack; this requires a combination of IoT, personal health data, data analytics and patient pathways provided by Philips and Pega. He ended by stating that it’s a great time to be in healthcare, and that there are huge benefits for patients as well as healthcare providers.

Although Tas didn’t discuss this aspect, there’s a huge amount of fear of connected healthcare information in user-pay healthcare systems: people are concerned that they will be refused coverage if their entire health history is known. Better informatics and analysis of healthcare information improves health and reduces overall healthcare costs, but it needs to be provided in an environment that doesn’t punish people for exposing their health data to everyone in the healthcare system.

We continued on the healthcare topic, moving to the insurance side with Birgit König, CEO of Allianz Health Germany. Since basic healthcare in Germany is provided by the state, health insurance is for additional services not covered by the basic plan, and for travelers while they are outside Germany. There is a lot of competition in the market, and customer experience for claims is becoming a competitive differentiator, especially with new younger customers. In order to accommodate this, Allianz is embracing a bimodal architecture approach, where back-end systems are maintained using traditional development techniques that focus on stability and risk, while front-end systems are more agile and innovative with shorter release cycles. I’ve just written a paper on bimodal IT and how it plays out in enterprises; it’s not published yet, but completely aligned with what König discussed. Allianz is using Pega for more agile analytics and decisioning at the front end of their processes, while keeping their back-end systems stable. Innovation and fast development have been greatly aided by co-locating their development and business teams, not surprisingly.

The keynote finished with Kerim Akgonul, Pega’s SVP of Products, giving a high-level product update. He started by looking at the alignment between internal business goals and the customer journey, spanning marketing, sales, customer service and operations. The Pega Customer Decision Hub sits at the middle of these four areas, linking information so that (for example) offers sent to customers are based on their past orders.


  • Marketing: A recent Forrester report stated that Pega Marketing yields an 8x return on marketing investment (ROMI) due to the next-best-action strategies and other smart uses of analytics. Marketers don’t need to be data scientists to create intelligent campaigns based on historical and real-time data, and send those to a targeted list based on filters including geolocation. We saw this in action, with a campaign created in front of us to target Pegaworld attendees who were actually in the arena, then sent out to the recipients via the conference mobile app.
  • Sales: The engagement map in the Pega Sales Automation app uses the Customer Decision Hub information to provide guidance that links products to opportunities for salespeople; we saw how the mobile sales automation app makes this information available and recommends contacts and actions, such as a follow-up contact or training offer. There are also some nice tools such as capturing a business card using the mobile camera and importing the contact information, merging it if a similar record is found.
  • Customer service: The Pega customer service dashboard shows individual customer timelines, but the big customer service news in this keynote is the OpenSpan acquisition that provides robotic process automation (RPA) to improve customer service environments. OpenSpan can monitor desktop work as it is performed, and identify opportunities for RPA based on repetitive actions. The new automation is set up by recording the actions that would be done by a worker, such as copying and pasting information between systems. The example was an address change, where a CSR would take a call from a customer then have to update three different systems with the same information by copying and pasting between applications. We saw the address change being recorded, then played back on a new transaction; this was also included as an RPA step in a Pega Express model, although I’m not sure if that was just to document the process as opposed to any automation driven from the BPM side.
  • Operations: The Pega Field Service application provides information for remote workers doing field support calls, reducing the time required to complete the service while documenting the results and tracking the workers. We saw a short video of Xerox using this in Europe for their photocopier service calls: the field engineer sees the customer’s equipment list, the inventory that he has with him, and other local field engineers who might have different skills or inventory to assist with his call. Xerox has reduced their service call time, improved field engineer productivity, and increased customer satisfaction.

Good mix of vision, technology and customer case studies. Check out the replay when it’s available.

bpmNEXT 2016 demos: IBM, Orquestra, Trisotech and BPM.com

On the home stretch of the Wednesday agenda, with the last session of the day covering the final four demos.

BPM in the Cloud: Changing the Playing Field – Eric Herness, IBM

IBM Bluemix process-related cloud services, including cognitive services leveraging Watson. The claims process demo starts by uploading an image of a vehicle and passing it to Watson image recognition for visual classification; the returned values show confidence in the vehicle classification, such as “car”, and any results over 90% are sent to the Alchemy taxonomy service to align those — in the demo, Watson returned “cars” and “sedan” with more than 90% confidence, and the taxonomy service determined that sedan is a subset of cars. This allows routing of the claim to the correct process for the type of vehicle. If Watson has not been trained for the specific type of vehicle, the image classification won’t be determined with a sufficient level of confidence, and it will be passed to a work queue for manual classification. Unrecognized images can be used to add to the classifier, either as examples of an existing classification or as a new classification. Predictive models based on Spark machine learning and analytics of past cases create predictions of whether a claim should be approved, and the degree of confidence in that decision; at some point, as this confidence increases, some of the claims could be approved automatically. Good examples of how to incorporate cognitive computing to make business processes smarter, using cognitive services that could be called from any BPM system, or any other app that can call REST services.
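
The demo wired this together with Bluemix services; as a simplified sketch of just the confidence-threshold routing logic — with the classifier call stubbed out rather than bound to the actual Watson service — it might look like this:

CONFIDENCE_THRESHOLD = 0.90

def classify_image(image_path: str) -> list[tuple[str, float]]:
    """Placeholder for the visual recognition call; returns (label, confidence) pairs."""
    raise NotImplementedError

def route_claim(image_path: str) -> str:
    """Route confidently-classified claims automatically; otherwise queue for a person."""
    classes = classify_image(image_path)
    confident = [(label, c) for label, c in classes if c >= CONFIDENCE_THRESHOLD]
    if confident:
        label = max(confident, key=lambda x: x[1])[0]
        return f"auto-route:{label}"          # e.g. send to the car-claim process
    return "manual-classification-queue"      # below threshold: human review, and
                                              # potentially new training examples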

Model, Generate, Compile in the Cloud and Deploy Ready-To-Use Mobile Process Apps – Rafael Bortolini and Leonardo Luzzatto, CRYO/Orquestra

Demo of an Orquestra BPMS implementation for Rio de Janeiro’s municipal processes, e.g., business license requests. From a standard worklist style of process management, they generate a process app for a mobile platform: specify the app name and logo, select app functionality based on templates, then preview it and compile for iOS or Android. The .ipa or .apk files are generated ready for uploading to the Apple or Google app stores, although that upload can’t be automated. There’s full functionality to allow a mobile user to sign up or log in, then access the functionality defined for the app to request a business license. Although an app is generated, the data entry forms are responsive HTML5, identical to the desktop version. Very quick implementation of a mobile app from an existing process application without having to learn the Orquestra APIs or even do any real mobile development, but it can also produce the source code in case this is just wanted as a quick starting point for a mobile development project.

Dynamic Validation of Integrated BPMN, CMMN and DMN – Denis Gagné, Trisotech

The Kommunicator tool, based on their model animation technology, allows tracing the animation directly from a case step in the BPMN model to the CMMN model, or from a decision step to the DMN model. It also links to the semantic layer, such as the Sparx SOA architecture model or other enterprise architecture reference models. This allows manually stepping through an entire business model in order to learn and communicate the procedures, and to validate the dynamic behavior of the model against the business case. Stepping through a CMMN model requires selecting the ad hoc tasks as the case worker would in order to step through the tasks and see the results; there are many different flow patterns that can emerge depending on the tasks selected and the order of selection, and stages will appear as eligible to close only when the required tasks have been completed. Stepping through a DMN model allows selecting the input parameters in a decision table and running the decision to see the behavior. Their underlying semantic graph shows the interconnectivity of all of the models, as well as goals and other business information.

Simplified CMMN – Lloyd Dugan, BPM.com

Last up is not a demo (by design), but a proposal for a simplified version of CMMN, starting with a discussion of BPMN’s limitations in case management modeling: primarily that BPMN treats activities but not events as first-class citizens, making it difficult to model event-driven cases. This creates challenges for event subprocesses, event-driven process flow and ad hoc subprocesses, which rely on “exotic” and rarely used BPMN structures and events that many BPMN vendors don’t even support. Moving a business case — such as an insurance claim — to a CMMN model makes it much clearer and easier to model; the more unstructured the situation, the harder it is to capture in BPMN, and the easier it is to capture in CMMN. The proposal for simplifying CMMN for use by business analysts includes removing PlanFragment and removing all notational attributes (AutoComplete, Manual Activation, Required, Repetition) that are really execution-oriented logic. This leaves the core set of elements plus the related decorators. I’m not enough of a CMMN expert to know if this makes complete sense, but it seems similar in nature to the subsets of BPMN commonly used by business analysts rather than the full palette.