Transforming compliance at Farmers Insurance with @rafael_moscatel

In the last Thursday breakout of AIIM 2018, I attended a session on initiatives within the compliance department at Farmers Insurance to modernize their records management, presented by Rafael Moscatel. Their technology includes IGS’ Virgo to manage retention schedules, Legal Hold Pro for legal holds and custodian compliance, and Box for content governance. They started in 2015 with an assessment and plan, then built a new team with the appropriate expertise, then updated their policy and governance, and finally brought in the three key new technology components in 2017. For an insurance company, that’s pretty fast.

Their retention policy is based on 12 big buckets, which are primarily aligned with business functions, making it easy for employees to understand what they are from a real-world standpoint. Legal Hold Pro replaced an old customized SharePoint system, and works together with Box Governance for e-discovery. He went through a lot of the details of how the technologies work together and what they’re doing with them, but the key takeaway for me is that an insurance company — what I know through a lot of experience to be an extremely conservative industry that’s struggling to transform itself — is realizing that it needs to shake things up in terms of how compliance of digital records is managed in order to move forward into the future. He ended with some great comments on how to work with the business people, especially the executives, to bring programs like this to fruition.

Great talk by a knowledgeable and well-spoken presenter; my end-of-the-day writing doesn’t do it justice.

Automation and digital transformation in the North Carolina Courts

Elizabeth Croom, Morgan Naleimaile and Gaynelle Knight from the North Carolina Courts led a breakout session on Thursday afternoon at AIIM 2018 on what they’ve done to move into the digital age. NC has a population of over 10 million, and the judiciary administration is integrated throughout the state across all levels, serving 6,800 staff, 5,400 volunteers and 32,000 law enforcement officers, as well as integrating and sharing information with other departments and agencies. New paper filings take up 4.3 miles of shelving each year, yet the move to electronic storage has to be done carefully to protect the sensitive information contained within these documents. For the most part, the court records are public records unless they are for certain types of cases (e.g., juveniles), but PII such as social security numbers must be redacted in some of these documents: this just wasn’t happening, especially when documents were scanned outside the normal course of content management. The practical obscurity (and security) of paper documents was moving into the accessible environment of electronic files.

They built the first version of an enterprise information management system, including infrastructure, taxonomy, metadata, automated capture and manual redaction. This storage-centric phase wasn’t enough: they also needed to address paper file destruction (due to space restrictions), document integrity and trustworthiness, automated redaction of PII, appropriate access to files, and findability. In moving along this journey, they started looking at declaring their digital files as records, and how that tied in with the state archives’ requirements, existing retention schedules and the logic for managing retention of records. There’s a great deal of manual quality control currently required for the scanned documents to be approved as an official record that can replace the paper version, which didn’t sit well with the clerks who were doing their own scanning. It appears as if an incredible amount of effort is being focused on properly interpreting the retention schedule logic and trigger sources: fundamentally, the business rules that underlie the management of records.
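To make the retention logic a bit more concrete, here is a minimal sketch of an event-triggered retention rule in Python. This is purely my own illustration: the record series, trigger events and retention periods are made up, not the NC Courts’ actual schedule or system.

```python
from datetime import date
from dateutil.relativedelta import relativedelta

# Hypothetical retention rules: record series -> (trigger event, years to retain after trigger).
# These values are illustrative only, not an actual court retention schedule.
RETENTION_RULES = {
    "civil_case_file": ("case_disposed", 10),
    "administrative_record": ("fiscal_year_end", 3),
}

def destruction_eligible_date(record_series: str, trigger_date: date) -> date:
    """Return the earliest date a record may be destroyed under its series rule."""
    _trigger_event, years = RETENTION_RULES[record_series]
    return trigger_date + relativedelta(years=years)

def is_eligible_for_destruction(record_series: str, trigger_date: date, today: date) -> bool:
    """True once the retention period following the trigger event has elapsed."""
    return today >= destruction_eligible_date(record_series, trigger_date)

# Example: a civil case disposed on 2015-06-30 becomes eligible on 2025-06-30.
print(destruction_eligible_date("civil_case_file", date(2015, 6, 30)))
```

The hard part that they described isn’t the calculation itself, but agreeing on which event actually starts the clock for each record series, and where that trigger comes from in the case management systems.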

Moving beyond scanning, they also have to consider intake of e-filed documents — digitally-created documents that are sent into the court system in electronic form — and the judicial branch case management applications, which need to consume any of these documents and have them readily available. They have some real success stories here: there’s an eCourts domestic violence protection order (DVPO) process where a victim can go directly to a DV advocate’s office, and all filings (including a video affidavit) and the issuance of the order are handled electronically while the victim remains in the safety of the advocate’s office.

They have a lot of plans for their go-forward records capture strategy, as well as for addressing some of the retention issues that might be resolved by back-scanning microfilmed documents, where documents with different retention periods may be on the same roll of film. Interestingly, they wouldn’t say what their content management technology is, although it does sound like they’re assessing the feasibility of moving to a cloud solution.

Anarchy in Edmonton: no, it’s not hockey, it’s Google Drive

I’m in a breakout session at the AIIM 2018 conference, where Kristan Cook and Gina Smith-Guidi are talking about their work at the City of Edmonton in transitioning from network drives to Google Drive for their unstructured corporate information. Corporate Records and Information Management (CRIM) is part of the Office of the City Clerk, and is run a bit independently of IT and in a semi-decentralized manner. They transitioned from Microsoft Office to Google Suite in 2013, and wanted to apply records management to what they were doing; at that time, there was nothing commercially available, so they hired a Google Apps developer to build it for them. They needed the usual records management requirements: lifecycle management, disposition and legal hold reporting, and tools to help users file in the correct location; on top of that, it had to be easy to use and relatively inexpensive. They also managed to reconcile over 2,000 retention schedules into one master classification and retention schedule, something that elicited gasps from the audience here.

What they offer to the City departments is called COE Drive, which is a functional classification — it just appears as a folder in Google Drive — with the “big bucket” method below that top level, where documents are filed within a subfolder that represents the retention classification. When you click New in Google Drive, a custom popup asks for the primary classification, the secondary classification/record series, and a subfolder within the secondary classification. This works for both uploaded files and newly-created Google Docs/Sheets files. Because these are implemented as folders in Google Drive, access permissions are applied so that users only see the classifications that apply to them when creating new documents. There’s also a simple customized view that can be rolled out to most users who only need to see certain classifications when browsing for documents. Users don’t need to know about retention schedules or records management, and can just work the way that they’ve been working with Google Drive for five years, with a small helper app to assist with filing. They’re also integrating Google Drive File Stream (the sync capability) for files that people work on locally on their desktop, to ensure that those files are both backed up and stored as proper records if required.
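For a sense of the mechanics behind that popup, here is a minimal sketch of the filing step as I imagine it, written in Python against the Google Drive v3 REST API rather than the Apps Script that Edmonton actually used; the folder structure, names and IDs are placeholders, not their code.

```python
from googleapiclient.discovery import build

def file_into_classification(creds, file_id: str, coe_root_id: str,
                             primary: str, secondary: str) -> None:
    """Move a newly created file under COE Drive/<primary>/<secondary> (illustrative only)."""
    drive = build("drive", "v3", credentials=creds)

    def find_child_folder(parent_id: str, name: str) -> str:
        # Look up a classification subfolder by name under the given parent.
        q = (f"'{parent_id}' in parents and name = '{name}' "
             "and mimeType = 'application/vnd.google-apps.folder' and trashed = false")
        result = drive.files().list(q=q, fields="files(id)").execute()
        return result["files"][0]["id"]  # assumes the classification folder already exists

    primary_id = find_child_folder(coe_root_id, primary)
    secondary_id = find_child_folder(primary_id, secondary)

    # Re-parent the file under the retention classification folder.
    current = drive.files().get(fileId=file_id, fields="parents").execute()
    drive.files().update(
        fileId=file_id,
        addParents=secondary_id,
        removeParents=",".join(current.get("parents", [])),
        fields="id, parents",
    ).execute()
```

The point is that the “records management” part of the helper is mostly just resolving the user’s classification choices to the right folder; permissions and retention then follow from where the file lands.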

The COE Drive is a single account drive, I assume so that documents added to the COE Drive have their ownership set to the COE Drive and are not subject to individual user changes. There’s not much metadata stored except for the date, business area and retention classification; in my experience with Google Drive, the search capabilities mean that you need much less explicit metadata.

It sounds as if most of the original work was done by a single developer, and now they have new functionality created by one student developer; on top of that, since it’s cloud-based, there’s no infrastructure cost for servers or software licences, just subscription costs for Google Apps. They keep development in-house both to reduce costs and to speed deployment. Compare the chart on the right with the cost and time for your usual content and records management project — there are no zeros missing, the original development cost was less than $50k (Canadian). That streamlined technology path has also inspired them to streamline their records management policies: changes to the retention schedule that used to require a year and five signatures can now be signed off by the City Clerk alone.

Lots of great discussion with the audience: public sector organizations are very interested in any solution where you can do robust content and records management using low-cost cloud-based tools, but many private sector companies are seeing the benefits as well. There was a question about whether they share their code: they don’t currently do that, but don’t have a philosophical problem with doing that — watch for their Github to pop up soon!

AIIM18 keynote with @jmancini77: it’s all about digital transformation

I haven’t been to the AIIM conference since the early to mid 90s; I stopped when I started to focus more on process than content (and it was very content-centric then), then stayed away when the conference was sold off, then started looking at it again when it reinvented itself a few years ago. These days, you can’t talk about content without process, so there’s a lot of content-oriented process here as well as AI, governance and a lot of other related topics.

I arrived yesterday just in time for a couple of late-afternoon sessions: one presentation on digital workplaces by Stephen Ludlow of OpenText that hit a number of topics that I’ve been working on with clients lately, then a roundtable on AI and content hosted by Carl Hillier of ABBYY. This morning, I attended the keynote where John Mancini discussed digital transformation and a report released today by AIIM. He put a lot of emphasis on AI and machine learning technologies; specifically, how they can help us to change our business models and accelerate transformation.

We’re in a different business and technology environment these days, and a recent survey by AIIM shows that a lot of people think that their business is being (or is about to be) disrupted, and that digital transformation is an important part of dealing with that. However, very few of them are more than a bit of the way towards their 2020 goals for transformation. In other words, people get that this is important, but just aren’t able to change as fast as is required. Mancini attributed this in part to the escalating complexity and chaos that we see in information management, where — like Alice — we are running hard just to stay in place. Given the increasing transparency of organizations’ operations, whether voluntary or through online customer opinions, staying in the same place isn’t good enough. One contributor to this is the number of content management systems that the average organization has (hint: it’s more than one), plus all of the other places where data and content reside, forcing workers to scramble around looking for information. Most companies don’t want a single monolithic source of content, but do want a federated way to find things when they need them: in part, this fits in with the relabelling of enterprise content management (ECM) as “Content Services” (Gartner’s term) or “Intelligent Information Management” (AIIM’s term), although I feel that’s a bit of unnecessary hand-waving that just distracts from the real issues of how companies deal with their content.

He went through some other key findings from the report on which technologies companies are looking at, and what priority they’re giving them; it looks like it’s worth a read. He wrapped up with a few of his own opinions, including the challenge that we need to consider content AND data, not content OR data: the distinction between structured and unstructured information is breaking down, in part because of the nature of natively-digital content, and in part because of AI technologies that quickly turn what we think of as content into data.

There’s a full slate of sessions today, stay tuned.

Insurance technology: is this very conservative industry finally ready for its close-up?

I’ve worked with insurance clients for a long time, first helping them with automation in their underwriting, policy administration and claims processes, and now helping them with digital transformation to create new business models and platforms. One thing that has always struck me is how behind the times most insurance companies are: usually old companies (by today’s standards), they sit far toward the conservative end of the business and technology innovation scale. However, new entrants to the market have been stirring the pot for a couple of years – such as Lemonade for the urban consumer property insurance market – and it seems that everywhere I look, there’s something popping up about innovation in insurance.

Capgemini has a significant insurance practice, and writes an annual World Insurance Report that is about to be updated for 2018; a couple of their consultants write about different aspects of how insurance is changing and the technology enabling that change. They’ve just started a three-part series on the insurance customer of the future, which echoes some of the points that I made in my recent post on the Alfresco blog about transforming insurance with cloud BPM. Although their first post uses the apocryphal “millennial” definition to describe who these customers are, they point out four main characteristics:

  • Smart shoppers
  • Lower loyalty
  • Self-centred
  • Caring consumers – which appears contrary to the previous point, but check out their post for a description

They have another post on how new InsurTech models can decrease risk for the insurer, which explains more about the social risk pool models that are used by companies like Lemonade, and how risk can be proactively mitigated through the use of connected devices.

We’re also seeing platform innovation for some insurers, such as Liberty Mutual moving their documents to Alfresco on the AWS cloud. As I’ve experienced for many years, just getting insurance companies to move from paper to digital files can provide huge operational benefits, and moving those files to the cloud lets a global insurer provide access wherever it’s required. There are a lot of regulatory issues with data sovereignty, that is, where the content is actually stored and what laws/regulations apply to it because of that, but the vendors are starting to solve those problems with regional data centers and secure, encrypted transport. With digital content comes the issue of digital preservation, which John Mancini on the AIIM blog points out is a big issue for financial and insurance companies because of the typically long time span over which they deal with customers: consider that a personal injury insurance claim can go on for years, requiring that all documents be retained for future review. After hearing about one former insurance customer of mine that had a flood in their basement storage, destroying years of customer files, I wished that they had decided to move a bit faster on my advice about digital documents.

Cutting edge technologies such as blockchain are also getting into the insurance mix: blockchain can be used to show proof of insurance, improve transparency and reduce risk of fraud, and speed up claims with smart contracts. I can also imagine that as cars get smarter and insurance companies can tie in directly to the on-board systems, there may be less opportunity for auto repair shop fraud, which reduces overall costs to the insurer and consumer.

If you work in insurance and know that you’re behind the curve, there are a lot of things that you can do to help bring yourself into at least the last century, if not this one:

  • Convert all of your files to digital format at the front end of the process, that is, when they arrive (or are created). This will allow you to automatically extract data from the files, which can then be used for classifying and routing content as it arrives. Files can now be shared by anyone who needs to see them, and there will be no piles of completed documents/files waiting to be scanned at the end of a process. This is a big cultural shift as your workers move from working on paper to working on the screen, but if you give them a couple of big screens and a properly-designed workspace, it can be just as productive as paper.
  • With all of your content arriving in digital form, or being converted to digital immediately on arrival, you can now automate your processes:
    • New policy application? Look up any previous information for this customer, create a new business case, and route to the appropriate underwriter if required. If this is a simple policy, such as consumer renter insurance, it can usually be automatically adjudicated and issued immediately.
    • Policy changes? Extract information from the policy administration system, classify the type of change, and either complete the change automatically or forward to a policy administration clerk.
    • A first notice of loss arriving for a claim? Use that to automatically extract information from your policy administration system, set up a claim in your claims system, and route the claim to the appropriate claims manager. Simple claims, such as auto windshield replacement, can be settled automatically and immediately (see the sketch after this list for the kind of routing rule involved).
    • Additional documents arriving for a claim? Automatically recognize the document type and claim number, and add to the claim case directly.
  • Find the best ways to integrate your digital content and processes with your legacy systems. This is a huge part of what I do with any insurance customer (really, with any customer at all), and it’s not trivial but can result in huge rewards. This will be some combination of exposing APIs, digging directly into operational databases, RPA to integrate “at the glass”, and other methods that are specific to your environment. In the end, you want to be sure that no one is re-entering data manually from one system to another, even by copy and paste.
  • Automate, automate, automate. In case I haven’t made that clear already. There should be no such thing as manual work assignment or routing, except in special cases. Data exchange with legacy systems should be automated. Decisions should be automated where possible, or at least used to make recommendations to workers. Incorporate artificial intelligence and machine learning to learn how your most skilled workers make decisions, align that with your policies and regulatory compliance, and use as input to automated decisions and recommendations. The workers will be left doing the work that actually requires a person to do it, not all of the low-level administrative work.
  • Use some type of low-code application development platform that allows semi-technical business analysts – there are a ton of these working in insurance business areas – to create their own situational apps.
  • Now that you have your operational processes sorted out, start looking for new ways to leverage your digital content and processes:
    • Interact with reinsurers and other business partners using digital content and processes, not paper files and faxes.
    • Provide customers with the option for completely paperless policy application, issuance and renewal. Although I’m far from being a millennial in age, the huge stack of paper sent by my previous home insurer on renewal was a key reason that I ran directly towards an online insurer that could do it all without paper.
    • Streamline claims processes, automating where possible. Many insurance companies don’t spend a lot of time fixing their claims processes, preferring to spend their time on attracting new customers; however, in this age of online consumer reviews, an inefficient claims process is going to hit hard. Automating claims also reduces operational costs: claims managers are highly skilled, and it can take 6-12 months to train a new one.
    • Automate and streamline your ancillary processes that support the main processes, such as recovery of assets, and negotiating contracts with preferred repair vendors.
    • Build in process monitoring, and provide automated dashboards and reports to different levels of management. As well as giving management a real-time view of operations, this reduces the time that line supervisors spend manually compiling reports. It also, amazingly, will reduce the amount of time that individual workers spend tracking their own work: in many of the insurance companies that I visit, claims managers and other front-line workers keep a manual log of their work because they don’t trust the system to do it for them.
  • Tie your process performance back to business goals: loss ratio, customer satisfaction, regulatory SLAs (such as communicating with customers in a timely manner), net promoter score, fraud rate, closure rate. It’s too easy to get bogged down in making a particular activity or process more efficient when it shouldn’t even be done at all. Although you can use your existing procedures guides as a starting point for your new digital processes, you really need to link everything back to the high-level goals rather than just paving the cow paths.
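Here is the routing sketch referenced in the first notice of loss bullet above: a deliberately simplified Python illustration of a classify-and-route decision. The loss types, threshold and routing targets are hypothetical, and in practice this logic would live in your BPM, decision management or claims platform rather than in standalone code.

```python
from dataclasses import dataclass

@dataclass
class FirstNoticeOfLoss:
    policy_number: str
    loss_type: str          # e.g. "auto_glass", "collision", "injury"
    estimated_amount: float

AUTO_SETTLE_TYPES = {"auto_glass"}   # illustrative straight-through categories
AUTO_SETTLE_LIMIT = 1_500.00         # illustrative threshold

def route_fnol(fnol: FirstNoticeOfLoss) -> str:
    """Decide whether a new claim can be settled automatically or needs a claims manager."""
    if fnol.loss_type in AUTO_SETTLE_TYPES and fnol.estimated_amount <= AUTO_SETTLE_LIMIT:
        return "auto_settle"            # create claim, pay and close without human touch
    if fnol.loss_type == "injury":
        return "senior_claims_manager"  # long-running, high-skill handling
    return "claims_manager"             # default manual queue, assigned by workload

print(route_fnol(FirstNoticeOfLoss("P-1234", "auto_glass", 450.00)))  # -> auto_settle
```

The value isn’t in the few lines of logic, but in the fact that the decision is made consistently and immediately on arrival rather than sitting in someone’s work queue.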

This started out as a short post because I was seeing a flurry of insurance-related items in my news feed, and grew into a bit of a monster as I thought of my own experiences with insurance customers over the past couple of years. Nonetheless, likely some useful tidbits in here.

Integrating process and content: exploring the use cases

I recently wrote a series of short articles sponsored by Alfresco and published on their blog. Today, the third of the series was published, discussing some use cases for integrating content into your processes:

  • Document-driven processes
  • Case management
  • Document lifecycle processes
  • Support documentation for exceptions in data-driven processes
  • Classification and analysis processes for non-document content

Head over there to read all the details on each of these use cases. As I write at the end:

Over the years, I’ve learned two things about integrating process and content: first, almost every process application has some sort of content associated with it; and second, most process-centric developers underestimate the potential complexity of handling the content in the context of the process application.

While you’re over there, you can also check out the other two articles that I wrote: transforming insurance with cloud BPM, and BPM cloud architectures and microservices.

Vega Unity 7: productizing ECM/BPM systems integration for better user experience and legacy modernization

I recently had the chance to catch up with some of my former FileNet colleagues, David Lewis and Brian Gour, who are now at Vega Solutions and walked me through their Unity 7 product release. Having founded and run a boutique ECM and BPM services firm in the past, I have a soft spot for the small companies who add value to commercial products by building integration layers and vertical solutions to do the things that those products don’t do (or don’t do very well).

Vega focuses on enterprise content and process automation, primarily for financial and government clients. They have some international offices – likely development shops, based on the locations – and about 150 consultants working on customer projects. They are partners with both IBM and Alfresco for ECM and BPM products for use in their consulting engagements. Like many boutique services firms, Vega has developed products in the course of their consulting engagements that can be used independently by customers, built on the underlying partner technology plus their own integration software:

  • Vega Interchange, which takes one of their core competencies in content migration and creates an ETL platform for moving content and processes between any of a number of systems, including Documentum, Alfresco, OpenText, four flavors of IBM, and shared folders on file systems. Content migration is typically pretty complex by the time you consider metadata and permissions mappings, but they also handle case data and process instances, which is rarely tackled in migration scenarios (most vendors just recommend that you keep the old system alive long enough for all instances to complete, or do a manual migration). Having helped a lot of companies think about moving their content and process management systems to another platform, I know that this is one of those things that sounds mundane but is actually difficult to do well.
  • Vega Unity, billed as a digital transformation platform; we spent most of our time talking about Unity 7, their latest release, which I’ll cover in more detail below.
  • Vertical solutions for insurance (underwriting, claims, financial operations), government (case management, compliance) and banking (onboarding, loan origination and servicing, wealth management, card dispute resolution).

Unity 7 is an integration and application development tool that links third-party content and process systems, adding a consistent user experience layer and consolidated analytics. Vega doesn’t provide any of the back-end systems, although they partner with a couple of the vendors, but provides tools to take that heterogeneous desktop environment and turn it into a single user interface. This has significant value in simplifying the user environment, since users only need to learn one system and some of the inter-system integration is automated behind the scenes, but it’s also of benefit when replacing one or more of the underlying technologies for legacy modernization, or consolidating technology after a corporate acquisition. This is what systems integrators have been doing for a long time, but Unity makes it into a product that also leverages the deep system knowledge that they have from their Interchange product. Vega can add Unity to simplify an existing environment, or come in on a net-new ECM/BPM implementation that uses one of their partner technologies plus their application development/integration layer. The primary use cases are federated enterprise content search (where content is indexed in the Unity Intelligence engine, including semantic searches), case management applications, and legacy modernization, where a new front end on legacy systems allows those systems to be swapped out without changing the user environment.

Unity is all about rapid development that includes case-based applications, content management, data and analytics. As we walked through the product and sample applications, there was definitely a strong whiff of FileNet P8 in here (a system that I used to be very familiar with), since the sample was built with IBM Case Manager under the covers, but with some nice additions in terms of unified interface and analytics.

Their claim is that the Unity Case Manager would look the same regardless of the underlying technology, which would definitely make it easier to swap out or federate content, case and process management systems behind the scenes. In the sample shown, since IBM Case Manager was primary, the case view was derived directly from IBM CM case data with the main document list from IBM FileNet P8, while the “Other Documents” tab showed related documents from Alfresco. Dynamic foldering can combine content from different systems into common folders to reduce this visual dichotomy. There are role-based views based on the user profile that provide access to data from multiple systems – including CRM and others in addition to ECM and BPM – and federate it into business objects that can include records, virtual folder structures and related objects such as people or claims. Individual user credentials can be passed to the underlying systems, or shared credentials can be used in connectors for retrieving unrestricted information. Search templates, system connectors and a variety of properties are set in a configuration console, making it straightforward to set up and modify standard operations; since this is an XML-based declarative environment, these configuration changes deploy immediately. The ability to make different types of configuration changes is role-based, meaning that some business users can be permitted to make changes to the shared user interface if desired.
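To illustrate the federation pattern (my own sketch of the general idea, not Vega’s code or APIs), a case view like this is essentially a fan-out to each repository connector, with the results merged into a single business object:

```python
from concurrent.futures import ThreadPoolExecutor

class RepositoryConnector:
    """Hypothetical connector interface; real ones would wrap FileNet, Alfresco, CRM, etc."""
    def __init__(self, name, fetch_fn):
        self.name = name
        self.fetch_fn = fetch_fn  # callable(case_id) -> list of document metadata dicts

    def documents_for_case(self, case_id):
        # Tag each result with its source system so the UI can show provenance.
        return [{**doc, "source": self.name} for doc in self.fetch_fn(case_id)]

def build_case_view(case_id, connectors):
    """Query all back-end systems in parallel and merge into a single document list."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda c: c.documents_for_case(case_id), connectors)
    merged = [doc for docs in results for doc in docs]
    return {"case_id": case_id, "documents": sorted(merged, key=lambda d: d.get("title", ""))}
```

In Unity itself, that wiring appears to be handled declaratively through the connectors and search templates configured in the console, rather than in custom code like this.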

Unity Intelligence adds a layer of visual analytics that aggregates data from the underlying systems and other sources; however, this isn’t just visualization, but can be used to filter work and take action on cases directly, via action popup menus or by opening cases directly from the analytics interface. They’re using open source tools such as Solr (search), Lucene (information retrieval) and D3 (visualization) to good effect: I saw a demo of a Sankey diagram representing the workflow through cases based on real-time data, which provided a sort of process mining view of work in progress and allowed selecting dates for past views of work, including completed cases. For case management, in which processes are semi-structured (at best), this won’t necessarily show process anomalies, but can show service interruptions and opportunities for process improvement and standardization.
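Vega’s view is built with D3, but to give a rough idea of what a case-flow Sankey conveys, here is a stand-in sketch using plotly in Python, with made-up stage-to-stage counts rather than live case data:

```python
import plotly.graph_objects as go

# Made-up counts of cases moving between stages; a real view would aggregate live case data.
stages = ["Intake", "Review", "Investigation", "Approved", "Rejected"]
fig = go.Figure(go.Sankey(
    node=dict(label=stages),
    link=dict(
        source=[0, 1, 1, 2, 2],   # indexes into `stages`
        target=[1, 2, 4, 3, 4],
        value=[120, 80, 40, 65, 15],
    ),
))
fig.show()
```

The width of each band shows where work is flowing (and pooling), which is what makes this useful as a lightweight process-mining view for semi-structured case work.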

They’ve published a video showing more about Unity 7 Intelligence, as well as one showing Unity Semantics for creating pivot tables for faceted search on content repositories.


ABBYY partnerships in ECM, BPM, RPA and ERP

It’s the first session of the last morning of the ABBYY Technology Summit 2017, and the crowd is a bit sparse — a lot of people must have had fun at the evening event last night — and I’m in a presentation by another ex-FileNet colleague of mine, Carl Hillier.

He discussed how capture isn’t just a discrete operation any more, where you capture, index and store in a content management repository; it’s now the front end to business processes that have the potential for digital transformation. To that end, since ABBYY has no plans to expand into that side of the business themselves, they have made strategic partnerships with a number of vendors that push into downstream processes: M-Files and Laserfiche for content management, Appian and Pega (still in the works) for BPM, and Acumatica for ERP. As with many technology partnerships, there can be overlap in capabilities, but that usually sorts out in favor of the specialist vendor: with Laserfiche, for example, ABBYY replaces Laserfiche’s simpler OCR capabilities for customers that need more complex capture. Both BPM vendors have RPA capabilities — Appian through a partnership with Blue Prism, Pega through their OpenSpan acquisition — and there’s a session following this one by RPA vendor UiPath on using ABBYY for RPA that likely has broader implications for working with these other partners.

For the solution builders who use ABBYY’s FlexiCapture, the connectors to these other products give them a fast path to implementation, although they can also use the ABBYY SDK directly to create solutions that include competing products. We saw a bit about each of the ABBYY connectors to the five strategic partners, and how they take advantage of those platforms’ capabilities: with Appian, for example, a capture operator uses FlexiCapture to scan/import and verify documents, then the connector maps the structured data directly into Appian’s data objects (records), whereas for one of the content management platforms, the connector may transfer a smaller subset of document indexing data. The Acumatica integration is a bit different, in that FlexiCapture isn’t a separate application for the capture front end, but is embedded within the Acumatica interface as an invoice capture service.

ABBYY’s plan is to create more of these connectors, making it easier for their VARs and solution partners (who are the primary attendees at this conference) to quickly build solutions with ABBYY and a variety of platforms.

The collision of capture, content and analytics

Martyn Christian of UNDRSTND Group, who I worked with back at FileNet in 2000-1, gave a keynote at ABBYY Technology Summit 2017 on the evolution and ultimate collision of capture, content and analytics. He started by highlighting some key acquisitions in the industry, including the entry of private capital, as well as a move to artificial intelligence in the capture space, as harbingers of the changes in the capture market. Gartner has since declared enterprise content management dead — long live content services platforms! — and introduced new players into the magic quadrant alongside the traditional ECM players, while shifting IBM from the leaders quadrant back to the challengers quadrant.

Intelligent capture is gaining visibility and importance, particularly as a driver for digital transformation. Interestingly, capture was traditionally about converting analog (paper) to digital (data); now, however, many forms of information are natively digital, and capture is not only about performing OCR on scanned paper documents, but about extracting and analyzing actionable data from both analog and digital content. High-volume in-house production scanning operations are being augmented — or replaced — with customers doing their own capture, such as we now see when depositing a check using a mobile banking application. Information about customer actions and sentiment is being automatically gleaned from their social media activity. Advanced machine learning is being used to classify content, reducing the need for manual intervention further downstream, and enabling straight-through processing or the use of autonomous agents.
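As a generic illustration of that last point (not anything specific to ABBYY’s products), a few lines of scikit-learn are enough to show the shape of a machine-learned document classifier that could replace manual document sorting; the training samples here are made up and far too few for real use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set; a production capture system would train on thousands of documents.
texts = [
    "Please find attached the invoice for services rendered, total due 30 days",
    "Claim form for auto accident, policy number and date of loss enclosed",
    "Employment application with resume and references attached",
    "Invoice number 8872, remit payment to the address below",
]
labels = ["invoice", "claim", "application", "invoice"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["Second notice: invoice 4451 is past due"]))  # -> ['invoice']
```

Once the document type is known with reasonable confidence, the downstream process can be started automatically, which is exactly the straight-through processing argument he was making.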

As a marketing guy, he had a lot of advice on how this can be positioned and sold into customers; UNDRSTND apparently ran a workshop yesterday for some of the channel partner companies on bringing this message to their customers who are seeking to move beyond simple capture solutions to digital transformation.

OpenText Enterprise World 2017 day 2 keynote with @muhismajzoub

We had a brief analyst Q&A yesterday at OpenText Enterprise World 2017 with Mark Barrenechea (CEO/CTO), Muhi Majzoub (EVP of engineering) and Adam Howatson (CMO), and today we heard more from Majzoub in the morning keynote. He started with a review of the past year of product development — specific product enhancements and releases, more applications, and Magellan analytics suite — then moved on to the ongoing vision and strategy.

Dean Haacker, CTO of The PrivateBank, joined Majzoub to talk about their use of OpenText content products. They moved from an old version of Content Server to the current CS16, adding WEM integrated with CS for their intranet, Teleform for scanning, and ShinyDrive (OpenText’s partner of the year) for easy access to the content repository. The improved performance, capabilities and user experience are driving adoption within the bank; more of their employees are using the content capabilities for their everyday document needs, and as one measure of the success, their paper consumption has dropped by 20%.

Majzoub continued with a discussion of recent enhancements to their content products, and demoed their life sciences application built on Documentum D2. There’s a new UI for D2 and a D2 mobile app, plus Brava! widgets for building apps. They can deploy their content products (OTMM, Content Suite, D2 and eDocs) across a variety of OpenText Cloud configurations, from on-premise to hybrid to public cloud. Content in the cloud allows for external sharing and collaboration, and we saw a demo of this capability using OpenText Core, which is their personal/team cloud product. Edits made to an Office 365 document by an external collaborator (or, presumably, made using a desktop app and saved back to Core) can be synchronized back into Content Suite.

Other products and demos that he covered:

  • A demo of Exstream for updating and publishing a customer communication asset, which can automatically push the communication to specific customers and platforms via email, document notifications in Core, or mobile notifications. It actually popped up in the notifications section of the Enterprise World app on my phone, so worked as expected.
  • Their People Center HR app, which we saw demonstrated yesterday, built on AppWorks and Process Suite.
  • A demo of Extended ECM, which integrates content capabilities directly into other applications such as SAP, supporting both private and shared public cloud platforms for both internal and external participants.
  • Enhancements coming to Business Network, which is their collection of supply chain technologies, including B2B integration, fax, secure messaging and more; most interesting is the upcoming integration with Process Suite to merge internal and external processes.
  • A bit about the Covisint acquisition — not yet closed, so not too many details — for IoT and device messaging.
  • AppWorks is their low-code development environment that enables both desktop and mobile apps to be created quickly, while still supporting more advanced developers.
  • Applying machine-assisted discovery to information lakes formed by a variety of heterogeneous content sources for predictions and insights.
  • eDOCS InfoCenter for an improved portal-style UI (in case you haven’t been paying attention for the past few years, eDOCS is focused purely on legal applications, although it has functionality that overlaps with Content Suite and Documentum).

Majzoub finished with commitments for their next version — EP3 coming in October 2017 — covering enhancements across the full range of products, and the longer-term view of their roadmap of continuous innovation including their new hybrid platform, Project Banff. This new modern architecture will include a common RESTful services layer and an underlying integrated data model, and is already manifesting in AppWorks, People Center, Core, LEAP and Magellan. I’m assuming that some of their legacy products are not going to be migrated onto this new architecture.

 

I also attended the Process Suite product roadmap session yesterday as well as a number of demos at the expo, but decided to wait until later today when I’ve seen some of the other BPM-related sessions to write something up. There are some interesting changes coming — such as Process Suite becoming part of the AppWorks low-code application development environment — and I’m still getting a handle on how the underlying Cordys DNA of the product is being assimilated.

The last part of the keynote was a session on business creativity by Fredrik Härén — interesting!