Show me the money: Financials, sales and support at @OpenText Analyst Summit 2019

We started the second day of the OpenText Analyst Summit 2019 with their CFO, Madhu Ranganathan, talking about their growth, both organic and through acquisitions. She claimed that their history of acquisitions shows that M&A does work — a point with which some industry specialists may not agree, given the still-overlapping collection of products in their portfolio — but there’s no doubt that they’re growing well based on their six-year financials, across a broad range of industries and geographies. She sees them as well positioned to continue scaling to $1B in operating cash flow by June 2021, an ambitious but achievable target given their 25-year track record.

Ted Harrison, EVP of Worldwide Sales, was up next with an update on their customer base: 85 of the 100 largest companies in the world, 17 of the top 20 financial services companies, 20 of the top 20 life sciences companies, etc. He walked through the composition of the 1,600 sales professionals in their teams, from the account executives and sales reps to the solution consultants and other support roles. They also have an extensive partner channel bringing domain expertise and customer relationships. He highlighted a few customers in some of the key product areas — GM for digital identity management, Nestle for supply chain management, Malaysia Airports for AI and analytics, and British American Tobacco for SuccessFactors-OT2 integration — with a focus on customers that are using OpenText in ways that significantly span their business operations.

James McGourlay, EVP of Customer Operations, covered how their global technical support and professional services organization has aligned with the customer journey from deployment to adoption to expansion of their OpenText products. With 1,400 professional services people, they have 3,000 engagements going on at any given time across 30 countries. As with most large vendors’ PS groups, they have a toolbox of solution accelerators, best practices, and expert resources to help with initial implementation and ongoing operations. This is also where they partner with systems integrators such as CGI, Accenture and Deloitte, and platform partners like Microsoft and Oracle. He addressed the work of their 1,500 technical support professionals across four major centers of excellence for round-the-clock support, co-located with engineering teams to provide a more direct link to technical solutions. They have a strong focus on customer satisfaction in PS and technical support because they realize that happy customers tend to buy more stuff; this is particularly important when you have a lot of different products to sell to those customers to expand your footprint within their organizations.

Good to hear more about the corporate and operations side than I normally cover, but looking forward to this afternoon’s deeper dives into product technology.

Product Innovation session at @OpenText Analyst Summit 2019

Muhi Majzoub, EVP of Engineering, continued the first day of the analyst summit with a deeper look at their technology progress in the past year as well as future direction. I only cover a fraction of OpenText products; even in the ECM and BPM space, they have a long history of acquisitions and it’s hard to keep on top of all of them.

Their Content Services provides information integration with a variety of key business applications, including Salesforce and SAP; this allows users to work in those applications and see relevant content in context without having to worry about where or how it’s stored and secured. Majzoub covered a number of the new features of their content platforms (alas, there are still at least two content platforms, and let’s not even talk about process platforms) as well as user experience, digital asset management, AI-powered content analytics and eDiscovery. He talked about their solutions for LegalTech and digital forensics (not areas that I follow closely), then moved on to the much broader areas of AI, machine learning and analytics as they apply to capture, content and process, as well as their business network transactions.

He talked about AppWorks, which is their low-code development environment but also includes their BPM platform capabilities since they have a focus on process- and content-centric applications such as case management. They have a big push on vertical application development, both in terms of enabling it for their customers and also for building their own vertical offerings. Interestingly, they are also allowing for citizen development of micro-apps in their Core cloud content management platform that includes document workflows.

The product session was followed by a showcase and demos hosted by Stephen Ludlow, VP of Product Marketing. He emphasized that they are a platform company, but since line-of-business buyers want to buy solutions rather than platforms, they need to be able to demonstrate applications that bring together many of their capabilities. We had five quick demos:

  • AI-augmented capture using Captiva capture and Magellan AI/analytics: creating an insurance claim first notice of loss from an unstructured email, while gathering aggregate analytics for fraud detection and identifying vehicle accident hotspots.
  • Unsupervised machine learning for eDiscovery to identify concepts in large sets of documents in legal investigations, then using supervised learning/classification to further refine search results and prioritize review of specific documents.
  • Integrated dashboard and analytics for supply chain visibility and management, including integrating, harmonizing and cleansing data and transactions from multiple internal and external sources, and drilling down into details of failed transactions.
  • HR application integrating SAP SuccessFactors with content management to store and access documents that make up an employee HR file, including identifying missing documents and generating customized documents.
  • Dashboard for logging and handling non-conformance and corrective/preventative actions for Life Sciences manufacturing, including quality metrics and root cause analysis, and linking to reference documentation.

Good set of business use cases to finish off our first (half) day of the analyst summit.

Snowed in at the @OpenText Analyst Summit 2019

Mark Barrenechea, OpenText’s CEO and CTO, kicked off the analyst summit with his re:imagine keynote here in Boston amidst a snowy winter storm that ensures a captive audience. He gave some of the current OpenText stats — 100M end users across 120,000 customers, and $2.8B in revenue last year — before expanding into a review of how the market has shifted over the past 10 years, fueled by changes in technology and infrastructure. What’s happened on the way to digital and AI is what he calls the zero theorem: zero trust (guard against security and privacy breaches), zero IT (bring your own device, work in the cloud), zero people (automate everything possible) and zero downtime (everything always available).

Their theme for this year is to help their customers re:imagine work, re:imagine their workforce, and re:imagine automation and AI. This starts with OpenText’s intelligent information core (automation, AI, APIs and data management), then expands with both their EIM platforms and EIM applications. OpenText has a pretty varied product portfolio (to say the least) and is bringing many of these components together into a more cohesive integrated vision in both the content services and the business network spaces. More importantly, they are converging their many, many engines so that in the future, customers won’t have to choose between ECM or BPM engines, for example.

They are providing a layer of RESTful services on top of their intelligent information core services (ECM, BPM, Capture, Business Network, Analytics/AI, IoT), then allowing that layer to be consumed either by standard development tools in a technical IDE, or by the AppWorks low-code environment. The OT2 cloud architecture provides about 40 services for consumption in these development environments or by OpenText’s own vertical applications such as People Center.
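To make that developer-facing layer a bit more concrete, here’s a minimal sketch of what consuming RESTful content and process services from standard development tools might look like. OpenText hasn’t published these specifics here, so the host, endpoint paths and payload fields below are all hypothetical placeholders.

```python
import requests

# Hypothetical OT2-style RESTful service calls; the host, paths and payload
# fields are placeholders, not OpenText's published API.
BASE = "https://ot2.example.com/api/v1"
TOKEN = "my-oauth-token"  # assume obtained from the platform's auth service

headers = {"Authorization": f"Bearer {TOKEN}"}

# Fetch metadata for a document from the content service
doc = requests.get(f"{BASE}/content/objects/12345", headers=headers)
doc.raise_for_status()
print(doc.json().get("name"))

# Start an instance in the process (BPM) service with some initial variables
resp = requests.post(
    f"{BASE}/process/instances",
    headers=headers,
    json={"definition": "invoice-approval", "variables": {"amount": 1500}},
)
resp.raise_for_status()
print("Started instance:", resp.json().get("id"))
```

The same calls could just as easily be generated behind the scenes by an AppWorks low-code application, which is the point of putting a common REST layer under both development styles.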

Barrenechea finished up with a review of how OpenText is using OpenText to transform their own business, using AI to look at some of their financial and people management data to help guide them towards improvements. They’ll be investing $2B in R&D over the next five years to help them become even bigger in the $100B EIM market, both through the platform and, increasingly, through vertical applications.

Next up was Ted Harrison, EVP of Worldwide Sales, interviewing one of their customers: Gopal Padinjaruveetil, VP and Chief Information Security Officer at The Auto Club Group. AAA needs no introduction as a roadside assistance organization, but they also have insurance, banking, travel, car care and advocacy business areas, with coordinated member access to services across multiple channels. It’s this concept of the connected member that has driven their focus on digital identity for both people and devices, and how AI can help them to reduce risk and improve security by detecting abnormal patterns.

We’ll be digging into more of the details later today and tomorrow as the summit continues, so stay tuned.

Integrating your enterprise content with cloud business applications? I wrote a paper on that!

Just because there’s a land rush towards SaaS platforms like Salesforce for some of your business applications, it doesn’t mean that your content and data are all going to be housed on those platforms. In reality, you have a combination of cloud applications, cloud content that may apply across several applications, and on-premise content; users end up searching in multiple places for information in order to complete a single transaction.

In this paper, sponsored by Intellective (who have a product that bridges enterprise content and data with SaaS business applications), I wrote about some of the architecture and design issues that you need to consider when linking these systems together. Here’s the introduction:

Software-as-a-service (SaaS) solutions provide significant utility and value for standard business applications, including customer relationship management (CRM), enterprise resource planning (ERP), supply chain management (SCM), human resources (HR), accounting, insurance claims management, and email. These “systems of engagement” provide a modern and agile user experience that guides workers through actions and enables collaboration. However, they rarely replace the core “systems of record”, and don’t provide the range of content services required by most organizations.

This creates an issue when, for example, a customer service worker’s primary environment is Salesforce CRM, but for every Salesforce activity they may also need to access multiple systems of record to update customer files, view regulatory documentation or initiate line-of-business (LOB) processes not supported in Salesforce. The worker spends too much time looking for information, risks missing relevant content in their searches, and may forget to update the same information in multiple systems.

The solution is to integrate enterprise content from the systems of record – data, process and documents – directly with the primary user-facing system of engagement, such that the worker sees a single integrated view of everything required to complete the task at hand. The worker completes their work more efficiently and accurately because they’re not wasting time searching for information; data is automatically updated between systems, reducing data entry effort and errors.
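As a toy illustration of this pattern, here’s what surfacing system-of-record content inside a system of engagement can look like: given a customer ID from the CRM record, query a CMIS-compliant content repository for related documents using Apache Chemistry’s cmislib. The repository URL, credentials and the custom property name are hypothetical.

```python
from cmislib import CmisClient

# Hypothetical repository endpoint and credentials; acme:customerId is an
# invented custom property linking documents to the CRM customer record.
client = CmisClient("https://ecm.example.com/cmis/atom", "svc-user", "secret")
repo = client.defaultRepository

def documents_for_customer(customer_id):
    """Return documents in the system of record tagged with this customer ID."""
    query = (
        "SELECT * FROM cmis:document "
        f"WHERE acme:customerId = '{customer_id}'"
    )
    return repo.query(query)

# e.g. embedded in a Salesforce-facing widget: list related documents in place
for doc in documents_for_customer("00123"):
    print(doc.getName())
```

A production bridge would also push data updates back the other way, so the worker isn’t re-keying the same information into multiple systems.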

Head on over to get the full paper (registration required).

Alfresco Day 2018: digital business platform and a whole lot of AWS

I attended Alfresco’s analyst day and a customer day in New York in late March, and due to some travel and project work, I’m just finding time to publish my notes now. Usually I do that while I’m at the conference, but part of the first day was under NDA so I needed to think about how to combine the two days of information.

The typical Alfresco customer is still very content-centric, in spite of the robust Alfresco Process Services (formerly Activiti) offering that is part of their platform, and many of the key success stories presented at the conference were based on content implementations and migrations from ECM competitors such as Documentum. In a way, this is reminiscent of the FileNet conferences of 20 years ago, when I was talking about process but almost all of the customers were only interested in content management. What moves this into a very modern discussion, however, is the focus on Alfresco’s cloud offerings, especially on Amazon AWS.

First, though, we had a fascinating keynote by Sangeet Paul Choudary — and received a copy of his book Platform Scale: How an emerging business model helps startups build large empires with minimum investment — on how business models are shifting to platforms, and how this is disrupting many traditional businesses. He explained how supply-side economies of scale, machine learning and network effects are allowing online platforms like Amazon to impact real-world industries such as logistics. Traditional businesses in telecom, financial services, healthcare and many other verticals are discovering that without shifting from a product approach to a customer-centric platform approach, they can’t compete with the newer entrants into the market that build platforms, gather customer data and make service-based partnerships through open innovation. Open business models are particularly important, as is striking the right balance between an open ecosystem and maintaining control over the platform through key control points. He finished up with a digital transformation roadmap: gaining efficiencies through digitization; then using data collected in the first stage while integrating flows across the enterprise to create one view of the ecosystem; and finally externalizing and harnessing value flows in the ecosystem. This last stage, externalization, is particularly critical, since opening the wrong control points can kill your business or stifle open growth.

This was a perfect lead-in to Chris Wiborg’s (Alfresco’s VP of product marketing) presentation on Alfresco’s partnership with Amazon and the tight integration of many AWS services into the Alfresco platform: leveraging Amazon’s open platform to build Alfresco’s platform. This partnership has given this conference in particular a strong focus on cloud content management, and we are hearing more about their digital business platform that is made up of content, process and governance services. Wiborg started off talking about the journey from (content) digitization to digital business (process and content) to digital transformation (radically improving performance or reach), and how it’s not that easy to do this, particularly with existing systems that favor on-premise monolithic approaches. A (micro-) service approach on cloud platforms changes the game, allowing you to build and modify faster, and deploy quickly on a secure elastic infrastructure. This is what Alfresco is now offering, through the combination of open source software, integration of AWS services to expand their portfolio of capabilities, and an automated DevOps lifecycle.

This brings a focus back to process, since their digital business platform is often sold process-first to enable cross-departmental flows. In many cases, process and content are managed by different groups within large companies, and digital transformation needs to cut across both islands of functionality and islands of technology.

They are promoting the idea that differentiation is built and not bought, with the pendulum swinging back from buy toward build for the portions of your IT that contribute to your competitive differentiation. In today’s world, for many businesses, that’s more than just customer-facing systems, but digs deep into operational systems as well. In businesses that have a large digital footprint, I agree with this, but have to caution that this mindset makes it much too easy to go down the rabbit hole of building bespoke systems — or having someone build them for you — for standard, non-differentiating operations such as payroll systems.

Alfresco has gone all-in with AWS. It’s not just a matter of shoving a monolithic code base into a Docker container and running it on EC2, which is how many vendors claim AWS support: Alfresco has a much more integrated microservices approach that provides the opportunity to use many different AWS services as part of an Alfresco implementation in the AWS cloud. This allows you to build more innovative solutions faster, but can also greatly reduce your infrastructure costs by moving content repositories to the cloud. They have split out services such as Amazon S3 (and soon Glacier) for storage, RDS/Aurora for database services, SNS for notifications, security services, networking services, IoT via Alexa, Rekognition for AI, etc. Basically, a big part of their move to microservices (and extending capabilities) is externalizing functions to take advantage of Amazon-offered services. They’re also not tied to their own content services in the cloud, but can provide direct connections to other cloud content services, including Box, SharePoint and Google Drive.
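From a developer’s perspective, that service-oriented approach mostly shows up through Alfresco’s public REST API; here’s a small sketch of uploading a document into a folder using that API. In an AWS deployment, the binary content behind this call can land in S3 rather than on local disk. The host, credentials and folder ID are placeholders.

```python
import requests

ALFRESCO = "https://alfresco.example.com"  # placeholder host
API = f"{ALFRESCO}/alfresco/api/-default-/public/alfresco/versions/1"
FOLDER_ID = "placeholder-parent-folder-id"

# Upload a file as a new child node of the folder; with an S3 content store,
# the bytes end up in S3 while the metadata stays in the repository database.
with open("claim-form.pdf", "rb") as f:
    resp = requests.post(
        f"{API}/nodes/{FOLDER_ID}/children",
        auth=("admin", "admin"),  # placeholder credentials
        files={"filedata": f},
        data={"name": "claim-form.pdf", "nodeType": "cm:content"},
    )
resp.raise_for_status()
print("Created node:", resp.json()["entry"]["id"])
```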

We heard from Tarik Makota, an AWS solution architect from Amazon, about how Amazon doesn’t really talk about private versus public cloud for enterprise clients. They can provide the same level of security as any managed hosting company, including private connections between their data centers and your on-premise systems. Unlike other managed hosting companies, however, Amazon is really good at near-instantaneous elasticity — both expanding and contracting — and provides a host of other services within that environment that are directly consumed by Alfresco and your applications, such as Aurora on RDS, a variety of AI services, and serverless Step Functions. Alfresco Content Services and Process Services are both available as AWS QuickStarts, allowing for full production deployment in a highly-available, highly-redundant environment in the geographic region of your choice in about 45 minutes.

Quite a bit of food for thought over the two days, including their insights into common use cases for Alfresco and AI in content recognition and classification, and some of their development best practices for ensuring reusability across process and content applications built on a flexible modern architecture. Although Alfresco’s view of process is still quite content-centric (naturally), I’m interested to see where they take the entire digital business platform in the future.

Also great to see a month later that Bernadette Nixon, who we met as Chief Revenue Officer at the event, has moved up to the CEO position. Congrats!

bpmNEXT 2018: Last session with a Red Hat demo, Serco presentation and DMN TCK review

We’re on the final session of bpmNEXT 2018 — it’s been an amazing three days with great demos and wonderful conversations.

Exploiting Cloud Infrastructure for Efficient Business Process Execution, Red Hat

Kris Verlaenen, project lead for jBPM as part of Red Hat, presented on cloud BPM infrastructure, specifically for execution and monitoring. Cloud makes BPM lightweight, scalable, embeddable and able to take advantage of the larger cloud app ecosystem. They are introducing some new cloud infrastructure, including a controller for managing server deployments, a smart router for delegating and aggregating requests from applications to servers, and monitoring that aggregates process statistics across servers and containers. The demo showed using Red Hat’s OpenShift container application platform (actually MiniShift running on his laptop) to create a new environment and deploy an IT hardware ordering BPM application. He walked through using the application to create a new order and see the milestone-based monitoring of the order, then the hardware provider’s view of their steps in the process to provide information and advance the process to the next stage. The process engine and monitoring engine can be deployed in different containers on different hardware, in any combination of cloud providers and on-premise infrastructure. Applications and servers can be bundled into a single immutable image for easy provisioning — more of a microservices style — or can be deployed independently. Multiple versions of the same application can be deployed, allowing current instances to play out in the original version while new instances use the most recent version, or other strategies that would allow new instances of any version to be created, while monitoring can aggregate instance data from all versions in all containers.
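For a sense of how applications talk to those deployed servers, here’s a sketch of starting a process instance through the KIE Server REST API (jBPM’s execution server interface); the host, credentials, container and process IDs are placeholders for whatever you’ve deployed.

```python
import requests

KIE = "http://jbpm.example.com:8080/kie-server/services/rest/server"
CONTAINER = "it-orders_1.0.0"        # placeholder deployment unit (kjar)
PROCESS = "itorders.orderhardware"   # placeholder process definition id

# Start a new process instance with some initial variables; the smart router
# could front this URL to route the request to the right server/container.
resp = requests.post(
    f"{KIE}/containers/{CONTAINER}/processes/{PROCESS}/instances",
    auth=("kieserver", "password"),  # placeholder credentials
    headers={"Content-Type": "application/json"},
    json={"employee": "sandy", "reason": "replacement laptop"},
)
resp.raise_for_status()
print("Started process instance id:", resp.json())
```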

Kris is also live-blogging the conference; check out his posts. He has gone back and included the video of each presentation as they are released (something that I didn’t do for page load performance reasons) as well as providing his commentary on each presentation.

Dynamic Work Assignment, Serco

Lloyd Dugan of Serco has the unenviable position of being the last presenter of the conference, although he gave a presentation on a dynamic work assignment implementation rather than an actual demo (with a quick view of the simple process model in the Trisotech animator near the end, plus an animation of the work assignment in action). His company is a call center business process outsourcer, where knowledge workers use a case management application implemented in BPMN, driven by events such as inbound calls and documents, as well as timers. Real-time work prioritization and assignment is necessary because of SLAs around inbound calls, and the task management model is moving from work being selected (and potentially cherry-picked) by workers, to push assignments. Tasks are scored and assigned using decision models that include task type and SLAs, and worker eligibility based on each individual’s skills and training. Although work assignment products exist, this one is built specifically for the complex rules around US Affordable Care Act administration, which requires a combination of decision tables, database table-driven rules, and lower-level coding to provide the right combination of flexibility and performance.
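The scoring-and-matching logic that he described could be sketched roughly as follows; the task types, weights and skill model here are invented for illustration, not Serco’s actual decision models.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Task:
    task_type: str
    sla_deadline: datetime
    required_skills: set

@dataclass
class Worker:
    name: str
    skills: set

# Invented weights: inbound calls are most urgent because of their SLAs
TYPE_WEIGHT = {"inbound_call": 100, "document": 40, "timer": 20}

def score(task, now):
    """Higher score = more urgent; urgency rises as the SLA deadline nears."""
    minutes_left = max((task.sla_deadline - now).total_seconds() / 60, 1.0)
    return TYPE_WEIGHT.get(task.task_type, 10) + 1000.0 / minutes_left

def assign(tasks, workers, now):
    """Push the most urgent tasks to eligible (skill-matched) workers."""
    for task in sorted(tasks, key=lambda t: score(t, now), reverse=True):
        eligible = [w for w in workers if task.required_skills <= w.skills]
        if eligible:
            yield task, eligible[0]  # a real model would also balance workload
```

In the real implementation, the decision tables and database-driven rules replace these hard-coded weights, which is what gives the business the flexibility to tune assignment without code changes.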

DMN TCK (Technical Compatibility Kit) Working Group

Keith Swenson of Fujitsu (but presenting here in his role with the DMN standards effort) started on the idea of a set of standardized DMN technical compatibility tests based on conversations at bpmNEXT in 2016, and he presented today on where they’re at with the TCK. Basically, the TCK provides a way for DMN vendors to demonstrate their compliance with the standard by providing a set of DMN models, input data, and expected results, testing decision tables, boxed expressions and FEEL. Vendors who can demonstrate that they pass all of the TCK tests are listed on a GitHub site along with information about individual test results, providing a way for DMN customers to assess the compliance level of vendors. Keith wrote an update on this last September that provides a good summary up to that point, and in today’s presentation he walked through some of the additional things that they’ve done, including identifying sections of the DMN specification that require clarifications or additions due to ambiguity that can lead to different implementations. DMN 1.2 is coming out this year, which will require a new set of tests specifically for that version while maintaining the previous version tests; they are also trying to improve testing of error cases and introducing more real-world decision models. If you create and use DMN models, or make a DMN-compliant decision management product, or you’re otherwise interested in the DMN TCK, you can find out here how to get involved in the working group.
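Conceptually, a vendor’s TCK runner feeds each test’s input data into their DMN engine and compares the outputs with the expected results. Here’s a minimal sketch against a simplified XML test-case layout (the real TCK schema on GitHub is namespaced and has more structure), with the engine stubbed out as a callable.

```python
import xml.etree.ElementTree as ET

def run_test_cases(path, evaluate_decision):
    """Run simplified TCK-style test cases against a vendor DMN engine.

    evaluate_decision(inputs) -> dict of decision results; this is a stand-in
    for the vendor's engine, and the XML layout here is a simplified version
    of the actual TCK test-case format.
    """
    root = ET.parse(path).getroot()
    for case in root.iter("testCase"):
        inputs = {n.get("name"): n.findtext("value") for n in case.iter("inputNode")}
        expected = {
            n.get("name"): n.findtext("expected/value") for n in case.iter("resultNode")
        }
        actual = evaluate_decision(inputs)
        for name, value in expected.items():
            status = "PASS" if str(actual.get(name)) == value else "FAIL"
            print(f"{case.get('id')}/{name}: {status}")
```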

That’s it for bpmNEXT 2018. There will be voting for the best in show and some wrapup after lunch, but we’re pretty much done for this year. Another amazing year that makes me proud to be a part of this community.

Anarchy in Edmonton: no, it’s not hockey, it’s Google Drive

I’m in a breakout session at the AIIM 2018 conference, and Kristan Cook and Gina Smith-Guidi are talking about their work at the City of Edmonton in transitioning from network drives to Google Drive for their unstructured corporate information. Corporate Records and Information Management (CRIM) is part of the Office of the City Clerk, and is run a bit independently of IT and in a semi-decentralized manner. They transitioned from Microsoft Office to Google Suite in 2013, and wanted to apply records management to what they were doing; at that time, there was nothing commercially available, so they hired a Google Apps developer to do it for them. They needed the usual records management requirements: lifecycle management, disposition and legal hold reporting, and tools to help users to file in the correct location; on top of that, it had to be easy to use and relatively inexpensive. They also managed to reconcile over 2,000 retention schedules into one master classification and retention schedule, something that elicited gasps from the audience here.

What they offer to the City departments is called COE Drive, which has a functional classification at the top level — it just appears as a folder in Google Drive — then the “big bucket” method below that, where documents are filed within a subfolder that represents the retention classification. When you click New in Google Drive, a custom popup asks for the primary classification and secondary classification/record series, and a subfolder within the secondary classification. This works for both uploaded files and newly-created Google Docs/Sheets files. Because these are implemented as folders in Google Drive, access permissions are applied so that users only see the classifications that apply to them when creating new documents. There’s also a simple customized view that can be rolled out to most users who only need to see certain classifications when browsing for documents. Users don’t need to know about retention schedules or records management, and can just work the way that they’ve been working with Google Drive for five years, with a small helper app for filing the documents. They’re also integrating Google File Stream (the sync capability) for files that people work on locally on their desktop, to ensure that they are both backed up and stored as proper records if required.
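Their helper is built with Google Apps developer tooling, but the core filing operation can be sketched with the Drive API: create the new file directly in the folder that represents the chosen retention classification. The classification-to-folder lookup and folder IDs here are hypothetical stand-ins for the City’s implementation.

```python
from googleapiclient.discovery import build

def file_document(creds, name, primary, secondary, folder_index):
    """Create a new Google Doc inside the folder for the chosen classification.

    folder_index maps (primary, secondary) classification pairs to Drive
    folder IDs: an invented stand-in for the City's classification lookup.
    """
    drive = build("drive", "v3", credentials=creds)
    metadata = {
        "name": name,
        "mimeType": "application/vnd.google-apps.document",
        # Filing = creating the doc under the retention-classification folder,
        # so retention rules follow from where the document lives.
        "parents": [folder_index[(primary, secondary)]],
    }
    created = drive.files().create(body=metadata, fields="id").execute()
    return created["id"]
```

Because retention is driven by folder placement rather than per-document metadata, disposition and reporting can operate over the folder tree without users ever tagging anything.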

The COE Drive is a single-account drive, I assume so that documents added to it have their ownership set to that account and are not subject to individual user changes. There’s not much metadata stored except for the date, business area and retention classification; in my experience with Google Drive, the search capabilities mean that you need much less explicit metadata.

It sounds as if most of the original work was done by a single developer, and now they have new functionality created by one student developer; on top of that, since it’s cloud-based, there’s no infrastructure cost for servers or software licences, just subscription costs for Google Apps. They keep development in-house both to reduce costs and to speed deployment. Compare the chart on the right with the cost and time for your usual content and records management project — there are no zeros missing, the original development cost was less than $50k (Canadian). That streamlined technology path has also inspired them to streamline their records management policies: changes to the retention schedule that used to require a year and five signatures can now be signed off by the City Clerk alone.

Lots of great discussion with the audience: public sector organizations are very interested in any solution where you can do robust content and records management using low-cost cloud-based tools, but many private sector companies are seeing the benefits as well. There was a question about whether they share their code: they don’t currently, but have no philosophical objection to it — watch for their GitHub repo to pop up soon!

My guest post on the @Alfresco blog: BPM Cloud Architectures and Microservices

The second of the short articles that I wrote for Alfresco has been published on their blog, on BPM cloud architectures and microservices. I walk through the basics of cloud architectures (private, public and hybrid), containerization and microservices, and why this is relevant for BPM implementations. As I point out:

Not all BPM solutions are built for cloud-native architectures: a monolithic BPMS stuffed into a Docker container will not be able to leverage the advantages of modern cloud infrastructures.

Check out the full article on the Alfresco site.

Transforming Insurance with Cloud BPM: my guest post on the @Alfresco blog

I recently wrote three short articles for Alfresco, which they are publishing on their blog. The first one is about insurance and cloud BPM, looking at how new business models are enabled and customer-facing processes improved using a containerized cloud architecture and microservices. From the intro:

In this blog post, I plan to explore the role BPMS plays in integrating packaged software, custom-built systems, and external services into a seamless process that includes both internal and external participants. What if you need to include customers in your process without having to resort to email or manual reconciliation with an otherwise automated process? What if you need employees and partners to participate in processes regardless of their location, and from any device? What if some of the functions that you want to use, such as machine learning for auto-adjudication, industry comparative analytics on claims, or integration with partner portals, are available primarily in the public cloud?

Head over there to read more about my 4-step plan for insurance technology modernization, although the same can be applied in many other types of organizations. They also have a webinar coming up next week on legacy ECM modernization at Liberty Mutual; with some luck, Liberty Mutual will read my article and think about how cloud BPM can help modernize their processes too.

The other two posts that I wrote for them – one that dives more into BPM cloud architectures and microservices, and one that examines use cases for content in process applications – will be published over the next couple of months. Obviously, Alfresco paid me to write the content that is published on their site, although it’s educational and thought leadership in nature, not about their products.

On the Alfresco topic, I’ll likely be at Alfresco Day in New York on March 28, since they’re holding an analyst briefing there the day before.

OpenText Enterprise World 2017 day 2 keynote with @muhismajzoub

We had a brief analyst Q&A yesterday at OpenText Enterprise World 2017 with Mark Barrenechea (CEO/CTO), Muhi Majzoub (EVP of engineering) and Adam Howatson (CMO), and today we heard more from Majzoub in the morning keynote. He started with a review of the past year of product development — specific product enhancements and releases, more applications, and Magellan analytics suite — then moved on to the ongoing vision and strategy.

Dean Haacker, CTO of The PrivateBank, joined Majzoub to talk about their use of OpenText content products. They moved from an old version of Content Server to the current CS16, adding WEM integrated with CS for their intranet, Teleform for scanning, and ShinyDrive (OpenText’s partner of the year) for easy access to the content repository. The improved performance, capabilities and user experience are driving adoption within the bank; more of their employees are using the content capabilities for their everyday document needs, and as one measure of the success, their paper consumption has been reduced by 20%.

Majzoub continued with a discussion of their recent enhancements in their content products, and demoed their life sciences application built on Documentum D2. There’s a new UI for D2 and a D2 mobile app, plus Brava! widgets for building apps. They can deploy their content products (OTMM, Content Suite, D2 and eDocs) across a variety of OpenText Cloud configurations, from on-premise to hybrid to public cloud. Content in the cloud allows for external sharing and collaboration, and we saw a demo of this capability using OpenText Core, which is their personal/team cloud product. Edits to an Office365 document by an external collaborator (or, presumably, edited using a desktop app and saved back to Core) can be synchronized back into Content Suite.

Other products and demos that he covered:

  • A demo of Exstream for updating and publishing a customer communication asset, which can automatically push the communication to specific customers and platforms via email, document notifications in Core, or mobile notifications. It actually popped up in the notifications section of the Enterprise World app on my phone, so worked as expected.
  • Their People Center HR app, which we saw demonstrated yesterday, built on AppWorks and Process Suite.
  • A demo of Extended ECM, which integrates content capabilities directly into other applications such as SAP, supporting both private and shared public cloud platforms for both internal and external participants.
  • Enhancements coming to Business Network, which is their collection of supply chain technologies, including B2B integration, fax, secure messaging and more; most interesting is the upcoming integration with Process Suite to merge internal and external processes.
  • A bit about the Covisint acquisition — not yet closed so not too many details — for IoT and device messaging.
  • AppWorks is their low-code development environment that enables both desktop and mobile apps to be created quickly, while still supporting more advanced developers.
  • Applying machine-assisted discovery to information lakes formed by a variety of heterogeneous content sources for predictions and insights.
  • eDOCS InfoCenter for an improved portal-style UI (in case you haven’t been paying attention for the past few years, eDOCS is focused purely on legal applications, although it has functionality that overlaps with Content Suite and Documentum).

Majzoub finished with commitments for their next version — EP3 coming in October 2017 — covering enhancements across the full range of products, and the longer-term view of their roadmap of continuous innovation including their new hybrid platform, Project Banff. This new modern architecture will include a common RESTful services layer and an underlying integrated data model, and is already manifesting in AppWorks, People Center, Core, LEAP and Magellan. I’m assuming that some of their legacy products are not going to be migrated onto this new architecture.

I also attended the Process Suite product roadmap session yesterday as well as a number of demos at the expo, but decided to wait until later today when I’ve seen some of the other BPM-related sessions to write something up. There are some interesting changes coming — such as Process Suite becoming part of the AppWorks low-code application development environment — and I’m still getting a handle on how the underlying Cordys DNA of the product is being assimilated.

The last part of the keynote was a session on business creativity by Fredrik Härén — interesting!