OpenText EIM Day Toronto, Financial Services Session

After lunch at the Toronto OpenText EIM Day, Catharine MacKenzie of the Mutual Fund Dealers Association talked about how they’re using OpenText MBPM (from the Metastorm acquisition). She spoke on an OpenText webinar last year, and I was interested in how they’ve progressed since then.

The MFDA is very process-based, since they’re a regulatory body, and although their policies don’t change that often, the processes used to deal with members and policies are constantly being improved. There was no packaged solution for their regulatory processes, and the need for process flexibility without a full-on custom solution (which was beyond their budget and IT capabilities) led them to BPM. As I described in the post about the webinar (linked above), they started with four processes including compliance and enforcement, then sped through the implementation of several other processes during 2012. Although she stated during the webinar that they would be implementing five new processes in 2012, most of that has been pushed to 2013, in part (it appears) because of a platform upgrade to MBPM 9.

She pointed out that everyone in the MFDA is using BPM for internal administrative processes, such as booking time off, as well as for the member-facing processes; for many of these processes, the users don’t even know that they’re using BPM. They’re also an OpenText eDocs customer, so they can present content within processes, although apparently they have had to do much of that integration work themselves.

As for benefits, they’re seeing a huge decrease in development and deployment time compared to the custom applications they build in Visual Studio, with process versioning and auditing built in. They’ve had challenges around having the business own the processes, rather than IT, while maintaining good process design and disciplined testing; the MBPM upgrade and migration is also taking longer than expected, which is delaying some of their planned process implementations. This is an interesting result against the backdrop of this morning’s customer keynote about major system upgrades: an upgrade that requires data migration and custom application refactoring is almost always going to delay a previously-defined schedule of roll-outs, but it’s necessary for setting the stage for future functionality.

I’m skipping out for the rest of the afternoon to get back to my desk, but this has been a good opportunity to get caught up on the entire OpenText product suite and talk to some of their local customers.

Disclosure: OpenText is a customer, for whom I recently did a webinar and related white paper, but I am not paid to be here today, nor for writing any of these blog posts.

OpenText EIM Day Toronto, Customer Keynotes

Following the company/product keynotes, we heard from two OpenText customers.

First up was Tara Drover from Hatch Engineering, a Canadian engineering firm with 11,000 employees worldwide. They have OpenText Content Server on 10 corporate instances containing 32 million documents for more than 37,000 projects, almost half at their corporate headquarters in the Toronto area. They use it for project documentation, but also for a variety of other administrative and management documents. It appears that they have configured and customized Content Server, and built add-ons, to be the core of their corporate information store. They’ve been using Content Server since 2002 (v9.1), and have upgraded through v9.5 (including “de-customization”, a term and philosophy that I adore), v9.7.1 and v10. The latest upgrade, to CS10, is the one that she focused on in her presentation. Their drivers for the upgrade were to move to a 64-bit platform for scalability and performance reasons, to get off v9.7.1 before support ended, and to set the stage for some of the features in CS10: facets and columns, an improved search engine, and multilingual support. However, they wanted to keep the UI as similar as possible, providing more of a back-end upgrade as a platform for growth rather than a radical user experience change.

They started in March 2012 with strategy, change assessment and planning, then continued on to environmental assessment, development and testing, people change management and their first deployment in July 2012. Their readiness assessment identified that they first had to update their Windows Server and SQL Server instances (to 2008, hardly cutting edge), and showed some of the changes to the integration points with other Hatch systems. As part of development and testing, they wrote an 80-page deployment guide, since the upgrade would have to roll out to all of the Content Server sites worldwide; it included estimates of the time required for each upgrade in order to avoid downtime during local business hours, and plans for using local staff for remote upgrades. During development and testing, they simultaneously ran the v9.7.1 production environment on the upgraded Windows Server platform, plus a CS10 development environment and a separate CS10 test/staging environment where the production processes were cloned and tested.

If you’re upgrading a single Content Server instance, you’re unlikely to go to this level of complexity in your upgrade plans and rollout, but for multiple sites across multiple countries (and languages), it’s a must. In spite of all the planning, they did have a few hiccups and some production performance issues, in part because they didn’t have a load testing tool. From their first rollout in Santiago, Chile in July 2012, followed by a few months of tuning and testing, they’re now rolling out about one site per month. They’re seeing improvements in the UI and search functions, and are ready to start thinking about how to use some of the new CS10 features.

They had a number of success factors that are independent of whatever product you’re upgrading, such as clearly defined scope, issue management, and upgrading the platform without adding too many new user features all at once.

The second customer keynote was from Robin Thompson, CIO for the shared services sector of the Government of Ontario. They had some serious information and records management issues, with the retention and disposition of information left largely in the hands of individuals, and little sharing of information between ministries. To resolve this, they have developed an information management framework to be rolled out over the next several years, targeted at improving efficiencies and improving services to constituents. Their guiding principles are that information needs to be protected and secure, managed, governed, accessible and relevant, and valued; in other words, the information needs to be managed as business records. Their roadmap identified an enterprise records and document management service as a necessary starting point, which they have deployed (based on OpenText) in the past year to the Social Services Ministries, with six more areas queued up and ready to implement. In addition to deploying in more ministries, they are also expanding functionality, bringing email records management to the Ministry of Finance later this year. This information management framework and vision is long overdue for the Ontario government, and hopefully will lead to better services for those of us who live here.

She shared a number of lessons that they learned along the way: the importance of change management and stakeholder communication; the time required for developing data architecture and taxonomy; the balance between overly-rigid standardization and too many customized instances; the need for external and internal resources to develop and maintain a records/document management practice; and the importance of governance. They’ve focused on an incremental approach, and have allowed the business leaders to pull the functionality rather than have IT push it into the business areas.