OpenText Enterprise World 2017 day 2 keynote with @muhismajzoub

We had a brief analyst Q&A yesterday at OpenText Enterprise World 2017 with Mark Barrenechea (CEO/CTO), Muhi Majzoub (EVP of engineering) and Adam Howatson (CMO), and today we heard more from Majzoub in the morning keynote. He started with a review of the past year of product development — specific product enhancements and releases, more applications, and Magellan analytics suite — then moved on to the ongoing vision and strategy.

Dean Haacker, CTO of The PrivateBank, joined Majzoub to talk about their use of OpenText content products. They moved from an old version of Content Server to the current CS16, adding WEM integrated with CS for their intranet, Teleform for scanning, and ShinyDrive (OpenText’s partner of the year) for easy access to the content repository. The improved performance, capabilities and user experience are driving adoption within the bank; more of their employees are using the content capabilities for their everyday document needs, and as one measure of success, their paper consumption has dropped by 20%.

Majzoub continued with a discussion of their recent enhancements in their content products, and demoed their life sciences application built on Documentum D2. There’s a new UI for D2 and a D2 mobile app, plus Brava! widgets for building apps. They can deploy their content products (OTMM, Content Suite, D2 and eDocs) across a variety of OpenText Cloud configurations, from on-premise to hybrid to public cloud. Content in the cloud allows for external sharing and collaboration, and we saw a demo of this capability using OpenText Core, which is their personal/team cloud product. Edits to an Office365 document by an external collaborator (or, presumably, edited using a desktop app and saved back to Core) can be synchronized back into Content Suite.

Other products and demos that he covered:

  • A demo of Exstream for updating and publishing a customer communication asset, which can automatically push the communication to specific customers and platforms via email, document notifications in Core, or mobile notifications. It actually popped up in the notifications section of the Enterprise World app on my phone, so it worked as expected.
  • Their People Center HR app, which we saw demonstrated yesterday, built on AppWorks and Process Suite.
  • A demo of Extended ECM, which integrates content capabilities directly into other applications such as SAP, supporting both private and shared public cloud platforms for both internal and external participants.
  • Enhancements coming to Business Network, which is their collection of supply chain technologies, including B2B integration, fax, secure messaging and more; most interesting is the upcoming integration with Process Suite to merge internal and external processes.
  • A bit about the Covisint acquisition — not yet closed so not too many details — for IoT and device messaging.
  • AppWorks is their low-code development environment that enables both desktop and mobile apps to be created quickly, while still supporting more advanced developers.
  • Applying machine-assisted discovery to information lakes formed by a variety of heterogeneous content sources for predictions and insights.
  • eDOCS InfoCenter for an improved portal-style UI (in case you haven’t been paying attention for the past few years, eDOCS is focused purely on legal applications, although it has functionality that overlaps with Content Suite and Documentum).

Majzoub finished with commitments for their next version — EP3 coming in October 2017 — covering enhancements across the full range of products, and the longer-term view of their roadmap of continuous innovation including their new hybrid platform, Project Banff. This new modern architecture will include a common RESTful services layer and an underlying integrated data model, and is already manifesting in AppWorks, People Center, Core, LEAP and Magellan. I’m assuming that some of their legacy products are not going to be migrated onto this new architecture.

 

I also attended the Process Suite product roadmap session yesterday as well as a number of demos at the expo, but decided to wait until later today when I’ve seen some of the other BPM-related sessions to write something up. There are some interesting changes coming — such as Process Suite becoming part of the AppWorks low-code application development environment — and I’m still getting a handle on how the underlying Cordys DNA of the product is being assimilated.

The last part of the keynote was a session on business creativity by Fredrik Härén — interesting!

OpenText Enterprise World keynote with @markbarrenechea

I’m at OpenText Enterprise World 2017 in Toronto; there is very little motivating me to attend the endless stream of conferences in Vegas, but this one is in my backyard. There have been a couple of days of partner summit and customer training, but this is the first day of the general conference.

We kicked off with a keynote hosted by OpenText CEO/CTO Mark Barrenechea, who presented some ideas on his own and invited others to the stage to chat or present as well. He talked about world-changing concepts that we may see start to have a significant impact over the next decade:

  • Robotics
  • Internet of things
  • Internet of money (virtual and alternative currencies)
  • Artificial intelligence
  • Mobile eating the world
  • New business models
  • Living to 150
  • IQ of 1000, where human intelligence and capabilities will be augmented by machines

He positions OpenText as a leading provider of enterprise information management technologies for digital transformation, leveraging the rapid convergence of connectivity, automation and computing power. My issue with OpenText is that they have grown primarily through acquisitions – a LOT of acquisitions – and the product portfolio is vast and overlapping. OpenText Cloud is a fundamental platform, which makes a lot of sense for them with the amount of B2B integration that their tools support, as well as the push to private, hybrid and public cloud by many organizations. They see documents (whether human or machine created) as fundamental business artifacts and therefore content management is a primary capability, but there are a few different products that fall into their ECM category and I’m not sure of the degree of overlap, for example, with the recently-acquired Documentum and some of their other ECM assets. Application development is also a key category for them, with a few different products including their AppWorks low-code environment. The story gets a bit simpler with their information network for inter-enterprise connectivity, new acquisition Covisint for managing IoT messages and actions, and newly-released Magellan for analytics.

He interviewed two customers on their business and use of OpenText products:

  • Kristijan Jarc, VP of Digital at KUKA, a robotics company serving a variety of vertical industries, from welding in automotive manufacturing to medical applications. Jarc’s team develops digital strategies and solutions that help their internal teams build better products, often related to how data collected from the robots is used for analytics and preventative maintenance, and they’re using OpenText technology to capture and store that data.
  • Sarah Shortreed, CIO of Bruce Power, which runs a farm of 8 CANDU reactors that generate 30% of Ontario’s electrical power. They’re in the process of refurbishing the plant, some parts of which are 40 years old, which is allowing more data to be collected from more of their assets in realtime. They have much tighter security restrictions than most organizations, and much longer planning cycles, making enterprise information management a critical capability.

Barrenechea hosted three other OpenText people to give demos (I forgot to note the names, but if anyone can add them in a comment, I’ll update this post); I’ve just put in a couple of notes for each trying to capture the essence of the demo and the technologies that they were showcasing:

  • Magellan analytics for a car-share company: targeted marketing, demand and utilization, and proactive maintenance via different algorithms. Automated clustering and trend derivation within a selected dataset to determine the target market for a campaign. Data scientists can open notebooks and program directly in Python, R or Scala to create their own algorithms by calling Magellan APIs. Linear regression on historical usage data and weather forecasts is used to forecast demand. IoT streaming diagnostics from vehicles predict the likelihood of breakdown and trigger automated actions to remove cars from service and schedule maintenance.
  • People Center app built on AppWorks. Integrated with HRISs including SAP, SuccessFactors, Workday and Oracle for core HR transactions; People Center adds the unstructured data, including documents, to create the entire employee file. Manages recruitment and onboarding processes. Magellan analytics match resumes to open positions using proximity-based matching, and identify employees at risk of leaving using logistic regression.
  • KUKA iiwa robot sending IoT data to cloud for viewing through dashboard, analytics to identify possible problems. Field service tech accesses manuals and reference materials via content platform. Case management foldering to collect and view documents related to a maintenance incident. Collaborative chat within maintenance case to allow product specialist to assist field tech. Future AI application: automatically find and rank related cases and highlight relevant information.
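The linear-regression demand forecast from the car-share demo can be sketched in a few lines. This is a minimal pure-Python illustration with invented historical data and a single temperature feature, not Magellan's actual API:

```python
# Hedged sketch: forecast car-share rentals from weather, in the spirit of
# the Magellan demo. Historical data is invented for illustration.
history = [(10, 60), (15, 95), (20, 120), (25, 150), (30, 185)]  # (temp C, rentals/day)

# Closed-form ordinary least squares for a single feature
n = len(history)
mean_x = sum(t for t, _ in history) / n
mean_y = sum(r for _, r in history) / n
slope = (sum((t - mean_x) * (r - mean_y) for t, r in history)
         / sum((t - mean_x) ** 2 for t, _ in history))
intercept = mean_y - slope * mean_x

def forecast_demand(temp_forecast: float) -> float:
    """Predict expected rentals for a forecasted temperature."""
    return intercept + slope * temp_forecast

print(round(forecast_demand(28)))  # expected rentals on a 28 C day
```

A production model would add more features (day of week, events, utilization history), which is where a multi-variable fit in a notebook environment comes in.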

The keynote ended with Barrenechea interviewing Wayne Gretzky, which was a delightful conversation although unrelated to any of the technology topics. However, Gretzky did talk about the importance of teamwork, and how working with people who are better than you at something makes you better at what you do. You could also see analogies in business when he talked about changes in the sport of hockey: globalization, expanding markets, and bigger, meaner competition. As a guy who spent a lot of the early part of his hockey career as the smallest guy on the ice, he learned how to hone his intelligence about the game to be a winner in spite of the more traditional strengths of his competitors: a good lesson for all of us.

Smart City initiative with @TorontoComms at BigDataTO

Winding down the second day of Big Data Toronto, Stewart Bond of IDC Canada interviewed Michael Kolm, newly-appointed Chief Transformation Officer at the city of Toronto, on the Smart City initiative. This initiative is in part about using “smart” technology – by which he appears to mean well-designed, consumer-facing applications – as well as good mobile infrastructure to support an ecosystem of startup and other small businesses for creating new technology solutions. He gave an example from the city’s transportation department, where historical data is used to analyze traffic patterns, allowing for optimization of traffic flow and predictive modeling for future traffic needs due to new development. This includes input into projects such as the King Street Pilot Study that is going into effect later this year, which will restrict private vehicle traffic on a stretch of King in order to optimize streetcar and pedestrian flows. In general, the city has no plan to monetize data, but prefers to use city-owned data (which is, of course, owned by the public) to foster growth through Open Data initiatives.

There were some questions about how the city will deal with autonomous vehicles, short-term (e.g., AirBnB) rentals and other areas where advancing technology is colliding with public policies. Kolm also spoke about how the city needs to work with the startup/small business community for bringing innovation into municipal government services, and also how our extensive network of public libraries is an untapped potential channel for civic engagement. For more on digital transformation in the city of Toronto, check out my posts on the TechnicityTO conference from a few months back.

I was going to follow this session with the one on intelligent buildings and connected communities by someone from Tridel, which likely would have made an interesting complement to this presentation, but unfortunately the speaker had to cancel at the last minute. That gives me a free hour to crouch in a hallway by an electrical outlet to charge my phone.

Consumer IoT potential: @ZoranGrabo of @ThePetBot has some serious lessons on fun

I’m back for a couple of sessions at the second day at Big Data Toronto, and just attended a great session by Zoran Grabovac of PetBot on the emerging markets for consumer IoT devices. His premise is that creating success with IoT devices is based on saving/creating time, strengthening connections, and having fun.

It also helps to be approaching an underserved market, and if you believe his somewhat horrifying stat that 70% of pet owners consider themselves to be “pet parents”, there’s a market of people who want to interact with and entertain their pets with technology while they are gone during working hours. PetBot’s device gives you a live video feed of your pet remotely, but can also play sounds, drop treats (cue Pavlov) and record pet selfies using facial recognition to send to you while you’re out. This might seem a bit frivolous, but his lessons apply more broadly: use devices to “create” time (allowing for interaction during a time when you would not normally be available), make your own types of interactions (e.g., create a training regimen using voice commands), and have fun to promote usage retention (who doesn’t like cute pet selfies?).

I asked about integrating with pet activity trackers and he declined to comment, so we might see something from them on this front; other audience members asked about the potential for learning and recognition algorithms that could automatically reward specific behaviours. I’m probably not going to run out and get a PetBot – it seems much more suited for dogs than cats – but his insights into consumer IoT devices are valid across a broader range of applications.

Data-driven deviations with @maxhumber of @borrowell at BigDataTO

Any session at a non-process conference with the word “process” in the title gets my attention, and I’m here to see Max Humber of Borrowell discuss how data-driven deviations allow you to make changes while maintaining the integrity of legacy enterprise processes. Borrowell is a fintech company focused on lending applications: free credit score monitoring, and low-interest personal loans for debt consolidation or reducing credit card debt. They partner with established providers such as Equifax and CIBC for the underlying credit monitoring and lending capabilities, with Borrowell providing a technology layer that’s more than just a pretty face: they use a lot of information sources to create very accurate risk models for automated loan adjudication. As Borrowell’s deep learning platforms learn more about individual and aggregate customer behaviour, their risk models and adjudication platform become more accurate, reducing the risk of loan defaults while fine-tuning loan rates to optimize the risk/reward curve.
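To make the automated-adjudication idea concrete, here is a minimal sketch of a logistic risk model gating loan approval. The weights, feature names and risk threshold are invented for illustration; Borrowell's actual models are undoubtedly far richer:

```python
import math

# Hedged sketch of automated loan adjudication with a logistic risk model.
# Weights and features are invented; a real model would be trained on
# historical repayment data across many more information sources.
WEIGHTS = {"credit_score": -0.01, "debt_to_income": 3.0, "intercept": 4.0}

def default_probability(credit_score: float, debt_to_income: float) -> float:
    """Map applicant features to a modeled probability of default."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["credit_score"] * credit_score
         + WEIGHTS["debt_to_income"] * debt_to_income)
    return 1.0 / (1.0 + math.exp(-z))

def adjudicate(credit_score: float, debt_to_income: float,
               max_risk: float = 0.05) -> str:
    """Approve automatically only when modeled default risk is low."""
    p = default_probability(credit_score, debt_to_income)
    return "approve" if p < max_risk else "refer to manual review"

print(adjudicate(780, 0.2))  # strong applicant
print(adjudicate(580, 0.6))  # weak applicant
```

The same modeled probability can also feed risk-based pricing, which is the "fine-tuning loan rates" half of the story.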

Great application of AI/ML technology to financial services, which sorely need some automated intelligence applied to many of their legacy processes.

IBM’s cognitive, AI and ML with @bigdata_paulz at BigDataTO

I’ve been passing on a lot of conferences lately – just too many trips to Vegas for my liking, and insufficient value for my time – but tend to drop in on ones that happen in Toronto, where I live. This week, it’s Big Data Toronto, held in conjunction with Connected Plus and AI Toronto.

Paul Zikopoulos, VP of big data systems at IBM, gave a keynote on what cognitive, AI and machine learning mean to big data. He pointed out that no one has a problem collecting data – all companies are pros at that – but the problem is knowing what to do with it in order to determine and act on competitive advantage, and how to value it. He talked about some of IBM’s offerings in this area, and discussed a number of fascinating uses of AI and natural language that are happening in business today. There are trendy chatbot applications, such as Sephora’s lipstick selection bot (upload your selfie and a picture of your outfit to match to get recommendations and purchase directly); and more mundane but useful cases of your insurance company recommending that you move your car into the garage since a hailstorm is on the way to your area. He gave us a quick lesson on supervised and unsupervised learning, and how pattern detection is a fundamental capability of machine learning. Cognitive visual inspection – the descendant of the image pattern analysis algorithms that I wrote in FORTRAN about a hundred years ago – now happens by training an algorithm with examples rather than writing code. Deep learning can be used to classify pictures of skin tumors, or learn to write like Ernest Hemingway, or auto-translate a sporting event. He finished with a live demo combining open source tools such as sentiment analysis, Watson for image classification, and a Twitter stream into a Bluemix application that classified pictures of cakes at Starbucks – maybe not much of a practical application, but you can imagine the insights that could be extracted and analyzed in the same fashion. All of this computation doesn’t come cheap, however, and IBM would love to sell you a few (thousand) servers or cloud infrastructure to make it happen.
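The supervised/unsupervised distinction from his quick lesson can be shown on a toy dataset: clustering finds structure with no labels, while classification learns a boundary from labels. This is a minimal pure-Python sketch with made-up one-dimensional data, not any IBM tooling:

```python
# Hedged sketch contrasting unsupervised and supervised learning on the
# same tiny 1-D dataset; data and labels are invented for illustration.
data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]

def two_means(points, iters=10):
    """Unsupervised: find two cluster centres with no labels provided."""
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return c1, c2

# Supervised: labels are provided, so learn a decision threshold from them.
labels = [0, 0, 0, 1, 1, 1]
threshold = (max(p for p, l in zip(data, labels) if l == 0)
             + min(p for p, l in zip(data, labels) if l == 1)) / 2

c1, c2 = two_means(data)
print(round(c1, 2), round(c2, 2))  # cluster centres found without labels
print(round(threshold, 2))         # boundary learned from labels
```

Training by example, as in the cognitive visual inspection case, is the supervised half of this picture scaled up to images and deep networks.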

After being unable to get into three breakout sessions in a row – see my more detailed comments on conference logistics below – I decided to head back to my office for a couple of hours. With luck, I’ll be able to get into a couple of other interesting sessions later today or tomorrow.

A huge thumbs down to the conference organizers (Corp Agency), by the way. The process to pick up badges for pre-registered attendees was a complete goat rodeo, and took me 20+ minutes to simply pick up a pre-printed badge from a kiosk; the person staffing the “I-L” line started at the beginning of the Ks and flipped his way through the entire stack of badges to find mine, so it was taking about 2 minutes per person in our line while the other lines were empty. The first keynote of the day, which was only 30 minutes long, ran 15 minutes late. The two main breakout rooms were woefully undersized, meaning that it was literally standing room in many of the sessions – which I declined to attend because I can’t type while standing – although there was a VIP section with open seats for those who bought the $300 VIP pass instead of getting the free general admission ticket. There was no conference wifi or charging stations for attendees. There was no free water/coffee service (and the paid food items didn’t look very appetizing); this is a mostly free conference but with sponsors such as IBM, Deloitte, Cloudera and SAS, it seems like they could have had a couple of coffee urns set up for free under a sponsor’s name. The website started giving me an error message about out of date content every time I viewed it on my phone; at least I think it was about out of date content, since it was inexplicably only in French. The EventMobi conference app was very laggy, and was missing huge swaths of functionality if you didn’t have a data connection (see above comments about no wifi or charging stations). I’ve been to a lot of conferences, and the logistics can really make a big difference for the attendees and sponsors. In cases like this, where crappy logistics actually prevent attendees from going to sessions that feature vendor sponsor speakers (IBM, are you listening?), it’s inexcusable. Better to charge a small fee for everyone and actually have a workable conference.

Cloud ECM with @l_elwood @OpenText at AIIM Toronto Chapter

Lynn Elwood, VP of Cloud and Services Solutions at OpenText, presented on managing information in a cloud world at today’s AIIM chapter meeting in Toronto. This is of particular interest to Canadians, since most of the cloud service offerings that we see are in the US, and many companies are not comfortable with keeping their private data in a jurisdiction where it can be somewhat easily exposed to foreign government and intelligence agencies.

She used a building analogy to talk about cloud services:

  • Infrastructure as a service (IaaS) is like a piece of serviced land on which you need to build your own building and worry about your connections to services. If your water or electricity is off, you likely need to debug the problem yourself, although if you find that the problem is with the underlying services, you can go back to the service provider.
  • Platform as a service (PaaS) is like a mobile home park, where you are responsible for your own dwelling but not for the services, and there are shared services used by all residents.
  • Software as a service (SaaS) is like a condo building, where you own your own part of it, but it’s within a shared environment. SaaS by Gartner’s definition is multi-tenant, and that’s the analogy: you are at the whim, to a certain extent, of the building management in terms of service availability, but at a greatly reduced cost.
  • Dedicated, hosted or managed is like a private house on serviced land, where everything in the house is up to you to maintain. In this set of analogies, I’m not sure that there is a lot of distinction between this and IaaS.
  • On-premises is like a cottage, where you probably need to deal with a lot of the services yourself, such as water and septic systems. You can bring in someone to help, but it’s ultimately all your responsibility.
  • Hybrid is a combo of things — cloud to cloud, cloud to on-premise — such as owning a condo and driving to a cottage, where you have different levels of service at each location but they share information.
  • Managed services is like having a property manager, although it can be cloud or on-premise, to augment your own efforts (or that of your staff).

Regardless of the platform, anything that touches the cloud is going to have security considerations as well as performance/up-time SLAs if you want to consider it as part of your core business. From my experience, on-premise solutions can be just as insecure and unstable as any cloud offering, so it’s good to know what you’re comparing when you’re looking at cloud versus on-premise.

Most organizations require that their cloud provider have some level of certification of the facility (data centre), platform (infrastructure) and service (application). Elwood talked about the cloud standards that impact these, including ISO 27001, and SOC 1, 2 and 3.

A big concern is around applications in the cloud, namely SaaS such as Box or Salesforce. Although IT will be focused on whether the security of that application can be breached, business and information managers need to be concerned about what type of data is being stored in those applications and whether it potentially violates any privacy regulations. Take a good look at those SaaS EULAs — Elwood took us through some Apple and Google examples — and have your lawyers look at them as well if you’re deploying these solutions within the enterprise. You also need to look at data residency requirements (as I mentioned at the start): where the data resides, the sovereignty of the hosting company, the routing between you and the data even if the data resides in your own country, and the backup policies of the hosting company. The US Patriot Act allows the US government to access any data that passes through, is stored in, or is hosted by a company that is domiciled in the US; other countries are also adding similar laws. Although a company may have a data centre in your country, if they’re a US company, they probably have a default to store/process/backup in the US: check out the Microsoft hosting and data processing agreement, for example, which specifies that your data will be hosted and/or processed in the US unless you explicitly request otherwise. There’s an additional issue that even if your data has the appropriate residency, if an employee is travelling to a restricted country and accesses the data remotely, you may be violating privacy regulations; not all applications have the ability to filter otherwise authenticated access based on IP address. If you add this to the ability of foreign governments to demand device passwords in order to enter a country, the information accessible via an employee’s computer — not just the information stored on it — is at risk for exposure.
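The filtering capability that many applications lack is straightforward to picture: access control that fails closed when an authenticated user connects from a restricted jurisdiction. This sketch uses an invented country list and a hard-coded stand-in for a real GeoIP lookup (such as a commercial GeoIP database):

```python
# Hedged sketch of jurisdiction-aware access control: authentication alone
# is not enough when data residency rules also apply. Country codes and the
# IP-to-country mapping below are invented for illustration.
RESTRICTED_COUNTRIES = {"XX", "YY"}  # placeholder ISO codes

# Stand-in for a real GeoIP lookup service
GEO_DB = {"203.0.113.5": "CA", "198.51.100.7": "XX"}

def allow_access(authenticated: bool, client_ip: str) -> bool:
    """Grant access only to authenticated users in permitted jurisdictions."""
    if not authenticated:
        return False
    country = GEO_DB.get(client_ip, "UNKNOWN")
    # Fail closed when the origin is restricted or cannot be determined
    return country != "UNKNOWN" and country not in RESTRICTED_COUNTRIES

print(allow_access(True, "203.0.113.5"))   # permitted jurisdiction
print(allow_access(True, "198.51.100.7"))  # restricted jurisdiction
```

Real deployments would also have to handle VPNs and proxies, which is part of why this is harder than it looks.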

Elwood showed a map of the information governance laws and regulations around the world, and it’s a horrifying mix of acronyms for data protection and privacy rules, regulated records retention, eDiscovery requirements, information integrity and authenticity, and reporting obligations. There’s a new EU regulation — the General Data Protection Regulation (GDPR) — that is going to be a game-changer, harmonizing laws across all 28 member nations and applying to any data collected about an EU citizen even outside the EU. The GDPR includes increased consent standards, stronger individual data rights, stronger breach notification, increased governance obligation, stronger recordkeeping requirements, and data transfer constraints. Interestingly, Canada is recognized as one of the countries that is deemed to have “adequate protection” for data transfer, along with Andorra, Argentina, the Faroe Islands, the Channel Islands (Guernsey and Jersey), Isle of Man, Israel, New Zealand, Switzerland and Uruguay. In my opinion, many companies aren’t even aware of the GDPR, much less complying with it, and this is going to be a big wake-up call. Your compliance teams need to be aware of the global landscape as it impacts your data usage and applications, whether in the cloud or on premise; companies can receive huge fines (up to 4% of annual revenue) for violating GDPR whether they are the “owner” of the data or just a data processor/host.

OpenText has a lot of GDPR information on their website that is not specific to their products if you want to read more. 

There are a lot of benefits to cloud when it comes to information management, and a lot of things to consider: agility to grow and change quickly; a services approach that requires partnering with the service provider; mobility capabilities offered by cloud platforms that may not be available for on premise; and analytics offered by cloud vendors within and across applications.

She finished up with a discussion on the top areas of concerns for the attendees: security, regulations, GDPR, data sovereignty, consumer applications, and others. Great discussion amongst the attendees, many of whom work in the Canadian financial services industry: as expected, the biggest concerns are about data residency and sovereignty. GDPR is seen as having the potential to level the regulatory playing field by making everyone comply; once the data centres and service providers start to comply, it will be much easier for most organizations to outsource that piece of their compliance by moving to cloud services. I think that cloud service providers are already doing a better job at security and availability than most on-premise systems, so once they crack the data residency and sovereignty problem there is little reason to have a private data centre. IT’s concern has mostly been around security and availability, but now is the time for information and compliance managers to get involved to ensure that privacy regulations are supported by these platforms.

There are Canadian companies using cloud services, even the big banks and government, although I am guessing that it’s for peripheral rather than core services. Although some are doing this “accidentally” as the only way to share information with external participants, it’s likely time for many companies to revisit their information management strategies to see if they can be more inclusive of properly vetted cloud solutions.

We did get a very brief review of OpenText and their offerings at the end, including their software solutions and their EIM cloud offerings under the OpenText Cloud banner. They are holding their Enterprise World user conference in Toronto this July, which is the first (but likely not the last) big software company to see the benefits of a non-US North American conference location.

Twelve years – and a million words – of Column 2

In January, I read Paul Harmon’s post at BPTrends on predictions for 2017, and he mentioned that it was the 15th anniversary of BPTrends. This site hasn’t been around quite that long, but today marks 12 years of blogging here on Column 2. Coincidentally, my first post was on the BPTrends 2005 report on BPM suites!

In that time, I’ve written more than a million words in about 2,600 posts – haven’t quite got around to writing that book yet – documenting many conferences and products, as well as emerging trends and standards in BPM. I’ve collected over 3,000 comments from many of you, which I consider a measure of success: I write here to engage people and discuss ideas. Many of you have become clients, colleagues and friends over the years, and it’s always a thrill to meet someone for the first time and hear them say “I read your blog”. I know that I’ve inspired others to pick up that keyboard and start blogging, and my RSS reader is still the first place that I go for news about the industry (hint: I’m more likely to read your site if you publish a full RSS feed; I only get to the partial ones every week or so).

In the early days, I blogged more frequently, every couple of days; now I seem to be caught up in projects that consume a lot of my time and have fewer hours to spend focused on writing. Also, I’ve cut back on my business conference travel in the past year or so, attending only the ones where I’m presenting or where I feel that there is value for me, which gives me far fewer opportunities to blog about conference sessions. I’m not going to make any predictions about whether I’ll blog more or less in the next 12 years; I’m just happy to have a soapbox to stand on.

AIIM breakfast meeting on Feb 16: digital transformation and intelligent capture

I’m speaking at the AIIM breakfast meeting in Toronto on February 16, with an updated version of the presentation that I gave at the ABBYY conference in November on digital transformation and intelligent capture. ABBYY is generously sponsoring this meeting and will give a brief presentation/demo on their intelligent capture and text analytics products after my presentation.

Here’s the description of my talk:

This presentation will look at how digital transformation is increasing the value of capture and text analytics, recognizing that these technologies provide an “on ramp” to the intelligent, automated processes that underlie digital transformation. Using examples from financial services and retail companies, we will examine the key attributes of this digital transformation. We will review, step by step, the role of intelligent capture in digital transformation, showing how a customer-facing financial services process is changed by intelligent capture technologies. We will finish with a discussion of the challenges of introducing intelligent capture technologies as part of a digital transformation initiative.

You can register to attend here, and there’s a discount if you’re an AIIM member.

You can read about last month’s meeting here, which featured Jason Bero of Microsoft talking about SharePoint and related Microsoft technologies that are used for classification, preservation, protection and disposal of information assets.

BPM skills in 2017–ask the experts!

Zbigniew Misiak over at BPM Tips decided to herd the cats, and asked a number of BPM experts on the skills that are required – and not relevant any more – as we move into 2017 and beyond. I was happy to be included in that group, and my contribution is here.

In a nutshell, I had advice for both the process improvement/engineering groups, and the IT groups that are involved in BPM implementations. Basically, the former needs to learn more about the potential power of automation as a process improvement tool and how BPMS can help with that; while the latter needs to stop using agile low-code BPMS tools to do monolithic, waterfall-driven implementations. I also addressed the need for citizen developers – usually semi-technical business analysts that build “end user computing” solutions directly within business units – to start using low-code BPMS tools to do this instead of spreadsheets.

On the side of skills that are no longer relevant, I’m seeing less need for Lean/Six Sigma efforts that focus on incremental process improvements rather than innovation. There are definitely industries with material assets and processes that benefit greatly from LSS methodologies, but its use in knowledge-based service organizations is waning.

Check out the entire post at the link above for the views of several others in the industry.