Unintended consequences (the good kind) of DigitalTransformation with @jkyriakidis

Jordan Kyriakidis, CEO of QRA Corp, spoke at a session at ITWC’s Toronto digital transformation conference on some of the unexpected consequences of technological advances in terms of collaboration and cross-fertilization of ideas. QRA is a tech startup in Atlantic Canada, and Kyriakidis’ examples are about how companies in that relatively small (economically) region are encouraging new ways of thinking about solving business problems through these sorts of “collisions”.

Addressing the complexity introduced by advancing technology means that we have to invent new methods and tools. He gave an example from industrial design: design moved from paper to computer-aided design, then added electronic design automation when the complexity of chip placement overwhelmed human capabilities, and now design verification allows model-based (requirements-driven) designs to be validated before more expensive engineering and production begins.

Another example, in precision diagnosis and treatment, was data-driven farming, combining computer vision and big data analytics (plus drone delivery of individual plant treatments) to optimize crop yields.

His third example was of integrating and analyzing a variety of data sources about a specific athlete to allow a coach to optimize training and performance for that athlete in their chosen sport.

His main theme of precision diagnosis and treatment — essentially, doing something different for every case based on the context — can be extended in pretty much any industry: consider the attempts by many consumer-facing companies to customize individual customer experiences. Interesting look at companies that are actually doing it.

FinTech panel at ITWC DigitalTransformation 2018

Lynn Elwood, VP Cloud and Services from OpenText, hosted a panel on FinTech to close out the morning at the ITWC digital transformation conference in Toronto. She started with some background on digital transformation in financial services, where there is still a strong focus on cost reduction, but customer engagement has become more important. She included survey results with a somewhat disappointing view on paperless offices, with more than 75% of the respondents saying that they would not go paperless for at least five years, or maybe never. Never??!! Maybe just not within the career lifetime of the respondents, but c’mon, never? I understand that digital transformation is not the same as content digitization, but if you’re still running on paper, that’s just going to fundamentally limit the degree of your transformation. At the same time, more than 75% said that they plan to use AI already or in the short term (hopefully to replace the people who think that they’re never going to be paperless), and most organizations said that they were equal to or better than their peers in digital transformation (statistically unlikely). Unintentionally hilarious.

The panel was made up of Michael Ball, CISO Advisor for a number of firms including Freedom Mobile; Amer Matar, CTO of Moneris (a large Canadian payment processor); and Patrick Vice, partner at Insurance-Canada.ca (an industry organization for P&C insurance). Matar talked about how legacy technology holds back companies: existing companies have the advantage of being established incumbents, but newer players (e.g., Square in the payments market) can enter with a completely new business model and no legacy customers or infrastructure to drag along. Vice talked about how companies can combat this by spinning off separate business units to provide a more streamlined digital experience and brand, such as how Economical Insurance did with Sonnet (a project that I had the pleasure of working on last year), which still uses the established insurance organization behind a modern customer experience. Ball stressed that the legacy systems are evolving at a much slower rate than is required for digital transformation, and the new front ends need to go beyond just putting a friendly UI on the old technology: they need to incorporate new services to present a transformed customer experience.

They had an interesting discussion about security, and how moving to digital business models means that companies need to offer a more secure environment for customers. Many people are starting to look at security (such as two-factor authentication) as a competitive differentiator when they are selecting service providers, and while most people wouldn’t now change their bank just because it didn’t provide 2FA, it won’t be long before that is a decision point. It’s not just about cloud versus on-premise, although there are concerns about hosting Canadian customers’ financial data outside Canada, where financial laws (and government access to data) may be different; it’s about an organization’s ability to assure their customer that their information won’t be improperly accessed while offering a highly secure customer-facing portal. There’s a huge spend on security these days, but that needs to settle down as this becomes just baked into the infrastructure rather than an emergency add-on to existing (insecure) systems.

Good discussion, although it points out that it’s still early days for digital transformation in financial services.

Digital government with @AlexBenay at IT World DigitalTransformation 2018

I’ve attended IT World Canada conferences in Toronto before — easy for me to attend as a local, and some interesting content such as Technicity — and today they’re running a digital transformation conference (that, oddly, has the hashtag #digitaltransformation, as if that were a unique tag).

Alex Benay, CIO of the government of Canada, gave the opening keynote: with $6B/year in IT spend and more than a few high-profile mistakes under their belt that happened before he arrived in the job in early 2017, he has some views on how to do things better. He’s even written a book about digital government, but given that the federal government takes five years to write requirements, he’ll probably be long retired before we know if any of his predictions come true. He talked about some models of digital government, such as Estonia, and how the government of Canada is attempting to integrate their digital services into our everyday lives by partnering with the private sector: think Transport Canada road alerts built into your GM car, or passport renewal and customs forms triggered by an Expedia booking. He admits to a lot of obstacles, including untrained staff in spite of massive training spends, but also many enablers to reaching their goals, such as changing policies around cloud-first deployments. He finished with five core tenets for any government IT moving forward:

  • Open data by default while protecting citizens
  • Collaborate in the open
  • Grow digital talent
  • Change laws/policies to avoid situations like Facebook/Cambridge Analytica
  • Adapt business models to focus only on meeting user needs (procurement, tech management, service design)

Good principles, and I hope that our government can learn to live by them.

Integrating process and content for digital transformation: my upcoming webinar

As much as I love chatting with the newer crop of entrepreneurs about their products and ideas, sometimes it’s nice to have a conversation with someone who remembers when OS/2 was the cheapest way to buy 3.5” high density disks. You know who’s been hip-deep in the technology of content and process as long as I have? John Newton, founder and CTO of Alfresco, that’s who. John started Documentum back in 1990, around the time that I was selling off my imaging/workflow product startup and starting my services company, and while he’s stayed on the product side and I’ve stayed on the services/industry analyst side (except for a brief period as FileNet’s BPM evangelist), we’re both focused on how this technology helps companies in their digital transformation journey.

John and I will get together on a webinar about integrating process and content on July 24, sponsored by Alfresco, which will combine structured content with a free-ranging conversation. We’re planning to talk about use cases for applications that integrate process and content, some best practices for designing these applications, and overall architectural considerations for process/content applications including cloud and microservices. Add a comment here or on Twitter if there’s something in particular that you’d like us to discuss, and we’ll see if we can work it in.

I wrote a blog post for Alfresco a couple of months ago on use cases for content in process applications, stressing the importance of integrating process and content rather than leaving them as siloed applications; in general, this is what I’ve seen over the years in my practice as a systems architect and consultant helping organizations to get their content digitized and their processes automated. If you have digital content that’s locked up without any way to take actions on it, or automated processes that still require manual lookups of related content, then you should be thinking about how to integrate process and content. Tune in to our webinar for pointers from a couple of industry gray-hairs.

Summer BPM reading, with dashes of AI, RPA, low-code and digital transformation

Summer always sees a bit of a slowdown in my billable work, which gives me an opportunity to catch up on reading and research across the topic of BPM and other related fields. I’m often asked which blogs and other websites I read regularly to keep on top of trends and participate in discussions; here are some general guidelines for getting through a lot of material in a short time.

First, to effectively surf the tsunami of information, I use two primary tools:

  • An RSS reader (Feedly) with a hand-curated list of related sites. In general, if a site doesn’t have an RSS feed, then I’m probably not reading it regularly. Furthermore, if it doesn’t have a full feed – that is, one that shows the entire text of the article rather than a summary in the feed reader – it drops to a secondary list that I only read occasionally (or never). This lets me browse quickly through articles directly in Feedly and see which have something interesting to read or share without having to open the links directly.
  • Twitter, with a hand-curated list of digital transformation-related Twitter users, both individuals and companies. This is a great way to find new sources of information, which I can then add to Feedly for ongoing consumption. I usually use the Tweetdeck interface to keep an eye on my list plus notifications, but rarely review my full unfiltered Twitter feed. That Twitter list is also included in the content of my Paper.li “Digital Transformation Daily”, and I’ve just restarted tweeting the daily link.

Second, the content needs to be good to stay on my lists. I curate both of these lists manually, constantly adding and culling the contents to improve the quality of my reading material. If your blog posts are mostly promotional rather than informative, I remove them from Feedly; if you tweet too much about politics or your dog, you’ll get bumped off the DX list, although probably not unfollowed.

Third, I like to share interesting things on Twitter, and use Buffer to queue these up during my morning reading so that they’re spread out over the course of the day rather than all in a clump. To save things for a more detailed review later as part of ongoing research, I use Pocket to manually bookmark items, which also syncs to my mobile devices for offline reading, and an IFTTT script to save all links that I tweet into a Google sheet.
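As an aside on the full-feed test mentioned above: whether a feed carries the whole article usually comes down to whether each item includes a `content:encoded` element alongside the short `description`. Here’s a rough sketch of how one might check that programmatically — this is purely illustrative (the sample feed, function name, and classification rule are my own assumptions, not how Feedly actually does it):

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed: the first item carries the full article
# body in content:encoded, the second item has only a summary.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>Full-text post</title>
      <description>Short teaser.</description>
      <content:encoded>The entire article body appears here, readable
      in the feed reader without clicking through.</content:encoded>
    </item>
    <item>
      <title>Summary-only post</title>
      <description>Just a teaser.</description>
    </item>
  </channel>
</rss>"""

# Fully-qualified tag name for content:encoded in ElementTree syntax.
CONTENT_NS = "{http://purl.org/rss/1.0/modules/content/}encoded"

def classify_feed_items(feed_xml):
    """Return (title, 'full' or 'summary') for each feed item,
    based on whether the item has a content:encoded element."""
    root = ET.fromstring(feed_xml)
    results = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        kind = "full" if item.find(CONTENT_NS) is not None else "summary"
        results.append((title, kind))
    return results

print(classify_feed_items(SAMPLE_FEED))
# → [('Full-text post', 'full'), ('Summary-only post', 'summary')]
```

A script like this, pointed at your subscription list, would tell you which sites belong on the primary (full-feed) list and which get relegated to the secondary one.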

You can take a look at what I share frequently through Twitter to get an idea of the sources that I think have value; in general, I directly @mention the source in the tweet to help promote their content. Tweeting a link to an article – and especially inclusion in the auto-curated Paper.li Digital Transformation Daily – is not an endorsement: I’ll add my own opinion in the tweet about what I found interesting in the article.

Time to kick back, enjoy the nice weather, and read a good blog!

AlfrescoDay 2018: digital business platform and a whole lot of AWS

I attended Alfresco’s analyst day and a customer day in New York in late March, and due to some travel and project work, I’m only finding time to publish my notes now. Usually I do that while I’m at the conference, but part of the first day was under NDA so I needed to think about how to combine the two days of information.

The typical Alfresco customer is still very content-centric, in spite of the robust Alfresco Process Services (formerly Activiti) offering that is part of their platform: many of the key success stories presented at the conference were based on content implementations and migrations from ECM competitors such as Documentum. In a way, this is reminiscent of the FileNet conferences of 20 years ago, when I was talking about process but almost all of the customers were only interested in content management. What moves this into a very modern discussion, however, is the focus on Alfresco’s cloud offerings, especially on Amazon AWS.

First, though, we had a fascinating keynote by Sangeet Paul Choudary — and received a copy of his book Platform Scale: How an emerging business model helps startups build large empires with minimum investment — on how business models are shifting to platforms, and how this is disrupting many traditional businesses. He explained how supply-side economies of scale, machine learning and network effects are allowing online platforms like Amazon to impact real-world industries such as logistics. Traditional businesses in telecom, financial services, healthcare and many other verticals are discovering that without a customer-centric platform approach rather than a product approach, they can’t compete with the newer entrants into the market that build platforms, gather customer data and make service-based partnerships through open innovation. Open business models are particularly important, as is striking the right balance between an open ecosystem and maintaining control over the platform through key control points. He finished up with a digital transformation roadmap: gaining efficiencies through digitization; then using data collected in the first stage while integrating flows across the enterprise to create one view of the ecosystem; and finally externalizing and harnessing value flows in the ecosystem. This last stage, externalization, is particularly critical, since opening the wrong control points can kill your business or stifle open growth.

This was a perfect lead-in to Chris Wiborg’s (Alfresco’s VP of product marketing) presentation on Alfresco’s partnership with Amazon and the tight integration of many AWS services into the Alfresco platform: leveraging Amazon’s open platform to build Alfresco’s platform. This partnership has given this conference in particular a strong focus on cloud content management, and we are hearing more about their digital business platform that is made up of content, process and governance services. Wiborg started off talking about the journey from (content) digitization to digital business (process and content) to digital transformation (radically improving performance or reach), and how it’s not that easy to do this, particularly with existing systems that favor on-premise monolithic approaches. A (micro-) service approach on cloud platforms changes the game, allowing you to build and modify faster, and deploy quickly on a secure elastic infrastructure. This is what Alfresco is now offering, through the combination of open source software, integration of AWS services to expand their portfolio of capabilities, and an automated DevOps lifecycle.

This brings a focus back to process, since their digital business platform is often sold process-first to enable cross-departmental flows. In many cases, process and content are managed by different groups within large companies, and digital transformation needs to cut across both islands of functionality and islands of technology.

They are promoting the idea that differentiation is built and not bought, with the pendulum swinging back from buy toward build for the portions of your IT that contribute to your competitive differentiation. In today’s world, for many businesses, that’s more than just customer-facing systems, but digs deep into operational systems as well. In businesses that have a large digital footprint, I agree with this, but have to caution that this mindset makes it much too easy to go down the rabbit hole of building bespoke systems — or having someone build them for you — for standard, non-differentiating operations such as payroll systems.

Alfresco has gone all-in with AWS. It’s not just a matter of shoving a monolithic code base into a Docker container and running it on EC2, which is how many vendors claim AWS support: Alfresco has a much more integrated microservices approach that provides the opportunity to use many different AWS services as part of an Alfresco implementation in the AWS Cloud. This allows you to build more innovative solutions faster, but also can greatly reduce your infrastructure costs by moving content repositories to the cloud. They have split out services such as Amazon S3 (and soon Glacier) for storage services, RDS/Aurora for database services, SNS for notification, security services, networking services, IoT via Alexa, Rekognition for AI, etc. Basically, a big part of their move to microservices (and extending capabilities) is by externalizing to take advantage of Amazon-offered services. They’re also not tied to their own content services in the cloud, but can provide direct connections to other cloud content services, including Box, SharePoint and Google Drive.

We heard from Tarik Makota, an AWS solution architect from Amazon, about how Amazon doesn’t really talk about private versus public cloud for enterprise clients. They can provide the same level of security as any managed hosting company, including private connections between their data centers and your on-premise systems. Unlike other managed hosting companies, however, Amazon is really good at near-instantaneous elasticity — both expanding and contracting — and provides a host of other services within that environment that are directly consumed by Alfresco and your applications, such as Amazon RDS for Aurora, a variety of AI services, and serverless step functions. Alfresco Content Services and Process Services are both available as AWS QuickStarts, allowing for full production deployment in a highly-available, highly-redundant environment in the geographic region of your choice in about 45 minutes.

Quite a bit of food for thought over the two days, including their insights into common use cases for Alfresco and AI in content recognition and classification, and some of their development best practices for ensuring reusability across process and content applications built on a flexible modern architecture. Although Alfresco’s view of process is still quite content-centric (naturally), I’m interested to see where they take the entire digital business platform in the future.

Also great to see a month later that Bernadette Nixon, who we met as the Chief Revenue Officer at the event, has moved up to the CEO position. Congrats!

Upcoming webinar on digital transformation in financial services featuring @BPMdotcom and @ABBYY_USA – and my white paper

Something strange about receiving an email about an upcoming webinar, featuring two people who I know well…

 …then scrolling down to see that ABBYY is featuring the paper that I wrote for them as follow-on bonus material!

Nathaniel Palmer and Carl Hillier are both intelligent speakers with long histories in the industry; tune in to hear them talk about the role that content capture and content analytics play in digital transformation.

Obligatory futurist keynote at AIIM18 with @MikeWalsh

We’re at the final day of the AIIM 2018 conference, and the morning keynote is with Mike Walsh, talking about business transformation and what you need to think about as you’re moving forward. He noted that businesses don’t need to worry about millennials, they need to worry about 8-year-olds: these days 90% of all 2-year-olds (in the US) know how to use a smart device, making them the truly born-digital generation. What will they expect from the companies of the future?

Machine learning allows us to customize experiences for every user and every consumer, based on analysis of content and data. Consumers will expect organizations to predict their needs before they can even voice them themselves. In order to do that, organizations need to become algorithmic businesses: be business machines rather than have business models. Voice interaction is becoming ubiquitous, with smart devices listening to us most (all) of the time and using that to gather more data on us. Face recognition will become your de facto password, which is great if you’re unlocking your iPhone X, but maybe not so great if you don’t like public surveillance that can track your every move. Apps are becoming nagging persuaders, telling us to move more, drink more water, or attend this morning’s keynote. Like migratory birds that can sense magnetic north, we are living in a soup of smart data that guides us. Those persuasive recommendations become better at predicting our needs, and more personalized.

Although he started by saying that we don’t need to worry about millennials, 20 minutes into his presentation Walsh is admonishing us to let the youngest members of our team “do stuff rather than just get coffee”. It’s been a while since I worked in a regular office, but do people still have younger people get coffee for them?

He pointed out that rigid processes are not good, but that we need to be performance-driven rather than process-driven: making good decisions in ambiguous conditions in order to solve new problems for customers. Find people who are energized by unknowns to drive your innovation — this advice is definitely more important than considering the age of the person involved. Bring people together in the physical realm (no more work from home) if you want the ideas to spark. Take a look at your corporate culture, and gather data about how your own teams work in order to understand how employees use information and work with each other. If possible, use data and AI as the input when designing new products for customers. He recommended a next action of quantifying what high performance looks like in your organization, then work with high performers to understand how they work and collaborate.

He discussed the myth of the simple relationship between automation and employment, and how automating a task does not, in general, put people out of work, but just changes what their job is. People working together with the automation make for more streamlined (automated) standard processes, with the people focused on the things that they’re best at: handling exceptions, building relationships, making complex decisions, and innovating through the lens of combining human complexity with computational thinking.

In summary, the new AI era means that digital leaders need to make data a strategic focus, get smart about decisions, and design work rather than doing it. Review decisions made in your organization, and decide which are best made using human insight, and which are better to automate — either way, these could become a competitive differentiator.

Automation and digital transformation in the North Carolina Courts

Elizabeth Croom, Morgan Naleimaile and Gaynelle Knight from the North Carolina Courts led a breakout session on Thursday afternoon at AIIM 2018 on what they’ve done to move into the digital age. NC has a population of over 10 million, and the judiciary administration is integrated throughout the state across all levels, serving 6,800 staff, 5,400 volunteers and 32,000 law enforcement officers as well as integrating and sharing information with other departments and agencies. New paper filings take up 4.3 miles of shelving each year, yet the move to electronic storage has to be done carefully to protect the sensitivity of the information contained within these documents. For the most part, the court records are public records unless they are for certain types of cases (e.g., juveniles), but PII such as social security numbers must be redacted in some of these documents: this just wasn’t happening, especially when documents were scanned outside the normal course of content management. The practical obscurity (and security) of paper documents was moving into the accessible environment of electronic files.

They built their first version of an enterprise information management system, including infrastructure, taxonomy, metadata, automated capture and manual redaction. This storage-centric phase wasn’t enough: they also needed to address paper file destruction (due to space restrictions), document integrity and trustworthiness, automated redaction of PII, appropriate access to files, and findability. In moving along this journey, they started looking at declaring their digital files as records, and how that tied in with the state archives’ requirements, existing retention schedules and the logic for managing retention of records. There’s a great deal of manual quality control currently required for having the scanned documents be approved as an official record that can replace the paper version, which didn’t sit well with the clerks who were doing their own scanning. It appears as if an incredible amount of effort is being focused on properly interpreting the retention schedule logic and trigger sources: fundamentally, the business rules that underlie the management of records.

Moving beyond scanning, they also have to consider intake of e-filed documents — digitally-created documents that are sent into the court system in electronic form — and the judicial branch case management applications, which need to consume any of the documents and have them readily available. They have some real success stories here: there’s an eCourts domestic violence protection order (DVPO) process where a victim can go directly to a DV advocate’s office, and all filings (including a video affidavit) and the issuing of the order are done electronically while the victim remains in the safety of the advocate’s office.

They have a lot of plans moving forward to address their going-forward records capture strategy, as well as some of the retention issues that might be resolved by back-scanning microfilmed documents, where documents with different retention periods may be on the same roll of film. Interestingly, they wouldn’t say what their content management technology is, although it does sound like they’re assessing the feasibility of moving to a cloud solution.

AIIM18 keynote with @jmancini77: it’s all about digital transformation

I haven’t been to the AIIM conference since the early to mid 90s; I stopped when I started to focus more on process than content (and it was very content-centric then), then stayed away when the conference was sold off, then started looking at it again when it reinvented itself a few years ago. These days, you can’t talk about content without process, so there’s a lot of content-oriented process here as well as AI, governance and a lot of other related topics.

I arrived yesterday just in time for a couple of late-afternoon sessions: one presentation on digital workplaces by Stephen Ludlow of OpenText that hit a number of topics that I’ve been working on with clients lately, then a roundtable on AI and content hosted by Carl Hillier of ABBYY. This morning, I attended the keynote where John Mancini discussed digital transformation and a report released today by AIIM. He put a lot of emphasis on AI and machine learning technologies; specifically, how they can help us to change our business models and accelerate transformation.

We’re in a different business and technology environment these days, and a recent survey by AIIM shows that a lot of people think that their business is being (or about to be) disrupted, and digital transformation is an important part of dealing with that. However, very few of them are more than a bit of the way towards their 2020 goals for transformation. In other words, people get that this is important, but just aren’t able to change as fast as is required. Mancini attributed this in part to the escalating complexity and chaos that we see in information management, where — like Alice — we are running hard just to stay in place. Given the increasing transparency of organizations’ operations, either voluntarily or through online customer opinions, staying in the same place isn’t good enough. One contributor to this is the number of content management systems that the average organization has (hint: it’s more than one) plus all of the other places where data and content reside, forcing workers to have to scramble around looking for information. Most companies don’t want to have a single monolithic source of content, but do want a federated way to find things when they need it: in part, this fits in with the relabelling of enterprise content management (ECM) as “Content Services” (Gartner’s term) or “Intelligent Information Management” (AIIM’s term), although I feel that’s a bit of unnecessary hand-waving that just distracts from the real issues of how companies deal with their content.

He went through some other key findings from their report on which technologies companies are looking at, and what priority they’re giving them; it looks like it’s worth a read. He wrapped up with a few of his own opinions, including the challenge that we need to consider content AND data, not content OR data: the distinction between structured and unstructured information is breaking down, in part because of the nature of natively-digital content and in part because of AI technologies that quickly turn what we think of as content into data.

There’s a full slate of sessions today, stay tuned.