Invasion of the bots: intelligent healthcare applications at @UnitedHealthGrp

Dan Abdul, VP of technology at UnitedHealth Group (a large US healthcare company), presented at AIIM 2018 on driving intelligent information in US healthcare, and how a variety of AI and machine learning technologies are contributing to that: bots that answer your questions in an online chat, Amazon’s Alexa telling you the best clinic to go to, and image recognition that detects cancer in a scan before most radiologists would. The US has an extremely expensive healthcare system, with much of the cost driven by in-patient services in hospitals, yet a number of initiatives (telemedicine, home healthcare, etc.) do little to reduce hospital visits and the related costs. Intelligent information can help reduce some of those costs through early detection of problems that are easily treatable before they become serious enough to require hospital care, and through prediction of other conditions, such as homelessness, that often result in a greater need for healthcare services. These intelligent technologies are not intended to replace healthcare practitioners, but to assist them by processing more information faster than a person can, and by surfacing insights that might otherwise be missed.

Abdul and his team have built a smart healthcare suite of applications on a broad foundation of data sources: he sees the data as key, since you can’t look for patterns or detect early symptoms without data on which to apply the intelligent algorithms. With aggregate data from a wider population and specific data for a patient, intelligent healthcare can provide much more personalized, targeted recommendations for each individual. They’ve made a number of meaningful breakthroughs in applying AI technologies to healthcare services, such as identifying gaps in care based on treatment codes, and doing real-time monitoring and intervention via IoT devices such as fitness trackers.
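To make the “gaps in care” idea concrete, here’s a minimal sketch of how such a check might work: compare a patient’s recorded treatment codes against the follow-up codes you’d expect given their conditions, and flag what’s missing. The condition codes, follow-up codes, and rule mapping below are all invented for illustration, not UnitedHealth Group’s actual rules or systems.

```python
# Illustrative sketch only: hypothetical codes and rules, not an actual
# payer system. Flags "gaps in care" by checking a patient's treatment
# codes against expected follow-ups for conditions in their history.

CARE_RULES = {
    # condition code -> expected follow-up codes (hypothetical mapping)
    "E11": {"83036", "2022F"},   # diabetes -> HbA1c test, retinal eye exam
    "I10": {"99473"},            # hypertension -> home blood-pressure monitoring
}

def find_care_gaps(patient_codes):
    """Return {condition: missing follow-up codes} for one patient's claim history."""
    codes = set(patient_codes)
    gaps = {}
    for condition, expected in CARE_RULES.items():
        if condition in codes:
            missing = expected - codes
            if missing:
                gaps[condition] = missing
    return gaps

# Example: a diabetic patient with an HbA1c on record but no eye exam
print(find_care_gaps(["E11", "83036"]))
```

In a real system the rules would come from clinical guidelines and the codes from claims data, but the core pattern is the same: population-level expectations applied to individual records.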

These ideas are not unique to healthcare, of course; personalized recommendations based on a combination of a specific consumer’s data plus trends from aggregate population data can be applied to anything from social services to preventative equipment maintenance.

AIIM18 keynote with @jmancini77: it’s all about digital transformation

I haven’t been to the AIIM conference since the early to mid 90s; I stopped when I started to focus more on process than content (and it was very content-centric then), then stayed away when the conference was sold off, then started looking at it again when it reinvented itself a few years ago. These days, you can’t talk about content without process, so there’s a lot of content-oriented process here as well as AI, governance and a lot of other related topics.

I arrived yesterday just in time for a couple of late-afternoon sessions: one presentation on digital workplaces by Stephen Ludlow of OpenText that hit a number of topics that I’ve been working on with clients lately, then a roundtable on AI and content hosted by Carl Hillier of ABBYY. This morning, I attended the keynote where John Mancini discussed digital transformation and a report released today by AIIM. He put a lot of emphasis on AI and machine learning technologies; specifically, how they can help us to change our business models and accelerate transformation.

We’re in a different business and technology environment these days, and a recent survey by AIIM shows that a lot of people think that their business is being (or is about to be) disrupted, and that digital transformation is an important part of dealing with that. However, very few of them are more than a small part of the way towards their 2020 goals for transformation. In other words, people get that this is important, but just aren’t able to change as fast as is required. Mancini attributed this in part to the escalating complexity and chaos that we see in information management, where — like Alice — we are running hard just to stay in place. Given the increasing transparency of organizations’ operations, whether voluntary or through online customer opinions, staying in the same place isn’t good enough. One contributor to this is the number of content management systems that the average organization has (hint: it’s more than one), plus all of the other places where data and content reside, forcing workers to scramble around looking for information. Most companies don’t want a single monolithic source of content, but do want a federated way to find things when they need them: in part, this fits with the relabelling of enterprise content management (ECM) as “Content Services” (Gartner’s term) or “Intelligent Information Management” (AIIM’s term), although I feel that’s a bit of unnecessary hand-waving that just distracts from the real issues of how companies deal with their content.

He went through some other key findings from their report on which technologies companies are looking at, and what priority they’re giving them; it looks like it’s worth a read. He wrapped up with a few of his own opinions, including the challenge that we need to consider content AND data, not content OR data: the distinction between structured and unstructured information is breaking down, in part because of the nature of natively-digital content, and in part because of AI technologies that quickly turn what we think of as content into data.

There’s a full slate of sessions today; stay tuned.

Data-driven deviations with @maxhumber of @borrowell at BigDataTO

Any session at a non-process conference with the word “process” in the title gets my attention, and I’m here to see Max Humber of Borrowell discuss how data-driven deviations allow you to make changes while maintaining the integrity of legacy enterprise processes. Borrowell is a fintech company focused on lending applications: free credit score monitoring, and low-interest personal loans for debt consolidation or reducing credit card debt. They partner with existing financial institutions such as Equifax and CIBC to provide the underlying credit monitoring and lending capabilities, with Borrowell providing a technology layer that’s more than just a pretty face: they use a lot of information sources to create very accurate risk models for automated loan adjudication. As Borrowell’s deep learning platforms learn more about individual and aggregate customer behaviour, their risk models and adjudication platform become more accurate, reducing the risk of loan defaults while fine-tuning loan rates to optimize the risk/reward curve.
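To illustrate the shape of automated adjudication described above, here’s a toy sketch: a logistic-style risk score over a few applicant features, with an approval rule that also prices the loan rate along the risk/reward curve. The features, weights, thresholds, and pricing formula are all invented for illustration; Borrowell’s actual models are far richer and learned from data rather than hand-set.

```python
# Illustrative sketch only: invented features and weights, not Borrowell's
# actual model. Estimates default risk with a logistic function, then
# adjudicates: decline if risk is too high, otherwise price rate by risk.
import math

WEIGHTS = {"credit_score": -0.01, "debt_to_income": 3.0, "missed_payments": 0.8}
BIAS = 4.0

def default_risk(applicant):
    """Probability-of-default estimate from a weighted feature sum."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def adjudicate(applicant, max_risk=0.25, base_rate=0.06):
    """Approve if risk is acceptable; higher risk means a higher offered rate."""
    risk = default_risk(applicant)
    if risk > max_risk:
        return {"approved": False, "risk": risk}
    return {"approved": True, "risk": risk, "rate": base_rate + 0.4 * risk}

# A strong applicant: high credit score, low debt load, clean payment history
print(adjudicate({"credit_score": 780, "debt_to_income": 0.2, "missed_payments": 0}))
```

The interesting part in practice is the feedback loop the talk described: as observed defaults feed back into the model, the weights (here hand-set) are re-learned, tightening the risk estimates over time.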

Great application of AI/ML technology to financial services, which sorely need some automated intelligence applied to many of their legacy processes.