Pandemic-driven digital transformation in the legal world: this genie is not going back in the bottle

When I write or present about the type of digital transformation that the pandemic is forcing on firms, I usually use examples from financial services and insurance, since that’s where I do most of my consulting. However, we see examples all around us as consumers, as businesses of every size struggle to shift to an online model so that they can continue providing us with goods and services. And once both the consumers and the businesses see the benefits of doing some (not all) transactions online, there will be no going back to the old way of doing things.

I recently moved, and just completed the closing on the sale of my previous home. It’s been quite a while since I last did this, but it was always (and I believe still was until a few months ago) a very paper-driven, personal-service type of transaction. This time was much easier, and almost all online; in fact, I’ve never even met anyone from my lawyer’s office face-to-face, I didn’t use a document courier, and I only saw my real estate agent in person once. All documents were digitally signed, and I had a video call with my lawyer to walk me through the documents and verify that it was me doing the signing. I downloaded the signed documents directly, although the law office would have been happy to charge me to print and mail a copy. To hand over the keys, my real estate agent just left their lockbox (which contained the keys for other agents to do showings) and gave the code to my lawyer to pass on to the other party once the deal was closed. Payments were all done as electronic transfers.

My lawyer’s firm is obviously still struggling with this paradigm, and provided the option to deliver paper documents, payments and keys by courier (I had to remind them to remove the courier fee from their standard invoice). They no longer offer in-person meetings, however: it has to be a video call. Yes, you can still sign physical documents and courier them back and forth, but that adds a couple of days to the process and is more cumbersome than signing digitally. Soon, I expect to see pricing from law firms that strongly encourages their clients to do everything digitally, since paper documents cost more to handle and can create health risks for their employees.

Having gone through a real estate closing once from the comfort of my own home, I am left with one question: why would we ever go back to the old way of doing this? I understand that there are consumers who won’t or can’t adapt to new online methods of doing business with organizations, but those are becoming fewer every day. That’s not because the millennial demographic is taking over, but because people of all ages are learning that some of the online methods are better for them as well as for the companies that they deal with.

Generalizing from my personal anecdote, this is happening in many businesses now: they are making the move to online business models in response to the pandemic, then finding that for many operations, this is a much better way of doing things. Along the way, they may also be automating processes or eliminating manual tasks, like my lawyer’s office eliminating the document handling steps that they used to perform. That’s not just more efficient for the company, but better for the clients.

As you adjust your business to compensate for the pandemic, design your customer-facing processes so that it’s easier (where possible) for your customers to do things online than the old way. That will almost always be more efficient for your business, and can greatly improve customer satisfaction. This does not mean that you don’t need people in your organization, or that your customers can’t talk to someone when required: automating processes and tasks frees up your people to focus on resolving problems and improving customer communications, rather than performing routine tasks.

As one of my neighbourhood graffiti artists so eloquently put it, “6 feet apart but close 2 my ❤”.

OpenText Enterprise World 2020, Day 1

The last time that I was on a plane was mid-February, when I attended the OpenText analyst summit in Boston. For those of us even paying attention to the virus that was sweeping through China and spreading to other Asian countries, it seemed like a faraway problem that wasn’t going to impact us. How wrong we were. Eight months later, many businesses have completely changed their products, their markets and their workforce, much of this with the aid of technology that automates processes and supply chains, and enables remote work.

By early April, OpenText had already moved their European regional conference online, and this week, I’m attending the virtual version of their annual OpenText World conference, in a completely different world than in February. Like many other vendors that I cover (and whose virtual conferences I’ve attended over the past several months), OpenText has a broad portfolio of enterprise automation products and the opportunity to make gains during this time. The conference opened with a keynote from CEO Mark Barrenechea, “Time to Rethink Business”, highlighting that we are undergoing a fundamental technological (and societal) disruption, and small adjustments to how businesses work aren’t going to cut it. Instead of the overused term “new normal”, Barrenechea spoke about a “new equilibrium”: our business models and work methods are settling into a stable state that is fundamentally different from what it was prior to 2020. I’ve presented about a lot of these same issues, but I really like his equilibrium analogy, with the idea that the landscape has changed and our ball has rolled downhill to a new location.

He announced OpenText Cloud Edition (CE) 20.4, which includes five domain-oriented cloud platforms focused on content, business network, experience, security and development. All of these are based on the same basic platform and architecture, allowing them to be updated on a quarterly basis.

  • The Content Cloud provides the single source of truth across the organization (via information federation), enables collaboration, automates processes and provides information governance and security.
  • The Business Network Cloud deals directly with the management and automation of supply chains, which has increased in importance exponentially in these past several months of supply chain disruption. OpenText has used this time to expand the platform in terms of partners, API integrations and other capabilities. Although this is not my usual area of interest, it’s impossible to ignore the role of platforms such as the Business Network Cloud in making end-to-end processes more agile and resilient.
  • The Experience Cloud is their customer communications platform, including omnichannel customer engagement tools and AI-driven insights.
  • The Security and Protection Cloud provides a collection of security-related capabilities, from backup to endpoint protection to digital forensics. This is another product class that has become incredibly important with so many organizations shifting to work from home, since protecting information and transactions is critical regardless of where the worker happens to be working.
  • The Developer Cloud is a new bundling/labelling of their software development (including low-code) tools and APIs, with 32 services across eight groupings including capture, storage, analysis, automation, search, integration, communication and security. The OpenText products that I’ve covered in the past mostly live here: process automation, low-code application development, and case management.

Barrenechea finished with their Voyager program, which appears to be an enthusiastic rebranding of their training programs.

Next up was a prerecorded AppWorks strategy and roadmap session with Nic Carter and Nick King from OpenText product management. It was fortunate that this was prerecorded (as much as I feel it decreases the energy of the presentation and doesn’t allow for live Q&A) since the keynote ran overtime, and the AppWorks session could be started when I was ready — which raises the question of why it was “scheduled” to start at a specific time. I do like the fact that OpenText puts the presentation slides in the broadcast platform with the session, so if I miss something it’s easy to skip back a slide or two on my local copy.

Process Suite (based on the Cordys-heritage product) was rolled into the AppWorks branding starting in 2018, and the platform and UI consolidated with the low-code environment between then and now. The sweet spot for their low-code process-centric applications is around case management, such as service requests, although the process engine is capable of supporting a wide range of application styles and developer skill levels.

They walked through a number of developer and end-user feature enhancements in the 20.4 version, then covered new automation features. This includes enhanced content and Brava viewer integration, but more significantly, their RPA service. They’re not creating/acquiring their own RPA tool, or just focusing on one tool, but have created a service that enables connectors to any RPA product. Their first connector is for UiPath and they have more on the roadmap — a rollout very similar to what we saw at CamundaCon and Bizagi Catalyst a few weeks ago. By release 21.2 (mid-2021), they will have an open source RPA connector so that anyone can build a connector to their RPA of choice if it’s not provided directly by OpenText.
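
To make that connector concept concrete, here’s a minimal sketch of what a vendor-neutral RPA connector layer might look like, in Python. This is my own illustration of the pattern, not OpenText’s actual API: the interface, class and method names are all invented.

```python
from abc import ABC, abstractmethod
import uuid

class RpaConnector(ABC):
    """Vendor-neutral contract: the process engine codes against this
    interface, and each RPA product gets its own implementation."""

    @abstractmethod
    def start_bot(self, bot_id: str, inputs: dict) -> str:
        """Queue a bot run and return a job handle for later polling."""

    @abstractmethod
    def get_result(self, job_handle: str) -> dict:
        """Return the bot's output parameters once the run completes."""

class InProcessConnector(RpaConnector):
    """Stand-in used here for illustration only; a real UiPath connector
    would call the UiPath Orchestrator REST API in these methods instead."""

    def __init__(self):
        self._jobs = {}

    def start_bot(self, bot_id: str, inputs: dict) -> str:
        handle = str(uuid.uuid4())
        # Pretend the bot ran instantly and echoed its inputs back.
        self._jobs[handle] = {"bot": bot_id, "outputs": inputs}
        return handle

    def get_result(self, job_handle: str) -> dict:
        return self._jobs[job_handle]["outputs"]

# The process engine only ever sees the RpaConnector interface:
connector: RpaConnector = InProcessConnector()
job = connector.start_bot("invoice-entry", {"invoice_id": "12345"})
print(connector.get_result(job))
```

The point of the abstraction is that swapping UiPath for another RPA product means writing a new implementation class, not changing the processes that call it.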

There are some AppWorks demos and discussion later, but they’re in the “Demos On Demand” category so I’m not sure if they’re live or “live”.

I checked out the content services keynote with Stephen Ludlow, SVP of product management; there’s a lot of overlap between their content, process, AI and appdev messages, so it’s important to see how they approach it from all directions. His message is that content and process are tightly linked in terms of their business usage (even if on different systems), and business users should be able to see content in the context of business processes. They integrate with and complement a number of mainstream platforms, including Microsoft Office/Teams, SAP, Salesforce and SuccessFactors. They also provide digital signature capabilities, allowing an external party to digitally sign a document that is stored in an OpenText content server.

An interesting industry event that was not discussed was the recent acquisition of Alfresco by Hyland. Alfresco bragged about the Documentum customers that they were moving onto Alfresco on AWS, and now OpenText may be trying to reclaim some of that market by offering support services for Alfresco customers and providing an OpenText-branded version of Alfresco Community Edition, unfortunately via a private fork. In the 2019 Forrester Wave for ECM, OpenText takes the lead spot, Microsoft and Hyland are some ways back but still in the leaders category, and Alfresco is right on the border between leaders and strong performers. Clearly, Hyland believes that acquiring Alfresco will allow it to push further up into OpenText’s territory, and OpenText is coming out swinging.

I’m finding it a bit difficult to navigate the agenda: there’s no way to browse the entire agenda by time, so you need to know which product category you’re interested in before you can see what’s coming up in a time-based format. That’s probably fine for customers who only have one or two of their products and would just search in those areas, but for someone like me who is interested in a broader swath of topics, I’m sure that I’m missing some things.

That’s it for me for today, although I may try to tune in later for Poppy Crum’s keynote. I’ll be back tomorrow for Muhi Majzoub’s innovation keynote and a few other sessions.

IBM Managed Services voted off the island

IBM made an interesting announcement earlier this month: they are spinning off their Managed Infrastructure Services (that is, when they run your old-school data center on their premises) to a separate company, leaving the hybrid cloud services in the mothership. This will let them really call themselves a cloud company; to quote the press release, “IBM will move from a company with more than half of its revenues in services to one with a majority in high-value cloud software and solutions”. Also, and this is only my guess, it opens the door to completely selling off the managed infrastructure services NewCo.

Hat tip to Bloor Research for posting about this, and for their comment that IBM’s hybrid cloud “isn’t quite cloud”.

Disruption in 2020: now down on the farm

I’ve been writing and presenting a lot over the past several months about the disruption that the pandemic has brought to many aspects of business, and how successful businesses are harnessing technology to respond to that disruption. In short, the ones that use the technology to become more flexible are much more likely to be coming out of this as a success.

I usually work with financial services clients on their technology strategy and execution, but this story about how farmers are embracing Zoom calls and much more to make their operations work better caught my eye. To quote the article, “the pandemic has sped up the adoption of technology in the agricultural industry as farmers spend more time with digital tools and programs and less time having face-to-face meetings”, which is exactly what’s happening in many other industries. If you thought small family farms were low-tech, think again: the farmer interviewed here uses his iPhone to monitor conditions in his fields, market his products, and track weather predictions from wherever he is. And starting this year, due to social distancing protocols, he orders his seed and supplies online, and uses Zoom to talk to experts about problems that arise during the growing season.

He thinks it’s working out well, which probably means that he’ll continue to work this way in the future. This is a theme that I’m hearing in many other types of businesses: once they’ve had to use technology and reorganize their business to accommodate the current disruption, they’re probably not going back to the old way of doing things.

There is definitely a big lesson here for businesses of any size: failure to innovate is going to cause failure, period.

My writing on the Trisotech blog: better analysis and design of processes

I’ve been writing some guest posts over on the Trisotech blog, but haven’t mentioned them here for a while. Here’s a recap of what I’ve posted over there the past few months:

In May, I wrote about designing loosely-coupled processes to reduce fragility. I had previously written about Conway’s Law and the problems of functional silos within an organization, but then the pandemic disruption hit and I wanted to highlight how we can avoid the sorts of cascading supply chain process failures that we saw early on. A big part of this is not having tightly-coupled end-to-end processes, but rather separating out different parts of the process so that they can be changed and scaled independently of each other while still forming part of an overall process.

In July, I helped to organize the DecisionCAMP conference, and wrote about the BPMN-CMMN-DMN “triple crown”: not just the mechanics of how the three standards work together, but why you would choose one over the other in a specific design situation. There are particular challenges with the skill sets of business analysts who are expected to model organizations using these standards, since they will tend to over-use the one that they’re most familiar with regardless of its suitability to the task at hand. There are also challenges for the understandability of multi-model representations, which require a business operations reader to see how this BPMN diagram, that CMMN model and this other DMN definition all fit together.

In August, I focused on better process analysis using analytical techniques, namely process mining, and gave a quick intro to process mining for those who haven’t seen it in action. For several months now, we haven’t been able to do much “as is” business analysis through job shadowing and interviews, and I put forward the idea that this is the time for business analysts to start learning about process mining as another tool in their kit of analysis techniques.

In early September, I wrote about another problem that can arise due to the current trend towards distributed (work from home) processes: business email compromise fraud, and how to foil it with better processes. I don’t usually write about cybersecurity topics, but I have my own in-home specialist, and this topic overlapped nicely with my process focus and the need for different types of compliance checks to be built in.

Then, at the end of September, I finished up the latest run of posts with one about the process mining research that I had seen at the (virtual) academic BPM 2020 conference: mining processes out of unstructured emails, and queue mining to see the impact of queue congestion on processes.

Recently, I gave a keynote on aligning intelligent automation with incentives and business outcomes at the Bizagi Catalyst virtual conference, and I’ve been putting together some more detailed thoughts on that topic for this month’s post. Stay tuned.

Disclosure: Trisotech is my customer, and I am compensated for writing posts for publication on their site. However, they have no editorial control or input into the topics that I write about there, and no input into what I write here on my own blog.

Closing comments from Bizagi Catalyst 2020

By the time we got to day 3 of the virtual Bizagi Catalyst 2020, Bizagi had given up on their event streaming platform and just published all of the pre-recorded presentations for on-demand viewing. We were supposed to do a live wrap-up at the end of the day with Rob Koplowitz of Forrester Research, Bizagi CEO Gustavo Gómez and myself, moderated by Bizagi’s Senior Director of Product Marketing Rachel Brennan, so we went ahead and recorded that yesterday. It’s now up on the on-demand page; check it out:

This was my first time speaking at — or attending! — Bizagi Catalyst, and I’m looking forward to more of them in the future. Hopefully somewhere more exciting than my own home office.

Hype Cycle: better as a rear-view mirror than a look ahead

Interesting analysis and visualization of 25 years of Gartner hype cycles by Mark Mine, Director of the Technology Innovation Group at the Walt Disney Studios:

As Cory Doctorow pointed out in his post (where I first saw this), “His key insight is that while the Gartner Hype Cycle isn’t much of a predictive tool, it’s a fantastic historical record: looking back on it sheds a lot of insight on how we felt about technology and what those feelings meant for each technology’s future.”

Keep this in mind when you’re looking at the next hype cycle: although Gartner may be intending to predict (and even drive) the future of technology, they’re not all that accurate. However, the history of the data is a fascinating look into technological culture.

Bizagi Catalyst 2020 Day 2 keynotes and hackathon

With a quick nod to Ada Lovelace Day (which was yesterday) and women in technology, Bizagi’s Catalyst virtual conference kicked off today with a presentation by Rachel Brennan, Senior Director of Product Marketing, and Marlando Rhule, Professional Services Director, on some of the new industry accelerators available from Bizagi. The first of these was an onboarding and KYC (know your client) process, including verification and risk assessment of both business and individual clients. The second was a permit lifecycle management process, specifically for building (and related) permits for municipal and state governments; it orchestrates communications between multiple applications for zoning and inspections, gathers information and approvals, generates letters and permits, and drives the overall process.

Coming soon, they will be releasing the APQC Process Classification Framework for Bizagi Modeler: the APQC frameworks are a good source of pre-built processes for specific industries as well as cross-industry frameworks.

Rachel also announced the Bizagi hackathon, which runs from October 19 to November 14. From the hackathon website:

It can be a new and innovative Widget to improve Bizagi’s forms, a new connector that extends Bizagi’s capabilities to connect with external systems or an experience-centric process using Bizagi Sites and Stakeholder concepts.

As with yesterday, the platform was pretty unstable, but eventually the second session started, with Luigi Mule of Blue Prism and their customer Royston Clark from Old Mutual, a financial services group in African and Asian markets. As I mentioned yesterday (and last week at another virtual conference), BPM vendors and RPA vendors are finally learning how to cooperate rather than position themselves as competitors: BPM orchestrates processes, and invokes RPA bots to perform tasks as steps in the process. Eventually, many of the bots will be replaced with proper APIs, but in the meantime, bots provide value through integrating with legacy systems that don’t have exposed APIs.

At Old Mutual, they have 170 bots, 70% of which are integrated in a Bizagi process. Since they started with Blue Prism, they have automated the equivalent of eight million minutes of worker time — over 133,000 hours, or roughly 64 working years — effectively giving that time back to the business for more value-added activities. The combination of Bizagi and Blue Prism has given them a huge increase in agility, making them able to change and automate processes in a very short time frame.

Next up was supposed to be my keynote on aligning intelligent automation with incentives and business outcomes, but the broadcast platform failed quite spectacularly and Bizagi had to cancel the remainder of the day, including the planned live Q&A after my presentation (I’d like to imagine that I’m so popular that I broke the internet). Since we missed the Q&A, feel free to ask questions in the comments here, or on Twitter. You can see my slides below, and the keynote is recorded and available for replay.

I’ve been writing and presenting about aligning incentives with business processes for a long time, since I recognized that more collaborative and ad hoc processes needed to have vastly different metrics than our old-school productivity widget-counting. This was a good opportunity to revisit and update some of those ideas through the pandemic lens, since worker metrics and incentives have shifted quite a bit with work from home scenarios.

Assuming they have the event platform back online tomorrow, I’ll be back for a few of the sessions, and to catch up on some of today’s sessions that I missed.

Bizagi Catalyst 2020, Day 1

This week, I’m attending the virtual Bizagi Catalyst event, and I’ll be giving a short keynote and interactive discussion tomorrow. Today, the event kicked off with an address by CEO Gustavo Gómez on the impact of technology innovation, and the need for rapid response. This is a message that really resonates right now, as companies need to innovate and modernize, or they won’t make it through this current crisis. Supply chains are upside-down, workforces are disrupted, and this means that businesses need to change quickly to adapt. Gómez’s message was to examine your customer-facing processes in order to make them more responsive: eliminate unnecessary steps; postpone steps that don’t require customer interaction; and automate tasks. These three process design principles will improve your customer experience by reducing the time that customers spend waiting while they are trying to complete a transaction, and will also improve the efficiency and accuracy of your processes.

He had the same message as I’ve had for several months: don’t stand still, but use this disruption to innovate. The success of companies is now based on their ability to change, not on their success at repetition: I’m paraphrasing a quote that he gave, and I can’t recall the original source although it’s likely Bill Drayton, who said “change begets change as much as repetition reinforces repetition”.

I unfortunately missed quite a bit of the following session, by Mata Veleta of insurance provider SCOR, due to a glitchy broadcast platform. I did see the part of her presentation on how Bizagi supports them on their transformation journey, with the digitalization of a claims assessment application that went from design to go-live in six weeks, during the pandemic — very impressive. They are embracing the motto “think big, start small, move fast”, and making the agile approach a mindset across the business in addition to an application development principle. They’re building another new application for medical underwriting, and have many others under consideration now that they see how quickly they can roll things out.

The broadcast platform then fell over completely, and I missed the product roadmap session; I’m not sure if Bizagi should be happy that they had so many attendees that they broke the platform, or furious with the platform vendor for offering something that they couldn’t deliver. The “all-singing, all-dancing” platforms look nice when you see the demo, but they may not be scalable enough.

I went back later in the day and watched the roadmap session replay, with Ed Gower, VP Solutions Consulting, and Andrea Dominguez, Product Manager. Their roadmap has a few guiding themes: intelligent automation orchestration primarily through improved connectors to other automation components including RPA; governance to provide visibility into this orchestration; and a refreshed user experience on all devices. Successful low-code is really about what you can integrate with, so the focus on connectors isn’t a big surprise. They have a new connector with ABBYY for capture, which provides best-of-breed classification and extraction from documents. They also have a Microsoft Cognitive Services Connector for adding natural language processing to Bizagi applications, including features such as sentiment analysis. There are some new features coming up in the Bizagi Modeler (in December), including value stream visualizations.

The session by Tom Spolar and Tyler Rudkin of HSA Webster Bank was very good: a case study on how they use Bizagi for their low-code development requirements. They stated that they use another product for the heavy-duty integration applications, which means that Bizagi is used as true no/low-code as well as their collaborative BPMN modeling environment. They shared a lot of best practices, including what they do and don’t do with Bizagi: some types of projects are just considered a poor fit for the platform, which is a refreshing attitude when most organizations get locked into a Maslow’s hammer cognitive bias. They’ve had measurable results: several live deployments, the creation of reusable BPMN capabilities, and reduced case duration.

The final session of the day was a sneak peek at upcoming Bizagi capabilities with Kevin Guerrero, Technical Marketing Manager, and Francisco Rodriguez, Connectors Product Manager. Two of the four items that they covered were RPA-related, including integration with both UiPath and Automation Anywhere. As I saw at the CamundaCon conference last week, BPM vendors are realizing that integration with the mainstream RPA platforms is important for task automation/assistance, even if the RPA bots may eventually be replaced with APIs. Bizagi will be able to trigger UiPath attended bots on the user’s desktop, and start bots from the Bizagi Work Portal to exchange data. We saw a demo of how this is created in Bizagi Studio, including graphical mapping of input/output parameters with the bot, then what it looks like in the user runtime environment. They also discussed their upcoming integration with the cloud-based Automation Anywhere Enterprise A2019, calling cloud-based bots from Bizagi.

Moving on from RPA, they showed their connector with Microsoft Cognitive Services Form Recognizer, allowing for extraction of text and data from scanned forms if you’re using an Azure and Cognitive Services environment. There are a number of pre-defined standard forms, but you can also train Form Recognizer if you have customized versions of these forms, or even new forms altogether. They finished up with their new SAP Cloud Connector, which works with S/4HANA. We saw a demo of this, with the SAP connection being set up directly in Bizagi Studio. This is similar to their existing SAP connector, but with a direct connection to SAP Cloud.

I’ll be back for some of the sessions tomorrow, but since I have a keynote and interactive Q&A, I may not be blogging much.

Disclosure: I am being compensated for my keynote presentation, but not for anything that I blog here. These are my own opinions, as always.

CamundaCon 2020.2 Day 2: roadmap, business-driven DMN and ethical algorithms

I split off the first part of CamundaCon day 2 since it was getting a bit long: I had a briefing with Daniel Meyer earlier in the week on the new RPA integration, and had a lot of thoughts on that already. I rejoined for Camunda VP of Product Management Rick Weinberg’s roadmap presentation, which covered what’s coming in 2021. If you’re a Camunda customer, or thinking about becoming one, you should check out the replay of his session if you missed it. Expect to see updates to decision automation, developer experience, process monitoring and interoperability.

I tuned in to the business architecture track for a presentation by David Ibl, Enterprise Architect at LV 1871 (a German insurance company), on how they enabled their business specialists to perform decision model simulation and test case definition using their own DMN Manager, based on the Camunda modeler toolkit. Their business people were already using BPMN for modeling processes, but were modeling business decisions as part of the process, and needed to externalize the rules from the processes in order to simplify the processes. This was initially done by moving the decisions to code, then calling that from within the process, but that made the decisions much less transparent to the business. Now, the business specialists model both BPMN and DMN in Signavio, and the models are committed to git; they are then pulled from git both for deployment and for testing and simulation directly by the business people. You can read a much better description of it written by David a few months ago. A good example (and demo) of how business people can model, test and simulate their own decisions as well as processes. And, since they’re committed to open source, you can find the code for it on github.
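
LV 1871’s DMN Manager is their own tooling, but the evaluate-and-assert loop at the heart of decision testing is easy to picture against Camunda’s standard REST API. Here’s a minimal sketch in Python, assuming a local Camunda 7 engine with a deployed decision; the decision key, input and output names are all made up for illustration:

```python
import requests

CAMUNDA = "http://localhost:8080/engine-rest"  # assumed local engine

def evaluate_decision(key: str, variables: dict) -> list:
    """Evaluate a deployed DMN decision by key; Camunda returns one
    dict of output variables per matched rule."""
    payload = {"variables": {k: {"value": v} for k, v in variables.items()}}
    resp = requests.post(
        f"{CAMUNDA}/decision-definition/key/{key}/evaluate", json=payload
    )
    resp.raise_for_status()
    return resp.json()

# A test case as a business specialist might define one: given these
# inputs, expect this output (decision key and field names invented).
result = evaluate_decision("discount", {"orderSize": 12, "customerType": "gold"})
assert result[0]["discountRate"]["value"] == 0.15
```

A batch of these given/expect pairs, pulled from git alongside the model, is essentially what a business-facing test and simulation tool runs under the hood.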

I also attended a session by Omid Tansaz of Nexxbiz, a Camunda consulting services partner, on their insurance process monitoring capability that allows systems across the entire end-to-end chain of insurance processes to be monitored in a consolidated fashion. This includes broker systems, front- and back-office systems within the insurer, as well as microservices. They were already using Camunda’s BPM engine, and started using Optimize for process visualization since Optimize 3.0 can include external event sources (from all of the other systems in the end-to-end process) as well as the Camunda BPM processes. This is one of the first case studies of the external event capability in Optimize, since that was only released in April, and it shows the potential for having a consolidated view across multiple systems: not just visibility, but compliance auditing, bottleneck analysis, and real-time issue prevention.
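
For context on the mechanics: Optimize 3.0’s external events arrive through a batch ingestion REST endpoint in CloudEvents format. The sketch below shows roughly what sending one event from an external broker system might look like; the endpoint path, token handling and all field values here are my assumptions from the docs of that era, not Nexxbiz’s actual implementation:

```python
import requests
from datetime import datetime, timezone

OPTIMIZE = "http://localhost:8090"   # assumed Optimize instance
TOKEN = "my-ingestion-token"         # the configured ingestion access token

# One CloudEvents-format event from an external system; "traceid"
# correlates events belonging to one process instance, and "group"
# identifies the source system for event mapping in Optimize.
event = {
    "specversion": "1.0",
    "id": "broker-7361",
    "source": "broker-portal",
    "type": "ApplicationSubmitted",
    "time": datetime.now(timezone.utc).isoformat(),
    "traceid": "policy-98765",
    "group": "broker",
    "data": {"product": "auto", "channel": "web"},
}

resp = requests.post(
    f"{OPTIMIZE}/api/ingestion/event/batch",
    json=[event],
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
```

The shared trace id is what lets Optimize stitch broker, insurer and microservice events into a single end-to-end process view.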

The conference closed with a keynote by Michael Kearns from the University of Pennsylvania on the science of socially-aware algorithm design. Ethical algorithms (the topic of his recent book written with Aaron Roth) are not just an abstract concept, but impact businesses from risk mitigation through to implementation patterns. There are many documented cases of algorithmic decision-making showing definite biases, and instead of punting to legal and regulatory controls, their research looks at technical solutions to the problem in the form of better algorithms. This is a non-trivial issue, since algorithms often have outcomes that are difficult to predict, especially when machine learning is involved. This is exactly why software testing is often so bad (just to inject my own opinion): developers can’t or don’t consider the entire envelope of possible outcomes, and often just test the “happy path” and a few variants.

Kearns’ research proposes embedding social values in algorithms: privacy, fairness, accountability, interpretability and morality. This requires a definition of what these social values mean in precise mathematical terms. There’s already been some amount of work on privacy by design, spearheaded by the former Ontario Information and Privacy Commissioner Ann Cavoukian, since privacy is one of the better-understood algorithmic concepts.

Kearns walked us through issues around algorithmic privacy, including the idea that “anonymized” data often isn’t actually anonymized, since the techniques used for this assume that there is only a single source of data. For example, redacting data within a data set can make it anonymous if that’s the only data set that you have; as soon as other data sets exist that contain one or more of the same unredacted data values, you can start to correlate the data sets and de-anonymize the data. In short, anonymization doesn’t work, in general.
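
As a toy illustration of that correlation attack in Python: a “redacted” data set keeps only quasi-identifiers, but joining it with a second data set that shares those fields re-identifies the records. All of the data here is invented:

```python
import pandas as pd

# "Anonymized" release: names removed, quasi-identifiers kept.
medical = pd.DataFrame({
    "zip": ["10001", "10001", "94110"],
    "birth_year": [1975, 1982, 1975],
    "diagnosis": ["asthma", "diabetes", "flu"],
})

# A second, public data set (think voter rolls) sharing those fields.
voters = pd.DataFrame({
    "name": ["A. Jones", "B. Smith", "C. Wu"],
    "zip": ["10001", "10001", "94110"],
    "birth_year": [1975, 1982, 1975],
})

# The join re-attaches names to diagnoses: anonymization defeated.
print(medical.merge(voters, on=["zip", "birth_year"]))
```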

He then looked at “differential privacy”, which compares the results of an algorithm with and without a specific person’s data: if an observer can’t discern between the two outcomes, then the algorithm is preserving the privacy of that person’s data. Differential privacy can be implemented by adding a small amount of random noise to each data point, which makes it impossible to figure out the contribution of any specific data point, and the noise contributions will cancel out in the results when a large number of data points are analyzed. Problems can occur, however, with data points that have very small values, which may be swamped by the size of the noise.
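
The standard way to add that noise is the Laplace mechanism. Here’s a minimal sketch for a differentially private count query, with an arbitrarily chosen epsilon for illustration:

```python
import numpy as np

rng = np.random.default_rng()

def dp_count(data, epsilon: float) -> float:
    """Count query with Laplace noise. Adding or removing one person's
    record changes a count by at most 1 (sensitivity = 1), so noise
    drawn with scale 1/epsilon gives epsilon-differential privacy."""
    return len(data) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 51, 29, 62, 45]
print(dp_count(ages, epsilon=0.5))  # true count is 5, plus noise
# Over many data points the noise averages out, as noted above, but a
# small-valued statistic like this one can be swamped by it.
```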

He moved on to look at algorithmic fairness, which is trickier: there’s no agreed-upon definition of fairness, and we’re only just beginning to understand tradeoffs, e.g., between race and gender fairness, or between fairness and accuracy. He had a great example of college admissions based on SAT and GPA scores, with two different data sets: one for more financially-advantaged students, and the other for students from modest financial situations. The important thing to note is that the family financial background of a student has a strong correlation with race, and in the US, as in other countries, using race as an explicit differentiator is not allowed in many decisions due to “fairness”. However, it’s not really fair if there are inherent advantages to being in one data set over the other, since those data points are artificially elevated.

There was a question at the end about the role of open source in these algorithms: Kearns mentioned OpenDP, an open source toolset for implementing differential privacy, and AI Fairness 360, an open source toolkit for finding and mitigating discrimination and bias in machine learning models. He also discussed some techniques for determining if your algorithms adhere to both privacy and fairness requirements, and the importance of auditing algorithmic results on an ongoing basis.