bpmNEXT 2018: All DMN all the time, with Trisotech, Bruce Silver Associates and Red Hat

First session of the afternoon on the first day of bpmNEXT 2018, and this entire section is on DMN (Decision Model and Notation) and the requirements for decision automation based on it.

Decision as a Service (DaaS): The DMN Platform Revolution, Trisotech

Denis Gagne of Trisotech, who knows as much about DMN and other related standards as anyone around, started off the session with his ideas on the need for decision automation, driven by requirements such as GDPR. He walked through their suite of decision-related products that can be used to create decision services to be consumed by other applications, as well as their conformance to the DMN standard. His demo showed a decision model for determining the best price to offer a rental vehicle customer, and walked through the capabilities of their platform with this model: DMN style check, import/export, execution, team collaboration, and governance through versioning. He also showed how decision models can be reused, so that elements from one model can be used in another model. Then, he showed how to take portions of the model and define them as a service using a visual wrapper, much like a subprocess wrapper visualization in BPMN, where the relationship lines that cross the service boundary become the inputs and outputs to the service. Cool. The service can then be deployed as an executable service on (in his demo) the Red Hat platform; from there, you can test its execution from a generated HTML form, generate the REST/OpenAPI interface code, run predefined test cases based on the DMN TCK, promote the service from test to production, and publish it to an API publisher platform such as WSO2 for public consumption. The execution environment includes debugging and audit logs, providing traceability on the decision services.
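For a sense of what consuming one of these generated decision services looks like, here's a minimal sketch of calling a deployed DMN decision service over REST. The endpoint URL, payload field names and response shape are my own placeholders for illustration, not Trisotech's actual generated interface; a real deployment would describe these in its generated OpenAPI definition.

```python
# Minimal sketch: invoke a deployed DMN decision service over REST.
# The URL, input names and response fields below are hypothetical placeholders.
import requests

DECISION_SERVICE_URL = "https://decisions.example.com/rental-pricing/v1/evaluate"

def best_rental_price(customer_tier: str, vehicle_class: str, rental_days: int) -> dict:
    payload = {
        "customerTier": customer_tier,   # decision inputs as defined in the DMN model
        "vehicleClass": vehicle_class,
        "rentalDays": rental_days,
    }
    response = requests.post(DECISION_SERVICE_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()  # e.g. {"offeredPrice": 57.0, "discountApplied": True}

print(best_rental_price("gold", "compact", 3))
```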

Timing the Stock Market with DMN, Bruce Silver Associates

Bruce Silver, also a huge contributor to the BPMN and DMN standards, and author of the BPMN Method & Style books and now the DMN M&S, presented an application for buying a stock at the right time based on price patterns. For investors who time the market based on price, the best way to do this is to look at daily min/max trends and fit them to one of several base model types. Bruce figured that this could be done with a decision table applied to a manipulated version of the data, and automated this for a range of stocks using a one-year history, processing in Excel, and decision services in the Trisotech cloud. This is a practical example of using decision services in a low-code environment by non-programmers to do something useful. His demo showed us the decision model for doing this, then the data processing (smoothing) done in Excel. However, for an application that you want to run every day, you’re probably not going to want to do the manual import/export of data, so he showed how to automate/orchestrate this with Microsoft Flow, which can still use the Excel sheet for data manipulation but automate the data import, execute the decision service, and publish the results back to the same Excel file. Good demonstration of the democratization of creating decisioning applications through easy-to-use tools such as the graphical DMN modeler, Excel and Flow, highlighting that DMN is an execution language as well as a requirements language. Bruce has also just published a new book, the DMN Cookbook, co-authored with Edson Tirelli of Red Hat, on getting started with DMN business implementations using lightweight stateless decision services called via REST APIs.
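To make the idea concrete, here's a rough Python stand-in for the pipeline Bruce described: smooth the daily price series, then apply a decision-table-style classification to the smoothed trend. The smoothing window and pattern rules are invented for illustration and are not his actual model; in his demo the smoothing lived in Excel and the table ran as a DMN decision service orchestrated by Microsoft Flow.

```python
# Rough stand-in for the described pipeline: smooth daily prices, then apply
# decision-table-style rules. Window size and rules are invented placeholders.
from statistics import mean

def smooth(series, window=5):
    """Simple moving average over a daily price series."""
    return [mean(series[max(0, i - window + 1): i + 1]) for i in range(len(series))]

def classify_pattern(smoothed_lows, smoothed_highs):
    """Toy decision-table-style rules: each row is (condition, outcome)."""
    rising_lows = smoothed_lows[-1] > smoothed_lows[0]
    rising_highs = smoothed_highs[-1] > smoothed_highs[0]
    rules = [
        (rising_lows and rising_highs, "uptrend: candidate buy"),
        (not rising_lows and not rising_highs, "downtrend: avoid"),
    ]
    return next((outcome for condition, outcome in rules if condition), "sideways: wait")

daily_lows = [101, 102, 101, 103, 104, 105, 106]
daily_highs = [103, 104, 104, 106, 107, 108, 110]
print(classify_pattern(smooth(daily_lows), smooth(daily_highs)))
```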

Smarter Contracts with DMN, Red Hat

Edson Tirelli of Red Hat, Bruce Silver’s co-author on the above-mentioned DMN Cookbook, finished this section of DMN presentations with a combination of blockchain and DMN, where DMN is used to define the business language for calculations within a smart contract. His demo showed a smart land registry case, specifically a transaction for selling a property involving a seller, a buyer and a settlement service, with the settlement calculations for taxes and insurance defined in DMN and the purchase executed using cryptocurrency. He mentioned Vanessa Bridge’s demo from earlier today, which showed using BPMN to define smart contract flows; this adds another dimension to the same problem, and there’s likely no reason why you wouldn’t use them all together in the right situation. Edson said that he was inspired, in part, by this post on smart contracts by Paul Lachance, in which Lachance said “a visual model such as a BPMN and/or DMN diagram could be used to generate the contract source code via a process-engine”. He used Ethereum for the blockchain smart contract and the Ether cryptocurrency, Trisotech for the DMN models, and Drools for the rules execution. All in all, not such a far-fetched idea.
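Conceptually, the demo's shape is something like the following sketch: a DMN settlement service computes the taxes and fees, and the resulting amount is submitted to an Ethereum smart contract. The service URL, contract address, ABI and settle() function here are all hypothetical placeholders; only the general web3.py calling pattern is standard.

```python
# Conceptual sketch: a DMN settlement service computes the amounts, an Ethereum
# smart contract executes the sale. All endpoints, addresses and the contract
# interface below are hypothetical placeholders.
import requests
from web3 import Web3

# 1. Ask the (hypothetical) DMN settlement service for taxes and insurance.
settlement = requests.post(
    "https://decisions.example.com/land-registry/settlement",
    json={"salePrice": 450_000, "region": "ON"},
    timeout=10,
).json()  # e.g. {"taxes": 6750.0, "insurance": 1200.0, "totalDue": 457950.0}

# 2. Pay the settlement total into the (hypothetical) land registry contract.
REGISTRY_ABI = [{
    "name": "settle", "type": "function", "stateMutability": "payable",
    "inputs": [{"name": "amountWei", "type": "uint256"}], "outputs": [],
}]
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # local Ethereum node
registry = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder address
    abi=REGISTRY_ABI,
)
total_wei = int(settlement["totalDue"] * 10**18)  # treat the total as ether for the sketch
tx_hash = registry.functions.settle(total_wei).transact({"from": w3.eth.accounts[0]})
print("settlement transaction:", tx_hash.hex())
```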

I’m still catching flak for suggesting the now-ubiquitous Ignite style for presentations here at bpmNEXT; my next lobbying effort will be around restricting the maximum number of words per slide. 🙂

bpmNEXT 2018: Here’s to the oddballs, with ConsenSys, XMPro and BPLogix

And we’re off with the demo sessions!

Secure, Private Decentralized Business Processes for Blockchains, ConsenSys

Vanessa Bridge of ConsenSys spoke about using BPMN diagrams to create smart contracts and other blockchain applications, while also including privacy, security and other necessary elements: essentially, using BPM to enable Ethereum-based smart contracts (rather than using blockchain as a ledger for BPM transactions and other BPM-blockchain scenarios that I’ve seen in the past). She demonstrated using Camunda BPM for a token sale application, and for a boardroom voting application. For each of the applications, she used BPMN to model the process, particularly the use of BPMN timers to track and control the smart contract process — something that’s not native to blockchain itself. Encryption and other steps were called as services from the BPMN diagram, and the results of each contract were stored in the blockchain. Good use of BPM and blockchain together in a less-expected manner.
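As an illustration of the BPM side of this, here's a minimal sketch of starting one of these process instances through Camunda's REST API, which is how an external application (or a blockchain event listener) might kick off the modeled flow. The process key and variables are hypothetical; the /process-definition/key/{key}/start endpoint and the variable format follow standard Camunda 7 REST conventions.

```python
# Minimal sketch: start a Camunda process instance via its REST API.
# The process key "token-sale" and the variables are hypothetical placeholders.
import requests

CAMUNDA = "http://localhost:8080/engine-rest"

def start_token_sale(buyer_address: str, token_amount: int) -> str:
    body = {
        "variables": {
            "buyerAddress": {"value": buyer_address, "type": "String"},
            "tokenAmount": {"value": token_amount, "type": "Integer"},
        }
    }
    resp = requests.post(
        f"{CAMUNDA}/process-definition/key/token-sale/start",
        json=body,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # process instance id

print(start_token_sale("0xabc...", 100))
```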

Turn IoT Technology into Operational Capability, XMPro

Pieter van Schalkwyk of XMPro looked at the challenges of operationalizing IoT, with a virtual flood of data from sensors and machines that needs to be integrated into standard business workflows. This involves turning big data into smart data via stream processing before passing it on to the business processes in order to achieve business outcomes. XMPro provides smart listeners and agents that connect the data to the business processes, forming the glue between realtime data and resultant actions. His demo showed data being collected from a fan on a cooling tower, bringing in data from the sensor logs and comparing it to manufacturer’s information and historical information in order to predict if the fan is likely to fail, create a maintenance work order and even optimize maintenance schedules. They can integrate with a large library of action agents, including their own BPM platform or other communication and collaboration platforms such as Slack. They provide a lot of control over their listener agents, which can be used for any type of big data, not just industrial device data, and can integrate complex prediction models for failure likelihood and remaining useful life. He showed their BPM platform being used downstream from the analytical processing, where the internet of things can interact with the internet of people to make decisions that require additional context, such as 3D drawings. Great example of how to filter through hundreds of millions of data points in streaming mode to find the few combinations that require action to be taken. He threw out a comment at the end that this could be used for non-industrial applications, possibly for GDPR data, which definitely made me think about applying content analytics as content is captured in order to pick out which events might trigger a downstream process, such as a regulatory process.
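As a toy illustration of the listener-and-agent pattern he described, the sketch below compares streaming sensor readings against an assumed manufacturer limit plus a rolling history, and emits a maintenance action when failure looks likely. The thresholds, field names and prediction rule are invented for illustration, not XMPro's actual analytics.

```python
# Toy listener: compare streaming readings to a limit and a rolling history,
# then hand off to a downstream action agent. All values are invented.
from collections import deque
from statistics import mean

MAX_VIBRATION_MM_S = 7.1   # assumed manufacturer limit for the fan
WINDOW = 50                # rolling history size

history = deque(maxlen=WINDOW)

def on_reading(reading: dict, create_work_order):
    """Called for each incoming reading, e.g. {"vibration": 6.8, "temp_c": 41}."""
    history.append(reading["vibration"])
    trending_up = len(history) == WINDOW and history[-1] > mean(history) * 1.2
    if reading["vibration"] > MAX_VIBRATION_MM_S or trending_up:
        create_work_order({
            "asset": "cooling-tower-fan-07",
            "reason": "vibration exceeds limit or trending upward",
            "latest_reading": reading,
        })

# Usage: feed readings from your stream and plug in the downstream action agent.
on_reading({"vibration": 7.4, "temp_c": 44}, create_work_order=print)
```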

Business Milestones as Configuration, BPLogix

Scott Menter and Joby O’Brien of BPLogix finished up this section on new BPM ideas with their approach to goal orientation in BPM, which is milestone-based and requires understanding the current state of a case before deciding how to move forward. Their Process Director BPM platform is not BPMN-based, but rather an event-based platform where events are used to determine milestones and drive the process forward: much more of a case management view, usually visualized as a project management-style Gantt chart rather than a flow model. They demonstrated the concept of app events, where changes in state of any of a number of things — form fields, activities, document attachments, etc. — can record a journal entry that uses business semantics and process instance data. This allows events from different parts of the platform to be brought together in a single case journal that shows the significant activity within the case, but also to be triggers for other events such as determining case completion. The journal can be configured to show only certain types of events for specific users — for example, if they’re only interested in events related to outgoing correspondence — and also becomes a case collaboration discussion. Users can select events within the journal and add their own notes, such as taking responsibility for a meeting request. They also showed how machine learning and rules can be used for dynamically changing data; although shown as interactions between fields on forms, this can also be used to generate new app events. Good question from the audience on how to get customers to think about their work in terms of events rather than procedures; case management proponents will say that business people inherently think about events/state changes rather than process. Interesting representation of creating a selective journal based on business semantics rather than just logging everything and expecting users to wade through it for the interesting bits.
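Here's a small sketch of the app-events-as-journal idea: state changes from different parts of the platform are recorded as journal entries with business-language descriptions, can be filtered per user, and can drive milestone checks. The event names, fields and completion rule are invented for illustration, not Process Director's actual API.

```python
# Sketch of a case journal built from app events; names and rules are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class JournalEntry:
    event_type: str      # e.g. "correspondence.sent", "document.attached"
    description: str     # business-language description shown in the case journal
    data: dict = field(default_factory=dict)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class CaseJournal:
    def __init__(self):
        self.entries: list[JournalEntry] = []

    def record(self, event_type: str, description: str, **data):
        self.entries.append(JournalEntry(event_type, description, data))

    def view_for(self, interested_in: set[str]):
        """Selective view, e.g. a user who only cares about outgoing correspondence."""
        return [e for e in self.entries if e.event_type in interested_in]

    def milestone_reached(self, required_events: set[str]) -> bool:
        return required_events <= {e.event_type for e in self.entries}

journal = CaseJournal()
journal.record("document.attached", "Signed application received", filename="app.pdf")
journal.record("correspondence.sent", "Acknowledgement letter sent to client")
print(journal.view_for({"correspondence.sent"}))
print(journal.milestone_reached({"document.attached", "correspondence.sent"}))
```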

We’re off to lunch. I’m a bit out of practice at live-blogging, but hope that I captured some of the flavor of what’s going on here. Back with more this afternoon!

bpmNEXT 2018 kicks off: keynotes with @JimSinur and @NeilWD

It’s the first day of bpmNEXT, the conference for BPM visionaries and free thinkers to get together, share ideas, show their cool new stuff, meet new friends and get reacquainted with old ones. This is an opportunity for technologists (primarily senior technical people from BPM vendors) to give demos in a very structured format, but it’s not really a place for customers: more like the BPM Think Tanks of old. Organized by Bruce Silver and Nathaniel Palmer, themselves both long-time contributors to the industry, with content provided by a lot of people who are loud and proud about their technology.

That very structured format, in case you haven’t read about or attended bpmNEXT before, is a strictly limited Ignite-style presentation followed by a live demo. This limits the amount of time that presenters can spend showing slides and forces them to get to the good stuff.

You can see the agenda here, and we started out the first day with a few keynotes from industry thought leaders before getting to the demo presentations. I’ll cover those in this post, then do individual posts for each section of demos (usually three in a section). These will be rough notes since there’s a lot of information that goes by quickly; you’ll be able to see video of all of the sessions, most likely on the bpmNEXT YouTube channel (where you can also see previous years’ sessions).

The Future of Process in Digital Business, Jim Sinur

Jim Sinur, a long-time Gartner analyst who is now with Aragon Research, spoke about trends in digital businesses. Most of this was a plug for Aragon and their research reports aimed at customer organizations, which isn’t a great fit for this audience: most of the people in the room are well-versed in these technologies and how to apply them in real life.

I’d really like to see more conversational sessions rather than presentations for the keynotes, or at least content that is more directly relevant to the audience.

A New Architecture for Automation, Neil Ward-Dutton

Neil Ward-Dutton, who heads up MWD Advisors, presented a distillation of the conversations that they’re having with customer organizations, starting with the difficult choices that they have to make in terms of which technologies to choose: for example, when RPA vendors tell them that they don’t need BPM any more. He went through some insights into the technologies that are impacting CIOs’ strategic decisions — no surprises there — then presented a schematic model for how work happens in organizations as a basis for understanding how different technologies impact different parts of their work. The framework categorizes work into exploratory, transactional and programmatic, and he walked through what each of those types defines up front, and how the technologies are used within each. Good view of how to help organizations think about their work and how to develop automation strategies that address different work styles and applications.

Although a lot of his presentation was aimed at a general audience that could include customers, Neil finished up with a bit on next moves for vendors and technologists as the technology market changes: there are a lot of mergers and acquisitions going on, and older technologies are being replaced with newer ones in specific instances. He had some recommendations about rearchitecting products and adding value, shifting from one-size-fits-all products to collections of independent runtime services in order to support cloud architectures (especially elastic computing requirements) and provide more flexibility in product offerings.

Obligatory futurist keynote at AIIM18 with @MikeWalsh

We’re at the final day of the AIIM 2018 conference, and the morning keynote is with Mike Walsh, talking about business transformation and what you need to think about as you’re moving forward. He noted that businesses don’t need to worry about millennials, they need to worry about 8-year-olds: these days, 90% of all 2-year-olds (in the US) know how to use a smart device, making them the truly born-digital generation. What will they expect from the companies of the future?

Machine learning allows us to customize experiences for every user and every consumer, based on analysis of content and data. Consumers will expect organizations to predict their needs before they can even voice those needs themselves. In order to do that, organizations need to become algorithmic businesses: be business machines rather than have business models. Voice interaction is becoming ubiquitous, with smart devices listening to us most (all) of the time and using that to gather more data on us. Face recognition will become your de facto password, which is great if you’re unlocking your iPhone X, but maybe not so great if you don’t like public surveillance that can track your every move. Apps are becoming nagging persuaders, telling us to move more, drink more water, or attend this morning’s keynote. Like migratory birds that can sense magnetic north, we are living in a soup of smart data that guides us. Those persuasive recommendations become better at predicting our needs, and more personalized.

Although he started by saying that we don’t need to worry about millennials, 20 minutes into his presentation Walsh was admonishing us to let the youngest members of our team “do stuff rather than just get coffee”. It’s been a while since I worked in a regular office, but do people still have younger people get coffee for them?

He pointed out that rigid processes are not good, and that we need to be performance-driven rather than process-driven: making good decisions in ambiguous conditions in order to solve new problems for customers. Find people who are energized by unknowns to drive your innovation — this advice is definitely more important than considering the age of the person involved. Bring people together in the physical realm (no more work from home) if you want the ideas to spark. Take a look at your corporate culture, and gather data about how your own teams work in order to understand how employees use information and work with each other. If possible, use data and AI as the input when designing new products for customers. He recommended a next action of quantifying what high performance looks like in your organization, then working with high performers to understand how they work and collaborate.

He discussed the myth of the simple relationship between automation and employment, and how automating a task does not, in general, put people out of work, but just changes what their job is. People working together with the automation make for more streamlined (automated) standard processes, with the people focused on the things that they’re best at: handling exceptions, building relationships, making complex decisions, and innovating through the lens of combining human complexity with computational thinking.

In summary, the new AI era means that digital leaders need to make data a strategic focus, get smart about decisions, and design work rather than doing it. Review decisions made in your organization, and decide which are best made using human insight, and which are better to automate — either way, these could become a competitive differentiator.

Transforming compliance at Farmers Insurance with @rafael_moscatel

In the last Thursday breakout of AIIM 2018, I attended a session on initiatives within the compliance department at Farmers Insurance to modernize their records management, presented by Rafael Moscatel. Their technology includes IGS’ Virgo to manage retention schedules, Legal Hold Pro for legal holds and custodian compliance, and Box for content governance. They started in 2015 with an assessment and plan, then built a new team with the appropriate expertise going forward, then updated their policy and governance, and finally brought in the three new key technology components in 2017. For an insurance company, that’s pretty fast.

Their retention policy is based on 12 big buckets, which are primarily aligned with business functions, making it easy for employees to understand what they are from a real-world standpoint. Legal Hold Pro replaced an old customized SharePoint system, and works together with Box Governance for e-discovery. He went through a lot of the details of how the technologies work together and what they’re doing with them, but the key takeaway for me is that an insurance company — what I know through a lot of experience to be an extremely conservative industry that’s struggling to transform itself — is realizing that it needs to shake things up in terms of how compliance for digital records is managed in order to move forward into the future. He wrapped up with some great comments on how to work with the business people, especially the executives, to bring programs like this to fruition.

Great talk by a knowledgeable and well-spoken presenter; my end-of-the-day writing doesn’t do it justice.

Automation and digital transformation in the North Carolina Courts

Elizabeth Croom, Morgan Naleimaile and Gaynelle Knight from the North Carolina Courts led a breakout session on Thursday afternoon at AIIM 2018 on what they’ve done to move into the digital age. NC has a population of over 10 million, and the judicial administration is integrated throughout the state across all levels, serving 6,800 staff, 5,400 volunteers and 32,000 law enforcement officers as well as integrating and sharing information with other departments and agencies. New paper filings take up 4.3 miles of shelving each year, yet the move to electronic storage has to be done carefully to protect the sensitivity of the information contained within these documents. For the most part, the court records are public records unless they are for certain types of cases (e.g., juveniles), but PII such as social security numbers must be redacted in some of these documents: this just wasn’t happening, especially when documents are scanned outside the normal course of content management. The practical obscurity (and security) of paper documents was moving into the accessible environment of electronic files.
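As a trivial illustration of the kind of automated redaction rule involved (and emphatically not the courts' actual system), masking US social security number patterns in extracted document text might look like this:

```python
# Tiny illustrative sketch: mask SSN-shaped strings in extracted document text
# before the document is made publicly accessible.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssns(text: str) -> str:
    return SSN_PATTERN.sub("XXX-XX-XXXX", text)

print(redact_ssns("Defendant SSN 123-45-6789 appears on page 2."))
```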

They built their first version of an enterprise information management system, including infrastructure, taxonomy, metadata, automated capture and manual redaction. This storage-centric phase wasn’t enough: they also needed to address paper file destruction (due to space restrictions), document integrity and trustworthiness, automated redaction of PII, appropriate access to files, and findability. In moving along this journey, they started looking at declaring their digital files as records, and how that tied in with the state archives’ requirements, existing retention schedules and the logic for managing retention of records. There’s a great deal of manual quality control currently required for having the scanned documents approved as an official record that can replace the paper version, which didn’t sit well with the clerks who were doing their own scanning. It appears as if an incredible amount of effort is being focused on properly interpreting the retention schedule logic and trigger sources: fundamentally, the business rules that underlie the management of records.

Moving beyond scanning, they also have to consider intake of e-filed documents — digitally-created documents that are sent into the court system in electronic form — and the judicial branch case management applications, which need to consume any of the documents and have them readily available. They have some real success stories here: there’s an eCourts domestic violence protection order (DVPO) process where a victim can go directly to a DV advocate’s office and all filings (including a video affidavit) and the issue of the order are done electronically while the victim remains in the safety of the advocate’s office.

They have a lot of plans to address their go-forward records capture strategy, as well as some of the retention issues that might be resolved by back-scanning of microfilmed documents, where documents with different retention periods may be on the same roll of film. Interestingly, they wouldn’t say what their content management technology is, although it does sound like they’re assessing the feasibility of moving to a cloud solution.

Invasion of the bots: intelligent healthcare applications at @UnitedHealthGrp

Dan Abdul, VP of technology at UnitedHealth Group (a large US healthcare company) presented at AIIM 2018 on driving intelligent information in US healthcare, and how a variety of AI and machine learning technologies are adding to that: bots that answer your questions in an online chat, Amazon’s Alexa telling you the best clinic to go to, and image recognition that detects cancer in a scan before most radiologists can. The US has an extremely expensive healthcare system, much of that caused by in-patient services in hospitals, yet a number of initiatives (telemedicine, home healthcare, etc.) do little to reduce the hospital visits and the related costs. Intelligent information can help reduce some of those costs through early detection of problems that are easily treatable before they become serious enough to require hospital care, and prediction of other conditions, such as homelessness, that often result in a greater need for healthcare services. These intelligent technologies are not intended to replace healthcare practitioners, but to assist them by processing more information faster than a person can, and surfacing insights that might otherwise be missed.

Abdul and his team have built a smart healthcare suite of applications that are based on a broad foundation of data sources: he sees the data as being key, since you can’t look for patterns or detect early symptoms without the data on which to apply the intelligent algorithms. With aggregate data from a wider population and specific data for a patient, intelligent healthcare can provide much more personalized, targeted recommendations for each individual. They’ve made a number of meaningful breakthroughs in applying AI technologies to healthcare services, such as identifying gaps in care based on treatment codes, and doing real-time monitoring and intervention via IoT devices such as fitness trackers.

These ideas are not unique to healthcare, of course; personalized recommendations based on a combination of a specific consumer’s data plus trends from aggregate population data can be applied to anything from social services to preventative equipment maintenance.

Anarchy in Edmonton: no, it’s not hockey, it’s Google Drive

I’m in a breakout session at the AIIM 2018 conference, and Kristan Cook and Gina Smith-Guidi are talking about their work at the City of Edmonton in transitioning from network drives to Google Drive for their unstructured corporate information. Corporate Records and Information Management (CRIM) is part of the Office of the City Clerk, and is run a bit independently of IT and in a semi-decentralized manner. They transitioned from Microsoft Office to Google Suite in 2013, and wanted to apply records management to what they were doing; at that time, there was nothing commercially available, so they hired a Google Apps developer to build it for them. They needed the usual records management requirements: lifecycle management, disposition and legal hold reporting, and tools to help users file in the correct location; on top of that, it had to be easy to use and relatively inexpensive. They also managed to reconcile over 2000 retention schedules into one master classification and retention schedule, something that elicited gasps from the audience here.

What they offer to the City departments is called COE Drive: a functional classification at the top level — it just appears as a folder in Google Drive — with the “big bucket” method below that, where documents are filed within a subfolder that represents the retention classification. When you click New in Google Drive, a custom popup asks for the primary classification and secondary classification/record series, and a subfolder within the secondary classification. This works for both uploaded files and newly-created Google Docs/Sheets files. Because these are implemented as folders in Google Drive, access permissions are applied so that users only see the classifications that apply to them when creating new documents. There’s also a simple customized view that can be rolled out to most users who only need to see certain classifications when browsing for documents. Users don’t need to know about retention schedules or records management, and can just work the way that they’ve been working with Google Drive for five years, with a small helper app to assist with filing documents. They’re also integrating Google File Stream (the sync capability) for files that people work on locally on their desktop, to ensure that these are both backed up and stored as proper records if required.
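The city's helper is a custom popup built with Google Apps Script; purely as a stand-in for the same idea, here's a rough Python sketch using the Google Drive v3 API, in which a new document is created directly in the folder for its retention classification, so the folder itself carries the records classification. The folder IDs and classification map are placeholders.

```python
# Rough stand-in for the filing helper idea, using the Google Drive v3 API.
# The classification map and folder ID are placeholders, not the city's setup.
from googleapiclient.discovery import build

CLASSIFICATION_FOLDERS = {
    ("Finance", "Accounts Payable"): "FOLDER_ID_PLACEHOLDER",  # secondary classification folder
}

def file_new_document(creds, name: str, primary: str, secondary: str) -> str:
    drive = build("drive", "v3", credentials=creds)
    metadata = {
        "name": name,
        "mimeType": "application/vnd.google-apps.document",  # create a new Google Doc
        "parents": [CLASSIFICATION_FOLDERS[(primary, secondary)]],
    }
    created = drive.files().create(body=metadata, fields="id").execute()
    return created["id"]
```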

The COE Drive is a single-account drive, I assume so that documents added to it are owned by that account and are not subject to individual user changes. There’s not much metadata stored except for the date, business area and retention classification; in my experience with Google Drive, the search capabilities mean that you need much less explicit metadata.

It sounds as if most of the original work was done by a single developer, and now they have new functionality created by one student developer; on top of that, since it’s cloud-based, there’s no infrastructure cost for servers or software licences, just subscription costs for Google Apps. They keep development in-house both to reduce costs and to speed deployment. Compare that with the cost and time for your usual content and records management project — there are no zeros missing: the original development cost was less than $50k (Canadian). That streamlined technology path has also inspired them to streamline their records management policies: changes to the retention schedule that used to require a year and five signatures can now be signed off by the City Clerk alone.

Lots of great discussion with the audience: public sector organizations are very interested in any solution where you can do robust content and records management using low-cost cloud-based tools, but many private sector companies are seeing the benefits as well. There was a question about whether they share their code: they don’t currently, but don’t have a philosophical problem with doing so — watch for their GitHub to pop up soon!

AIIM18 keynote with @jmancini77: it’s all about digital transformation

I haven’t been to the AIIM conference since the early to mid 90s; I stopped when I started to focus more on process than content (and it was very content-centric then), then stayed away when the conference was sold off, then started looking at it again when it reinvented itself a few years ago. These days, you can’t talk about content without process, so there’s a lot of content-oriented process here as well as AI, governance and a lot of other related topics.

I arrived yesterday just in time for a couple of late-afternoon sessions: one presentation on digital workplaces by Stephen Ludlow of OpenText that hit a number of topics that I’ve been working on with clients lately, then a roundtable on AI and content hosted by Carl Hillier of ABBYY. This morning, I attended the keynote where John Mancini discussed digital transformation and a report released today by AIIM. He put a lot of emphasis on AI and machine learning technologies; specifically, how they can help us to change our business models and accelerate transformation.

We’re in a different business and technology environment these days, and a recent survey by AIIM shows that a lot of people think that their business is being (or is about to be) disrupted, and that digital transformation is an important part of dealing with that. However, very few of them are more than a bit of the way towards their 2020 goals for transformation. In other words, people get that this is important, but just aren’t able to change as fast as is required. Mancini attributed this in part to the escalating complexity and chaos that we see in information management, where — like Alice — we are running hard just to stay in place. Given the increasing transparency of organizations’ operations, either voluntarily or through online customer opinions, staying in the same place isn’t good enough. One contributor to this is the number of content management systems that the average organization has (hint: it’s more than one) plus all of the other places where data and content reside, forcing workers to scramble around looking for information. Most companies don’t want to have a single monolithic source of content, but do want a federated way to find things when they need them: in part, this fits in with the relabelling of enterprise content management (ECM) as “Content Services” (Gartner’s term) or “Intelligent Information Management” (AIIM’s term), although I feel that’s a bit of unnecessary hand-waving that just distracts from the real issues of how companies deal with their content.

He went through some other key findings from their report on which technologies companies are looking at, and what priority they’re giving them; it looks like it’s worth a read. He wrapped up with a few of his own opinions, including the challenge that we need to consider content AND data, not content OR data: the distinction between structured and unstructured information is breaking down, in part because of the nature of natively-digital content, and in part because of AI technologies that quickly turn what we think of as content into data.

There’s a full slate of sessions today, stay tuned.

The Evolution of Privacy Regulations – an @AIIM1Canadian seminar

AIIM Toronto runs some great morning seminars every month or so, and today the guest is Else Khoury of Seshat Information Consulting, here to talk about privacy regulations. In the face of recent privacy gaffes, from the Facebook fiasco (the breach that wasn’t a breach) to Alphabet Labs not thinking about where the public data that they collect in Toronto will be stored (hello, data sovereignty), and with the upcoming GDPR regulations, privacy is hot right now. Khoury, who brightened our day by telling us that her company is named after the Egyptian goddess of recordkeeping, covered both Canadian and EU privacy frameworks.

In Canada, we’ve had the Privacy Act since 1983, which governs federal government offices and how they handle data about employees and citizens, including Freedom of Information. PIPEDA (Personal Information Protection and Electronic Documents Act) came in 2000, setting rules for how private organizations handle personal information. As technology evolved, the Digital Privacy Act of 2015 made major amendments to PIPEDA regarding mandatory breach reporting and recordkeeping. Khoury briefly covered FIPPA (Freedom of Information and Protection of Privacy Act) and MFIPPA, which apply the same sort of regulations as the Privacy Act but for provincial and municipal governments. PHIPA (Personal Health Information Protection Act) protects our health-related information across all types of health care providers, and was updated quite recently, after a few cases of nosy health care workers looking up records on people that they shouldn’t have, to state that “use” includes viewing information. There is (or soon will be) mandatory reporting of PHIPA breaches in most provinces, including reporting to the regulatory colleges for different types of health care workers. There is also a privacy framework for electronic health records (EHR) under new revisions to PHIPA.

There are analogous privacy regulations in many other countries; for example, the US HIPAA serves the same purpose as our PHIPA, while GDPR is a broader regulation that will cover data across all organizations rather than our division by private and public sector.

There was a good discussion on security versus privacy: security is often focused on keeping external parties out, whereas privacy has to do with how people handle data inside an organization, although these are often intertwined issues. Of course, it’s possible to have a privacy breach (e.g., inappropriate internal access) without a security breach and vice versa. Khoury pointed out that a lot of privacy regulations have to do with processes; in my experience, compliance regulations in general are very process-driven, and the best way to both avoid privacy breaches as well as prove that you have safeguards in place is to implement and audit processes around how data is handled.

She moved on to GDPR, which comes into effect in the EU in May of this year; GDPR covers all personal data of EU residents, since often the combination of data from multiple sources can be used to identify individuals even when a specific identifier (such as name) is not present. As with the 10 privacy principles in Canadian privacy regulations, GDPR has a set of key principles, and uses the concept of Privacy by Design that was co-developed by Ontario’s privacy commissioner. GDPR has specific rules around data retention, specifically not keeping data longer than is required, then securely destroying it. This led to a really interesting discussion of how companies that provide recommendations handle retention of historical data about your interactions with them, such as Netflix or Amazon: will we need to explicitly give them permission to keep information about our past purchases/consumption in order for them to give us better recommendations? GDPR will forever shift data permissions from opt-out to opt-in for Europeans, although that has been creeping up on us for a while.

One of the most talked-about GDPR principles is the right to be forgotten — Google has already received millions of take-down requests under that part of the regulation — although it doesn’t apply to most health care data, since that is required to provide proper medical care to an individual. They also have breach reporting regulations similar to Canada’s PIPEDA requirements, and pretty significant penalties if a breach occurs or an organization can be proven to be non-compliant.

She finished up with a discussion of how privacy regulation changes are likely to impact organizations, and how to operationalize privacy regulations, which depends on the type of data you handle (PI versus PHI), how you interact with it (processing versus controlling), and whether you have a privacy management program in place. You’ll need to assess your holdings — what data you collect, how it’s used, who has access, how long it is retained, how it is secured and destroyed — and develop a privacy management team that includes involvement of senior management and every department, not just a data privacy officer (DPO). You’ll need to develop a privacy management program that includes a breach response process, ensure that everyone is trained in privacy management, then audit and adapt it over time. If you’re subject to GDPR, you’ll also need processes for expunging data from your systems in a timely fashion in response to “right to be forgotten” requests.

You’ll also need to develop a framework for data protection impact assessments (DPIA, aka privacy impact assessments or PIA) which is a proactive risk assessment for new programs or systems that use personal data: interestingly, the first part of this is often mapping the information flow processes that cover collection, storage and access. Performing DPIA/PIA is part of what Khoury’s company does for organizations, and she had a good checklist of the steps involved, as well as pointing out that they should be a regular part of your privacy management program, not something that’s just done at the end as an audit step.

As always, great content at the AIIM Toronto morning seminars, and I look forward to the next one.