Because Europe’s new General Data Protection Regulation (GDPR) has extraterritorial legal reach, companies across the globe are dealing with its impact, revising privacy policies and practices (such as those annoying cookie-consent pop-ups on many websites, a notice the GDPR requires). One of the topics of our work in Salzburg was whether boards need expertise to address the use of AI in a company’s business processes and, possibly, its products and services.
A question boards must consider is the implications of the GDPR for the use of Artificial Intelligence (AI) and Machine Learning (ML). The GDPR carries severe penalties, and significant privacy issues tend to carry high reputational costs. With heightened concerns around AI, ML, and privacy, brighter lights will shine on these issues when they arise.
As your company moves into the use of these new technologies, are you prepared? Is your board?
With the GDPR in effect for just over six months, it is too early to know its impact – good or bad. Do you see the GDPR as an impediment or an enabler of AI and ML for your company? Are there legal frameworks you can imagine, or are aware of, that might be a better approach? Is your company weighing these issues?
The more data an AI or ML system processes, the more accurately it can complete its tasks. When that data personally identifies individuals, privacy questions come to the fore. There are also privacy concerns about the outputs of an AI or ML system, which may paint a portrait of an individual revealing personal attributes that the individual would prefer to keep private. Sometimes data may not be personally identifying on its own, yet could be combined with data that are, with the result of identifying an individual. The European Court of Justice has already held that where this is “likely reasonably” to occur, the former data moves into the class of data protected by the Data Protection Directive, the predecessor to the GDPR.
In Europe, the GDPR, in part, addresses these issues directly, stating in Article 9: “Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.”
The GDPR requires that the consent of a data subject (i.e., the person whose data is being processed) be freely given, specific, informed, and unambiguous – and given by a clear affirmative act, such as a written or spoken statement. “Specific and informed” means that consent is granted only for the particular purpose for which it is sought, and does not extend to other (e.g., new) purposes. Further, consent can be withdrawn at any time, and the individual has a right to have the data deleted (i.e., the right to be forgotten).
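To make these consent rules concrete, here is a minimal sketch in Python of a consent record that enforces purpose limitation and withdrawal. The structure, field names, and purposes are illustrative assumptions; the GDPR does not mandate any particular data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Hypothetical record of one data subject's consent.
    Consent is specific: it names exactly one purpose."""
    subject_id: str
    purpose: str                             # the specific purpose consented to
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None  # set when consent is withdrawn

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Processing is permitted only for the purpose originally consented
    to, and only while consent has not been withdrawn."""
    return record.withdrawn_at is None and record.purpose == purpose

# Consent given for one purpose does not extend to a new one,
# and withdrawal voids it entirely.
rec = ConsentRecord("subject-123", "analytics",
                    granted_at=datetime.now(timezone.utc))
print(may_process(rec, "analytics"))   # True
print(may_process(rec, "marketing"))   # False
rec.withdrawn_at = datetime.now(timezone.utc)
print(may_process(rec, "analytics"))   # False
```

In practice a company would need a record like this for each purpose, and would have to re-seek consent before any new use of the data.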
An alternative to obtaining consent is to anonymize, de-identify, or pseudonymize the data, which allows a data processor to use the data for purposes beyond those for which consent was obtained. However, anonymization is only as effective as it is irreversible. As the UK’s Information Commissioner’s Office points out, it may not be possible to establish with absolute certainty that the anonymization of a particular dataset is irreversible, especially when the dataset is taken together with other data that may exist elsewhere.
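As an illustration, pseudonymization is commonly implemented with a keyed hash: records can still be linked for analysis, but the raw identifier is not stored. This sketch uses Python's standard-library `hmac` module; the key value and the identifiers are illustrative assumptions, and note that anyone holding the key can re-link pseudonyms, which is why pseudonymized data remains personal data under the GDPR.

```python
import hmac
import hashlib

# Hypothetical secret key. In practice it would be stored separately
# from the data (e.g., in a key-management service), because whoever
# holds the key can map pseudonyms back to identities.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g., an email address) with a
    keyed-hash pseudonym. The same input always yields the same
    pseudonym, so records about one person can still be linked."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Stable pseudonyms allow longitudinal analysis without exposing
# the raw identifier in the analytic dataset.
p1 = pseudonymize("alice@example.com")
p2 = pseudonymize("alice@example.com")
p3 = pseudonymize("bob@example.com")
print(p1 == p2)   # True  (same person links to the same pseudonym)
print(p1 == p3)   # False (different people stay distinct)
```

This also shows why irreversibility is hard to guarantee: the transformation is reversible by anyone with the key, and even without it, pseudonyms can sometimes be matched against auxiliary datasets.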
GDPR Article 5 sets out “principles relating to processing of personal data,” including “lawfulness, fairness and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability.” Some of these principles may cut against the use of AI and ML, which typically collect as much data as possible and then analyze the data after collection (the “learning” process). That process makes complying with the purpose limitation and data minimization principles challenging.
Article 22 protects data subjects from decisions based solely on “automated individual decision-making, including profiling” that produce legal effects concerning, or similarly significantly affect, the data subject. The restriction can be overcome if the data subject gives explicit consent, and it applies only to decisions based solely on automated processing. Therefore, for decisions such as applications for credit, loans, or health insurance – or job interviews, performance appraisals, school admissions, or court-ordered incarceration – the automation can (and many would say should) be used to inform a human decision, not supplant it.
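One common compliance pattern is to let a model's output inform, rather than make, decisions that significantly affect a person. Here is a minimal Python sketch of such routing; the score semantics, threshold, and field names are illustrative assumptions, not anything prescribed by the GDPR.

```python
from dataclasses import dataclass

@dataclass
class Application:
    """A hypothetical credit application scored by an ML model."""
    applicant_id: str
    model_score: float  # creditworthiness score in [0, 1]; higher is better

def route_decision(app: Application, auto_approve_above: float = 0.9) -> str:
    """Route an application so that no adverse decision is made solely
    by automated processing: high-confidence favorable cases pass
    through, everything else goes to a human reviewer.
    (Whether purely favorable automated decisions also fall under
    Article 22 is itself debated; this is only an illustration.)"""
    if app.model_score >= auto_approve_above:
        return "approve"
    # Never auto-deny: a human makes any adverse or borderline call,
    # with the model score available as one input among others.
    return "human_review"

print(route_decision(Application("a1", 0.95)))  # approve
print(route_decision(Application("a2", 0.40)))  # human_review
```

The design choice is that the model narrows the human's workload rather than replacing the human: the adverse path always ends at a person who can weigh context the model does not see.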
The use of AI or ML for “decision-making, including profiling” must also be “explainable” to the data subject. But the required extent of that explainability – and the degree to which the data subject must understand it – remains an open question. Barriers to understanding an algorithm include the data subject’s technical literacy and the mismatch between the high-dimensional mathematical optimization characteristic of machine learning (i.e., the conditional probabilities an ML model generates) and the demands of human-scale reasoning and interpretation (i.e., human understanding of causality).
There are competing views on whether the provisions of the GDPR enable or obstruct AI and ML. For example, how does the GDPR right to withdraw consent weigh on a company’s decision to use the data? Deleting data from widely federated datasets can be a challenge, and doing so diminishes the “learning” based on that data. With each new use of the data, the company must go back and obtain consent. Is that alone an impediment? With the growing range of devices collecting data (i.e., the Internet of Things), will specific and informed consent be possible as a practical matter?
In contrast to those raising concerns, Jeff Bullwinkel at Microsoft has written that the GDPR framework strikes the right balance between protecting privacy and enabling the use of AI – provided the law is interpreted reasonably.
What is your view? How is your company weighing these issues? Do you see the GDPR as an enabler? Blocker? Do you know enough about the GDPR to make informed decisions? Does the rest of your board know enough? Given the potential liabilities and risks to the company, do you think it should?
Disclaimer: The Salzburg Questions for Corporate Governance is an online discussion series introduced and led by Fellows of the Salzburg Global Corporate Governance Forum. The articles and comments represent opinions of the authors and commenters, and do not necessarily represent the views of their corporations or institutions, nor of Salzburg Global Seminar. Readers are welcome to address any questions about this series to Forum Director, Charles E. Ehrlich: firstname.lastname@example.org