Quebec's proposed Bill 64 has made waves with its sweeping changes inspired by the General Data Protection Regulation ("GDPR"). At Fasken, we've written about the ways in which Bill 64 could impact your business, from its expansion of people's rights regarding their personal information to restrictions on the ways you can use such personal information. This installment of our series combines our Emerging Tech and Privacy expertise in a discussion of automated decision making, which increasingly relies on artificial intelligence algorithms. The ability of automated systems to make decisions about individuals is not new: computer programs have been used for automated credit scoring since the 1990s. However, new developments in machine learning and AI, increased public awareness and a greater prevalence of automated decision making systems in everyday life have raised questions about what rights individuals should have regarding this use of their personal information.
As a result, the regulation of automated decision making ("ADM") in privacy laws is a relatively new development, introduced by the GDPR in the context of certain decisions of importance to individuals. This bulletin dives into the proposed introduction of a framework for using ADM within Quebec's privacy legislation. It also discusses distinctions between Bill 64 and the GDPR, a useful starting point for those looking to align their GDPR-based compliance system with the upcoming changes proposed by Bill 64.
What do we mean by ADM?
An ADM is a system that can make a decision by automated means without requiring any human involvement. For example, think of applying for a loan online or filling out an aptitude quiz for a job application. Chances are, there was no human involved in making that decision, meaning it is purely automated processing. Automated decisions are being used in all kinds of contexts, from triaging patients, to automated underwriting in the insurance industry, to matching potential couples on dating apps[1]. These decisions invisibly shape people's lives in a myriad of ways.
To understand why the new developments under Bill 64 are so important, we need to consider what a decision based exclusively on automated processing means. ADM is not synonymous with Artificial Intelligence ("AI"). ADM is a broad term referring to all automated processing, whether or not it involves AI technology. Automated decisions can be powered by AI or can be based on formulas that process information in relatively predictable ways. AI is not just designed to apply prewritten rules; it is designed to search for patterns, to learn from experience and to select the best choice from a given range of options. AI often relies on Machine Learning ("ML"), in which information, called "inputs", is fed into the system, which produces a decision called the "output". The challenge with ML is that it operates like a "black box", where the logic of how a decision was made can be hidden. Moreover, ML systems often use self-teaching algorithms, in which the system actually generates new rules, so a person may not even be involved in choosing how a system makes decisions.
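The distinction above can be illustrated with a toy sketch. The function names, feature names and weights below are invented for illustration only; the point is that a formula-based ADM embodies rules a person wrote down, while an ML-style ADM applies weights that were learned from data, so no person hand-wrote the decision rule.

```python
# Toy illustration of two ADM styles (all names and numbers are hypothetical).

def rule_based_decision(income: float, debt: float) -> bool:
    """A fixed-formula ADM: the logic is explicit and auditable."""
    debt_ratio = debt / income if income > 0 else 1.0
    return debt_ratio < 0.4  # approve when debt is under 40% of income

# An ML-style ADM applies weights that were *learned* from training data,
# so the decision rule was not hand-written by any person.
learned_weights = {"income": 0.00002, "debt": -0.00005, "bias": -0.5}

def ml_style_decision(income: float, debt: float) -> bool:
    score = (learned_weights["income"] * income
             + learned_weights["debt"] * debt
             + learned_weights["bias"])
    return score > 0

# The two systems can disagree on the same applicant, and only the first
# one's reasoning can be read directly off the code.
print(rule_based_decision(60000, 20000))  # True  (debt ratio is 0.33)
print(ml_style_decision(60000, 20000))    # False (score is -0.3)
```

The second function is trivially simple here, but in a real ML system the weights may number in the millions and interact nonlinearly, which is the source of the "black box" problem described above.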
The distinction between the type of technologies used for ADM is important because it has concrete implications on how the system can be explained or how its decisions can be subject to review. This brings us to our next point: how can you comply with Bill 64 and its requirements when using ADM?
Complying with Bill 64: Be transparent about your use of ADM systems
If Bill 64 passes in its current state, it will introduce an obligation on any "person carrying on an enterprise" to inform individuals when they use "information to render a decision based exclusively on an automated processing" at the time of or before the decision. This is a timely addition, as Quebec's current privacy legislation does not address automated decisions at all.
Bill 64 is aimed at decisions that are based exclusively on an automated processing, implying it does not extend to processing where there was human intervention. Our understanding is that Bill 64 is intended to address decisions that are only made on an automated basis without a person participating in making the decision. This would exclude decisions where automated processing is used as a tool to assist a decision-maker. For example, an ADM system that decides whether someone should be given a loan would be covered, but an ADM system that merely provides a report to a bank employee, who ultimately decides to approve the loan, would not be covered.
Bill 64 also proposes that organizations would have to inform the individual if that person makes a request for information. Individuals would be allowed to receive information about:
- which personal information was used to render the decision;
- what are the reasons and the principal factors and parameters that led to the decision; and
- the right of the person whose personal information is collected to have the personal information used to render the decision corrected[2].
Bill 64 also states that the person must be given the opportunity to submit observations to a member of the personnel of the enterprise who is in a position to review the decision.
Yet Bill 64 is silent on how this obligation should be applied, for example, on what type of information needs to be provided when giving the reasons and principal factors that led to the decision. Do organizations need to provide the entire reasoning of the decision and all the rules coded into the system? Do they have to explain how they came to a decision concerning a specific person, or can they provide general information?
Though it is possible to adapt privacy notices to inform individuals that AI is being used for ADM, explaining how a decision was made could be more complex. An AI system may use a wide array of different data points, spot trends within larger databases or weigh many different data points to come to a decision. This process may be too complex to explain, making the decision-making process opaque and impenetrable at times. This has spurred a movement for "explainable AI" where decisions can be understood and communicated.
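One common approach in the "explainable AI" movement is to surface the principal factors behind a decision rather than the full model. A minimal sketch, assuming a simple linear scoring model (the feature names and weights are invented for illustration):

```python
# Hypothetical explainability sketch: list each feature's contribution to a
# linear model's score, ordered by impact. Weights and features are invented.

weights = {"annual_income": 0.3, "years_employed": 0.5, "missed_payments": -0.8}

def explain_decision(applicant: dict) -> list:
    """Return the principal factors, ordered by the size of their impact."""
    contributions = {name: weights[name] * applicant[name] for name in weights}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

factors = explain_decision(
    {"annual_income": 5, "years_employed": 4, "missed_payments": 2}
)
for name, impact in factors:
    print(f"{name}: {impact:+.1f}")
# years_employed: +2.0
# missed_payments: -1.6
# annual_income: +1.5
```

For a linear model this kind of per-factor breakdown is exact; for complex ML systems, analogous explanations can only be approximated, which is precisely the difficulty the bulletin describes.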
New sanctions under Bill 64
Bill 64 not only creates these new obligations to inform people that automated decisions are being made about them and to give them the chance to provide their input, it also imposes serious administrative monetary penalties for failing to meet those obligations. Bill 64 grants the Commission d'accès à l'information ("CAI") new powers to sanction failure to inform individuals or respond to their requests concerning automated processing, including imposing administrative monetary penalties as high as C$10,000,000 or 2% of worldwide turnover, whichever is greater. Bill 64 also creates a new private right of action for individuals to bring claims for injuries to their privacy rights. You can read more about sanctions in our previous publications.
Where Bill 64 differs from the GDPR
The GDPR introduced the right for people to be informed by controllers as to whether an ADM system is being used, including for profiling. Data subjects (i.e. individuals whose personal information is being processed by an entity subject to the GDPR) are also entitled to receive meaningful information about the logic involved and about the significance and the envisaged consequences of the processing for them. Bill 64 states that individuals would have the right to be informed of the principal factors and parameters that led to the decision, but it does not go as far as stating that the individual needs to be informed of the consequences of the decision for themselves.
The GDPR also provides a right "not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her"[3]. This right has been framed as a "right to human intervention" when the decision has legal effects or other important effects on the individual[4]. The same article of the GDPR also adds that (1) safeguards must be put in place to protect the data subject's rights, freedoms and interests when they are subject to automated decisions, (2) the data subject must have the chance to express their point of view on the decision, and (3) the data subject must be able to contest the decision. For example, a decision on whether someone should be granted parole, or any other decision that has an important impact, such as being refused an online credit application, should not be made solely by automated processing. Even if using ADM is permitted, safeguards must be implemented so that the individual's rights are preserved throughout the process, and they must be able to express their disagreement and contest the decision. Safeguards can take many forms, such as using privacy by design to limit what information is collected or implementing measures to minimize the risk of unfairness or discrimination in the decision making process.
Bill 64 introduces a requirement that the person concerned must be given the opportunity to submit observations to a member of staff who is qualified to review the automated decision. Thus, both Bill 64 and the GDPR give the person the right to express their opinion on the automated decision and to obtain a "human review" of the decision. However, Bill 64 is not framed as a right to not be subject to automated decisions, and there is no mention of further safeguards as in the GDPR.
Implementing mechanisms for rights requests
The impacts of Bill 64 will be felt across the tech industry in Montreal because of its importance as a centre of AI development. There are several ways in which organizations can prepare for the new ADM requirements should Bill 64 pass. As the saying goes: an ounce of prevention is worth a pound of cure. Algorithmic Impact Assessments ("AIA") are designed to help assess and mitigate the risks associated with deploying an automated decision system. They are a critical means to document and understand what data inputs, data outputs and parameters are used. An AIA can also determine what impacts the ADM will have on people. For more information on evaluating the privacy implications of a project, check out Fasken's bulletin explaining how to evaluate privacy-related factors.
An AIA can be used to detect issues such as potential unfairness and discrimination in an ADM, and to implement privacy by design. By implementing preventative measures at the design phase, compliance is made easier, and potential issues are addressed throughout the design process rather than discovered in a finished system later on. For example, putting in place mechanisms for people to correct their personal information, or designing a mechanism to explain automated decisions, can be accomplished during the design process.
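The documentation function of an AIA can be sketched in code. The record below is a minimal, hypothetical structure, not a prescribed legal template; the field names simply mirror the items the AIA should capture: data inputs, data outputs, parameters, impacts on individuals and mitigation measures.

```python
# Hypothetical sketch of an AIA record; field names are illustrative
# assumptions, not a prescribed legal or regulatory template.

from dataclasses import dataclass, field

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    data_inputs: list            # personal information fed into the system
    data_outputs: list           # decisions or scores the system produces
    parameters: list             # principal factors weighed by the system
    impacts_on_individuals: list = field(default_factory=list)
    mitigation_measures: list = field(default_factory=list)

# Example entry for a fictional loan-screening system.
aia = AlgorithmicImpactAssessment(
    system_name="online-loan-screening",
    data_inputs=["income", "credit history"],
    data_outputs=["approve/deny recommendation"],
    parameters=["debt-to-income ratio", "payment history"],
    impacts_on_individuals=["denial of credit"],
    mitigation_measures=["human review of denials",
                         "mechanism to correct personal information"],
)
print(aia.system_name)
```

Keeping such a record up to date makes it far easier to answer an individual's request about which personal information was used and which principal factors led to a decision.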
Conclusion: Prevention is better than cure
To conclude, organizations that use ADM systems will be faced with challenges in implementing this new requirement. ADM systems that use predictive algorithms penetrate ever greater spheres of activity, from governments, to banks, to insurance companies, to advertisers and even hospitals. Companies that use these technologies need to think about how to make the best possible use of automated processing while providing individuals with enough transparency and accountability for them to understand how their personal information is used. This is not only an opportunity for compliance with the law, but one that can enhance consumer trust in brands.
BILL 64 RESOURCE CENTER - Visit our Bill 64 Resource Center for all the information you need to help you to cope with the changes that might be made to the legislation. DISTRIBUTION LIST - If you do not want to miss our next bulletins and any other relevant information on this subject, sign up now on our distribution list to receive all communications related to this new Bill.
[1] Tinder, Privacy, https://www.help.tinder.com/hc/en-us/articles/360003082172-Profiling-and-automated-decision-making-at-Tinder; Doctors are using AI to triage covid-19 patients, MIT Technology Review, https://www.technologyreview.com/2020/04/23/1000410/ai-triage-covid-19-patients-health-care/.
[2] Bill 64, s. 12.1.
[3] General Data Protection Regulation, art. 22.
[4] European Commission, Can I be subject to automated individual decision-making, including profiling?, https://ec.europa.eu/info/law/law-topic/data-protection/reform/rights-citizens/my-rights/can-i-be-subject-automated-individual-decision-making-including-profiling_en.