Navigating a New Frontier: Artificial Intelligence and Privacy Considerations

Fasken
Overview

Privacy & Cybersecurity Law Bulletin

The convergence of human and machine creativity through artificial intelligence (AI) presents intriguing possibilities for businesses. These systems have the unique ability to enhance productivity and efficiency through data analysis and solution generation. However, where infinite possibilities exist, legal considerations inevitably follow.

Background

The European Union (EU) has been proactive in addressing the considerations associated with AI by adopting the Artificial Intelligence Act (AI Act), the first comprehensive AI legislation of its kind. The Act is a forerunner in AI regulation and sets a global precedent for responsible AI governance, inspiring organizations and experts to advocate for similar measures within their respective governments. In response to similar calls for AI regulation, Canada has introduced the Artificial Intelligence and Data Act (AIDA) within Bill C-27. However, Bill C-27 remains a proposed piece of legislation and has not yet been enacted into law. Given the current parliamentary agenda, there is a significant chance that it may never be adopted. The bill should therefore be read as a statement of intent, one that could change if it fails to pass or is reintroduced in a future parliamentary session.

The regulations in AIDA differ from those of the EU, yet both share the common objective of promoting the responsible and safe use of AI systems. Both also confront the challenge of technology outpacing the enactment of regulatory frameworks. For instance, the rise of generative AI raises a wide array of new legal considerations, particularly those relating to individual privacy. Even the courts are confronted with unprecedented questions as they try to keep pace with evolving technologies.

DIAGRAM A – A BRIEF OVERVIEW OF ARTIFICIAL INTELLIGENCE

GENERATIVE ARTIFICIAL INTELLIGENCE

A branch of AI that uses machine learning techniques to generate new content based on what the model learned from the data it was trained on.

Comparing the EU and Canada’s Responses to AI

The EU’s AI Act was adopted on May 21, 2024, and takes a risk-based approach to regulating AI. Under this approach, each AI system is assessed and categorized based on the level of risk it presents, such that different AI applications must adhere to varying compliance requirements depending on the perceived threats to society. For instance, systems classified as “high-risk”[1] are subject to stringent regulations, which include providing detailed technical documentation, ensuring transparency regarding their training data, and undergoing mandatory third-party conformity assessments for certain applications. Conversely, AI systems deemed to pose “minimal or zero risk”[2] face significantly less scrutiny, but they are still required to comply with the fundamental principle of transparency.[3]

The regulations under AIDA take a different approach from the EU’s AI Act. Although the framework may appear less precise, it still aims to promote responsible AI design, development, and use. The following four types of systems are subject to AIDA’s regulations:

DIAGRAM B – SYSTEMS SUBJECT TO AIDA’S REGULATIONS

ARTIFICIAL INTELLIGENCE SYSTEM

A technological system that, using a model, makes inferences to generate output, including predictions, recommendations, or decisions.

MACHINE LEARNING MODEL

A digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns. 

GENERAL-PURPOSE SYSTEM

An artificial intelligence system that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes and activities not contemplated during the system’s development. 

HIGH-IMPACT SYSTEM

An artificial intelligence system of which at least one of the intended uses may reasonably be concluded to fall within a class of uses set out in the schedule.

 

AIDA similarly adopts a risk-based approach to mitigate certain harms associated with AI systems. These harms include (a) physical or psychological harm, (b) damage to an individual’s property, and (c) economic loss. This list may be expanded as AI evolves.

Both the AI Act and AIDA serve as examples for other countries considering AI regulation despite their material differences. However, these laws must continuously evolve to effectively address new legal challenges.

Recent Developments in AI

Generative AI has advanced the field with its unique abilities to autonomously generate text, images, music, and videos. However, the legal considerations surrounding these technologies remain in question. Even the EU and Canada, as leaders in AI governance, are struggling to provide definitive answers.

The European Commission initially proposed its AI regulation in April 2021, at a time when generative AI was not the primary focus. Since then, the technological landscape has evolved tremendously, and the EU has reacted by outlining a new approach to regulating AI. Generative AI is now regulated under the AI Act[4] and there are specific obligations targeted at providers of generative AI systems. Specifically, it requires providers to implement advanced safeguards to prevent the generation of content that breaches EU laws, document and disclose the use of copyrighted training data, and meet stringent transparency requirements. These measures are primarily designed to protect intellectual property rights and prevent the creation and distribution of deceptive content. Furthermore, generative AI systems must adhere not only to AI-specific regulations but also to the General Data Protection Regulation (GDPR).[5] This dual layer of oversight indicates that providers must still adhere to general privacy standards while pushing the boundaries of innovation.

Canada’s regulatory response to the rise of generative AI is less stringent than that of the EU. Instead of explicitly defining generative AI, it implicitly includes it under a broad definition of AI. That broad definition arguably subjects generative AI to the same regulatory requirements as other AI systems.[6]

DIAGRAM C – AIDA’S BROAD DEFINITION FOR AI SYSTEMS

ARTIFICIAL INTELLIGENCE SYSTEM

“…neural networks, machine learning, or other techniques to generate content, make predictions or recommendations, or... ”

 

The federal government has also demonstrated its commitment to helping organizations implement AI safely through its Code of Practice.[7] This Code is designed to promote the secure deployment of AI systems and emphasizes six fundamental principles: safety, fairness and equity, transparency, human oversight and monitoring, validity and robustness, and accountability. It also encourages stakeholders to engage in conversations on the topic, recognizing that these are novel issues being navigated together.

Generative AI and New Privacy Concerns

This discussion suggests that both the AI Act and AIDA, despite their differences, are taking progressive steps in regulating AI. However, neither the EU’s nor Canada’s legislative framework is focused on privacy protection alone. Instead, each considers the broad range of impacts these systems may have on individuals’ rights and freedoms, including privacy rights. This raises the following questions: How do privacy concepts intersect with AI? Is it still meaningful to speak of data collection? And, from usage to output, does an AI system’s output constitute a communication of personal information?

The answers to these questions are gradually unfolding in Canada. The federal government recently issued guidance[8] on the use of generative AI, emphasizing responsible and ethical practices. These principles apply to Large Language Models (LLMs) and other forms of generative AI. This guidance is valuable for organizations intending to implement AI practices; however, it does not carry the same force as formal legislation.

With that said, on September 26, 2023, the Standing Committee on Industry and Technology (INDU) began a detailed review of Bill C-27.[9] The committee has been actively hearing from a variety of experts and stakeholders to guide potential changes to the legislation. Much of the debate has centered on addressing the possible negative impacts of artificial intelligence. The review process is currently paused but is scheduled to resume in September 2024. One notable contribution came from Privacy Commissioner Philippe Dufresne on October 19, 2023. He advocated for mandatory Privacy Impact Assessments (PIAs) for high-risk operations, such as those involving AI. He further highlighted that privacy threats rank among the top three risks identified by G7 members in an OECD report focused on generative AI.

Will this ongoing dialogue begin to clarify some of the questions surrounding generative AI and privacy concerns?

The Law of the Horse: Acting Now

These questions underscore the need, at this stage, for pragmatic solutions within existing regulatory frameworks. This position is inspired by the “Law of the Horse”, an expression that originates from Judge Frank H. Easterbrook’s 1996 lecture, Cyberspace and the Law of the Horse. Judge Easterbrook used it to criticize the creation of overly specialized legal fields, advocating instead for applying general legal principles to new areas of law. In the context of generative AI, it implies leveraging principles from established legal norms and adapting them to the unique context of these systems. This approach would help prevent gaps in regulation, especially as technologies continue to advance, and urges policymakers to work within familiar legal boundaries.

The recent decision by the Commission d’accès à l’information (CAI) supports this notion. On November 9, 2022, the CAI concluded an investigation into an AI tool designed to predict the risk of student dropout by analyzing a variety of de-identified student data.[10] The tool was initially described as a mere data retrieval mechanism; however, the CAI recognized that the tool generated intricate information similar to what would typically require substantial human statistical expertise to interpret. The CAI classified the outputs of this AI system as personal information, extending privacy laws to encompass the generative capabilities of the tool.

The perspective that AI-based indicators represent a ‘new collection and new use’ of personal information has prompted a reevaluation of the relationship between AI systems and privacy legislation in Canada.[11] It has clarified that AI systems are indeed subject to Quebec’s existing privacy laws, though the future of AI regulation in Canada remains uncertain.

Fasken’s Role

If you are thinking about incorporating AI into your business, do not let these uncertainties deter you. Our team is committed to staying at the forefront of AI regulation and to addressing emerging legal considerations as they arise. We will keep you updated on the latest developments on this topic.


 

[1] High-risk systems are those used in the following domains: biometric data, critical infrastructures, education and vocational training, employment and worker management, access to essential private and public services, law enforcement services, migration, asylum, and border controls, or administration of justice and democratic processes (article 6(2), AI Act).

[2] Minimal or zero risk systems include applications such as spam filters or video games enabled by AI. Unless qualifying as a high-risk system, the majority of AI systems fall into this category.

[3] European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final.

[4] Article 6(2) of the AI Act defines generative AI as a system “specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio or video”.

[5] European Union. (2018). General Data Protection Regulation (GDPR).

[6] Government of Canada. (2022). Artificial Intelligence and Data Act: An Overview.

[7] Innovation, Science and Economic Development Canada. (2023). Consultation on the Development of a Canadian Code of Practice for Generative Artificial Intelligence Systems.

[8] Government of Canada. (2023). Principles for responsible, trustworthy and privacy-protective generative AI technologies.

[9] Digital Charter Implementation Act, 2022. See https://www.parl.ca/legisinfo/en/bill/44-1/c-27

[10] Commission d’accès à l’information du Québec, Enquête concernant le Centre de services scolaire du Val-des-Cerfs, 2022

[11] Ibid.

Contact the Authors

If you have any questions regarding the implications of the EU’s AI Act or Canada’s AIDA on privacy and cybersecurity, please contact a member of our Privacy & Cybersecurity group.

Authors

  • Rémi Slama, LLM, Associate, Montréal, QC, +1 514 397 7462, rslama@fasken.com
  • Emma Peress, Student, Montréal, QC, +1 514 397 7631, eperess@fasken.com
  • Kateri-Anne Grenier, Partner | Co-Leader, Privacy & Cybersecurity, Québec, QC, +1 418 640 2040, kgrenier@fasken.com
