
Navigating Artificial Intelligence Risks for Financial Institutions: 2024 Insights from OSFI, FCAC and the AMF

Fasken


Financial Services Bulletin

Introduction

As financial institutions’ adoption of artificial intelligence (AI) continues to accelerate, regulators are increasingly focused on understanding and mitigating the risks associated with a growing range of AI use cases.

On September 24, 2024, the Office of the Superintendent of Financial Institutions (OSFI) and the Financial Consumer Agency of Canada (FCAC) released a report on AI uses and risks at federally regulated financial institutions (FRFIs). The report draws on findings from an AI questionnaire sent to FRFIs in late 2023 requesting feedback on AI and quantum computing preparedness. In February 2024, Quebec’s Autorité des marchés financiers (AMF) published a discussion paper encouraging the adoption of certain best practices for responsible AI implementation and use in the financial sector. Together, these publications paint a picture of the current risk environment and suggest a way forward.

The Growing Prevalence of AI in the Canadian Financial Sector

The OSFI-FCAC report highlights a significant increase in AI adoption among financial institutions. In 2019, approximately 30% of FRFIs were using AI; that figure rose to 50% by 2023 and is projected to reach 70% by 2026. Additionally, 75% of the financial institutions that responded to the questionnaire plan to invest in AI over the next three years as it becomes a strategic priority. AI is being applied to a wide variety of uses and is already significantly impacting areas such as operational efficiency and customer engagement. Importantly, AI is increasingly being used for core functions at insurance companies (underwriting and claims management) and deposit-taking institutions (credit risk activities and compliance monitoring).

The AMF's discussion paper presents similar findings, emphasizing the transformative potential of AI in the financial sector. It notes that AI can lead to the development of new financial products and services, improve client segmentation, and enhance the overall client experience.

Risks Presented by AI Use

In general, the use of AI by financial institutions not only introduces new risks but also increases the scope and impact of existing ones. The OSFI-FCAC report categorizes AI-related risks as either internal or external.

1. Internal Risks

Internal risks are those that affect the financial institution and its products and services, including data governance risks, model risks (e.g., explainability, bias), third-party risk, operational risk, cybersecurity risk and reputational risk. The AMF discussion paper highlights the material harms that model risks could cause consumers (e.g., denying access to a financial product or service as a result of a model’s discriminatory bias).

2. External Risks

The OSFI-FCAC report states that malicious actors are using generative AI to carry out new cyber attack strategies and that generative AI lowers the cost of such attacks, making smaller institutions more attractive targets. The report also notes that instances of fraud are increasing.

External risks also give rise to systemic risks to market integrity and financial stability. AI systems used to automate trading processes and investment decisions could adversely affect public confidence in financial markets. Errors or drift in such models could also trigger flash crashes, creating market liquidity issues. Additionally, increased AI adoption has the potential to enable malicious actors and amplify geopolitical risks associated with misinformation and disinformation.

AI Implementation Pitfalls and Recommendations

Pitfalls

The OSFI-FCAC report addresses potential pitfalls to avoid when implementing AI systems and mitigating AI risk:

  • Failing to establish oversight of AI risk management;
  • Implementing AI controls that do not address all risks across the AI model lifecycle;
  • Lacking contingency actions or safeguards when using AI models;
  • Failing to update risk frameworks and controls to address generative AI risks;
  • Neglecting to provide sufficient AI training to employees;
  • Not taking a strategic approach to AI adoption; and
  • Assuming that not using AI means there are no AI risks.

Recommendations

The AMF discussion paper provides 30 recommendations for the responsible use of AI in the financial sector in a way that attempts to balance innovation and risk mitigation. These recommendations generally fall within the following categories:

  • Consumer protection: using AI in consumers’ best interest, respecting consumers’ privacy, increasing consumer autonomy, treating consumers fairly, managing conflicts of interest in consumers’ best interests, and consulting the consumers who are required to use AI systems.
  • Transparency for consumers and the public: disclosing information about AI design and use framework, disclosing information on the use of AI in products and services, explaining consumer-related model outcomes, and providing consumers with communication channels and assistance.
  • Appropriateness of AI systems: justifying each AI use case, and prioritizing the most easily explainable treatment.
  • Responsibility: being accountable for the actions and decisions of an AI system, making employees and officers accountable with respect to the use of AI, and implementing human control proportional to AI risk.
  • AI design and use: overseeing AI design and use, establishing a code of ethics for the design and use of AI, creating an environment favourable to transparency and disclosure, establishing a consistent approach to AI system design lifecycle, facilitating the creation of diversified work teams, conducting due diligence on third-party AI systems, and using AI in a manner enabling the achievement of sustainable development objectives.
  • Managing AI-associated risks: assessing the risks associated with the use of an AI system, ensuring AI system security, governing the data used by AI systems, managing the risks associated with AI models, performing impact analyses and testing on AI systems, monitoring AI system performance on an ongoing basis, regularly auditing AI systems, and training employees and users on AI.

Conclusion

The OSFI-FCAC report and the AMF discussion paper both underscore the transformative potential of AI in the financial sector while highlighting the critical need for effective risk management. All financial institutions, even those not implementing AI models, need to be aware of how the risk environment is changing.

Contact the Authors

Fasken’s Financial Services group is actively monitoring regulatory developments in this area. For more information or to discuss a specific matter, please contact us.


Authors

  • Koker Christensen, Partner | Co-Leader, Financial Services, Toronto, ON, +1 416 868 3495, kchristensen@fasken.com
  • Isabelle Savoie, Associate, Toronto, ON, +1 416 943 8993, isavoie@fasken.com
