On March 30, the Autorité des marchés financiers (AMF, Quebec’s securities regulator) published a decision establishing its Guideline for the Use of Artificial Intelligence (the Guideline), applicable to authorized insurers, financial services cooperatives, authorized deposit institutions, and authorized trust companies operating in Quebec. The Guideline will come into force on May 1, 2027, giving institutions one year to achieve compliance.
This publication follows a public consultation held in 2025, which we commented on in a previous update.
This update covers the key requirements of the finalized Guideline and their implications for financial institutions, as well as the principal modifications from the version initially submitted for public consultation.
Overview of the Guideline
The Guideline is grounded in the Organisation for Economic Co-operation and Development AI Principles and other internationally recognized governance and risk management standards. It outlines the AMF’s expectations regarding the measures financial institutions should adopt to manage AI-related risks holistically and ensure clients are treated fairly.
The Guideline sets out the AMF’s expectations across five main pillars:
- Governance: The board of directors should ensure it is regularly informed of trends, risks and significant changes arising from AI system (AIS) use that could impact the institution’s risk profile. The board should maintain sufficient collective expertise to understand the risks faced by the institution, particularly when AIS are used in critical activities. Senior management should put in place adequate governance enabling the management and control of AIS-related risks, designate a member of senior management accountable for all AIS, and ensure validation exercises are conducted at an appropriate frequency.
- Organization-wide risk management: The institution’s model risk management framework should enable it to identify, assess, control, mitigate, and monitor each AI system, providing a comprehensive view of its inherent and residual risk exposure.
- Risk-based classification: Institutions should maintain a centralized inventory of all AIS, assign each a risk rating that may be based on quantitative factors (e.g., operational, financial and security impacts) and qualitative factors (e.g., degree of autonomy, risk of non-compliance), and modulate lifecycle expectations according to that rating. The institution’s risk-based approach should calibrate the scope and frequency of validation, documentation, approval, and monitoring activities for each AIS according to its risk rating, with AI-specific controls and constraints layered on top of those already applied under the model risk management framework.
- Expectations throughout the AIS lifecycle: The Guideline addresses potential concerns at each stage of the AIS lifecycle, including the rationale for use, data quality, procurement and design, validation, approval, deployment assessment (covering cyber risk and infrastructure vulnerability), ongoing monitoring, and decommissioning. An institution’s governance framework should take into account the AMF’s expectations related to each of those stages.
- Sound commercial practices: Institutions should ensure their codes of ethics apply to situations related to the use of AI and identify and correct discriminatory factors and biases. They should also comply with client communication expectations, which include informing clients when they are interacting with an AIS and disclosing when content has been generated with the participation of an AIS. Institutions should also ensure clients have access to a human representative upon request.
As with other AMF guidelines, the Guideline is principles-based and its application should be proportionate to each institution’s nature, size, complexity, and risk profile.
Key differences from the consultation version
While the finalized Guideline is similar to the consultation version, the document has been substantially restructured. On governance, the dedicated subsections on the risk management function and the internal audit function have been removed and subsumed within the broader framework. The board’s oversight role has been broadened beyond high-risk AIS to encompass evolving trends, risks and significant changes generally, while senior management’s obligations have been simplified to ensuring adequate governance for AIS risk management.
The risk classification framework has also been streamlined from an exhaustive list of prescriptive factors to a principles-based approach distinguishing between quantitative factors and qualitative factors.
On client treatment, the separate sections on discrimination, bias, client data quality and consent have been consolidated, and a new expectation requires institutions to accompany any AI-generated content with a sufficiently prominent notice. Finally, the detailed Annex 2 on AI-related risks has been replaced with a concise list of AIS-specific information for the centralized inventory.
Implications for financial institutions
With the May 1, 2027, effective date now confirmed, institutions should begin their compliance efforts promptly. The streamlined, principles-based approach in the final Guideline offers institutions greater flexibility in implementation, but also requires them to exercise greater judgment in tailoring their frameworks.
Key practical priorities include: establishing or updating a centralized AIS inventory with appropriate risk ratings, reviewing governance structures to ensure clear accountability at the senior management and board levels, embedding pre-deployment risk assessments into AIS rollout procedures, and implementing processes for disclosing AI-generated content to clients and providing access to a human interlocutor.
Institutions that had begun aligning their practices with the 2025 consultation version should review their work against the finalized text, particularly with respect to the simplified risk classification criteria and the removal of formerly prescriptive governance subsections. The Guideline applies in addition to the AMF’s existing Model Risk Management Guideline, and institutions should ensure their AIS governance dovetails with their broader model risk frameworks.
Institutions should also take into account the AMF’s Third-Party Risk Management Guideline, which applies where AI systems or components are developed, supplied, hosted or operated by third parties. In practice, this requires ensuring that AIS-related outsourcing and procurement arrangements are subject to appropriate due diligence, contractual safeguards, and ongoing monitoring consistent with the institution’s third-party risk management framework.
Finally, the Office of the Superintendent of Financial Institutions’ Guideline E-23 on Model Risk Management, another key guideline addressing the use of artificial intelligence, will also come into effect on May 1, 2027. Guideline E-23, which is largely consistent with the Guideline, applies to all federally regulated financial institutions and establishes a comprehensive, enterprise-wide framework for managing model risk across all models used in decision making, risk assessment, and business operations, including but not limited to AI and machine learning systems. Both guidelines adopt a principles-based, risk-proportionate approach aligned with international standards.
