The use of non-traditional models such as generative AI (GenAI) and agentic AI has been gaining traction within Canadian financial institutions (FIs). Many third-party suppliers are also experimenting with AI and embedding it into their products and services, but because these suppliers are not directly subject to regulatory requirements, there is a lack of understanding of the level of risk management and transparency required.
Unlike the European Union, Canada does not have regulation that oversees third-party suppliers of AI models/systems. OSFI's model risk management guideline, E-23, was updated to account for these new risks and is currently in consultation; however, the oversight of the risk posed by third-party suppliers remains with the FIs. The E-23 guideline makes clear that external models/systems are subject to the same requirements as internal models/systems, yet the arm's-length relationship with external providers can make oversight and ongoing monitoring challenging. When the forum gathered in early 2024, the lack of third-party risk assurance was raised as a significant challenge. Specifically, Canadian financial institutions' ability to innovate with AI while adequately managing risk is hampered by these challenges:
Third-party suppliers of AI models/systems are not subject to regulatory oversight in Canada.
In the EU, by contrast, providers of foundation models are subject to requirements before they can place a model on the market or into a service, and must undergo a conformity assessment by an accredited third party.
Third-party suppliers are increasingly embedding AI into their products and services, yet FIs have very little influence over these suppliers.
Suppliers of AI models/systems provide very little documentation on how their systems (models) work.
Suppliers of AI models/systems provide very little assurance on the quality of the vendor data feeding their models, or on the robustness of testing and continuous monitoring.
Over the course of many months, members of CRTA's AI Forum reached consensus on a set of minimum expectations for third-party suppliers. OSFI's E-23 guideline was used as a baseline, but we took a holistic, risk-based approach and considered other relevant risk guidelines in our work. The resulting output is a set of minimum expectations that mirror internal requirements and address risks along the model risk life cycle. This framework should help improve the mutual understanding of the controls needed to effectively manage AI risks, improve the transparency of ongoing monitoring of AI models/systems, and shorten contract processes.
