We are pleased to publish our most recent thought leadership paper: Moving Beyond Principles - Addressing AI Operational Challenges. It explores the current state of artificial intelligence and machine learning (AI/ML) adoption, examining how financial institutions are using AI/ML and the different governance approaches they are taking to manage AI/ML risks.
The paper delves into three challenges that organizations face when deploying AI/ML and brings together the perspectives of practitioners, academics, and technologists, who share their experience, provide insights into current thinking, and offer practical guidance on how to deploy AI/ML effectively and safely.
Commenting on the paper, Adam Leon Smith, Chief Technology Officer at Dragonfly, a UK- and Europe-based AI technology and testing consultancy, stated:
"Although the financial services industry has significant experience in managing model risk, the risks and controls needed for AI systems are fundamentally different. The regulatory drivers are also evolving from a focus on financial risk, to wider societal risks."
This follows our recent panel session, The Next Evolution of AI Adoption, held at our annual event on November 16, 2021. Commenting on the paper, Matt Fowler, CRTA Board Member, stated: “As we discussed at our November event the advances in deployment of AI based models within the financial services industry, and beyond, is at a critical point. The principles that so many regulators, Banks and others drafted three years ago are an excellent foundation, but the time has come for those to become practices. The need for practical approaches to governance and oversight are now required and this paper explores how those will become a reality.”
A special thank you to our expert contributors, who volunteered their time and shared their insights and perspectives:
Natalia Bailey, formerly Policy Advisor, Institute of International Finance (IIF); Dr. Fiona Browne, Head of Software Development and Machine Learning, Datactics; Stephanie Kelley, PhD Candidate, Management Analytics, Smith School of Business; David Van Bruwaene, CEO, Fairly.AI; Adam Leon Smith, Chief Technology Officer, Dragonfly; Steve Sweetman, Principal Program Manager, Responsible AI Engineering, Microsoft; Patricia Thaine, CEO and Founder, Private AI
This paper begins where Safeguarding AI Use Through Human-Centred Design, the first white paper issued by the Canadian Regulatory Technology Association (CRTA) in May 2020, leaves off. The initial paper examined the risks introduced by artificial intelligence (AI) systems and how they are being managed during the product development life cycle.
Press Release issued on February 1, 2022