Canadian RegTech Association

Event Summary: A Look into the Future – Emerging Trends and the Use of AI for Model Risk Management

12/2/2019

On November 27, 2019, the Canadian Regulatory Technology Association (CRTA) (Association canadienne de la technologie réglementaire (ACTR)) and KPMG co-hosted A Look into the Future – Emerging Trends and the Use of AI for Model Risk Management (MRM). Many thanks to KPMG for the ideal space, refreshments and hospitality.
The morning opened with a presentation by Mike McCausland of KPMG on the firm's recent Global Model Risk Survey[1] of domestic and global banks, which uncovered some very interesting (and perhaps concerning) trends, including that:
  • regulatory supervision of MRM has increased in recent years, but only a third of banks have been through an MRM exam, and most of those exams were very recent. Banks across the globe align their model definitions with SR 11-7, but interpretations vary, so clearer guidance is needed
  • SAS is the most common programming language for model development, but R and Python are becoming more prevalent, and C++ is still used within the capital markets area of many large banks
  • only a third of respondents have established a model risk appetite and associated parameters
  • only half responded that they are actively investing in automated solutions or new technologies to improve model risk management

Carl Barrelet of KPMG followed with a presentation on the essential elements of a governance framework for the use and validation of AI models, pointing out that OSFI’s primary concern – which is shared by Canadian banks – is legal and reputational risk.
We then enjoyed case-study presentations from two vendors with products and services that are relevant to and useful for MRM. First, Paul Finlay presented Xanadu.ai's "Silver Hammer" – a toolkit for operationalizing model validation that is modeled after scikit-learn. Paul pointed out that model development is often completely decoupled from validation, in large part because developers are not aware of the validation standards, which undoubtedly results in tremendous inefficiencies. Silver Hammer enables firms to manage the development and validation processes with clear accountability and traceability.
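Silver Hammer itself wasn't demonstrated in code, but the underlying idea (make the validation standard visible to the developer and record every check in a traceable way) can be illustrated with a minimal sketch in scikit-learn, the library the toolkit is modeled after. Everything below, from the toy model to the hypothetical ValidationRecord class and its 0.75 threshold, is invented for illustration and is not part of the product:

```python
from dataclasses import dataclass

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


@dataclass
class ValidationRecord:
    """Hypothetical record tying a model version to the standard it was tested against."""
    model_name: str
    metric: str
    threshold: float
    observed: float

    @property
    def passed(self) -> bool:
        return self.observed >= self.threshold


# Develop a toy model on synthetic data.
X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Validate it against a standard the developer can see up front, and keep the record.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
record = ValidationRecord("toy_pd_model_v1", "ROC AUC", threshold=0.75, observed=auc)
print(f"{record.model_name}: {record.metric}={record.observed:.3f} "
      f"(threshold {record.threshold}) -> {'PASS' if record.passed else 'FAIL'}")
```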
Next, Tony Bethell presented Cluster Seven’s services, which help firms identify, manage and monitor “shadow” IT – applications implemented and managed by end-users rather than the corporate IT function.  Tony shared a “spider diagram” of one firm’s data sources for a particular model.  While the firm thought there were a few dozen input files, Cluster Seven uncovered several hundred!  Shadow IT is one of the biggest threats to a firm’s reputation, and a major contributor to legal risk.
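Cluster Seven's technology wasn't described at code level, but the general technique of surfacing hidden inputs can be sketched by scanning spreadsheet formulas for Excel's external-workbook reference syntax. The workbook names and formulas below are made up for the example:

```python
# Toy illustration only: scan spreadsheet formulas for Excel's external-workbook
# reference syntax ([Workbook.xlsx]Sheet!Cell) to count a model's hidden inputs.
import re
from collections import defaultdict

# Hypothetical formulas pulled from a model's feeder spreadsheet.
formulas = {
    "CapitalModel!B2": "=[RatesFeed_2019.xlsx]Curves!$C$4 * 1.02",
    "CapitalModel!B3": "=SUM([DeskPnL.xlsx]Daily!A1:A250)",
    "CapitalModel!B4": "=B2 + [RatesFeed_2019.xlsx]Curves!$C$5",
}

EXTERNAL_REF = re.compile(r"\[([^\]]+\.xls[xmb]?)\]")

dependencies = defaultdict(set)
for cell, formula in formulas.items():
    for workbook in EXTERNAL_REF.findall(formula):
        dependencies[workbook].add(cell)

for workbook, cells in sorted(dependencies.items()):
    print(f"{workbook} feeds {len(cells)} cell(s): {sorted(cells)}")
# Even this toy scan surfaces two external inputs; at enterprise scale the
# count can run into the hundreds, which is the gap the spider diagram exposed.
```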
The morning ended with a thought-provoking discussion, moderated by Craig Davis of KPMG, among experts from across the industry: Manuel Morales (National Bank and the University of Montreal), Carl Barrelet (KPMG), Paul Finlay (Xanadu.ai) and Dina Duhon (BMO Financial Group). The panelists agreed that the greatest risks in MRM are data quality and bias, privacy, and governance. Bias is inherent in any data set; understanding it from the outset is critical to the development of low-risk models. They also concurred on the importance of explainability: not every model needs to be explained, but for most, a firm's inability to explain to users or customers why a model produced its results can lead to serious consequences. Paul Finlay commented that the hype around AI leads to solutions that are overengineered beyond the needs of the problem, which itself creates explainability issues.
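As a purely illustrative take on the panel's point about bias, a pre-modelling check might simply compare historical outcome rates across a segment the business treats as sensitive before any model is trained. The data, segment names and 5% tolerance below are invented for the example:

```python
import numpy as np

# Invent a historical approval dataset that is deliberately skewed by segment.
rng = np.random.default_rng(0)
group = rng.choice(["segment_a", "segment_b"], size=5_000, p=[0.7, 0.3])
approved = np.where(group == "segment_a",
                    rng.random(5_000) < 0.62,   # higher historical approval rate
                    rng.random(5_000) < 0.48)   # lower historical approval rate

# Compare outcome rates by segment before any model is trained on this data.
rates = {g: approved[group == g].mean() for g in np.unique(group)}
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2%}")
if gap > 0.05:  # illustrative tolerance, not a regulatory number
    print("Outcome gap exceeds tolerance: understand the skew before modelling.")
```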
To bring the discussion back to practical examples, the panel cited some interesting use cases for AI, such as:
  • moving away from traditional rules by applying neural networks to detect patterns in credit card data
  • using natural language processing (NLP) with unstructured data to analyze retail risk (e.g. income verification using credit card transactions)
  • applying machine learning and deep learning to see patterns of market risk and yield the same results as Black-Scholes models – faster and more efficiently (see the sketch after this list)
  • using NLP to read documents, such as LIBOR contracts that need to be amended
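For the Black-Scholes use case above, one way to picture the approach is to generate option prices from the closed-form formula and train a small neural network to reproduce them. The network architecture, parameter ranges and sample size below are illustrative choices, not details presented at the event:

```python
import numpy as np
from scipy.stats import norm
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)


# Generate training data by sampling option parameters and pricing them
# with the closed-form formula (ranges are arbitrary illustrative choices).
rng = np.random.default_rng(42)
n = 10_000
S = rng.uniform(50, 150, n)        # spot price
K = rng.uniform(50, 150, n)        # strike
T = rng.uniform(0.1, 2.0, n)       # time to maturity in years
r = rng.uniform(0.0, 0.05, n)      # risk-free rate
sigma = rng.uniform(0.1, 0.5, n)   # volatility

X = np.column_stack([S, K, T, r, sigma])
y = black_scholes_call(S, K, T, r, sigma)

# Fit a small neural network to mimic the pricing function.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
)
net.fit(X_train, y_train)

# Compare the network's prices with the analytical ones on held-out data.
errors = np.abs(net.predict(X_test) - y_test)
print(f"mean abs pricing error: {errors.mean():.3f} (max {errors.max():.3f})")
```

Once trained, the network prices a batch of options in a single vectorized call, which is where the speed and efficiency gains the panel referred to would come from.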
Dina Duhon summed it all up: AI for model risk management is both art and science, and practitioners should keep their eye on the prize – solving the most pressing business problems where AI is truly needed.
By 10:30, attendees – representing financial institutions, RegTech firms, regulators and others – were back at their desks with plenty of food for thought.

[1] The survey was conducted between June 2019 and August 2019 among model risk management executives from 48 significant banks representing 16 countries/regions/jurisdictions globally, including 5 North American banks.
Written by: Wendy Rudd, Member of the Board, CRTA