Moderated by: Doron Telem, National Leader, Risk Consulting, KPMG
Are we ready for the power of AI? Is it still too complex, too perplexing, to be given agency and implemented in professional environments? We know it's already running in the background of our daily lives, collecting and analyzing, predicting and tailoring, but how do we decipher the mystery and realize its potential?
We are now producing a massive amount of data, making it increasingly difficult, and often unwise, to analyze and model using traditional approaches. We are at a decision point. The advent of artificial intelligence and machine learning presents us with tools of enormous potential for solving increasingly complex problems. It also presents us with unique risks, and understanding those risks is the first step towards better utilization. Three drivers of AI implementation – explainability, bias, and privacy – were recently discussed at a session presented by the Canadian Regulatory Technology Association (CRTA).
People only care about understanding the models that impact them – why I was denied a bank loan, or why I am in a high-risk insurance bracket – effectively the causal relationship between the input data and the output decision. But not all models can be interpreted. AI is best applied to inherently complex problems, yet the more we ask of a model – the more parameters we force it to address – the harder its reasoning is to trace. We should not ask the model to account for every point in a decision, but rather to define the major components of that decision: not each step on the journey, but the general direction it took to get there. Ultimately, someone needs to be accountable for the decisions made and able to trace errors in order to fine-tune the model. To that end, people like Paul Finlay (Silverhammer.ai) are building tools to interrogate models – to define the critical path so we can all more clearly understand how decisions are being made.
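One common way to interrogate a black-box model along these lines is permutation importance: shuffle one input at a time and measure how often the model's decisions change. The sketch below is illustrative only – the `permutation_importance` function and the toy loan model are assumptions for this example, not part of any tool named above.

```python
import random

def permutation_importance(model, rows, n_repeats=10, seed=0):
    """Rank input features by how often shuffling each one flips the
    model's decision -- a rough view of the 'critical path'."""
    rng = random.Random(seed)
    baseline = [model(row) for row in rows]
    importance = {}
    for f in range(len(rows[0])):
        flips = 0
        for _ in range(n_repeats):
            column = [row[f] for row in rows]
            rng.shuffle(column)
            shuffled = [row[:f] + (column[i],) + row[f + 1:]
                        for i, row in enumerate(rows)]
            flips += sum(s != b for s, b in
                         zip((model(r) for r in shuffled), baseline))
        importance[f] = flips / (n_repeats * len(rows))
    return importance

# Hypothetical 'loan model': approve when income > 50 and debt < 20;
# the third feature is ignored, so its importance should be 0.
model = lambda row: row[0] > 50 and row[1] < 20
applicants = [(30, 10, 1), (60, 5, 0), (80, 30, 1), (55, 10, 0)]
scores = permutation_importance(model, applicants)
```

Features whose shuffling never changes a decision score zero, which is exactly the kind of evidence a reviewer needs when tracing why a model denied a loan.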
Models should be designed to be fair – but what is fair? Data is inherently biased: human review and input are baked into every data set. So how do we mitigate and limit this bias? First, data sets need detailed specifications for each attribute – where it came from and what its intended use is. Second, we need to layer tools on top of our models to identify and reduce bias. The Fairlearn toolkit is one such tool, which data scientists use to measure, understand, and mitigate bias in their own models. Lastly, and somewhat ironically, the input parameters of a model should be reviewed and augmented manually to foresee and mitigate harm – a practice called 'harms modelling' – and this loop is itself supported by AI. The relationship becomes symbiotic: AI supports humans in looking for bias, and humans support the AI tools in turn.
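A concrete example of the kind of fairness metric such toolkits compute is the demographic parity difference: the gap between the highest and lowest selection rates across groups. This is a minimal hand-rolled sketch of the idea, not the Fairlearn API itself; the function name and data are illustrative.

```python
def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest selection rate across
    groups; 0.0 means every group is selected at the same rate."""
    counts = {}
    for pred, g in zip(y_pred, groups):
        n, k = counts.get(g, (0, 0))
        counts[g] = (n + 1, k + pred)
    rates = {g: k / n for g, (n, k) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: group "a" is approved 2/3 of the time, group "b" only 1/3
preds  = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
```

A large gap does not prove discrimination on its own, but it flags exactly where a data scientist should look next – the kind of human-plus-tool loop the session described.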
The question of where data comes from, and how to limit its bias, is quickly followed by questions of what it is used for and how it will be protected. Transparency, trust, and value to end users and stakeholders must be articulated clearly and effectively, or tools will remain in the lab rather than on the street. Technically, this trust can be built through the design of controls and safeguards: anonymization, built-in noise (as in differential privacy), and end-to-end encryption are all relatively well-understood problems to solve.
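The "built-in noise" idea can be made concrete with the Laplace mechanism from differential privacy: a released statistic is perturbed by noise calibrated to a privacy parameter epsilon. The sketch below is a minimal illustration under that assumption – `noisy_count` and the epsilon value are hypothetical, not from any specific product.

```python
import math
import random

def noisy_count(true_count, epsilon, rng=None):
    """Release a count with Laplace noise calibrated to epsilon.
    A counting query has sensitivity 1, so the noise scale is
    1/epsilon; smaller epsilon means more noise, stronger privacy."""
    rng = rng or random.Random()
    scale = 1.0 / epsilon
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    # Inverse-CDF sample from the Laplace(0, scale) distribution
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# e.g. publish how many customers were flagged, without revealing
# whether any single individual is in the flagged set
published = noisy_count(100, epsilon=1.0)
```

Each individual release is deliberately imprecise, yet aggregate statistics stay useful – the noise averages out over many queries, which is the trade-off regulators and end users need explained to them.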
Ultimately, developing AI tools is a fine balance. The less we intervene, the more effective the tool can be – but without addressing explainability, bias and privacy at the start, these tools will remain outside the public realm of transparency and trust. Following the session, it’s clear that what’s needed in this new age is to deepen our collective understanding through communication. To realize the possibility of machine learning and artificial intelligence, we need to discuss novel concerns with a cohort of technology and regulatory bodies.
Author: Paul McRorie, MSc, Director, Wholesale Bank Technology, Canadian FI
This session will be available for replay until May 31, 2020. The CRTA also published a white paper on this topic, which is available for download at www.canadianregtech.ca.