FAIRNESS, EXPLAINABILITY AND TRANSPARENCY: WHAT’S THE LATEST ON AI IN REGTECH?

Date: April 6, 2022.


Written by Matt Fowler, Former Board Member of CRTA


Following our November event, where I was privileged to host an engaging panel of experts representing a variety of industries, the team at Canadian RegTech has continued to partner with our member firm InvestNI (Invest Northern Ireland) as well as the various RegTech companies it represents. Here, as a follow-on to the session, Dr. Fiona Browne talks about the focus Datactics is putting on explainability and transparency, as well as the need to develop a strong MLOps (Machine Learning Operations) framework, as the use of data and its associated advanced algorithmic techniques develops at pace.


Dr. Fiona Browne, Head of Software Development and ML at Datactics

Datactics develops market-leading data quality technology from its base in Belfast, Northern Ireland. Our 60-strong firm provides user-friendly solutions across all industries, particularly for banks and government departments that are saddled with very large, messy data, often spread across multiple platforms and silos, and that must demonstrate compliance with a wide array of evolving regulations. In the last three years we have focused on augmenting our technology with machine learning and AI techniques. This approach is accelerating the automation and prediction of data quality operations, with full explainability. Although we are at the nascent stages of production AI, there are green shoots of good practice across the MLOps environment, especially in the areas of fairness, explainability and transparency.


Fairness  

For example, definitions of fairness metrics to measure potential bias in AI datasets have been proposed by the likes of Microsoft (AI Fairness Checklist) and IBM (AI Fairness 360). Based on these metrics, practical steps can be taken to address issues, such as balancing a dataset before training, penalising bias at the algorithmic level, or favouring a particular outcome in post-processing.
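As a minimal illustration of what such a metric measures, the sketch below computes demographic parity difference, one common fairness metric: the gap in favourable-outcome rates between two groups. The function name and data are hypothetical, not drawn from the Microsoft or IBM toolkits themselves.

```python
# Minimal sketch: demographic parity difference, one common fairness metric.
# Assumes binary predictions (1 = favourable outcome) and exactly two groups.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-prediction rates between two groups.
    A value near 0 suggests the model treats both groups similarly."""
    rate = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(selected) / len(selected)
    a, b = sorted(rate)
    return rate[a] - rate[b]

# Hypothetical predictions and group memberships
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A large gap like this is the kind of signal that would prompt the mitigation steps above, whether rebalancing the data or adjusting outcomes after prediction.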


Explainability   

Similarly, the area of Explainable AI (XAI) is growing. Here we use XAI techniques to help explain the predictions made by a ‘classifier’. Classifiers are rules used by machines to classify data: a type of machine learning algorithm which automatically orders, labels, or categorizes data into one or more of a set of ‘classes’, for example labelling mail as “spam” or “not spam.” There are both supervised and unsupervised classifiers. Supervised classifiers are fed training datasets from which they learn to classify new data according to how the training data has been labelled (usually by humans). Unsupervised classifiers detect similarities, patterns, differences and anomalies in data and label them accordingly, depending on the type of classifier used.
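To make the spam example concrete, here is a toy supervised classifier, purely illustrative and far simpler than anything used in production. It learns word counts from a handful of human-labelled messages, then assigns a new message the label whose training vocabulary it overlaps with most:

```python
# Toy sketch of a supervised "spam"/"not spam" classifier (hypothetical data).
from collections import Counter

def train(messages, labels):
    # Count how often each word appears under each label.
    counts = {"spam": Counter(), "not spam": Counter()}
    for text, label in zip(messages, labels):
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    # Score each label by overlap with its training vocabulary.
    scores = {label: sum(c[w] for w in text.lower().split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train(
    ["win a free prize now", "free cash win big",
     "meeting agenda attached", "see project notes"],
    ["spam", "spam", "not spam", "not spam"],
)
print(classify(model, "free prize inside"))       # spam
print(classify(model, "notes from the meeting"))  # not spam
```

The training step is the "fed training datasets" part; the classify step is the ordering of new data into classes.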


One example of XAI used by Datactics is Local Interpretable Model-agnostic Explanations (LIME), which provides a measure of feature importance; that is, it explains which features in the data have driven a prediction, and to what extent. The type of explanation will differ depending on the audience, and different processes may be needed to provide an explanation at an internal, data-scientist level compared with an external, client-facing level.
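The real LIME algorithm fits a local surrogate model on perturbed samples around the prediction being explained; the sketch below is a deliberately cruder occlusion-style measure in the same spirit (not the lime library, and not Datactics' implementation): it reports how much the model's score moves when each feature is zeroed out. The model and data are hypothetical.

```python
# Simplified sketch of per-feature importance in the spirit of LIME.
# Real LIME fits a local linear surrogate on perturbed samples; here we
# just measure the prediction's sensitivity to zeroing each feature.

def predict(features):
    # Hypothetical linear classifier score
    weights = [0.6, -0.2, 0.1]
    return sum(w * f for w, f in zip(weights, features))

def feature_importance(predict_fn, x):
    base = predict_fn(x)
    importance = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = 0.0  # occlude one feature at a time
        importance.append(base - predict_fn(perturbed))
    return importance

x = [1.0, 2.0, 3.0]
print(feature_importance(predict, x))  # approximately [0.6, -0.4, 0.3]
```

The output is the kind of ranked feature attribution that can then be phrased differently for a data scientist (raw weights) versus a client (plain-language reasons).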


Transparency  

Transparency can be embedded into the process using data and model cards. These provide a structured framework for reporting an AI model’s provenance, usage, and ethics-informed evaluation, giving a detailed overview of a model’s suggested uses and limitations. This could be extended to the data side, providing a traffic-light visualisation of the quality of the data used to train a model. In turn, this could highlight metadata such as the data’s provenance, the consent sought, compliance with regulations, and quality and fairness metrics.
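A minimal sketch of what such a combined model/data card might look like as a structured record follows. All field names and values are hypothetical, chosen to mirror the metadata mentioned above (provenance, consent, regulations, traffic-light quality rating, fairness metrics); real model cards follow richer published templates.

```python
# Sketch of a model card with an embedded data card (hypothetical fields).
from dataclasses import dataclass

@dataclass
class DataCard:
    provenance: str
    consent_obtained: bool
    regulations: list
    quality_rating: str  # traffic-light: "green", "amber" or "red"
    fairness_metrics: dict

@dataclass
class ModelCard:
    name: str
    suggested_uses: list
    limitations: list
    data: DataCard

card = ModelCard(
    name="entity-matching-classifier",
    suggested_uses=["deduplicating customer records"],
    limitations=["not evaluated on non-Latin scripts"],
    data=DataCard(
        provenance="internal CRM extract, 2021",
        consent_obtained=True,
        regulations=["GDPR"],
        quality_rating="amber",
        fairness_metrics={"demographic_parity_difference": 0.04},
    ),
)
print(card.data.quality_rating)  # amber
```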


Datactics is from Northern Ireland, the region of the UK uniquely located on the island of Ireland, and a centre of excellence for FinTech and RegTech. The region is home to a thriving cluster of companies creating technology trusted by the world's leading financial and professional services firms. Their products and services improve regulatory compliance, risk management, trading, fund administration, client services, investment operations, payments, cyber security and analytics.  


Dr Fiona Browne was one of the industry experts who contributed to the CRTA paper: Moving Beyond Principles – Addressing AI Operational Challenges. She participated on a panel on this subject at our annual event: Next Evolution of AI Adoption. For more information on Datactics, reach out directly by connecting with Brendan McCarthy of Datactics to set up a demo and see their market-leading data quality technology first hand. In closing, as the drive for enhanced transparency and security in global financial services continues, look to Northern Ireland for the latest FinTech and RegTech solutions. Our technology helps power the markets. Michael Barton from Invest Northern Ireland can connect you with the right technology partner for your company from a network of world-class partners.
