Written by Donna Bales in her capacity as strategic advisor for Compliance AI

In the last few years, we have seen the acceleration of RegTech: the use of emerging technology to improve the effectiveness and efficiency of compliance. We have watched technologies such as robotic process automation (RPA), machine learning and artificial intelligence (AI/ML), and blockchain be incorporated into the compliance workflow to reduce the burden and cost of compliance. AI/ML in particular has powered point solutions and software applications that make better predictions for routine tasks in areas such as fraud prevention and market surveillance. The use of AI/ML to make decisions, however, is still relatively nascent — and that is about to change.
The use of emerging technologies for more complex compliance workflows has been held back by legacy architecture and outdated processes, which have prevented system-wide change. I believe we are now at an inflection point that will reshape regulatory compliance. Five years ago, I would have differentiated FinTech and RegTech by saying that FinTech is a strategic enabler while RegTech is about improving productivity and reducing cost; now the lines are blurred. RegTech is graduating from a task-based function to an end-to-end, system-wide approach. There are a number of reasons for this transition.
Contribution from Clausematch - Written by: Claudia Coutinho-De Somma

You’ve heard the expression that the best customers get the best customer experience, right? Well, the saying holds true in the buying experience.
POV: You’ve received a sales email or call that has piqued your interest. You’ve agreed to a meeting, curious to learn more about the offering. Fast forward: the product you’ve now seen can effectively solve the challenges your team is facing and ultimately supports the business’s bigger objectives. You think, “I need to speak with other members of the team. I’ll reach back out once that’s done.” Inevitably, it slips through the cracks, and it now feels like you’re being chased by a vendor trying to understand how those internal conversations went. Feeling like a topic of the past, you send a quick “thanks, but we’re not interested,” or worse, no reply — and either way, what follows feels like endless sales follow-ups.

In the past 10 years, the rise of social media has given our society the benefits of sharing news immediately, along with the risks associated with how quickly that news can spread. Democracies around the world have warned of the use of social media by “bad actors” to spread misinformation, undermine governments and the electoral process, and sow discord. The effects of social media on mental health have been highlighted, particularly for youth, and the use of social media by authoritarian governments as a weapon to control citizens has been a source of concern. It’s not just governments that can feel the sting of social media. For organizations, the risks can include reputational risk, legal and employment risk, and information security risk; mistakes can ruin careers, damage corporate reputations and hit the bottom line. In today’s fast-moving world, organizations have little choice but to incorporate social media into their communications arsenal. How can organizations manage the risks associated with social media while taking advantage of the benefits?
So, the short answer to the question — “Can I fund polluting industries and still claim carbon neutrality?” — is yes. The long answer is yes, you can, provided you position it in the right way. For example: you purchase carbon offsets that “neutralize” the polluting emissions; you claim to be working toward carbon neutrality at some future date; you make ESG commitments that exclude upstream and downstream measurements; or you define very narrowly, in small print, what counts as neutrality. The reality isn’t particularly inspiring for those who truly want a move to a more sustainable and equitable economy. It becomes very difficult for an individual interested in investing in companies with a committed ESG strategy to identify and compare those companies in a meaningful way. There is no standard for reporting; each index and rating system is different, collects different data points and weighs ESG strategies using varying methods. In other words, it’s impossible to compare rating systems because you don’t know how they measure the outcome, and they rank companies differently on their indices. It also allows an organisation to cherry-pick the ratings that paint it in the best light.
Written by Paul Childerhose, Member of the Board

The month of June saw the Canadian federal government release Bill C-27, which includes several new proposed Acts that, if enacted into law, will dramatically enhance the rights of individuals and the protection of their privacy and their data. In addition, the Artificial Intelligence and Data Act (AIDA) is part of this major bill and introduces requirements designed to protect Canadians from the harms and biased outputs that AI systems are capable of generating. Impact assessments are mandated, and if an AI system is assessed as “high-impact”, there are further requirements, including public disclosure. Many of the applied uses for AI within financial institutions are bound to be considered high impact. The Act will almost certainly shape OSFI’s planned consultation on Model Risk Management (E-23), which had been anticipated for this year but has now been pushed out to March 2023, with final guidance planned for publication by the end of 2023 and target implementation by June 2024. One week after the introduction of Bill C-27, during the Collision Technology Conference in Toronto, Canada’s Minister of Innovation François-Philippe Champagne announced that the funding ($443MM) allocated in the 2021 budget for Phase 2 of the Pan-Canadian Artificial Intelligence Strategy will be focused on the commercialization and standardization of AI. Importantly, Phase 2 funding directs $8.6MM to the Standards Council of Canada “to advance the development and adoption of standards and a conformity assessment program related to AI.”

Bill C-27 - The Digital Charter Implementation Act
Most of the time, ESG reporting is associated with “climate change”, “sustainability” or “carbon emissions”, which gives the illusion that ESG exists to address climate change. The reality is that ESG is an organisational approach to managing the risks associated with not just an excellent or poor environmental record, but also how an organisation’s approach to business affects its internal effectiveness and potential for growth, the communities in which it operates and society as a whole. In a way, it is an ERM program for sustainability, diversity and inclusion, and societal good. If an organisation is lacking in one or all of those areas, adopting an ESG program will give management the impetus to look at the risks and opportunities, both internal and external, in all of them.

As mentioned in CRTA’s most recent publication on ESG, there are many challenges to implementing a good ESG program, but the most important ingredient of any good program is Board and Executive support. It requires an entirely new approach to strategic goals and to what has traditionally been considered “value”, particularly in publicly traded companies. The rise of ESG has come with a general recognition that a focus on immediate shareholder value and profit is a very narrow view of an organisation’s health and potential for future growth. In addition, as more shareholders have become “shareholder activists”, the pressure on Boards to implement meaningful ESG programs and changes has grown. As noted in the CRTA article “Challenges in Implementing ESG Programs”, two thirds of board members indicated that ESG is linked to company strategy, but only a quarter understood it well. Education and understanding of ESG must become an integral part of the organisation for a meaningful program to succeed.
An introduction from Matt Fowler, Board member of the CRTA

Following our November event, where I was privileged to host an engaging panel of experts representing a variety of industries, the team at Canadian RegTech has continued to partner with our member firm InvestNI (Invest Northern Ireland) and the various RegTech companies it represents. Here, as a follow-on to the session, Dr. Fiona Browne talks about the focus Datactics is putting on explainability and transparency, as well as the need to develop a strong MLOps (Machine Learning Operations) framework as the use of data and the associated advanced algorithmic techniques develops at pace.
Dr. Fiona Browne, Head of Software Development and ML at Datactics
Datactics develops market-leading data quality technology from its base in Belfast, Northern Ireland. Our 60-strong firm provides user-friendly solutions across all industries, particularly for banks and government departments who are saddled with very large, messy data, often spread across multiple platforms and silos, and a wide array of evolving regulations with which to demonstrate compliance. Over the last three years we have focused on augmenting our technology with machine learning and AI techniques. This approach is accelerating the automation of data quality operations and prediction, with full explainability.
Although we are at the nascent stages of production AI, there are green shoots of good practice across the MLOps environment, especially in the areas of fairness, explainability and transparency.
Fairness
For example, fairness metrics to measure potential bias in AI datasets have been proposed by the likes of Microsoft (AI Fairness Checklist) and IBM (AI Fairness 360). Based on these metrics, practical steps can be taken to address issues: rebalancing a dataset before training, penalising bias at the algorithmic level during training, or favouring a particular outcome in post-processing. A minimal sketch of two such metrics appears below.
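To make these metrics concrete, here is a minimal Python sketch — not drawn from either toolkit named above — computing two widely used dataset-level measures, statistical parity difference and the disparate impact ratio, on a small hypothetical approvals dataset. The records, the group labels and the choice of which group is treated as privileged are all assumptions for illustration.

```python
# Minimal sketch (not Datactics or IBM/Microsoft code) of two common
# dataset-level fairness metrics. All records below are hypothetical.

records = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

def favourable_rate(rows, group):
    """Share of records in `group` that received the favourable outcome."""
    members = [r for r in rows if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

rate_a = favourable_rate(records, "A")  # assumed privileged group
rate_b = favourable_rate(records, "B")  # assumed unprivileged group

# Statistical parity difference: 0.0 means perfectly balanced outcomes.
spd = rate_b - rate_a

# Disparate impact ratio: values below ~0.8 are a common red flag
# (the "four-fifths" rule of thumb from US employment guidance).
di = rate_b / rate_a

print(f"Statistical parity difference: {spd:+.2f}")
print(f"Disparate impact ratio:        {di:.2f}")
```

On this toy data the unprivileged group’s approval rate is half the privileged group’s (disparate impact 0.50), which would fail the four-fifths rule of thumb and flag the dataset for the kind of rebalancing or algorithm-level mitigation described above.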
ABOUT THE ROLE:
This role will require approximately 10 hours per week with flexibility to set the day(s) on which the work will be carried out. There is an opportunity for this position to expand into a more substantive role as the association continues to grow and the successful candidate demonstrates value add to the Membership, Advisors and Board.
You will act as a liaison between the association and our member and advisor communities.
Compliance in the cloud is fraught with myths and misconceptions. This is particularly true for something as broad as disaster recovery (DR) compliance, where the requirements are rarely prescriptive and are often based on legacy risk-mitigation techniques that don’t account for the exceptional resilience of modern cloud-based architectures. For regulated entities subject to principles-based supervision, such as many financial institutions (FIs), the responsibility lies with the FI to determine what is necessary to adequately recover from a disaster event. Without clear instructions, FIs are susceptible to making incorrect assumptions about their DR compliance requirements. In Part 1 of this two-part series, I provided some examples of common misconceptions FIs have about compliance requirements for disaster recovery in the cloud. In Part 2, I outline five steps you can take to avoid these misconceptions when architecting DR-compliant workloads for deployment on Amazon Web Services (AWS).

Authored by: Dan MacKay, FS Compliance Specialist, AWS
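The five steps themselves are covered in the full article. Purely as an illustration of one common cloud-native DR building block — not the author’s prescribed method — the sketch below uses boto3 to enable S3 cross-Region replication from a primary bucket to a recovery-Region bucket. All bucket names and the IAM role ARN are hypothetical placeholders, and the destination bucket is assumed to already exist in the recovery Region with versioning enabled.

```python
# Illustrative sketch only: S3 cross-Region replication as one DR building
# block. Bucket names and the IAM role ARN below are hypothetical.
import boto3

s3 = boto3.client("s3")

# Replication requires versioning on the source (and destination) bucket.
s3.put_bucket_versioning(
    Bucket="example-primary-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate new objects to a bucket in the recovery Region.
s3.put_bucket_replication(
    Bucket="example-primary-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-s3-replication-role",
        "Rules": [
            {
                "ID": "dr-replicate-all",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},  # empty filter = replicate the whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::example-dr-bucket"},
            }
        ],
    },
)
```

Replication alone is not a DR strategy; under principles-based supervision, the FI still has to map controls like this to its own recovery objectives.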