Bill C-27 - The Digital Charter Implementation Act

Written by Paul Childerhose, Member of the Board

The month of June saw the Canadian federal government release Bill C-27, which includes several newly proposed Acts that, if enacted into law, will dramatically enhance the rights of individuals and the protection of their privacy and their data. The Artificial Intelligence and Data Act (AIDA), part of this major bill, introduces requirements designed to protect Canadians from the harms and biased outputs that AI systems are capable of generating. Impact assessments are mandated, and if an AI system is assessed as “high-impact”, further requirements apply, including public disclosure. Many of the applied uses for AI within financial institutions are bound to be considered high-impact.

The Act will most certainly have a heavy influence in shaping OSFI’s planned consultation on Model Risk Management (E-23), which had been anticipated for this year but has now been pushed out to March 2023, with final guidance planned for publication by the end of 2023 and target implementation by June 2024.

One week after the introduction of Bill C-27, during the Collision Technology Conference in Toronto, Canada’s Minister of Innovation François-Philippe Champagne announced that the funding ($443MM) allocated in the 2021 budget for Phase-2 of the Pan-Canadian Artificial Intelligence Strategy will be focused on the commercialization and standardization of AI. Importantly, Phase-2 funding directs $8.6MM to the Standards Council of Canada “to advance the development and adoption of standards and a conformity assessment program related to AI.”
Most of the time, ESG reporting is associated with “Climate Change”, “Sustainability” or “Carbon Emissions”, which gives the illusion that ESG exists solely to address climate change. The reality is that ESG is an organisational approach to managing the risks associated with not just an excellent or poor environmental record, but also how an organisation’s approach to business affects its internal effectiveness and potential for growth, the communities in which it operates, and society as a whole. In a way, it is an ERM program for sustainability, diversity and inclusion, and societal good. If an organisation is lacking in one or all of those areas, adopting an ESG program will provide the impetus for management to look at the risks and opportunities, both internally and externally, in all of them.

As mentioned in CRTA’s most recent publication on ESG, there are many challenges to implementing a good ESG program, but the most important aspect of any good program is Board and Executive support. It requires an entirely new approach to strategic goals and to what has traditionally been considered “value”, particularly in publicly traded companies. The rise of ESG has come with a general recognition that a focus on immediate shareholder value and profit is a very narrow view of an organisation’s health and potential for future growth. In addition, as more shareholders became “shareholder activists”, the pressure on Boards to implement meaningful ESG programs and changes grew. As noted in the CRTA article “Challenges in Implementing ESG Programs”, two thirds of board members indicated that ESG is linked to company strategy, but only a quarter understood it well. Education and understanding of ESG must become an integral part of the organisation for a meaningful program to be successful.
An introduction from Matt Fowler, Board member of the CRTA

Following our November event, where I was privileged to host an engaging panel of experts representing a variety of industries, the team at Canadian RegTech have continued to partner with our member firm InvestNI (Investment Northern Ireland) as well as the various RegTech companies they represent. Here, as a follow-on to the session, Dr. Fiona Browne talks about the focus that Datactics is putting on explainability and transparency, as well as the need to develop a strong MLOps (Machine Learning Operations) framework as the use of data and its associated advanced algorithmic techniques develops at pace.
Dr. Fiona Browne, Head of Software Development and ML at Datactics
Datactics develops market-leading data quality technology from its base in Belfast, Northern Ireland. Our 60-strong firm provides user-friendly solutions across all industries, particularly for banks and government departments who are saddled with very large, messy data often in multiple platforms and silos and have a wide array of evolving regulations to demonstrate compliance with. In the last three years we have focused on augmenting our technology with machine learning and AI techniques. This approach is accelerating the level of data quality operations automation and prediction, with full explainability.
Although we are at the nascent stages of production AI, there are green shoots of good practice across the MLOps environment, especially in the areas of fairness, explainability and transparency.
Fairness
For example, definitions of fairness metrics to measure potential bias in AI datasets have been proposed by the likes of Microsoft (AI Fairness Checklist) and IBM (AI Fairness 360). Based on these metrics, practical steps can be taken to address issues: balancing a dataset before training, penalising bias at the algorithmic level, or favouring a particular outcome in post-processing.
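As a minimal sketch of what such a metric looks like in practice, the snippet below computes the demographic-parity difference, a common fairness measure: the gap in positive-prediction rates between two groups. This is a generic illustration in plain Python, not Datactics code and not the AI Fairness 360 API; the function name and sample data are invented for the example.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between group 0 and group 1.

    predictions: list of 0/1 model outputs
    groups: list of 0/1 group labels, aligned with predictions
    A value near 0 suggests parity; larger values indicate potential bias.
    """
    rate = {}
    for g in (0, 1):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate[0] - rate[1])


# Hypothetical model output: 75% approval for group 0, 25% for group 1
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, grps))  # 0.5
```

A practitioner would typically track a metric like this on held-out data; the mitigation steps mentioned above (rebalancing, in-training penalties, post-processing) are then judged by how far they move it toward zero.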
ABOUT THE ROLE:
This role will require approximately 10 hours per week with flexibility to set the day(s) on which the work will be carried out. There is an opportunity for this position to expand into a more substantive role as the association continues to grow and the successful candidate demonstrates value add to the Membership, Advisors and Board.
You will act as a liaison between the association and our member and advisor communities.
Compliance in the cloud is fraught with myths and misconceptions. This is particularly true when it comes to something as broad as disaster recovery (DR) compliance, where the requirements are rarely prescriptive and often based on legacy risk-mitigation techniques that don’t account for the exceptional resilience of modern cloud-based architectures. For regulated entities subject to principles-based supervision, such as many financial institutions (FIs), the responsibility lies with the FI to determine what’s necessary to adequately recover from a disaster event. Without clear instructions, FIs are susceptible to making incorrect assumptions regarding their compliance requirements for DR. In Part 1 of this two-part series, I provided some examples of common misconceptions FIs have about compliance requirements for disaster recovery in the cloud. In Part 2, I outline five steps you can take to avoid these misconceptions when architecting DR-compliant workloads for deployment on Amazon Web Services (AWS).

Authored by: Dan MacKay, FS Compliance Specialist, AWS
Compliance in the cloud can seem challenging, especially for organizations in heavily regulated sectors such as financial services. Regulated financial institutions (FIs) must comply with laws and regulations (often in multiple jurisdictions), global security standards, their own corporate policies, and even contractual obligations with their customers and counterparties. These various compliance requirements may impose constraints on how their workloads can be architected for the cloud, and may require interpretation on what FIs must do in order to be compliant. It’s common for FIs to make assumptions regarding their compliance requirements, which can result in unnecessary costs and increased complexity, and might not align with their strategic objectives. A modern, rationalized approach to compliance can help FIs avoid imposing unnecessary constraints while meeting their mandatory requirements.

In my role as an Amazon Web Services (AWS) Compliance Specialist, I work with our financial services customers to identify, assess, and determine solutions to address their compliance requirements as they move to the cloud. One of the most common challenges customers ask me about is how to comply with disaster recovery (DR) requirements for workloads they plan to run in the cloud. In this blog post, I share some of the typical misconceptions FIs have about DR compliance in the cloud. In Part 2, I outline a structured approach to designing compliant architectures for your DR workloads. As my primary market is Canada, the examples in this blog post largely pertain to FIs operating in Canada, but the principles and best practices are relevant to regulated organizations in any country.

Author: Dan MacKay, FS Compliance Specialist, AWS
Sionic opinion piece that takes a look at FINTRAC’s new regulations for real estate and cryptocurrency, and how to protect your firm and instill confidence through an effective AML/ATF program.

By: Tara Rodgers, Director, Sionic

Global financial institutions face an immense challenge in ensuring that they operate in compliance with a multitude of jurisdiction-specific privacy regulations as they pertain to data access and data usage by their internal employees.
Understanding the regulations pertaining to data privacy and usage and how they intersect with AML programs is something that our Canadian RegTech Association member firm Arctic Intelligence has invested heavily in.
We extend our congratulations to Darren Cade, Rose Davitt and colleagues on being recognized by A-Team Group as ‘Most Innovative Data Privacy Project by Design’.