My lifelong passion has been to bring a more equitable future to underrepresented populations regardless of borders. FAIRLY’s mission is simply an extension of that, hopefully with a bigger impact at scale. Having lived in 5 different countries in the last 10 years and led technology projects at Fortune 500 companies, I have witnessed how technology can improve lives, but also how it can create more inequity if the proper guardrails are not in place. My co-founder, David Van Bruwaene, came to the same conclusion via a different path. He was a practicing data scientist at another startup, using Natural Language Processing to detect cyberbullying on social media. He later became the CEO and a board member of that company. It was through this bottom-to-top journey that he really saw the need for a better way to manage both the financial and ethical risks of AI.
Who are your clients and what types of challenges are they facing?
FAIRLY was in Accenture’s FinTech Innovation Lab based in London, UK in early 2021. Through the lab and additional conversations with banks in the U.S. and Canada, we were able to validate and confirm the challenges faced by global heads of AI, heads of model risk and internal audit at 16 tier-1 banks. Their three major challenges are: first, the speed of AI innovation is outpacing Governance, Risk and Compliance capabilities (e.g., model validation) at most banks; second, increased AI risk is leading to more regulations, which add to backlogs; and third, a shortage of IT professionals with the requisite skillsets to meet regulatory changes happening at an unprecedented speed.
What type of emerging technology do you think will have the greatest impact on your business/industry?
Many major banks in the U.S., U.K. and Canada have migrated, or are migrating, to the cloud with the strategic goal of doing more AI/ML. COVID has certainly accelerated this trend. However, there are unintentional risks associated with the use of AI models, which FAIRLY is passionate about minimizing through its automation solution. These risks can be identified within the AI model governance structure and, through repeatable identification, minimized during model development. Such risks include: unidentified human bias embedded in the design of the AI technology; human logic errors; ethically questionable model predictions due to insufficient testing and oversight; reputational and financial harm; failure to realize value from expensive AI projects due to under-performing and poorly understood AI models; falling behind the competition; and the risk that the models developed don’t yield an acceptable ROI, which could reduce future AI model funding.
Why did you join the CRTA? How do you think the CRTA will help your business (industry)?
CRTA has been an important partner for FAIRLY AI since 2020. Through CRTA, we have made many successful introductions to strategic partners and customers.