Earlier this year, the CRTA published a white paper on Responsible AI: Safeguarding AI Use Through Human-Centred Design. Its focus was on how to design responsible AI systems, and it explored different types of AI risk. We called on experts from our member community to share their experiences, provide insights on current thinking, and offer practical guidance on how to build an AI system responsibly. We heard from our members and readers that they wanted to learn more about bias risk. This podcast takes a closer look at bias risk and offers perspectives on the future of model development.
Donna Bales, co-founder of the CRTA and lead author of the paper, leads a discussion with Vishal Gossain, VP AML/ATF Analytics at Scotiabank, and Paul Finlay, Machine Learning Lead at Silverhammer.ai.
Join Myron Mallia-Dare, strategic advisor to the CRTA and technology lawyer at Miller Thomson, as he leads a discussion with Benjamin Jacob, Partner, Automation, IBM, and Patrick Morrison, MBA, StereoLogic.
RPA holds great promise as a medium for process improvement and improved return on investment (ROI), introducing automation to routine, highly repetitive tasks. RPA's simplicity is one of its greatest strengths: it can be used to orchestrate workflows, integrate with your business rules and decisions, manage your content, and capture data. However, implementing RPA can be challenging when the excitement to get started outpaces planning, before the business processes, the impact on the business, and the critically important ROI case for the investment are fully understood. This podcast discusses:
1. A Deep Dive into Bias Risks in AI Models