How to teach ethics to AI models

Stephen Thomas of Queen’s University addresses bias in AI

Removing bias from artificial intelligence (AI) models may seem as simple as removing demographic information from the data, but it may be more valuable instead to inform the model about the demographics, then weight them to offset bias. 
That’s one possibility studies have supported, Stephen Thomas tells Bank Automation News in this episode of “The Buzz.” Thomas is the head of the Analytics and Artificial Intelligence Ecosystem at the Smith School of Business at Queen’s University in Canada.
For instance, “If we tell the [AI] model what the gender is, it can account for that difference and bias,” Thomas explains. “It's helpful for the model to be less biased — knowing what the gender is — because otherwise, due to historical, societal prejudices, an average woman might look worse than an average male for no reason other than that she's a woman.”    
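Thomas does not name a specific algorithm, but one standard technique that matches this idea of informing the model about demographics and then weighting them is reweighing: each training example gets a weight that makes group membership statistically independent of the outcome, so the model is not penalized for historical imbalances. A minimal sketch (the function name and toy data are illustrative, not from the interview):

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Compute per-sample weights so group membership and outcome become
    statistically independent: weight(g, y) = P(g) * P(y) / P(g, y).
    This upweights group/outcome combinations that history made rare."""
    n = len(labels)
    group_counts = Counter(groups)              # counts per demographic group
    label_counts = Counter(labels)              # counts per outcome
    joint_counts = Counter(zip(groups, labels)) # counts per (group, outcome)
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: historically, "F" applicants were approved (label 1) less often.
groups = ["F", "F", "F", "M", "M", "M", "M", "M"]
labels = [0, 0, 1, 1, 1, 1, 0, 1]
weights = reweighing_weights(groups, labels)
# The lone approved "F" example receives the largest weight (1.875),
# offsetting the historical skew in the training data.
```

These weights can then be passed to most classifiers (e.g., via a `sample_weight` argument) so the model learns from a rebalanced view of the data.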
In today’s podcast, learn how ideas of fairness play into AI models and why what seems fair may be unfair to certain populations. Thomas also shares some best practices for creating ethical AI platforms. 

© Royal Media - 2021