How banks can de-bias models to stay ahead of potential AI regulations
The most common data mistake companies make when de-biasing artificial intelligence models is removing obvious indicators such as race, gender and age, then assuming they have eliminated bias, David Van Bruwaene tells Bank Automation News in this episode of “The Buzz” podcast.
Van Bruwaene is the founder and CEO of Canada-based Fairly.AI, which provides an artificial intelligence (AI) governance, risk and compliance solution for automating model risk management in financial services and other industries.
Removing those indicators only makes bias harder to detect, Van Bruwaene says: without the identifiers, there is no way to measure bias in a model’s outputs, and therefore no opportunity to adjust the algorithm to correct for it.
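This point can be illustrated with a small, purely hypothetical sketch (the data, the “part-time” proxy feature, and the toy approval rule below are all invented for illustration and are not from Fairly.AI): even when a model never sees gender, a correlated proxy feature can reproduce the disparity, and the disparity can only be audited if the protected attribute was retained.

```python
import random

random.seed(0)

# Hypothetical synthetic applicants: 'gender' is the protected attribute,
# 'part_time' is a proxy feature that happens to correlate with it.
applicants = []
for _ in range(1000):
    gender = random.choice(["A", "B"])
    # Proxy correlation: group B applicants are more often part-time.
    part_time = random.random() < (0.7 if gender == "B" else 0.2)
    applicants.append({"gender": gender, "part_time": part_time})

# A "de-biased" model that never sees gender -- it uses only the proxy.
def approve(applicant):
    return not applicant["part_time"]

# Auditing is only possible because the protected attribute was kept.
def approval_rate(group):
    members = [a for a in applicants if a["gender"] == group]
    return sum(approve(a) for a in members) / len(members)

gap = abs(approval_rate("A") - approval_rate("B"))
print(f"demographic-parity gap: {gap:.2f}")
```

The approval-rate gap between the two groups stays large even though gender was dropped from the model’s inputs, which is why identifiers must be retained for auditing rather than simply deleted.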
Van Bruwaene breaks down for BAN listeners what banks can do to prepare data before building models and explains the standards available to guide the creation of AI models. He also shares advice on how banks and credit unions can stay ahead of potential regulations related to AI bias.