Implicit Bias in Generative AI
“The greatest deception men suffer is from their own opinions.”
~Kevin Mitnick, renowned “white hat” hacker and author of THE ART OF INVISIBILITY
Biases. We all have them, and at Blink we work tirelessly to recognize and eliminate bias in the work we do and the output we deliver. But what if a trusted teammate were constantly introducing it into our work, without our knowledge or consent? If that trusted teammate is AI, there is a very good chance that this is exactly what is happening.
AI output is only as good as the input it’s based on. And because that source material originates with humans, it will inherently contain bias. It remains astonishingly easy to find egregious examples of stereotyping and negative bias in Generative AI output. What’s even more insidious and dangerous, though, is the subtle-but-real bias that creeps in at various points of entry:
Data Collection: If the source is not diverse or representative of reality, the AI algorithm will reflect that.
Data Labeling: If the human sources assigning context to the raw data interpret it in a biased way, the AI output will reflect that bias.
Model Training: Diverse, unbiased output depends on diverse, unbiased input, including balanced training data, and a model architecture robust enough to accept diverse inputs.
Deployment: Combating AI bias is not a “one-and-done” proposition. Models must be continually monitored, during and even after deployment.
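To make the Data Collection point concrete, here is a minimal sketch of a pre-training audit that counts how groups are represented in a dataset and flags imbalance before it ever reaches a model. The field names and thresholds are hypothetical, chosen only for illustration:

```python
from collections import Counter

def representation_report(records, field):
    """Count how often each value of a demographic field appears,
    and flag groups whose share falls far below an even split."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    even_share = 1 / len(counts)  # share each group would have if balanced
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 3),
            # Flag any group with less than half an even share.
            "underrepresented": share < even_share / 2,
        }
    return report

# Hypothetical sample: a dataset skewed heavily toward one group.
data = [{"group": "A"}] * 90 + [{"group": "B"}] * 8 + [{"group": "C"}] * 2
print(representation_report(data, "group"))
```

A report like this doesn’t remove bias by itself, but it turns “is the source diverse?” from a judgment call into a number that can be checked before training begins.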
What can we do to minimize bias in our work?
Seek, accept, and integrate feedback. We may not catch all bias in our work… but the people who use what we produce will! Our willingness to listen to them can make a world of difference.
Review the Training Data. More data doesn’t automatically mean smarter data; more data can also mean more bias. It’s up to us to review and filter data before it’s used for training purposes; prioritize data quality over quantity.
Perform rigorous and ongoing QA. Monitor the algorithmic process at every step, including post-deployment.
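One way the ongoing-QA step can be made measurable is a post-deployment fairness check such as the disparate impact ratio, a common heuristic (the “four-fifths rule” from US employment guidance) for spotting group-level skew in outcomes. The groups and tallies below are hypothetical:

```python
def disparate_impact(outcomes):
    """Compute each group's positive-outcome rate and the ratio of the
    lowest rate to the highest. Ratios below 0.8 (the 'four-fifths
    rule') are a common red flag that warrants review.

    outcomes: {group_name: (positive_count, total_count)}
    """
    rates = {g: p / t for g, (p, t) in outcomes.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical monthly tally of favorable model decisions per group.
rates, ratio = disparate_impact({"A": (45, 100), "B": (30, 100)})
print(rates, round(ratio, 2))  # ratio 0.67 -> below 0.8, flag for review
```

Running a check like this on a schedule, rather than once at launch, is what makes the “including post-deployment” part of the QA step actionable.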
For further study:
Chapman University
https://www.chapman.edu/ai/bias-in-ai.aspx
Dataforce.AI
https://www.transperfect.com/dataforce/blog/3-ways-to-minimize-ai-bias
Harvard Business Review
https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai