Whether you’re keenly aware of it or not, your artificial intelligence models reflect your organization’s equity, diversity and inclusiveness efforts, or lack thereof.
Diversity and inclusion are essential to any thriving workplace. Research shows diverse companies can generate more than twice the cash flow per employee of their less diverse peers. Your level of diversity and inclusiveness directly affects the level of bias in your AI technology. The inherent bias we have seen in facial recognition is one prime example.
I Am Not A Cat
A 2019 federal study found "empirical evidence" of racial bias in facial-recognition software: inaccurate matches are far more common for Asian and Black people than for white people. In several instances, AI interpreted images of cats as Black and Asian individuals. Last I checked, I am not a cat.
After hours of research and interviews with AI experts, I can tell you the anecdotes are both hilarious and discouraging. My favorite was the chihuahua vs. the muffin (if you’re not familiar with that one, it’s worth a look …).
But as amusing as the stories may be, implicit or unconscious bias can have far-reaching and profoundly serious impacts.
Brain Science Breakdown
In trying to navigate the ever-changing workplace, our brains generate prejudices and implicit, unconscious biases intended to keep us from harm (fight, flight, or freeze). Our prejudgments keep us physically and psychologically safe.
However, certain people benefit from unconscious biases, while others are adversely affected. The Harvard Business Review states “judgments and behaviors toward others that we're not aware of is everywhere... while this type of bias may seem less dangerous in the workplace than on the streets of Ferguson, Mo., or in a courtroom, it still leads to racial injustice.”
Upbringing, environment, and workplace cultures contribute to disadvantageous unconscious biases. But we are not doomed. There’s hope for AI and humans!
Bias In, Bias Out
Unconscious biases held by the people in your organization will be reflected in the data that feeds your AI and ML algorithms, which will then produce biased outcomes. Machines repeat what they're taught. Enterprises that use emotional AI or affective computing to interpret and react to human emotions, whether through natural language processing (NLP) and sentiment analysis, voice stress analysis, or cameras that catalog micro-expressions for personalized responses, have a clear imperative to address unconscious bias in the people developing those technologies.
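To make "bias in, bias out" concrete, here is a minimal sketch (the training sentences and labels are entirely hypothetical, and scikit-learn is assumed) of how a toy sentiment model trained on skewed labels reproduces that skew on text that differs only in an identity term:

```python
# Minimal "bias in, bias out" demo: a toy sentiment classifier trained on
# skewed labels learns to penalize an identity term that appears in them.
# The training sentences and labels below are entirely hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data: sentences mentioning "group_a" were (unfairly)
# labeled negative, while near-identical "group_b" sentences were positive.
train_texts = [
    "the group_a applicant seemed unqualified",   # 0 = negative
    "the group_a candidate was disappointing",    # 0
    "the group_a engineer gave a weak answer",    # 0
    "the group_b applicant seemed brilliant",     # 1 = positive
    "the group_b candidate was impressive",       # 1
    "the group_b engineer gave a strong answer",  # 1
]
train_labels = [0, 0, 0, 1, 1, 1]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Identical probe sentences, only the identity term differs: the scores
# diverge because the bias in the labels is now baked into the weights.
probe = ["the group_a analyst wrote the report",
         "the group_b analyst wrote the report"]
for text, p in zip(probe, model.predict_proba(probe)[:, 1]):
    print(f"{text!r} -> P(positive) = {p:.2f}")
```

The probe sentences are identical except for the group term, yet the model scores them differently: the machine has faithfully repeated what it was taught.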
Shifting & Setting Intentions
Shifting the way people view, perceive and judge others is both paramount to overcoming bias and less daunting than it might seem.
Three simple -- but not-so-innate -- actions to dismantle unconscious bias in AI:
1. Know better, do better, be better
Unconscious bias is almost always true to its name: unconscious, and without malicious intent. But once we are made aware of it, we must intentionally work to achieve better outcomes. Fortunately, there are countless ways to remove unconscious biases from the development process.
Take racial inequity in healthcare, for example: An AI algorithm that analyzes X-rays to predict the pain experienced by patients with osteoarthritis counters the tendency for Black patients' pain to be underestimated by doctors. Go humans!
2. Demand digital diversity
If a white, male, middle-aged team is developing your recruitment engine, chances are it will be biased toward middle-aged white males. AI can’t see what someone outside those parameters might bring to the role. Humans can.
Without a diversity of people working on next-gen AI and ML, all we’ll ever get is more of what we have today. Organizations should look beyond the traditionally homogenous workforce and be open to non-traditional talent pools.
3. Stay relevant
Keep up with open-source watch lists such as the "Awful AI" database, curated by AI pioneer David Dao, which tracks misuses of AI technology to raise awareness of systemic biases and spur the development of preventive technology. Regular check-ins can help AI leaders and practitioners eradicate bias.
AI Bias Testing vs. Process & Data Governance
Process and data governance are key to AI bias testing.
Are you automating and cognitively engraving biases, or are you picking value streams that truly reflect your company values? Are you collating data that trains models and drives a better future, or data that replicates a problematic past?
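One practical way to start answering those questions is a representation audit of the data itself. The sketch below (pandas assumed; the "group" and "label" columns are illustrative, not a standard schema) compares each group's share of the training data with its positive-label rate:

```python
# Hypothetical data-governance check: compare how often each demographic
# group appears in the training data, and how labels break down per group.
# The tiny inline dataset is a stand-in for your real training set.
import pandas as pd

train = pd.DataFrame({
    "group": ["a", "a", "a", "a", "b", "b", "c"],
    "label": [1, 1, 0, 1, 0, 0, 0],
})

representation = train["group"].value_counts(normalize=True)
positive_rate = train.groupby("group")["label"].mean()

audit = pd.DataFrame({"share_of_data": representation,
                      "positive_label_rate": positive_rate})
print(audit)
# Large gaps in either column are a signal to revisit how the data was
# collected and labeled before any model is trained on it.
```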
Using the tools to find issues with the tools is good! This is test-driven development for AI. Write the test case that you want to succeed and understand what failure looks like, then build the functionality to ensure success.
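In that spirit, the bias test can exist before the model does, just like any other failing test. Below is a hedged sketch of a demographic-parity check in pytest style; the `predict` stand-in and the 0.1 gap threshold are assumptions for illustration, not a standard:

```python
# Test-driven development for AI, sketched: write the fairness test first,
# then build a model that passes it. `predict` and the 0.1 parity threshold
# are hypothetical placeholders for your own model and policy.

def predict(applicants):
    """Stand-in for the model under test; returns 1 (select) or 0 per applicant."""
    return [1 if a["score"] >= 0.5 else 0 for a in applicants]

def test_demographic_parity():
    # Hypothetical evaluation set with identical score distributions per group.
    applicants = [
        {"group": "a", "score": 0.9}, {"group": "a", "score": 0.4},
        {"group": "b", "score": 0.9}, {"group": "b", "score": 0.4},
    ]
    preds = predict(applicants)

    # Collect predictions per group and compute each group's selection rate.
    by_group = {}
    for a, p in zip(applicants, preds):
        by_group.setdefault(a["group"], []).append(p)
    selection_rates = {g: sum(v) / len(v) for g, v in by_group.items()}

    # Demographic parity: selection rates across groups should be close.
    gap = max(selection_rates.values()) - min(selection_rates.values())
    assert gap <= 0.1, f"selection-rate gap {gap:.2f} exceeds policy threshold"
```

Defining "what failure looks like" as an executable assertion forces the team to agree on a fairness metric and threshold up front, before the model's behavior is on the line.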
Removing implicit biases is a challenge because, by definition, we often don't know they exist. But research provides hope that levels of implicit bias are decreasing. I believe that thinking and acting with intention and focus to address implicit bias will lead to more representative and unbiased AI and ML algorithms.
Missy Lawrence-Johnston leads the ISG Operating Model & Human Side of Digital offering, with more than 15 years of experience as a thought leader and culture-change expert helping government entities, nonprofits, and Fortune 100 companies. In her current role, Missy drives day-to-day deliverable execution and client relationship management and is ultimately accountable for culture deliverables and client/project-team leadership for digital and enterprise organizational effectiveness. Her highlighted competency is in global enablement for digital transformations, with a focus on team dynamics, change leadership, and empowering psychologically safe environments.