
The Cost of AI Bias: Lower Revenue, Lost Customers

A new survey shows tech leadership's growing concern about AI bias and AI ethics, as negative incidents lead to lost revenue, lost customers, and more.

Organizational technology leaders have grown more concerned about AI bias over the last two years, according to a new survey from analytics company DataRobot. It’s not just about damage to a brand's reputation, either. Of those who have experienced negative impacts of AI bias, the largest percentage, 62%, lost revenue as a consequence. Another 61% lost customers.

No wonder technology leaders are growing more concerned.

A full 54% of technology leaders surveyed said they are very or extremely concerned about AI bias, up from 42% who expressed that level of concern in 2019. The online survey of 350 US and UK-based CIOs and other IT leaders was conducted in June 2021; a similar online survey was conducted in June 2019.

The results indicate that more organizations are looking closely at their algorithms, the data sets used to train them, and the explainability of AI results -- just how did the algorithm arrive at that conclusion?
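As a hedged illustration of what "explainability" can mean in practice (this sketch is hypothetical and not tied to any vendor tool mentioned in the survey): for a simple linear scoring model, each feature's contribution to a single decision can be decomposed directly, answering "how did the algorithm arrive at that conclusion?" for that one prediction.

```python
# Illustrative sketch only: explaining one prediction of a simple
# linear model. Feature names and weights are hypothetical.

weights = {"income": 0.4, "tenure_years": 0.25, "late_payments": -0.6}
bias = 0.1

def score(applicant):
    # Final model output for one applicant
    return bias + sum(weights[f] * applicant[f] for f in weights)

def explain(applicant):
    # Per-feature contribution to the final score
    return {f: weights[f] * applicant[f] for f in weights}

applicant = {"income": 1.2, "tenure_years": 2.0, "late_payments": 1.0}
print(score(applicant))    # 0.48
print(explain(applicant))  # late_payments contributes -0.6
```

Real explainability tooling handles nonlinear models with techniques such as Shapley values, but the idea is the same: attribute each decision back to its inputs.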

Indeed, in September 2021, Gartner identified responsible AI -- including transparency, fairness, and auditability of AI technologies -- as one of four trends driving near-term AI innovation. Forrester Research analyst Brandon Purcell told InformationWeek that the market for responsible AI solutions would double in 2022, giving organizations more technology options to help ensure their AI is ethical, explainable, fair, and privacy-compliant.

“It’s become a priority in any highly regulated industry,” Purcell says. There are any number of companies working on solutions, too, from tech giants to startups.

Top Concerns

When it comes to AI bias, just what are CIOs and other IT leaders worried about in particular? The top concern was loss of customer trust at 56%, followed by compromised brand reputation or social media backlash at 50%. Increased regulatory scrutiny was next at 43%, followed by loss of employee trust at 42%, mismatch with personal ethics at 37%, lawsuits at 25%, and eroding shareholder value at 22%.

These concerns are not merely about hazy future consequences. Organizations also cited real impacts already realized from AI bias. A full 36% said their organization had suffered a negative impact due to an incident of AI bias in one or more of their algorithms. Of those:

  • 62% reported lost revenue; 
  • 61% lost customers;
  • 43% lost employees;
  • 35% incurred legal fees due to lawsuits or legal action; and
  • 6% experienced damage to brand reputation or a media backlash.

The ramifications are due in part to discrimination caused by biased AI. Respondents said their organization's algorithms have inadvertently contributed to discrimination based on gender (32%), age (32%), race (29%), sexual orientation (19%), and religion (19%).

However, many of those surveyed are already working to mitigate AI bias. More than two-thirds (69%) say their organizations perform data quality checks to avoid AI bias. Another 51% are training staff to identify and prevent AI bias, and a further 51% have hired an AI bias or ethics expert. A full 50% said they were measuring AI decision-making factors.

Others were deploying tools to help. For instance, 47% said they were monitoring data for changes over time, 45% were deploying algorithms that detect and mitigate hidden biases in training data, and 35% were introducing explainable AI tools.
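A minimal sketch of the kind of bias check such tools automate (the data and the choice of metric here are hypothetical, not from the survey): comparing positive-outcome rates across demographic groups, a measure often called the demographic parity gap.

```python
# Illustrative sketch only: flagging a possible bias by comparing
# approval rates between two groups. All data is made up.

def positive_rate(outcomes):
    # Fraction of 1s (e.g. approvals) in a list of 0/1 outcomes
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    # Absolute difference in positive-outcome rates between groups
    return abs(positive_rate(group_a) - positive_rate(group_b))

group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # hypothetical approvals, group A
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # hypothetical approvals, group B

gap = demographic_parity_gap(group_a, group_b)
print(round(gap, 3))  # 0.25 -- flag for review if above a chosen threshold
```

Production tooling computes many such fairness metrics across slices of data and over time; this shows only the core comparison.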

Only 1% of respondents said they were taking no steps at all to prevent AI bias.

Who Is Responsible?

CIOs have become less involved in AI bias initiatives over the last two years. In 2019, 49% of CIOs were involved in AI bias prevention, but that number dropped to 28% in 2021. The job title most frequently cited as involved in AI bias prevention initiatives was data scientist, at 48%. Others included a third-party AI bias expert/consultant (47%), an AI ethicist (35%), a C-suite executive (32%), a customer experience team (30%), business subject matter experts (28%), and regulators and marketers, tied at 24% each.
