Commentary

Ethical AI Lapses Happen When No One Is Watching

Just because you may not see errors on the part of artificial intelligence doesn't mean that things are fine. It's up to humans to look for ethical or other issues.

Transparency often plays a key role in ethical business dilemmas -- the more information we have, the easier it is to determine which outcomes are acceptable and which are not. If the financials don't add up, who made the accounting error? If data is breached, who was responsible for securing it, and were they acting properly?

But what happens when we look for a clear source of an error or problem and there’s no human to be found? That’s where artificial intelligence presents unique ethical considerations.

AI shows enormous potential within organizations, but it's still largely a solution in search of a problem. It's a misunderstood concept whose practical applications have yet to be fully realized within the enterprise. Because many companies also lack the budget, talent, and vision to apply AI in a truly transformational way, the technology remains far from critical mass and prone to misuse.

But just because AI may not be highly visible in day-to-day business doesn't mean it isn't at work somewhere within your organization. Like many other ethical dilemmas in business, ethical lapses in AI often happen in the shadows. Intentional or not, the consequences of an AI project or application crossing ethical boundaries can be a logistical and reputational nightmare. The key to avoiding ethical missteps in AI is to establish corporate governance of these projects from the start.

Building AI with Transparency and Trust

By now, we're all familiar with high-profile examples of AI gone wrong: soap dispensers that fail to work for customers with dark skin, pulse oximeters that are more accurate for Caucasian patients, and algorithms that predict whether convicted criminals will reoffend. All are stories of AI exhibiting arguably inadvertent bias.

Not only do these situations generate bad headlines and social media backlash; they also undermine more legitimate use cases for AI, which won't come to fruition if the technology continues to be viewed with mistrust. In the healthcare space alone, for example, AI has the potential to improve cancer diagnosis and to flag patients at high risk of hospital readmission for extra support. We won't see the full benefits of these powerful solutions unless we learn to build AI that people trust.

When I talk about AI with peers and business leaders, I champion the idea of transparency and governance within AI efforts from the start. More specifically, here is what I suggest:

1. Ethical AI can't happen in a vacuum: AI applications can cause major ripple effects if implemented incorrectly, often when a single department or IT team begins to experiment with AI-driven processes without oversight. Is the team aware of the ethical implications if its experiment goes wrong? Is the deployment consistent with the company's existing data retention and access policies? Without oversight, these questions are hard to answer. And without governance, it can be even harder to gather the stakeholders needed to remedy an ethical lapse if one does occur. Oversight shouldn't be seen as stifling innovation, but as a necessary check to ensure AI operates within ethical bounds. Responsibility ultimately should fall to the chief data officer in organizations that have one, or to the CIO where no CDO role exists.

2. Always have a plan: The worst headlines about AI projects gone awry usually have something in common -- the companies at the center of them weren't prepared to answer questions or explain decisions when things went wrong. Oversight can fix this. When an informed, healthy philosophy about AI exists at the very top of your organization, there's less likelihood of being caught off guard by a problem.

3. Due diligence and testing are mandatory: Many of the classic examples of AI bias could have been mitigated with more patience and far more testing. In the soap dispenser example, a company's eagerness to show off its new technology ultimately backfired; further testing could have uncovered the bias before the product was publicly unveiled. More broadly, any AI application needs to be heavily scrutinized from the beginning. Because of AI's complexity and still-undefined potential, it must be deployed strategically and carefully.

4. Consider an AI oversight function: To protect client privacy, financial institutions devote significant resources to managing access to sensitive documents. Their records teams carefully classify assets and build out infrastructure to ensure only the right job roles and departments can see each one. This structure could serve as a template for building out an organization’s AI governance function. A dedicated team could estimate the potential positive or negative impact of an AI application and determine how often its results need to be reviewed, and by whom.

Experimenting with AI is an important next step for companies seeking digital disruption. It frees human workers from mundane tasks and enables certain activities -- like image analysis -- to scale in ways that weren't financially feasible before. But it isn't to be taken lightly. AI applications must be carefully developed with proper oversight to avoid bias, ethically questionable decisions, and bad business outcomes. Make sure you have the right eyes trained on the AI efforts within your organization. The worst ethical lapses happen in the dark.
