AI is transforming how enterprises function and engage with people. The technology offers the ability to automate simple and repetitive tasks, unlock insights hidden inside data, and help adopters make better, more informed decisions. Yet as AI firmly embeds itself into the IT mainstream, concerns are growing over its potential misuse.
To address the ethical problems that can arise from automated data analysis and decision-making, a growing number of enterprises are starting to pay attention to how AI can be kept from making potentially harmful decisions.
AI is a powerful technology with an immense number of positive attributes. “However, to fully gauge its potential benefits, we need to build a system of trust, both in the technology and in those who produce it,” says Francesca Rossi, IBM's AI ethics global leader. “Issues of bias, explainability, data handling, transparency on data policies, systems capabilities, and design choices should be addressed in a responsible and open way.”
“AI ethics should be focused on understanding AI's impact on society, mitigating unintended consequences, and driving global innovation toward good,” explains Olivia Gambelin, an AI ethicist and CEO of ethics advisory firm Ethical Intelligence. “The practice of operationalizing AI ethics involves the translation of high-level principles into concrete, detailed actions and seeks to enable technology with human values at its core,” she says.
Artificial Intelligence Danger Zones
There are endless ways in which AI can be misused, and many of them are already happening, says Kentaro Toyama, the W.K. Kellogg professor of community information at the University of Michigan School of Information. “Military drones making AI-based decisions to kill; deep-fake imagery offering visual ‘evidence’ of outright lies; companies buying and selling AI-based inferences about you for their commercial gain.” Another current hot topic in AI ethics is the matter of “algorithmic fairness,” he notes. “How can we ensure that digital systems are not biased against groups of people due to race, gender, or other identities?” Toyama asks.
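The algorithmic fairness question Toyama raises can be made concrete with simple measurements. As a minimal sketch, and not any particular organization's method, the Python snippet below computes a demographic parity gap: the difference in positive-decision rates between two groups. The function name and toy data are illustrative assumptions.

    import numpy as np

    def demographic_parity_gap(y_pred, group):
        """Difference in positive-outcome rates between two groups.

        y_pred: binary model decisions (0/1)
        group:  binary group membership (0/1), e.g. a protected attribute
        A gap near 0 suggests similar treatment; larger gaps flag potential bias.
        """
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        rate_a = y_pred[group == 0].mean()  # positive rate for group 0
        rate_b = y_pred[group == 1].mean()  # positive rate for group 1
        return rate_b - rate_a

    # Toy example: approval decisions for two groups of applicants
    decisions = [1, 0, 1, 1, 0, 0, 1, 0]
    groups    = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_gap(decisions, groups))  # -0.5: group 1 approved far less often

A single metric like this is a starting point rather than a verdict; in practice teams examine several fairness measures, since they can conflict with one another.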
The exuberance surrounding AI is running headfirst into a stark reality: hastily built and deployed machine learning models may meet a specific business outcome, but at the cost of disparate impact on certain groups, says Scott Zoldi, chief analytics officer at FICO, an analytics firm specializing in credit scoring services. “The mystique of machine learning models results in the business users of these models becoming careless and crass in their decisioning, often not even monitoring or questioning outcomes.”
Another concern is that while AI systems tend to perform well on the data they've been trained with, once they're hit with fresh real-world data, many begin performing poorly, says Lama Nachman, Intel's director of intelligent systems labs. “This raises certain safety concerns, like an autonomous vehicle misclassifying uncommon scenes,” she notes. “It's critical that these [AI] systems have oversight and monitoring to ensure they don't drift over time.”
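The kind of drift monitoring Nachman calls for is often implemented with distribution-shift statistics. The sketch below is a hedged illustration, not Intel's actual tooling: it computes the Population Stability Index (PSI) between a model's training-time and production score distributions. The thresholds in the comments are common rules of thumb, not hard standards.

    import numpy as np

    def population_stability_index(expected, actual, bins=10):
        """PSI between a training-time (expected) and production (actual)
        distribution of a model input or score.

        Rule of thumb: PSI < 0.1 is stable, 0.1-0.25 suggests moderate shift,
        and > 0.25 signals drift worth investigating.
        """
        # Bin edges are fixed from the training-time distribution
        edges = np.histogram_bin_edges(expected, bins=bins)
        exp_counts, _ = np.histogram(expected, bins=edges)
        act_counts, _ = np.histogram(actual, bins=edges)
        # Convert to proportions; a small epsilon avoids log(0) and division by zero
        eps = 1e-6
        exp_pct = exp_counts / max(exp_counts.sum(), 1) + eps
        act_pct = act_counts / max(act_counts.sum(), 1) + eps
        return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

    # Toy example: production scores shifted relative to training scores
    rng = np.random.default_rng(0)
    train_scores = rng.normal(0.0, 1.0, 10_000)
    prod_scores = rng.normal(0.5, 1.2, 10_000)  # simulated drift
    print(population_stability_index(train_scores, prod_scores))  # well above 0.25

In a production setting, a check like this would run on a schedule against each model input and output, with alerts routed to the humans responsible for oversight.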
Addressing AI Ethics
An AI ethics policy is essentially a set of principles and guidelines designed to inform the development and deployment of an organization's AI technologies. “It's typically based on a risk analysis approach, where people who are engaged in the definition, development, sales and/or deployment of these systems will assess the possible risks that are typically associated with AI technologies,” Nachman says. AI ethics principles generally include areas such as fairness, transparency, privacy, security, safety, accountability, inclusion, and human oversight, she adds.
With an increasing number of AI regulations hitting the books and market demand for responsible tech growing, a formal AI ethics policy is no longer simply a nice-to-have but a necessity for survival, Gambelin says. “By utilizing ethics as a key decision-making tool, AI-driven companies save time and money in the long run by building robust and innovative solutions from the start.”
Outlook for AI Ethics
Organizations developing AI technologies should begin considering ethics issues at the very start of their projects. “They must design the products with an ethical mindset,” says Anand Rao, global AI lead at business advisory firm PwC. “Ethics can’t simply be a checkmark exercise at the end of the product roll-out,” he notes.
According to 2021 PwC research, only 20% of enterprises had an AI ethics framework in place, and only 35% had plans to improve the governance of AI systems and processes. “However, given responsible AI was the top AI priority for executives in 2021, I’m hoping we’ll see improved numbers this year,” Rao says.
Business and IT leaders must follow a holistic, multi-disciplinary, and multi-stakeholder approach toward building AI trust, notes IBM's Rossi. The trust system should ensure that issues are identified, discussed, and resolved in a cooperative environment. “It is this kind of interdisciplinary and cooperative approach that will produce the best solutions and is most likely to lead to a comprehensive and effective environment for trustworthy AI,” she says.
Gambelin describes AI ethics as “a beautifully complex industry full of passionate individuals and motivated organizations.” She notes that AI ethics is at a critical stage. “This is a unique point in time when we, as humans, have the opportunity to reflect on what we truly want to gain from our technology, and ethics will be the tool that empowers us to put these dreams and aspirations into action.”