Commentary

3 Components CIOs Need to Create an Ethical AI Framework

CIOs shouldn’t wait for an ethical AI framework to be mandatory. Whether buying the technology or building it, they need processes in place to embed ethics into their AI systems.

Only 20% of companies report having an ethical artificial intelligence framework in place, and just 35% have plans to improve the governance of AI systems and processes in 2021, according to PwC data. That gap is a problem -- no wonder the Biden administration is working on an AI bill of rights.

I believe every CIO needs a responsible AI plan before implementing the technology. Businesses shouldn’t wait for this to be mandatory. It doesn’t matter if the CIO is buying the technology or building it. AI as a technology is neutral -- it is not inherently ethical or unethical. We need processes in place to ensure that ethics is embedded in AI systems.

AI gives us improved customer service, more personalized shopping experiences and faster employee hiring -- but each of those benefits can create unintended consequences, such as racial or gender discrimination.

Consider hiring as an example. If the AI is not built or implemented with a responsible, ethical lens, it can end up doing more harm than good -- for example, causing the business to pass over qualified candidates.

To prevent situations like this, a responsible AI framework needs three components, whether CIOs are building AI themselves or installing AI tools from technology vendors.

1. Review the AI at every stage

Responsible AI use can’t be a checkmark exercise at the end of the product rollout. AI should be explainable, transparent and, where possible, provable throughout its entire lifecycle. These principles should be embedded from the moment the product is designed. That way, CIOs can satisfy ethical principles such as accountability, lawfulness and fairness, to name a few.

When building or implementing responsible AI, teams should focus on three items: the type of decisions the AI is making; the impact those decisions could have on humans and society; and to what extent humans are inside or outside the decision-making loop. Reviewing AI through these three lenses lets companies determine the appropriate level of governance and feel more secure in the responsible use of AI. It also reduces the chance that the AI will make biased decisions.

Take time to catalogue related approvals and decisions made about AI’s use along the way, too. This is a key component of creating AI governance and traceability.
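To make this concrete, here is a minimal sketch in Python of how a team might capture those three review factors, map them to a governance level, and keep a traceable log of approvals. The names (AIUseReview, governance_tier, record_approval) and the scoring thresholds are illustrative assumptions, not a prescribed method.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative scales for two of the review factors: how much the AI's
# decisions affect people, and how involved humans are in the loop.
IMPACT_LEVELS = {"low": 1, "medium": 2, "high": 3}
OVERSIGHT_LEVELS = {"human_in_the_loop": 1, "human_on_the_loop": 2, "fully_automated": 3}

@dataclass
class AIUseReview:
    decision_type: str        # e.g. "resume screening"
    human_impact: str         # "low" | "medium" | "high"
    oversight: str            # "human_in_the_loop" | "human_on_the_loop" | "fully_automated"
    approvals: list = field(default_factory=list)  # catalogue of sign-offs for traceability

    def governance_tier(self) -> str:
        """Rough heuristic: the higher the impact and the less human
        oversight, the heavier the governance the use case should get."""
        score = IMPACT_LEVELS[self.human_impact] + OVERSIGHT_LEVELS[self.oversight]
        if score >= 5:
            return "enhanced review"
        if score >= 4:
            return "standard review"
        return "lightweight review"

    def record_approval(self, approver: str, note: str) -> None:
        # Timestamp each approval so decisions about the AI's use stay traceable.
        self.approvals.append({
            "approver": approver,
            "note": note,
            "at": datetime.now(timezone.utc).isoformat(),
        })

# Example: an automated resume screener with high human impact.
review = AIUseReview("resume screening", human_impact="high", oversight="human_on_the_loop")
review.record_approval("AI ethics board", "Approved pilot with quarterly bias audits")
print(review.governance_tier())  # -> "enhanced review"
```

The point of a structure like this is less the scoring itself than the record it leaves behind: every use case gets the same three questions asked of it, and every approval is logged where an auditor or review board can find it later.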

2. Catalogue AI’s impact on systems

An AI model is not infallible. It changes over time, and so can its impact on surrounding systems. That’s why it’s important to know which systems the AI draws on and which systems it affects. Both should be closely monitored and revisited throughout the AI’s lifecycle. If either changes, humans should step in.

AI shouldn’t operate entirely outside of human oversight and input. In fact, it’s critical that as CIOs monitor AI’s impact, they take corrective action. Many businesses today are tech-powered, but we remain human-led. Technology -- especially AI -- still needs us to intervene to ensure biases aren’t introduced into the system and that its decisions remain fair.
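As a rough illustration of that kind of monitoring, the sketch below assumes you can periodically pull a fairness or performance metric for each system the AI reads from or writes to; the function name check_for_drift and the 10% threshold are assumptions for the example, not a real API or a recommended tolerance.

```python
REVIEW_THRESHOLD = 0.10  # assumed tolerance: >10% relative change triggers human review

def check_for_drift(baseline: dict[str, float], current: dict[str, float]) -> list[str]:
    """Return the systems whose monitored metric moved enough that a
    human should step in and consider corrective action."""
    flagged = []
    for system, base_value in baseline.items():
        latest = current.get(system)
        if latest is None:
            flagged.append(system)  # system disappeared or stopped reporting
            continue
        relative_change = abs(latest - base_value) / max(abs(base_value), 1e-9)
        if relative_change > REVIEW_THRESHOLD:
            flagged.append(system)
    return flagged

# Example: a hiring model that consumes an HR database and feeds an interview scheduler.
baseline = {"hr_database:selection_rate": 0.42, "scheduler:offer_rate": 0.18}
current  = {"hr_database:selection_rate": 0.31, "scheduler:offer_rate": 0.19}

for system in check_for_drift(baseline, current):
    print(f"Flag {system} for human review and possible corrective action")
```

In this example the selection rate in the HR database has shifted by roughly a quarter, so it gets flagged for a person to investigate, while the scheduler stays within tolerance.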

3. Evaluate who the AI will impact and how

A responsible AI framework is about reducing the potential harm done by AI. Strong AI governance is impossible without evaluating both the decisions the AI makes and the outcomes they produce. Does the AI pose a risk to an individual or to society? Does it lead to an unethical outcome?

This can be difficult for CIOs to assess. Established ethical principles can help guide AI use, and governments have enacted regulations to try to rein in harmful AI. CIOs need to ensure these different frameworks are considered and that the AI’s impact is closely and regularly monitored.

CIOs who implement AI can see significant benefits. However, there are numerous risks businesses need to analyze, account for and work to overcome. With the right responsible and ethical AI framework in place, CIOs can push the business to new heights and ensure that the business, its employees and its customers can trust its use of AI.
