We often think of ethics as a fixed standard of right and wrong. In practice, ethics involves the continual governance of how an activity is conducted, so ethical issues are usually weighed in context. As real-world context has become increasingly data-driven, IT teams must apply more systematic methodologies to identify how the ethics of a real-world event affect their organization.
IT professionals must also consider how ethics is managed from the perspective of data usage. People today routinely exchange their data for a product or service. Setting technological guardrails to keep that data flow safe is central to getting data ethics right and to meeting customer expectations, such as compliance with customer privacy requirements. Thus, IT professionals must focus on identifying how data and information flow through a system to help their organizations maintain ethical standards.
A key challenge lies in the many ways IT teams must identify and respond to data ethics issues within the technical specification of a given system. Examining how data is processed helps surface the norms at risk. The decision by Amazon, IBM, and Microsoft to halt the availability of their facial recognition AI software to police departments is one example. That decision was partly a response to the police brutality protests that followed the police killings of George Floyd, Tony McDade, Breonna Taylor, and other Black people across the country. It also responded to mounting questions about regulating surveillance technology and the negative bias of facial recognition involving people of color.
So how can IT best lead the ethics fight? Establishing an observability process within existing DataOps and AIOps initiatives can help. Observability is a collection of processes for monitoring and analyzing data within a system. Its purpose is to help developers and operators understand issues that appear within distributed systems. Observability reveals critical paths, reducing the development time needed to remove errors and programmatic bugs -- the kinds of issues that can lead to ethical breaches.
Observability works by measuring the internal status of a system based on its outputs. Those outputs consist of logs, metrics, and traces.
- Logs are telemetry data, usually consisting of structured and unstructured text emitted from an application.
- Metrics are values that express some data about a system.
- Traces are the activity path of a single transaction.
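The three output types above can be illustrated with a short sketch. This is a hypothetical example -- the service name, field names, and `process_order` function are invented for illustration -- showing how a single transaction might emit a structured log line, a numeric metric, and a trace record tied together by a shared ID:

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("checkout-service")  # hypothetical service name

def process_order(order_id, amount):
    """Handle one transaction, emitting a log, a metric, and a trace record."""
    trace_id = str(uuid.uuid4())   # trace ID ties all activity for this transaction together
    start = time.monotonic()

    # Log: structured text describing an event in the application.
    logger.info(json.dumps({
        "event": "order_received",
        "order_id": order_id,
        "amount": amount,
        "trace_id": trace_id,
    }))

    # ... business logic would run here ...

    duration_ms = (time.monotonic() - start) * 1000
    # Metric: a value expressing some data about the system.
    metric = {"name": "order_duration_ms", "value": duration_ms}
    # Trace record: one step in the activity path of this transaction.
    span = {"trace_id": trace_id, "span": "process_order", "duration_ms": duration_ms}
    return metric, span

metric, span = process_order("A-1001", 42.50)
```

Because all three outputs share the transaction's trace ID, an analyst can later reconstruct the full activity path of any single request.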
A platform or database environment is a viable choice to apply observability if its component activities provide data in a format of logs and metrics that an IT team wants to monitor. The monitored activity -- the actual task of collecting and displaying the data -- can then be analyzed through trace results. This arrangement for analysis implies a symbiotic relationship in monitoring and observability: If an activity is observable, then the system’s benefit to the organization can be monitored.
Observability is being applied to many developer processes, such as continuous integration/continuous delivery (CI/CD). Good feedback in CI/CD must exist to avoid continually issuing changes without knowing whether those changes lead to performance improvement or deterioration. Identifying performance changes is a good application for observability.
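The CI/CD feedback loop described above can be sketched as a simple regression gate. This is a minimal illustration, not a production check -- the function name, sample latencies, and 10% threshold are all assumptions -- comparing latency metrics captured before and after a candidate release:

```python
def latency_regressed(baseline_ms, candidate_ms, threshold=0.10):
    """Flag a deploy when median latency worsens by more than `threshold` (default 10%)."""
    def median(values):
        s = sorted(values)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

    base = median(baseline_ms)
    cand = median(candidate_ms)
    return (cand - base) / base > threshold

# Hypothetical request latencies sampled before and after a candidate release:
before = [110, 120, 115, 118, 112]
after = [150, 160, 155, 149, 158]
print(latency_regressed(before, after))  # a ~35% slowdown trips the gate: True
```

A check like this, wired into the pipeline, gives developers the feedback the article calls for: no change ships without evidence of its performance effect.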
IT teams should also consult with developers on the latest observability features arriving in cloud-based services. OWASP, a developer group that focuses on app security issues, presented a webinar on logging and monitoring features within Amazon Web Services. Developer Veliswa Boya, for example, noted how Logs Insights, a feature within AWS's CloudWatch service, can group log events from the same source to reduce debugging time. Other platforms have introduced or are developing comparable features to address the growing demand to assess the operating environment in which data and associated applications coexist.
IT teams can use observability to ask salient questions, such as whether an organization's values are fairly represented in the system specifications being monitored and analyzed. The assumptions infused into the data and metrics supporting those specifications can be questioned, and the right alerts can then be set for performance changes.
Many of these questions can be at least partly answered with an intuition for variance within the captured logs and metrics. Variance offers a mathematical way to determine whether an outlier in a dataset is an anomaly or an indicator of bias. This thinking helps IT teams view a data ethics problem as a change within a continuum of information -- an ethical dilemma surfaced for investigation.
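That variance-based intuition can be made concrete with a z-score check. This is a simplified sketch -- the approval-rate data and the threshold of two standard deviations are invented for illustration -- that flags values diverging sharply from the rest, the candidates an IT team would then investigate as either anomalies or indicators of bias:

```python
from statistics import mean, pstdev

def flag_outliers(values, z_threshold=3.0):
    """Return values whose z-score exceeds the threshold -- candidates for
    review as either one-off anomalies or indicators of systematic bias."""
    mu = mean(values)
    sigma = pstdev(values)  # population standard deviation (sqrt of variance)
    if sigma == 0:
        return []  # no spread, nothing to flag
    return [v for v in values if abs(v - mu) / sigma > z_threshold]

# Hypothetical approval rates (%) logged per applicant group; one diverges sharply.
approval_rates = [72, 74, 71, 73, 75, 72, 40]
print(flag_outliers(approval_rates, z_threshold=2.0))  # -> [40]
```

The statistics alone cannot say whether the flagged value is a data glitch or unfair treatment of one group; that is exactly the judgment call the surrounding investigation exists to make.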
Understanding that continuum of information will become more essential as debates regarding the use of deep learning technologies grow. IT teams will have to champion the right analytic choices for their organizations, as I mentioned in my post on predictive analytics. Brand perception of an organization is increasingly influenced by how well customers feel their data is being managed, and analytic systems have to change to accommodate this perspective.
However, bridging the gap between customer perceptions of data ethics and the tools that enact them will fall to IT teams. It will be up to IT to lead organizations in the never-ending fight to make data ethics work.
For more on ethical use of technologies, read these articles:
Why AI Ethics Is Even More Important Now
AI Ethics Guidelines Every CIO Should Read