
How and Why Enterprises Must Tackle Ethical AI

By Jessica Davis

Jun 14, 2021




Artificial intelligence is becoming more common in enterprises, but ensuring ethical and responsible AI is not always a priority. Here’s how organizations can make sure it is.

Credit: Feng Yu via Adobe Stock

Bias and ethics in artificial intelligence have captured the attention of the public and of some organizations following several high-profile examples of biased AI at work. For instance, researchers have demonstrated that facial recognition technology performs worse on darker-skinned and female faces, and a secret AI recruiting tool at Amazon showed bias against women, among many other examples.
But when organizations look inside their own houses, many are not far along in prioritizing AI ethics or taking measures to mitigate bias in algorithms. According to a new report from FICO, a global analytics software firm, 65% of the C-level analytics and data executives surveyed said their company cannot explain how specific AI model decisions or predictions are made, and 73% have struggled to get broader executive support for prioritizing AI ethics and responsible AI practices. Only 20% actively monitor their models in production for fairness and ethics.
The survey of 100 C-level analytics and data execs was conducted by Corinium on behalf of FICO. The study also found that while compliance staff (80%) and IT and data analytics team members (70%) had the highest awareness of AI ethics within organizations, that understanding was patchy across the rest of the organization.
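The survey doesn’t prescribe what monitoring models in production for fairness should look like, but even a simple check can surface skewed outcomes. Here is a minimal sketch in Python of one common approach, comparing a model’s positive-outcome rate across demographic groups; the column names and the 0.8 threshold (borrowed from the “four-fifths rule” used in US employment contexts) are illustrative assumptions, not anything taken from the FICO study.

```python
# Minimal sketch of a production fairness check: compare the rate of
# favorable outcomes across groups (demographic parity). All names and
# thresholds here are illustrative assumptions.
import pandas as pd

def demographic_parity_ratio(decisions: pd.DataFrame,
                             group_col: str = "group",
                             outcome_col: str = "approved") -> float:
    """Ratio of the lowest group's approval rate to the highest's."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Example: one day's worth of logged model decisions.
log = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b", "b"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

ratio = demographic_parity_ratio(log)
if ratio < 0.8:  # flag for human review rather than auto-remediating
    print(f"Fairness alert: parity ratio {ratio:.2f} below threshold")
```

A check this small is obviously not a governance program, but running something like it on each batch of production decisions is part of the difference between the 20% who monitor and the 80% who don’t.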
“Over the past 15 months, more and more businesses have been investing in AI tools, but have not elevated the importance of AI governance and responsible AI to the boardroom level,” said Scott Zoldi, chief analytics officer at FICO.
But just because it hasn’t been prioritized at the highest levels doesn’t mean it isn’t a matter for concern. Separate research conducted by the Deloitte AI Institute found that among AI adopters, 95% of those surveyed expressed concerns about ethical AI, transparency, and explainability, the group’s executive director, Beena Ammanath, told InformationWeek last year. Ammanath said then that she expected more organizations to begin tackling the operationalization of AI ethics in 2021.
For organizations working with AI on more emerging technologies, the questions to ask at the start are “What could go wrong with this? What are some of the consequences?” according to Ammanath.
Another organization that has delved into AI ethics and responsibility in the course of its AI consulting work with enterprises is Cloudera’s Fast Forward Labs. Ade Adewunmi is strategy and advising manager with the organization. She notes that organizations now have access to larger and more numerous data sets than they did a few years ago, and are experimenting with how those data sets and machine learning can be applied.
Taking a high-level look at the data you have is an important part of any AI ethics and responsibility practice, she said.
“One of the questions we always ask of the data we have is, ‘How representative is it of the world it will be applied in?’” Adewunmi said. “Organizations need to look at their data capture practices.”
That is also the case if you are acquiring data sets from other parties to augment your own.
“We need to understand the meaning of the variables of the data to be sure that there are no groups that might be particularly disadvantaged,” she said.
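One way to make those questions concrete is a simple representation audit that compares group shares in the training data against a reference population. The sketch below uses hypothetical column names and reference figures; it illustrates the idea rather than any specific method Adewunmi describes.

```python
# Sketch of a representation audit: how does the group mix in the
# training data compare to the population the model will serve?
# Column names and reference shares are hypothetical.
import pandas as pd

def representation_gap(train: pd.DataFrame,
                       group_col: str,
                       reference_shares: dict) -> pd.Series:
    """Training-set share minus reference share, per group."""
    observed = train[group_col].value_counts(normalize=True)
    reference = pd.Series(reference_shares)
    # Groups entirely absent from the training data count as fully missing.
    return (observed - reference).fillna(-reference).sort_values()

train = pd.DataFrame({"region": ["urban"] * 80 + ["rural"] * 20})
print(representation_gap(train, "region",
                         {"urban": 0.60, "rural": 0.40}))
# rural shows up 20 points below its real-world share, a flag to
# revisit data capture practices before training on this set.
```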
For instance, an energy or telecom company interested in forecasting transmission or network load to predict customer demand and improve network planning may have other nuances to consider.
“If you are making assumptions about energy usage, there are links between socioeconomic advantages and how people use energy,” Adewunmi said. Those links might not impact the algorithmic choices, but they are important to know on the back end as people make decisions based on the information gathered from the algorithm.
Adewunmi said she also advises clients to think about their use of explainable models.
“Any prediction being made is just that,” she said. “It’s a prediction made by a model.”
But decisions based on those predictions often involve human beings. To be able to make decisions, those humans need to have an understanding of how the model works and the limitations of that model.
“If you are someone with the ability to grant or deny credit you need to know why that decision was made and also apply a wider context,” Adewunmi said.
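For a simple linear model, that kind of per-decision explanation can be as direct as reading off each feature’s contribution to an applicant’s score. The sketch below uses invented features and data to illustrate the idea; it is not FICO’s or any lender’s actual methodology, and a real credit model would standardize its features and face far stricter requirements.

```python
# Sketch of per-decision explainability with a linear model: each
# feature's contribution to one applicant's score is simply
# coefficient * feature value. Features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "late_payments"]
X = np.array([[60, 0.2, 0], [30, 0.6, 3], [45, 0.4, 1], [20, 0.8, 4]])
y = np.array([1, 0, 1, 0])  # 1 = credit granted historically

model = LogisticRegression().fit(X, y)

applicant = np.array([35, 0.5, 2])
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(features, contributions),
                          key=lambda pair: pair[1]):
    print(f"{name:>15}: {value:+.2f}")  # most negative factors first
```

An output like this gives the human decision maker Adewunmi describes something to weigh against wider context, rather than a bare yes-or-no score.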
Explainability can also help humans who must work with algorithmic findings that don’t seem to make sense at first glance. For instance, Cloudera Fast Forward Labs has a prototype that predicts churn for telecom providers, identifying which customers are at risk of dropping the service. The machine learning model found that one of the most important factors in whether someone would leave is whether they complain frequently about the service.
But it’s not the complainers who are at risk of leaving. Actually, the opposite is true. The complainers are the ones planning to stay, so they have a higher stake in the quality of the service. That’s why they complain: they care about the service improving. Customers without that stake simply leave when they are dissatisfied. That distinction matters if you are a service representative empowered to offer incentives to customers at risk of churn.
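A toy calculation shows how quickly that counterintuitive relationship surfaces once someone looks at the data behind a top feature. The numbers below are invented, not from Cloudera’s prototype:

```python
# Invented data illustrating the churn counterintuition: customers who
# complain a lot may churn *less* than those who stay silent.
import pandas as pd

customers = pd.DataFrame({
    "complaints": [0, 0, 0, 5, 4, 6, 1, 0, 3, 0],
    "churned":    [1, 1, 0, 0, 0, 0, 1, 1, 0, 1],
})
customers["complainer"] = customers["complaints"] >= 3
print(customers.groupby("complainer")["churned"].mean())
# Non-complainers churn at a high rate; frequent complainers barely
# churn at all, so retention incentives belong with the silent group.
```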
Creating explainability is among several important steps enterprises must embed in their artificial intelligence operations in order to make responsible, ethical AI a part of doing business. A key to making it work is to ensure that these steps are part of the overall AI process.
“AI will only become more pervasive within the digital economy as enterprises integrate it at the operational level across their businesses,” said Cortnie Abercrombie, a contributor to the FICO report and founder and CEO of AI Truth. “Key stakeholders, such as senior decision makers, board members, customers, etc. need to have a clear understanding on how AI is being used within their business, the potential risks involved, and the systems put in place to help govern and monitor it. AI developers can play a major role in helping educate key stakeholders by inviting them to the vetting process of AI models.”
Related Content:
How AI Can Save the World, or Not
What We Can Do About Biased AI
IT Leadership: 10 Ways to Unleash Enterprise Innovation
 
Jessica Davis is a Senior Editor at InformationWeek. She covers enterprise IT leadership, careers, artificial intelligence, data and analytics, and enterprise software. She has spent a career covering the intersection of business and technology. Follow her on twitter: …


