Sooner or later, AI may do something unexpected. If it does, blaming the algorithm won’t help.
More artificial intelligence is finding its way into Corporate America in the form of AI initiatives and embedded AI. Regardless of industry, AI adoption and use will continue to grow because competitiveness depends on it.
The many promises of AI need to be balanced with its potential risks, however. In the race to adopt the technology, companies aren’t necessarily involving the right people or doing the level of testing they should do to minimize their potential risk exposure. In fact, it’s entirely possible for companies to end up in court, face regulatory fines, or both simply because they’ve made some bad assumptions.
For example, Clearview AI, which sells facial recognition to law enforcement, was sued in Illinois and California by different parties for creating a facial recognition database of 3 billion images of millions of Americans. Clearview AI scraped the data off websites and social media networks, presumably because that information could be considered “public.” The plaintiff in the Illinois case, Mutnick v. Clearview AI, argued that the images were collected and used in violation of Illinois’ Biometric Information Privacy Act (BIPA). Specifically, Clearview AI allegedly gathered the data without the knowledge or consent of the subjects and profited from selling the information to third parties.
Similarly, the California plaintiff in Burke v. Clearview AI argued that under the California Consumer Privacy Act (CCPA), Clearview AI failed to inform individuals about the data collection or the purposes for which the data would be used “at or before the point of collection.”
In comparable litigation, IBM was sued in Illinois for creating a training dataset of images gathered from Flickr. Its original purpose in gathering the data was to avoid the racial discrimination bias that has occurred with the use of computer vision. Amazon and Microsoft also used the same dataset for training and have also been sued — all for violating BIPA. Amazon and Microsoft argued that if the data was used for training in another state, then BIPA shouldn’t apply.
Google was also sued in Illinois for using patients’ healthcare data for training after acquiring DeepMind. The University of Chicago Medical Center was also named as a defendant. Both are accused of violating HIPAA since the Medical Center allegedly shared patient data with Google.
But what about AI-related product liability lawsuits?
“There have been a lot of lawsuits using product liability as a theory, and they’ve lost up until now, but they’re gaining traction in judicial and regulatory circles,” said Cynthia Cole, a partner at law firm Baker Botts and adjunct professor of law at Northwestern University Pritzker School of Law, San Francisco campus. “I think that this notion of ‘the machine did it’ probably isn’t going to fly eventually. There’s a total prohibition on a machine making any decisions that could have a meaningful impact on an individual.”
AI Explainability May Be Fertile Ground for Disputes
When Neil Peretz worked for the Consumer Financial Protection Bureau as a financial services regulator investigating consumer complaints, he noticed that while it may not have been a financial services company’s intent to discriminate against a particular consumer, something had been set up that achieved that result.
“If I build a bad pattern of practice of certain behavior, [with AI,] it’s not just I have one bad apple. I now have a systematic, always-bad apple,” said Peretz, who is now co-founder of compliance automation solution provider Proxifile. “The machine is an extension of your behavior. You either trained it or you bought it because it does certain things. You can outsource the authority, but not the responsibility.”
While there’s been considerable concern about algorithmic bias in different settings, he said one best practice is to make sure the experts training the system are aligned.
“What people don’t appreciate about AI that will get them in trouble, particularly in an explainability setting, is they don’t understand that they need to manage their human experts carefully,” said Peretz. “If I have two experts, they might both be right, but they might disagree. If they don’t agree repeatedly then I need to dig into it and figure out what’s going on because otherwise, I’ll get arbitrary results that can bite you later.”
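Peretz’s point about expert disagreement can be checked mechanically. The sketch below is illustrative rather than drawn from any tool he described: it compares two reviewers’ labels on the same training examples and flags the items they label differently, so someone can dig in before the data is used for training. The labels and the agreement threshold are hypothetical.

```python
# Illustrative sketch: flag training examples where two human experts disagree.
# Labels and the 90% threshold are hypothetical; adapt to your own labeling workflow.

def agreement_report(labels_a, labels_b):
    """Compare two experts' labels on the same items and report disagreements."""
    assert len(labels_a) == len(labels_b), "Both experts must label the same items"
    disagreements = [i for i, (a, b) in enumerate(zip(labels_a, labels_b)) if a != b]
    agreement_rate = 1 - len(disagreements) / len(labels_a)
    return agreement_rate, disagreements

expert_1 = ["approve", "deny", "approve", "approve", "deny"]
expert_2 = ["approve", "deny", "deny", "approve", "deny"]

rate, conflicts = agreement_report(expert_1, expert_2)
print(f"Agreement rate: {rate:.0%}")          # e.g., 80%
print(f"Items needing review: {conflicts}")   # indices where the experts disagree

# If agreement is repeatedly low, reconcile the experts before training --
# otherwise the model learns from arbitrary labels and produces arbitrary results.
if rate < 0.9:
    print("Warning: expert labels conflict too often; investigate before training.")
```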
Another issue is system accuracy. While a high accuracy rate always sounds good, there is often little or no visibility into the remaining percentage: the error rate.
“Ninety or ninety-five percent precision and recall might sound really good, but if I as a lawyer were to say, ‘Is it OK if I mess up one out of every 10 or 20 of your leases?’ you’d say, ‘No, you’re fired,’” said Peretz. “Although humans make mistakes, there isn’t going to be tolerance for a mistake a human wouldn’t make.”
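Peretz’s lease example is simple arithmetic: at 95 percent accuracy, roughly one document in twenty is mishandled, and the absolute number of errors grows with volume. A rough illustration, using hypothetical document volumes:

```python
# Rough illustration: what a "high" accuracy rate means in absolute errors.
# Document volumes are hypothetical.
for accuracy in (0.90, 0.95, 0.99):
    for volume in (1_000, 10_000):
        expected_errors = round(volume * (1 - accuracy))
        print(f"{accuracy:.0%} accuracy on {volume:,} leases -> ~{expected_errors:,} mishandled")
```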
Another thing he does to ensure explainability is to freeze the training dataset along the way.
“Whenever we’re building a model, we freeze a record of the training data that we used to build our model. Even if the training data grows, we’ve frozen the training data that went with that model,” said Peretz. “Unless you engage in these best practices, you would have an extreme problem where you didn’t realize you needed to keep as an artifact the data at the moment you trained [the model] and every incremental time thereafter. How else would you parse it out as to how you got your result?”
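Peretz doesn’t describe a specific implementation, but the practice he outlines, keeping an immutable record of exactly which data trained each model version, can be as simple as writing a checksum manifest alongside the model artifact. A minimal sketch, assuming file-based training data; the paths and naming scheme are assumptions, not a prescribed tool:

```python
# Minimal sketch: record an immutable fingerprint of the training data used for a model.
# Paths and naming are illustrative assumptions.
import hashlib
import json
import pathlib
from datetime import datetime, timezone

def freeze_training_data(data_dir: str, model_version: str) -> dict:
    """Hash every training file and save a manifest next to the model artifact."""
    files = sorted(pathlib.Path(data_dir).rglob("*"))
    manifest = {
        "model_version": model_version,
        "frozen_at": datetime.now(timezone.utc).isoformat(),
        "files": {
            str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in files if p.is_file()
        },
    }
    out = pathlib.Path(f"training_manifest_{model_version}.json")
    out.write_text(json.dumps(manifest, indent=2))
    return manifest

# Later, the manifest answers "exactly what data produced this model?"
# even after the live training set has grown or changed.
# freeze_training_data("data/leases", model_version="2024-06-01")
```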
Keep a Human in the Loop
Most AI systems are not autonomous: they provide results and make recommendations. But if they’re going to make automatic decisions that could negatively impact certain individuals or groups (e.g., protected classes), then not only should a human be in the loop, there should also be a group of individuals who can help identify the potential risks early on, such as people from legal, compliance, risk management, and privacy.
For example, GDPR Article 22 specifically addresses automated individual decision-making, including profiling. It states, “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” While there are a few exceptions, such as getting the user’s express consent or complying with other laws EU members may have, it’s important to have guardrails that minimize the potential for lawsuits, regulatory fines and other risks.
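One common way to operationalize that guardrail is to route any decision that would produce legal or similarly significant effects to a human reviewer rather than letting the model act on its own. The sketch below is a generic illustration of that pattern; the decision categories, confidence threshold, and review queue are assumptions, not a reference implementation of any particular law or product.

```python
# Generic human-in-the-loop pattern: the model may recommend, but decisions with
# legal or similarly significant effects always go to a human reviewer.
# Categories and the 0.8 threshold are illustrative assumptions.
from dataclasses import dataclass

SIGNIFICANT_EFFECTS = {"credit_denial", "hiring_rejection", "insurance_pricing"}

@dataclass
class Recommendation:
    subject_id: str
    decision_type: str
    score: float          # model confidence
    rationale: str        # explanation surfaced to the reviewer

def route(rec: Recommendation):
    if rec.decision_type in SIGNIFICANT_EFFECTS:
        # Never auto-apply: queue for legal/compliance/risk review.
        return ("human_review", rec)
    if rec.score < 0.8:
        return ("human_review", rec)
    return ("auto_apply", rec)

status, item = route(Recommendation("applicant-42", "credit_denial", 0.97, "high debt-to-income ratio"))
print(status)  # -> human_review, regardless of model confidence
```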
“You have people believing what is told to them by the marketing of a tool and they’re not performing due diligence to determine whether the tool actually works,” said Devika Kornbacher, a partner at law firm Vinson & Elkins. “Do a pilot first and get a pool of people to help you test the veracity of the AI output – data science, legal, users or whoever should know what the output should be.”
Otherwise, those making AI purchases (e.g., procurement or a line of business) may be unaware of the total scope of risks that could potentially impact the company and the subjects whose data is being used.
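Kornbacher’s pilot advice can be made concrete with a small validation harness: run the tool on cases where the cross-functional group already knows the correct answer and measure how often the output matches. A hypothetical sketch; the stand-in tool and cases below are invented for illustration:

```python
# Hypothetical pilot harness: compare tool output against answers the review
# group already agrees on, before anyone relies on the tool in production.

def run_pilot(tool, labeled_cases):
    """labeled_cases: list of (input, expected_output) agreed by the review group."""
    mismatches = []
    for case_input, expected in labeled_cases:
        actual = tool(case_input)
        if actual != expected:
            mismatches.append((case_input, expected, actual))
    accuracy = 1 - len(mismatches) / len(labeled_cases)
    return accuracy, mismatches

# Example with a stand-in "tool"; a real pilot would call the vendor's system.
fake_tool = lambda text: "contains_arbitration_clause" if "arbitration" in text else "none"
cases = [
    ("standard arbitration rider", "contains_arbitration_clause"),
    ("plain services agreement", "none"),
]
print(run_pilot(fake_tool, cases))  # (1.0, [])
```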
“You have to work backwards, even at the specification stage, because we see this. [Someone will say,] ‘I’ve found this great underwriting model,’ and it turns out it’s legally impermissible,” said Peretz.
Bottom line: just because something can be done doesn’t mean it should be done. Companies can avoid a lot of angst, expense and potential liability by not assuming too much and instead taking a holistic, risk-aware approach to AI development and use.
Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include …