Is Artificial Intelligence Prejudiced?

Managing Bias

If information being fed to a computer is biased, experts say artificial intelligence will be biased too.

Artificial intelligence (AI) may be quicker and more capable than humans, but the one thing it hasn’t yet overcome is bias. It’s true — computers can be as prejudiced as humans. Biased code can produce unintended consequences, including incorrectly stereotyping or racially profiling people.  

A computer program is only as good as the information that it’s fed, according to Andy Hickl, chief product officer at Intel’s Saffron Cognitive Solutions Group.

“Artificial intelligence ultimately has bias baked into its decisions,” he said. This can result from assumptions humans make when designing algorithms that attempt to replicate human judgment, or from assumptions machines make when they learn from data.

“If the machine only has information about how a portion of people act, and no knowledge of how the rest of the world speaks, acts or behaves, then we implicitly bake bias into the results produced by artificial intelligence technology,” said Hickl.

Underlying Stereotypes

One example is the growing trend of using “word embeddings” to screen resumes. This technique turns words into lists of numbers so a computer can measure how closely terms are associated, then uses those associations to identify potential job candidates.

If there’s a possibility of bias, some AI systems are designed to ask a human to examine the results.

Researchers from Cornell University found that some associations made sense, such as the link between “female” and “queen.” Other associations, however, introduced prejudice, such as associating “female” with “receptionist.” As a result, women’s resumes were more likely to be considered only for stereotypical roles.
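
To make the mechanism concrete, here is a minimal sketch of how a screener might score word associations. The four-dimensional vectors are made-up toy values, not real embeddings, though cosine similarity is the standard measure used with real ones such as word2vec or GloVe.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; closer to 1.0 means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional word vectors with made-up values, chosen only to
# illustrate the effect. Real embeddings are learned from large text
# corpora and have hundreds of dimensions.
vectors = {
    "female":       np.array([0.9, 0.1, 0.3, 0.2]),
    "queen":        np.array([0.8, 0.2, 0.4, 0.1]),
    "receptionist": np.array([0.8, 0.1, 0.2, 0.3]),
    "engineer":     np.array([0.1, 0.9, 0.2, 0.3]),
}

# A resume screener that ranks candidates by similarity to role keywords
# inherits whatever associations the vectors encode.
for word in ("queen", "receptionist", "engineer"):
    score = cosine_similarity(vectors["female"], vectors[word])
    print(f"similarity(female, {word}) = {score:.2f}")
```

With these toy values, “receptionist” scores far closer to “female” than “engineer” does, which is exactly the kind of geometric association the researchers observed.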

Bias in AI can cause much bigger problems than misunderstanding gender or age.

A 2016 study by ProPublica analyzed the risk scores of more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014. The scores were calculated by an AI tool created by Northpointe and used in many court systems throughout the U.S.

This research showed that 80 percent of the people the tool predicted would commit a violent crime in the following two years did not actually do so.

A significant racial bias had inadvertently crept into the tool’s predictions. The problem? The tool falsely flagged African-American defendants as likely to commit additional offenses at nearly twice the rate it falsely flagged Caucasian defendants.
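
The disparity boils down to a gap in false positive rates, which takes only simple arithmetic to measure. Here is a minimal sketch with hypothetical counts, not ProPublica’s actual figures:

```python
# Hypothetical counts for two groups of defendants who did NOT reoffend
# within two years (illustrative values only).
groups = {
    "group_a": {"flagged_high_risk": 450, "not_flagged": 550},
    "group_b": {"flagged_high_risk": 230, "not_flagged": 770},
}

# False positive rate: the share of non-reoffenders wrongly labeled high risk.
for name, g in groups.items():
    fpr = g["flagged_high_risk"] / (g["flagged_high_risk"] + g["not_flagged"])
    print(f"{name}: {fpr:.0%} wrongly labeled high risk")

# A tool can look reasonably accurate overall while its errors fall
# almost twice as often on one group as on another.
```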

To address this kind of problem, Hickl and his team designed a way for Saffron, an AI platform, to examine and explain its conclusions. If there is a possibility of bias or error, the system recommends that a human being evaluate the results.
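
Saffron’s internals aren’t public, but the general pattern of escalating uncertain results to a person is easy to sketch. The function name and the confidence threshold below are hypothetical, illustrative choices:

```python
CONFIDENCE_THRESHOLD = 0.85  # hypothetical cutoff, tuned per application

def route_decision(prediction, confidence, explanation):
    """Accept confident predictions; send the rest to a human reviewer."""
    if confidence < CONFIDENCE_THRESHOLD:
        return {"action": "human_review",
                "prediction": prediction,
                "explanation": explanation}
    return {"action": "accept", "prediction": prediction}

# Usage: a borderline score is routed to a person along with the system's
# stated reasoning, instead of being acted on automatically.
print(route_decision("high_risk", 0.62, "few similar past cases"))
```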

Starting with Flawed Data

Understanding why bias occurs is essential to eliminating it. Inaccurate sampling strategies are one of the biggest culprits, resulting in machine learning based on skewed data.

Greater access to smartphones in higher-income neighborhoods introduced a bias in reporting potholes.

For example, the City of Boston used AI technology to analyze data collected from the Street Bump project, an app-based program that allowed users to report potholes. Based on the current condition of the roads, officials wanted to know where potholes were most likely to occur.

Surprisingly, the predictions showed significantly more potholes in upper middle-income neighborhoods. A closer look at the data revealed a different picture: the streets in those neighborhoods didn’t actually have more potholes; residents simply reported them more often because they were more likely to own smartphones.
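
The effect is easy to reproduce in a toy simulation: give two neighborhoods identical roads but different reporting rates, and the raw report counts still diverge. All the numbers below are hypothetical.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical neighborhoods: identical road conditions, but smartphone
# ownership (and so the chance a pothole gets reported) differs.
neighborhoods = {
    "higher_income": {"actual_potholes": 100, "report_probability": 0.8},
    "lower_income":  {"actual_potholes": 100, "report_probability": 0.3},
}

for name, n in neighborhoods.items():
    reports = sum(random.random() < n["report_probability"]
                  for _ in range(n["actual_potholes"]))
    print(f"{name}: {reports} reports, {n['actual_potholes']} actual potholes")

# A model trained on report counts alone would "learn" that higher-income
# streets are in worse shape, even though the roads are identical.
```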

Boston city officials eventually found a better solution: having garbage trucks, which drive through every part of town, collect the required information. When machines only have a portion of the information needed to make correct assumptions, bias is implicitly added to the results, Hickl said.

Test and Question Outcomes

So, how can bias be removed from AI technology? Hickl said the key is to empower the tools to act the same way humans do: test assumptions and ask for more evidence.
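
One concrete way to test assumptions is to audit the training data for underrepresented groups before a model learns from it. Here is a minimal sketch; the attribute name and the 10 percent floor are hypothetical choices, not a standard:

```python
from collections import Counter

def underrepresented(samples, attribute, minimum_share=0.10):
    """Return attribute values whose share of the data falls below a floor.

    The 10 percent floor is a hypothetical policy choice, not a standard.
    """
    counts = Counter(sample[attribute] for sample in samples)
    total = sum(counts.values())
    return [value for value, count in counts.items()
            if count / total < minimum_share]

# Usage: a skewed toy dataset raises a flag before any training happens,
# prompting the kind of "ask for more evidence" step Hickl describes.
data = [{"region": "urban"}] * 95 + [{"region": "rural"}] * 5
print(underrepresented(data, "region"))  # -> ['rural']
```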

With its ability to help analyze large volumes of real-time data, bias-free AI technology can make a difference in how we live, work and play.

“AI will be able to give us the guidance and feedback we need to live our fullest potential,” Hickl said.
