Are Humans Making AI Biased?

How human biases and perspectives affect the intelligence in AI.

Steering your car around a hopping kangaroo in Australia would be difficult for any human to manage, let alone for an engineer in Sweden. As part of its development of a self-driving vehicle, Volvo created a large animal detection system that uses the ground as a reference point. Moose in Sweden were easy to identify, kangaroos in Australia — not so much.

The problem is the kangaroo’s aerial movement. Because a hopping kangaroo spends so much of its time off the ground, the car’s ground-reference system couldn’t adjust to its unique way of moving or reliably judge its distance. So, Volvo began a new wave of data collection around kangaroos’ leaping motions to improve the detection system in its in-development autonomous car.
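For intuition, here is a minimal sketch, in Python, of the textbook flat-road, pinhole-camera approximation behind a ground-reference distance estimate. It is not Volvo’s actual system, and the camera height and focal length are made-up numbers, but it shows why an animal that spends much of its time airborne throws the estimate off.

```python
# Illustrative sketch (not Volvo's system): a flat-road, pinhole-camera
# distance estimate. An object standing on the ground whose contact point
# appears y pixels below the horizon is roughly d = h * f / y metres away.
CAMERA_HEIGHT_M = 1.4        # assumed camera height above the road
FOCAL_LENGTH_PX = 1000.0     # assumed focal length in pixels

def ground_plane_distance(pixels_below_horizon: float) -> float:
    # Flat-road assumption: the object's ground-contact row fixes its distance.
    return CAMERA_HEIGHT_M * FOCAL_LENGTH_PX / pixels_below_horizon

# A moose standing 20 m away projects about 70 px below the horizon.
print(ground_plane_distance(70.0))   # ~20 m, stable while it stays grounded

# A kangaroo mid-hop: its apparent ground-contact point shifts up in the
# frame, so the same formula overestimates its distance even though the
# animal has not moved.
print(ground_plane_distance(40.0))   # ~35 m
```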

“The [car’s] designers were from Sweden, where they don’t have kangaroos,” says Susan Etlinger, analyst at Altimeter Group, an industry research firm in San Francisco. “Any Australian can tell you that kangaroos are the most common cause of traffic accidents between humans and animals. But there was no way for an algorithm to know that without data.”

Humans inherently have biases — whether conscious or not — and these may affect the way data is recorded, input, and portrayed. Volvo’s engineering process is just one example of this, and kudos to them for testing their system in a variety of geographies to uncover some of their biases. Artificial intelligence (AI) systems, algorithms, and the data they use wield a lot of power these days, and the industry is still working out where its moral compass should point.

“Everybody at the big companies — whether that’s Adobe, or Facebook, or Amazon — knows that algorithms are not inherently neutral,” Susan says. “We have to use critical thinking, and we have to expose innate bias in data to remediate it in the products, services, and systems we develop.”

People problem?

One way to begin the remediation process is to look at the people who work on AI projects. It’s no secret that there’s a diversity challenge in the high-tech industry. The demand for computer and research scientists is on the rise, but the mix of students seeking degrees in those areas remains stagnant.

The Pew Research Center recently reported that the percentage of women working in science, technology, engineering, and mathematics (STEM) jobs has dropped to 25 percent today, down from 32 percent in 1990. Blacks and Hispanics are also underrepresented in tech: the share of blacks working in STEM jobs has risen only incrementally, from 7 percent in 1990 to 9 percent today, while Hispanic workers have nudged up from 4 percent to 7 percent over the same period.

With that recognition comes an appreciation of why diversity initiatives matter. Art Hopkins, a consultant at executive search and advisory firm Russell Reynolds Associates, was quoted in The Economist as saying: “Companies often treat recruiting diverse people as compliance or risk mitigation, rather than a business opportunity.”

If you consider the impact of new technology, particularly its ability to keep bias in check, it’s easy to see the business opportunity in a diverse workforce. Hiring people from a range of backgrounds and educational paths should open up more options, not limit the individual or the company.

While there’s no clear-cut answer to stamping out workplace bias, recognizing that it exists — not just among people, but in AI — is a necessary first step.

“When you look at the issues that have popped up in AI over the past couple of years, many of them come from the fact that data can only reflect the reality that it sees, which stems from the reality we live in,” Susan says.

The starting line: Data

So, where’s the starting point for companies to tackle the AI diversity debate? Just as humans aren’t born with negative associations toward people, places, or things, computers aren’t initially created with bias either.

“The big message for people is to understand that computers aren’t born with bias. We think of them as this pure calculation device, but then we feed them with all of this biased data,” says Matt May, head of inclusive design at Adobe.

Govind Balakrishnan, vice president of product development and executive sponsor of the data ethics in AI/ML work stream at Adobe, emphasizes how important the data cycle is.

“It starts with where you get the data from, who owns the data, what are the privacy implications of the data, and then what do we, as the company, do with that data?” says Govind. “These are all problems that we have to tackle as we talk about data, privacy, and ethics.”

Govind further offers this example as an area where bias could be addressed: A data set is created in North America, and an algorithm is designed based on that data set. Next, training centered on that data and algorithm takes place in other geographies, such as India or China.

“In this situation, you could end up with a very different data set and algorithm than if the training were done locally, with people across the globe using it and providing input,” he says.
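To make that gap concrete, here is a minimal sketch, in Python, of how a team might audit a model trained on one region’s data against other regions. The regions, feature names, and synthetic data are illustrative assumptions for the example, not a description of Adobe’s pipeline.

```python
# Minimal sketch: flag geographic performance gaps in a trained model.
# Regions, features, and thresholds here are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_region(name, n, shift):
    # Toy data whose true decision boundary differs by region.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 3))
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > shift * 3).astype(int)
    df = pd.DataFrame(X, columns=["f1", "f2", "f3"])
    df["region"], df["label"] = name, y
    return df

# The North American slice dominates the data, as in Govind's example.
data = pd.concat([
    make_region("north_america", 5000, shift=0.0),
    make_region("india", 300, shift=1.5),
    make_region("china", 300, shift=-1.5),
], ignore_index=True)

# Train only on the dominant region.
train = data[data.region == "north_america"]
model = LogisticRegression().fit(train[["f1", "f2", "f3"]], train["label"])

# Audit: accuracy per region. Large gaps suggest the data set and algorithm
# do not transfer cleanly to other geographies.
for region, grp in data.groupby("region"):
    acc = model.score(grp[["f1", "f2", "f3"]], grp["label"])
    print(f"{region:>15}: accuracy = {acc:.2f}")
```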

Potential biases around gender, race, age, location, profession, or income, to name a few, could skew the outcomes expected from any data set.

Anil Kamath, fellow and vice president of technology at Adobe, says his group is careful to examine the various models machine learning produces. He recommends peering into the process the way you would review a “black box” after the fact, so you can see how the machine is using the data to make its decisions.

“If a neural network has learned something that is biased against a certain ethnicity, or biased against a certain person, it might not be due to the personal beliefs of the coder. It could also be that they used the wrong data set or didn’t pay enough attention to what the data set contains,” Anil says.
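A simple version of that kind of after-the-fact check might look like the sketch below, which compares a model’s decision rates across groups and flags large gaps. The group labels, decision column, and the four-fifths threshold are assumptions made for illustration, not Adobe’s tooling.

```python
# Minimal sketch: inspect a model's decisions per group rather than
# trusting the "black box". All names and numbers are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000

# Pretend these are a model's approve/deny decisions on a held-out set,
# with a skew against group "B" of the kind biased training data can cause.
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
approve_prob = np.where(group == "A", 0.50, 0.35)
audit = pd.DataFrame({"group": group, "approved": rng.random(n) < approve_prob})

rates = audit.groupby("group")["approved"].mean()
print(rates)

# Rule of thumb: flag the model if any group's selection rate falls below
# 80 percent of the highest group's rate (the "four-fifths" rule).
if (rates / rates.max()).min() < 0.8:
    print("Warning: selection-rate disparity; inspect the data set and model.")
```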

To that, Matt suggests engineers need to go looking for unintended biases, rather than just sticking to their specifications or requirements for producing products.

“We, as engineers, build according to the specification without really thinking about who is on the other side of that interface,” he says. “With that comes a lot of the assumptions. Let’s say we’re doing computer vision for facial recognition, and we depend on finding two eyes on a face. Well, some people are born without one or both eyes. People can lose eyes to injury, or have a condition that requires them to be removed surgically. Assuming all people are made the same way like this can literally cause us to build systems that don’t recognize people as people.”
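As a toy illustration of the hard-coded assumption Matt describes, the sketch below contrasts a validation step that insists on exactly two detected eyes with one that does not. The detector output and data structures are hypothetical stand-ins, not a real face-recognition library.

```python
# Hypothetical face-validation step, illustrating how a spec-level
# assumption ("find two eyes") can exclude real people.
from dataclasses import dataclass

@dataclass
class FaceDetection:
    eyes_found: int
    mouth_found: bool = True

def is_valid_face_strict(face: FaceDetection) -> bool:
    # The spec said "find two eyes", so anyone with one or no eyes is
    # silently rejected and never recognized as a person.
    return face.eyes_found == 2 and face.mouth_found

def is_valid_face_inclusive(face: FaceDetection) -> bool:
    # Relax the assumption: require enough facial evidence overall,
    # not a fixed anatomy every person is presumed to share.
    evidence = face.eyes_found + (1 if face.mouth_found else 0)
    return evidence >= 1

user = FaceDetection(eyes_found=1)      # e.g., a person with one eye
print(is_valid_face_strict(user))       # False: excluded by the spec
print(is_valid_face_inclusive(user))    # True: still recognized
```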

Taking it to the top

Who’s responsible for mitigating AI bias? Susan says business leaders first need to understand how AI can amplify bias, whether they are the head of retail banking at a financial services company, in charge of digital marketing at a health care provider, or at the helm of an airline.

“You have to think about where you’re using AI, and what kinds of biases might be inherent in your system that could actually end up having adverse impact,” she says. “It’s important for executives to make themselves aware of the fact that AI is not inherently neutral, and that it can have unintended consequences. If you’re doing something like recommendation engines, or ad targeting, you really want to make sure that, by automating and creating programmatic systems, you’re not inadvertently doing something that’s going to backfire on your customers, your reputation, and your business performance.”

Susan points to Volvo’s autonomous car and kangaroo story as an example of the need for diverse teams.

“If you have a team of all one ethnicity, nationality, gender, or what-have-you, you’re going to miss things that are obvious to people who are different from you,” she says. “So, as we’re building systems that learn, diversity in our teams becomes key to customer experience.”

Future solutions

The road to a fully balanced employee base that codes without bias could remain rocky for a while, but positive developments are on the horizon.

“There is an opportunity for us to use AI and machine learning to validate, or to govern, what’s being done by others,” Govind says. “We could literally have the data sets (that these algorithms are based on) evaluated by a tool or an AI algorithm that checks to see if there’s bias in there.”

Govind continues, “I’m optimistic — very optimistic — that over the next year or two, either we will create these tools, or the industry will create these tools, to make it easier to audit our data sets to ensure that none of the requirements around privacy, ownership, or bias are violated by the algorithms and the data sets that they use.”
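One plausible shape for such a tool is sketched below: a pre-training audit that flags under-represented groups and large label-rate gaps in a data set. The column names and thresholds are illustrative assumptions, not an existing product.

```python
# Minimal sketch of an automated data-set audit: check how each sensitive
# attribute is represented and how the label is distributed within each
# group before any model is trained. Names and thresholds are illustrative.
import pandas as pd

def audit_dataset(df, sensitive_cols, label_col, min_share=0.05, max_gap=0.2):
    findings = []
    for col in sensitive_cols:
        shares = df[col].value_counts(normalize=True)
        for value, share in shares.items():
            if share < min_share:
                findings.append(f"{col}={value} is only {share:.1%} of the data")
        # Large gaps in label rates across groups deserve a human review.
        label_rates = df.groupby(col)[label_col].mean()
        if label_rates.max() - label_rates.min() > max_gap:
            findings.append(f"label rate varies widely across {col}: {label_rates.to_dict()}")
    return findings

# Example usage on a toy data set.
toy = pd.DataFrame({
    "region": ["NA"] * 90 + ["IN"] * 7 + ["CN"] * 3,
    "hired":  [1] * 60 + [0] * 30 + [0] * 7 + [1] * 3,
})
for finding in audit_dataset(toy, sensitive_cols=["region"], label_col="hired"):
    print("FLAG:", finding)
```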

Whatever happens in the near future, humans are currently the key to catching human-created AI bias.

“The responsibility is on us, much more, to recognize ethical issues as we’re developing these projects before we deploy something that has that unexplored bias institutionalized in it,” Matt says. “Because when a project comes out that’s based on biased data and then we trust its results, we are just reinforcing the effects of that bias.”

Read more about our future with artificial intelligence in our Human & Machine collection.