
The White House Wants To End Racism In Artificial Intelligence

Can education fix AI’s fairness problem?

Artificial intelligence can often be just as unintentionally prejudiced as its human creators, with potentially disastrous consequences. The US government thinks educating future programmers on AI ethics will help solve our computers' fairness problem.

The White House released its report on the future of artificial intelligence research in the US on Wednesday, and it contains a slew of recommendations. In a section on fairness, the report notes what numerous AI researchers have already pointed out: biased data results in a biased machine.


For example, artificial intelligence is being used by law enforcement across North America to identify convicts at risk of re-offending and high-risk areas for crime. But recent reports suggest that these systems disproportionately target or otherwise disadvantage people of colour.

If a dataset of faces contains mostly white people, or if the workers who assembled a more diverse dataset rated white faces as more attractive than non-white faces (even unintentionally), then any computer program trained on that data would likely "believe" that white people are more attractive than non-white people.

Read More: It's Our Fault That AI Thinks White Names Are More 'Pleasant' Than Black Names

This actually happened, by the way: an AI tasked with judging a beauty pageant picked nearly all-white winners from a diverse pool of entrants. The same principle could play out in day-to-day scenarios like job hunting, too.

"If a machine learning model is used to screen job applicants, and if the data used to train the model reflects past decisions that are biased, the result could be to perpetuate past bias," the White House paper notes.

"For example, looking for candidates who resemble past hires may bias a system toward hiring more people like those already on a team, rather than considering the best candidates across the full diversity of potential applicants."


One part of the solution, the report recommends, is for schools and universities to teach ethics in any AI-focused course. "Ideally, every student learning AI, computer science, or data science would be exposed to curriculum and discussion on related ethics and security topics," the report states.

Students should also be given the technical skills to apply this ethics education in their machine learning programs, the report notes. While this is an emerging problem that nobody has a complete answer to yet, some researchers have taken first steps toward a kind of "fairness algorithm" that can regulate AI systems internally to ensure unbiased results, even when they rely on biased data.
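
As an illustration of what such a safeguard might look like, here is a toy sketch of one idea from the fairness literature: post-processing a model's scores with group-specific thresholds so that both groups are selected at the same rate (a criterion known as demographic parity). This is a simplified, hypothetical example with synthetic data, not the specific algorithm of any researcher the report cites.

```python
# Toy sketch of one "fairness algorithm" idea: pick a per-group decision
# threshold so each group is selected at roughly the same target rate.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)
# Synthetic model scores in which group 1 systematically scores lower,
# standing in for a model trained on biased data.
scores = rng.beta(5, 2, n) - 0.15 * group

def parity_thresholds(scores, groups, target_rate):
    """Per-group threshold such that each group is selected at ~target_rate."""
    # The (1 - target_rate) quantile of a group's scores selects roughly
    # the top target_rate fraction of that group.
    return {g: np.quantile(scores[groups == g], 1 - target_rate)
            for g in np.unique(groups)}

thresholds = parity_thresholds(scores, group, target_rate=0.3)
selected = np.array([s >= thresholds[g] for s, g in zip(scores, group)])

for g in (0, 1):
    print(f"group {g}: selection rate {selected[group == g].mean():.2f}")
# Both rates come out near 0.30, despite the skewed underlying scores.
```

Group-specific thresholds are only one of several competing definitions of fairness, and the different definitions can conflict with one another, which is part of why formalizing fairness remains an open research problem.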

Understanding the fine-grained inner workings of extremely complex computer programs is still a significant challenge for AI researchers, so we clearly have a long way to go before we can formalize, and then engineer, some sort of fairness safeguard.

But with billions of dollars on the line for major corporations investing in AI, it's unlikely anybody is going to take a step back before pushing new machine learning capabilities to market at this point. The robots are coming, and we need to think about how to ensure they don't reproduce our very human biases and prejudices.
