AI Could Resurrect a Racist Housing Policy

And why we need transparency to stop it.

Data has always been a weapon. Between 1934 and 1968, the US Federal Housing Administration systematically denied loans to black people by using entire neighbourhoods, colour-coded by perceived risk, as its decision-making metric. Modern computer scientists might call this intentionally "coarse" data.

This practice, known as redlining, had damaging financial and social effects that spanned generations of black families. And now, experts worry that similar practices could return in the algorithms that make decisions about who poses a risk to their community, or, rather chillingly, who deserves to be granted a loan.

"In historical redlining, banks would use the supposedly neutral feature of where you live, which doesn't sound racially discriminatory at the outset, to deny loans to qualified minority applicants," Sam Corbett-Davies, a PhD student at Stanford University who co-authored a recent paper on the topic, told me over Skype.

"If, on average, everyone in a minority neighbourhood is less likely to pay back a loan," he continued, "you can say 'Look, it's a true reflection,' but the problem is they're not using all the information there."

Read More: It's Our Fault That AI Thinks White Names Are More 'Pleasant' Than Black Names

The concern over algorithmic redlining grew out of a ProPublica investigation that looked at data from Broward County, Florida, where courts use an algorithm called COMPAS to determine defendants' risk of reoffending. ProPublica found that the algorithm consistently marked far more black people as high risk than whites, even when they did not go on to reoffend. That investigation kicked off an emerging field that asks what constitutes fairness, mathematically speaking, and how to code it into a machine. The end goal is to make sure computers don't reproduce human prejudices.
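
To make the kind of disparity ProPublica measured concrete, here is a minimal sketch in Python using invented numbers rather than the actual Broward County data. It compares, by race, how often people who did not go on to reoffend were nonetheless labelled high risk.

```python
# Illustrative sketch only: the numbers below are invented, not COMPAS or
# ProPublica data. It shows one common fairness check: comparing false
# positive rates (people labelled high risk who did not reoffend) by group.
import pandas as pd

df = pd.DataFrame({
    "group":      ["black"] * 4 + ["white"] * 4,
    "high_risk":  [1, 1, 1, 0, 1, 0, 0, 0],   # the algorithm's label
    "reoffended": [1, 0, 0, 0, 1, 0, 0, 0],   # what actually happened later
})

# False positive rate per group: P(labelled high risk | did not reoffend).
did_not_reoffend = df[df["reoffended"] == 0]
fpr = did_not_reoffend.groupby("group")["high_risk"].mean()
print(fpr)
# A large gap between the two rates is the kind of disparity ProPublica reported.
```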

A paper authored by Corbett-Davies and colleagues from Stanford, the University of Chicago, and the University of California, Berkeley, published on the arXiv preprint server this week and awaiting peer review, proposes some new formal definitions of fairness and raises the spectre of intentional or unintentional algorithmic redlining.

In the paper, the researchers demonstrate how even a "fair" algorithm in a pretrial setting can be manipulated into favouring whites over black people by a malicious designer who adds digital noise to the input data of the favoured group (arrest rates, say). This creates artificially "coarse" data for whites, which causes the algorithm to rate fewer of them as high risk.
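
To see roughly how that manipulation works, consider a toy model. The sketch below is not the paper's code; it assumes a simple setup in which noise is added to one group's informative feature, so a rational risk model shrinks that group's estimates toward the average and fewer of its members cross a hypothetical high-risk cutoff.

```python
# Toy demonstration (not the paper's actual method or data): coarsening one
# group's feature with random noise pulls that group's risk estimates toward
# the mean, so fewer of them clear a high-risk cutoff.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_risk = rng.normal(0.0, 1.0, n)                    # latent riskiness
clean_feature = true_risk                              # feature as recorded
noisy_feature = true_risk + rng.normal(0.0, 2.0, n)    # same feature, coarsened

def estimated_risk(feature, noise_variance, signal_variance=1.0):
    # A rational risk model shrinks noisier observations toward the mean.
    return feature * signal_variance / (signal_variance + noise_variance)

cutoff = 1.0  # hypothetical "high risk" threshold
print((estimated_risk(clean_feature, 0.0) > cutoff).mean())   # roughly 16% flagged
print((estimated_risk(noisy_feature, 4.0) > cutoff).mean())   # far fewer flagged
```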

While the racist housing policies of the FHA used coarse data to harm black people, the researchers contend that the principle is the same when using coarse data to benefit a favoured group—whites, in this example. And from the perspective of the court official using the algorithm, nothing would look amiss.

Of course, there's absolutely no evidence to suggest that Northpointe, the company behind COMPAS, is intentionally designing the algorithm for redlining. But redlining doesn't have to be intentional, Corbett-Davies and his colleagues wrote. It could also be the result of "negligence or unintentional oversights."

Northpointe did not respond to Motherboard's request for comment.

An example Corbett-Davies gave me in a follow-up email involved another algorithm for deciding defendants' risks of recidivism, though he wouldn't say which. In that case, he wrote, the designers noticed that not having "stable housing" was a risk factor for white male defendants, but not for women or people of colour. So, the designers removed the metric of stable housing entirely.

"However, because this feature was predictive of recidivism for whites, excluding it from the algorithm probably made it slightly harder to distinguish risky vs non-risky whites," Corbett-Davies wrote. "As a result, it's likely that slightly fewer whites got a high risk score (slightly fewer also got a low risk score, because everyone was pushed toward the middle), and so fewer whites were detained than would have been detained if stable housing was factored in."

"The fact that we can't investigate the COMPAS algorithm is a problem"

Recent work has also shown that a similar phenomenon to this kind of redlining could arise by training the algorithm on a database that doesn't contain enough information on people of colour, or that itself contains explicit prejudices.
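
One way to picture that: train a generic classifier on invented data in which one group is badly underrepresented and follows a different pattern. The model ends up fitting the majority group and misjudging the minority one. The sketch below is purely illustrative and does not reflect any real system's data.

```python
# Invented data, generic model: if one group is scarce in the training set
# and its risk pattern differs, the classifier mostly learns the majority
# group's pattern and misjudges the minority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def sample_group(n, slope):
    x = rng.normal(0.0, 1.0, (n, 1))
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-slope * x[:, 0]))
    return x, y

x_major, y_major = sample_group(9_500, slope=1.5)   # well represented
x_minor, y_minor = sample_group(500, slope=-1.5)    # underrepresented, different pattern

model = LogisticRegression().fit(np.vstack([x_major, x_minor]),
                                 np.concatenate([y_major, y_minor]))

# Fresh samples from each group show the gap in predictive accuracy.
print(model.score(*sample_group(5_000, slope=1.5)))   # decent for the majority group
print(model.score(*sample_group(5_000, slope=-1.5)))  # much worse for the minority group
```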

We've seen this already in algorithms that pick white winners for an international beauty competition, or decide that traditionally black names are less "pleasant" than white names. Researchers have also found that one popular image database used to train algorithms contained prejudices like sexism and racism. Algorithms trained on this database run the risk of making decisions that reproduce those judgements.

So, how do we stop this from happening? According to Corbett-Davies, having access to the algorithm and its training database is the only way to know for sure whether an algorithm is redlining. But that's not always possible when it comes to algorithms owned by private companies, like COMPAS.

"The fact that we can't investigate the COMPAS algorithm is a problem," Corbett-Davies said over Skype. "We'd really need to get in there to see if it's missing anything. I don't think they'd be doing anything to intentionally discriminate, but it's possible that they've just left something out."

Correction: An earlier version of this article stated that the researchers' method for creating coarse data for whites "caused the algorithm to rate them as being less of a risk on average than black people." Rather, the coarse data caused the algorithm to rate fewer whites as high risk, because the noise made both low- and high-risk whites appear to have a more average risk profile overall. This article has been updated to reflect this.