Digital Surveillance Is Class Warfare
The poor are victimized by devices and the companies that profit from them.
The many phones, laptops, and sundry gadgets most of us use every day track our location, our habits, and our preferences. The privacy threats may be pervasive, but not everybody is affected equally: the poor and marginalized get the worst of it.
That's according to a new study from researchers at the Data & Society Research Institute, which surveyed 3,000 US adults about their online habits. It concluded that smartphone users living in households making under $20,000 annually use their phone as their main internet-browsing device much more often (63 percent) than smartphone users in wealthier households (21 percent). This finding alone doesn't paint a complete picture, however: a separate 2017 Pew Research Center study also showed that black and Hispanic people are more reliant on their phones than white people.
The problem with this, though, is that your phone is a little spy in your pocket, much more so than the average laptop. If you're not meticulous with your privacy settings, apps can send all sorts of personal information back to their corporate makers, including your location. The poor who rely on smartphones are thus more exposed to this surveillance.
"Thanks to reduced access to certain privacy controls on an app versus its desktop version, and these devices' vulnerability to data collection via location sharing and in-store tracking, a greater amount of data [may be] collected on marginalized groups," Data & Society researcher and study co-author Mary Madden said in an interview.
The survey didn't tackle why this disparity in phone use exists, but it might be because many providers offer sign-up deals that include a cheap smartphone while desktops still cost hundreds of dollars and require a home internet connection. Wealth disparity compels people to use a technology that further exposes them to the very same economic forces that disadvantage them, in a way that can best be described as systemic.
Data collected from poor and marginalized people can be used to target ads to people in rocky circumstances, Madden said—Facebook recently presented research to advertisers on how the platform can identify depressed teens—or to train algorithms that could eventually be used to discriminate against these groups when, for example, deciding insurance premiums or bail eligibility. Much work has been done to show that biases in data can flow upward, so that decision-making algorithms manifest certain terrible human prejudices.
"It's very difficult for people to have any recourse in these decisions or understand the reasons for them not getting a call back on a job application, or being sifted out of a pile of college applicants because it was determined they probably wouldn't complete four years at the school, or having a police officer show up at your doorstep," Madden said.
Trying to patch up digital privacy and security is a never-ending nightmare, so it's unlikely that consumer phones are ever going to be totally secure—for the rich or the poor. What can be done, however, is mitigating the risks associated with smartphone use. Advocates of more transparent decision-making algorithms have called for companies that make AI to release detailed information on their systems, for example.
But since the problem to begin with is stark economic inequality, maybe we should start fixing that, too.
Correction: This study looked specifically at smartphone users living in households that earn under $20,000 annually, not people in general, and the piece has been updated to reflect that. In addition, the researcher's first quote has been amended to better reflect that the paper did not include a measure of the overall amount of tracking to which this group is subjected.