


There Are No Guardrails on Our Privacy Dystopia

If tech is going to infiltrate, influence, and shape all of society, it is unacceptable for tech and pure market forces to decide the limits of the surveillance state.

David Golumbia is an associate professor of English at Virginia Commonwealth University and the author of 'The Politics of Bitcoin: Software as Right-Wing Extremism.' Chris Gilliard has a PhD from Purdue University’s Rhetoric and Composition Program and teaches at Macomb Community College. He runs Hypervisible.

Late last year, one of us started a thread on Twitter asking people for examples of invasive and abusive things tech companies had done.


We work and speak with educators, especially in higher education, who are interested in the use of technology. Chris posed the question in the hope of generating a list of examples that instructors could show to those unfamiliar with the rampant abuses of the tech industry, making it clear why educators need to approach the incorporation of technology into education with extreme caution.

To seed the list, Chris included several fairly well-known examples of things tech companies had done, including Facebook outing sex workers through its “people you may know” feature and Uber using “God View,” which locates all active Uber users in a particular city, to impress party-goers. Uber, Facebook, and Google were, of course, multiple offenders on the thread, which accumulated around 500 replies, many of them as intrusive and jaw-dropping as the examples above.

Among the responses, many are well-known, while some remain (to this point) merely rumors, but the fact that it’s hard to tell the difference is part of our point. Facebook, Twitter users asserted, tracks non-users of the platform; tracks users even when they are logged out; retains copies of deleted, unsent messages; continues to look for ways to deploy facial recognition despite user protests; performs experiments on its users’ emotions; works with repressive regimes to target dissidents; and presumes that being a #BlackLivesMatter supporter means a user is black. Many users provided evidence that contributes to persistent rumors that Facebook and others listen in on phone conversations even when the apps aren’t open and phone services aren’t connected to them, though Facebook may not even need to do that to achieve its alarming results.


Many services are purported to collect and even sell information that a reasonable person might imagine is protected health data. A for-profit service tracks and sells prescription data. An app was proposed to “watch” for suicidal intentions on social media. A vibrator maker tracked its users’ sex lives without disclosing that it was doing so. The collection of DNA data seems ripe for abuse, and may already have been exploited. An app to help women track their periods sells that data to aggregators.

Invasive uses of devices extend beyond health data. Roomba proposed selling maps of the homes in which its vacuum cleaners operate. The gyroscopes that track the motion of smartphones turn out to be potentially usable for speech recognition. Samsung TVs were found to be eavesdropping on the people watching them. An Uber executive revealed at a party that the company’s “God View” includes a remarkable amount of information about drivers and passengers; that same data famously led Uber to write about how it could tell when passengers were having one-night stands. A school used the cameras in laptops it distributed to spy on its students. Ashley Madison’s business model may have been partly dependent on something close to blackmailing its users because they had disclosed they cheated on their partners. Even something as straightforward as data about user listening habits on Spotify turns out to be usable in ways that many might consider unsavory.


With these examples in mind, when Chris recently gave a talk on digital redlining at the University of Oklahoma, he concluded with a “Platforms WTF” segment, in which he presented the audience with several tech scenarios and asked them to guess whether they were true or false.

Among the scenarios:

  • Amazon remotely deleted George Orwell’s books from all Kindles
  • Uber used its data to calculate which users were having one-night stands
  • ancestry.com has bought dozens of graveyards in order to extract and monetize the DNA of the corpses
  • A high-tech fashion company sells luxury items that are intentionally single-use: for instance, a Louis Vuitton bag that is ruined by ink capsules after GPS data shows it has been carried once
  • A college president advocated using predictive analytics to determine which students might fail so the school could encourage those students to drop out

If you are keeping score at home, the answer key reveals T, T, F, F, T.

Even faced with the most absurd scenarios, people had this kind of “I think this is true but I really hope it’s not” reaction. No one in the audience said, about any of them, “no way is this true.” Instead, there was unease and doubt: “Maybe I just haven’t heard about this one yet,” their faces seemed to say.

We are already at the point where it seems that everything is permissible in tech. There are almost no limits set by conscience or law, only by what is currently technically possible.


As a thought experiment, try to think of the most absurd, invasive, extractive thing you can imagine people might do with available technology. Then ask yourself whether a tech company has already done something similar. If it hasn’t, as far as we know, is that because of some regulation or ethical code, or simply because the technical capabilities haven’t yet made such a scheme profitable? We have fairly vivid imaginations, and every scheme we’ve thought up has either been done or is in the works. So the question becomes: how does one operate in a society where everything is permissible?

In an essay from the early 1960s called “The Technological Order,” in which he picked up on some of the themes of his famous 1954 book The Technological Society, the French philosopher and theologian Jacques Ellul wrote:

And here I must emphasize a great law which I believe to be essential to the comprehension of the world in which we live, viz., that when power becomes absolute, values disappear. When man [sic] is able to accomplish anything at all, there is no value which can be proposed to him; when the means of action are absolute, no goal of action is imaginable. Power eliminates, in proportion to its growth, the boundary between good and evil, between the just and the unjust.

Ellul describes exactly the condition we are in, and names the guiding ethos of tech for the last 10 years: “permissionless innovation” and “move fast and break things.” But maybe this is less about the ethics of developers, coders, and technologists, and more about the expectations of society.


We are supremely skeptical of consumer-empowerment narratives about how certain technologies take hold. That said, there’s something to the notion that if there were large-scale resistance to a particular technology, or to a particular privacy invasion widely seen as a bridge too far, we might set ourselves up for different outcomes. If we accept that everything is permissible and resistance is futile, then both of those things will become even more true than they are now.

In fact, we have some evidence that these things can happen. Witness the successful popular resistance to Google’s massively invasive Google Glass, whose adoption advocates initially told us was inevitable. Witness the failure of the founders of a truly awful app called Peeple, which was designed to let people assign ratings to other people without their consent (and without those people even using the app), to overcome the loud public outcry against the invasive nature of the product.


Yet these victories often prove fleeting. Despite the defeat of Google Glass, stories of its rebirth appear with some regularity; in late 2016, Snapchat introduced a product called Spectacles that appears to have failed for reasons that had more to do with the market than with public reaction; and most frighteningly, Chinese police now wear glasses that incorporate facial recognition tech. Even Peeple appears to be making a return, based in part on the promise of blockchain, through an app called FriendZ that shockingly draws inspiration directly from a Black Mirror episode in which people are rated by their peers on an app; somehow, the app’s makers ignore, or arrogantly assert they can bypass, the fact that the episode depicts a dystopia.

What all of this points to is what some tech critics have been saying for a long time, and what some inside tech have begun to say recently as well. If tech is going to infiltrate, influence, and shape all of society, it is unacceptable for tech alone, or tech alone using pure market forces, to decide what tools are or are not acceptable. When everyone has a stake in the outcome, everyone must have input into the initial decisions about whether or not to pursue specific technologies.

We already have at least a stub of this idea when it comes to obviously dangerous but also potentially beneficial technologies in medicine and genetic engineering, as well as dangerous chemical, biological, and nuclear weapons. In digital technology, we have insiders including ex-Googlers Tristan Harris and James Williams and early Facebook investor Roger McNamee, among others, leading the Time Well Spent movement, which promises to deter companies from developing and selling technologies that harm us by invading privacy, among other problems.

We have numerous ex-Facebook investors and employees drawing attention to the socially destructive effects of Facebook in particular and social media in general. There is every reason to be skeptical about these efforts, but if the industry is going to change as radically as it needs to, efforts like this are necessary. By themselves, however, they are not sufficient: serious government regulation of tech companies is absolutely necessary. Right now, the EU is enacting its General Data Protection Regulation (GDPR), which may be an example of government moving in the direction it must: proactively reviewing the ways tech companies propose to use private data before they do so, and requiring serious changes to the way tech companies do business in the EU and possibly worldwide. Unsurprisingly, the GDPR is receiving very little coverage in US media.

The Time Well Spent movement has proposed a “Hippocratic Oath for technology”: first, do no harm. Tech companies, and tech advocates more generally (even those outside of companies), have demonstrated that they are neither capable nor responsible enough to imagine what harms their technologies may do. If there is any hope of building digital technology that does not include an open door to wolves, recent experience has demonstrated that this must include robust engagement from the non-technical, expert and amateur alike, not just in response to the effects of technologies, but to the proposed functions of those technologies in the first place.