Science Fiction Is Not Social Reality

The tech industry draws its inspiration for building our world from linear, scripted science fiction stories.

S. A. Applin, Ph.D. is an anthropologist whose research explores the domains of human agency, algorithms, AI, and automation in the context of social systems and sociability. You can find more @anthropunk.

“The trouble with a kitten is that eventually it becomes a cat.” — Ogden Nash

Science Fiction is great. It is inspirational, it is fun to read, and it gives humans incredible ideas. However, Science Fiction is not a manual for making real things in the world without oversight, accountability, and knowledge of how its outcomes will affect society.


Facial Recognition technology is already with us. Amazon’s Rekognition gives developers a fairly easy-to-use toolkit, and Microsoft’s Azure can identify emotions as well as who is experiencing them. Apple “uses advanced face recognition technology to group photos by person" in iPhoto and on phones, and offers Face ID for unlocking phones with the user’s face.

Other companies use the technology as well. Even Adobe offers Facial Recognition as part of its Photoshop Lightroom Classic CC. We’ve started to see the technology creep into various parts of our lives a bit at a time. So far, in the US, we might not have noticed Facial Recognition much, except as a novelty on our devices, but as we learn about them, its deployments are beginning to give us pause and concern: in K-12 schools, on the back end of police departments, and as a proposed augmentation to the body cameras police officers are already wearing.

One central issue for Facial Recognition and other new, unproven technologies is that when big companies, with the reach to touch millions of people, decide to “road test” this type of big surveillance, its implications become very real, very fast. Last year I wrote that humans are becoming “algorithm chow,” in that the data that we generate feeds the algorithms in companies such as Amazon, which in turn use our ‘data food’ to track and profile us, and use our footprint for surveillance. Last week, in an op-ed at The Guardian, Peter Asaro, Kelly Gates, and other academic researchers wrote that Amazon is not only building a “facial recognition infrastructure,” it is also collecting “a huge amount of personal information about people, including … what they watch and read” and, via Echo and Alexa, “what people say in their homes.” They describe this data practice threat as a “massive, automated surveillance apparatus” that requires an “equally vast system for oversight,” which has “not been developed.”


It likely won’t be. To understand why, it is important to understand who is developing these devices, and why they are building what they are building. Their wealth, resources, and reach surpass those of governments. Additionally, regulations take time to form when new technology is introduced. This is why there have been so many calls to action from scholars, ethicists, and some tech employees. We’re worried, and rightly so.

My research examines human agency and choice in the context of automation, and my doctoral dissertation studied Silicon Valley makers and producers over a period of eight years. I understand innovation, and in particular “disruptive innovators.” There are two likely reasons these people, and what they are producing, are dangerous: the literal interpretation of Science Fiction as an influence, and the misapplication of the Pareto Principle.

Science Fiction is not social reality

Tech creators and tech billionaires are influenced by Science Fiction for different reasons. Some of these have to do with the narrative of the ‘hero outsider’ who uses their knowledge and skill to fix a problem, either by engineering a solution or by adapting tools and technology in new ways. Other reasons have to do with creating a Utopian society that is “bettered” through automated, time-saving devices. The doors in Star Trek, and the just-in-time knowledge and data access in any number of films (Blade Runner, Star Wars, Minority Report, etc.) and books, are all delivered seamlessly in Science Fiction. When things do break, there is often an engineering solution. Even when the technology turns against mankind, as it did in 2001: A Space Odyssey, the gadgets and gear are shown as sufficiently inspiring that, even though it was a warning film of sorts, that element becomes minimized in favor of recreating the “cool technology.”


Thus, one myth is that in Science Fiction, a problem is solved because human engineers were there to apply engineering and tech solutions; another myth is that automation is seamless and knowledge is easily delivered, in relevant contexts, to the appropriate parties. The reality is that in Science Fiction, problems are solved because the writers wrote a solution; physical objects worked in perfect just-in-time automation; and data and knowledge were delivered accurately in real time, because all of those solutions were described as working in ways that are not fully formed, or real.

Additionally, those tech creators and tech billionaires who are influenced by Science Fiction carry an expectation bias: because things in Science Fiction work in the society and culture of those invented, future-set universes, they assume those things will work in our real, present-day lives, without much testing or oversight.

Gadgets, services, and technologies work in Science Fiction because it is fiction. They work because it is a narrative, and as such, their authors or filmmakers showed them working. They work because in fiction, it is very easy to make things work, because they aren’t real and don’t need to actually work.

Realizing the unreal from fiction will not make that realization work in the same way in real life. It can’t. The context, timeframe, and people are different. Most importantly, Science Fiction is fiction.


This concept applies to more than facial recognition. It also applies to so many other technologies that work for some, but not all, of society. These include: autonomous vehicles, drones, jetpacks, flying cars, virtual reality, augmented reality, voice recognition, recorded chat-bot conversations, voice-controlled smart home devices such as Google Home, Amazon’s Alexa-controlled Echo, Apple’s HomePod and Siri, and Microsoft’s Cortana, plus Google’s Smart Compose, Google Duplex, dockless bikes and scooters, killer robots, delivery robots, automated supermarkets, rideshare software and controls, and many, many more technologies that are actively shaping our world. Some of these we don’t see, but they manage our infrastructure. The industrial Internet of Things, manufacturing robots, logistics algorithms, and more all work to automate our lives in ways we cannot see.

Facial Recognition technology, and any other data collection, surveillance, and tracking technology from any of the biggest companies (Google, Apple, Microsoft, Amazon, etc.), is instantiated as a real-life working plan of action, creation, invention, and deployment based upon a collection of fictitious narratives that only worked because, when things are made up, they work.

Productions based on Science Fiction do not have to have interoperability issues in Science Fiction society. They do not have to work with people’s different cultural backgrounds and needs, they do not have to work differently for the aged, or disabled, and they do not have to work differently for different genders.


The technologies described in Science Fiction don’t have to work at all — except in the fictitious way they are described to have worked.

Companies that have invested in and invented technologies based on mythology set in a mythical future are trying to realize them now, in the present, within a society that hasn’t yet evolved to adapt to them and maybe never will.

Furthermore, humans are being told to accept that our faces are going to identify us, on top of all the other adapting to automation we’ve been doing for decades. With machine learning, big data, and AI, we are adapting even more.

Part of our adaptation is adjusting continually to things that were created based on fiction, and put into reality, that don’t actually work the way they were “shown on TV” or in the novel or film that inspired their creators. We are constantly helping algorithms function, or finding ways around them to live our lives. We find some of these amusing, and some of them horrifying (particularly when the algorithms kill us, or put us in harmful situations). It’s a threatening mess that is taking more and more of our time.

But all of them in aggregate, in their various hardware and software forms, should be very worrying to us. We have allowed a wealthy, consolidated power base (tech billionaires and the companies they control) to build our present, and our future, based in part on fantasized stories of a mythical future.


These are untested. These are unrealized. These are being deployed on a massive scale.

These do not fully work.

More importantly, each instantiation of realized Science Fiction or fantasy may or may not be compatible with the visions of others doing the same thing. As such, humans, and the real world we inhabit with our bodies, cultures, and societies, become a collective test-bed for multiple technology “projects” at any given time. Technologies based on Science Fiction are not created in any synchronous manner; those deploying them do so while others deploy similar or different technologies at the same or different times. This is a form of PolySocial Reality (PoSR), a model of the confluence of constructs with competing outcomes from differing synchronous and asynchronous communications. Collectively, these activities compromise successful connection, and thus successful cooperation, for many.

Our governments as currently constructed can’t protect us; they don’t even wholly understand the problem. Untested surveillance technology can cause great harm to society. To some extent, protest measures have been effective: the employee who petitioned Microsoft not to deploy Azure with ICE got support from over 300 scholars and researchers. This in turn led Microsoft to ask Congress to have the US Government regulate the use of Facial Recognition technology. However, Microsoft did not appeal for regulation first; it took the petition and its associated consciousness-raising from within for Microsoft to take action, though the company does seem fully on board now. Additionally, different initiatives from different governments try to protect their citizens in different ways in different regions. Governments seem to be more strictly bound and held accountable by geography than the Silicon Valley makers, who, even reined in with some new legal protections, still manage to reach millions.


The Pareto Principle

Another problem within this large-scale aggregate deployment of realized Science Fiction is that the engineers building these technologies, and the business people promoting them, often rely on various interpretations of the Pareto Principle (also known as the 80/20 rule). Vilfredo Pareto was an economist who studied income distribution. In 1906, Pareto observed the pea plants in his garden and noticed that 20% of them generated 80% of the yield. He used these ideas to study uneven wealth distribution, with a more famous example showing that 20% of landowners in Italy owned 80% of the land. Pareto showed that, from an economics standpoint, inputs and outputs are unevenly distributed.

Joseph M. Juran was an engineer and management consultant. He came across Pareto’s work and applied it to his field of mechanical engineering and quality control. Juran adapted Pareto’s discovery, renaming it the Pareto Principle. The Pareto Principle was used as a mechanism to show that a small percentage of labor could generate a larger percentage of results. The object of applying the Pareto Principle was to focus on the smallest amount of labor or effort that could create the largest yield. The Pareto Principle was adopted into business and management curricula, and engineering continued to use it as a way to explain product development decisions. Over time, multiple interpretations of the Pareto Principle in software design and marketing have created a belief system among some that a "minimum viable product" is good enough to ship. In this way, the 80/20 rule has been modified through applied contexts to support releasing incomplete and untested software.
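The reasoning above can be made concrete with a small sketch of how the 80/20 framing gets used to justify a "minimum viable product." The feature names, effort scores, and value shares below are entirely invented for illustration; the point is only the shape of the calculation: ship the cheap features that cover most of the nominal "value," and the expensive work that serves outliers never gets built.

```python
# Hypothetical sketch of 80/20-style product triage. All names and
# numbers are invented for illustration, not drawn from any real product.
features = {
    # feature name: (engineering effort, share of nominal user value)
    "core photo tagging":   (2, 0.55),
    "common-case matching": (2, 0.25),
    "low-light faces":      (8, 0.08),
    "aging and disability": (8, 0.07),
    "cross-cultural names": (10, 0.05),
}

total_effort = sum(effort for effort, _ in features.values())

# The "80/20" cut: ship only the cheap wins, skip the costly outlier work.
shipped = {name: v for name, v in features.items() if v[0] <= 2}

effort_spent = sum(effort for effort, _ in shipped.values())
value_covered = sum(value for _, value in shipped.values())

print(f"effort: {effort_spent}/{total_effort}")  # prints "effort: 4/30"
print(f"value covered: {value_covered:.0%}")     # prints "value covered: 80%"
```

Note what the arithmetic hides: the skipped rows are exactly the cases, aged, disabled, low-light, cross-cultural, that the surrounding text argues make up the real world.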

Not surprisingly, this is why so many people have to work so hard at cooperation when high levels of automation are present that hadn’t anticipated their needs. So much software doesn’t account for outlier cases, which is to say the real world where humans live and dwell. This means that software that does not work 100 percent accurately and robustly is being deployed to police departments and governments, as well as to private businesses. This is dangerous.

The failure rate for Facial Recognition is as high as 92 percent in some cases, and less than 50 percent in others, and in one study a US tech company claimed that its facial-recognition system had “an accuracy rate of more than 97 percent, although this rate was the result of testing with a 77 percent male and 83 percent Caucasian data set.” Thus, technology companies are deploying technologies rooted in a constellation of multiple fantasies that were never fully formed or considered.
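A short sketch shows how a demographically skewed test set can produce an impressive-sounding aggregate accuracy. The group shares and per-group accuracies below are invented for illustration, loosely mirroring the majority-male, majority-Caucasian benchmark described above; they are not the figures from any real study.

```python
# Hypothetical sketch: aggregate accuracy as a weighted average over
# demographic groups. Shares and accuracies are invented for illustration.
groups = {
    # group: (share of test set, per-group accuracy)
    "majority group": (0.83, 0.99),
    "minority group": (0.17, 0.90),
}

# Overall accuracy is dominated by whichever group dominates the test set.
aggregate = sum(share * acc for share, acc in groups.values())

print(f"aggregate accuracy: {aggregate:.1%}")  # prints "aggregate accuracy: 97.5%"
```

Here the minority group's error rate (10 percent) is ten times the majority's (1 percent), yet the headline number still clears 97 percent, because the under-served group barely weighs on the average.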

The authors of the narratives that inspired these capabilities do not have any accountability for their creative work and its threats to (and failures in) society—and they shouldn’t have to. However, tech billionaire readers are clearly treating Science Fiction as a specification guide, rather than the intended story, and therein lies the problem.

If the automated world is going to work for people in the present, our highest chance of successful “Science Fiction-inspired” living will come only if we are in that 97 percent accuracy group (which, incidentally, matches the profiles of the tech billionaires controlling most of the companies creating these technologies). Those of us who aren’t in that 97 percent (and many of us are not), and who lack the power or wealth of the tech elite, have become test subjects for the whims of those inspired by Science Fiction to shape our real world into that of fantasy, and we have no choice but to be a part of this experiment.

The time has come for these labs, offices, startups, corporations, and entities to hire Social Scientists to help them understand that their visions may not mesh with our realities, and may need to be reined in or adapted before hitting the market—if they should hit it at all. This will only get worse as more and more technology is deployed without oversight.

We’re overwhelmed, overstimulated, and over-adapting far more rapidly than we can sustain. As a result, we are developing all kinds of strategies to get through our days, and very few of them are realized (or function) with the easy automation illustrated in Science Fiction.