


We Need to Tell Better Stories About Our AI Future

It's not as simple as man versus machine.

Discussions about the ethics, safety, and societal impact of artificial intelligence keep returning to the same cultural touchstones: AI stories that warn of worst-case scenarios. Whether in press coverage or in policy position papers, we keep going back to the same stories. We need to tell more diverse and realistic stories about AI if we want to understand how these technologies fit into our society, today and in the future.


In Kubrick's 2001: A Space Odyssey, the calmly voiced HAL 9000 locks Dave out of the ship. In The Terminator, the AI defense system Skynet becomes self-aware and initiates a nuclear holocaust to wipe out the human race. In Ex Machina, the AI robot Ava becomes self-aware and passes a new kind of Turing test by convincing a human to empathize with her and help her escape.

These prominent AI stories pit man against machine, relying on one of the most fundamental forms of narrative conflict to move the story forward. It's not surprising that the stories we tell tend toward this form: we position ourselves as the protagonist and cast the AI technology as the antagonist.

At first glance, Minority Report also reads like another man versus machine story, but on closer reading the conflict lies less in understanding or exposing the pre-cog technology itself than in confronting and escaping the society that implements this deterministically judgmental technology. The technology is the catalyst for action, but the real conflict of Minority Report lies between man and the state: man versus society.


But within the AI technology industry, the story arc follows a different pattern. It's less a conflict between man and machine than one of man versus himself. The highly publicized and celebrated benchmarks of progress in artificial intelligence focus on the ways that man (that is, engineers) has built systems capable of beating humans at ever more complex and subtle challenges: from games of strategy like StarCraft and Go, to the feats of knowledge, wordplay, and puns of Jeopardy. While the technology competes against expert players, these moments are really about celebrating an engineering achievement that bests man at his own game. It's no wonder the AI engineering community doesn't take kindly to discussions that open with nods to Terminator and Minority Report. That community is more interested in its own hero narratives, in which engineers accomplish the seemingly impossible task of reproducing intelligent behavior.


So why do these AI narratives matter? These stories are the reason we are having conversations about AI accountability, but they are also grossly oversimplified, and they fall into the trap of the narrative fallacy. We end up using them as shorthand and heuristics to guide our conversations and decision making, when they are extreme and perhaps focused on the wrong points of conflict.

The public perception created by these Hollywood narratives arguably leads to efforts like the Partnership on Artificial Intelligence to Benefit People and Society. A partnership between Facebook, Google, Amazon, IBM, and Microsoft, the group aims to address and get beyond these inescapable associations. In its mission statement, it writes: "We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology." Meanwhile, in stating that "with artificial intelligence, we are summoning the demon," Elon Musk has evoked the most primordial narrative of all: the fight between good and evil. But what if we're getting the narrative wrong?

If we continue to rely on these sci-fi extremes, we miss the realities of the current state of AI and distract our attention from real and present concerns. We also miss an opportunity to give the public a more practical and useful framing for understanding a future where we live side by side with artificially intelligent systems.


*

What if we pushed to imagine alternative stories about artificial intelligence, ones that explore the more pressing and realistic concerns we face, rather than skewing the discussion toward apocalyptic, dystopian, worst-case scenarios? Instead of pitting man against machine, we can consider versions of the story that presume man working with machines, and focus instead on other elements of storytelling to frame our understanding in more useful and productive ways.

For example, we could focus on character development to inform interaction design for AI agents. What are their personalities? What are they motivated by, what are they capable of, and how do we relate to them in our daily lives? As Wolfram Alpha creator Stephen Wolfram has argued, one of the most important areas for AI advancement lies in developing a shared language and logic for communicating and relating with these intelligent characters. Beyond butlers and secretaries, what roles and personalities could these AI agents take on?
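As a thought experiment, a "character sheet" for such an agent might look something like the sketch below. Every field and value here is my own hypothetical illustration, not any existing framework's API:

```python
# A hypothetical "character sheet" for an AI agent, expressed as a data
# structure. All names, fields, and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentPersona:
    name: str
    role: str                       # beyond butler or secretary
    motivations: list[str]          # what drives its behavior
    capabilities: list[str]         # what it can actually do
    limits: list[str]               # what it should admit it cannot do
    register: str = "plainspoken"   # its tone of voice in conversation

# An agent conceived as a neighborhood archivist rather than a servant.
archivist = AgentPersona(
    name="Dap",
    role="neighborhood archivist",
    motivations=["preserving local memory", "connecting neighbors"],
    capabilities=["recalling past events", "surfacing related stories"],
    limits=["won't speculate about residents' private lives"],
)
print(f"{archivist.name} is a {archivist.role}, motivated by "
      f"{' and '.join(archivist.motivations)}.")
```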

Voice and character can be developed through stories told in the first person. What would it be like to tell the story from the perspective of the artificial intelligence? AI processing can be so complex and multilayered that even engineers, much less auditing bodies or average users, cannot describe a system's inner workings in humanly intelligible terms. Exercises in writing first-person narratives from the perspective of the AI system might get us closer to understanding and evaluating machine logic and thought processes, especially in areas where accountability for outcomes and judgments is important.
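To make that writing exercise literal, here is a toy sketch of a system narrating its own decision in the first person: a simple linear scorer that explains each feature's contribution. The model, weights, and wording are all illustrative assumptions, not a real accountability tool:

```python
# A toy model that narrates its own decision in the first person.
# Weights, inputs, and the threshold are all illustrative assumptions.
weights = {"income": 0.6, "debt": -0.9, "years_employed": 0.3}
applicant = {"income": 1.2, "debt": 0.8, "years_employed": 0.5}

# Each feature's contribution to the final score.
contributions = {k: weights[k] * applicant[k] for k in weights}
score = sum(contributions.values())

print("Here is how I reached my decision:")
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    verb = "raised" if c > 0 else "lowered"
    print(f"  Your {feature.replace('_', ' ')} {verb} my score by {abs(c):.2f}.")
verdict = "approve" if score > 0 else "decline"
print(f"My total score is {score:.2f}, so I {verdict} the application.")
```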


We will continue to come up against the AI inscrutability problem, so we might need to look to more experimental forms of narrative to articulate that unknowability and ontological novelty. Think of nonlinear narrative, or postmodern and impressionistic storytelling. Outputs from Google's DeepDream illustrate how AI pattern recognition can be overly sensitive, seeing eyes or faces where there are none (known as "pareidolia") and producing uncanny, surreal images that, while visually meaningless, give us some perspective on how the system "sees."

A DeepDream image generated starting from white noise. Image: MartinThoma
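For the curious, below is a minimal sketch of the DeepDream-style mechanism in PyTorch: gradient ascent that adjusts a white-noise image to amplify whatever patterns an intermediate layer already detects. The choice of model, layer, step size, and iteration count are my own illustrative assumptions, not the pipeline behind the image above:

```python
# A minimal DeepDream-style sketch: gradient ascent on an input image to
# amplify whatever an intermediate layer already "sees" in it.
import torch
from torchvision import models
from torchvision.transforms.functional import to_pil_image

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)  # we optimize the image, not the network

# Capture activations from one intermediate layer with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(feat=out)
)

# Start from white noise, as in the image above.
img = torch.rand(1, 3, 224, 224, requires_grad=True)

for _ in range(50):
    model(img)
    # Gradient *ascent*: strengthen the layer's activations, so the net
    # amplifies its favorite patterns -- the source of the pareidolia.
    activations["feat"].norm().backward()
    with torch.no_grad():
        img += 0.02 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)  # keep pixel values in a displayable range
        img.grad.zero_()

to_pil_image(img.detach().squeeze(0)).save("dream.png")
```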

Most of these narrative efforts will do little to bring AI systems closer to some form of transparency, but perhaps these narrative tools can move the bar on other AI challenges, like scrutability, legibility, intelligibility, and interpretability, and thereby support more subtle and dynamic discussions about AI accountability among stakeholders and the wider public.

Across civilizations, storytelling has been one of the most important vehicles for expressing ethics and value systems and passing them on. We ought to look to fables, parables, myths, and folklore from a wider pool than what Hollywood and Western science fiction have given us. What if we thought about AI through a retelling of the Monkey King? Or encountered an AI interface that speaks in the riddles of the Sphinx? Through a series of sessions on AI in Asia, the Digital Asia Hub has been exploring how to take local contexts and concerns into account while addressing these global-scale shifts in technology.

A Seoul taxi. Image: Flickr/boyce.michael

When I visited Seoul for one of the Digital Asia Hub sessions, I couldn't help but notice the lion-like creatures protecting the corners of every palace and temple I visited. The character even appeared as the mascot on the bright orange taxis throughout the city. This unicorn-lion is actually the symbol of the city of Seoul itself. The haetae, known in Chinese folklore as the xiezhi, is an omniscient mythical beast. It has a righteous temperament and is a symbol of justice and law. Faced with a dispute, it wields its instinctive judgment by butting the wrong or guilty party with its horn. I wondered: how might an AI haetae behave? Would it be any more trustworthy or reliable than some of our existing, problematic recidivism and algorithmic sentencing tools? Are these AI systems unquestionable, impenetrable gods? Or tools of our society?

Perhaps an AI unicorn-lion is a stretch of the imagination, but we need to get more creative with the ways we frame the conversation about societal concerns with AI. The stories we tell about technology direct our attention and determine our future.