Might Intelligent Machines One Day Convince Us It's Time to Die?
Persuasive weaponry and intelligent systems are already changing our lives. Data & Society examines how, with a little help from speculative fiction.
Image: Geralt, Pixabay. Creative Commons
We recently ran Robin Sloan's short speculative fiction story, The Counselor, as part of a partnership with the think/do tank Data & Society. That story, which explores one way AI might be deployed to influence human behavior, is part of the Intelligence & Autonomy project, which is tapping science fiction to elucidate the findings of its real-world research. This article is the second installment in that effort—a non-fiction investigation into the themes raised in The Counselor. When read together, they offer a unique experience, and a novel approach to examining some of the thorniest issues we'll have to confront in the not-too-distant future. -the Terraform editors
In Robin Sloan's The Counselor, a dying man squares off against a machine of his own creation. However, the combat is not physical, but emotional—the system deploys a formidable arsenal of persuasive weaponry in an attempt to convince the patient to end his own life. The Counselor asks questions, uses strategic silence, and is intelligently designed to deploy religion when it might convince a patient to make the decision that the machine desires. The patient, for his part, fends off these tactics and tries to avoid repeated attempts by the AI to rattle his determination to survive.
While couched within a vision of the future, the central conflict of the story points to an increasingly important debate in the design of systems that use machine intelligence. What are the types of persuasive methods we should permit designers of machines to use, and under what contexts are certain methods inappropriate?
This is not as speculative a question as it might seem. Even the basic choice of whether or not to grant a system the ability to converse with its user makes that system more able to persuade and shape behavior than it would otherwise be. This has long been known: ELIZA, a rudimentary early "chatterbot," was able to elicit sensitive personal information and emotional reactions from users with a simple system of polite and persistent questioning.
The choice to embody an intelligent system in some physical form, particularly an anthropomorphized one, is also a persuasive weapon. One project by MIT's Kate Darling reinforces earlier research showing that the movement and appearance of robots—intelligent systems with a human-like body—can create emotional connections that make users hesitant to attack or destroy them.
In a more recent context, some of the most powerful intelligent systems are those that persuade without any identifiable embodiment as a persona or a physical presence at all. The Facebook News Feed—a massive system of machine intelligence that filters and distributes content customized to users—influences through its recommendations and its presentation of information. That exerts persuasive power over our electoral behavior, our emotions, and the visibility of opposing viewpoints.
To that end, the proliferation of persuasive weapons is a problem that extends far beyond the machinations of a hypothetical future AI. While popular culture would narrow our conception of artificial intelligence to software with a face, voice, or body, this issue is present in everything from the design of the self-driving car to the musical suggestions of a recommendation engine.
Intelligent systems have designers who make choices about the behavior of the systems they create. What The Counselor depicts is, in fact, a battle between the patient and the earlier version of himself who designed the system in the first place. On this count, designers of digital and physical objects have always attempted to shape the behavior of the people who use their products. However, intelligent systems bring three new things to the table that mark a break from the past.
For one, intelligent systems can "listen" to the behavior of users, quantifying tastes and biases in ways that allow the system to better elicit certain types of behavior from users. Whereas designers of earlier, less reconfigurable technologies had to be content with a level of imperfection in their persuasive design, machine intelligence empowers designers to enact persuasion in ever more targeted ways that can be rapidly and continuously optimized.
Second, intelligent systems are often persistent. Users might live with an agent like Siri for an extended period of time, permitting the formation of patterns of dependence formerly difficult for device designers to access. With cloud architecture, intelligent systems can also outlive the devices they temporarily "inhabit"—collecting data and creating ongoing interactions with users over time.
Finally, it may be difficult to clearly interrogate and ascertain the objectives of an intelligent system. As in The Counselor, the lived experience of interacting with an intelligent system can be highly limited, and it can be difficult to assess from the perspective of a user whether the behavior of a system is changing over time. The times in which a system is passively serving its user or actively pursuing an agenda can blur.
These three features might not, by themselves, be reasons for concern. The kind of decision being influenced defines the scope of methods we are willing to allow intelligent systems to use. In some contexts, society appears to have struck a balance that grants machines broad latitude to persuade. Shaping the choice of what to purchase or what to click on, for instance, is an arena in which machines have been allowed to use a wide range of methods without much objection.
The choice to end a life presents a radically different context. The fear that The Counselor mines is that the seemingly "passive" quality of a machine may allow intelligent systems to engage in persuasive activity that we would find objectionable coming from a human. The advisory quality of the system depicted in the story opens the door for the machine to exert a relentless pressure to euthanize that violates medical norms. We secure the right of patients to make these types of decisions free of persuasive wheedling, machine or no.
Defining the range of persuasive tools available to systems designers in a given context will become increasingly important as the technologies they create grow ever more intelligent and autonomous in the coming years. Whether the range given to machines is broader than, commensurate with, or narrower than that given to humans will be a critical decision. These choices will redraw the lines around the autonomy and independence individuals have to make decisions in their own lives.