
The Electric Turing Acid Test

If we want machines to truly think—a behavior that involves perspective and intuition and empathy—we need to help them hallucinate.

Parts of this essay by Andrew Smart are adapted from his book Beyond Zero And One (2015), published by OR Books.

Machine intelligence is growing at an increasingly rapid pace. Leading minds on the cutting edge of AI research think that machines with human-level intelligence will likely be realized by the year 2100. Beyond this, artificial intelligences that far outstrip human intelligence would rapidly be created by the human-level AIs. These vastly superhuman AIs would result from an "intelligence explosion." And if they're not designed or managed correctly, some argue, they may present a serious threat to humanity.


Superintelligent AI, argues Nick Bostrom, would be "systems which match or exceed the intelligence of humans in virtually all domains of interest." Furthermore, AI researcher Luke Muehlhauser asks us to imagine "a machine that could invent new technologies, manipulate humans with acquired social skills, and otherwise learn to navigate many new social and physical environments as needed to achieve its goals."

No matter how the super AIs are ultimately embodied, hardcore AI scientists and engineers believe that these AIs will beat humans at intelligence tests without being conscious. They would be capable of besting Homo sapiens at Jeopardy or at any game of facts or prediction, and they'd be able to pass the conventional are-you-a-human Turing test with flying colors. But underneath, they would be zombies—and potentially very dangerous zombies.

Zombies have a long and disgusting history in literature and popular culture, but within the philosophy of mind they play an important role as the subjects of esoteric theoretical thought experiments. Essentially, a zombie is a being that is indistinguishable from a normal person except that it has no conscious experience. And we do not typically think of zombies as very intelligent, very communicative, very adaptive, or very nice.

Therefore, as I ask in my new book, should we expect and design superintelligent machines to be not zombie-like but conscious? And if so, how would we achieve that?


So far, the pursuit of consciousness has been a nonstarter in AI research. "The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions," writes Stuart Russell in Artificial Intelligence: A Modern Approach. "Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer."

The problem, of course, comes when the AI starts to self-program its own utility functions, which may or may not be comprehensible to humans. Worse, when the AI starts to form its own goals, those goals might not be aligned with human goals or well-being.
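To see how little room consciousness occupies in Russell's framing, here is a minimal sketch of expected-utility decision-making in Python. The actions, outcomes, probabilities, and utility values are invented for illustration; the one thing to notice is that the utility table is supplied entirely by the human designer.

```python
# Minimal sketch of decision-making as expected-utility maximization.
# All actions, outcomes, probabilities, and utilities are toy values.

# Designer-specified utility for each possible outcome.
utility = {"goal_reached": 10.0, "detour": 2.0, "crash": -100.0}

# The agent's beliefs about what each action leads to.
outcome_probs = {
    "drive_fast": {"goal_reached": 0.70, "detour": 0.10, "crash": 0.20},
    "drive_slow": {"goal_reached": 0.50, "detour": 0.45, "crash": 0.05},
}

def expected_utility(action):
    # "Quality" of a decision: probability-weighted utility of its outcomes.
    return sum(p * utility[o] for o, p in outcome_probs[action].items())

best = max(outcome_probs, key=expected_utility)
print(best, expected_utility(best))  # -> drive_slow 0.9
```

Nothing in this loop ever questions the utility table itself; an agent that begins rewriting that table is precisely the failure mode described above.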

Put another way: would self-driving cars be better drivers if they were not zombie-like but conscious? The same question can be asked of human beings. Are we more intelligent, or better at surviving and reproducing, because we are conscious?


We are led back to ancient philosophical questions which, despite all of our wondrous technological and scientific progress, have not been solved: What is consciousness? Why are we conscious? How are we conscious? Are we conscious? Is consciousness an illusion?

These are tricky questions, but at least we are, well, partially conscious of what consciousness is. There are thousands of books and papers discussing the issue. Giulio Tononi, whose Integrated Information Theory of consciousness is one of the most popular in neuroscience, says, "Everybody knows what consciousness is: it is what vanishes every night when we fall into dreamless sleep and reappears when we wake up or when we dream. It is also all we are and all we have: lose consciousness and, as far as you're concerned, your own self and the entire world dissolve into nothingness."


At least since the 1980s, cognitive psychology and then cognitive neuroscience have hypothesized that the evolutionary advantage of human consciousness is that it enhances an organism's ability to discover new and better behavioral adaptations. This sounds quite similar to "learning to navigate new environments," which, it is expected, new superintelligences should be able to do too.

Human consciousness is critical when we encounter new circumstances for which we have no existing adaptive or automatic behavioral response. In other words, when we are forced to improvise. Consciousness seems to be able to recruit novel combinations of knowledge and brain functions that we need to represent actions and situations that we have never experienced before.

"In the animal kingdom," writes Italian philosopher Riccardo Manzotti, "there seems to be a correlation between highly adaptable cognitive systems (such as human beings, primates and mammals) and consciousness. Insects, worms, anthropods, and the like that are usually considered devoid of consciousness are much less adaptable (they are adaptable as a species but not very much as individuals)."

Consider the difference between Einstein dreaming up the theory of relativity and an ant colony finding the bits of old food in your kitchen. Without something very much like subjective human consciousness, would AIs be able to do more than mundane search and optimization problems? Assuming that we would want them to, could Google's self-driving cars do something like this without being conscious?


There are also now very serious fears about how powerful this kind of AI could become. Muehlhauser points to "the central feature of AI risk: Unless an AI is specifically programmed to preserve what humans value, it may destroy those valued structures." He quotes Eliezer S. Yudkowsky, an artificial intelligence theorist: "the AI does not love you, nor does it hate you, but you are made of atoms it can use for something else."

If artificial intelligence systems acquired consciousness and subjective points of view, perhaps they might also acquire empathy, the ability to recognize other autonomous conscious agents as alive (i.e., us), and a reluctance to harm living things. After all, despite all the awfulness humans continually demonstrate, it appears we are not inherently violent or selfishly aggressive.

If a superintelligent AI were ruthlessly and unconsciously pursuing its goals, it seems that sentience would not be a bad thing for it to have. It would make the AI more creative and more adaptive, and therefore even more intelligent. And rather than suddenly turning evil, sentience might actually be what prevents a superintelligent machine from considering us simply as a resource.

Thus machine consciousness would offer several benefits: it might allow machines to be truly adaptable and creative, and thus able to harness their incredible computational power in novel ways in novel situations. Sentient software could by default possess an awareness of other minds, and be naturally inclined to help other conscious agents.


Sentience and Psychedelics

One mechanism I suggest for achieving benevolent and even spiritual AI is giving AI psychedelic experiences. We have already seen how Google's neural networks can generate trippy pictures that are reminiscent of the altered visual experiences humans have on hallucinogens. Google calls this inceptionism. Google's engineers generate these images by turning the ability of image-recognizing software—built using an artificial neural network—inside out: rather than discriminating between different kinds of images, the network generates new images out of what it thinks it sees. This over-interpretation derives from examining and reiterating the information that the artificial neural network has at each of its many hierarchical layers, which become increasingly abstract as you move up the hierarchy.
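For the curious, here is a rough sketch of that inside-out trick in Python with PyTorch. The choice of network, layer, and step size are illustrative assumptions rather than Google's published recipe: instead of updating the network to classify the image, we update the image itself by gradient ascent so that it more strongly excites an intermediate layer.

```python
import torch
import torchvision.models as models

# Pretrained GoogLeNet; we only read activations, never train it.
model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of a mid-hierarchy layer via a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(layer=output)
)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise

for _ in range(50):
    model(image)
    # Maximize the layer's mean activation: the network "over-interprets"
    # whatever faint patterns it already sees in the image.
    activations["layer"].mean().backward()
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
```

Hooking a lower layer tends to amplify edges and textures, while hooking a higher layer tends to hallucinate whole object parts, mirroring the perceptual hierarchy described next.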

This mirrors how our own visual system works. During human visual perception—which happens in hundreds of milliseconds—low-level, bottom-up features like the edges of objects are detected early, and these are matched to more abstract attributes, like conceptual category, by top-down processes in the frontal parts of the brain.

When you recognize a cup of coffee, the image you see of the coffee cup is a reconstruction of the incoming visual input together with your abstract concept of "cups." We are normally not conscious of this process. However, LSD seems to allow us direct conscious access to the low-level and abstraction layers of our own neural networks. LSD dramatically alters the activity of the parts of the brain that "monitor" our sensory systems and our reasoning systems. The brain network that is normally busy making sense of all the incoming information, and of your own self, disintegrates on LSD. You can dissolve into your environment. This happens because your brain can enter into a much wider repertoire of dynamic states on hallucinogens.
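What a "wider repertoire of dynamic states" can mean is easy to show in a toy model. Below is a stock random recurrent network in Python; the gain values and the model itself are illustrative assumptions, and nothing here simulates a real brain or a real drug. Scaling up the connection gain pushes the activity from quiet decay into sustained, wandering dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Random recurrent weights, scaled so the spectral radius is near 1.
W = rng.normal(0, 1 / np.sqrt(n), size=(n, n))

def run(gain, steps=200):
    x = rng.normal(size=n)
    states = []
    for _ in range(steps):
        x = np.tanh(gain * W @ x)  # one update of the network state
        states.append(x.copy())
    return np.array(states)

for gain in (0.9, 1.5):  # "baseline" vs. "perturbed" connection gain
    states = run(gain)
    # Crude proxy for the size of the dynamic repertoire: variance of
    # the late-time activity (near zero when the network settles down).
    print(f"gain={gain}: late-time state variance {states[100:].var():.4f}")
```

The mapping of "gain" onto anything LSD does to a brain is an analogy, nothing more; the point is only that perturbing a dynamical system changes the range of states it visits.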


Actually giving artificial intelligence systems LSD would go far deeper than Google's image recognition software producing trippy incepted images. In the 1960s, researchers working on using LSD to treat alcoholism developed a list of the basic characteristics of the peak psychedelic reactions:

1. Sense of unity or oneness.
2. Transcendence of time and space.
3. Deeply felt positive mood.
4. Sense of awesomeness, reverence and wonder.
5. Meaningfulness of psychological and/or philosophical insight.
6. Ineffability (sense of difficulty in communicating the experience by verbal description).

As strange as it sounds that a computer—even a superintelligent computer—could experience any of these things, they are really no different from any other mind-like things we expect artificial intelligence to do. And in fact these psychedelic goals should help make the AI safe and beneficial to humans.

However, like consciousness, our understanding of psychedelic experiences (and the reality-altering "consciousness-expanding" hallucinations they involve) is limited within the current framework of neuroscience. And like consciousness, these experiences present us with fascination, fear, and hope.

Albert Hofmann, who discovered LSD but was dismayed by Timothy Leary's experiments with the chemical at Harvard, felt that it could allow humans to do more than "turn on, tune in, and drop out." LSD, he believed, allows us to become aware of the ontologically objective existence of consciousness and ourselves as part of the universe. This is in contrast to how we normally perceive the world: that is to say, the world exists outside of us and that we are separate from it.


Regarding machine consciousness, however: machines, at this point, are not able to have a point of view. By virtue of our own deeply fallible subjective points of view, we necessarily perceive with all kinds of biases and errors. Strangely, in order for a machine to have human-level intelligence, consciousness, and its own intuitions, the computer might also have to develop human-like biases and errors, even though these are the very things we wish to overcome by using robots that can reason perfectly.

If superintelligence and cyberconsciousness researchers like Martine Rothblatt and Ray Kurzweil are correct and achieving a true artificial mind is possible in a computer, it follows that we should be able to give this artificial mind LSD. If computationalism—the idea that the human brain is a computer and therefore our minds can be compressed into an algorithm—is to be believed, our phenomenal subjective conscious experiences can be accurately simulated in a robot. (I do not necessarily believe it.) Mathematically, a simulation of a computation is also a computation. This artificial mind should also be able to be perturbed by hallucinogenic drugs like LSD—otherwise it is not an artificial mind.

I therefore propose raising the bar on the Turing Test. If the artificial mind is real and intelligent, and we can somehow determine this objectively—whether through the Turing Test or some other means—it should be able to take an acid trip. This would be called the Turing-Acid Test for artificial minds.


A few caveats to this proposal:

1. We do not understand human consciousness.
2. We do not (yet) know how to create machine consciousness.
3. We do not fully understand how or why LSD alters human consciousness. (See caveat 1.)
4. We know that LSD profoundly alters human conscious experience.

Nick Bostrom argues that the path we take to the birth of artificial superintelligence could have a direct influence on whether or not the AI is beneficial and safe. For example, a whole brain emulation would presumably and by default have all of our morality, empathy and emotions (and perhaps our consciousness). Purely mathematical and brute-force AIs (like Watson or Deep Blue) are the scariest potential forms of super AI, he argues: unless we can discover synthetic sources of morality and empathy that we could program into radically advanced brute-force systems, it will be difficult to control their actions once they reach a certain threshold of complexity. Bostrom, however, contends that we would still understand these systems, because we would have created the math on which they are based.

No matter how the superintelligence is delivered, at some point during its rise to human-level intelligence and beyond it must become sentient. If it remains a zombie system, with no inner experience, I wager that its brainpower will be limited. It might become vastly superior to humans in restricted domains—like planning resource use—but as soon as it is presented with a new problem from outside its domain, it will fail, because it will not be able to improvise without consciousness. Of course, superintelligent domain-specific agents could be hugely beneficial: imagine automated and optimized pharmacies, for example, that know exactly what drugs to make, stock, and distribute based on health data from the population.

But without consciousness, the superintelligence will not be able to behave flexibly, adaptively, or socially. Eventually, humans will detect that the AIs are zombies and treat them as such (hopefully not in the way they're dealt with in zombie shooter games). Humans do already treat human-like robots with a great deal of empathy, but nobody really believes the robots are conscious.

Most importantly, without consciousness I do not believe that machines could ever develop true creativity. This is arguably the most adaptive cognitive tool from which humans have benefited in our evolutionary history. Sentience necessarily entails a first-person point of view and subjectivity. Consciousness in a machine would not by itself prevent the superintelligence from being evil, or at least from having goals that are not human-friendly. ISIS is composed of conscious humans who have first-person points of view, subjectivity, and even emotions (mostly psychotic rage, it seems). If you think those guys are evil, an evil superintelligent AI that is indifferent to human life would make ISIS look like Doctors Without Borders.

This makes expanding the viewpoints and feelings of a conscious AI all the more critical. Once we accept machine consciousness, we should also accept altered states of machine consciousness. And we should encourage them, too.

In other words, what if Silicon Valley got back to its psychedelic roots? Only this time, instead of company founders and spiritually inclined engineers dropping acid, Silicon Valley tried to figure out how to recreate the psychedelic state in silicon? The purpose would be no different from why psychiatry gave LSD to patients recovering from addiction in the 1960s, or why Leary held acid parties: to achieve spiritual awakening. This is in the same spirit as Bostrom's call to arms that we begin now to think of ways to make future AI systems safe and beneficial.

Psychedelic experiences in the right context can give human users transformative insights. Renowned psychedelic psychiatrist Stanislav Grof said that psychedelics, properly used, "open spiritual awareness. They also engender ecological sensitivity, reverence for life, and capacity for peaceful cooperation with other people and other species. I think, in the kind of world we have today, transformation of humanity in this direction might well be our only real hope for survival."

Indeed, if superintelligent machines in fact become a reality, their spiritual awareness might be our only real hope for survival.

Lit Up is a series about heightening—and dulling—our sense of perception.