When Computers Insist They’re Alive

Zoltan Istvan

The idea that only humans can have consciousness is anthropomorphic prejudice.

Image: Camilo Rueda López/Flickr

Ever since college, where I focused some of my studies on the wacky topic of a brain in a vat, it's troubled me that some people think only humans are capable of consciousness—rationally knowing what they are and that they exist.

Such biased thinking smells of anthropomorphic prejudice. Machines can be just as aware of their own consciousness as people, and perhaps more so, if they're programmed that way.

While the three-pound brain and its hundred billion neurons remain the least understood organ in the human body, most experts agree on a standard explanation: human consciousness is a compilation of many chemicals in the brain, forced through a prism that produces cognitive awareness, an awareness that insists an entity knows not only itself but also the outside world. As an atheist and a science-minded person, I buy this simplistic meat-bag explanation.

But there's probably a lot more to consciousness, especially if we consider the future of superintelligent consciousnesses. To understand it and the field that encapsulates it, epistemology, the study of knowledge with a special emphasis on what can and cannot be proven, it's always useful to start with the French philosopher and mathematician René Descartes. He may have taken the initial step by saying, "I think, therefore I am." But thinking does not adequately define consciousness; justifying thinking comes much closer. It really should be: "I believe I'm conscious, therefore I am."

Delving further into this point: some computers can already think on various rudimentary levels, but we do not say they are conscious because they don't insist they are conscious. If they did, many would argue we were dealing with a bona fide life form. However, no experts argue such a thing, at least not yet.

The recent near-future sci-fi movie Ex Machina highlights one of the core dilemmas: whether a machine intelligence is alive and truly conscious, or merely following its circuitry. The story follows a human and an AI robot getting to know one another. One can't watch it without thinking about the ongoing nature-versus-nurture controversy, the millennia-old debate over how and why humans acquired their behavior. It's this egocentric behavior that makes most humans justify their own conscious identity.

However, philosophically, Ex Machina also challenges us to ask another critical question about consciousness: What part does free will play in consciousness, if any at all? It's an interesting question, but in my opinion, the more poignant inquiry is not whether conscious entities, like humans, have free will, but whether there could ever be a consciousness without free will. Anomalies, randomness, and potentially even built-in chaos seemingly must remain intrinsic parts of the picture—otherwise it's all deterministic.

A significantly smarter intelligence than us could have no free will and it would still appear far freer, abler, more alive than us in its decisions and actions

Some fictional computers, such as HAL in Stanley Kubrick's classic 2001: A Space Odyssey, have insisted they were alive and fully conscious. And indeed, HAL appeared to be so. What made HAL seem conscious and alive to us, rather than some awkward Honda robot or IBM's chess champion Deep Blue, was that HAL had his own set of desires, demands, and identity. Because of this, there's no question HAL would pass the Turing Test, in which a machine attempts to pass as human, something no machine has truly accomplished yet in the 21st century.

Some machine intelligence experts swear by the Turing Test. But is it the all-important test we make it out to be for determining intelligence and consciousness? If we met a far more advanced being, perhaps a superintelligence from the future, what would their test of us be called? Would they say we have a lower form of consciousness than they do? Would they even say we have a consciousness at all?

Probably not. After all, what human believes a fish has a consciousness? Or a seagull? Or even a dog? Consciousness is built upon massive complexity, and upon the power to make sense of and identify oneself within that complexity. Anthropomorphizing everything is part of that conscious process, as egotistical as that sounds. Our consciousness is built specifically upon knowing we have the power to craft our own destiny amid the material world around us.

So what test might a superintelligence give us to see whether we possess a so-called consciousness comparable to its own? To even tackle that question, we first have to answer whether there's something beyond free will that reflects a higher consciousness.

I think consciousness, as we know it, isn't dependent on free will. A significantly smarter intelligence than us could be completely run on wiring with no free will at all, and it would still appear far freer, abler, more creative, and more alive than us in its decisions and actions. Consciousness is therefore relative, at least to humans.

Perhaps, then, the real test a superintelligence would give us would be based not on any notion of free will or justification of consciousness, but on the capacity for complexity and the speed with which one can successfully navigate that complexity. That certainly sounds like a machine-like thing to do. But I think there's more to it as well. I think a superintelligence's test of humans would also involve the ability to transcend mammalian limitations and biases, something I refer to as artificial intelligence relativism. Good and evil, and morality as a whole, except insofar as it is functional, would have to be checked at the door.

I've argued in my writings before that the critical component of a superintelligence's morality is that there is none, at least nothing human-like. Morality in a machine, or in a deterministic consciousness, is nothing more than a mathematical algorithm of rule-bound precision. This leaves little room for humanity and love for one another, or any of the mammalian niceties that people swear by. It seems, then, that the Turing Test for superintelligence is to deny that anything outside oneself has notable value. Pure narcissism, mixed with nearly unlimited computational power, is therefore the quintessential part of any test of what comprises a superintelligent consciousness.

Jacked In is a series about brains and technology. Follow along here.