The technological singularity is a hypothetical moment in the future when artificial intelligence surpasses human intelligence and becomes capable of creating ever-smarter iterations of itself. Apply the same general idea to simulations and you get the "simulation singularity": the point at which a simulated world is indistinguishable from reality.
This was the theme of a talk this week at London’s Digital Shoreditch festival by engineer Andy Fawkes, who works for global simulation software company Bohemia Interactive Simulations (BIS) and is director of tech and training company Thinke. "Will there be a world where the simulation may be just as good as the real world?" he asked. Could it even be better than the real world?
"In a sense, I think in some regards it’s already happening," Fawkes told me in an interview. If people’s minds are already accepting a simulated world as “real” somehow, then we could perhaps consider that we’ve already reached the tipping point. In his talk, Fawkes showed examples of driving simulators from a game that were very nearly visually indistinguishable from real-life footage.
Another example that shows the power of realistic simulators is in the military sphere: pilots learn to fly using simulators that effectively trick the brain into thinking it’s actually controlling a plane. In the US, a quadriplegic woman was able to "fly" an F-35 fighter jet simulator using nothing but her mind. Fawkes looks forward to strapping on an Oculus Rift in old age and “escaping” his weary body.
But are photorealistic 4K graphics and easily misled human senses enough to constitute a true simulation singularity? Reality is not just about visuals; it’s a whole complex system that, so far, we’ve found pretty impossible to model with any great accuracy (presuming for the sake of argument that we’re actually not living in a sim, that is).
Fawkes gave the example of the weather, which we’re still not great at modelling a few days in advance, never mind 100 years. In this sense, the chaos of reality still beats our best simulators by a long way. "That idea, that you could predict precisely the weather—would just be transformational," he said.
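The difficulty here is a textbook property of chaotic systems: tiny uncertainties in the starting measurements grow exponentially, so even a very good model drifts away from reality within days. A minimal sketch of the idea, using the Lorenz equations (a classic simplified model of atmospheric convection, not an actual weather model — all names and parameters here are illustrative):

```python
# Toy illustration of why long-range weather forecasting is hard:
# the Lorenz system is chaotic, so two almost-identical starting
# states end up in completely different places.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations (standard parameters)."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def simulate(x0, steps=3000):
    """Run one 'atmosphere' forward from initial x-value x0."""
    state = (x0, 1.0, 1.05)
    for _ in range(steps):
        state = lorenz_step(*state)
    return state

a = simulate(1.0)          # one starting state
b = simulate(1.0000001)    # the same state, perturbed in the 7th decimal
separation = abs(a[0] - b[0])
print(separation)          # far larger than the initial 1e-7 difference
```

Because the initial error roughly doubles on a fixed timescale, no realistic improvement in measurement precision buys more than a few extra days of forecast — which is the gap between our simulators and reality that Fawkes is pointing at.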
He also thinks such precise prediction may never be possible. But even advances that are modest compared to a total singularity could have a huge effect, such as being able to predict the weather a month out, or to model asteroid impacts more accurately.
So perhaps we’re not that close to a true simulation singularity after all. But as far as simply confusing the human senses about what’s real and what’s not, Fawkes reckons that’s not far off at all. Indeed, you don’t necessarily need perfect graphics to induce suspension of disbelief anyway: just think about how your mind gets carried away watching a film or reading a book. On some level you know it’s not real, but that doesn’t stop you from being emotionally invested. People get married in Second Life.
As Fawkes concluded in his talk, the human brain is one of the best simulators we’ve got. Things take an inevitable philosophical turn at this juncture: if simulations are so realistic (or we’re so gullible), then how do we know we’re not already in a simulation? It’s a question that’s been pondered by everyone from Plato with his cave allegory to Matrix fanboys to philosopher Nick Bostrom with his simulation hypothesis.
There’s a link to be drawn here with the broader technological singularity, because if we are living in a simulation, who or what is behind it? After all, many of the digital simulations we do know about rely on some sort of AI to inform what they show.
On the other hand, artificially intelligent robots often use simulations to train for the real world, which might be handy when it comes to warding off potential negative consequences along the way to improved AI. "If you’re going to have artificial intelligence in the real world, maybe it’s best to test it in a simulation first," Fawkes suggested.
Presuming the technological singularity didn’t get there first, and we’re the ones in the test zone.