Cambridge Will Study the Techno-Apocalypse with a Dedicated Research Center

The apocalypse will come someday, and while it's doubtful it'll arrive this year, it's still an obviously relevant area of research.

Even the eggheads at Cambridge think so: the university recently opened a Center for the Study of Existential Risk to study how we humans might end up wiping ourselves out.

It was co-founded by Lord Martin Rees, a renowned cosmologist and author of Our Final Century, which looked at how humans could end up destroying themselves by the end of this century. The Center will focus on four areas of destruction that are the perfect fodder for amateur apocalypse enthusiasts: artificial intelligence, biotechnology, climate change, and nuclear war.


From the Daily Mail:

Huw Price, Bertrand Russell Professor of Philosophy and another of the centre's three founders, said such an 'ultra-intelligent machine, or artificial general intelligence (AGI)' could have very serious consequences.

He said: 'Nature didn't anticipate us, and we in our turn shouldn't take AGI for granted.

'We need to take seriously the possibility that there might be a "Pandora's box" moment with AGI that, if missed, could be disastrous.

'I don't mean that we can predict this with certainty, no one is presently in a position to do that, but that's the point. With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.'

He added: 'The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven't up to now, in human history.'

Of course, the first thing I thought of when I heard about the CSER was Kevin Warwick, the British cyborg researcher whose work propelling us toward the singularity could end up creating exactly the kind of robo-apocalypse scenario CSER is looking at. On the other hand, Warwick argues that by augmenting humans, we'll be able to prevent the Skynet scenario altogether. I think he has a point: Why build extremely evolved AI when we can make humans more capable on their own?

I'll admit it's a little overblown to use a Terminator image with this post (even if it looks awesome), because it's exactly that type of hyperbole the center is trying to work against. It's fascinating to imagine a future full of time-traveling robo-assassins, but that framing treats existentially risky technology as something that will only arrive in the distant future. We may not have enough nukes to blow up the world 25 times over or whatever, but we still have plenty of technologies that could ruin our lives right now, whether it's a rogue engineered virus or hackers knocking out our power supply.

The CSER founders argue that such scenarios have been understudied, a conclusion I think is grounded in the fact that many people incorrectly assume our ability to develop something like Skynet is a long way off. Technology is developing faster than it ever has, and remember, Obama's cyberwar scenarios are grounded in reality. So while there's no need for paranoia or fearmongering, it's nice to see some top experts calling for research into how we'll all accidentally kill ourselves.

Follow Derek Mead on Twitter: @derektmead.