
The Scariest Ways Humanity Could Die Out Are the Ones We Haven't Thought Of Yet

The craziest way civilization could end? "Unknown consequences."

There are a few relatively obvious ways that humanity could be wiped out: climate change, nuclear conflict, killer artificial intelligence. And then there are the less probable risks: the kind that delve so deep into the speculative and reach so far into the unknown that they're almost impossible to imagine happening.

A super-pollutant could make the human race sterile. The Large Hadron Collider could spin the Earth into a black hole. A video game could be so addictive humans die rather than press pause. Animals could be made more intelligent than humans through experimentation. Deadly aliens.


These are potential risks given as examples in one particularly compelling section of a report from the Global Challenges Foundation that outlines "risks that threaten human civilisation," under the chapter heading "Unknown Consequences."

They might sound preposterously improbable, but put them together and they give reason for pause. "[Uncertain risks] constitute an amalgamation of all the risks that can appear extremely unlikely in isolation, but can combine to represent a not insignificant proportion of the risk exposure," the report states. It also points out that "many of today's risks would have sounded ridiculous to people from the past."
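To get a rough sense of how that adds up, here's a minimal, purely illustrative sketch: the risk count and per-risk probability below are assumptions chosen for the sake of arithmetic, not figures from the report. If ten unrelated risks each had just a 0.1 percent chance of striking over some period, the chance that at least one of them does is already close to 1 percent.

```python
# Purely illustrative: the number of risks and the per-risk probability
# are assumptions, not figures from the Global Challenges Foundation report.
num_risks = 10    # independent risks that each look "extremely unlikely in isolation"
p_each = 0.001    # assumed 0.1% chance per risk over some period

# With independence, the chance that none of them happens is (1 - p) ** n.
p_none = (1 - p_each) ** num_risks

# So the chance that at least one happens is the complement.
p_at_least_one = 1 - p_none
print(f"Chance at least one strikes: {p_at_least_one:.3%}")  # roughly 1%
```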

"If you work with risks, that's almost the most difficult but also the most interesting category," Dennis Pamlin, executive project manager of global risks at the Foundation, told me in a phone call. "Basically, risk is about trying to foresee different consequences, and there's always the unknown."

It's particularly important to think about, Pamlin said, when you take into account the kind of cutting-edge research and advanced technology we're using today. We're seeing things at an atomic level; we're harnessing energies that haven't been used before. "So it's exploring really exciting territory, but of course there are unknowns here," he said. "We don't really know what we're up against."

One possible conclusion: Intelligent life destroys itself before it can expand into the galaxy


For Pamlin, however, the biggest risk of all is coming across as "a crazy person" when he talks about this stuff. "It's really hard to discuss these things, because it's really easy to come across as a fearmonger or anti-technology," he said.

He is, he insists, an optimist.

"It's hard to come across a person I think who's more fascinated and positive toward technology than I am, but I think it's irresponsible of us to not acknowledge that there are also challenges in this area," he said. The fact we are faced with such potential risks is an indicator of quite how far we've come—we just have to be "humble" about it.

And, as he chirpily added, what's the alternative to optimism—giving up?

So how does one go about assessing risks that are by definition unknown? One way is to take an abstract approach and address the problem on a theoretical level. Take for instance the Fermi Paradox. In a nutshell, this argument goes as follows: It's highly probable that there is or has been some other form of life out in the universe, and you'd perhaps reasonably expect to have seen some evidence of this life. But we haven't. As far as we know, we're alone in the universe.

That might be good in terms of mitigating the "killer alien" risk, but it gives rise to one potential conclusion offered by the report: "that intelligent life destroys itself before beginning to expand into the galaxy."

So working with results that fit this explanation might help figure out the probability that we're doomed to destruction.
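As a back-of-envelope illustration of that logic (every number here is a hypothetical assumption, not an estimate from the report): if intelligent life could plausibly arise on a vast number of planets, and yet we observe no expanding civilisations, then the fraction of civilisations that survive long enough to expand would have to be vanishingly small.

```python
# Hypothetical Fermi-style arithmetic: all numbers are illustrative assumptions,
# not estimates from the Global Challenges Foundation report.
candidate_sites = 1e9     # assumed planets where intelligent life could arise
p_life_emerges = 0.01     # assumed chance that intelligent life actually appears

expected_civilisations = candidate_sites * p_life_emerges  # 10 million

# If we observe zero expanding civilisations, the rate at which civilisations
# survive and expand must be low enough that the expected visible count is below 1.
max_survival_rate = 1 / expected_civilisations
print(f"Survival-and-expansion rate below roughly {max_survival_rate:.0e}")  # ~1e-07
```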


Another way to tackle unknown risks is to look into the past instead of into the future—"Looking through history, what has been our capacity and what has been the unknown that has been coming out of technological development so far?" Pamlin explained.

He wants to see the challenge of unknown risks addressed with more systematic and reflective discussion, which he thinks is hindered by a general propensity to blow things out of proportion as soon as you bring up the end of the world. "All through human history, talking about the end of the world in this way—the people who have done that have been superstitious people claiming divine intervention and such stuff," he said. "So there's no real precedent."

Pamlin isn't the only one to be concerned by potentially apocalyptic threats. Public figures including Elon Musk and Stephen Hawking have spoken out particularly on the threat of AI, and several academic research centres exist specifically to work on this topic, such as Cambridge University's Centre for the Study of Existential Risk and Oxford's Future of Humanity Institute. The Global Challenges Foundation report was co-authored by Stuart Armstrong of the FHI.

Less "unknown" risks that it covers include global pandemic (imagine a disease as incurable as Ebola, as fatal as rabies, and as infectious as the common cold), global system collapse (everything from civil unrest caused by economic collapse to critical infrastructure like power grids going down), a major asteroid impact, a supervolcano, and the misuse or accidental use of nanotechnology (to make weapons) and synthetic biology (to make dangerous pathogens).

Pamlin thinks that representatives from different fields—technology, economics, art—should come together for the conversation. But he reiterated that it should be a positive thing: with great risks come great opportunities. Optimism isn't about ignoring the risks, but recognising them and acknowledging that we can do something about them.

When I asked him how worried we should be, he responded, "That's the 10 billion dollar question."

But worrying won't get us anywhere.