A Global Arms Race to Create a Superintelligent AI is Looming

Which nation will be the first to create a superintelligent artificial mind?

Forget about superintelligent AIs being created by a company, a university, or a rogue programmer with an Einstein-like IQ. Hollywood and its AI-themed movies, like Transcendence and Her, have misled the public. The launch of the first truly autonomous, self-aware artificial intelligence, one that has the potential to become far smarter than human beings, is a matter of the highest national and global security. Its creation could change the landscape of international politics in a matter of weeks, maybe even days, depending on how fast the intelligence learns to upgrade itself, hack and rewrite the world's best code, and utilize weaponry.

In the last year, a chorus of leading technology experts, including Elon Musk, Stephen Hawking, and Bill Gates, has chimed in on the dangers of creating AI. The idea of a superintelligence on Planet Earth dwarfing the capacity of our own brains is daunting. Will this creation like its creators? Will it embrace human morals? Will it become religious? Will it be peaceful or warlike? The questions are innumerable and the answers are all debatable, but one thing is certain from a national security perspective: if it's smarter than us, we want it on our side, the human race's side.

Now take that one step further, and I'm certain another theme regarding AI is just about to emerge—one bound with nationalistic fervor and patriotism. Politicians and military commanders around the world will want this superintelligent machine-mind for their countries and defensive forces. And they'll want it exclusively. Using AI's potential power and might for national security strategy is more than obvious—it's essential to retain leadership in the future world. Inevitably, a worldwide AI arms race is set to begin.

As the 2016 US Presidential candidate for the Transhumanist Party, I don't mind going out on a limb and saying the obvious: I also want AI to belong exclusively to America. Of course, I would hope to share the nonmilitary benefits and wisdom of a superintelligence with the world, as America has done for much of the last century with its groundbreaking innovation and technology. But can you imagine for a moment if AI were developed and launched in, say, North Korea, or Iran, or increasingly authoritarian Russia? What if another national power told that superintelligence to break all the secret codes and access the classified material that America's CIA and NSA use for national security? What if that superintelligence were told to hack into the mainframe computers tied to nuclear warheads, drones, and other dangerous weaponry? What if it were told to override all the traffic lights, power grids, and water treatment plants in Europe? Or Asia? Or everywhere in the world except its own country? The possible danger is overwhelming.

Below is a simple, almost tautological argument I've designed, which I call the "AI Imperative." It demonstrates why an AI arms race is likely in humanity's future:

1) According to experts, a superintelligent AI can likely be created, and with enough resources, it could be developed in a short amount of time (perhaps 10 to 20 years).

2) Assuming we can control this superintelligent AI, whoever launches it first will likely retain the strongest superintelligence indefinitely, since that AI can be programmed to undermine and control all other AIs, if it allows any others to be developed at all. Being first is everything in the superintelligent AI creation game (imagine being first to develop the atomic bomb, and then also having the power to limit who else could ever develop one).

3) Whichever government launches and controls a superintelligent AI first will almost certainly end up the most powerful nation in the world because of it.

Given the AI Imperative, there are really only two likely courses of action for the world, even though there are four major possibilities for how to proceed. The first is to make AI development illegal around the world, similar to the bans on chemical weapons development. However, people and companies probably would not go for it. We are a capitalistic civilization, and the humanitarian benefits of AI are too promising not to create it. Stopping the development of a technology has never really worked, either; someone else just ends up doing it eventually, either openly or in secret, if there's gain or profit to be made.

The other option is to be the first to create the superintelligent AI. That's the one my money is on: the one America is going to pick, regardless of which political party is in office. America's military will likely spend as much of its resources as it needs to make sure it has exclusivity or majority control over the launch of a superintelligent AI. I'm guessing that trillions of dollars will be spent on AI development by the American military over the next ten years, regardless of national debt, economic conditions, or public disagreement. I'm betting that engineers, coders, and even hackers will become the new face of the American military, too. Our new warriors will be geeks working around the clock in the highest-security environment possible. Think of the Manhattan Project, but many times larger in size and complexity.

Of course, a third option is that AI is developed by a broad international consortium. However, the history of nuclear weapons proliferation shows why, at least so far, this idea is unlikely to come to pass on a worldwide level. As long as powerful nations like Russia and China independently push their own flavors of social policy, economic development, and government operations (many of which largely mirror their leaders' desires), such a consortium is unlikely to work or be accepted. We're not talking about good old-fashioned teamwork, like exploring outer space together on the space station or stopping third-world civil wars and genocides, as the respected United Nations sometimes does. We're talking about military power and the protection of our families, citizenry, and livelihoods. There's much less room for cooperation when it concerns such personal matters.

A fourth option, one that I believe may be inevitable in the long run, is that all nations unite democratically and politically under one flag, one elected leadership, and one government, in an effort to better control the technology that is ushering in the transhumanist age, such as superintelligent AI. Then, all together, we create this intelligence. I like the sound of this from a philosophical and humanitarian point of view. The problem is that such a plan takes time and requires many proud people to swallow their egos and cultural differences, and with only about 10 to 20 years before a superintelligent AI is created, no one is going to push hard for that option.

So, inevitably, we are back to our looming dog-eat-dog AI arms race. It may not be filled with nuclear fallout shelters like the arms races of yesteryear, but it will show all the signs of the most powerful nations, and the best minds they possess, vying against one another for all-important future national security. More importantly, it's a winner-takes-all scenario. The competition of the century is set to begin.