People Don’t Want to Leave AI Up to Corporations

A new survey shows a large preference for government involvement.

Now that machines are learning to do all sorts of things on their own, from deciding who gets bail to finding new pharmaceuticals, the question is how we can ensure that computers don't accidentally screw up people's lives. The results from a new survey conducted on behalf of the Royal Society, a UK-based scientific fellowship, show that at the very least folks want the government to be involved in deciding how AI develops. Indeed, "71% of people feel that the government should play some role in the development of machine learning," the survey states.


Despite much of the basic research in deep learning being done in government-funded university labs, private corporations like Google and Facebook have largely been responsible for bringing the technology into our daily lives. It's safe to assume that this will continue, and big shots like Bill Gates have begun thinking about how to control the technology's rollout and how to protect workers who may see their jobs automated. For Gates and the politicians who agree with him, this means a robot tax, but there are other ideas out there, like the government forcing companies that use robots to pay into a public trust.

The exact methods of control are undecided, but the Royal Society survey, conducted by market research firm Ipsos, makes it clear that people want the government to implement some sort of controls.


Of the nearly 1,000 UK adults who responded to the survey, 37 percent said the private sector should develop AI with its own money, but with government-imposed rules and regulations in place. Another 34 percent said the government should also fund the research, which would typically mean government-set research priorities. Just 12 percent of respondents said the private sector should be allowed to develop AI without any rules.

We're already seeing how AI is getting away from us. For example, algorithms used to decide who is allowed bail during court proceedings may have unexamined racial biases (leading some experts to call for the open-sourcing of the code), and study after study has shown that similar biases can show up in our machines in unexpected ways, and with potentially disastrous consequences for people.

Government control over AI will probably rub some hardcore free-market types the wrong way, but it's more or less how society has functioned for a long time. Laws are (ideally) codified manifestations of what we have collectively decided is acceptable and what is not, and science is beholden to these rules. Canada made human cloning illegal in 2004. The US has a law that limits how genetic data can be used. When cars were invented, we made laws about how they should be manufactured and operated.

There's probably not a single technology that exists today that doesn't have some sort of law governing its design or use. And, save for the nightmare hell-world of copyright law, that's largely a good thing.
