AI Companies Building ‘Most Powerful’ Models Must Inform Feds, Biden Executive Order States

President Biden's wide-ranging executive order on AI aims to curtail some risks to the public while supporting companies.
Image: Anna Moneymaker / Staff via Getty Images

The White House unveiled President Biden’s long-awaited executive order on artificial intelligence technologies on Monday, which aims to minimize some risks to the public from AI systems while supporting companies developing the technology. 

Since OpenAI’s ChatGPT software took off last year, AI has been quickly adopted in sectors ranging from search engines to Wall Street. Experts have raised serious concerns arising from this AI arms race, ranging from job losses, to financial collapse, to a severe degradation in sources of reliable information as deepfakes proliferate. Even so, the tech industry has loudly championed unfettered development of AI. Lawmakers in Congress recently met behind closed doors with AI creators, including OpenAI CEO Sam Altman and tech investor Marc Andreessen, who recently called regulations slowing AI development “a form of murder.” 

The new Executive Order “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more,” according to a White House fact sheet.

On the topic of AI safety, the Executive Order requires AI companies building the “most powerful” AI systems—or a “foundational model,” meaning a model that can be applied to numerous tasks with fine-tuning—to share information with the federal government, including internal tests designed to defend against unintended or nefarious uses. If a model “poses a serious risk to national security, national economic security, or national public health and safety,” the order states, “[the company] must notify the federal government when training the model, and must share the results of all red-team safety tests.”

The Order also directs the National Institute of Standards and Technology to develop standards for testing the safety of AI systems, which the Department of Homeland Security will apply to “critical infrastructure sectors,” and establishes an AI Safety and Security Board. The Department of Energy and Homeland Security will also use those standards to address threats from AI, the Order states. 

Notably, the Executive Order offers little in the way of concrete protections for workers facing job losses or weakened income due to AI systems. The use of AI to lessen or degrade the contributions of writers in Hollywood was a major sticking point in the recent writers’ strike that stalled the entertainment industry for months. Actors are currently still on strike, with AI being one motivating factor. The Executive Order states that the government will develop “principles and best practices” that will serve as “guidance” for employers. It also states that the government will prepare a report on AI’s labor market impacts to “identify options” for supporting workers. 

In contrast, the Order announces direct support for startups. The government will be “providing small developers and entrepreneurs access to technical assistance and resources” and will directly “[help] small businesses commercialize AI breakthroughs,” the Order states. It will also expand grants for AI research, provide researchers with resources and data, and expand the ability of immigrants with expertise in critical areas to work in the U.S. “America already leads in AI innovation—more AI startups raised first-time capital in the United States last year than in the next seven countries combined,” the White House’s fact sheet states. The White House will also encourage the Federal Trade Commission to use its authority against anticompetitive practices in the industry. 

The Order is very broad and covers numerous areas of concern surrounding AI. For example, agencies that fund the sciences will apply new safety standards to research that seeks to use AI to discover new biological compounds, in order to protect against the development of “dangerous” biological materials. Last year, researchers published a study showing that an off-the-shelf AI program used in pharmaceutical research could be reconfigured to generate 40,000 potential bioweapons in six hours. 

As AI systems get better at generating convincing video and imagery, experts have become increasingly concerned about the potential for misuse. Online scammers have already begun to use AI to generate videos of celebrities to lure in victims, and the threat of deepfakes has hung over attempts to assess imagery from Hamas’ deadly attack on Israelis, and Israel’s devastating retaliatory siege on Gaza. The Executive Order states that the government will develop “guidance for content authentication and watermarking to clearly label AI-generated content” that it will use to “make it easy for Americans to know that the communications they receive from their government are authentic.”

The Order states that the government will issue “guidance” for landlords and federal benefits programs so that AI is not used to discriminate against people, and will establish “best practices” for “investigating and prosecuting civil rights violations related to AI.” It will also introduce best practices for the use of AI in the criminal justice system, from sentencing to policing. AI-powered systems have long been accused of perpetuating biases when assigning risk scores for reoffending, for example, and there have been numerous cases of innocent Black people being arrested after being flagged by a facial recognition system. 

The Order covers even more ground, from privacy protections to supporting the development and deployment of AI abroad. It’s clear, though, that while the federal government wants to take concrete steps to curtail some risks to the public, widespread unemployment or degradation of certain occupations due to the use of AI by companies is not presently one of them. Rather, it sees AI as another vector for entrenching American technological and economic dominance.