‘I’m a Doomer’: OpenAI’s New Interim CEO Wants to Slow AI Progress Down

Emmett Shear has called himself a "doomer" and "AI safetyist" in posts on X, and railed against e/acc ideology.
Image: Robin L Marshall / Contributor via Getty Images

The new CEO of ChatGPT maker OpenAI is a self-professed artificial intelligence “doomer” who has railed against a vision of unfettered AI development championed by powerful figures in Silicon Valley. 

The tech world is still reeling after a chaotic weekend that saw former OpenAI CEO Sam Altman ousted by the company's board, negotiate for reinstatement after investors balked, and then join Microsoft when those talks broke down. The tumult also saw the OpenAI board name company veteran Mira Murati as interim CEO before quickly replacing her with Twitch co-founder and former CEO Emmett Shear in another surprising move. 

The situation is still fluid, and there's no telling what will happen next. On Monday morning, the majority of OpenAI's employees, including Murati and board member Ilya Sutskever, signed a letter pledging to leave the company and work at Microsoft if the board does not resign and reinstate Altman and his lieutenant Greg Brockman. But for now, Shear is CEO of the most important AI company in the U.S., and he has a lot of thoughts about the direction of AI development that he's posted on X (formerly Twitter) over the past year. 

Shear’s posts reveal he is a self-declared AI “doomer” and “safetyist” who wants to slow AI development, explicitly breaking with the effective accelerationists (“e/acc”) of Silicon Valley who want to charge ahead as fast as possible with minimal concern for regulations. 

“I'm a doomer and I'm basically e/acc on literally everything except the attempt to build a human level [AGI],” Shear wrote in an August post replying to a user who referred to doomerism—the belief that a sufficiently powerful AI, or Artificial General Intelligence (AGI), could destroy humanity—as “primitive.” He added in a follow-up post that “AGI is a pit trap with spikes we have to avoid, while navigating towards the north star of progress. This is not hard stuff to understand! Believing AGI is dangerous simply does NOT equate to primitivism.”

In another post, Shear referred to himself as an “AI safetyist.” 

“[Techno-optimism] doesn't run counter to AI Safetyism at all, at least not any of the AI Safetyists I know of including myself,” Shear posted in a July thread where another user argued that e/acc ideology is morphing into simple tech optimism. “I'm a techno-optimist who ALSO believes that there's a chance an human level [AGI] will be catastrophically dangerous.”

Shear posted a chart on November 18 explaining what being an AI “doomer” or “safetyist” currently means to him. A doomer is someone who wants to “slow down capabilities research” while a safetyist wants to “regulate to require safety and anti-bias.” 

“I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down,” he said in September. “If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.”

In June, Shear praised Meta for “not making cutting edge frontier models” and “staying well within established bounds of safety.” He said it is “in many ways, the only company taking AI safety truly seriously!”

Under Sam Altman, OpenAI made developing AGI its top priority while also paying attention to safety concerns, and Altman focused his energy on pushing ChatGPT out into the world and quickly shipping new tools. Sutskever bought into the AGI hype as well: he reportedly made “Feel the AGI!” an internal company slogan, and he reportedly commissioned a wooden effigy of an “unaligned,” or dangerous, AGI that was burned at an offsite meeting for company leadership. 

Much of the commentary around the OpenAI mayhem has centered on the supposed ideological divides between Altman and other board members. Prominent tech investor Marc Andreessen spent the weekend sharing X posts framing Altman’s firing as a victory for “safetyists,” “doomers,” and “decels” over e/acc. Andreessen recently championed e/acc thinking in a rambling manifesto that argued slowing down AI development with regulations is akin to a form of murder, and currently has “e/acc” in his X bio. 

Shear, it appears, is not a fan. “e/acc: the movement that rejects reality, and wants to proceed in the most convenient and self-enriching way regardless of the real world consequences of their actions. Yup, checks out,” he said in a post on April 19. 

It’s unclear how heavily such ideological divides weighed on the OpenAI board’s deliberations, if at all. While much of the initial speculation around Altman’s ousting positioned him as being in opposition to Sutskever on issues like safety, Sutskever signed the letter calling for Altman’s return and said in a post on X that he “deeply [regrets] my participation in the board's actions.” In a post announcing that he was taking on the interim CEO role, Shear clarified that “the board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models.”

Regardless, Shear’s public comments provide valuable insight into what the new interim head of OpenAI thinks about issues related to its core work. Shear did not respond to a request for comment sent via DM.