Artificial Intelligence

The OpenAI Genius Who Told Sam Altman He Was Fired

Author: Editors Desk | Source: WSJ
November 21, 2023 at 11:55
OpenAI chief scientist Ilya Sutskever is a highly regarded technical visionary in the field of artificial intelligence. PHOTO: JACK GUEZ/AGENCE FRANCE-PRESSE/GETTY IMAGES
Company’s chief scientist led a board coup against one of the most prominent figures in Silicon Valley

Last Friday, Ilya Sutskever sent a text message to his boss and fellow board member, Sam Altman. Could he jump on a video chat at noon Pacific time?

On the call was the rest of OpenAI’s board, but it was Sutskever, the company’s chief scientist and co-founder, who delivered the news: Altman was fired, and the news was going out very soon.

The meeting kicked off a weekend of turmoil, a revolt by OpenAI’s employees and, ultimately, an apology from Sutskever. On Monday, he wrote on X, formerly Twitter: “I deeply regret my participation in the board’s actions.”

It was a dramatic 72 hours for Sutskever, but also the culmination of a 15-year journey that saw him evolve from a young graduate student tinkering with little-known algorithms to the leader of a board coup that deposed one of the most prominent figures in Silicon Valley.

Most employees at OpenAI have demanded that former CEO Sam Altman be reinstated after the company’s board fired him on Friday. Microsoft said late Sunday that it hired Altman to lead a new advanced artificial-intelligence research team. PHOTO: JUSTIN SULLIVAN/GETTY IMAGES

Sutskever is a highly regarded technical visionary in the field of artificial intelligence. After a stint at the Open University of Israel, he moved to the University of Toronto. At first, his work seemed highly experimental. A 2010 profile by the University of Toronto Magazine describes Sutskever’s software creating nonsensical Wikipedia-like entries such as “Akkerma’s Alcesia Minor (including) of Hawaiian State Rites of Cassio.” 

But in 2012, he made a name for himself by co-authoring a paper that demonstrated the extraordinary capabilities of the “deep learning” algorithms that Sutskever and his fellow University of Toronto researchers had been tinkering with. This paper, co-written with Geoffrey Hinton and Alex Krizhevsky, showed that deep learning could be used to solve pattern-recognition problems at a level that had previously seemed unthinkable. It kicked off a race to find new uses for deep learning and—a year later—the three researchers were all working at Google. 

There, Sutskever worked on TensorFlow, the company’s state-of-the-art machine-learning software framework.

By 2016, Sutskever was chief scientist at OpenAI, where he became focused on two ideas: first, that AI systems could achieve human-level thinking, also known as artificial general intelligence, or AGI; and second, that such systems would be dangerous if they weren’t somehow “aligned” with human values.

“He’s incredibly smart, but his biggest claim to fame is his work at OpenAI, which hopefully he hasn’t tanked,” said Oren Etzioni, a professor emeritus at the University of Washington who studies AI.
 

Sam Altman, OpenAI’s ousted CEO, in light pants, and the company’s chief scientist Ilya Sutskever, in a navy suit, clashed over safety, sources familiar with the board’s thinking say. PHOTO: JACK GUEZ/AFP/GETTY IMAGES


Current and former employees describe the square-jawed and intense Sutskever as approaching AI’s problems with a near-religious fervor. “If you don’t feel the AGI when you wake up and when you fall asleep, you shouldn’t be at this company,” he told employees at an all-hands meeting late last year. Sutskever has told employees that he worried that eventually AGI systems would treat humans the way humans currently treat animals, people familiar with the matter said. 

In a podcast interview earlier this year, Scott Aaronson, a computer scientist at the University of Texas at Austin, who is currently working on safety at OpenAI, described his weekly calls with Sutskever. “I’d tell him about my progress…and he would say, ‘Well, that’s great, Scott, and you should keep working on that. But what we really want to know is how do you formalize what it means for the AI to love humanity?’”

In recent years, Sutskever had become increasingly concerned with AI safety. At OpenAI, he headed up the company’s Superalignment team, which was set up to use 20% of the company’s computing power to make sure that the AI systems it built wouldn’t be harmful to humans.

Although the precise reason Altman was ousted is unclear, sources familiar with the board’s thinking say that Sutskever and Altman clashed over safety. In a note sent to OpenAI employees on Sunday night, the company’s remaining four board members said the safety dispute wasn’t the reason for Altman’s dismissal. “Sam’s behavior and lack of transparency in his interactions with the board undermined the board’s ability to effectively supervise the company,” they wrote.

The man that Sutskever and the other board members chose to succeed Altman, Emmett Shear, said in September that he was in favor of slowing down the pace of AI development for safety reasons. “If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead,” Shear wrote on X.

By Monday morning, Sutskever had posted a message on X recanting his actions. “I never intended to harm OpenAI,” he said. “I love everything we’ve built together and I will do everything I can to reunite the company.”

Altman quickly responded to Sutskever’s X message, republishing it to his two million followers along with three red hearts.

Deepa Seetharaman contributed to this article.

Write to Robert McMillan at robert.mcmillan@wsj.com
