In the fall of 2023, OpenAI's chief scientist Ilya Sutskever compiled roughly 70 pages of Slack messages, HR documents, and explanatory text into a set of secret memos sent to three fellow board members. The documents, which have not previously been disclosed in full, allege that Sam Altman misrepresented facts to executives and board members and deceived them about internal safety protocols. One memo begins with a list headed "Sam exhibits a consistent pattern of . . ." - the first item is "Lying."

According to Hvylya, the memos, reviewed by The New Yorker's Ronan Farrow and Andrew Marantz, were sent as disappearing messages to avoid detection. Sutskever, once close enough to Altman's circle to officiate co-founder Greg Brockman's wedding, had grown convinced that OpenAI was nearing artificial general intelligence - and that Altman was not the right person to control it. "I don't think Sam is the guy who should have his finger on the button," Sutskever told another board member at the time.

Board members Helen Toner, an AI policy expert, and Tasha McCauley, an entrepreneur, received the memos as confirmation of what they already believed: Altman could not be trusted with the technology's future. The board fired Altman while he was attending a Formula 1 race in Las Vegas, releasing a terse statement saying he "was not consistently candid in his communications." Microsoft, which had invested $13 billion in OpenAI, learned of the plan only moments before it happened. "I couldn't get anything out of anybody," CEO Satya Nadella later said.

Within five days, Altman was reinstated after a coordinated campaign involving investors, crisis-communications specialist Chris Lehane, and direct support from Microsoft. A majority of OpenAI employees had threatened to leave with him. The departing board members demanded an independent investigation as a condition of their exit. OpenAI hired the law firm WilmerHale to conduct the review - but six people close to the inquiry told The New Yorker it appeared designed to limit transparency. No written report was produced; the findings were limited to oral briefings shared with two new board members selected after close conversations with Altman himself.

Separately, former OpenAI VP of research Dario Amodei - who left to found rival Anthropic - maintained over 200 pages of private notes documenting similar concerns about Altman's behavior over several years. Amodei's conclusion aligned with Sutskever's: "The problem with OpenAI is Sam himself." Mira Murati, who served as interim CEO during Altman's absence and later left to build her own startup, also stood by what she shared with the board. "We need institutions worthy of the power they wield," she told the magazine.

When pressed by the board during a call after his firing to acknowledge a pattern of deception, Altman reportedly said, "I can't change my personality." A board member interpreted this as: "What it meant was 'I have this trait where I lie to people, and I'm not going to stop.'" Altman told the reporters he does not recall the exchange, attributing criticism to a tendency "to be too much of a conflict avoider."
