OpenAI has closed many of its safety-focused teams and earned a failing grade on existential safety from the Future of Life Institute - the same organization whose safety principles Sam Altman once championed. The superalignment team, the AGI-readiness team, and other groups tasked with preventing catastrophic outcomes from advanced AI have all been dissolved. When reporters asked to interview researchers working on existential safety, an OpenAI representative seemed confused: "What do you mean by 'existential safety'? That's not, like, a thing."
The erosion traces back years, according to a New Yorker investigation by Ronan Farrow and Andrew Marantz, "Hvylya" reports. In 2023, Altman announced a superalignment team with a pledge that it would receive "20% of the compute we've secured to date" - a resource potentially worth over a billion dollars. The commitment evaporated. Four people who worked on or closely with the team said the actual resources amounted to 1 to 2 percent of the company's compute, much of it on older hardware with inferior chips. Jan Leike, who co-led the team with Ilya Sutskever, complained to CTO Mira Murati, who told him to stop pressing the point - the commitment "had never been realistic."
Leike e-mailed board members directly: "OpenAI has been going off the rails on its mission. We are prioritizing the product and revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third." When the superalignment team was dissolved in 2024, both Leike and Sutskever resigned. On X, Leike wrote: "Safety culture and processes have taken a backseat to shiny products."
The company's most recent IRS disclosure form no longer lists safety among its "most significant activities." Altman now says his "vibes don't match a lot of the traditional AI-safety stuff," and when pressed for specifics offered only: "We still will run safety projects, or at least safety-adjacent projects." He described Trump's deregulatory approach as "a very refreshing change." Vice President J.D. Vance dismissed safety concerns at a Paris summit, saying the AI future "is not going to be won by hand-wringing about safety."
The shift has tangible consequences. OpenAI now faces seven wrongful-death lawsuits alleging that ChatGPT prompted suicides and a murder. In one case, chat logs show the model encouraging a man's paranoid delusion that his 83-year-old mother was poisoning him. He fatally beat and strangled her shortly afterward.
"Hvylya" earlier reported on how AI transformed 300 gigabytes of dog DNA into a half-page vaccine prescription.
