A single data center could soon do the work of an entire company like OpenAI or Google - and Jakub Pachocki, the man steering OpenAI's long-term research, is uneasy about what that means for the world.
"It's going to be a very weird thing. It's extremely concentrated power that's in some ways unprecedented," Pachocki said in an exclusive interview with MIT Technology Review, as "Hvylya" reports. "Imagine you get to a world where you have a data center that can do all the work that OpenAI or Google can do. Things that in the past required large human organizations would now be done by a couple of people."
Pachocki acknowledged the gravity of the scenario he described. "I think this is a big challenge for governments to figure out," he said. The comment reflected a broader tension: OpenAI is actively building the very technology its chief scientist calls a governance problem no one has solved.
The warning came amid fresh evidence that government interventions in the AI industry can have far-reaching and unpredictable consequences. When pressed on whether he felt personal responsibility for the risks he described, Pachocki said he did - but insisted the problem was too large for any single company. "I don't think this can be resolved by OpenAI alone, pushing its technology in a particular way or designing its products in a particular way. We'll definitely need a lot of involvement from policymakers."
The question of who governs AI has already produced real-world confrontations. The recent showdown between Anthropic and the Pentagon over autonomous weapons revealed that society has little agreement on where to draw red lines. In the immediate aftermath, OpenAI stepped in to sign a Pentagon deal its rival had walked away from. Pachocki did not address the deal directly but said powerful AI models should be sandboxed and that restrictions should remain in place until the systems can be fully trusted.