For a company whose stated mission is to ensure artificial general intelligence benefits all of humanity, OpenAI's chief scientist appears notably uninterested in the concept. In an exclusive conversation with MIT Technology Review, Jakub Pachocki mentioned AGI only once - and immediately replaced it with a different term.

Rather than chasing a system that matches human cognition across every domain, Pachocki described OpenAI's actual target as "economically transformative technology," Hvylya reports, citing the MIT Technology Review interview. The distinction matters: it sets a lower, more concrete bar than the sweeping promises that have defined AI marketing for years.

"Even by 2028, I don't expect that we'll get systems as smart as people in all ways. I don't think that will happen," Pachocki said. He went further, rejecting the analogy between LLMs and human minds. "They are superficially similar to people in some ways because they're kind of mostly trained on people talking. But they're not formed by evolution to be really efficient."

The candor stands out in an industry where competitors routinely invoke AGI as both a technical milestone and a fundraising pitch. Pachocki's framing aligns more closely with recent warnings from economists that the real impact of AI will be economic and political - not the sci-fi scenario of a machine that outthinks humans at everything.

"The interesting thing is you don't need to be as smart as people in all their ways in order to be very transformative," Pachocki said. His vision is narrower but arguably more credible: systems that work autonomously on specific categories of problems for extended periods, not general-purpose digital humans. He cited the jump from GPT-3 to GPT-4 as evidence that raw capability gains naturally produce systems that sustain coherent work longer - even without specialized training for autonomous operation.

Doug Downey of the Allen Institute for AI admitted similar uncertainty about how near or far specific capabilities are. "I've been in this field for a couple of decades and I no longer trust my predictions," he said. LLMs, as one recent analysis argued, may be most transformative not as general intelligences but as specialized expert systems that democratize access to knowledge.
