The three or four leading AI labs are pulling away from the rest of the pack, and the gap will only grow as current techniques reach their limits, Google DeepMind CEO Demis Hassabis has predicted. The advantage, he said, will go to those who can invent entirely new approaches - not just scale existing ones.

"Those labs that have capability to invent new algorithmic ideas are going to start having bigger advantage over the next few years as the last set of ideas - all the juice is being wrung out of them," Hassabis said during an appearance on the 20VC podcast, "Hvylya" reports.

Hassabis pushed back on the narrative that scaling laws have hit a wall. Returns from building bigger models remain "very substantial," he said, though no longer doubling with each generation as they did at the start. The real bottleneck is compute - not just for training, but for experimentation. "The computers, the cloud is our workbench," he explained. Testing a new algorithmic idea requires running it at a reasonable scale, which means massive compute budgets just to validate a hunch.

On open-source competition, Hassabis was measured. Google continues to invest in its Gemma suite of smaller models, designed to be "best-in-class for their sizes" for academics, small developers, and edge computing. But he said open-source models will likely remain "one step back from the absolute frontier," with the community typically needing about six months to figure out and reimplement new ideas from leading labs.

DeepMind's resurgence, Hassabis said, came from consolidating talent and compute scattered across multiple Google groups. "A lot of it was assembling together all the ingredients we already had and then pushing with relentless focus and pace - acting almost like a startup," he said. He claimed roughly 90 percent of the breakthroughs underpinning the modern AI industry came from Google Brain, Google Research, or DeepMind.