The greatest mathematical discoveries may look less like flashes of genius and more like brute-force pattern matching - the same thing large language models do today. In conversation with Dwarkesh Patel, Fields Medal winner Terence Tao compared Johannes Kepler's decades-long hunt for planetary laws to how AI systems cycle through hypotheses until something sticks.
Kepler spent years trying to nest the Platonic solids between the orbits of the planets, convinced that God had encoded geometric perfection into the solar system, "Hvylya" reports, citing the Dwarkesh Podcast. The theory was beautiful and wrong. But Kepler kept cycling through ideas, eventually landing on elliptical orbits - and only after gaining access to Tycho Brahe's painstakingly collected dataset, the most precise astronomical observations of the era.
"Kepler certainly had to cycle through many ideas, several of which didn't work. I bet there were many that he didn't even publish at all because they just didn't fit," Tao said. "That's an important part of the process, trying all kinds of random things and seeing if they worked."
Patel pushed the analogy further, calling Kepler a "high-temperature LLM" - a model with the randomness dial turned up, trying wild associations for twenty years. Some of his ideas involved musical harmonies of the planets; others were pure astrology. But the third law of planetary motion was buried in the same book as the nonsense, and it turned out to be the key Newton needed a century later to derive the inverse-square law of gravity.
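The step from Kepler's third law to the inverse-square law can be sketched for the special case of circular orbits - a standard textbook reduction, not something quoted on the podcast:

```latex
% Kepler's third law (circular-orbit special case): the square of the period
% grows as the cube of the orbital radius, for some constant k.
T^2 = k\,r^3

% Centripetal force holding a planet of mass m on a circle of radius r:
F = m\,\omega^2 r = m\left(\frac{2\pi}{T}\right)^{\!2} r = \frac{4\pi^2 m\,r}{T^2}

% Substituting T^2 = k r^3 eliminates the period:
F = \frac{4\pi^2 m\,r}{k\,r^3} = \frac{4\pi^2 m}{k}\cdot\frac{1}{r^2}
  \;\propto\; \frac{1}{r^2}
```

Newton's actual argument, worked out for elliptical orbits, is more involved, but the circular case already shows why the third law's exponent pins down the inverse-square dependence.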
Tao agreed but stressed that hypothesis generation without verification is "just slop." "We celebrate Kepler, but we should also celebrate Brahe for his assiduous data collection, which was ten times more precise than any previous observation," he said. "That extra decimal point of accuracy was essential for Kepler to get his results."
The implication for AI cuts both ways. Models can now generate hypotheses at near-zero cost, but science still lacks the infrastructure to verify them at the same speed. "AI has driven the cost of idea generation down to almost zero," Tao said. "Now the bottleneck is different. We're now in a situation where suddenly people can generate thousands of theories for a given scientific problem. Now we have to verify them, evaluate them."
"Hvylya" earlier reported how one technology company cut 40 percent of its workforce after adopting AI tools - and Wall Street rewarded the decision.
