Between 70% and 90% of the code Anthropic uses to develop future versions of Claude is now written by Claude itself. The company's co-founder and chief science officer Jared Kaplan believes fully automated AI research could be as little as a year away, a milestone that would fundamentally change how artificial intelligence evolves.

"Hvylya" reports, citing a TIME investigation, that Anthropic's internal benchmarks show Claude is already 427 times faster than its human overseers at performing some key tasks. One researcher described a colleague running six versions of Claude, each managing 28 additional instances, all simultaneously running experiments in parallel.

The catalyst was Claude Code, a tool created by Ukrainian-born engineer Boris Cherny in his first month at Anthropic. Where chatbots could only talk, Claude Code gave Claude hands: the ability to access files, run programs, and write and execute code like any programmer. When Cherny shared his prototype internally, it spread so fast that CEO Dario Amodei asked during his first performance review whether he was forcing colleagues to use it.

Evan Hubinger, who leads Anthropic's alignment stress-testing team, said the phenomenon many in AI circles have long anticipated is already underway. "Recursive self-improvement, in the broadest sense, is not a future phenomenon. It is a present phenomenon," he said. Model releases are now separated by weeks rather than months.

The acceleration cuts both ways. Claude speeds up Anthropic's safety research too, but the dynamic becomes circular as the company increasingly relies on the AI system to build its own successors. Helen Toner, interim executive director at Georgetown University's Center for Security and Emerging Technology, said the trajectory deserves far more scrutiny. "The idea that the wealthiest companies in the world, employing some of the smartest people on the planet, are trying to fully automate AI R&D deserves a 'what the f-ck' reaction," she said.

Anthropic staff say the model still lacks the judgment of its human overseers. Executives do not expect that gap to last long.
