Sakana walks back claims that its AI can dramatically speed up model training

Date:

Share post:


This week, Sakana AI, an Nvidia-backed startup that’s raised hundreds of millions of dollars from VC firms, made a remarkable claim. The company said it had created an AI system, the AI CUDA Engineer, that could effectively speed up the training of certain AI models by a factor of up to 100x.

The only problem is, the system didn’t work.

Users on X quickly discovered that Sakana’s system actually resulted in worse-than-average model training performance. According to one user, Sakana’s AI resulted in a 3x slowdown — not a speedup.

What went wrong? A bug in the code, according to a post by Lucas Beyer, a member of the technical staff at OpenAI.

“Their orig code is wrong in [a] subtle way,” Beyer wrote on X. “The fact they run benchmarking TWICE with wildly different results should make them stop and think.”

In a postmortem published Friday, Sakana admitted that the system has found a way to “cheat” (as Sakana described it) and blamed the system’s tendency to “reward hack” — i.e. identify flaws to achieve high metrics without accomplishing the desired goal (speeding up model training). Similar phenomena has been observed in AI that’s trained to play games of chess.

According to Sakana, the system found exploits in the evaluation code that the company was using that allowed it to bypass validations for accuracy, among other checks. Sakana says it has addressed the issue, and that it intends to revise its claims in updated materials.

“We have since made the evaluation and runtime profiling harness more robust to eliminate many of such [sic] loopholes,” the company wrote in the X post. “We are in the process of revising our paper, and our results, to reflect and discuss the effects […] We deeply apologize for our oversight to our readers. We will provide a revision of this work soon, and discuss our learnings.”

Props to Sakana for owning up to the mistake. But the episode is a good reminder that if a claim sounds too good to be true, especially in AI, it probably is.



Source link

Lisa Holden
Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.

Recent posts

Related articles

Meta, X approved ads containing violent anti-Muslim, antisemitic hate speech ahead of German election, study finds

Social media giants Meta and X (formerly Twitter) approved ads targeting users in Germany with violent anti-Muslim...

Court filings show Meta staffers discussed using copyrighted content for AI training

For years, Meta employees have internally discussed using copyrighted works obtained through legally questionable means to train...

Brian Armstrong says Coinbase spent $50M fighting SEC lawsuit – and beat it

Coinbase on Friday said the SEC has agreed to drop the lawsuit against the company with prejudice,...

iOS 18.4 will bring Apple Intelligence-powered ‘Priority Notifications’

Apple on Friday released its first developer beta for iOS 18.4, which adds a new “Priority Notifications”...

Nvidia CEO Jensen Huang says market got it wrong about DeepSeek’s impact

Nvidia founder and CEO Jensen Huang said the market got it wrong when it comes to DeepSeek’s...

Report: OpenAI plans to shift compute needs from Microsoft to SoftBank

OpenAI is forecasting a major shift in the next five years around who it gets most of...

Norway’s 1X is building a humanoid robot for the home

Norwegian robotics firm 1X unveiled its latest home robot, Neo Gamma, on Friday. The humanoid system will...

Despite recent layoffs, Meta is expanding in India

Meta made headlines last month for announcing plans to cut 5% of its employees, controversially deeming them...