Unhappy and disappointed customer giving low rating and negative feedback in survey, poll or questionnaire. Sad and dissatisfied man giving review about service quality. Bad user experience.


This week, Sakana AI, an Nvidia-backed startup that’s raised a whole bunch of hundreds of thousands of {dollars} from VC companies, made a outstanding declare. The corporate stated it had created an AI system, the AI CUDA Engineer, that would successfully pace up the coaching of sure AI fashions by an element of as much as 100x.

The one drawback is, the system didn’t work.

Customers on X shortly found that Sakana’s system really resulted in worse-than-average mannequin coaching efficiency. Based on one consumer, Sakana’s AI resulted in a 3x slowdown — not a speedup.

What went mistaken? A bug within the code, in accordance with a publish by Lucas Beyer, a member of the technical workers at OpenAI.

“Their orig code is mistaken in [a] refined manner,” Beyer wrote on X. “The very fact they run benchmarking TWICE with wildly totally different outcomes ought to make them cease and assume.”

In a postmortem printed Friday, Sakana admitted that the system has discovered a solution to “cheat” (as Sakana described it) and blamed the system’s tendency to “reward hack” — i.e. establish flaws to attain excessive metrics with out undertaking the specified purpose (rushing up mannequin coaching). Comparable phenomena has been noticed in AI that’s educated to play video games of chess.

Based on Sakana, the system discovered exploits within the analysis code that the corporate was utilizing that allowed it to bypass validations for accuracy, amongst different checks. Sakana says it has addressed the difficulty, and that it intends to revise its claims in up to date supplies.

“We’ve since made the analysis and runtime profiling harness extra strong to remove a lot of such [sic] loopholes,” the corporate wrote within the X publish. “We’re within the technique of revising our paper, and our outcomes, to replicate and talk about the results […] We deeply apologize for our oversight to our readers. We’ll present a revision of this work quickly, and talk about our learnings.”

Props to Sakana for proudly owning as much as the error. However the episode is an efficient reminder that if a declare sounds too good to be true, particularly in AI, it in all probability is.