On Thursday, weeks after launching its most powerful AI model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, the report is light on details, experts say, making it difficult to determine which risks the model might pose.

Technical reports provide useful, and at times unflattering, information that companies don’t always widely advertise about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.

Google takes a different safety reporting approach than some of its AI rivals, publishing technical reports only once it considers a model to have graduated from the “experimental” stage. The company also doesn’t include findings from all of its “dangerous capability” evaluations in these write-ups; it reserves those for a separate audit.

Still, several experts TechCrunch spoke with were disappointed by the sparsity of the Gemini 2.5 Pro report, which they noted doesn’t mention Google’s Frontier Safety Framework (FSF). Google introduced the FSF last year in what it described as an effort to identify future AI capabilities that could cause “severe harm.”

“This [report] is very sparse, contains minimal information, and came out weeks after the model was already made available to the public,” Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, told TechCrunch. “It’s impossible to verify if Google is living up to its public commitments and thus impossible to assess the safety and security of their models.”

Thomas Woodside, co-founder of the Secure AI Project, said that while he’s glad Google released a report for Gemini 2.5 Pro, he’s not convinced of the company’s commitment to delivering timely supplemental safety evaluations. Woodside pointed out that the last time Google published the results of dangerous capability tests was in June 2024, for a model announced in February of that same year.

Not inspiring much confidence, Google hasn’t made available a report for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told TechCrunch that a report for Flash is “coming soon.”

“I hope this is a promise from Google to start publishing more frequent updates,” Woodside told TechCrunch. “Those updates should include the results of evaluations for models that haven’t been publicly deployed yet, since those models could also pose serious risks.”

Google may have been one of the first AI labs to propose standardized reports for models, but it’s not the only one that’s been accused of underdelivering on transparency lately. Meta released a similarly skimpy safety evaluation of its new Llama 4 open models, and OpenAI opted not to publish any report for its GPT-4.1 series.

Hanging over Google’s head are assurances the tech giant made to regulators to maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for all “significant” public AI models “within scope.” The company followed up on that promise with similar commitments to other countries, pledging to “provide public transparency” around AI products.

Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a “race to the bottom” on AI safety.

“Combined with reports that competing labs like OpenAI have shaved their safety testing time before release from months to days, this meager documentation for Google’s top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market,” he told TechCrunch.

Google has said in statements that, while not detailed in its technical reports, it conducts safety testing and “adversarial red teaming” for models ahead of release.