On Thursday, weeks after launching its most powerful AI model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, the report is light on details, experts say, making it difficult to determine which risks the model might pose.
Technical reports provide useful, and at times unflattering, information that companies don’t always widely advertise about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.
Google takes a different safety reporting approach than some of its AI rivals, publishing technical reports only once it considers a model to have graduated from the “experimental” stage. The company also doesn’t include the findings from all of its “dangerous capability” evaluations in these write-ups; it reserves those for a separate audit.
Several experts TechCrunch spoke with were still disappointed by the sparseness of the Gemini 2.5 Pro report, which, they noted, doesn’t mention Google’s Frontier Safety Framework (FSF). Google introduced the FSF last year in what it described as an effort to identify future AI capabilities that could cause “severe harm.”
“This [report] is very sparse, contains minimal information, and came out weeks after the model was already made available to the public,” Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, told TechCrunch. “It’s impossible to verify if Google is living up to its public commitments and thus impossible to assess the safety and security of their models.”
Thomas Woodside, co-founder of the Secure AI Project, said that while he’s glad Google published a report for Gemini 2.5 Pro, he’s not convinced of the company’s commitment to delivering supplemental safety evaluations in a timely manner. Woodside pointed out that the last time Google published the results of dangerous capability tests was in June 2024, for a model announced in February of that same year.
Not inspiring much confidence, Google hasn’t made a report available for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told TechCrunch that a report for Flash is “coming soon.”
“I hope this is a promise from Google to start publishing more frequent updates,” Woodside told TechCrunch. “Those updates should include the results of evaluations for models that haven’t been publicly deployed yet, since those models could also pose serious risks.”
Google may have been one of the first AI labs to propose standardized reports for models, but it’s not the only one that has been accused of underdelivering on transparency lately. Meta released a similarly skimpy safety evaluation of its new Llama 4 open models, and OpenAI opted not to publish any report for its GPT-4.1 series.
Hanging over Google’s head are the assurances the tech giant made to regulators to maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for all “significant” public AI models “within scope.” The company followed that promise with similar commitments to other countries, pledging to “provide public transparency” around AI products.
Kevin Bankston, a senior advisor on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a “race to the bottom” on AI safety.
“Combined with reports that competing labs like OpenAI have shaved their safety testing time before release from months to days, this meager documentation for Google’s top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market,” he told TechCrunch.
Google has said in statements that, while not detailed in its technical reports, it conducts safety testing and “adversarial red teaming” for models ahead of release.