Sixty U.K. lawmakers have accused Google DeepMind of violating international AI safety pledges in an open letter organized by activist group PauseAI U.K. The cross-party coalition claims that Google's March release of Gemini 2.5 Pro without publishing detailed safety testing information "sets a dangerous precedent" and undermines its commitments to responsible AI development.
What you should know: Google DeepMind failed to provide pre-deployment access to Gemini 2.5 Pro to the U.K. AI Safety Institute, breaking established safety protocols.
- TIME confirmed for the first time that Google DeepMind did not share the model with the U.K. AI Safety Institute before its March 25 release.
- The company only provided access to the safety institute after the model was already publicly available.
- This contradicts international agreements requiring AI companies to allow government safety testing before deployment.
Who’s involved: The letter includes prominent figures from across the political spectrum calling for accountability.
- Signatories include digital rights campaigner Baroness Beeban Kidron and former Defence Secretary Des Browne.
- The initiative was coordinated by PauseAI U.K., an activist organization focused on AI safety, with members directly contacting their MPs to request signatures.
- The letter was addressed to Demis Hassabis, CEO of Google DeepMind.
Google’s response: The company claims it did share the model with safety experts, but only after public release.
- A Google DeepMind spokesperson told TIME that the company shared Gemini 2.5 Pro with the U.K. AI Safety Institute and “a diverse group of external experts.”
- These external experts included Apollo Research, Dreadnode, and Vaultis.
- However, Google confirmed this sharing occurred only after the March 25 public release date.
Why this matters: The controversy highlights growing tensions between AI companies and regulators over safety commitments.
- The lawmakers argue that releasing models without prior safety testing undermines international efforts to ensure responsible AI development.
- This case could set a precedent for how strictly governments enforce AI safety pledges from major tech companies.
- The incident raises questions about whether voluntary commitments from AI companies are sufficient to ensure public safety.
The big picture: This dispute reflects broader challenges in AI governance as companies race to deploy increasingly powerful models while governments struggle to establish effective oversight mechanisms.