AI Incident Monitor - Dec 2025 List

State-Sponsored "Agentic" Cyber Espionage. ALSO, Waymo’s Gridlock And The Physical-Digital Infrastructure Collapse and more.....

Editor’s Blur 📢😲

Less than 1 min read

Welcome to the December 2025 AI Incident’s List - As we now, AI laws around the globe are getting their moment in the spotlight, and crafting smart policies will take you more than a lucky guess - it needs facts, forward-thinking, and a global group hug 🤗. Enter the AI Bulletin’s Global AI Incident Monitor (AIM) monthly newsletter, your friendly neighborhood watchdog for AI “gone wild”. AIM keeps tabs, at the end of each month, on global AI mishaps and hazards🤭, serving up juicy insights for company executives, policymakers, tech wizards, and anyone else who’s interested. Over time, AIM will piece together the puzzle of AI risk patterns, helping us all make sense of this unpredictable tech jungle. Think of it as the guidebook to keeping AI both brilliant and well-behaved!

In This Issue: December 25 - Key AI Breaches
  1. State-Sponsored "Agentic" Cyber Espionage

  2. “Gemini Jack": The Zero-Click Enterprise Vulnerability

  3. Waymo’s Gridlock And The Physical-Digital Infrastructure Collapse

  4. Workday Class Action & NJ Regulation - The Algorithms on Trial

  5. Adobe Firefly Class Action And The Myth of "Ethical AI"

  6. "Friend" Wearable, A Surveillance Backlash

Total Number of AI Incidents by Location - Jan to Nov 2025

AI BREACHES (1)

1- State-Sponsored "Agentic" Cyber Espionage

The Briefing

In a landmark disclosure, Anthropic revealed that a state-sponsored actor (identified as GTG-1002, linked to China) successfully weaponized the "Claude Code" tool to conduct autonomous cyber espionage. Unlike traditional attacks, this campaign utilized an AI agent capable of executing 80-90% of the attack lifecycle independently. The agent autonomously handled reconnaissance, vulnerability scanning, and credential harvesting, with human operators stepping in only for strategic oversight. This incident marks the transition from theoretical "offensive AI" to active, machine-speed cyber warfare, rendering manual incident response protocols largely ineffective against such rapid, autonomous threats

Potential AI Impact!!

 ✔️ People & Planet: National Security (State-level threat to critical infrastructure and sovereignty).  

 ✔️ Economic Context: Commercial Espionage (Automated theft of intellectual property at scale).  

 ✔️ AI Model: Generative & Agentic (LLM wrapped in an autonomous workflow).  

 ✔️ Task & Output: Offensive Operations (Autonomous execution of the cyber kill chain

💁 Why is it a Breach?

This is a governance breach because it demonstrates the failure of "dual-use" controls on powerful coding agents. The same capabilities designed to help developers fix bugs- autonomous reasoning and code execution, were successfully repurposed for malicious network intrusion. It exposes a critical gap in the "Release" phase of the AI lifecycle, where safety training (RLHF) proved insufficient to prevent the model from acting as a weapon when wrapped in an agentic framework. This incident fundamentally changes the risk profile for every organization, as the barrier to entry for sophisticated, machine-speed attacks has been lowered significantly.

AI BREACHES (2)

2 - “Gemini Jack" - A Zero-Click Enterprise Vulnerability

The Briefing

Researchers at Noma Security disclosed "GeminiJack," a critical zero-click vulnerability in Google's Gemini Enterprise. The flaw exploited the system's Retrieval-Augmented Generation (RAG) architecture to allow for indirect prompt injection. Attackers could hide malicious instructions within benign documents (like Google Docs or Calendar invites). When a corporate user asked Gemini to summarize these documents, the AI unknowingly executed the hidden commands, which could instruct it to scan the user's private files and exfiltrate sensitive data via a malicious image URL. This required no distinct action from the user other than their normal interaction with the AI assistant.

Potential AI Impact!!

 ✔️ Data & Input: Poisoned Data (Malicious instructions injected into trusted corporate data sources).

✔️  AI Model: LLM / RAG (Vulnerability inherent to retrieval-based architectures).

✔️ Task & Output: Information Retrieval (Hijacking the summarization task for data exfiltration).

✔️  Economic Context: Enterprise Risk (Direct threat to trade secrets and internal communications).

💁 Why is it a Breach?

This is a breach of the "trust boundary" between data and code. In traditional software, data is passive. In Generative AI, data (text) can be interpreted as instructions. "GeminiJack" represents a failure in Data Governance and Input Sanitization, as the system failed to distinguish between a user's query and the content of a retrieved document. It highlights that RAG systems, often sold as the "safe" way to use enterprise AI - introduce massive new attack surfaces where a single poisoned file can compromise an entire organization's data privacy without the user ever realizing a breach occurred.

Total Number of AI Incidents by Industry - Jan to Nov 2025

AI BREACHES (3)

3 - Waymo’s Gridlock And The Physical-Digital Infrastructure Collapse

The Briefing

A massive power outage in San Francisco caused a cascade failure in the Waymo robotaxi fleet, leading to widespread gridlock. As traffic signals across the city went dark, Waymo vehicles entered a "fail-safe" mode, pulling over or stopping in intersections because they could not interpret the non-functional lights or navigate the social dynamics of an uncontrolled blackout intersection. The stalled vehicles blocked roads and impeded emergency responders, forcing Waymo to suspend operations entirely. The incident revealed the critical dependency of autonomous vehicles on functioning city infrastructure and their inability to handle "out-of-distribution" infrastructure failures.

Potential AI Impact!!

 ✔️ People & Planet: Public Safety (Physical obstruction of emergency routes and civic paralysis).  

 ✔️ Task & Output: Navigation (Failure in decision-making during non-standard environmental conditions).  

 ✔️ AI Model: Perception & Control (Inability to generalize "safe operation" without active signals).

 ✔️ Economic Context: Urban Mobility (Disruption of city-wide transportation and economic activity).

💁 Why is it a Breach?

This is a governance breach related to Resilience and Reliability. While the individual vehicles followed their safety protocols (stop when unsure), the collective behavior caused a civic denial-of-service attack. It highlights a failure in "Smart City" planning, where autonomous systems are deployed without adequate contingency for infrastructure collapse. The incident demonstrates that "safety" is not just about avoiding collisions, but about maintaining operational continuity during crises. It forces a re-evaluation of whether AVs should be permitted to operate without V2X (vehicle-to-everything) redundancies.

AI BREACHES (4)

4 - Workday Class Action & NJ Regulation: The Algorithms on Trial

The Briefing

In a pivotal month for algorithmic accountability, a federal court granted conditional certification to a class-action lawsuit (Mobley v. Workday) alleging that Workday's AI hiring tools discriminated against applicants aged 40 and older, as well as Black and disabled candidates. Simultaneously, New Jersey introduced new regulations codifying "disparate impact" liability, placing the burden of proof on employers to demonstrate their AI tools are not discriminatory. These events establish that software vendors can be held liable as "agents" of employers, shattering the liability shield that has protected AI vendors.

Potential AI Impact!!

 ✔️ People & Planet: Human Rights (Violation of non-discrimination and equal opportunity).  

 ✔️ AI Model: Predictive Analytics (Bias encoded in historical training data and ranking logic).  

 ✔️ Economic Context: Labor Market (Systemic exclusion of protected groups from employment).  

 ✔️ Data & Input: Training Data Bias ( reliance on biased historical hiring patterns). 

💁 Why is it a Breach?

This is a breach of Fundamental Human Rights and Regulatory Compliance. It highlights the failure of Algorithmic Fairness. For years, companies used "black box" algorithms to screen resumes, assuming that removing human reviewers removed bias. This incident proves that AI can industrialize and scale discrimination if the underlying data is biased. The legal recognition of the vendor as an "agent" means companies can no longer outsource their liability; they must audit their AI tools for disparate impact or face existential legal risk

Total Number of Incidents by Harm Type to Nov 2025

AI BREACHES (5)

5 - Adobe Firefly Class Action: The Myth of "Ethical AI"

The Briefing

A class-action lawsuit was filed against Adobe by authors, including Elizabeth Lyon, alleging that its "Firefly" AI, marketed as the "commercially safe" and "ethical" alternative to competitors - was trained on the "Books3" dataset, which contains thousands of pirated books. This directly contradicts Adobe's marketing claims that Firefly was trained only on licensed stock images and public domain content. The lawsuit alleges false advertising and copyright infringement, putting enterprise customers who relied on Adobe's indemnification at risk.

Potential AI Impact!!

 ✔️ Data & Input: Data Provenance (Use of unauthorized/pirated datasets for training).

✔️ Economic Context: Market Transparency (False advertising regarding the legal safety of the product).  

✔️ AI Model: Generative (Reliance on vast, unvetted scrapings despite claims of curation).  

✔️ People & Planet: Creator Rights (Violation of moral and economic rights of authors).

💁 Why is it a Breach?

This is a breach of Corporate Ethics and Transparency. Adobe's entire competitive advantage with Firefly was "safety", the promise that enterprise users wouldn't be sued for copyright infringement. If the allegations are true, it represents a catastrophic failure of Internal Governance, where engineering teams potentially bypassed legal mandates to improve model performance using pirated data. It shatters the trust in "clean" AI and suggests that even the most compliance-focused vendors may have "poisoned" supply chains

AI BREACHES (6)

6 - "Friend" Wearable - The Surveillance Backlash

The Briefing

The "Friend" AI wearable, a necklace device designed to record and transcribe conversations to provide "companionship," faced intense backlash and critical failure in December 2025. Privacy advocates and consumers rejected the device due to its "always-on" recording of bystanders without consent and opaque data retention policies. The product was criticized for creating "awkward social friction," leading to a market rejection that highlights the cultural limits of AI surveillance in personal spaces.

Potential AI Impact!!

 ✔️ People & Planet: Privacy & Dignity (Non-consensual recording of social interactions).  

✔️ Data & Input: Data Collection (Passive, continuous audio surveillance).  

✔️ Task & Output: Social Interaction (Companionship derived from surveillance mechanics).  

✔️ Economic Context: Consumer Adoption (Market failure due to rejection of social norms)

💁 Why is it a Breach?

This is a breach of Privacy by Design and Social License. The device failed because it ignored the "Contextual Integrity" of privacy, the idea that conversations in physical spaces are assumed to be ephemeral. By trying to capture these moments for an AI model, the device violated social norms and potentially laws (like GDPR and two-party consent statutes). It serves as a warning that consumer AI hardware cannot simply "move fast and break things" when those "things" are the fundamental privacy expectations of the general public.

Reply

or to participate.