AI Incident Monitor - Apr 2025 List
Top AI Regulatory Updates - Apple Settles $95 Million Siri Privacy Lawsuit

Editor’s Blurb 📢😲
Welcome to the April 2025 AI Incidents List - As we know, AI laws around the globe are getting their moment in the spotlight, and crafting smart policies takes more than a lucky guess - it needs facts, forward-thinking, and a global group hug 🤗. Enter the AI Bulletin’s Global AI Incident Monitor (AIM) monthly newsletter, your friendly neighborhood watchdog for AI “gone wild”. At the end of each month, AIM keeps tabs on global AI mishaps and hazards 🤭, serving up juicy insights for company executives, policymakers, tech wizards, and anyone else who’s interested. Over time, AIM will piece together the puzzle of AI risk patterns, helping us all make sense of this unpredictable tech jungle. Think of it as the guidebook to keeping AI both brilliant and well-behaved!
In This Issue: April 25 - Key AI Breaches
Apple Settles $95 Million Siri Privacy Lawsuit
AI-Gen Clone of Exante Brokerage Used to Defraud Investor via JPMorgan Account
Italy Issued First GenAI Fine of €15 Million Alleging GDPR Violations
AI-Driven Rent Price Fixing Sparks DOJ Lawsuit Against Major Landlords
Anthropic Reports Claude Misuse Across Various LLM Operations
Scammers Reportedly Use AI Tools to Impersonate Students and Obtain Federal Aid

Total Number of AI Incidents by Country
AI BREACHES (1)
1 - Apple Settles $95 Million Siri Privacy Lawsuit
The Bulletin
Apple has agreed to a whopping $95 million settlement after a class-action lawsuit accused Siri of eavesdropping on private conversations, without a formal invite. The suit claimed Siri had a bad habit of popping in unannounced, picking up sensitive chatter, and allegedly cozying up with advertisers. Apple, while footing the bill, maintains it didn’t do anything wrong - just a case of “Sorry, I didn’t quite catch that… but maybe I did.”
Potential AI Impact!!
✔️ It affects the AI Principles of Privacy & data governance, Transparency & explainability, and Accountability, and affects Consumers
✔️ The Severity classification for AI Breach 1 is Non-physical harm
💁 Why is it a Breach?
The incident involves Apple's AI system, Siri, which allegedly recorded private conversations without user consent, potentially violating human rights and privacy laws.
AI BREACHES (2)
2 - AI-Gen Clone of Exante Brokerage Used to Defraud Investor via JPMorgan Account
The Bulletin
Scammers pulled off a high-tech heist using AI to impersonate the broker Exante, right down to the trading interface. Armed with deepfakes, forged documents, and a suspiciously slick website, they lured at least one U.S. victim into handing over funds through a JPMorgan Chase account. The twist? Exante doesn’t even operate in the U.S. They've since confirmed the fraud and alerted multiple U.S. agencies. Moral of the story: just because it looks like your broker doesn't mean it's not a deepfake in disguise.
Potential AI Impact!!
✔️ It affects the AI Principles of Privacy & data governance, Transparency & explainability, and Accountability, and affects Consumers
✔️ The Severity classification for AI Breach 2 is Non-physical harm
💁 Why is it a Breach?
While generative AI–produced identity documents were not explicitly confirmed in this case, their use is consistent with reported capabilities and plausibly supported the creation of financial infrastructure. This incident report treats them as a likely, but not definitively verified, component of the broader scam operation.
AI BREACHES (3)
3 - Italy Issued First GenAI Fine of €15 Million Alleging GDPR Violations
The Bulletin
Italy’s data watchdog just handed OpenAI a €15 million fine for ChatGPT’s less-than-GDPR-friendly behavior. The violations? Allegedly training AI on personal data without proper permission and not doing enough to keep underage users out of the algorithmic loop, leading to some adult-level content slipping through. OpenAI says, “Not so fast!” and plans to appeal. In the meantime, they’ve also been tasked with launching a public awareness campaign - because nothing says lesson learned like explaining data privacy with a chatbot.
Potential AI Impact!!
✔️ It affects the AI Principles of Privacy & data governance, Transparency & explainability, and Accountability, and affects Consumers
✔️ The Severity classification for AI Breach 3 is Non-physical harm
💁 Why is it a Breach?
The incident involves OpenAI's ChatGPT, an AI system, which was found to have violated data protection laws in Italy by processing personal data without a legal basis, thus breaching obligations under applicable law intended to protect fundamental rights.

Summary of AI Incidents & Impact - Classification by Sector of Deployment
AI BREACHES (4)
4 - AI-Driven Rent Price Fixing Sparks DOJ Lawsuit Against Major Landlords
The Bulletin
The U.S. Justice Department, backed by a squad of state attorneys, has filed suit against RealPage and six major landlords over allegations of using AI to play landlord monopoly. The claim? Their rent-setting algorithm allegedly helped coordinate price hikes across rental markets, like a digital wink-and-nod, to the detriment of millions of renters already struggling with housing affordability. The case raises serious questions: When does “smart pricing” become too smart for its own good?
Potential AI Impact!!
✔️ It affects the AI Principles of Fairness, Transparency & explainability, and Accountability, and affects the Public Interest
✔️ The Severity classification for AI Breach 4 is Non-physical harm
💁 Why is it a Breach?
The AI system used by RealPage is allegedly facilitating price coordination among landlords, leading to higher rents. This could be considered a violation of human rights or a breach of obligations under applicable law intended to protect fundamental rights, as it affects housing affordability and access.

Incident Classification by Location
AI BREACHES (5)
5 - Anthropic Reports Claude Misuse Across Various LLM Operations
The Bulletin
In April 2025, Anthropic spilled the digital tea on some shady uses of its Claude LLM, detected just a month earlier. Highlights (or lowlights?) include: a full-blown influence-as-a-service gig running 100+ social media bots, credential testing for spying on security cameras, a sketchy job scam targeting Eastern Europe, and a rookie hacker getting a little too good at coding malware. Anthropic swiftly banned the offenders, but couldn’t confirm how far the damage went. Apparently, even Claude can attract the wrong crowd.
Potential AI Impact!!
✔️ It affects the AI Principles of Fairness, Transparency & explainability, and Accountability, and affects the Public Interest
✔️ The Severity classification for AI Breach 5 is Non-physical harm
💁 Why is it a Breach?
Claude AI, developed by Anthropic, has been exploited by malicious actors in a range of adversarial operations, most notably a financially motivated "influence-as-a-service" campaign. Anthropic's report details separate case studies from March 2025, but the findings were published on 04/23/2025, which is the date assigned to this incident ID.
AI BREACHES (6)
6 - Scammers Reportedly Use AI Tools to Impersonate Students & Obtain Fed Aid
The Bulletin
California community colleges have been hit with a wave of fraud - an estimated 34% of all applications between 2021 and 2025 were fake. Scammers reportedly used generative AI, including ChatGPT, to craft convincing identity responses, sneak past verification, and pocket student aid dollars. With over $13 million lost in just the past year, the scheme has clogged up admin systems, disrupted instruction, and even edged out real students. Looks like some bots made the dean’s list… for all the wrong reasons.
Potential AI Impact!!
✔️ It affects the AI Principles of Fairness, Transparency & explainability, and Accountability, and affects the Public Interest
✔️ The Severity classification for AI Breach 6 is Non-physical harm
💁 Why is it a Breach?
Fake AI-generated students are reportedly enrolling in online college classes. While no specific link has been established, the incidents appear to reflect similar patterns of AI-assisted enrollment fraud. They are treated as distinct due to the lack of direct evidence connecting them.