CBA’s AI Shake-Up: 45 Jobs Out, Bots In, and the Union Is Not Happy
The Finance Sector Union isn’t amused, demanding that CBA upskill rather than offload staff in the AI era. Meanwhile, Australia’s peak union body, the ACTU, is demanding mandatory AI Implementation Agreements - The AI Bulletin Team!

📖 GOVERNANCE
1) CBA’s AI Shake-Up: 45 Jobs Out, Bots In

TL;DR
Commonwealth Bank of Australia (CBA) has confirmed it is cutting 45 call centre jobs after deploying an AI-powered voice bot to handle routine customer enquiries - the first time a major Australian bank has openly attributed job losses to AI. The bot handles identification, balance checks, and triage, reducing weekly call volumes by around 2,000. CBA insists it is offering redeployment, reskilling, and new roles, and denies claims of offshoring. The move sparked outrage from the Finance Sector Union and renewed calls for regulation and ethical, AI-driven workforce transformation.
Takeaways
AI voice bot triages inbound calls, cutting manual call volumes by ~2,000/week.
CBA confirmed 45 direct banking roles impacted by AI deployment.
The union claims total job cuts may reach 90, including messaging staff.
The union demands retraining and redeployment instead of dismissals.
CBA assures support - career transition services, care, and access to open vacancies.
Raises broader debate on ethical AI adoption, worker protection and regulation.
How Could This Help Me?
Imagine deploying AI voice bots to handle routine customer questions, cutting call volume and costs - while redeploying your people into more complex, value-added work. But before you issue pink slips, build a transition framework: reskilling programs, internal vacancies, union engagement, and compliance guardrails. That way, you balance productivity gains with social licence. Your executives get improved efficiency and happier customers, while maintaining employee morale, ethical credibility and regulatory goodwill. It’s AI with a human handshake, not a cold shoulder.
📖 GOVERNANCE
2) Australia’s peak union body demands mandatory AI Implementation Agreements
TL;DR
Australia’s peak union body, the ACTU, is demanding mandatory AI Implementation Agreements before employers roll out any AI at work. These binding agreements would ensure staff involvement upfront, with guarantees for job security, privacy, retraining, and transparency around AI use and data collection. The ACTU is also calling for a dedicated National AI Authority and an Australian AI Act, arguing that organisations without such frameworks should lose access to government contracts and R&D incentives. The push is timed to influence the federal Economic Reform Roundtable later in 2025, which will address AI, productivity and worker rights.
Takeaways
Employers must consult employees before deploying workplace AI.
Agreements cover retraining, job security, and data/privacy protections.
Backed by calls for a National AI Authority and AI Act.
Non-compliant firms risk losing government contracts and R&D incentives.
Productivity gains depend on workers being well-trained and involved.
Employers are pushing back, warning the new rules may hamper flexibility and innovation.
How Could This Help Me?
Union-backed AI agreements could bring clarity and reduce conflict by setting expectations early, especially around jobs, privacy, and training. For some organisations, this may streamline AI adoption and boost employee trust. However, it could also introduce regulatory overhead, slow innovation cycles, or limit flexibility in fast-moving tech environments. Leaders should weigh the trade-offs: formal frameworks may offer stability, but might not suit all sectors or AI maturity levels. The key is aligning AI deployment with both strategic goals and workforce dynamics.
📖 GOVERNANCE
3) South Australia launches the nation’s first government Office for AI

TL;DR
South Australia (SA) has launched the nation’s first government Office for AI, backed by A$28 million over four years to embed ethical, impactful AI into public sector operations. Structured as a five-person team led by a Director of AI, the office will drive ‘Proof of Value’ pilots across healthcare, policing, social services and administration. The goal: cut costs, simplify workflows, and free up frontline staff for meaningful work - all within a robust governance framework. SA aims to lead Australia in responsible AI adoption.
Takeaways
A$28 million in funding creates Australia’s first state government Office for AI.
Five full-time staff, plus a new Director of AI, will steer strategy and policy.
Focused on high-value pilots in healthcare, policing, social services.
Proof-of-value trials aim to reduce administrative burden and costs.
Governance includes ethical guardrails, transparency and responsible frameworks.
SA doubles down on its AI leadership, building on the legacy of its AIML institute.
How Could This Help Me?
If you're executive-level and eyeing public-sector-style AI programs, SA’s approach offers a blueprint: start small with pilot use-cases, fund them via a central office, embed ethics from the get-go, and measure real service improvements. It mixes governance with experimentation, tight oversight with agency innovation, and strategic ROI tracking. The model is scalable: build a dedicated team, follow proof-of-value logic, and set guardrails early, and you can deliver real impact with oversight - and satisfy even the toughest auditors.
📖 NEWS
4) AI “hallucinations”: when LLMs generate fluent but fabricated content

TL;DR
The IAPP warns that AI “hallucinations” - LLMs generating fluent but fabricated content - pose serious technical and governance risks. These include the spread of misinformation, legal liability, financial losses and reputational damage, especially in sensitive sectors like healthcare and finance. Hallucinations are largely unavoidable given how LLMs work, but they can be mitigated through grounding techniques like retrieval-augmented generation, self-check scaffolding, human oversight, and prompting models to express uncertainty.
Takeaways
Hallucinations are false yet fluent outputs that pose high risk in regulated domains.
These AI outputs erode trust and may spread misinformation broadly.
Hallucination is not a simple bug; it is latent in how LLMs are designed.
Mitigation strategies include retrieval-augmented generation (RAG).
Chain-of-thought prompts, self-verification and uncertainty flags help.
Governance needs auditing, testing pipelines, transparency and policy controls.
How Could This Help Me?
Think of hallucination risk as the phantom you didn’t invite: unmanaged, it burns trust and triggers compliance nightmares. But tackle it head-on - using RAG, chain-of-thought prompting, uncertainty flags and human-in-the-loop review - and you turn hallucinations into manageable quirks. Embed audit trails, model evaluations, and governance policies (accuracy audits, transparency, red teaming), and you build AI systems that the C-suite and regulators can actually trust. Result: innovation without the scandal, and outputs you can stand behind.
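To make that concrete, here is a minimal sketch of a grounded Q&A flow combining retrieval, a crude self-check, and an uncertainty flag that routes to a human when grounding fails. It is illustrative only: `call_llm`, the keyword retriever and the tiny knowledge base are hypothetical stand-ins, not any vendor’s actual API.

```python
# Minimal sketch: ground answers in retrieved passages, self-check the draft,
# and flag uncertainty for human escalation. All names here are placeholders.

from dataclasses import dataclass

# Toy knowledge base standing in for a real document store.
KNOWLEDGE_BASE = {
    "refund policy": "Refunds are processed within 10 business days.",
    "card replacement": "Replacement cards arrive within 5-7 business days.",
}

def call_llm(prompt: str) -> str:
    """Hypothetical model call - swap in your provider's client here."""
    return "Refunds are processed within 10 business days. [source: refund policy]"

def retrieve(question: str) -> list[str]:
    """Toy retrieval: return passages whose key appears in the question."""
    return [text for key, text in KNOWLEDGE_BASE.items() if key in question.lower()]

@dataclass
class Answer:
    text: str
    grounded: bool      # does the draft reuse retrieved wording?
    needs_human: bool   # escalate when grounding cannot be confirmed

def answer_question(question: str) -> Answer:
    passages = retrieve(question)
    if not passages:
        # No grounding available: flag uncertainty instead of letting the model guess.
        return Answer("I don't have a source for that.", grounded=False, needs_human=True)

    context = "\n".join(passages)
    draft = call_llm(
        "Answer ONLY from the context below and cite the passage you used.\n"
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # Crude self-check: require the draft to echo retrieved wording.
    grounded = any(p.split(".")[0].lower() in draft.lower() for p in passages)
    return Answer(draft, grounded=grounded, needs_human=not grounded)

if __name__ == "__main__":
    result = answer_question("What is your refund policy?")
    print(result.text, "| grounded:", result.grounded, "| escalate:", result.needs_human)
```

The pattern worth noting is the refusal path: when retrieval returns nothing, or the self-check cannot confirm grounding, the pipeline flags uncertainty and hands off to a human rather than letting the model improvise.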
Brought to you by Discidium - your trusted partner in AI Governance and Compliance.