
Australia Slips in Global Digital Rankings - ALSO, Europe’s AI Act & Digital Rewrite!

Do You Want to Train AI Lawfully? - Navigating Data, Privacy & Purpose - The AI Bulletin Team!

📖 GOVERNANCE

1) Europe’s Digital Rewrite - What’s Coming & Why It Matters


TL;DR 

The European Commission is gearing up to roll out a major “Digital Package” due on 19 November 2025 that aims to simplify, consolidate and reform the bloc’s tangled digital rule-book. Among the changes:

A “Digital Omnibus” covering the General Data Protection Regulation (GDPR), ePrivacy Directive and Data Act.

Efforts to ease compliance burdens - e.g., raising thresholds for breach reporting, combining portals, standardising DPIAs.

Clarification that training AI models on personal data may be covered under “legitimate interest,” with rules around special-category data relaxed in certain contexts.

🎯 7 Quick Takeaways

  1. A single incident-reporting portal across GDPR, NIS2, DORA and other digital laws.

  2. Breach-reporting threshold raised; 96-hour window proposed to replace current 72 hours.

  3. For cookies/trackers, consent may no longer be mandatory if processing under lawful basis.

  4. Uniform DPIA templates and EU-wide lists to standardise when impact assessments are required.

  5. Training AI models on personal data may fall under “legitimate interest” rather than requiring consent.

  6. Data sharing, cloud switching and intermediary licensing regimes to be revised in Data Act updates.

  7. Reforms aim to simplify compliance for SMEs while retaining core protections.

💡 How Could This Help Me?

If you're focused on AI, data or digital operations across Europe, this signals a shift from heavy regulation to calibrated enablement. You’ll want to:

  • Map how your AI/data operations align with both existing rules and these upcoming reforms.

  • Anticipate raised reporting thresholds, combined portals and more uniform DPIA requirements - great for streamlining, but don’t relax oversight.

  • Treat this as your cue to build a dual-track governance model: maintain compliance under current laws and adapt for the simplifications ahead.

In short: don’t assume “lighter rules” means “less governance” - it means smarter governance!

📖 GOVERNANCE

2) Australia Slips in Global Digital Rankings - AI Policy Holds the Mirror


TL;DR

The International Institute for Management Development (IMD)’s 2025 global digital competitiveness index sees Australia drop to 23rd, down from 15th last year - its worst result since 2018. Key drivers of the slide: the regulatory framework indicator tanked from 5th to 18th globally, driven by Australia’s fall to 34th in the “AI policies passed into law” metric. Business agility and talent factors also performed poorly: Australia ranked 65th of 69 for business agility, and 60th for employee training. Australia has strong research and policy-drafting credentials, but its translation into legally effective AI regulation, investment and business uptake is stalling - and that’s showing up in the numbers.

🎯 7 Key Takeaways

  1. Australia slips to 23rd in IMD digital competitiveness rankings.

  2. Regulatory framework rank drops from 5th to 18th globally.

  3. AI-policy metric falls sharply, now 34th in “AI laws passed.”

  4. Business agility alarmingly low: 65th of 69 countries.

  5. Employee training rank: 60th globally - skills gap hurting readiness.

  6. Technology investment ranking slides from 19th to 37th.

  7. Research strong, but commercialisation and regulation lag in tandem.

💡 How Could This Help Me?

If you're leading AI adoption or digital strategy in your organisation, this is your red-flag alert. Australia’s slide shows that research capability + policy announcements aren’t enough without regulatory clarity, investment follow-through and business agility.

  • Map regulatory readiness now: missing or delayed laws = investment-drag.

  • Boost internal agility: your business must be ready to act when policy and tools align.

  • Invest in talent and training: if the workforce isn’t ready, you’ll be behind despite tools.

  • Strengthen commercial pathways: research output must convert to business value, not just papers.

In essence: the governance equation now looks like Policy + Investment + Agility = Competitiveness. Australia’s drop tells us what happens when one leg falters.

📖 GOVERNANCE

3) How to Train AI Lawfully? - Navigating Data, Privacy & Purpose


TL;DR

When organisations train AI models, the question isn’t simply “Can we?”- it’s “Under what legal basis and with what safeguards?” The International Association of Privacy Professionals (IAPP) breaks down key legal issues, especially under the EU General Data Protection Regulation (GDPR). For example:

  • Using first-party or customer data is easier to reconcile with GDPR than large-scale web scraping of public data sets.

  • Web scraping raises issues: inability to obtain consent, potential inclusion of “special-category” or biometric data, and lack of transparency to data subjects.

  • Legal bases like legitimate interest may apply, but organisations must carefully assess rights/risks, implement mitigations, and demonstrate transparency.

  • Where data belongs to special categories (political beliefs, biometric data, etc.), the thresholds and permissible conditions tighten significantly.

  • Training an AI system is not in itself the “interest” - the interest is the purpose behind the training (e.g., fraud detection, medical diagnostic aid).

🎯 7 Key Takeaways

  1. First-party or contract-obtained data offers clearer compliance than mass web-scraping.

  2. Legitimate interest can support AI training, but requires risk/rights balancing and safeguards.

  3. Web-scraped data often includes special categories and lacks meaningful consent - therefore high-risk.

  4. Purpose matters: training for a defined use-case is more defensible than “just training a model”.

  5. Transparency to data subjects remains a cornerstone: informing them and enabling objection is key.

  6. Special-category data invites stricter controls, even for high-value AI.

  7. Compliance must shift from checkboxes to data-design, purpose-definition and governance by architecture.

💡 How Could This Help Me?

If you’re leading AI development or strategy, this article is a governance compass:

  • Clarify your purpose: define why you train the model, not just what you train.

  • Select your data path wisely: favour internal/private data flows over risky public scraping.

  • Build transparent processes: ensure data subjects know, can object, and are treated fairly.

  • Embed safeguards: filter out special-category data, maintain deletion/unlearning policies, document decisions.

  • Govern by design: architecture, pipeline, audit logs matter more than ticking a “we complied” box.

Lawful AI training isn’t a one-off legal review - it’s baked into your data, your architecture and your purpose.

📖 NEWS

4) NSW’s Office for AI Takes Shape: Leadership, Staffing & Strategic Mission

In Australia, the NSW Government’s new Office for AI has appointed Daniel Roelink as its inaugural Director. He joins from his previous role leading Digital NSW’s digital strategy, investment and architecture. Initial staffing plans include 13 additional roles to support the office’s mandate: advising state departments, coordinating responsible AI deployment, and implementing governance frameworks. The office operates under Digital NSW, within the Department of Customer Service, for an initial two-year trial. It will not receive separate funding beyond the existing Digital NSW budget. Priority tasks: setting AI guardrails, building capability across agencies, and supporting the already-established AI Review Committee, which reviews high-risk AI projects.

🎯 7 Key Takeaways

  1. Daniel Roelink appointed Director to shape NSW’s AI governance ecosystem.

  2. Office for AI recruiting 13 key staff to support state-wide AI deployment.

  3. Operating under Digital NSW budget, no dedicated funding allocation reported.

  4. Two-year pilot phase emphasises capability-building and agile governance.

  5. Role centred on standards, advice and coordination - not direct deployment.

  6. Office supports independent AI Review Committee for high-risk AI oversight.

  7. Signals NSW’s transition from policy draft to operational AI governance structure.

💡 How Could This Help Me?

If you’re leading AI or governance in your organisation, NSW’s model is a useful template:

  • Appoint a dedicated lead early to drive strategy, not just serve as a figurehead.

  • Staff the governance function adequately: expert roles in architecture, ethics and change management matter.

  • Align budget and structure - governance functions shouldn’t be “under-resourced” or hidden.

  • Use a trial window (two-year phase) to refine frameworks, test real-world use-cases, and build momentum before full-scale rollout.

  • Position the governance office as advisor and coordinator, not necessarily executor - this keeps operational agility.

The governance engine isn’t just about frameworks - it’s about people, structure, and momentum.

KeyTerms.pdf - Get your Copy of Key Terms for AI Governance (576.32 KB)

Brought to you by Discidium—your trusted partner in AI Governance and Compliance.
