US Executive Order Preempts State AI Laws - ALSO The Geopolitical Chip - US/China Governance & The H200
Pax Silica Declaration Enacted at the Pax Silica Summit - PLUS: Regulatory Sandboxes to Standards - Singapore MAS Risk Guidelines - The AI Bulletin Team!

📖 GOVERNANCE
1) The Federal Pivot: US Executive Order Preempts State AI Laws

TL;DR
On December 11, 2025, President Trump signed the "Ensuring a National Policy Framework for Artificial Intelligence" Executive Order, aggressively centralizing AI governance. The order establishes a federal policy to preempt "onerous" state-level AI regulations, specifically targeting laws in states like California and Colorado that mandate safety testing or discrimination assessments - arguing they stifle innovation and threaten US global dominance. It directs the Department of Justice to establish an "AI Litigation Task Force" to challenge conflicting state laws and empowers the Commerce Department to withhold federal funding (such as broadband grants) from states that persist with restrictive regimes. While it carves out exceptions for child safety and state procurement, the order signals a definitive shift toward a "minimal burden" federal standard designed to accelerate AI commercialization.
🎯 7 Quick Takeaways
Federal Preemption: Explicitly aims to override state-level AI safety and discrimination laws deemed "onerous."
Litigation Task Force: DOJ directed to actively sue states to invalidate conflicting AI regulations.
Funding Leverage: Commerce Department may withhold federal broadband (BEAD) grants from non-compliant states.
Minimal Burden Doctrine: Prioritizes speed and global dominance over precautionary safety testing.
Exceptions Granted: Child safety, state procurement, and physical infrastructure remain under state purview.
Constitutional Conflict: Sets the stage for major legal battles over states' rights and commerce.
Immediate Uncertainty: Creates a volatile compliance environment while courts adjudicate the order's legality.
💡 How Could This Help Me?
For enterprises operating across multiple US jurisdictions, this EO offers the promise of a unified compliance landscape, potentially eliminating the need to navigate 50 disparate regulatory regimes. It signals a reduction in pre-deployment compliance costs, as the threat of state-level "safety testing" mandates recedes. However, it introduces immediate legal volatility. You must prepare for a period of constitutional litigation between states and the federal government. Your compliance strategy should remain agile; while federal deregulation is the goal, the immediate reality is a tug-of-war that may leave compliance obligations in flux for months.
📖 GOVERNANCE
2) Pax Silica Declaration at the Pax Silica Summit

TL;DR
Australia has joined a U.S.-led initiative by signing the Pax Silica Declaration at the Pax Silica Summit in Washington D.C. on 12 December 2025. This international agreement commits Australia and six other nations to strengthen technology supply chain security, especially for critical minerals, AI and emerging technologies that are essential to economic resilience and future prosperity. The pact reinforces collaboration with key partners to build secure infrastructure, diversify supply sources, and support a competitive, safe and inclusive digital ecosystem.
🎯 7 Key Takeaways
Australia signed the Pax Silica Declaration on 12 Dec 2025.
The declaration focuses on securing global technology supply chains.
It was agreed at the Pax Silica Summit in Washington D.C.
Seven countries signed, including the U.S., UK, Japan and Korea.
It strengthens collaboration on critical minerals and AI tech.
Aims to foster a competitive, safe and inclusive digital ecosystem.
Encourages diversification and resilience in tech supply chains.
💡 How Could This Help Me?
If you’re a business leader, policymaker or technologist navigating the digital economy, this initiative matters. Strengthening supply chain resilience for critical tech components - like semiconductors, AI infrastructure and minerals - means fewer disruptions, more secure access to essential resources and greater investment certainty. It also signals where government policy and international cooperation are heading: toward diversified partnerships and ecosystem security. For innovators and investors, aligning with these priorities could open strategic opportunities in supply-chain-linked industries and future-focused technologies.
📖 GOVERNANCE
3) Regulatory Sandboxes to Standards: Singapore MAS Risk Guidelines

TL;DR
The Monetary Authority of Singapore (MAS) has released a consultation paper on "AI Risk Management Guidelines" for financial institutions, marking a shift from the high-level FEAT principles (Fairness, Ethics, Accountability, Transparency) to detailed, prescriptive regulations. The guidelines require banks to maintain a comprehensive inventory of all AI use cases, conduct rigorous "risk materiality" assessments, and implement specific controls for data quality, fairness, and explainability tailored to the risk level. This move transitions Singapore from a "sandbox" environment to a supervised regulatory regime, setting a benchmark for responsible AI in the Asian financial sector.
🎯 7 Key Takeaways
From Principles to Rules: Shifts from voluntary FEAT principles to prescriptive risk guidelines.
Materiality Assessment: Mandates rigorous assessment of "risk materiality" for every AI model.
Mandatory Inventory: Banks must maintain a comprehensive register of all AI use cases.
Board Accountability: Explicitly assigns AI oversight responsibility to the Board and Senior Management.
Third-Party Control: Requires risk-proportionate controls for external AI vendors.
Human Oversight: Mandates "human-in-the-loop" for high-impact decisions.
Transition Period: Proposes a 12-month window for full compliance after guidelines are issued.
💡 How Could This Help Me?
For financial institutions operating in Asia, this is the new playbook. Even if you are not in Singapore, MAS guidelines often serve as a template for other APAC regulators. The requirement for a "use case inventory" is a practical starting point for any governance program: you cannot govern what you do not track. By adopting these guidelines now - specifically the risk materiality assessment methodology - you future-proof your compliance strategy against the inevitable global convergence of financial AI regulation.
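To make the "use case inventory" and "risk materiality" ideas concrete, here is a minimal Python sketch of what one inventory record and a toy materiality check could look like. The field names, risk tiers, and escalation rule are illustrative assumptions made for this newsletter, not the schema or methodology MAS prescribes.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskMateriality(Enum):
    """Illustrative risk tiers; the actual MAS taxonomy may differ."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseCase:
    """One entry in an AI use case inventory (hypothetical schema)."""
    use_case_id: str
    description: str
    business_owner: str              # accountable senior manager
    model_type: str                  # e.g. "credit scoring", "chatbot"
    third_party_vendor: str | None   # external provider, if any
    decisions_affect_customers: bool
    human_in_the_loop: bool
    last_reviewed: date
    materiality: RiskMateriality = RiskMateriality.LOW


def assess_materiality(uc: AIUseCase) -> RiskMateriality:
    """Toy heuristic: customer-impacting, fully automated use cases are
    high risk; customer-impacting but human-reviewed ones are medium;
    everything else is low."""
    if uc.decisions_affect_customers and not uc.human_in_the_loop:
        return RiskMateriality.HIGH
    if uc.decisions_affect_customers:
        return RiskMateriality.MEDIUM
    return RiskMateriality.LOW


# Register a use case, assess it, and flag anything needing escalation
inventory = [
    AIUseCase(
        use_case_id="UC-001",
        description="Retail credit limit recommendations",
        business_owner="Head of Retail Lending",
        model_type="credit scoring",
        third_party_vendor=None,
        decisions_affect_customers=True,
        human_in_the_loop=False,
        last_reviewed=date(2025, 12, 1),
    )
]
for uc in inventory:
    uc.materiality = assess_materiality(uc)
    if uc.materiality is RiskMateriality.HIGH:
        print(f"{uc.use_case_id}: escalate to Board/Senior Management")
```

Even a spreadsheet-level version of this record - an ID, an owner, a risk tier and a review date - gives the Board something it can actually oversee.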
📖 NEWS
4) The Geopolitical Chip - US/China Governance & The H200

TL;DR
Geopolitics continues to define the physical layer of AI governance. The US government recently cleared Nvidia to sell its advanced H200 chips to China, but under strict conditions including a 25% sales tax/monitoring fee. In response, Beijing is debating its own restrictions to prioritize domestic chips and reduce reliance on US technology. This dance of export controls and domestic subsidies highlights that "Governance" is not just about software safety; it is about controlling the silicon substrate of intelligence. China’s "Action Plan for Global AI Governance" continues to push for a state-centric model, contrasting with the US’s new deregulatory stance.
🎯 7 Key Takeaways
H200 Sales Cleared: US permits Nvidia to sell advanced chips to China with conditions.
25% Tax/Fee: US imposes steep monitoring fee/tax on these chip sales.
Beijing Pushback: China debating restrictions to favor domestic hardware (e.g., Huawei).
Silicon Sovereignty: "Compute" viewed as critical national infrastructure by all powers.
Canada's Move: Canada invests C$2B in "Sovereign AI Compute" to reduce US reliance.
Physical Governance: Regulation shifting from software rules to hardware supply chain control.
Three-Body Problem: Global governance split between US (Market), EU (Rights), China (State).
💡 How Could This Help Me?
If you rely on global supply chains for compute, expect volatility. The "balkanization" of the chip market means you may need to diversify your hardware providers. For multi-nationals, it underlines the need for "sovereign clouds" - running AI on local infrastructure in China, the EU, and the US to comply with diverging hardware and data residency laws. You cannot assume a uniform hardware stack globally, and your governance strategy must account for the "physical location of the inference".
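As a rough illustration of what accounting for the "physical location of the inference" can look like in practice, the Python sketch below routes each request to an in-region endpoint and fails closed rather than quietly falling back to another jurisdiction. The region names, endpoint URLs and hardware labels are hypothetical, and real residency and hardware-control obligations go well beyond request routing.

```python
from dataclasses import dataclass


@dataclass
class SovereignEndpoint:
    region: str        # jurisdiction the infrastructure sits in
    endpoint_url: str  # hypothetical in-region inference endpoint
    hardware: str      # local accelerator stack, which may differ by region


# Hypothetical sovereign-cloud endpoints; none of these URLs are real
ENDPOINTS = {
    "us": SovereignEndpoint("us", "https://inference.us.example.internal", "H200"),
    "eu": SovereignEndpoint("eu", "https://inference.eu.example.internal", "H200"),
    "cn": SovereignEndpoint("cn", "https://inference.cn.example.internal", "domestic accelerator"),
}


def route_inference(customer_jurisdiction: str) -> SovereignEndpoint:
    """Serve the request from infrastructure in the customer's jurisdiction.
    Raising on an unknown region is safer than silently routing data abroad."""
    try:
        return ENDPOINTS[customer_jurisdiction]
    except KeyError:
        raise ValueError(f"No sovereign endpoint configured for '{customer_jurisdiction}'")


# Example: an EU customer's request stays on EU infrastructure
print(route_inference("eu").endpoint_url)
```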
Brought to you by Discidium - your trusted partner in AI Governance and Compliance.
