The AI Policy Mess: Federal vs. State, US vs. EU, and Nobody Agrees
If you’re trying to keep up with AI policy in 2026, I feel your pain. It’s a mess. The US federal government is fighting with states. The EU is fighting with tech companies. And everyone is fighting about what “responsible AI” even means.
Let me try to make sense of it.
The US: Federal Government vs. States
The biggest AI policy story in the US right now isn’t a new law — it’s a fight about who gets to make the laws.
In December 2025, President Trump signed Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence.” The headline goal: prevent states from creating a patchwork of AI regulations that could slow down American AI dominance.
The order required the Secretary of Commerce to publish a report by March 11, 2026, listing “onerous” state AI laws. And by “onerous,” the administration means pretty much any state law that imposes significant compliance requirements on AI companies.
The targets are obvious:
- Colorado’s AI Act (effective June 30, 2026) — requires impact assessments for high-risk AI systems and bans algorithmic discrimination
- California’s Transparency in Frontier AI Act (effective January 1, 2026) — requires disclosure of training data and model capabilities for frontier AI
- Various state laws on AI in hiring, insurance, and healthcare
The administration’s argument: these state laws create a compliance nightmare for AI companies and threaten America’s competitive position. The states’ argument: someone has to protect citizens from AI harms, and Congress isn’t doing it.
Both sides have a point. And that’s what makes this so messy.
The Federal Vacuum
Here’s the uncomfortable truth: Congress hasn’t passed a comprehensive federal AI law. Not because they don’t want to — there have been dozens of proposed bills. But AI regulation is politically complicated.
Republicans generally want less regulation to promote innovation. Democrats generally want more regulation to prevent harm. And the AI industry is spending enormous amounts on lobbying both sides.
The result: executive orders that can be reversed by the next president, state laws that may or may not survive federal preemption challenges, and a lot of uncertainty for everyone.
Meanwhile, the EU AI Act is already being enforced. China has its own AI regulations. And US companies operating globally have to comply with all of them regardless of what happens domestically.
The Global Landscape in 60 Seconds
Here’s where every major jurisdiction stands as of March 2026:
- EU: AI Act in phased enforcement. Banned practices already illegal. High-risk requirements coming August 2026. Fines up to €35M or 7% of global revenue.
- US: No federal AI law. Executive order trying to preempt state laws. States pushing ahead anyway. Complete uncertainty about what sticks.
- UK: Sector-specific approach. AI Bill in development. Principles-based but not legally binding (yet).
- Japan: AI Promotion Act. Innovation-first, voluntary guidelines. PM-chaired AI Strategy Headquarters.
- China: Multiple AI regulations already in effect. Algorithmic recommendation rules, deepfake rules, generative AI rules. More centralized and prescriptive than any Western approach.
- Canada: AIDA (Artificial Intelligence and Data Act) still working through Parliament. Would create a federal AI framework with penalties.
What This Means If You’re Building AI
Practical advice for navigating this chaos:
1. Comply with the strictest rules that apply to you. If you serve EU customers, the AI Act is your baseline. If you operate in Colorado, their AI Act applies regardless of what the federal government says (at least until a court says otherwise).
2. Document everything. Every jurisdiction is moving toward requiring more transparency about AI systems. If you’re already documenting your training data, model capabilities, and risk assessments, you’re ahead of the game no matter what rules end up applying.
3. Watch the Colorado AI Act. It takes effect June 30, 2026, and it’s the most comprehensive state AI law in the US. How it’s enforced (or preempted) will set the template for everything that follows.
4. Don’t bet on the federal government solving this. Even if the executive order succeeds in blocking some state laws, it doesn’t create a federal framework. You’ll still be operating in a regulatory vacuum at the federal level.
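If you want a concrete starting point for point 2, here’s a minimal sketch of what an internal AI-system record might look like. The field names and the example system are hypothetical — no jurisdiction mandates this exact schema — but keeping something like this per system covers the common denominators (training data provenance, capabilities, limitations, risk assessments) that most regimes ask about:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemRecord:
    """Hypothetical compliance record for one AI system.
    Field names are illustrative, not any statute's required schema."""
    system_name: str
    intended_use: str
    training_data_sources: list[str] = field(default_factory=list)
    known_capabilities: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    risk_assessments: list[dict] = field(default_factory=list)
    jurisdictions_served: list[str] = field(default_factory=list)

    def add_risk_assessment(self, date: str, reviewer: str, summary: str) -> None:
        # Append-only: keep prior assessments so you can show a history,
        # not just the latest snapshot.
        self.risk_assessments.append(
            {"date": date, "reviewer": reviewer, "summary": summary}
        )

    def to_json(self) -> str:
        # Serialize for audits, internal review, or disclosure requests.
        return json.dumps(asdict(self), indent=2)

# Example: a hypothetical hiring tool serving Colorado and EU users.
record = AISystemRecord(
    system_name="resume-screener-v2",
    intended_use="Rank job applications for human review",
    training_data_sources=["internal hiring data 2019-2024"],
    known_capabilities=["keyword and skills matching"],
    known_limitations=["not validated for non-English resumes"],
    jurisdictions_served=["US-CO", "EU"],
)
record.add_risk_assessment(
    "2026-03-01", "compliance team",
    "Checked for disparate impact across protected classes",
)
print(record.to_json())
```

The point isn’t the format — a spreadsheet works too. It’s that the record exists before a regulator, customer, or court asks for it.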
My Prediction
We’ll eventually get a federal AI law in the US. But not in 2026, probably not in 2027, and it’ll be weaker than what the EU has. In the meantime, the real regulation will happen at the state level and through existing federal agencies (FTC for consumer protection, EEOC for employment discrimination, FDA for medical devices).
The companies that treat compliance as a feature rather than a burden will win. Not because regulators will reward them — but because customers increasingly care about how AI affects them, and “we take this seriously” is becoming a competitive advantage.
The policy mess will sort itself out eventually. Your job is to build AI that you’d be comfortable defending in front of any regulator, in any jurisdiction. If you can do that, the specific rules matter less than you think.
🕒 Originally published: March 12, 2026