Federal policy may shift who regulates AI development—but it doesn’t change your responsibility to govern its use. State and local agencies still own procurement, data security, and the fast-growing shadow AI problem inside their organizations.

On March 20, 2026, the Trump Administration released a new National AI Legislative Framework. While the announcement focuses on national security and the global AI race, there is a very specific tug-of-war happening between federal authority and state-level regulation that you need to be aware of.
Here is the breakdown of what is happening, what is at risk, and what remains under your control.
The federal government is attempting to split AI policy into two distinct lanes: a federal lane governing how AI is developed, and a state-and-local lane governing how it is procured, secured, and used.
The takeaway: You are still 100% responsible for how your employees use AI and how your agency secures its data.
It is important to separate a wishlist from enforceable law.
Legislative Framework (Status: Recommendation)
This is not law. It's a blueprint signaling where Congress may go, but it has no immediate legal effect.

DOJ Task Force (Status: Active now)
The Department of Justice can already challenge state AI laws in federal court, particularly those it considers overly restrictive.

BEAD Grant Funding (Status: Active risk)
Federal broadband funding may be used as leverage: states with aggressive AI regulations could see funding withheld.

State AI Laws, e.g., TRAIGA (Status: Enforceable)
Existing laws in states like Texas, California, Colorado, and Utah remain fully in effect unless Congress acts.
While the lawyers fight over legislation, your biggest risk is likely already inside your building. Roughly 62% of the AI adoption we see in the organizations we work with is shadow AI: tools employees use without IT's knowledge. If you are only focusing on a single tool like Microsoft Copilot, you are missing the dozens of other applications your staff is using behind the scenes. You cannot manage or secure what you cannot see.
You don’t need to wait for federal clarity to start governing AI. Here is where to focus:
The Net-Net: The federal government is trying to make it easier for companies to build AI, but none of that removes your responsibility to use it safely.
Need help setting up a visible, defensible AI policy? Reach out to the Darwin team. We're here to help you navigate both the policy side and the governance side without the guesswork.