Fifty clashing regulatory experiments will cripple U.S. national security.
Given the generation-defining AI race moving at breakneck speed, one would think Congress might have had a productive thought about it by now. Unfortunately, that assumption would be a mistake. Instead, federal legislators have chosen mostly to ignore a crucial issue with deep ramifications for our nation. By the time they step up and see the bigger picture, it may be too late.
Recently on Truth Social, President Donald Trump slammed the “patchwork” of state regulations Congress has allowed to flourish. AI policy is being drafted not in Washington but in states like California. It’s being crafted not by sensible, informed actors but by out-of-touch lawmakers whom few people know, who won’t be held accountable, and whose motivation lies in appeasing their constituents rather than strengthening U.S. national security.
California, for example, modeled its recent AI legislation on the world’s first comprehensive AI law, which the E.U. adopted in June of last year. Creatively titled “The AI Act,” it’s predictably riddled with vague yet draconian rules, impact reviews, and registries. In some cases it imposes outright bans, such as on facial recognition algorithms (one of the few areas where Chinese AI solidly beats the U.S.).
This was, of course, to be expected. While the AI Act has affected U.S. firms, the worst of its influence has been concentrated within the E.U.’s own borders. Indeed, it’s gotten so bad that the European Commission has proposed a sweeping regulatory rollback, effectively announcing its failure to the world.
California Governor Gavin Newsom and Golden State legislators evidently took away a different lesson. Newsom signed the state’s “Transparency in Frontier Artificial Intelligence Act” into law in September. Among other things, it covers AI companies with over $500 million in gross annual revenue that develop “frontier AI models,” which consume large quantities of compute. Under the law, these companies must produce a plan outlining how they will follow international safety standards, and they must adhere to that plan for at least as long as it takes the California government to approve it.
To Newsom’s credit, the law is much tamer than it could have been. It came after he vetoed multiple stronger yet nonsensical proposals, and at first glance, it doesn’t seem to require anything too drastic from tech firms.
The problem, however, is not with any particular law but with the principle itself. A single state should not hold the balance of U.S. national security in the palm of its hand. What would have happened if a less cautious iteration of that bill had been signed into law? What will happen when New York or Maryland decides to pass a comprehensive AI bill? Colorado has already approved its own framework, which the state’s governor decried as overly complex.
While the U.S. bickers over allowing states to undermine our national interest, China doesn’t concern itself with such trivialities. The CCP can impose its vision on the whole country with ease. It boasts a far more robust energy infrastructure than the U.S., and it is attempting, with all the centralized economic force it can muster, to close the technological gap. Allowing the U.S. to be pulled apart by a vast apparatus of competing interests, whether states bent on regulating frontier AI innovation or a money-hungry chipmaking industry bidding to sell its most powerful tech to China, is a form of cultural suicide.
Why is the U.S. not treating AI with the seriousness it deserves? In response to Trump’s call to end this sorry state of affairs, Florida Governor Ron DeSantis wrote that barring states’ ability to regulate AI would be a “subsidy to Big Tech.” But this ignores the fact that Big Tech already has its fair share of influence over state regulations. Imagine if he said the same thing about weapons technology.
Republican governors and lawmakers seem more inclined to treat this as a states’ rights issue over a consumer product than to understand AI for what it really is: a key part of our country’s national security infrastructure.
Just consider that Anthropic recently caught a large-scale cyberattack, orchestrated from China using a jailbroken version of Claude Code, that attempted to steal information from around 30 companies. China, of course, would rather use its own models to conduct such an attack, eliminating the need for jailbreaking in the first place, but it can’t, at least not yet.
There’s no clean line separating these AI models from the military capabilities they confer. OpenAI, Anthropic, xAI, and Google are building the blueprint for what will inevitably be used in warfare.
Congress needs to realize that inaction is leaving our own AI development up to chance. It’s letting various actors, such as blue states and Big Tech, take the wheel and steer the country however they see fit. All the while, China is unified, coordinated, and champing at the bit to overtake U.S. dominance, restrained only by the thin leash of export controls.
In most areas, letting states experiment with policy makes the country more resilient and innovative. But when it comes to frontier AI, we cannot allow 50 clashing regulatory experiments to paralyze progress while an authoritarian rival mobilizes its full power. Whether we like it or not, this is a race—and it’s about time we act like we want to win.