The Gate Is the Prize
Every regulatory regime produces two things: rules and gatekeepers. The rules are debatable. The gatekeepers are the point.
The White House proposal to vet AI models before public release is being framed as a safety measure. That framing is doing a lot of work. Safety from what, exactly, measured by whom, using which methodology, appealed through which process? None of those questions have answers yet. But the entity that answers them will hold more structural power over the AI industry than any board, any investor, or arguably any CEO.
That entity would be the federal government, specifically the executive branch, specifically whatever administration happens to occupy the White House when the standards get written.
Regulatory Capture Runs at Machine Speed
The history of industry regulation in the United States follows a consistent arc. A sector grows large enough to attract political attention. Regulatory frameworks get proposed. The largest incumbents, the ones with Washington offices, lobbyists, and former officials on their payroll, engage early and shape the standards. By the time the rules are final, they function less as constraints on the industry than as barriers to everyone trying to enter it.
AI is moving through this arc in compressed time.
OpenAI, Google DeepMind, Anthropic, and Microsoft are already present in Washington. They have submitted testimony, published safety frameworks, and hired policy staff. They are not waiting to see what the standards will be. They are participating in writing them. Any pre-release vetting process that draws on existing safety literature will draw heavily on literature these same companies produced.
That is not a conspiracy. It is just how regulatory capture works. It does not require bad faith. It requires incumbency.
The Timing Is the Tell
This proposal surfaces at a specific moment. Several frontier AI labs are preparing to release models of substantial capability. The regulatory window, the period between 'AI is powerful' and 'AI is everywhere,' is closing. Once capable AI is widely deployed and integrated into critical infrastructure, pre-release vetting becomes logistically absurd. You cannot gate what is already inside the walls.
So the proposal arrives now, framed as precaution. But the mechanism being built, a federal checkpoint on AI deployment, would be extraordinarily durable. Checkpoints do not dissolve when the emergency passes. They expand.
Consider what the checkpoint actually controls: which models reach users, which companies can deploy at scale, which open-source projects can publish weights, which foreign models can enter the American market. A vetting regime is simultaneously a safety tool and a trade policy instrument and a domestic industrial policy mechanism. Those three functions will not stay separated.
Open Source Is the First Casualty
Pre-release vetting as currently imagined maps neatly onto the large-lab model of AI development, where a single identifiable entity produces a model and releases it. It maps very badly onto open-source AI development, where models are released as weights, modified by thousands of independent actors, and deployed without any central point of control.
Meta's Llama models, Mistral's releases, the entire Hugging Face ecosystem: none of these fit a review-then-release framework without restructuring how open-source AI works. That restructuring would, by coincidence, advantage the closed, proprietary systems built by well-capitalized labs with legal departments capable of navigating federal compliance.
The safety argument does not require this outcome. But the structural incentives point directly toward it.
Safety Is Real. The Frame Is Not.
None of this means AI safety concerns are fabricated. Frontier models, trained on vast data and capable of generating persuasive content, writing functional code, and advising on sensitive decisions, do carry genuine risks worth taking seriously. The argument for some form of evaluation before deployment is not irrational.
But safety is always implemented through a specific institutional architecture. That architecture distributes power before it distributes protection. The question worth asking about any proposed AI vetting regime is not whether safety matters. It is who holds the stamp, who appeals to whom when the stamp gets denied, and which models were already through the gate before the gate was built.
Whoever defines AI safety gets to decide which AI wins. Everything else is implementation detail.