Ted Cruz’s AI sandbox enables dangerous self-regulation, not innovation

Sen. Ted Cruz’s (R-Texas) newly introduced Strengthening Artificial Intelligence Normalization and Diffusion by Oversight and Experiment, or SANDBOX Act, is a liability shield for Big Tech. 

While the act purports to offer a balanced approach to AI governance by shedding “outdated federal rules” and permitting AI companies to “experiment, build, and compete,” what it really provides is a free pass for the AI industry to continue to discriminate, spread deepfakes, exacerbate mental health risks and surveil workers. 

The SANDBOX Act, if passed, will frustrate accountability and exempt tech companies from honoring existing and hard-won protections. And it will do so by putting the risks of AI systems on the shoulders of regular Americans.

The bill would create a “regulatory sandbox” housed in the White House’s Office of Science and Technology Policy. In practice, this means that by registering their products with the office and going through a review process, AI companies can have those products exempted from agency enforcement of existing regulations. 

Considering the diverse oversight responsibilities of agencies charged with protecting Americans — the Consumer Financial Protection Bureau, the Environmental Protection Agency, the Department of Housing and Urban Development, to name only a few; all of them authorities that can be sidelined under the bill — this amounts to an end-run around the expertise and enforcement powers built into our governance system.   

Under certain conditions, and as part of a broader set of actions to govern AI, regulatory sandboxes can support safe innovation. 

The best case scenario is this: In an environment where there is a commitment to uphold a basic set of protections when AI systems are in use, a sandbox could allow overseeing agencies to gain firsthand insights to identify issues that require additional guidance and oversight. 

This is not our current situation, and Cruz’s SANDBOX Act is only a stand-in for accountability. 

President Trump’s early executive orders withdrawing agency guidance and protections around AI systems, the administration’s recently released AI Action Plan and a legislative roadmap from Cruz himself make clear that this administration’s intent is to remove protections, speed adoption of ungoverned AI and enable the large companies behind them to reap the commercial rewards — backed by the full power of the American government. 

Given this, a federal sandbox offers the weakest of oversight regimes, which is particularly worrying given Cruz’s intent to take another run at imposing a moratorium on state AI laws.  

If there is any doubt about the bill’s slavishness toward the AI industry, the waiver process tells a clear story: Tech companies would certify the safety of their own systems. They would be required to describe the benefits of their system — hardly a tall order for a tech sector driven by hype — and identify any risks, including those associated with waiving the specific federal regulations. 

Said “risks” are not defined expansively; the bill does not require companies to describe any risks of discrimination in housing, employment or education, surveillance, or degraded working conditions, to name just a few well-documented harms of AI. 

And there is no oversight mechanism to ensure that AI deployers are actually mitigating any harms they do identify. If something goes wrong, companies are advised to report incidents within 72 hours — and carry on. 

They are not required to stop deployment or make changes to those systems. The “temporary” waiver is renewable for up to eight additional years, providing AI companies with a decade of weak, friendly oversight.

A national “try it and see” model puts the public in the position of serving as test subjects without assigning responsibility when things go wrong. By design, no one learns — and no one pays. 

Set alongside the deep alignment between industry and government to push for AI adoption at all costs — in education, work, government services and daily life — this is particularly galling. 

The bill is not simply a neglect of the duty of care, but an active effort to foist unaccountable technology onto the country under the cover of governance. 

It is also not in line with what the American people want. The public is asking for stronger, not weaker, oversight of AI systems. A recent poll found that more people are concerned about AI than excited by it. It also found, among both Democrats and Republicans, that people think AI will benefit corporations, not the working class. 

Cruz’s AI legislation is a gift to the oligarch elite of Silicon Valley: a mechanism to undermine government enforcement and escape accountability.

Brian J. Chen is Data & Society’s policy director. Janet Haven is the executive director of Data & Society.