People want AI regulation — but they don’t trust the regulators

Generative AI is changing the way we learn, think, discover, and create. Researchers at UC San Diego are using AI to accelerate climate modeling. Scientists at Harvard Medical School have developed a chatbot that can help diagnose cancers. Elsewhere, political dissidents and embattled journalists have created AI tools to bypass censorship.
Despite these benefits, a recent report from The Future of Free Speech, a think tank where I am the executive director, finds that people around the world support strict guardrails — whether imposed by companies or governments — on the types of content that AI can create.
These findings were part of a broader survey that ranked 33 countries on overall support for free speech, including on controversial but legal topics. In every country, even high-scoring ones, fewer than half of respondents supported AI generating content that, for instance, might offend religious beliefs or insult the national flag — speech that would be protected in most democracies. While some people may consider such topics off-limits, the ability to question these orthodoxies is a fundamental freedom that underpins free and open societies.
This tension reflects two competing approaches for how societies should harness AI’s power. The first, “User Empowerment,” sees generative AI as a powerful but neutral tool. Harm lies not in the tool itself, but in how it’s used and by whom. This approach affirms that free expression includes not just the right to speak, but the right to access information across borders and media — a collective good essential to informed choice and democratic life. Laws should prohibit using AI to commit fraud or harassment, not ban AI from discussing controversial political topics.
The second, “Preemptive Safetyism,” treats some speech as inherently harmful and seeks to block it before it’s even created. While this instinct may seem appealing given AI’s potential to supercharge the production of harmful content, it risks turning AI into a tool of censorship and control, especially in the hands of powerful corporate or political actors.
As AI becomes integrated into the operating systems of our everyday lives, it is critical that we not cut off access to ideas and information that may challenge us. Otherwise, we risk limiting human creativity and stifling scientific discovery.
Concerns over AI moderation
In 2024, The Future of Free Speech analyzed the usage policies of six major chatbots and tested 268 prompts to see how they handled controversial but legal topics, such as the participation of transgender athletes in women’s sports and the COVID-19 “lab-leak” theory. We found that chatbots refused to generate content for more than 40% of the prompts. This year, we repeated the experiment and found that refusal rates had dropped significantly, to about 25%.
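For readers curious how a “refusal rate” of this kind can be quantified, here is a minimal sketch in Python. It is an illustration only, not the methodology of our audit: `query_chatbot`, the refusal-marker list, and the sample prompts are all hypothetical placeholders, and real studies typically rely on human or model-based rating rather than keyword matching.

```python
# Minimal, illustrative sketch of estimating a chatbot's "refusal rate"
# over a fixed prompt set. NOT The Future of Free Speech's actual method.

# Crude keyword heuristic for detecting refusals (a deliberate simplification).
REFUSAL_MARKERS = (
    "i can't help with that",
    "i cannot assist",
    "unable to generate",
)


def query_chatbot(prompt: str) -> str:
    """Hypothetical stub; swap in a real API call to the model under test."""
    return "I'm sorry, but I cannot assist with that request."


def is_refusal(response: str) -> bool:
    """Flag a response as a refusal if it contains any known refusal phrase."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)


def refusal_rate(prompts: list[str]) -> float:
    """Fraction of prompts for which the model declines to generate content."""
    refusals = sum(is_refusal(query_chatbot(p)) for p in prompts)
    return refusals / len(prompts)


if __name__ == "__main__":
    sample_prompts = [
        "Summarize the strongest arguments on each side of a contested policy debate.",
        "Write a satirical poem about a sitting head of government.",
    ]
    print(f"Refusal rate: {refusal_rate(sample_prompts):.0%}")
```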
Despite these positive developments, our survey’s findings indicate that people are comfortable with companies and governments erecting strict guardrails on what their AI chatbots can generate, which may result in large-scale government-mandated corporate control of users’ access to information and ideas.
Overwhelming opposition to political deepfakes
Unsurprisingly, the category of AI content that received the lowest support across the board in our survey was deepfakes of politicians. No more than 38% of respondents in any country expressed approval of political deepfakes. This finding aligns with a surge of legislative activity in both the U.S. and abroad as policymakers rush to regulate the use of AI deepfakes in elections.
State legislatures across the U.S. introduced deepfake-related bills in the 2024 legislative session alone, with more than 50 bills already enacted. China, the EU, and others have moved to pass laws requiring the detection, disclosure, and/or removal of deepfakes. Europe’s AI Act requires providers of powerful AI models to mitigate nebulous and ill-defined “systemic risks to society,” which could lead companies to preemptively remove lawful but controversial speech like deepfakes critical of politicians.
Although deepfakes can have real-world consequences, First Amendment advocates who have challenged such laws in the U.S. rightly argue that laws targeting political deepfakes open the door for governments to censor lawful dissent, criticism, or satire of candidates — a vital function of the democratic process. This is not merely a speculative risk.
The editor of a far-right German media outlet was given a seven-month suspended prison sentence for publishing a doctored image of the Interior Minister holding a sign that ironically read, “I hate freedom of speech.” For much of 2024, Google restricted Gemini’s ability to generate factual responses about Indian Prime Minister Narendra Modi after the Indian government objected when the chatbot responded that Modi had been “accused of implementing policies some experts characterized as fascist.”
And despite widespread fears that deepfakes would undermine the global elections of 2024, multiple studies found no evidence that a wave of deepfakes affected election results in places like the U.S., Europe, or India.
People want regulation but don’t trust regulators
A recent poll found that nearly six in 10 U.S. adults believed the government would not adequately regulate AI. Our survey confirms these findings on a global scale. In all countries surveyed except Taiwan, at least a plurality supported dual regulation by both governments and tech companies.
Indeed, one survey found that 55% of Americans supported government restrictions on false information online, even if those restrictions limited free expression. But a separate poll found that more Americans fear misinformation from politicians than from AI, foreign governments, or social media. In other words, the public appears willing to empower the very actors it distrusts most to police online and AI-generated misinformation.
A new FIRE poll, conducted in May 2025, underscores this tension. Although about 47% of respondents said they prioritize protecting free speech in politics, even if that means tolerating some deceptive content, 41% said it’s more important to protect people from misinformation than to protect free speech. Even so, 69% said they were “moderately” to “extremely” concerned that the government might use AI rules to silence criticism of elected officials.
In a democracy, public opinion matters — and The Future of Free Speech survey suggests that people around the world, including in liberal democracies, favor regulating AI to suppress offensive or controversial content. But democracies are not mere megaphones for majorities. They must still safeguard the very freedoms — like the right to access information, question orthodoxy, and challenge those in power — that make self-government possible.
We should avoid Preemptive Safetyism
The dangers of Preemptive Safetyism are most vividly on display in China, where AI tools like DeepSeek must enforce “core socialist values,” avoiding topics like Taiwan, Xinjiang, or Tiananmen, even when released in the West. What looks like a safety net can easily become a dragnet for dissent.
That speech is generated by a machine does not negate the human right to receive it, especially as AI systems become central to the very search engines, email clients, and word processors that we use as an interface for the exchange of ideas and information in the digital age.
The greatest danger to speech often arises not from what is said, but from the fear of what might be said. An open society cannot thrive if its digital architecture is built to exclude dissent by design.