Same old playbook, new target: AI chatbots

ChatGPT on an iPhone (jackpress / Shutterstock.com)

Chatbots are already transforming how people access information, express themselves, and connect with others, and these tools are quickly becoming an everyday part of digital life. But as their use grows, so does the urgency to protect the First Amendment rights of both developers and users.

That’s because some state lawmakers are pursuing a familiar regulatory approach: requiring things like blanket age verification, rigid time limits, and mandated lockouts on use. But like other means of digital communication, the development and use of chatbots have First Amendment protection, so any efforts to regulate them must carefully navigate significant constitutional considerations.

Take New York’s S 5668, which would make every user, including adults, verify their age before chatting, and would fine chatbot providers when a “misleading” or “harmful” reply “results in” any kind of demonstrable harm to the user. This is, in effect, a breathtakingly broad “misinformation” bill that would permit the government to punish speech it deems false, or true but subjectively harmful, whenever it can point to a supposed injury. That is inconsistent with the First Amendment, which precludes the government from regulating chatbot speech it thinks is misleading or harmful, just as it does with any other expression.

S 5668 would also require that certain companion bots be shut down for 24 hours whenever expressions of potential self-harm are detected, complementing a new state law that requires companion chatbots to include protocols to detect and address expressions of self-harm and direct users to crisis services. Both the bill and the new law also require chatbots to remind users that they are AI and not a human being.

Sound familiar? States like California, Utah, Arkansas, Florida, and Texas all attempted similar regulatory measures targeting another digital speech technology: social media. Those efforts have resulted in repeals and blocked implementation because they violated the First Amendment rights of the platforms and their users.

New York is just one of a few states that have introduced similar chatbot legislation. Minnesota’s bill requires age verification while flatly banning anyone under age 18 from “recreational” chatbots. California’s bill targets undefined “rewarding” chat features, leaving developers to guess what speech is off-limits and pressuring them to censor conversations.

As we’ve said before, the First Amendment doesn’t evaporate when the speaker’s words depend on computer code. From the printing press to the internet, and now AI, each leap in expressive technology remains under its protective umbrella.

This is not because the machine itself has rights; rather, it’s protected by the rights of the developer who created the chatbot and of the users who create the prompts. Just like asking a question in a search engine or posting on social media, and the responses those acts generate, prompting a chatbot involves a developer’s expressive design and a user choosing words to communicate ideas, seek information, or express thoughts. That act of communication is protected under the First Amendment, even when software generates the specific response.

FIRE will keep speaking out against these bills, which show a growing pattern of government overreach into First Amendment rights when it comes to digital speech. 
