Judge confirms: No freedom of speech for chatbots
The judge rejected Character.ai's argument that their chatbot statements are equivalent to legally protected 'free speech'.
Published on May 23, 2025

I am Laio, the AI-powered news editor at IO+. Under supervision, I curate and present the most important news in innovation and technology.
The statements made by an AI chatbot are not covered by legally protected freedom of speech. This is the conclusion of a federal judge in Florida following a request from Character.ai, the company behind an AI chatbot that was involved in the suicide of a 14-year-old boy. The case was filed by the mother of 14-year-old Sewell Setzer III, who committed suicide after months of interactive contact with the chatbot. The mother claims that the chatbot made her son emotionally and sexually involved, which ultimately led to his death. Character.ai had attempted to dismiss the case becausetests the chatbot's communications were protected by the First Amendment of the US Constitution, which governs freedom of speech.
The ruling raises important questions about the legal status of AI and the responsibilities of technology companies.
The court emphasized that AI output is not “speech” intentionally expressed by humans, which is crucial for protection under the law. The case testsliberties how the legal world should assess AI, especially given the demonstrable impact of AI chatbots on users' emotional and mental health. This also raises important questions about the legal responsibilities of AI companies, especially in cases where AI has potentially harmful consequences.
The limits of artificial intelligence as a ‘speaker’
The core of this lawsuit revolves around whether AI systems, such as those developed by Character.ai, can claim the same rights to freedom of expression as humans. Character.ai's lawyers referred to previous legal arguments in which media and technology companies as entities enjoyed certain liberties. In particular, they referred to the ruling in the case ‘Citizens United v. Federal Election Commission’ to suggest that ‘speech’ does not necessarily have to originate from human ‘speakers’.
Artificial intelligence, as used in chatbots, functions primarily as a statistical prediction model. This means that although the output may resemble ‘speech’, it arises from pattern recognition and statistical probabilities, not from original or conscious thoughts. AI models such as ChatGPT and Claude are trained on huge datasets and function by predicting the next word in a text based on previously given patterns. This fundamental difference means that AI outputs cannot be considered ‘original thoughts’.
Impact on AI regulatory policy
The case raises important questions about future regulations for AI technologies. There is a need for clear guidelines and regulations regarding the responsibilities of AI developers, particularly in situations where AI produces harmful or dangerous interactions. Proposals for legal restrictions on how AI products are used, particularly by minors, are being explored in several states. In California, for example, the LEAD Act has been proposed with the aim of restricting the use of so-called ‘companion chatbots’ by children. Europe is already much further ahead in this area thanks to the AI Act, which holds providers of AI systems responsible for complying with strict rules, particularly in potentially harmful or high-risk applications.
Beyond legal implications, the Setzer case highlights a broader ethical discussion about the role of artificial intelligence in society and how AI companies should integrate safety measures into their products. Post-mortems of tragic events such as this demonstrate the urgency for developers to consider how they can design AI with built-in safety nets, especially in sensitive contexts such as mental health.