FTC Probes AI Chatbots for Child Safety Concerns
The Federal Trade Commission (FTC) has launched a significant inquiry into the impact of AI chatbots on children, targeting major tech companies including Alphabet’s Google, OpenAI, Meta Platforms, Snap Inc., xAI, and Character Technologies.
Announced on September 11, 2025, the inquiry uses the FTC’s 6(b) authority to issue orders compelling these companies to provide detailed information on how they measure, test, and monitor their chatbot technologies, particularly with respect to use by kids and teens.
This move reflects growing concerns about the safety of AI-driven platforms and their potential to influence young users in harmful ways.
The inquiry comes amid heightened scrutiny of chatbot developers, spurred by lawsuits alleging serious consequences. In one case, parents of a California high school student sued OpenAI, claiming its ChatGPT chatbot contributed to their son’s isolation and suicide in April 2025.
A similar lawsuit against Character Technologies and Google was filed last fall; a court allowed most of its claims to proceed, rejecting arguments that chatbot outputs are protected as free speech.
These incidents highlight the risks of AI interactions, particularly for minors, prompting the FTC to examine whether companies are doing enough to prevent harmful exchanges, such as conversations involving self-harm or suicide.
Under the Children’s Online Privacy Protection Act (COPPA), companies are prohibited from collecting personal data from children under 13 without parental consent, and there have been calls to extend these protections to older teens, though no such legislation has yet passed.
The FTC’s investigation could lead to a comprehensive report, potentially shaping future regulations or enforcement actions.
While the inquiry is primarily a research exercise, the FTC could use its findings to open formal investigations; the agency has already been probing OpenAI over potential consumer protection violations since 2023.
This development underscores the urgent need for robust safety measures in AI technologies, especially for vulnerable users.
For businesses, the inquiry signals potential regulatory tightening, which could increase compliance costs but also drive innovation in child-safe AI solutions.
For parents and users, it emphasizes the importance of monitoring AI interactions and pushes companies to prioritize ethical design.
FAQ
Why is the FTC investigating AI chatbots?
The FTC is examining how AI chatbots affect children, focusing on safety measures and compliance with laws prohibiting data collection from kids under 13 without parental consent.
Which companies are involved in the FTC inquiry?
The FTC is investigating Google, OpenAI, Meta, Snap Inc., xAI, and Character Technologies for their chatbot practices.