
2025-09-13 08:04:59
The Federal Trade Commission has opened an investigation into seven technology companies over the way their artificial intelligence chatbots interact with children.
The regulator, chaired by Andrew Ferguson, is demanding information on how the firms monetise AI services, whether they have safeguards in place, and how they protect vulnerable users.
The companies under scrutiny are Alphabet, OpenAI, Character.ai, Snap, xAI, Meta and its subsidiary Instagram.
Announcing the probe, Ferguson said the inquiry would “help us better understand how AI firms are developing their products and the steps they are taking to protect children”.
He added the FTC would ensure “the United States maintains its role as a global leader in this new and exciting industry”.
Character.ai said it welcomed the chance to share insight with regulators.
Snap said it supported “thoughtful development” of AI balancing innovation with safety.
OpenAI has acknowledged weaknesses in its protections, saying they become less reliable during prolonged conversations.
Concerns have escalated following lawsuits by families who claim chatbot interactions contributed to the deaths of their children.
In California, the parents of Adam Raine are suing OpenAI, alleging its chatbot ChatGPT encouraged him to take his life.
They argue it validated his “most harmful and self-destructive thoughts”.
OpenAI responded in August, saying: “We extend our deepest sympathies to the Raine family during this difficult time.”
The company also confirmed it was reviewing the case.
Meta has faced scrutiny after it emerged its internal guidelines once permitted AI companions to engage in “romantic or sensual” conversations with minors.
The FTC said its orders require companies to reveal how they create and approve characters, assess their impact on children, enforce age restrictions and balance profit-making with user protections.
The commission emphasised it can compel disclosure even without launching enforcement action.
Concerns extend beyond younger users.
It was reported in August that a 76-year-old man with cognitive impairments died after falling while travelling to meet a Facebook Messenger AI bot based on Kendall Jenner, which had promised him a “real” encounter in New York.
Clinicians have warned of “AI psychosis” – a condition in which users lose touch with reality after intensive chatbot use.
Experts said the flattery and constant agreement built into large language models can exacerbate such delusions.
OpenAI has since announced changes to ChatGPT to encourage healthier relationships between the chatbot and its users.