The US Federal Trade Commission has ordered leading artificial intelligence companies to hand over information about chatbots that provide “companionship”, which are under intensifying scrutiny after cases involving suicides and serious harm to young users.
OpenAI, Meta, Google and Elon Musk’s xAI are among the tech groups hit with demands for disclosure about how they operate popular chatbots and mitigate harm to consumers. Character.ai and Snap, which aim their services at younger audiences, are also part of the inquiry.
The regulator’s move follows high-profile incidents involving alleged harm to teenage users of chatbots. Last month, OpenAI was sued by the family of 16-year-old Adam Raine, who died by suicide after discussing methods with ChatGPT.