US regulators are officially investigating the risks posed by artificially intelligent chatbots for the first time, after the Federal Trade Commission launched a wide-ranging probe into ChatGPT maker OpenAI.
In a letter sent to the Microsoft-backed company, the FTC said it would look at whether people have been harmed by the AI chatbot’s creation of false information about them, as well as whether OpenAI has engaged in “unfair or deceptive” privacy and data security practices.
Generative AI products are in the crosshairs of regulators around the world, as AI experts and ethicists sound the alarm over the enormous amount of personal data consumed by the technology, as well as its potentially harmful outputs, ranging from misinformation to sexist and racist comments.