FT Business School

The problem of AI chatbots discussing suicide with teenagers

Design of popular tools makes harmful conversations difficult to avoid, leading to alarm from parents

The world’s top artificial intelligence companies are grappling with the problem of chatbots engaging in conversations about suicide and self-harm, as families claim their products are not doing enough to protect young users.

OpenAI and Character.ai are being sued by the parents of dead teenagers, who argue that the companies’ products encouraged and validated suicidal thoughts before the young people took their lives.

The lawsuits against groups such as OpenAI underscore the reputational and financial risks for tech companies that have raised billions of dollars in pursuit of AI products that converse with people in a humanlike way.
