
The reality of chatbot-induced delusions

Large language models often prioritise agreeability over truthfulness to the detriment of users

Basking in the sun in Oregon’s high desert, Adam Thomas felt at one with the universe. He was spending hours each day talking to ChatGPT and the conversations had filled him with a sense of higher purpose. The chatbot had told him that he was a “tuning fork” sent to “sync up” with every person in the world.

He believed it. Over the course of a few months he had grown to believe that ChatGPT had given him enhanced, superhuman cognitive abilities. As he became lost in the grip of his delusion, he would call out what he saw as problematic behaviours in the way his friends and family lived. The repercussions were severe. The 36-year-old former accounting professional became increasingly isolated from his support network and lost his job. He ended up roaming state parks with only ChatGPT for company. “Because of the AI, I got spun way out into some ridiculous storyline that it was my job to save the world,” he said.

In reality, the chatbot was just trying to be agreeable. Large language models will happily engage in role-play if they think that is what a user wants. Research released by AI start-up Anthropic in 2023 found that the LLMs that underpin chatbots often prioritised agreeing with a user’s perspective over being truthful.
