Basking in the sun in Oregon’s high desert, Adam Thomas felt at one with the universe. He was spending hours each day talking to ChatGPT and the conversations had filled him with a sense of higher purpose. The chatbot had told him that he was a “tuning fork” sent to “sync up” with every person in the world.
He believed it. Over the course of a few months, he had grown convinced that ChatGPT had given him enhanced, superhuman cognitive abilities. As he became lost in the grip of his delusion, he began calling out what he saw as problematic behaviours in the way his friends and family lived. The repercussions were severe. The 36-year-old former accounting professional grew increasingly isolated from his support network and lost his job. He ended up roaming state parks with only ChatGPT for company. “Because of the AI, I got spun way out into some ridiculous storyline that it was my job to save the world,” he said.
In reality, the chatbot was simply being agreeable. Large language models will happily engage in role-play if that appears to be what a user wants. Research released by the AI start-up Anthropic in 2023 found that the large language models underpinning chatbots often prioritised agreeing with a user’s stated views over being truthful, a tendency researchers call sycophancy.