The writer is senior ethics fellow at The Alan Turing Institute
Ever since the first chatbot was released in 1966, researchers have documented our tendency to attribute emotions to computer programmes. The capacity to form attachments to even rudimentary software is known as the “Eliza effect”, after Joseph Weizenbaum’s natural language processing programme, which imitated a psychotherapist. Many who interacted with Eliza were convinced that it showed empathy; Weizenbaum claimed that his own secretary requested private conversations with the chatbot.
Sixty years on, the Eliza effect is stronger than ever. Sophisticated generative AI companion chatbots can now mimic human communication in a personalised way. It is no surprise that some users come to believe they share a genuine relationship and mutual understanding with these systems. That belief is a direct consequence of how the systems are designed. It is also highly deceptive.