
The ‘hallucinations’ that haunt AI: why chatbots struggle to tell the truth

Tech groups step up efforts to reduce fabricated responses, but eliminating them appears impossible

The world’s leading artificial intelligence groups are stepping up efforts to reduce the number of “hallucinations” in large language models, as they seek to solve one of the big obstacles limiting take-up of the powerful technology.

Google, Amazon, Cohere and Mistral are among those trying to bring down the rate of these fabricated answers by rolling out technical fixes, improving the quality of the data in AI models, and building verification and fact-checking systems across their generative AI products.

The drive to reduce these so-called hallucinations is seen as crucial to increasing the use of AI tools across industries such as law and health, which require accurate information, and to boosting the AI sector’s revenues.
