Opinion | Artificial Intelligence

It’s not only AI that hallucinates

Human memory is also fallible, but people and machines can learn to complement each other, writes Thornhill.

It may be rash to extrapolate from a sample size of one (me). But I confess that my memory is not perfect: I forget some things, confuse others and occasionally “remember” events that never happened. I suspect some FT readers may be similarly muddle-headed. A smart machine might call this human hallucination.

We talk a lot about generative AI models hallucinating facts. We wince at the lawyer who submitted a court document containing fictitious cases invented by ChatGPT. An FT colleague, who prompted the chatbot to produce a chart of the training costs of generative AI models, was startled to see that the most expensive one it identified did not exist (unless the model has access to inside information).

As every user rapidly discovers, these models are unreliable — just like humans. The interesting question is: are machines more corrigible than us? It may prove easier to rewrite code than rewire the brain.

