AI models must adapt or die

The technology has already consumed almost all high-quality data — experience is now the dominant medium of improvement

The nosebleed valuations in the US tech sector partly reflect the belief that artificial general intelligence is within sight. Even though few agree on what AGI means exactly, investors seem convinced that a stronger form of generalisable AI will transform economic productivity and make mountainous fortunes for its creators.

To sustain that story, US tech firms have been pouring hundreds of billions of dollars into building more AI infrastructure to scale their computing power. The trouble is that scaling is now producing diminishing returns and some researchers doubt whether the AI industry’s route map will ever lead to fully generalisable intelligence. Arch-sceptic Gary Marcus wrote recently that generative AI models were still best viewed as “souped-up regurgitation machines” that struggled with truth, hallucinations and reasoning and would never bring us to the “holy grail of AGI”.

The debate about the limits of scaling has been raging for years and, until now, the doubters have been proved wrong. In 2019 the computer scientist Rich Sutton wrote "The Bitter Lesson", arguing that the best way to solve AI problems was to keep throwing more data and computing power at them. The bitter lesson was that human ingenuity was overrated and constantly outstripped by the power of scaling.
