Opinion | Artificial Intelligence

Nvidia and the AI boom face a scaling problem

The idea that putting more data into a bigger model will deliver smarter systems is starting to break down

The computational “law” that made Nvidia the world’s most valuable company is starting to break down. This is not the famous Moore’s Law, the semiconductor-industry maxim that transistor density, and with it chip performance, roughly doubles every two years.

For many in Silicon Valley, Moore’s Law has been displaced as the dominant predictor of technological progress by a new concept: the “scaling law” of artificial intelligence. This posits that putting more data into a bigger AI model — in turn, requiring more computing power — delivers smarter systems. This insight put a rocket under AI’s progress, transforming the focus of development from solving tough science problems to the more straightforward engineering challenge of building ever-bigger clusters of chips — usually Nvidia’s.
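The column does not spell out the mathematics behind this idea, but the relationship is usually written as an empirical power law. Below is a minimal sketch, using the functional form and approximate fitted constants reported in DeepMind’s 2022 “Chinchilla” paper (Hoffmann et al.), which this piece does not itself cite; here L is model loss, N is parameter count and D is the number of training tokens:

```latex
% Empirical scaling law in the "Chinchilla" form (Hoffmann et al., 2022).
% Loss L falls as a power law in parameters N and training tokens D.
% The constants are the paper's approximate published fits, not values
% taken from this article.
L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}},
\qquad
E \approx 1.69,\quad A \approx 406,\quad B \approx 411,\quad
\alpha \approx 0.34,\quad \beta \approx 0.28
```

Because the exponents α and β are well below 1, each fixed reduction in loss demands a multiplicative increase in both model size and data: halving the parameter term alone requires roughly an eightfold-larger model under these fits. That diminishing-returns shape is exactly what makes the question of whether the law still holds so consequential for Nvidia’s customers.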

The scaling law had its coming-out moment with the launch of ChatGPT. The breakneck pace of improvement in AI systems in the two years since then seemed to suggest the rule might hold true right up to some kind of “superintelligence”, perhaps within this decade. Over the past month, however, industry rumblings have grown louder that the latest models from the likes of OpenAI, Google and Anthropic have not delivered improvements in line with the scaling law’s projections.
