If 2024 was the year of experimentation with generative AI, then last year was one of implementation. Hundreds of thousands of businesses, as well as many hundreds of millions of individual users, applied the technology in all kinds of weird and wonderful ways. In some cases, users found highly productive uses of AI, but in many others the technology’s limitations became increasingly apparent, resulting in embarrassing business blunders.
This year will therefore be dominated by hard-headed evaluation as AI comes under intense scrutiny over its practical reliability and commercial viability. In particular, there are three questions the industry must address to justify the extraordinary investment surge that may see AI capital expenditure top $500bn in 2026.
First, is generative AI now hitting the limits of scaling? Back in 2019, the AI researcher Rich Sutton wrote an essay entitled “The Bitter Lesson”, observing that the most effective way to build stronger AI was simply to throw more data and computational power at deep learning models. That scaling thesis has since been spectacularly validated by OpenAI and others, which have built ever more powerful and computation-hungry models.