The research and advisory firm Gartner is famous for mapping the “hype cycle” of different technologies. In this year’s edition, generative artificial intelligence has passed the Peak of Inflated Expectations and is now sliding into the Trough of Disillusionment. Only later will it reach the Slope of Enlightenment and the Plateau of Productivity.
The launch of OpenAI’s ChatGPT three years ago certainly triggered an avalanche of excitement about the possibilities of generative AI. The take-up of the technology has been among the fastest in history. ChatGPT now has more than 800mn weekly active users, according to the company. Users have marvelled at the chatbot’s uncanny ability to perform tasks as varied as writing plausible sonnets about your pet goldfish, summarising complex legal documents and generating passable corporate presentations.
But these foundation models also exhibit some glaring flaws, most notably their tendency to hallucinate or, more accurately, confabulate facts. On countless earnings calls, corporate bosses have extolled the possibilities of deploying AI across almost every business function to improve productivity. But they are also wary of the risks generative AI can pose to data security, client confidentiality and corporate reputation. The excitement aroused by the deployment of AI agents has also run into the hard wall of reality, where nothing is as simple as coders imagine.