The writer is professor of computer science at the Université de Montréal and founder of Mila, the Quebec Artificial Intelligence Institute
Lack of internal deliberation abilities — thinking, in other words — has long been considered one of the main weaknesses of artificial intelligence. The scale of a recent advance on this front by ChatGPT creator OpenAI is a point of debate within the scientific community. But it leads many of my expert colleagues and me to believe there is a chance that we are on the brink of bridging the gap to human-level reasoning.
Researchers have long argued that traditional neural networks — the leading approach to AI — align more with “system 1” cognition. This corresponds to direct or intuitive answers to questions (such as when automatically recognising a face). Human intelligence, on the other hand, also relies on “system 2” cognition. This involves internal deliberation and enables powerful forms of reasoning (like when solving a maths problem or planning something in detail). It allows us to combine pieces of knowledge in coherent but novel ways.