Speaking at a session at the World Economic Forum in Davos, Meta chief AI scientist Yann LeCun predicted “a new paradigm shift of AI architectures”. He says that the AI we know right now — generative AI and large language models (LLMs) — is not capable of much. It gets the basics done but still falls short. And in the next five years, “nobody in their right mind would use them anymore”, he said. “I think the shelf life of the current [AI] paradigm is fairly short, probably three to five years,” LeCun added.
“We’re going to see the emergence of a new paradigm for AI architectures, which may not have the limitations of current AI systems,” he predicts.
The Meta chief AI scientist also believes that the next few years could usher in the “decade of robotics”, in which we will see a whole new class of applications combining robots and AI.
LeCun also explains why he thinks the current AI models aren’t capable of doing much, giving four reasons. One, the current models lack awareness and understanding of the physical world. Two, they can only hold a limited amount of information at once and have no continuous memory. Three, they lack the power of reasoning. And four, they are unable to perform complex planning tasks.
“So there’s going to be another revolution of AI over the next few years. We may have to change the name of it, because it’s probably not going to be generative in the sense that we understand it today,” LeCun says.
LeCun suggests that this “AI revolution” could be some 10 years away, but given the pace at which AI is progressing right now, the big change could arrive sooner.
“LLMs are good at manipulating language, but not at thinking,” LeCun said.
Speaking at the “Debating Technology” session at Davos, LeCun may have also revealed what Meta’s AI labs are working on right now.
“So that’s what we’re working on — having systems build mental models of the world. If the plan that we’re working on succeeds, with the timetable that we hope, within three to five years we’ll have systems that are a completely different paradigm,” he said. “They may have some level of common sense. They may be able to learn how the world works from observing the world and maybe interacting with it.”
Interestingly, even as the Meta chief AI scientist spoke about current AI and LLM models’ inability to perform complex tasks, OpenAI and Perplexity on Friday unveiled new agentic AI tools, which they claim excel at performing complex, multi-step tasks. For instance, OpenAI’s AI agent, called Operator (currently available only in the US), can order groceries for you if you give it a shopping list, or book flight tickets if you share your itinerary with it. It can also create memes. Basically, anything you do on the web, the agentic AI can do on your behalf.