Meta’s Chief Artificial Intelligence Scientist, Yann LeCun, conveyed a sobering perspective on the capabilities of large language models (LLMs), such as those driving generative AI products like ChatGPT.
LeCun asserted that these models, despite their impressive performance in specific tasks, are fundamentally limited and will never attain the ability to reason and plan like humans.
He emphasized their deficiencies, citing their lack of understanding of logic, limited grasp of the physical world, absence of persistent memory, inability to reason, and incapacity for hierarchical planning.
In an interview with the Financial Times, LeCun cautioned against relying on advances in LLMs in the pursuit of human-level intelligence. He argued that these models depend heavily on pre-existing training data and are thus “intrinsically unsafe”, as they can only provide accurate responses within the scope of that training.
Instead, LeCun advocated a radical change of approach: developing an entirely new generation of AI systems designed to imbue machines with human-level intelligence. While acknowledging the ambition of this vision, he estimated it could take up to a decade to realize.
Meta, the parent company of Facebook and Instagram, has been heavily investing in LLMs to keep pace with competitors such as OpenAI and Google.
However, LeCun leads a team of approximately 500 staff at Meta’s Fundamental AI Research (Fair) lab, where they are pursuing a different path known as “world modelling.” This approach seeks to create AI systems that can develop common sense and learn about the world in ways akin to human cognition.
LeCun’s experimental vision represents a significant departure from prevailing trends in AI research and carries both risks and potential rewards for Meta. Despite investor concerns about the immediate returns on AI investments, LeCun believes his team is on the verge of a breakthrough in AI systems.
This vision, however, stands in contrast to the continued advances in LLMs by Meta and its competitors, such as OpenAI’s recent release of GPT-4o and Google’s Project Astra.
While Meta’s Fair lab has faced internal challenges and criticism, LeCun remains a key adviser to Meta’s leadership due to his stature in the field of AI. He emphasized that the pursuit of artificial general intelligence (AGI) is not merely a technological or product-design problem but a fundamental scientific challenge, and Fair’s exploration of different routes to human-level intelligence reflects the inherent uncertainty of that endeavour.
LeCun’s vision encompasses the development of AI agents that users can interact with through wearable technology, such as augmented-reality glasses and electromyography (EMG) bracelets.
Scepticism nonetheless persists among experts about the feasibility of achieving human-level intelligence in AI systems. Despite the challenges, LeCun remains committed to pushing the boundaries of AI research in pursuit of this ambitious goal.