AI has a good chance of achieving artificial general intelligence within five years


More than a decade ago, Shane Legg, co-founder of Google’s DeepMind artificial intelligence lab, boldly predicted that by 2028, artificial intelligence (AI) would have a 50-50 chance of being as intelligent as humans. In a recent interview with tech podcaster Dwarkesh Patel, Legg reiterated his belief in this forecast, which he initially made on his blog at the end of 2011.


This prediction holds significant weight, especially considering the ever-increasing interest and investment in AI. Sam Altman, CEO of OpenAI, has long championed the development of artificial general intelligence (AGI), a theoretical form of AI capable of performing intellectual tasks on par with humans, potentially benefiting all of humanity. However, AGI has yet to be achieved, and a universally accepted definition of it remains elusive.

Legg’s journey toward his 2028 goalpost began in 2001 when he read Ray Kurzweil’s groundbreaking book “The Age of Spiritual Machines.” Kurzweil’s book predicted a future where superhuman AIs would become a reality. Legg identified two critical points from Kurzweil’s work that he came to believe in: the exponential growth of computational power for decades and the exponential growth of global data. With these trends and the emergence of deep learning techniques to teach AI systems to process data like the human brain, Legg posited at the start of the last decade that AGI was attainable in the coming years, provided no significant disruptions occurred.

In the present day, Legg acknowledges certain caveats to his prediction regarding the AGI era.

Legg notes that the definition of AGI is inherently linked to human intelligence, which is challenging to define precisely because of its complexity. He acknowledges that no set of tests could encompass every aspect of human intelligence. Nonetheless, he suggests that if researchers could create a battery of tests covering human intelligence and an AI model performed exceptionally well across them, it could be considered AGI.

Legg’s second caveat is the need to scale up AI training significantly. This point is especially relevant in an era when AI companies consume vast amounts of energy to train large language models. Legg emphasizes the need to create more scalable algorithms to handle the computational demands of AGI.

Legg’s assessment of our progress toward AGI indicates that computational power has reached a level that could make it achievable. He identifies the “first unlocking step” as training AI models on data at a scale beyond what a human could experience in a lifetime, a feat he believes the AI industry is ready to undertake.

Despite his optimism, Legg reiterates his belief that there is only a 50 percent chance that researchers will achieve AGI before the end of this decade. His perspective offers a glimpse into the ongoing uncertainties and challenges that AI experts grapple with as they strive to reach the pinnacle of artificial intelligence.

