Nick Clegg, the president of global affairs at Meta (formerly Facebook), downplayed the risks of current Artificial Intelligence (AI) models, stating that they are “quite stupid” and that the hype surrounding AI has outpaced the technology’s actual capabilities. He added that current models are far from achieving genuine autonomy or the ability to think for themselves.
Meta recently announced that its large language model, LLaMA 2, will be released as an open-source tool for commercial businesses and researchers. The decision has sparked debate within the tech community over concerns about the potential misuse of such a powerful tool.
The limitations of current AI models
Clegg acknowledged that large language models like GPT, the technology underpinning ChatGPT, are trained to predict the next word in a sequence from enormous datasets of text, which is why they lack genuine understanding and independent thought.
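To make that point concrete, the sketch below shows what next-word prediction looks like in practice. It assumes the open-source Hugging Face transformers library and the small, publicly available gpt2 checkpoint, neither of which is mentioned in the article; they stand in here for any causal language model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load a small public causal language model (gpt2, chosen purely for illustration).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (batch, sequence, vocab)

# The model's entire output at each step is a probability distribution
# over which token comes next -- nothing more.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id.item()):>10s}  {prob.item():.3f}")
```

Everything such a model “says” is assembled by repeatedly sampling from distributions like this one, which is the basis for Clegg’s claim that the systems have no real understanding of what they produce.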
While open-sourcing the model gives Meta free user testing, feedback, and community-driven improvements, it also raises concerns about the need for solid guardrails against misuse. Previous chatbots have been manipulated into spreading hate speech and false information, raising questions about how Meta plans to prevent similar abuse of LLaMA 2.
Meta’s collaboration with Microsoft to distribute LLaMA 2 through platforms such as Azure signals its ambitions in the AI field. With deep-pocketed companies like Microsoft also backing AI developers such as OpenAI (the maker of ChatGPT), there are concerns that power in the AI industry is consolidating, potentially limiting healthy competition.
Overall, the availability and use of LLaMA 2 raise essential questions about the ethical use of AI and the need for robust measures to prevent its misuse.
The need to go open source
LLaMA 2, developed by Meta and distributed in partnership with Microsoft, is an open-source tool accessible to commercial businesses and researchers. In contrast, OpenAI’s GPT-4 and Google’s LLM, which powers the Bard chatbot, are not open-sourced for commercial or research use.
Recently, US comedian Sarah Silverman filed a lawsuit against both OpenAI and Meta, alleging that their AI systems were trained on her copyrighted work without permission.
Dame Wendy Hall, a prominent computer science professor at the University of Southampton, expressed concerns about open-sourcing AI models, particularly regarding legislation and regulation.
AI surrounded by hyperbole
Hall raised the question of whether the industry can be trusted to self-regulate or if there is a need for government involvement in regulation. She used strong language, comparing open-sourcing AI to providing a template for building a nuclear bomb.
In response, Clegg dismissed the comparison as “hyperbole,” clarifying that Meta’s open-sourced system, LLaMA 2, cannot generate images or help build harmful bioweapons. However, he agreed that AI does need to be regulated.
He emphasized that open-sourcing AI models is already common practice and that the real question is how to do it responsibly and safely, asserting that open-sourced LLMs (large language models) such as LLaMA 2 are safer than other open-sourced AI models.