
OpenAI’s o1 model, aka Strawberry, could help create bioweapons, carries ‘medium risk’, AI giant admits

OpenAI has acknowledged that its latest artificial intelligence models, known as o1 or “Strawberry,” pose an increased risk of misuse, particularly in creating biological weapons.

The company also stated that the models’ significantly enhanced capabilities hold potential for beneficial applications in the right hands. The o1 models boast improvements in reasoning, in solving complex mathematical problems, and in answering scientific research questions, marking a step forward in the development of artificial general intelligence (AGI).

According to OpenAI’s system card, the new o1 models have been rated “medium risk” for chemical, biological, radiological, and nuclear (CBRN) weapons, the highest risk level the company has ever attributed to its AI technology.

This rating means the models could help experts develop bioweapons more effectively, raising ethical and safety concerns. While the models’ advanced reasoning abilities are a breakthrough in the field, they are considered a potential threat in the hands of bad actors.

Experts, such as Professor Yoshua Bengio, one of the leading voices in AI research, have highlighted the urgent need for regulation in light of these risks. A proposed bill in California, SB 1047, aims to address such concerns by requiring AI developers to minimize the risk of their models being used to create bioweapons.

Bengio and others have stressed that as AI models evolve closer to AGI, the associated risks will only increase unless urgent and robust safety measures are implemented.

The development of these advanced AI systems is part of a broader competition among tech giants such as Google, Meta, and Anthropic. They are all vying to create sophisticated AI that can act as agents, assisting humans in various tasks. These AI agents are viewed as significant revenue generators for companies facing the high costs of training and operating such models.

OpenAI’s chief technology officer, Mira Murati, emphasized that the company is proceeding cautiously in releasing the o1 model to the public, ensuring the safety of its deployment. The model will be available to ChatGPT’s paid subscribers and to developers via an API, and it has undergone rigorous testing by “red-teamers,” experts tasked with identifying potential vulnerabilities in the model.
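For developers, access runs through OpenAI’s standard chat-completions API. The sketch below is a minimal illustration, assuming the official openai Python SDK and the publicly announced “o1-preview” model name; the exact parameters the model supports at launch may differ.

```python
# Minimal sketch: calling the o1 model via OpenAI's chat-completions API.
# Assumes the official `openai` Python SDK (pip install openai), the
# "o1-preview" model name, and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": "Walk through a proof that the square root of 2 is irrational.",
        }
    ],
)

print(response.choices[0].message.content)
```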

Murati noted that the latest model has demonstrated better safety performance than earlier versions. Despite the risks, OpenAI has deemed the model safe to deploy under its policies, assigning it a medium risk rating under its Preparedness Framework.
