OpenAI forms a team to study and prevent catastrophic AI risks

To address the potentially catastrophic risks associated with AI systems, OpenAI has announced the formation of a new team called “Preparedness.” The team, led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, will focus on evaluating and probing AI models to protect against the dangers posed by future AI systems.

OpenAI CEO Sam Altman has been vocal about his concern that AI could lead to “human extinction,” and the formation of the Preparedness team is a proactive step toward addressing that risk.

The key responsibilities of the Preparedness team will include tracking and forecasting the risks associated with AI, ranging from its ability to deceive and manipulate humans, as seen in phishing attacks, to its capacity to generate malicious code. Some risk categories that Preparedness will examine may appear speculative, such as threats related to “chemical, biological, radiological, and nuclear” scenarios.

OpenAI acknowledges the importance of studying less obvious but grounded areas of AI risk. To engage the broader community in this effort, OpenAI has launched a competition soliciting ideas for risk studies. The top ten submissions will have a chance to win a $25,000 prize and an opportunity to work with the Preparedness team.

One contest question prompts participants to consider the “most unique, while still being probable, potentially catastrophic misuse of the model” when given unrestricted access to various AI models developed by OpenAI.

The Preparedness team’s mission extends beyond risk assessment. It will also develop a “risk-informed development policy” setting out OpenAI’s approach to evaluating AI models, monitoring tooling, risk-mitigation actions, and governance structures throughout the model development process. This effort will complement OpenAI’s existing AI safety work and cover both the pre- and post-deployment phases.

OpenAI emphasizes the potential benefits of highly capable AI systems while acknowledging the increasingly severe risks they may pose. The Preparedness team was established in the belief that the understanding and infrastructure needed to keep advanced AI systems safe are essential.

The unveiling of the Preparedness team coincides with a major UK government summit on AI safety, and it follows OpenAI’s earlier announcement of a team dedicated to studying and controlling emerging forms of “superintelligent” AI. The company and its leaders, including Sam Altman and chief scientist Ilya Sutskever, are committed to finding ways to limit and restrict AI whose intelligence surpasses that of humans, a development they anticipate could become a reality within the next decade.
