OpenAI believes human-level superintelligent AI is coming sooner than expected, and plans to control and capitalise on it


Despite recent internal shakeups at OpenAI, the Superalignment team, led by Ilya Sutskever, remains steadfast in its mission to develop strategies for steering and regulating superintelligent AI systems.


This team, formed in July, is tackling the complex challenge of aligning AI models that surpass human intelligence.

While some skeptics argue that the focus on superintelligent AI is premature, the Superalignment team is actively exploring governance and control frameworks to address the potential risks posed by superintelligent systems, as reported by TechCrunch.

Superalignment team members Collin Burns, Pavel Izmailov, and Leopold Aschenbrenner presented the team's latest work at the NeurIPS conference.

Their approach involves using a less sophisticated AI model (e.g., GPT-2) to guide a more advanced model (e.g., GPT-4) toward desired behaviors and away from undesirable ones.

In this analogy, the weak model stands in for human supervisors and the strong model stands in for superintelligent AI, letting the team test alignment hypotheses in a controlled manner.
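The weak-to-strong setup is straightforward to mock up. Below is a minimal, self-contained toy sketch in plain NumPy, not OpenAI's actual code: a small linear "weak supervisor" labels data for a larger random-feature "strong student", and both are scored against held-out ground truth. All names (`fit_logreg`, `weak_labels`, the data splits) are illustrative assumptions, not anything from the team's published experiments.

```python
# Toy sketch of the weak-to-strong experimental setup (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task with a nonlinear ground-truth rule that a
# linear weak supervisor cannot fully represent.
X = rng.normal(size=(4000, 10))
y_true = ((X[:, 0] * X[:, 1] + X[:, 2]) > 0).astype(int)

test, student_train, weak_train = slice(0, 1000), slice(1000, 3000), slice(3000, 3200)

def fit_logreg(X, y, steps=500, lr=0.1):
    """Plain-NumPy logistic regression via gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        g = p - y                               # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# --- Weak supervisor: linear model fit on a small ground-truth subset. ---
w_weak, b_weak = fit_logreg(X[weak_train], y_true[weak_train])
weak_labels = ((X @ w_weak + b_weak) > 0).astype(int)  # noisy supervision

# --- Strong student: random-feature model trained ONLY on weak labels. ---
Phi = np.tanh(X @ rng.normal(size=(10, 256)))  # richer features than teacher
w_s, b_s = fit_logreg(Phi[student_train], weak_labels[student_train])
student_preds = ((Phi @ w_s + b_s) > 0).astype(int)

# Score teacher and student against held-out ground truth: the gap
# between these two numbers is what weak-to-strong experiments measure.
print("weak supervisor acc:", (weak_labels[test] == y_true[test]).mean())
print("strong student acc: ", (student_preds[test] == y_true[test]).mean())
```

In OpenAI's actual experiments the roles are played by GPT-2- and GPT-4-class language models, and the question is how much of the gap between weak supervision and the strong model's full capability the student recovers.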

The team is focused on instructing AI models effectively, ensuring they follow given instructions, and verifying the safety and accuracy of generated outputs.

The Superalignment team acknowledges the challenges of aligning models that surpass human intelligence and emphasizes the importance of research in addressing this critical issue.

To encourage collaboration and innovation, OpenAI is launching a $10 million grant program for technical research on superintelligent alignment. The program will fund academic labs, nonprofits, individual researchers, and graduate students.

Former Google CEO Eric Schmidt, a supporter of OpenAI and advocate for AI research, is contributing to the funding. OpenAI also plans to host an academic conference on superalignment in early 2025 to share and promote research findings.

The Superalignment team is committed to sharing its research, including code, with the public. The team’s mission aligns with OpenAI’s goal of ensuring AI safely benefits humanity.

The involvement of Schmidt, whose commercial interests in AI have been noted, raises questions about the commercial and ethical implications of OpenAI’s superalignment research. Nevertheless, the team remains dedicated to contributing to the safety and benefit of advanced AI for the broader community.
