OpenAI launches safety fellowship to advance AI alignment research and assess effects on society

OpenAI has announced a new program inviting external researchers, engineers, and practitioners to pursue research on essential safety precautions and the alignment of advanced AI systems. The program will run from September 14, 2026, through February 5, 2027. The company says it is seeking applicants to work on safety questions that apply to both existing and future systems.

Fellows will be asked to focus on safety evaluations, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains, among other areas.

The company has also expressed a keen interest in work that is empirically grounded, technically strong, and relevant to the broader research community.

Fellows will work closely with OpenAI mentors and engage with their peers. Workspace will be available in Berkeley, alongside other fellows at Constellation, though fellows may also work remotely. In exchange for OpenAI's support, fellows are expected to produce substantial research output by the end of the program, such as a paper, benchmark, or dataset. The fellowship also includes a monthly stipend, compute support, and mentorship.

OpenAI is accepting applications from candidates with a wide range of backgrounds, including computer science, social science, cybersecurity, privacy, HCI, and other related fields. The company has also clarified that research ability, technical judgment, and execution will be prioritized over specific credentials.

Other tech giants, such as Meta and Anthropic, also hire researchers to analyze how their services affect users. In Meta's case, however, one of those researchers became a whistleblower, after which Mark Zuckerberg's company began diminishing the role of its research team, according to a CNBC report.
