
US government forms new AI Safety Board, leaves Elon Musk and Mark Zuckerberg out

The Biden administration created a new AI safety board earlier this week, drawing some of the tech industry’s biggest names into its ranks.

Created in response to the escalating threat posed by deepfake technology, the Artificial Intelligence Safety and Security Board includes notable names such as OpenAI’s Sam Altman, NVIDIA’s Jensen Huang, Microsoft’s Satya Nadella, Alphabet’s Sundar Pichai, Adobe’s Shantanu Narayen, and AMD’s Lisa Su.

Government officials, including representatives from the White House, along with state governors, defense contractors, and human rights organizations, round out the group.

Two people conspicuously left off the board were Tesla and SpaceX CEO Elon Musk and Meta CEO Mark Zuckerberg.

The move comes in the wake of numerous incidents involving the malicious use of deepfakes targeting individuals ranging from politicians to celebrities and even minors.

The use of “nudification” programs and generative AI to create deepfakes for blackmailing or harassing people, especially women, has become increasingly prevalent, particularly in American schools, as The New York Times has reported.

Forming a federal board of industry heavyweights is crucial to addressing these pressing concerns. However, the absence of Zuckerberg and Musk has made the board’s composition controversial.

Speculation abounds about why Zuckerberg and Musk were omitted from the roster of board members released by the Department of Homeland Security (DHS).

While Secretary of Homeland Security Alejandro Mayorkas cited the board’s exclusion of social media platforms as the reason, skepticism persists among many observers.

Meta, formerly Facebook, has faced scrutiny, including a pending EU probe, for purportedly failing to curb Russian disinformation on its platform.

Concerns have also been raised about Meta’s inadequate measures to combat ads promoting nudification apps on its platforms.

Similarly, a Media Matters report exposing ads running alongside antisemitic content on X led major advertisers to withdraw from the platform, prompting Musk to sue Media Matters.

Such controversies have cast doubt on Meta’s and Musk’s commitment to AI safety initiatives, particularly given the board’s mandate to advise the DHS and other stakeholders on potential AI disruptions.

Zuckerberg has advocated for open-source AI, which presents unique challenges in terms of regulation and safety. Meanwhile, concerns about Musk’s unpredictability have been raised, compounded by his ongoing legal disputes with the Securities and Exchange Commission (SEC).

Despite these controversies, the companies represented on the safety board have shown a greater willingness to discuss AI safety. During a Senate hearing, for instance, Altman emphasized the importance of designing safe AI products.

Moreover, these companies have implemented their own AI safety protocols, albeit with varying degrees of success. OpenAI, for instance, employs reinforcement learning from human feedback (RLHF) to align its models’ behavior, while other firms have developed their own safety frameworks.
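To illustrate the core idea behind RLHF, here is a minimal sketch in Python (using PyTorch) of the reward-modeling step: a small scorer network is trained so that responses humans preferred receive higher scores than rejected ones. The network, embeddings, and dimensions are hypothetical stand-ins for illustration, not OpenAI’s actual implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of the reward-model step in RLHF, using hypothetical
# random tensors in place of a real language model's outputs. A small
# network scores responses; the Bradley-Terry preference loss pushes the
# score of the human-preferred ("chosen") response above the rejected one.

class RewardModel(nn.Module):
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # scalar reward per response
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in embeddings for a batch of (chosen, rejected) response pairs.
chosen = torch.randn(8, 64)
rejected = torch.randn(8, 64)

# Preference loss: -log sigmoid(reward_chosen - reward_rejected).
optimizer.zero_grad()
loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
optimizer.step()
```

In a full RLHF pipeline, the trained reward model then guides a reinforcement-learning step that fine-tunes the language model itself; the sketch above covers only the preference-learning stage.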

Efforts to curb the spread of deepfakes include the adoption of watermarking techniques by companies such as Adobe and Google, and a proposed CSAM database for training AI models to detect potentially explicit content.
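As a toy illustration of the watermarking idea, the sketch below hides a short provenance tag in the least significant bits of an image’s pixels and reads it back. The embed_watermark and read_watermark helpers are purely hypothetical; production systems such as Google’s SynthID or Adobe’s Content Credentials use far more robust, tamper-resistant techniques.

```python
import numpy as np
from PIL import Image

# Toy least-significant-bit (LSB) watermark: hides a short provenance tag
# in the low bit of each pixel channel. Illustrative only; real watermarking
# systems must survive compression, cropping, and deliberate removal.

TAG = b"ai-generated"

def embed_watermark(image: Image.Image, tag: bytes = TAG) -> Image.Image:
    pixels = np.array(image.convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = pixels.reshape(-1)
    # Overwrite the lowest bit of the first len(bits) channel values.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return Image.fromarray(flat.reshape(pixels.shape))

def read_watermark(image: Image.Image, n_bytes: int = len(TAG)) -> bytes:
    flat = np.array(image.convert("RGB"), dtype=np.uint8).reshape(-1)
    bits = flat[: n_bytes * 8] & 1  # recover the hidden low bits
    return np.packbits(bits).tobytes()

marked = embed_watermark(Image.new("RGB", (64, 64), "white"))
assert read_watermark(marked) == TAG
```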

While these measures represent positive steps toward addressing the deepfake menace, the complex nature of the issue necessitates continued collaboration between independent researchers and industry stakeholders to safeguard against AI-driven threats effectively.


