President Biden sets up new AI guardrails for military, won’t let AI launch nukes, grant asylum


The Biden-led White House has introduced new guardrails for artificial intelligence (AI) use within military and intelligence operations, laying out strict rules to limit its application.


This marks the administration’s first national security memorandum dedicated to AI. It offers guidelines to balance the technology’s potential benefits with the risks it poses. A condensed version of the memo was made public, highlighting critical takeaways for citizens.

Strict guidelines for AI in weapons and immigration decisions

The new memorandum prioritizes human oversight in sensitive military scenarios, setting boundaries to prevent AI from operating autonomously in critical areas. It explicitly prohibits AI systems from being used to make decisions about launching nuclear weapons or determining asylum status for immigrants entering the United States.

Additionally, it ensures AI cannot be deployed to track individuals based on race or religion or label someone as a terrorist without human involvement.

National Security Adviser Jake Sullivan, who spoke at the National Defense University, underscored the directive’s importance. Sullivan, a strong advocate for a careful assessment of AI’s benefits and dangers, also pointed to the growing challenge posed by China’s use of AI to surveil its population and spread misinformation.

He expressed hope that the new measures would spark discussions with other nations pursuing similar AI strategies.

Safeguarding AI development and national security

Beyond regulating military AI use, the memo sets deadlines for agencies to review how AI tools are deployed, with most of those reviews scheduled to conclude before President Biden’s term ends. It also encourages partnerships between intelligence agencies and the private sector to safeguard AI advancements, now seen as critical national assets.

The memorandum directs intelligence agencies to support private companies developing AI models, helping them secure their work against potential spying or theft by foreign actors. It also emphasizes the importance of regularly updating intelligence assessments to protect these assets from international threats.

Preventing dystopian AI futures

One of the memo’s key goals is to avoid worst-case scenarios, such as developing fully autonomous weapons. In this vein, it draws a clear line between AI’s role in military operations and human decision-making, ensuring AI cannot replace humans in matters with significant ethical and security implications.

With AI becoming more integrated into national security strategies worldwide, the Biden administration aims to balance leveraging the technology’s advantages and minimizing its risks. These new rules reflect an effort to guide the military’s use of AI responsibly while addressing public fears about the unchecked rise of autonomous systems.
