US, Britain, other countries sign agreement to make AI bots ‘secure by design’


The United States, Britain, and 18 other nations have introduced what a senior US official described as the first detailed international agreement on keeping artificial intelligence (AI) safe from misuse by rogue actors.


The 20-page document emphasizes the need for AI systems to be “secure by design,” urging companies to develop and deploy AI in a manner that safeguards customers and the broader public.

While the agreement is non-binding, it offers general recommendations such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers.

The director of the US Cybersecurity and Infrastructure Security Agency, Jen Easterly, underscored the significance of multiple countries endorsing the idea that AI systems should prioritize safety during design.

She emphasized that the guidelines signify an agreement that security is the foremost consideration in the design phase, moving beyond mere emphasis on features, market speed, or cost competitiveness.

Beyond the US and Britain, the signatories include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.

The framework primarily addresses the risk of hackers hijacking AI technology and includes recommendations such as releasing models only after thorough security testing. However, it does not tackle contentious questions about the appropriate uses of AI or the gathering of the data that feeds these models.

While governments worldwide have launched various initiatives to shape AI development, most lack enforcement mechanisms. Europe has been more proactive on AI regulation, with lawmakers drafting dedicated AI rules.

In October, the Biden administration issued an executive order to mitigate AI risks to consumers, workers, and minority groups while strengthening national security. A divided US Congress, however, has made little progress toward enacting AI regulation of its own.
