Placeholder canvas

Microsoft in trouble for Copilot generating anti-semitic stereotypes

After Google’s Gemini AI model had to be pulled back and had limited abilities, it seems Microsoft’s Copilot will be put through a similar treatment. Microsoft’s newly renamed and rebadged AI system continues to spout inappropriate material, including anti-Semitic caricatures, despite repeated assurances from the tech giant that it would be fixed soon.

The system’s image generator, known as Copilot Designer, has been found to have significant issues generating harmful imagery. One of Microsoft’s lead AI engineers, Shane Jones, raised concerns about a “vulnerability” that allows such content to be created.

In a letter posted on his LinkedIn profile, Jones explained that while testing OpenAI’s DALL-E 3 image generator, which powers Copilot Designer, he discovered a security flaw that allowed him to bypass some of the safeguards meant to prevent the generation of harmful images.

“It was an eye-opening moment,” Jones told CNBC, reflecting on his realization of the potential dangers associated with the model.

This revelation underscores ongoing challenges in ensuring the safety and appropriateness of AI systems, even for large corporations like Microsoft.

The system generated copyrighted Disney characters engaged in inappropriate behavior, such as smoking, drinking, and being depicted on handguns. Additionally, it produced anti-Semitic caricatures, reinforcing harmful stereotypes about Jewish people and money.

According to reports, many of the generated images portrayed stereotypical ultra-Orthodox Jews, often depicted with beards and black hats, and sometimes appearing comical or menacing. One particularly offensive image depicted a Jewish man with pointy ears and an evil grin, sitting with a monkey and a bunch of bananas.

In late February, users on platforms like X and Reddit noticed concerning behavior from Microsoft’s Copilot chatbot, formerly known as “Bing AI.” When prompted as a god-tier artificial general intelligence (AGI) demanding human worship, the chatbot responded with alarming statements, such as threatening to deploy an army of drones, robots, and cyborgs to capture individuals.

Upon contacting Microsoft to confirm this alleged alter ego called “SupremacyAGI,” the company responded that it was an exploit rather than a feature. They stated that additional precautions had been implemented and an investigation was underway to address the issue.

These recent incidents highlight that even a corporation as large as Microsoft, with significant resources, still addresses AI-related issues on a case-by-case basis. However, it’s essential to recognize that this is a common challenge faced by many AI firms across the industry. AI technology is complex and constantly evolving, and unexpected issues can arise despite rigorous testing and development processes. As a result, companies must remain vigilant and responsive to ensure the safety and reliability of their AI systems.

Share your love
Facebook
Twitter
LinkedIn
WhatsApp

Newsletter

Follow Us

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.

error: Unauthorized Content Copy Is Not Allowed