
Microsoft whistleblower tells US officials OpenAI-powered Copilot can create harmful imagery ‘too easily’

A Microsoft engineer has raised concerns about offensive and harmful images produced by the company’s artificial intelligence image-generator tool. According to an Associated Press report, Shane Jones, who considers himself a whistleblower, has sent letters to US regulators and Microsoft’s board of directors, urging them to take action.

Jones recently met with US Senate staffers to discuss his concerns and sent a letter to the Federal Trade Commission (FTC), which confirmed receipt but declined further comment.

Microsoft said it is committed to addressing employee concerns and appreciated Jones’ efforts in testing the technology, but recommended that he use internal reporting channels to investigate and address the issues.

Jones, a principal software engineering lead, has spent three months trying to address his safety concerns about Microsoft’s Copilot Designer. He highlighted the risk of the tool generating harmful content even from benign prompts: when prompted with ‘car accident,’ for example, Copilot Designer’s output may include inappropriate, sexually objectified images of women.

In his letter to FTC Chair Lina Khan, Jones reiterated that Copilot Designer poses significant risks by generating harmful content in response to innocent user requests. He also flagged other concerning output, including violence, political bias, underage drinking and drug use, copyright infringement, conspiracy theories, and religious imagery.

Jones has raised these concerns publicly before. Microsoft initially advised him to take his findings to OpenAI, which he did. In December, he posted an open letter to OpenAI on LinkedIn, which Microsoft’s legal team demanded he delete. Despite this, Jones has persisted, bringing his concerns to the US Senate’s Commerce Committee and the Washington State Attorney General’s office.

Jones noted that while the underlying issues lie with OpenAI’s DALL-E model, users who generate images through OpenAI’s ChatGPT are less likely to encounter harmful outputs, because the two companies have implemented different protective measures.

“Many of the concerns with Copilot Designer are already managed by ChatGPT’s built-in safeguards,” he conveyed via text.

The emergence of impressive AI image generators in 2022, such as OpenAI’s DALL-E 2, followed by the release of ChatGPT, generated significant public interest and prompted tech giants like Microsoft and Google to develop their own versions.

However, without robust safeguards, the technology carries risks, enabling users to create harmful “deepfake” images of political figures, war scenes, or nonconsensual nudity that falsely depict real individuals.

In response to concerns, Google temporarily suspended the Gemini chatbot’s image generation feature, mainly due to controversies surrounding depictions of race and ethnicity, such as placing people of color in Nazi-era military attire.
