Microsoft has filed a lawsuit after a group of cybercriminals allegedly bypassed security guardrails on its Azure OpenAI platform, using the service to generate harmful and offensive content.
The lawsuit, filed in December 2024 in the US District Court for the Eastern District of Virginia, names ten unidentified individuals. This foreign-based threat group is accused of stealing customer credentials and using custom-designed software to gain unauthorised access to Microsoft’s generative AI services, which provide access to OpenAI models such as those behind ChatGPT and DALL-E.
How the hackers gained access
Azure OpenAI is a service that allows businesses to integrate OpenAI’s models into their own cloud applications; Microsoft uses it to power products such as GitHub Copilot, an AI coding assistant for developers. According to the lawsuit, the hackers obtained customer credentials by scraping public websites.
With these credentials in hand, they accessed Azure OpenAI accounts, circumvented its security protocols, and altered the functionality of the AI services to suit their needs. They then resold access to the modified services to other malicious actors, along with detailed instructions for exploiting the tools to generate harmful, illicit content.
The nature of the content and legal actions
While Microsoft has not disclosed the exact nature of the content created by the cybercriminals, it confirmed that it violated the company’s policies and terms of service. The lawsuit accuses the criminals of intentionally and illegally accessing Azure OpenAI’s systems, causing significant damage and loss. Microsoft is seeking injunctive relief to prevent further damage and to stop the criminals from continuing their illicit activities.
The company is also pursuing damages and has requested the seizure of a website central to the operation. Court approval of the seizure would allow Microsoft to collect evidence, identify the perpetrators, and dismantle the infrastructure that supported the illegal activity.
Microsoft’s efforts to enhance security
In response to the breach, Microsoft has implemented additional security measures and safety mitigations to safeguard Azure OpenAI from future attacks. The company is actively working to prevent further unauthorised access and to strengthen its platform’s defences.
Microsoft has also highlighted that the hackers’ actions violated several US laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering laws. The company is determined to hold those responsible for the breach accountable and to prevent similar incidents from occurring in the future.
Broader implications for AI security
This incident underscores the potential risks and vulnerabilities in the rapidly growing field of generative AI. As AI tools like those offered by OpenAI become more widely accessible, the need for robust security and safety protocols becomes even more pressing.
Microsoft’s response to the breach aims not only to address the immediate damage but also to set a precedent for how companies safeguard AI systems against misuse and malicious activity. The ongoing legal case highlights the challenges of securing AI platforms in an increasingly complex cyber threat landscape.