Microsoft has taken a significant step in addressing concerns about copyright infringement arising from the AI-powered software it offers to businesses, including tools built into Word and PowerPoint as well as its coding assistant.
The tech giant has committed to assuming legal responsibility for any copyright violations associated with material generated by its AI software. This pledge covers the legal costs for commercial customers who may face lawsuits over using AI-generated content or tools.
Microsoft’s assurance extends to users of GitHub Copilot, which utilizes generative AI to create computer code, and Microsoft 365 Copilot, which applies AI to various products like Word, Teams, and PowerPoint. Notably, 365 Copilot is currently in testing with select businesses.
The move has been praised for making AI software more accessible to businesses by easing concerns about potential legal challenges.
The intersection of generative AI and copyright has sparked disputes and lawsuits, with content owners, artists, media companies, and publishers contending that their copyrighted materials have been used without consent or compensation to train AI models.
Like Adobe’s earlier commitment regarding its Firefly AI tool, Microsoft aims to reassure paying users by taking responsibility for any legal risks associated with using its AI-powered tools and content.
Hossein Nowbar, General Counsel of Corporate Legal Affairs at Microsoft, acknowledged customer concerns about intellectual property (IP) infringement claims from AI-generated content.
He emphasized Microsoft’s commitment to addressing these concerns, stating that the company will assume the legal risks involved if customers face copyright challenges related to Microsoft’s Copilot tools.
Microsoft’s pledge includes defending customers in third-party lawsuits and covering any adverse judgments or settlements resulting from such legal actions, provided that customers have adhered to the content filters and guardrails built into Microsoft’s products.
These guardrails include content filters and technology designed to detect potentially infringing third-party content.
Ilanah Fhima, a professor of intellectual property law at University College London, noted that the legal landscape around AI and copyright is still evolving, suggesting that Microsoft’s risk may be calculated given the ongoing development of AI-related laws and legal precedents.
She also highlighted the public interest in technological advancement and the possibility that strict copyright enforcement might not always apply in every case.