Microsoft has taken legal action against alleged cybercriminals who, it claims, misused Microsoft's AI services. The Redmond giant alleges that the actors broke US law as well as the Acceptable Use Policy and Code of Conduct attached to these tools.
“Microsoft has observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites,” it explains. “In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content. Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future.”
As part of the legal action, Microsoft has been granted a court order to seize a website that was instrumental in offering these illegal services. Microsoft plans to use this access to learn more about who has been running the service, work out how it was monetized, and disrupt other infrastructure run by the operators.
Even though the actors remain at large, Microsoft is adding extra guardrails to make it more difficult to exploit its AI in the ways the company has observed. While the actors will inevitably find ways around these guardrails, the measures will slow them down and give Microsoft more time to learn who is behind the operations.
Governments and tech firms have been very proactive in addressing the threats AI poses. Like any software, AI has many useful applications, but people inevitably try to break through its safety measures and use it for malicious purposes. This incident shows how Microsoft can adapt its safeguards as new threats emerge.