{"id":133336,"date":"2024-02-14T04:00:00","date_gmt":"2024-02-14T12:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/?p=133336"},"modified":"2024-07-03T07:32:50","modified_gmt":"2024-07-03T14:32:50","slug":"staying-ahead-of-threat-actors-in-the-age-of-ai","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2024\/02\/14\/staying-ahead-of-threat-actors-in-the-age-of-ai\/","title":{"rendered":"Staying ahead of threat actors in the age of AI"},"content":{"rendered":"\n

Over the last year, the speed, scale, and sophistication of attacks has increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries. At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt-injections, attempted misuse of large language models (LLM), and fraud. Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape. You can read OpenAI\u2019s blog on the research here<\/a>. Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors\u2019 usage of AI. However, Microsoft and our partners continue to study this landscape closely.<\/p>\n\n\n\n

The objective of Microsoft\u2019s partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models. In addition, we are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including Microsoft Copilot for Security<\/a>, to elevate defenders everywhere.<\/p>\n\n\n\n

A principled approach to detecting and blocking threat actors<\/h2>\n\n\n\n

The progress of technology creates a demand for strong cybersecurity and safety measures. For example, the White House’s Executive Order on AI<\/a> requires rigorous safety testing and government supervision for AI systems that have major impacts on national and economic security or public health and safety. Our actions enhancing the safeguards of our AI models and partnering with our ecosystem on the safe creation, implementation, and use of these models align with the Executive Order’s request for comprehensive AI safety and security standards.<\/p>\n\n\n\n

In line with Microsoft\u2019s leadership across AI and cybersecurity, today we are announcing principles shaping Microsoft\u2019s policy and actions mitigating the risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates we track.<\/p>\n\n\n\n

These principles include:   <\/p>\n\n\n\n