{"id":137320,"date":"2025-02-13T09:00:00","date_gmt":"2025-02-13T17:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/?p=137320"},"modified":"2025-03-07T10:52:30","modified_gmt":"2025-03-07T18:52:30","slug":"securing-deepseek-and-other-ai-systems-with-microsoft-security","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2025\/02\/13\/securing-deepseek-and-other-ai-systems-with-microsoft-security\/","title":{"rendered":"Securing DeepSeek and other AI systems with Microsoft Security"},"content":{"rendered":"\n
A successful AI transformation starts with a strong security foundation. With a rapid increase in AI development and adoption, organizations need visibility into their emerging AI apps and tools. Microsoft Security provides threat protection, posture management, data security, compliance, and governance to secure AI applications that you build and use. These capabilities can also be used to help enterprises secure and govern AI apps built with the DeepSeek R1 model and gain visibility and control over the use of the separate DeepSeek consumer app. <\/p>\n\n\n\n
Last week, we announced DeepSeek R1’s availability on Azure AI Foundry and GitHub<\/a>, joining a diverse portfolio of more than 1,800 models. <\/p>\n\n\n\n Customers today are building production-ready AI applications with Azure AI Foundry, while accounting for their varying security, safety, and privacy requirements. Similar to other models provided in Azure AI Foundry, DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks. Microsoft\u2019s hosting safeguards for AI models are designed to keep customer data within Azure\u2019s secure boundaries. <\/p>\n\n\n\n With Azure AI Content Safety, built-in content filtering is available by default to help detect and block malicious, harmful, or ungrounded content, with opt-out options for flexibility. Additionally, the safety evaluation system allows customers to efficiently test their applications before deployment. These safeguards help Azure AI Foundry provide a secure, compliant, and responsible environment for enterprises to confidently build and deploy AI solutions.\u202fSee Azure AI Foundry and GitHub<\/a> for more details.<\/p>\n\n\n\n
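The content-filtering behavior described above can be sketched as a simple severity-threshold check. Azure AI Content Safety scores text against harm categories (such as Hate, SelfHarm, Sexual, and Violence) on a severity scale, and an application decides whether to block based on those scores. The dict shape, the `should_block` helper, and the threshold value below are illustrative assumptions for this sketch, not the service SDK's own types.

```python
# Minimal sketch: decide whether to block content based on per-category
# harm severity scores, as an Azure AI Content Safety analysis might return.
# The list-of-dicts shape and the threshold are assumptions for illustration.

def should_block(categories_analysis, threshold=2):
    """Return True if any harm category's severity meets the threshold."""
    return any(item.get("severity", 0) >= threshold for item in categories_analysis)

# Example: an analysis where only "Violence" scored above the threshold.
analysis = [
    {"category": "Hate", "severity": 0},
    {"category": "SelfHarm", "severity": 0},
    {"category": "Sexual", "severity": 0},
    {"category": "Violence", "severity": 4},
]
print(should_block(analysis))  # True
```

In a real deployment this decision is made by the service's built-in filters by default; a custom check like this would only come into play if an application consumes raw severity scores and applies its own policy thresholds.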