{"id":102582,"date":"2021-12-09T13:00:43","date_gmt":"2021-12-09T21:00:43","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/?p=102582"},"modified":"2023-09-26T09:33:00","modified_gmt":"2023-09-26T16:33:00","slug":"best-practices-for-ai-security-risk-management","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/12\/09\/best-practices-for-ai-security-risk-management\/","title":{"rendered":"Best practices for AI security risk management"},"content":{"rendered":"

Today, we are releasing an AI security risk assessment framework as a step to empower organizations to reliably audit, track, and improve the security of their AI systems. In addition, we are providing new updates to Counterfit, our open-source tool for assessing the security posture of AI systems.

There is marked interest in securing AI systems against adversaries. Counterfit has been heavily downloaded and explored by organizations of all sizes, from startups to governments and large enterprises, looking to proactively secure their AI systems. From a different vantage point, the Machine Learning Evasion Competition we organized to help security professionals practice defending and attacking AI systems in a realistic setting saw record participation, doubling the number of participants and submitted techniques compared to the previous year.

This interest demonstrates both the growth mindset and the opportunity in securing AI systems. But how do we turn that interest into action that raises the security posture of AI systems? When the rubber hits the road, how should a security engineer think about mitigating the risk of an AI system being compromised?

AI security risk assessment framework

The deficit is clear: according to the Gartner® Market Guide for AI Trust, Risk and Security Management published in September 2021, “AI poses new trust, risk and security management requirements that conventional controls do not address.”¹ To address this gap, we did not want to invent a new process. We acknowledge that security professionals are already overwhelmed. Moreover, we believe that even though attacks on AI systems pose a new security risk, current software security practices are relevant and can be adapted to manage this novel risk. To that end, we fashioned our AI security risk assessment in the spirit of existing security risk assessment frameworks.

We believe that to comprehensively assess the security risk of an AI system, we need to look at the entire lifecycle of system development and deployment. Overreliance on securing machine learning models through academic adversarial machine learning techniques oversimplifies the problem in practice. To truly secure an AI model, we need to account for securing the entire supply chain and management of AI systems.

Through our own operations experience in building and red teaming models at Microsoft, we recognize that securing AI systems is a team sport. AI researchers design model architectures. Machine learning engineers build data ingestion, model training, and deployment pipelines. Security architects establish appropriate security policies. Security analysts respond to threats. To that end, we envisioned a framework that would involve participation from each of these stakeholders.

“Designing and developing secure AI is a cornerstone of AI product development at Boston Consulting Group (BCG). As the societal need to secure our AI systems becomes increasingly apparent, assets like Microsoft’s AI security risk management framework can be foundational contributions. We already implement best practices found in this framework in the AI systems we develop for our clients and are excited that Microsoft has developed and open sourced this framework for the benefit of the entire industry.”
Jack Molloy, Senior Security Engineer, BCG

As a result of our Microsoft-wide collaboration, our framework features the following characteristics:

1. Provides a comprehensive perspective on AI system security. We looked at each element of the AI system lifecycle in a production setting, from data collection and data processing to model deployment. We also accounted for AI supply chains, as well as the controls and policies with respect to backup, recovery, and contingency planning related to AI systems.
2. Outlines machine learning threats and recommendations to mitigate them. To directly help engineers and security professionals, we enumerated the threat statement at each step of the AI system building process. Next, we provided a set of best practices that overlay and reinforce existing software security practices in the context of securing AI systems.
3. Enables organizations to conduct risk assessments. The framework provides the ability to gather information about the current state of security of AI systems in an organization, perform gap analysis, and track the progress of the security posture (a hypothetical sketch of such tracking follows this list).

Updates to Counterfit

To help security professionals get a broader view of the security posture of their AI systems, we have also significantly expanded Counterfit. The first release of Counterfit wrapped two popular frameworks, Adversarial Robustness Toolbox (ART) and TextAttack, to provide evasion attacks against models operating on tabular, image, and textual inputs.
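To make the idea of an evasion attack concrete, the sketch below runs a decision-based, black-box attack through ART directly against a toy scikit-learn model on tabular data. It is an illustration only, not Counterfit's own code (Counterfit drives such attacks from its command-line interface), and it assumes the adversarial-robustness-toolbox and scikit-learn packages are installed.

```python
# Illustrative sketch only, not Counterfit's own code: a black-box evasion attack
# run directly through ART against a toy scikit-learn model on tabular data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# Train a small target model to stand in for the system under assessment.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model so ART can query it, then craft adversarial examples
# using only the model's predictions (a decision-based, black-box attack).
target = SklearnClassifier(model=model, clip_values=(float(X.min()), float(X.max())))
attack = HopSkipJump(target, max_iter=5, max_eval=1000, init_eval=10)
X_adv = attack.generate(x=X[:5].astype(np.float32))

print("Original predictions:   ", model.predict(X[:5]))
print("Adversarial predictions:", model.predict(X_adv))
```

With the new release, Counterfit now features the following: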