{"id":94938,"date":"2021-07-29T09:00:21","date_gmt":"2021-07-29T16:00:21","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/?p=94938"},"modified":"2023-09-26T08:44:20","modified_gmt":"2023-09-26T15:44:20","slug":"attack-ai-systems-in-machine-learning-evasion-competition","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/07\/29\/attack-ai-systems-in-machine-learning-evasion-competition\/","title":{"rendered":"Attack AI systems in Machine Learning Evasion Competition"},"content":{"rendered":"

Today, we are launching MLSEC.IO, an educational Machine Learning Security Evasion Competition (MLSEC) where the AI and security communities can exercise their skills at attacking critical AI systems in a realistic setting. Hosted and sponsored by Microsoft, alongside NVIDIA, CUJO AI, VMRay, and MRG Effitas, the competition rewards participants who efficiently evade AI-based malware detectors and AI-based phishing detectors.

Machine learning powers critical applications in virtually every industry: finance, healthcare, infrastructure, and cybersecurity. Microsoft is seeing an uptick in attacks on commercial AI systems that could compromise the confidentiality, integrity, and availability guarantees of these systems. Publicly known cases documented in MITRE's ATLAS framework show that, as AI systems proliferate, so does the risk that the machine learning powering them can be manipulated to achieve an adversary's goals. While these risks are inherent in all deployed machine learning models, the threat is especially acute in cybersecurity, where machine learning models are increasingly relied on to detect threat actors' tools and behaviors. Market surveys have consistently indicated that the security and privacy of AI systems are top concerns for executives. According to CCS Insight's 2020 survey of 700 senior IT leaders, security is now the biggest hurdle companies face with AI, cited by over 30 percent of respondents¹.

However, many security practitioners do not know how to clear this new hurdle. A recent Microsoft survey found that 25 out of 28 organizations did not have the right tools in place to secure their AI systems. While academic researchers have been studying how to attack AI systems for close to two decades, awareness among practitioners remains low. That is why one recommendation for business leaders in the 2021 Gartner report Top 5 Priorities for Managing AI Risk Within Gartner's MOST Framework² is that organizations "Drive staff awareness across the organization by leading a formal AI risk education campaign."

It is critical to democratize the knowledge needed to secure AI systems. That is why Microsoft recently released Counterfit, a tool born out of our own need to assess Microsoft's AI systems for vulnerabilities, with the goal of proactively securing AI services. For those new to adversarial machine learning, NVIDIA released MINTNV, a hack-the-box-style environment for exploring the field and building skills.
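To make concrete what assessing a model for vulnerabilities can look like, below is a minimal sketch of a black-box evasion probe: it repeatedly mutates an input and keeps any change that lowers the detector's maliciousness score. This is an illustrative hill-climbing loop, not Counterfit's actual API; the `score` and `mutate` callables, the function name, and the parameter defaults are all placeholders the reader would supply.

```python
def hill_climb_evasion(sample: bytes, score, mutate, threshold=0.5, max_queries=500):
    """Black-box evasion sketch. 'score' maps bytes to a maliciousness
    probability and 'mutate' returns a functionality-preserving variant;
    both are assumed placeholders, not a real tool's API."""
    best, best_score = sample, score(sample)
    for _ in range(max_queries):
        candidate = mutate(best)
        candidate_score = score(candidate)
        if candidate_score < best_score:   # keep only mutations that help
            best, best_score = candidate, candidate_score
        if best_score < threshold:         # detector now labels it benign
            break
    return best, best_score
```

Real tools add smarter search strategies and query budgets, but the loop above captures the core idea: the attacker needs only the model's output scores, not its internals.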

Participate in MLSEC.IO

With the launch today of MLSEC.IO, we aim to highlight how security models can be evaded by motivated attackers and to let practitioners practice attacking critical machine learning systems used in cybersecurity.

"There is a lack of practical knowledge about securing or attacking AI systems in the security community. Competitions like Microsoft's MLSEC democratize adversarial machine learning knowledge for the offensive and defensive security communities, as well as the machine learning community. MLSEC's hands-on approach is an exciting entry point into AML."—Christopher Cottrell, AI Red Team Lead, NVIDIA

The competition involves two challenges, beginning on August 6 and ending on September 17, 2021: an Anti-Phishing Evasion track and an Anti-Malware Evasion track.

1. Anti-Phishing Evasion track: Machine learning is routinely used to detect phishing, a highly successful technique attackers use to gain initial access. In this track, contestants play the role of an attacker and attempt to evade a suite of anti-phishing models. The phishing detection models, custom built by CUJO AI, were created for this competition only.
2. Anti-Malware Evasion track: This challenge offers an alternative scenario for attackers wishing to bypass machine-learning-based antivirus: change an existing malicious binary in a way that disguises it from the antimalware model. A minimal sketch of one such transformation appears after this list.
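As one illustration of what "disguising" a binary can mean, the sketch below appends padding bytes to a PE file's overlay, a classic functionality-preserving transformation: the Windows loader ignores data past the declared sections, so the program usually still runs, yet the added bytes can shift the features a static detector sees. The file names here are hypothetical, and this trick alone is rarely enough against hardened models.

```python
from pathlib import Path

def append_overlay(pe_path: str, padding: bytes, out_path: str) -> None:
    """Append bytes to a PE file's overlay. The loader ignores data past
    the end of the mapped sections, so behavior is usually unchanged
    (installers that read their own overlay are a known exception)."""
    data = Path(pe_path).read_bytes()
    Path(out_path).write_bytes(data + padding)

# Hypothetical usage: pad a sample with bytes harvested from a benign binary.
# append_overlay("sample.exe", Path("benign_strings.bin").read_bytes(), "sample_padded.exe")
```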

In addition, for each of the Attacker Challenge tracks, the highest-scoring submission that extends and leverages Counterfit, Microsoft's open-source tool for investigating the security of machine learning models, will be awarded a bonus prize.

"The security evasion challenge creates new pathways into cybersecurity and opens up access for a broader base of talent. This year, to lower barriers to entry, we are introducing the phishing challenge, while still strongly encouraging people without significant experience in malware to participate."—Zoltan Balazs, Head of Vulnerability Research Lab at CUJO AI and cofounder of the competition

Key details about the competition