{"id":93287,"date":"2021-04-08T09:00:56","date_gmt":"2021-04-08T16:00:56","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/\/?p=93287"},"modified":"2023-05-26T14:33:25","modified_gmt":"2023-05-26T21:33:25","slug":"gamifying-machine-learning-for-stronger-security-and-ai-models","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/","title":{"rendered":"Gamifying machine learning for stronger security and AI models"},"content":{"rendered":"

To stay ahead of adversaries, who show no restraint in adopting tools and techniques that can help them attain their goals, Microsoft continues to harness AI and machine learning to solve security challenges. One area we've been experimenting in is autonomous systems. In a simulated enterprise network, we examine how autonomous agents, which are intelligent systems that independently carry out a set of operations using certain knowledge or parameters, interact within the environment, and we study how reinforcement learning techniques can be applied to improve security.

Today, we'd like to share some results from these experiments. We are open sourcing the Python source code of a research toolkit we call CyberBattleSim, an experimental research project that investigates how autonomous agents operate in a simulated enterprise environment using high-level abstractions of computer networks and cybersecurity concepts. The toolkit uses the Python-based OpenAI Gym interface to allow training of automated agents using reinforcement learning algorithms. The code is available at https://github.com/microsoft/CyberBattleSim.

CyberBattleSim provides a way to build a highly abstract simulation of the complexity of computer systems, making it possible to frame cybersecurity challenges in the context of reinforcement learning. By sharing this research toolkit broadly, we encourage the community to build on our work, investigate how cyber-agents interact and evolve in simulated environments, and research how high-level abstractions of cybersecurity concepts help us understand how cyber-agents would behave in actual enterprise networks.

This research is part of efforts across Microsoft to leverage machine learning and AI to continuously improve security and automate more work for defenders. A recent study commissioned by Microsoft found that almost three-quarters of organizations say their teams spend too much time on tasks that should be automated. We hope this toolkit inspires more research into how autonomous systems and reinforcement learning can be harnessed to build resilient real-world threat detection technologies and robust cyber-defense strategies.

Applying reinforcement learning to security

Reinforcement learning is a type of machine learning in which autonomous agents learn to make decisions by interacting with their environment. Agents execute actions that affect the environment, and their goal is to optimize some notion of reward. One popular and successful application is found in video games, where an environment is readily available: the computer program implementing the game. The player of the game is the agent, the commands it issues are the actions, and the ultimate reward is winning the game. The best reinforcement learning algorithms can learn effective strategies through repeated experience, gradually learning what actions to take in each state of the environment. The more the agents play the game, the smarter they get at it. Recent advances in the field have shown that we can successfully train autonomous agents that exceed human level at playing video games.

Last year, we started exploring applications of reinforcement learning to software security. To do this, we framed software security problems in the context of reinforcement learning: an attacker or a defender can be viewed as an agent evolving in an environment provided by the computer network. Their actions are the available network and computer commands. The attacker's goal is usually to steal confidential information from the network. The defender's goal is to evict the attackers or mitigate their actions on the system by executing other kinds of operations.

\"\"<\/p>\n

Figure 1. Mapping reinforcement learning concepts to security

In this project, we used OpenAI Gym, a popular toolkit that provides interactive environments for reinforcement learning researchers to develop, train, and evaluate new algorithms for training autonomous agents. Notable examples of environments built with this toolkit include video games, robotics simulators, and control systems.
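For readers new to the Gym interface, the sketch below shows the canonical agent-environment loop it standardizes, using the classic Gym API that was current when this post was written and CartPole as a stand-in environment; a trained agent would replace the random action sampling.

```python
# Minimal agent-environment loop using the classic OpenAI Gym API
# (obs = env.reset(); obs, reward, done, info = env.step(action)).
import gym

env = gym.make("CartPole-v1")   # any registered environment works here
observation = env.reset()

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # placeholder for a learned policy
    observation, reward, done, info = env.step(action)
    total_reward += reward

print(f"Episode finished with cumulative reward {total_reward}")
env.close()
```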

Computer and network systems, of course, are significantly more complex than video games. While a video game typically offers a handful of permitted actions at a time, a vast array of actions is available when interacting with a computer and network system. Moreover, the state of the network system can be gigantic and not readily or reliably retrievable, as opposed to the finite list of positions on a board game. Even with these challenges, however, OpenAI Gym provided a good framework for our research, leading to the development of CyberBattleSim.

How CyberBattleSim works

CyberBattleSim focuses on threat modeling the post-breach lateral movement stage of a cyberattack. The environment consists of a network of computer nodes. It is parameterized by a fixed network topology and a set of predefined vulnerabilities that an agent can exploit to move laterally through the network. The simulated attacker's goal is to take ownership of some portion of the network by exploiting these planted vulnerabilities. While the simulated attacker moves through the network, a defender agent watches the network activity to detect the presence of the attacker and contain the attack.

To illustrate, the graph below depicts a toy example of a network with machines running various operating systems and software. Each machine has a set of properties, a value, and pre-assigned vulnerabilities. Black edges represent traffic running between nodes and are labeled by the communication protocol.

\"\"<\/p>\n

Figure 2. Visual representation of lateral movement in a computer network simulation

Suppose the agent represents the attacker. The post-breach assumption means that one node is initially infected with the attacker's code (we say that the attacker owns the node). The simulated attacker's goal is to maximize the cumulative reward by discovering and taking ownership of nodes in the network. The environment is partially observable: the agent does not get to see all the nodes and edges of the network graph in advance. Instead, the attacker takes actions to gradually explore the network from the nodes it currently owns. There are three kinds of actions, offering a mix of exploitation and exploration capabilities to the agent: performing a local attack, performing a remote attack, and connecting to other nodes. Actions are parameterized by the source node where the underlying operation should take place, and they are only permitted on nodes owned by the agent. The reward is a float that represents the intrinsic value of a node (for example, a SQL server has greater value than a test machine).
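To make the three action kinds concrete, here is a rough sketch of what such parameterized actions could look like as Python values; the dictionary layout and identifiers are illustrative assumptions based on the description above, not a verbatim excerpt of the toolkit's API.

```python
# Illustrative only: field names and identifiers are assumptions based on
# the description above, not the toolkit's exact API. Every action names
# the owned source node on which the underlying operation runs.
source_node, target_node, credential_id = 0, 3, "cred-0"

local_attack  = {"local_vulnerability":  (source_node, "CachedCredentialDump")}
remote_attack = {"remote_vulnerability": (source_node, target_node, "SmbExploit")}
connect       = {"connect":              (source_node, target_node, "SMB", credential_id)}
```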

In the depicted example, the simulated attacker breaches the network from a simulated Windows 7 node (on the left, pointed to by an orange arrow). It proceeds with lateral movement to a Windows 8 node by exploiting a vulnerability in the SMB file-sharing protocol, then uses cached credentials to sign into another Windows 7 machine. It then exploits an IIS remote vulnerability to own the IIS server, and finally uses leaked connection strings to reach the SQL database.

This environment simulates a heterogeneous computer network supporting multiple platforms and helps to show how using the latest operating systems and keeping these systems up to date enable organizations to take advantage of the latest hardening and protection technologies in platforms like Windows 10. The simulation Gym environment is parameterized by the definition of the network layout, the list of supported vulnerabilities, and the nodes where they are planted. The simulation does not support machine code execution, so no security exploit actually takes place in it. Instead, we model each vulnerability abstractly by a precondition determining the nodes where it is active, a probability of successful exploitation, and a high-level definition of its outcome and side effects. Nodes have preassigned named properties, over which the precondition is expressed as a Boolean formula.
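As a minimal, self-contained sketch of this abstraction (mirroring the description above rather than the toolkit's actual classes), a vulnerability record could be represented like this:

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    """Abstract vulnerability model: no exploit code, just its contract."""
    precondition: str   # Boolean formula over node properties, e.g. "Windows & SMBv1"
    success_rate: float # probability that exploitation succeeds
    outcome: str        # high-level result, e.g. "LeakedCredentials"

# Example: an SMB flaw active only on Windows nodes still running SMBv1.
smb_flaw = Vulnerability(precondition="Windows & SMBv1",
                         success_rate=0.8,
                         outcome="LeakedCredentials")
```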

Vulnerability outcomes

The predefined outcomes include: leaked credentials, leaked references to other computer nodes, leaked node properties, ownership of a node, and privilege escalation on the node. Examples of remote vulnerabilities include: a SharePoint site exposing ssh credentials, an ssh vulnerability that grants access to the machine, a GitHub project leaking credentials in commit history, and a SharePoint site with a file containing a SAS token to a storage account. Examples of local vulnerabilities include: extracting an authentication token or credentials from a system cache, escalating to SYSTEM privileges, and escalating to administrator privileges. Vulnerabilities can either be defined in place at the node level or defined globally and activated by the precondition Boolean expression.
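The hedged sketch below, written in the style of the toy examples shipped with the repository, defines a node whose remote vulnerability leaks ssh credentials; the class and field names (NodeInfo, VulnerabilityInfo, LeakedCredentials, and so on) are assumptions based on our reading of the public source and may differ in current versions.

```python
# Hedged sketch in the style of the repository's toy examples; names are
# assumptions from the public source and may have since changed.
from cyberbattle.simulation import model as m

sharepoint = m.NodeInfo(
    services=[m.ListeningService("HTTPS")],
    value=100,
    properties=["SharePoint", "Windows"],
    vulnerabilities=dict(
        ScanSharepointSite=m.VulnerabilityInfo(
            description="SharePoint site leaking ssh credentials",
            type=m.VulnerabilityType.REMOTE,
            outcome=m.LeakedCredentials(credentials=[
                m.CachedCredential(node="GitServer", port="SSH",
                                   credential="sshCreds")]))))
```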

Benchmark: Measuring progress

We provide a basic stochastic defender that detects and mitigates ongoing attacks based on predefined probabilities of success. We implement mitigation by reimaging the infected nodes, a process abstractly modeled as an operation spanning multiple simulation steps. To compare the performance of the agents, we look at two metrics: the number of simulation steps taken to attain their goal, and the cumulative rewards over simulation steps across training epochs.
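Both metrics fall out of a plain evaluation loop such as the hedged sketch below; make_env and agent are placeholders for any Gym-compatible environment factory and policy, and the done flag is assumed to signal that the attacker goal was reached or the episode was cut off.

```python
# Hedged sketch: track the two benchmark metrics over repeated episodes.
# `make_env` and `agent` are placeholders, not names from the toolkit.
def evaluate(make_env, agent, episodes=50, max_steps=2000):
    steps_to_goal, cumulative_rewards = [], []
    for _ in range(episodes):
        env, total = make_env(), 0.0
        obs = env.reset()
        for step in range(1, max_steps + 1):
            obs, reward, done, _ = env.step(agent.act(obs))
            total += reward
            if done:                    # goal reached (or episode cut off)
                break
        steps_to_goal.append(step)      # metric 1: steps taken
        cumulative_rewards.append(total)  # metric 2: cumulative reward
    return steps_to_goal, cumulative_rewards
```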

Modeling security problems

The parameterizable nature of the Gym environment allows modeling of various security problems. For instance, the snippet of code below is inspired by a capture the flag challenge where the attacker's goal is to take ownership of valuable nodes and resources in a network:

\"\"<\/p>\n

Figure 3. Code describing an instance of a simulation environment

We provide a Jupyter notebook to interactively play the attacker in this example:

\"\"<\/p>\n

Figure 4. Playing the simulation interactively

With the Gym interface, we can easily instantiate automated agents and observe how they evolve in such environments. The screenshot below shows the outcome of running a random agent on this simulation, that is, an agent that randomly selects which action to perform at each step of the simulation.
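A random agent needs only a few lines against the Gym interface. In the hedged sketch below, the environment id and the registering import are assumptions based on the repository; sampled actions that turn out to be invalid (blocked traffic, wrong credentials) simply fail, as described below.

```python
# Hedged sketch of a random agent on the CTF simulation.
import gym
import cyberbattle  # assumed to register the CyberBattle* Gym environments

env = gym.make("CyberBattleToyCtf-v0")  # environment id is an assumption
obs = env.reset()
for step in range(500):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        break
```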

\"\"<\/p>\n

Figure 5. A random agent interacting with the simulation

The plot above, produced in the Jupyter notebook, shows how the cumulative reward grows over the simulation epochs (left), alongside the explored network graph with infected nodes marked in red (right). It took about 500 agent steps to reach this state in this run. Logs reveal that many attempted actions failed, some because traffic was blocked by firewall rules, some because incorrect credentials were used. In the real world, such erratic behavior should quickly trigger alarms, and a defensive XDR system like Microsoft 365 Defender and a SIEM/SOAR system like Azure Sentinel would swiftly respond and evict the malicious actor.

This toy example admits an optimal strategy with which the attacker can take full ownership of the network in only about 20 actions. It takes a human player about 50 operations on average to win this game on the first attempt. Because the network is static, after playing it repeatedly a human can remember the right sequence of rewarding actions and quickly determine the optimal solution.

For benchmarking purposes, we created a simple toy environment of variable sizes and tried various reinforcement learning algorithms. The following plot summarizes the results, where the Y-axis is the number of actions taken to take full ownership of the network (lower is better) over multiple repeated episodes (X-axis). Note how certain algorithms such as Q-learning can gradually improve and reach human level, while others are still struggling after 50 episodes!
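For reference, the Q-learning baseline follows the textbook tabular update rule; the sketch below is a generic implementation with illustrative hyperparameters, not the exact code used for this benchmark.

```python
# Minimal tabular Q-learning: Q[s,a] += alpha * (r + gamma * max_a' Q[s',a'] - Q[s,a]).
import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def q_update(state, action, reward, next_state, actions):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def policy(state, actions):
    if random.random() < epsilon:      # explore with probability epsilon
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])  # otherwise exploit
```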

\"\"<\/p>\n

Figure 6. Number of iterations along epochs for agents trained with various reinforcement learning algorithms

The cumulative reward plot offers another way to compare agents; here, the agent gets rewarded each time it infects a node. Dark lines show the median, while the shadows represent one standard deviation. This again shows how certain agents (red, blue, and green) perform distinctly better than others (orange).

\"\"<\/p>\n

Figure 7. Cumulative reward plot for various reinforcement learning algorithms

Generalizing

Learning to perform well in a fixed environment is not that useful if the learned strategy does not fare well in other environments; we want the strategy to generalize. Having a partially observable environment prevents overfitting to some global aspects or dimensions of the network, but it does not prevent an agent from learning non-generalizable strategies, such as memorizing a fixed sequence of actions to take in order. To better evaluate generalization, we considered a set of environments of various sizes but with a common network structure: we train an agent in one environment of a certain size and evaluate it on larger or smaller ones. This also gives an idea of how the agent would fare in an environment that dynamically grows or shrinks while preserving the same structure.

To perform well, agents now must learn from observations that are not specific to the instance they are interacting with. They cannot simply remember node indices or any other value tied to the network size. Instead, they can observe temporal features or machine properties; for instance, they can choose the best operation to execute based on which software is present on the machine. The two cumulative reward plots below illustrate how one such agent, previously trained on an instance of size 4, can perform very well on a larger instance of size 10 (left), and vice versa (right).
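One simple way to obtain such size-invariant observations is to encode each discovered node by its properties rather than by its index; the sketch below is illustrative, with a made-up property vocabulary.

```python
# Illustrative: encode nodes by machine properties, not indices, so the
# learned policy transfers across network sizes. The vocabulary is made up.
KNOWN_PROPERTIES = ["Windows", "Linux", "SQLServer", "IIS", "SharePoint"]

def featurize(node_properties: list) -> list:
    """Fixed-length, size-invariant encoding of a discovered node."""
    return [int(p in node_properties) for p in KNOWN_PROPERTIES]

featurize(["Windows", "IIS"])  # -> [1, 0, 0, 1, 0], regardless of network size
```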

\"\" \"\"<\/p>\n

Figure 8. Cumulative reward function for an agent pre-trained on a different environment

An invitation to continue exploring the applications of reinforcement learning to security

By abstracting away some of the complexity of computer systems, it's possible to formulate cybersecurity problems as instances of a reinforcement learning problem. With the OpenAI Gym toolkit, we could build highly abstract simulations of complex computer systems and easily evaluate state-of-the-art reinforcement learning algorithms to study how autonomous agents interact with and learn from them.

A potential area for improvement is the realism of the simulation. The simulation in CyberBattleSim is simplistic, which has advantages: its highly abstract nature prevents direct application to real-world systems, providing a safeguard against potentially nefarious use of automated agents trained with it. It also allows us to focus on the specific aspects of security we aim to study and to quickly experiment with recent machine learning and AI algorithms. We currently focus on lateral movement techniques, with the goal of understanding how network topology and configuration affect them. With that goal in mind, we felt that modeling actual network traffic was not necessary, but these are significant limitations that future contributions can look to address.

On the algorithmic side, we currently provide only some basic agents as a baseline for comparison, and we would be curious to find out how state-of-the-art reinforcement learning algorithms compare to them. We found that the large action space intrinsic to any computer system is a particular challenge for reinforcement learning, in contrast to other applications such as video games or robot control. Training agents that can store and retrieve credentials is another challenge, since reinforcement learning agents typically do not feature internal memory. These are further areas of research where the simulation could be used for benchmarking.

The code we are releasing today can also be turned into an online Kaggle- or AICrowd-like competition and used to benchmark the performance of the latest reinforcement learning algorithms on parameterizable environments with large action spaces. Other areas of interest include the responsible and ethical use of autonomous cybersecurity systems. How does one design an enterprise network that gives an intrinsic advantage to defender agents? How does one conduct safe research aimed at defending enterprises against autonomous cyberattacks while preventing nefarious use of such technology?

With CyberBattleSim, we are just scratching the surface of what we believe is a huge potential for applying reinforcement learning to security. We invite researchers and data scientists to build on our experimentation. We're excited to see this work expand and inspire new and innovative ways to approach security problems.


William Blum

Microsoft 365 Defender Research Team

 <\/p>\n","protected":false},"excerpt":{"rendered":"

We are open sourcing the Python source code of a research toolkit we call CyberBattleSim, an experimental research project that investigates how autonomous agents operate in a simulated enterprise environment using high-level abstraction of computer networks and cybersecurity concepts.<\/p>\n","protected":false},"author":68,"featured_media":93297,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"content-type":[3663],"topic":[3687],"products":[],"threat-intelligence":[3727],"tags":[],"coauthors":[3380],"class_list":["post-93287","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","content-type-research","topic-threat-intelligence","threat-intelligence-attacker-techniques-tools-and-infrastructure"],"yoast_head":"\nGamifying machine learning for stronger security and AI models | Microsoft Security Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Gamifying machine learning for stronger security and AI models | Microsoft Security Blog\" \/>\n<meta property=\"og:description\" content=\"We are open sourcing the Python source code of a research toolkit we call CyberBattleSim, an experimental research project that investigates how autonomous agents operate in a simulated enterprise environment using high-level abstraction of computer networks and cybersecurity concepts.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/\" \/>\n<meta property=\"og:site_name\" content=\"Microsoft Security Blog\" \/>\n<meta property=\"article:published_time\" content=\"2021-04-08T16:00:56+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-05-26T21:33:25+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim-social.png\" \/>\n\t<meta property=\"og:image:width\" content=\"600\" \/>\n\t<meta property=\"og:image:height\" content=\"300\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Microsoft Threat Intelligence\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim-social.png\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Microsoft Threat Intelligence\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/\"},\"author\":[{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/author\/microsoft-security-threat-intelligence\/\",\"@type\":\"Person\",\"@name\":\"Microsoft Threat Intelligence\"}],\"headline\":\"Gamifying machine learning for stronger security and AI models\",\"datePublished\":\"2021-04-08T16:00:56+00:00\",\"dateModified\":\"2023-05-26T21:33:25+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/\"},\"wordCount\":2540,\"publisher\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim.jpg\",\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/\",\"url\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/\",\"name\":\"Gamifying machine learning for stronger security and AI models | Microsoft Security 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim.jpg\",\"datePublished\":\"2021-04-08T16:00:56+00:00\",\"dateModified\":\"2023-05-26T21:33:25+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#primaryimage\",\"url\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim.jpg\",\"contentUrl\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim.jpg\",\"width\":1200,\"height\":800},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Gamifying machine learning for stronger security and AI models\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#website\",\"url\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/\",\"name\":\"Microsoft Security Blog\",\"description\":\"Expert coverage of cybersecurity topics\",\"publisher\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#organization\",\"name\":\"Microsoft Security Blog\",\"url\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2018\/08\/cropped-cropped-microsoft_logo_element.png\",\"contentUrl\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2018\/08\/cropped-cropped-microsoft_logo_element.png\",\"width\":512,\"height\":512,\"caption\":\"Microsoft Security Blog\"},\"image\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#\/schema\/logo\/image\/\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Gamifying machine learning for stronger security and AI models | Microsoft Security Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/","og_locale":"en_US","og_type":"article","og_title":"Gamifying machine learning for stronger security and AI models | Microsoft Security Blog","og_description":"We are open sourcing the Python source code of a research toolkit we call CyberBattleSim, an experimental research project that investigates how autonomous agents operate in a simulated enterprise environment using high-level abstraction of computer networks and cybersecurity concepts.","og_url":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/","og_site_name":"Microsoft Security Blog","article_published_time":"2021-04-08T16:00:56+00:00","article_modified_time":"2023-05-26T21:33:25+00:00","og_image":[{"width":600,"height":300,"url":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim-social.png","type":"image\/png"}],"author":"Microsoft Threat Intelligence","twitter_card":"summary_large_image","twitter_image":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim-social.png","twitter_misc":{"Written by":"Microsoft Threat Intelligence","Est. reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#article","isPartOf":{"@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/"},"author":[{"@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/author\/microsoft-security-threat-intelligence\/","@type":"Person","@name":"Microsoft Threat Intelligence"}],"headline":"Gamifying machine learning for stronger security and AI models","datePublished":"2021-04-08T16:00:56+00:00","dateModified":"2023-05-26T21:33:25+00:00","mainEntityOfPage":{"@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/"},"wordCount":2540,"publisher":{"@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#organization"},"image":{"@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#primaryimage"},"thumbnailUrl":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim.jpg","inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/","url":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/","name":"Gamifying machine learning for stronger security and AI models | Microsoft Security 
Blog","isPartOf":{"@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#primaryimage"},"image":{"@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#primaryimage"},"thumbnailUrl":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim.jpg","datePublished":"2021-04-08T16:00:56+00:00","dateModified":"2023-05-26T21:33:25+00:00","breadcrumb":{"@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#primaryimage","url":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim.jpg","contentUrl":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2021\/04\/CyberBattleSim.jpg","width":1200,"height":800},{"@type":"BreadcrumbList","@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2021\/04\/08\/gamifying-machine-learning-for-stronger-security-and-ai-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/"},{"@type":"ListItem","position":2,"name":"Gamifying machine learning for stronger security and AI models"}]},{"@type":"WebSite","@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#website","url":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/","name":"Microsoft Security Blog","description":"Expert coverage of cybersecurity topics","publisher":{"@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#organization","name":"Microsoft Security Blog","url":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2018\/08\/cropped-cropped-microsoft_logo_element.png","contentUrl":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2018\/08\/cropped-cropped-microsoft_logo_element.png","width":512,"height":512,"caption":"Microsoft Security Blog"},"image":{"@id":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/#\/schema\/logo\/image\/"}}]}},"msxcm_display_generated_audio":false,"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"Microsoft Security 
Blog","distributor_original_site_url":"https:\/\/www.microsoft.com\/en-us\/security\/blog","push-errors":false,"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/posts\/93287"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/users\/68"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/comments?post=93287"}],"version-history":[{"count":0,"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/posts\/93287\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/media\/93297"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/media?parent=93287"}],"wp:term":[{"taxonomy":"content-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/content-type?post=93287"},{"taxonomy":"topic","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/topic?post=93287"},{"taxonomy":"products","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/products?post=93287"},{"taxonomy":"threat-intelligence","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/threat-intelligence?post=93287"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/tags?post=93287"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-json\/wp\/v2\/coauthors?post=93287"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}