{"id":88435,"date":"2019-02-07T10:00:46","date_gmt":"2019-02-07T18:00:46","guid":{"rendered":"https:\/\/cloudblogs.microsoft.com\/microsoftsecure\/?p=88435"},"modified":"2023-05-15T23:28:23","modified_gmt":"2023-05-16T06:28:23","slug":"securing-the-future-of-ai-and-machine-learning-at-microsoft","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2019\/02\/07\/securing-the-future-of-ai-and-machine-learning-at-microsoft\/","title":{"rendered":"Securing the future of AI and machine learning at Microsoft"},"content":{"rendered":"

Artificial intelligence (AI) and machine learning are making a big impact on how people work, socialize, and live their lives. As consumption of products and services built around AI and machine learning increases, specialized actions must be taken to safeguard not only your customers and their data, but also your AI and algorithms, from abuse, trolling, and extraction.

We are pleased to announce the release of a research paper, Securing the Future of Artificial Intelligence and Machine Learning at Microsoft, focused on net-new security engineering challenges in the AI and machine learning space, with an emphasis on protecting algorithms, data, and services. This content was developed in partnership with Microsoft's AI and Research group. It is referenced in The Future Computed: Artificial Intelligence and its role in society by Brad Smith and Harry Shum, and cited in Responsible bots: 10 guidelines for developers of conversational AI.

This document focuses entirely on security engineering issues unique to the AI and machine learning space, but because the InfoSec domain is so expansive, the issues and findings discussed here will inevitably overlap to a degree with the domains of privacy and ethics. Because the document highlights challenges of strategic importance to the tech industry, its target audience is security engineering leadership industry-wide.

Our early findings suggest that:

1. Secure development and operations foundations must incorporate the concepts of Resilience and Discretion when protecting AI and the data under its control (one way to picture these two concepts is sketched below).
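To make the Resilience and Discretion ideas concrete, here is a minimal sketch of how a prediction API might apply them: coarsening what the model reveals per query (discretion, which raises the cost of model extraction) and rate-limiting callers (resilience against bulk probing). This assumes a scikit-learn-style classifier exposing predict_proba; the GuardedModel wrapper, its parameter names, and the specific thresholds are illustrative assumptions, not controls prescribed by the paper.

```python
import time
from collections import defaultdict, deque

import numpy as np


class GuardedModel:
    """Wraps a trained classifier so its public API reveals less signal.

    Two illustrative mitigations:
      * discretion - return only the top label with a coarsened confidence,
        rather than the full probability vector an extractor could exploit;
      * resilience - per-caller rate limiting, so bulk query campaigns
        (a common extraction pattern) are throttled rather than served.
    """

    def __init__(self, model, max_queries_per_minute=60, confidence_step=0.1):
        self.model = model  # any object exposing predict_proba(X)
        self.max_queries = max_queries_per_minute
        self.step = confidence_step
        self.history = defaultdict(deque)  # caller_id -> recent query times

    def _allow(self, caller_id):
        """Sliding one-minute window; reject callers over the query budget."""
        now = time.monotonic()
        window = self.history[caller_id]
        while window and now - window[0] > 60.0:
            window.popleft()
        if len(window) >= self.max_queries:
            return False
        window.append(now)
        return True

    def predict(self, caller_id, x):
        if not self._allow(caller_id):
            raise RuntimeError("rate limit exceeded for caller %s" % caller_id)
        probs = self.model.predict_proba(np.asarray([x]))[0]
        label = int(np.argmax(probs))
        # Round the confidence onto a coarse grid so repeated probing leaks
        # less information about the model's decision boundary.
        coarse = round(float(probs[label]) / self.step) * self.step
        return label, coarse
```

In use, `GuardedModel(fitted_clf).predict("tenant-42", feature_vector)` returns only a label and a confidence rounded to the nearest 0.1, instead of the raw probability vector. The design choice is deliberate: both mitigations degrade the attacker's signal without changing the underlying model, which is why they fit at the development-and-operations layer the finding above calls out.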