Finding a common language to describe AI security threats

Microsoft Security Blog, December 13, 2019
As artificial intelligence (AI) and machine learning systems become increasingly important to our lives, it's critical that we understand how and why they fail. Many research papers have been dedicated to this topic, but inconsistent vocabulary has limited their usefulness. In collaboration with Harvard University's Berkman Klein Center, Microsoft published a series of materials that define a common vocabulary for describing both intentional and unintentional failures.