How we use technology to detect harmful content
We use a multi-layered approach to protect our users from harmful content and conduct.
We deploy the hash-matching technologies PhotoDNA and MD5 on photo and video content shared through Microsoft hosted consumer services, and on content uploaded for visual image searches of the internet, to detect and stop the spread of known illegal and harmful image content. A hashing function transforms an image into a series of numbers (a "hash") that can be easily compared, stored, and processed. These hashes are not reversible, meaning they cannot be used to recreate the original images. Where required, we rely on the derogation permitted by European Union Regulation (EU) 2021/1232 for the use of these hash-matching technologies in services governed by EU Directive 2002/58/EC.
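To illustrate the matching step only: PhotoDNA is a proprietary perceptual-hashing technology and is not shown here, but cryptographic hash matching with MD5 can be sketched with the Python standard library. The blocklist below is a hypothetical example, not a real database of harmful-content hashes.

```python
# Minimal sketch of hash-based matching, assuming a hypothetical
# blocklist of known-bad MD5 digests. Real deployments match against
# curated databases of hashes of known illegal imagery.
import hashlib

# Hypothetical blocklist: hex MD5 digests of known-bad files.
KNOWN_BAD_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # example digest only
}

def md5_digest(data: bytes) -> str:
    """Return the hex MD5 digest of the raw file bytes."""
    return hashlib.md5(data).hexdigest()

def is_known_bad(data: bytes) -> bool:
    """True if the file's digest appears in the blocklist."""
    return md5_digest(data) in KNOWN_BAD_HASHES

print(is_known_bad(b"hello"))  # True: this digest is in the example set
print(is_known_bad(b"other"))  # False: digest not in the blocklist
```

Note the limitation this sketch makes visible: an exact-match digest like MD5 changes completely if a single byte of the file changes, which is why perceptual hashes such as PhotoDNA, designed to match visually similar images, are used alongside it.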
We also use machine-learning technologies such as text-based classifiers, image classifiers, and grooming detection techniques to discover content or conduct shared through Microsoft hosted consumer services that may be illegal or violate our policies. Finally, we rely on reports from users, governments, and trusted flaggers to bring potential policy violations to our attention. These techniques are tailored to the features and services on which they are deployed.
We find it. Others find it. You find it.

Content review: Human reviewers consider images, video, messages, and context.

Policies: Microsoft content and conduct policies explain what is not allowed on our services.