How review works
When a potential content or conduct violation on our services is reported, a specially trained human reviewer may examine the content and conduct. Human reviewers consider the images, video, messages, and surrounding context to determine whether the reported content or conduct violates our terms, including our policies, Code of Conduct, and service-specific terms. When human reviewers need additional information, they may follow up with the reporter to ask questions, or they may seek assistance from subject matter experts.
We may rely on automated technology to identify and categorize violations without human review.
Examples include:
- Blocking known hateful words in a Gamertag.
- Detecting malware, viruses, spam, and phishing.
- Blocking terrorist imagery when it matches imagery previously identified as terrorist content.
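The automated checks above can be illustrated with a highly simplified sketch. This is not the production system: real services use fuzzy text matching, perceptual hashing, and machine-learning classifiers, and the word list, hash set, and function names below are hypothetical, chosen only to show the basic pattern of blocklist matching and hash matching against previously identified content.

```python
import hashlib

# Hypothetical blocklist and hash set for illustration only.
BLOCKED_WORDS = {"hateword1", "hateword2"}
KNOWN_BAD_HASHES = {hashlib.sha256(b"previously-identified-image").hexdigest()}

def gamertag_allowed(tag: str) -> bool:
    """Reject a Gamertag if it contains any blocked word (simple substring check)."""
    lowered = tag.lower()
    return not any(word in lowered for word in BLOCKED_WORDS)

def matches_known_content(image_bytes: bytes) -> bool:
    """Flag content whose hash matches material previously identified as violating."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

print(gamertag_allowed("FriendlyGamer42"))                    # allowed
print(gamertag_allowed("xXhateword1Xx"))                      # blocked
print(matches_known_content(b"previously-identified-image"))  # flagged
```

In practice, exact cryptographic hashes miss even slightly altered images, which is why real systems rely on perceptual hashing; the sketch only shows the exact-match case.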
Reviewer training
Our human reviewer teams are diverse and receive extensive training.
Content detection
We use technology to find harmful content and to review concerns reported by others.