Overview
Microsoft’s consumer online products, websites, and services have rules about what types of content and conduct are not allowed. The Microsoft Services Agreement has a Code of Conduct that explains what is not allowed and what to expect when accessing services like Xbox and Teams. Prohibited content and conduct are defined below. When reviewing content and conduct that may violate these policies, we carefully consider values such as privacy, freedom of speech, and access to information.
Additional guidelines
Additional policies and community standards that apply to specific services are available.
Moderation and enforcement
If you break these rules, we may take a range of enforcement actions against your content or account, and these actions may differ by service.
Abuse of our Platform and Services
Do not misuse any of Microsoft's services. Do not use any Microsoft service to harm, degrade, or negatively affect the operations of our or others’ networks, services, or any other infrastructure.
Examples of violative conduct include:
- Gaining or attempting to gain unauthorized access to any secure systems such as accounts, computer systems, networks or any other services or infrastructure.
- Deploying or attempting to deploy software or code of any kind on unauthorized systems that may negatively affect the operations of our or other networks, services, or any other infrastructure.
- Disrupting or attempting to disrupt Microsoft’s or others’ services or any other systems through any activities including but not limited to denial-of-service attacks.
- Bypassing or attempting to bypass restrictions on access to, usage of, or availability of the Services (e.g., attempting to "jailbreak" an AI system or scraping without authorization). This includes attempts to subvert enforcement actions placed on your account.
Bullying and Harassment
Microsoft seeks to create a safe and inclusive environment where you can engage with others and express yourself free from abuse. We do not allow content or conduct that targets a person or group with abusive behavior. This includes any action that:
- Harasses, intimidates, or threatens others.
- Hurts people by insulting or belittling them.
- Persists in unwelcome contact or interaction, especially where it causes others to fear injury.
Child Sexual Exploitation and Abuse
Microsoft is committed to protecting children from online harm. We do not allow the exploitation of, harm, or threat of harm to children on our services. This includes banning the use of our services to further child sexual exploitation and abuse (CSEA). When we become aware of content containing child sexual exploitation and abuse, Microsoft reports the content to the National Center for Missing and Exploited Children (NCMEC).
CSEA is any content or activity that harms or threatens to harm a child through exploitation, trafficking, extortion, endangerment, or sexualization. This includes sharing visual media that contains sexual content involving or sexualizing a child. CSEA also includes grooming: inappropriate interaction with a child, such as contacting or privately messaging a child to ask for or offer sex or sexual content, sharing sexually suggestive content, or planning to meet a child for a sexual encounter. A child is anyone under 18 years old.
Coordination of Harm
Microsoft cares about your physical well-being. Our products and services should never be used to hurt people, including by working with others to cause physical harm. Cooperating with others or making specific plans with the shared purpose of physically harming someone is not allowed.
Deceptive Generative AI Election Content
Microsoft seeks to foster a trustworthy information environment where voters are empowered with the information they need to vote for the candidates and issues of their choosing. Microsoft prohibits the creation or dissemination of deceptive generative AI election content. This includes AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates.
Exposure of Personal Information
Do not use Microsoft products and services to share personal or confidential information about a person without authorization.
Examples of prohibited activities include sharing:
- Personal data, such as location, that may endanger another person.
- Account usernames, passwords, or other credentials used for account authentication.
- Government-issued identifiers such as Social Security numbers or passport numbers.
- Private financial information including bank account numbers and credit card numbers, or any other information which facilitates fraudulent transactions or identity theft.
- Health information including healthcare records.
- Confidential employment records.
Graphic Violence and Human Gore
Real-world violent content can be disturbing, offensive, or even traumatic for users. We also understand some violent or graphic images may be newsworthy or important for educational or research purposes, and we consider these factors when reviewing content and enforcing our policies.
We do not permit any visual content that promotes real-world violence or human gore.
This may include images or videos that show:
- Real acts of serious physical harm or death against a person or group.
- Violent domestic abuse against a real person or people.
- Severe injuries or physical trauma, such as exposed internal organs or tissue, the burnt remains of a person, severed limbs, or beheadings.
Hate Speech
Microsoft wants to create online spaces where everyone can participate and feel welcome.
We do not allow hateful content that attacks, insults, or degrades someone because of a protected trait, such as their race, ethnicity, gender, gender identity, sexual orientation, religion, national origin, age, disability status, or caste.
Hate speech includes:
- Promoting harmful stereotypes about people because of a protected trait.
- Dehumanizing statements, such as comparing someone to an animal or other non-human, because of a protected trait.
- Encouraging or supporting violence against someone because of a protected trait.
- Calling for segregation, exclusion, or intimidation of people because of a protected trait.
- Symbols, logos, or other images that are recognized as communicating hatred or racial superiority.
Intellectual Property Infringement
Microsoft respects the intellectual property rights of others, and we expect you to do the same. To the extent certain Microsoft features allow for the creation or upload of user-generated content, Microsoft does not allow posting, sharing, or sending any content that violates or infringes someone else’s copyrights, trademarks, or other intellectual property rights.
Do not use Microsoft's products and services to violate third-party copyrights, trademarks, or other intellectual property rights.
Non-Consensual Intimate Imagery and Intimate Extortion
Microsoft does not allow the sharing or creation of sexually intimate images of someone without their permission—also called non-consensual intimate imagery, or NCII. This includes photorealistic NCII content that was created or altered using technology. We do not allow NCII to be distributed on our services, nor do we allow any content that praises, supports, or requests NCII.
Additionally, Microsoft does not allow any threats to share or publish NCII—also called intimate extortion. This includes demanding money, images, or other things of value from a person in exchange for not making the NCII public.
Sexual Solicitation
Microsoft does not allow people to use its products and services to ask for or offer sex, sexual services, or sexual content in exchange for money or something else of value.
Spam, Fraud, Scams, Phishing
Microsoft does not tolerate any form of spam, fraud, phishing, scams, or deceptive practices, including impersonation, on our platforms or services.
Spam is any content that is excessively posted, repetitive, untargeted, unwanted, or unsolicited.
The following are some examples of spam practices that are prohibited on our platforms or services:
- Sending unsolicited messages to users or posting comments that are commercial, repetitive, or deceptive.
- Using titles, thumbnails, descriptions, or tags to mislead users into believing the content is about a different topic or category than it is.
- Sending unwanted or unsolicited bulk email, postings, contact requests, SMS messages, instant messages, or similar electronic communications.
- Using deceptive or abusive tactics to attempt to deceive or manipulate ranking or other algorithmic systems, including link spamming, social media schemes, cloaking, or keyword stuffing.
Fraud, Scams, and Phishing are intentional acts or omissions designed to deceive others and generate personal or financial benefit to the detriment of others. Phishing additionally includes sending emails or other electronic communications to fraudulently or unlawfully induce recipients to reveal personal or sensitive information.
Examples of Fraud, Scams, and Phishing include content that:
- Promises viewers a legitimate or relevant offer but instead redirects them off-site to something different.
- Offers cash gifts, “get rich quick” schemes, pyramid schemes, or other fraudulent or illegal activities.
- Sells engagement metrics such as views, likes, comments, or any other metric on the platform.
- Uses false or misleading header information or deceptive subject lines.
- Fails to provide a valid physical postal address of the sender or a clear and conspicuous way to opt out of receiving future emails.
- Attempts to deceive users or audiences into visiting websites intended to spread malware or spyware.
- Includes fake login screens or alert emails used to trick and steal personal information or account login details.
Suicide and Self-Injury
We work to remove any content about suicide and self-harm that could be dangerous. We also know that people may use our services to talk about mental health, share their stories, or join groups with others who have been affected by suicide or self-injury.
Dangerous content that we remove includes things like:
- Promoting ways people can end their lives, such as with a gun, hanging, or a drug overdose.
- Encouraging someone to take their life.
- Showing images of real or attempted suicide.
- Praising those who have died by suicide for taking their own life.
Self-injury content demonstrates, praises, or inspires physical harm to oneself, including through cutting, burning, or carving one’s skin. It also includes content that encourages or instructs on eating disorders, or systematic over- or under-eating.
Terrorism and Violent Extremism
At Microsoft, we recognize that we have an important role to play in preventing terrorists and violent extremists from abusing online platforms. We do not allow content that praises or supports terrorists or violent extremists, helps them to recruit, or encourages or enables their activities. We look to the United Nations Security Council’s Consolidated List to identify terrorists or terrorist groups. Violent extremists include people who embrace an ideology of violence or violent hatred towards another group.
In addressing terrorist and violent extremist content, we also work to ensure that people can use our services to talk about terrorism or violent extremism, share news or research about it, or express opposition to it.
Trafficking
Our services should never be used to exploit people, endanger them, or otherwise threaten their physical safety.
Microsoft does not allow any kind of human trafficking on our services. Trafficking happens when someone exploits someone else for personal gain by depriving them of their human rights.
Trafficking commonly includes three parts:
- The act of recruiting, moving, relocating, paying for, or abducting people.
- The use of—or threat of—force, lies, trickery, or coercion to do these activities.
- The activities are done for money, status, or some other kind of gain.
Trafficking includes forcing people to work, marry, engage in sexual activity, or undergo medical treatments or operations without their consent, and it is not limited to any age or background.
Violent Threats, Incitement, and Glorification of Violence
Microsoft does not permit content that encourages violence against other people through violent threats or incitement.
- Threats of violence are words that show a specific intention to cause someone serious physical harm. Slang or obviously exaggerated remarks usually don’t count as violent threats.
- Incitement is material that encourages, urges, or is likely to result in serious physical harm to a person or group.
We also do not allow the glorification of violence through content that praises or supports real acts of violence causing serious physical harm to people or groups, including violence that happened in the past.
Viruses, Spyware, or Malware
Do not use any Microsoft product or service to host, run, transmit, or otherwise distribute harmful software such as viruses, spyware, or malware, including software that damages or impairs the operation of Microsoft’s or third parties' networks, infrastructure, servers, or end-user devices.
Examples of prohibited activities include:
- Transmitting software to damage Microsoft’s or another party’s device.
- Embedding code in software to track and log users' activities.
- Downloading software without the consent of an end user or using fraudulent or misleading means to deceive users into clicking links, visiting websites, or downloading software.