LGBTQ+ safety policies

The Facebook Community Standards and Instagram Community Guidelines define what is acceptable to share and prohibit harmful content, including hate speech, bullying and harassment, and non-consensually shared intimate imagery.

When we believe a genuine risk of physical harm or a direct threat to public safety exists, we remove content, disable accounts and work with local emergency services.

Our policies do not allow content that outs an individual as a member of a designated, recognizable at-risk group, or that endangers LGBTQ+ people by revealing their sexual orientation or gender identity without their consent.

Hate speech against the LGBTQ+ community is prohibited.

We believe that people connect more freely when they don’t feel attacked based on their identity. We don’t allow hate speech because it creates an environment of intimidation and exclusion, leads to dehumanization, and in some cases, can promote offline violence.

We define hate speech as a direct attack against people based on what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and disease.

We know that hate speech constantly evolves. That’s why we partner with LGBTQ+ experts and resource organizations to stay ahead of emerging trends and keep our policies current.

We prohibit hate speech attacks, including violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing, and calls for exclusion or segregation.

We also ban harmful stereotypes, which we define as dehumanizing comparisons that have historically been used to attack, intimidate or exclude specific groups, and which can lead to offline violence.

We do not tolerate bullying behavior.

Bullying and harassment take many forms, including threatening messages, unwanted malicious contact and the release of personal information.

Meta treats public figures and private individuals differently in order to allow discussion, which can include critical commentary about people who are featured in the news or who have large public audiences. For public figures, Meta removes posts that use derogatory terms, call for sexual assault or exploitation, call for mass harassment or threaten to release private information.

For private individuals, Meta's protections go further. Among other things, we remove content that is meant to degrade or shame someone for their sexual orientation or gender identity.

We recognize that bullying and harassment can have a greater emotional and physical impact on minors, which is why our policies provide deeper protections for young people.

Non-consensual intimate images

The non-consensual sharing of intimate images violates our policies, as do threats to share those images. We remove intimate images shared in revenge or without permission on Facebook and Instagram, as well as photos or videos depicting incidents of sexual violence. We also remove content that threatens or promotes sexual violence or exploitation.

Report it when someone shares your intimate images without your consent or threatens to do so. Our teams review reports 24/7 in more than 70 languages and will remove intimate images or videos shared without consent, as well as any content that threatens to share intimate images without permission. In most cases, we disable the account that shared, or threatened to share, such content on our technologies.

To stop further attempts at sharing a removed image, we use preventative photo-matching technologies. If someone tries to share an image after it has been reported to us and removed, we alert them that it violates our policies, stop the resharing attempt and may disable the account. We also encourage you to report sextortion, in which people threaten or coerce someone into sharing intimate photos or videos. Sextortion is against the Facebook Community Standards and, in some instances, also against the law.
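To make the idea of photo-matching more concrete, here is a minimal sketch of how a preventative system of this kind can work in principle, using a simple perceptual (average) hash. This is an illustrative example only, not Meta's actual implementation; the function names, the blocked_hashes store and the distance threshold are all hypothetical.

    # Illustrative sketch of preventative photo-matching (NOT Meta's system).
    # Idea: fingerprint a removed image, then compare later uploads against
    # the stored fingerprints so near-duplicates can be flagged and blocked.

    from PIL import Image  # pip install Pillow

    def average_hash(path: str, size: int = 8) -> int:
        """Compute a simple perceptual (average) hash of an image."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p >= mean else 0)
        return bits

    def hamming_distance(a: int, b: int) -> int:
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    # Hypothetical store of hashes of images already removed for violations.
    blocked_hashes: set[int] = set()

    def is_blocked(upload_path: str, threshold: int = 5) -> bool:
        """True if an upload closely matches a previously removed image."""
        h = average_hash(upload_path)
        return any(hamming_distance(h, b) <= threshold for b in blocked_hashes)

A hash-based approach like this can match re-uploads even after minor edits such as resizing or recompression, which is why it is described as preventative rather than purely reactive.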

How to report content that may violate our policies

If you want to report actions that go against the Facebook Community Standards, such as hate speech, bullying and harassment, or violence, go to the content you want to report and use the "Find Support" or "Report" link. We have teams of experts who review reports of violating content 24/7 in more than 70 languages, and we use artificial intelligence technology that finds and removes this content before users even see it.

Transparency and accountability

We strive to be open and proactive in safeguarding users’ privacy, security and access to information online. We’ve published biannual transparency reports since 2013. We also release a quarterly Community Standards Enforcement Report, which includes data on the actions we’ve taken against violating content on Facebook, Messenger and Instagram. We believe that increased transparency leads to increased accountability and responsibility.