We want to do everything we can to keep people safe on Instagram. We’ve worked with experts to better understand the deeply complex issues of mental health, suicide and self-harm, and how best to support those who are vulnerable. No one at Instagram takes these issues lightly, including me. We’ve made progress over the past few years, and today we’re rolling out more technology in Europe to help with our efforts. But our work here is never done and we need to constantly look for ways to do more.
We recognize that these are deeply personal issues for the people affected. They are also complicated and always evolving, which is why we continue to update our policies and products so we can best support our community. We’ve never allowed anyone to promote or encourage suicide or self-harm on Instagram, and last year we updated our policies to remove all graphic suicide and self-harm content. We also extended our policies to disallow fictional depictions of suicide or self-harm, like drawings or memes, as well as imagery showing materials or methods associated with suicide or self-harm.
It’s not enough to address these difficult issues through policies and products alone. We also believe it’s important to provide help and support to the people who are struggling. We offer support to people who search for accounts or hashtags related to suicide and self-harm and direct them to local organizations that can help. We’ve also collaborated with Samaritans, the suicide prevention charity, on their industry guidelines, which are designed to help platforms like ours strike the important balance between tackling harmful content and providing sources of support to those who need it.
We use technology to help us proactively find and remove more harmful suicide and self-harm content. Our technology finds posts that may contain suicide or self-harm content and sends them to human reviewers, who make the final decision and take the right action. Those actions include removing the content, connecting the poster to local organizations that can help, or, in the most severe cases, calling emergency services. Between April and June this year, over 90% of the suicide and self-harm content we took action on was found by our own technology before anyone reported it to us. But our goal is to get that number as close to 100% as we can.
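To make that flow concrete, here is a minimal sketch of how a flagged post might be routed to a reviewer and the kinds of actions available. It is an illustration only, not our production system; the names, types and threshold below are assumptions.

```python
# Illustrative sketch only, not Instagram's production system.
# All names, types and the threshold below are assumptions.
from dataclasses import dataclass
from enum import Enum, auto


class ReviewerAction(Enum):
    REMOVE_CONTENT = auto()              # take the violating post down
    SEND_SUPPORT_RESOURCES = auto()      # connect the poster to local organizations
    CONTACT_EMERGENCY_SERVICES = auto()  # reserved for the most severe cases
    NO_ACTION = auto()                   # the post does not break the rules


@dataclass
class Post:
    post_id: str
    classifier_score: float  # model's confidence the post contains suicide/self-harm content


def needs_human_review(post: Post, threshold: float = 0.5) -> bool:
    """Technology flags likely violations; human reviewers make the final call."""
    return post.classifier_score >= threshold


# Example: a flagged post is queued for review, and the reviewer
# chooses one of the actions above.
if needs_human_review(Post(post_id="example", classifier_score=0.8)):
    action = ReviewerAction.SEND_SUPPORT_RESOURCES
```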
Until now, we’ve only been able to use this technology outside the European Union, which made it harder for us to proactively find harmful content and send people help in the EU. I’m pleased to share that, starting today in the EU, we’re rolling out some of this technology, which will work across both Facebook and Instagram. We can now look for posts that likely break our rules around suicide and self-harm and make them less visible by automatically removing them from places like Explore. And when our technology is highly confident that a post breaks our rules, we can now remove it automatically.
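As a rough sketch of that tiered handling, assuming hypothetical confidence thresholds (the real thresholds and models aren’t public), the logic looks something like this:

```python
# Hypothetical sketch of the tiered enforcement described above.
# The thresholds and names are illustrative assumptions, not our actual values.

LIKELY_VIOLATION = 0.70   # assumed cut-off for "likely breaks our rules"
HIGH_CONFIDENCE = 0.95    # assumed cut-off for "highly confident"


def eu_automated_action(classifier_score: float) -> str:
    """Return the automated action applied to a post in the EU."""
    if classifier_score >= HIGH_CONFIDENCE:
        return "remove"             # remove the post altogether
    if classifier_score >= LIKELY_VIOLATION:
        return "reduce_visibility"  # e.g. keep the post out of Explore
    return "none"                   # no automated action


assert eu_automated_action(0.97) == "remove"
assert eu_automated_action(0.80) == "reduce_visibility"
assert eu_automated_action(0.10) == "none"
```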
This is an important step that will protect more people in the EU, but we want to do a lot more. The next step is using our technology not just to find this content and make it less visible, but to send it to our human reviewers and get people help, as we do everywhere else in the world. Without this piece in place in the EU, it is harder for us to remove more harmful content and to connect people to local organizations and emergency services. We’re in ongoing discussions with regulators and governments about how best to bring this technology to the EU while recognizing important privacy considerations. We’re hopeful we can find the right balance so that we can do more. These issues are too important not to push for more.
A Timeline: Steps We’ve Taken to Address Self-Harm and Suicide Content on Instagram
- December 2016: Launched anonymous reporting for self-harm posts, and started connecting people to organizations that can provide help.
- March 2017: Integrated suicide prevention tools into Facebook Live, making it easier for friends and family to report and reach out to people in real time.
- November 2017: Rolled out technology beyond the US (everywhere except Europe) to help identify when someone might be expressing thoughts of suicide, including on Facebook Live. Started using AI to prioritize reports so we can send people help and alert emergency services as quickly as possible.
- September 2018: Created a Parent’s Guide for parents of teens who use Instagram.
- February 2019: Began hosting regular consultations with safety and suicide prevention experts around the world to discuss the evolving complexity of suicide and self-harm, and to hear regular feedback on our approach.
- February 2019: Expanded our policy to ban all graphic suicide and self-harm content, even if it would previously have been allowed as an admission of self-harm. We also made this content harder to find in search, blocked related hashtags and applied sensitivity screens to all admission content, sending resources to more people posting or searching for this type of content.
- October 2019: Expanded our policies to ban fictional self-harm or suicide content including memes and illustrations, and content containing methods or materials.
- September 2020: Collaborated with Samaritans on the launch of their new guidelines on how to safely manage self-harm and suicide content online.
- October 2020: Added a message at the top of search results for terms related to suicide or self-injury. The message offers support and directs people to local organizations that can help.
- November 2020: Rolled out new technology in the EU to proactively find more suicide and self-harm content and make it less visible.
The Numbers
We believe our community should be able to hold us accountable for how well we enforce our policies and take action on harmful content. That’s why we publish regular Community Standards Enforcement Reports to share global data on how much violating content we’re taking action on, and what percentage of that content we’re finding ourselves, before it’s reported. This timeline outlines the progress we’ve made on tackling suicide and self-harm content on Instagram, as shown through these reports.
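For clarity, the proactive percentage in each report below is simply the share of actioned content that our technology found before anyone reported it, as in this minimal sketch (the counts are hypothetical; the reports publish only the totals and percentages):

```python
# Hypothetical counts, used only to illustrate how the proactive rate is derived.
total_actioned = 1_000_000     # pieces of content we took action on
found_proactively = 900_000    # found by our technology before any user report

proactive_rate = found_proactively / total_actioned
print(f"{proactive_rate:.0%} found proactively")  # -> 90% found proactively
```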
- Q2 2019 Community Standards Enforcement Report: Took action on 835,000 pieces of content, nearly 78% found proactively.
- Q3 2019 Community Standards Enforcement Report: Took action on 845,000 pieces of suicide and self-harm content, over 79% found proactively.
- Q4 2019 Community Standards Enforcement Report: Took action on 897,000 pieces of content, over 83% found proactively.
- Q1 2020 Community Standards Enforcement Report: Took action on 1.3 million pieces of content, nearly 90% found proactively.
- Q2 2020 Community Standards Enforcement Report: Took action on 275,000 pieces of content, nearly 94% found proactively. Our enforcement numbers were lower during this period as a result of COVID-19’s impact on content review.
- Q3 2020 Community Standards Enforcement Report: We’ll publish our next report later this month.