The web is not working for women. Women and girls are less likely to have access to and use the web, and those who are online disproportionately face abuse and harassment. Research shows that women of color, and Black women in particular, are most impacted.
The Web Foundation is running a series of consultations that bring together tech companies and women from across civil society to share experiences and tackle online gender-based violence together. The second consultation, held virtually on July 15, heard from 26 women-led civil society groups in 20 countries and five of the world’s biggest tech companies, who discussed the online threats experienced by women activists, especially women of color and Black women. Participants then turned to content moderation, privacy and safety.
Here are seven top takeaways:
1. Online violence causes offline harms
Abuse and harassment perpetrated online have offline consequences and can ruin people’s lives. They are part of a continuum of violence against women, with abuse crossing between online and offline spaces.
Women activists are often targeted with abuse designed to silence them, including with traumatising threats of violence that often lead to emotional and physical harm. In many countries, activists who continue to speak out are at risk of arrest. Facing abuse, violence and legal risks, some women understandably self-censor, while others continue to speak out and all too often suffer consequences.
To keep users safe and to defend democratic participation, civil society organisations urged tech platforms to take this abuse seriously and work to minimise violence on their services.
2. Content policies are not enforced equally
Content moderation is a critical part of social media platforms’ responsibility to keep people safe, but content policies are currently not enforced equally.
According to civil society participants, tech companies tend to focus their resources on the United States and Europe, resulting in a serious enforcement gap against abusive content between the Global North and South.
Participants also described experiencing or witnessing companies sanctioning feminists and racial justice advocates for calling out hate speech and abuse, while the perpetrators of that hate were sanctioned less frequently.
3. Cultural context is vital
In order to make good decisions around content moderation, account suspension and other actions to address problematic content, companies must understand the cultural context in which they operate. Activists said that policies designed for the Global North are applied globally without sufficient consideration of unique cultural contexts.
One participant explained that in their conservative country, a picture of an unmarried woman standing next to a man could lead to dangerous consequences, even honour killings. While content like this would not violate a platform’s policies, it can be used against women. There have been cases of women being blackmailed and forced to pay to keep photos offline.
This demonstrates how important it is to understand cultural complexities and nuance, especially in contexts of racism and oppression. Companies must invest in understanding how their platforms are used and misused outside the US and EU. More training is needed for content moderators in local contexts, local languages and local cultures.
4. AI may be necessary, but not sufficient
Social platforms have quickly scaled to operate with near-global reach. To manage this scale, artificial intelligence has become a key tool in their efforts to identify and tackle problematic content. It’s critical that companies are transparent about the effectiveness of these algorithms, conduct regular reviews, and work to ensure they are sensitive to diverse cultural contexts.
But AI alone is not enough. Content moderation demands effective human involvement, such as human review, particularly when dealing with complex cultural contexts and for verifying automated content decisions.
5. People need more control over their settings — especially privacy settings
While privacy and user control settings have improved over time, activists said they need more granular control of who can interact with them and the content they see.
Participants were frustrated that on most platforms they aren’t able to mute specific videos or images portraying violence — which can lead to re-victimization. They also noted there aren’t enough settings built to support control and self-care. One participant suggested a feature that would let people screen out content from accounts that were identified as likely to be a “troll”.
When people depend on social media for their activism and need to spend many hours a day on platforms, the ability to screen out often traumatic content and abusive users is important for their wellbeing.
6. We have the power to protect each other online
When we go online we are part of a community. When we see others suffer abuse and harassment in public spaces online, we can help support them by being ‘active bystanders’:
- report abuse using platform-tools
- support victims of abuse by sending a private message to let them know they’re not alone
- reply to their original post constructively
- amplify their message by sharing with your communities.
7. We need global and cross-platform collaboration
The tech companies participating in the consultation acknowledged the need for greater cross-platform collaboration and best practices to fight online gender-based violence. That has happened already around child protection and needs to be prioritised for women’s safety too.
A woman whose intimate images are shared without her consent must report this abuse to each platform and go through several different processes to have them removed. That’s not good enough. Greater collaboration between companies and civil society and governments could make platforms safer for women and provide victims of abuse with ‘universal’ reporting mechanisms and support services.
Next steps
Evidence from this session — and from forthcoming consultations focused on the experiences of women politicians and journalists, and on girls and young women — will inform a series of policy design workshops where women’s rights organisations and tech companies will work together to build concrete policies and products to tackle online gender-based violence, with the needs of women at the forefront, not addressed as an afterthought.
Violence against women is a huge threat to progress on gender equality. And unless we make sure the web is a safe place for women and girls, technology will be one more channel for women to be attacked, suppressed and marginalised, rather than be the platform for voice, opportunity and positive change that we know it can be. That’s why it’s urgent that companies and governments work closely with women’s activists and the wider tech community to tackle violence online and make the web safe and empowering for everyone.
This work also builds towards the UN Generation Equality Forum now taking place in 2021, marking 25 years since the Beijing Conference. While some companies and governments have expressed support for combating online gender-based violence, we now need to match words with action. We urge the tech companies who have participated in these consultations to push forward with real commitments to address violence against women in the context of the Generation Equality Forum.
Source: Web Foundation