From sexist comments and unsolicited dick pics to rape threats and doxxing, women and girls have faced the bulk of harassment online since the advent of social media made it easier than ever for people to connect. This form of violence against women (VAW) has become so ubiquitous that in 2015, UN Women recognised its rapid growth, reporting that 75% of women and girls had experienced such violence in one form or another and that only 26% of law enforcement agencies in the 86 countries surveyed take appropriate action. And it shows no sign of receding: a UN study released on 24 November 2021 found that online abuse of women surged during the COVID-19 pandemic and that laws addressing this atrocity remain inadequate or non-existent.

Over the years, non-governmental organisations (NGOs) have made efforts to tackle this issue. One of the targets of the United Nations’ Sustainable Development Goal 5, to achieve gender equality and empower all women and girls, is to eliminate all forms of violence against all women and girls in the public sphere, including VAW in online spaces. The Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) has recognised that media (including social media) play a central role in the elimination of VAW, and signatories to the convention have taken concrete steps to eliminate sexist stereotypes in media and advertising by encouraging media outlets to establish codes of production and by stimulating public debate around the issue. Governments have also slowly stepped up to tackle online VAW through policies and legislation and through increasing scrutiny of social media platforms.

Nevertheless, the prevention and mitigation of online VAW cannot succeed if social media companies do not join these ongoing efforts. After all, they are the ones who control the platforms and are on the frontlines of cyberspace in the 21st century. This list offers 16 actions social media companies can take to stop or prevent VAW on their platforms. Not all may be relevant or suitable for every case or situation, but we hope they can serve as a useful starting point.

Introduction by Anushia Kandasivam and Regina Yau. Written, researched, and compiled by Anushia Kandasivam.

Inspired to support The Pixel Project’s anti-violence against women work? Make a donation to us today OR buy our 1st charity anthology, Giving The Devil His Due. All donations and net proceeds from book sales go towards supporting our campaigns, programmes, and initiatives.


 

Action #1: Set clear rules and guidelines

Almost every social media platform has a set of community guidelines that everyone is expected to follow. Most of them state the obvious: no hateful comments and no violent or sexually explicit photos or videos. But these guidelines err on the side of vagueness, and many do not explicitly include the words “sexist” or “misogynistic” in their descriptions of what is not allowed. We still live in a world where racist comments are considered hateful but rape threats are seen as jokes. Explicit, clear, and understandable rules allow users to check themselves and to expect censure should they resort to VAW on the platform.

 

Action #2: Regulate and monitor everything

There is much debate about privacy and security in cyberspace and about what constitutes free speech. With clearly set guidelines and informed users, a social media company will be able to effectively regulate what is posted on its platform. Monitoring is essential if VAW, and for that matter any other form of cyberbullying and violence, is to be prevented and stopped.

 

Action #3: Use good artificial intelligence to set proper algorithms

There is no way a team of humans could catch and filter all the VAW that streams through cyberspace, even working 24 hours a day, seven days a week. But there is technology for it. AI can act as the first net that flags potential problems. We already see this happening with audio copyright enforcement on platforms such as Facebook, and in 2018 Instagram started using machine learning to identify harassment where it had previously relied solely on community policing. A good AI algorithm can be used to identify violent or pornographic visuals and perhaps even certain kinds of violent language.
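To make the idea concrete, here is a minimal, hypothetical sketch in Python (using the scikit-learn library and invented example comments) of how a text classifier could act as that first net, scoring comments and routing likely abuse to a human moderation queue. It is an illustration of the technique only, not the system any platform actually runs.

```python
# Hypothetical sketch: a tiny text classifier used as a "first net" that
# flags potentially abusive comments for human moderator review.
# The training examples below are invented placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = abusive, 0 = not abusive.
comments = [
    "you are worthless and should be hurt",
    "nobody wants to hear from you, go away",
    "great post, thanks for sharing",
    "I disagree with this article but it is well written",
]
labels = [1, 1, 0, 0]

# Turn the text into features and fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

def flag_for_review(comment: str, threshold: float = 0.5) -> bool:
    """Return True if the comment should be routed to a human moderation queue."""
    prob_abusive = model.predict_proba([comment])[0][1]
    return prob_abusive >= threshold

print(flag_for_review("you are worthless, go away"))  # expected True with this toy data
```

Anything such a classifier flags would still go to the trained human moderators described in Action #4 for a final decision.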

 

Action #4: Invest in some good training for the human moderators

Of course, once the AI flags something, it is then up to human moderators to look through it and decide if it is indeed VAW. There have been numerous tales over the years of women being let down by moderators deciding that the VAW they were experiencing online did not violate the platform’s guidelines. And it is still happening. The human factor can only work effectively when the moderators are trained properly to recognise and understand what online VAW looks like.

 

Action #5: Set clear tolerance policies

Again, the debate about free speech comes up when deciding what is harmless, what is legitimate discussion, and what is actual violence online. Clear internal guidelines for moderators and engineers programming the company’s AI will help in creating parameters for what will and will not be tolerated. This may be a difficult process, but transparency in this area and a flexible approach that leaves room for learning and improvement will work in the company’s favour.

 

Action #6: Provide clear public communication on policies and rules

Social media companies should not only set clear internal guidelines for their moderators on what is allowed and what is not, but should also publicly announce these rules to users. Sometimes, even this can be a deterrent to would-be harassers who realise their actions will not be tolerated. The world being what it is, there is no doubt the company will face backlash from a certain type of netizen, but on balance, the social responsibility this move shows will earn it far more goodwill, good publicity, and user loyalty.

 

Action #7: Provide resources for reporting

Even the best AI and best-trained moderators cannot catch everything that happens on any given platform. This is why social media companies should make reporting VAW (or other forms of violence) on their platforms easy for victims. The major platforms – Instagram, Facebook, Twitter, TikTok, etc – all have a place where users can report harassment. All social media platforms should provide this resource to empower users and make women feel safe using their services.

 

Action #8: Have resources for bystanders

The social media platforms that have tools for reporting harassment usually make it possible for anyone to make the report. But harassment still goes unreported by victims and bystanders alike. Social media companies should include resources for bystanders, as well as parents, caregivers, and teachers, in their help centres: information on understanding online abuse, how to help someone experiencing online abuse, and how to report it.

 

Action #9: Make reporting easy

There is no point in providing reporting tools that are difficult to find on the platform or, even worse, that require lots of information to be filled in and layers of approvals before a report is successfully submitted. VAW on social media platforms happens at any time and all the time, so reporting tools should be obvious and easily accessible, and the process should be quick.
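As a purely illustrative sketch (the field names and functions below are invented for this example), a report could require nothing from the user beyond a single tap, with the platform filling in the rest automatically:

```python
# Hypothetical sketch: a harassment report that asks the user for almost nothing.
# All names below are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HarassmentReport:
    reporter_id: str                 # taken from the logged-in session, not typed in
    content_id: str                  # taken from the post or comment being reported
    category: str = "harassment"     # optional one-tap choice with a sensible default
    details: str = ""                # optional free text, never required
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def submit_report(report: HarassmentReport) -> None:
    """Queue the report for moderator review immediately, with no approval layers."""
    print(f"Report on {report.content_id} queued at {report.created_at.isoformat()}")

# One tap on "Report" is enough to create and submit a complete report.
submit_report(HarassmentReport(reporter_id="user_42", content_id="comment_9001"))
```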

 

Action #10: Provide tools for users to prevent and stop VAW

Monitoring and reporting are great steps, but sometimes the user requires an immediate solution. Social media companies can equip their platforms with tools to instantly stop harassment, such as a ‘mute’ button on comments, a way to kick out a harasser in a live discussion, and a way to ban a harasser from access to a channel, among other tools. As an extra step, they can also provide links to information and advice about dealing with cyberbullying and harassment.

 

Action #11: Keep tabs on red-flagged users

Social media companies receive hundreds, if not thousands, of harassment reports every day, but there should be a way, using the company’s AI, to keep a record of frequent harassers and to keep tabs on them. Issues of privacy come into play here again, but flagging users in the system for the AI to monitor once they reach a tolerated limit is a simple action that can be taken.
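As a simple, hypothetical illustration (the threshold and function below are invented, and a real system would also need privacy safeguards and appeal processes), keeping tabs on repeat offenders can be as basic as counting upheld reports per account and flagging anyone who crosses a tolerated limit:

```python
# Hypothetical sketch: count moderator-upheld harassment reports per user and
# flag accounts that cross an assumed tolerance threshold for closer monitoring.
from collections import Counter

UPHELD_REPORT_LIMIT = 3  # an illustrative threshold, not a recommended value

upheld_reports = Counter()

def record_upheld_report(user_id: str) -> bool:
    """Record a report a moderator has upheld; return True once the user
    should be placed on a watch list for closer monitoring."""
    upheld_reports[user_id] += 1
    return upheld_reports[user_id] >= UPHELD_REPORT_LIMIT

# Example: the third upheld report against the same account trips the flag.
for _ in range(3):
    flagged = record_upheld_report("user_123")
print(flagged)  # True
```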

 

Action #12: Take action when required

After all the time and money spent on monitoring and moderating, social media companies should also empower themselves and their staff to take action against harassers when required. Adding a warning to or deleting posts, suspending or banning accounts, and even reporting the user to the authorities are all actions that can and should be considered. Women on the platform will feel safer if they know that their interests are protected and that users cannot commit VAW with impunity.

 

Action #13: Make sure company policies comply with the law or international policy

This may seem obvious, but laws are changing all the time, especially when it comes to VAW and gender discrimination. Keeping abreast of what is going on in a country’s legislature will stand the company in good stead. And since social media platforms are used internationally, learning about and incorporating recommendations of international bodies is a good idea.

 

Action #14: Keep abreast of social movements and the feminist zeitgeist

Social movements for gender equality and against VAW have existed since the last century, if not longer, and they are now growing and evolving at an increasingly fast pace. Legislation cannot keep up with what is considered socially and morally acceptable, so social media companies should also tune in to the feminist zeitgeist to understand what should and should not be tolerated on their platforms. This will not only serve users across their intersecting identities and make them feel safer, it will also make the platform more appealing to the public.

 

Action #15: Hire more women

Basic gender equality policies aside, hiring more women at every level of the company will have further-reaching implications than many realise. Women moderators will be better at identifying VAW when they see it, and women engineers will know what to program into the software and AI to spot it. After all, 1 in 3 women experiences VAW in her lifetime, so why not hire the experts?

 

Action #16: Involve women in decision-making processes

Hiring women at every level means placing women in the top positions where decisions on company policy and actions are made. Preventing and stopping VAW is a long-term vision for any company, which means that the people in power should have not only a vested interest in it but also understanding of and expertise in the subject matter. Having women (and men) making the decisions will make for balanced and far-reaching company policies on VAW.


All pictures used are Creative Commons images (from top to bottom):