Featured image courtesy Politico

There is little research on technology-based violence against women in Sri Lanka. There are also no trilingual, easily accessible, open-source resources on digital security, a key gap because many leaks occur through a lack of understanding of the security measures needed to protect privacy online. Research shows that the varied impact of online harassment can be felt offline; the stigma has even driven young girls in Sri Lanka to suicide. Attempts to report violations to Facebook are met with the response that they do not violate Community Standards, as most of this content is in Sinhala or Tamil, the vernacular languages. While Information and Communication Technology has given women and girls increased capacity for self-expression and public and political engagement, violence against women online has often grown in direct proportion to women and girls’ access to the Internet.

It is in this context that the Centre for Policy Alternatives (the institutional anchor of Groundviews), Ghosha (a feminist initiative in Sri Lanka exploring intersections between technology and women’s human rights) and youth group Hashtag Generation set out to conduct research on the prevalence of technology-related violence against women and girls in Sri Lanka, with a specific focus on Facebook. We began to analyze a set of Facebook pages in October 2017. The past three months have also been significant on account of a global conversation on violence against women in the form of the #MeToo campaign. It is by no means the first such conversation (or even the first #MeToo campaign), nor will it be the last. It is important to recognize this as a significant moment while also acknowledging the various factors and privileges that elevated this particular movement. The momentum of the campaign has reached the Global South as well, with the release of an online list of Indian men in academia accused by women of repeated sexual harassment, and the conversation and debate that arose from it. Many in Sri Lanka participated as well, highlighting the prevalence of sexual assault and harassment here too. It is in this context that we share some preliminary observations from our research.

Key trends

Culture of Sexism and Misogyny

The Facebook pages monitored over these three months show that

  1. Incidents of sexual harassment
  2. Non-consensual dissemination of personal images and intimate images, and
  3. Other forms of technology-related violence against women and girls

are not isolated incidents. They occur within a culture of casual sexism and misogyny that is the norm on these Facebook pages. This is common across pages in Sinhala, Tamil and English. The messages are often conveyed using humour and packaged as memes and cartoons. As part of the ongoing research, these posts, including screenshots, have been documented for posterity, but we are opting not to share them at this stage.

Non-consensual Dissemination of Personal and Intimate Images

One manifestation of the culture of sexism and misogyny online is the non-consensual sharing and dissemination of personal and intimate images, a clear form of violence against the women affected. The pages we have monitored indicate that the images shared publicly are just a preview of larger databases and are used as a way to invite page followers to privately gain access to more images. In some instances, the images used are personal images and not intimate images, but the accompanying captions and comments are derogatory, abusive, violent and incite further violence.

Propagating Violence Against Women While Defending Women

Another trend we noted is that even when some pages publish content in defense of women, the captions and comments use derogatory and abusive language against other women, or women in general, to support that defense. This indicates that, as an extension of the public space in which people live their political and social lives, the Internet replicates and sometimes amplifies structural inequalities against women.

Anonymity

There is a strong culture of impunity enjoyed by the perpetrators of online violence on Facebook. The pages we monitor do not indicate who is administering them, and most often the followers and commenters are not identifiable either: they either use a fake identity or profile, or have not shared identifiable markers such as a profile picture or a full name. While there may be opportunities to use Facebook’s reporting guidelines to report perpetrators and have content removed (the next section explains why this is not so simple), if the women facing violence on Facebook want to report to law enforcement, there is not sufficient information to identify the perpetrators. Facebook’s real name policy has in fact been used to target members of the LGBTIQ community in the past; many of them use pseudonyms on social media but were inadvertently outed by the policy. In response to the criticism, Facebook said it would change the policy so that it did not disproportionately affect marginalised communities. Yet in practice users continue to subvert the system, as our research is showing.

Manipulation of Facebook Reporting Guidelines

Finally, the key trend we have observed during this period is that all these pages take various measures to avoid coming under the purview of Facebook’s reporting guidelines. This is done in a variety of ways: liking and commenting on posts without sharing them, so that the posts are not visible beyond the pages; placing text on images rather than in captions; and embedding links to websites that publish non-consensual intimate images in the images themselves rather than linking to them directly. These pages also take advantage of the fact that technology-related and online violence against women, or violence and hate speech on the basis of gender, are not identified as grounds for reporting under Facebook’s reporting guidelines for photos (as opposed to links and comments). Unless you personally know the women whose photos are being shared, as explained before, there is no option to report the often abusively framed photos. Ultimately, most of these posts fall through the cracks of Facebook’s security framework because they are in languages for which comprehensive support is not available. This has been confirmed by several survivors and activists, who note that even when content in Sinhala or Tamil is violent and abusive (Tamil content is more closely monitored, as the language is more widely used), Facebook does not identify it as violating Community Standards when reported.

Challenges

Some of the challenges our team faced during this period included:

  •    Some of the pages identified in our methodology had been deleted by the time data collection took place, perhaps because other users had reported them; our different methods of data collection help address this issue.
  •    Monitoring Tamil-language Facebook pages from both Sri Lanka and India, given that Sri Lankan Tamil-speaking Facebook users are active on both. We have decided to monitor both types of pages to understand user behavior on each.
  •    Mental strain on the researchers due to some of the graphic content we have been monitoring and documenting. We have formed a WhatsApp group where we can talk and support each other, and have agreed to seek counseling should we need it.

As part of this project, we will be hosting a series of focus group discussions in early February, and we invite all women and girls who have experienced technology-related violence to write to us with their input.

For those seeking support for specific cases of technology-related violence, please contact the Grassrooted Trust/Bakamoono.lk.

Editor’s Note: Also read “Beyond the Report Button: Tackling Incidents of Cyber-Violence in Sri Lanka” and “On the Harassment of the Comic Con Players”