To limit the spread and amplification of harmful content online, social media platforms have recently introduced important new content moderation policies and practices. Maximising the long-term impact of these policies requires understanding how users adapt their behaviour in response to platform policy changes. In this project, through a qualitative analysis of user discourse, we aim to establish a robust understanding of user reactions to content moderation policies over time, as well as the strategies users employ to circumvent them.
We will develop a dataset of YouTube videos and comments, and Reddit threads, in which users discuss their strategies for working with and circumventing Facebook’s community guidelines, advertising policies, and moderation processes. From this dataset, we will extract a collection of user posts published between 2005 and 2020 and employ qualitative research methods to identify the patterns, common practices, and salient themes around content moderation issues that users have developed over the past 15 years.
While not all users who seek to circumvent policy enforcement do so with the goal of spreading harmful content, a robust understanding of such counteracting behaviour can support social media platforms’ efforts to limit the spread and visibility of harmful content.
- Facebook’s Content Governance funding (2020-2021)