In this seminar, Dr Ariadna Matamoros-Fernández, Dr Rosalie Gillett, and Dr Zahra Stardust discuss online harm and the discrepancies between how users and platforms conceptualise the problem.
Date & time: 12.30pm – 2.00pm, Friday 24 September 2021
DMRC Fridays: Conceptualising, identifying, and assessing harm on digital platforms.
The extent to which digital platforms facilitate, host, and amplify harmful content and behaviours remains a pressing social issue. Emerging research highlights how users with malicious intent weaponise memes to amplify hate speech, producing content that can be extremely difficult to detect. Platforms’ affordances enable users to participate in covert yet deeply insidious, coordinated, repetitive, and networked hate campaigns against their targets. At the same time, already vulnerable groups can be further marginalised by platforms’ policies and content moderation practices—problems that are difficult to rectify, given the lack of transparent decision-making. Recent regulatory initiatives, including the United Kingdom’s Online Harms White Paper, show governments’ increasing pressure on platforms to limit harm perpetrated through their networks.
In response, platforms are relying more heavily on automated decision-making systems to moderate content that violates their terms of service. These automated interventions, together with platforms’ policies and governance practices, show us how technology companies think about what it means to be free from harm online. Importantly, though, how platforms conceptualise safety and harm does not always align with users’ experiences and expectations. This is important to consider because platforms’ ability to foster welcoming, safe, and respectful communities relies on a deep understanding of their users and their experiences.