This project identifies how misunderstandings of harm and safety flow into flawed data logics and ineffective automated responses by digital platforms. To date, platforms have treated individual pieces of content or media objects as the principal unit of harm, and on this assumption their responses have focused primarily on moderating discrete items of content.
DMRC Research Program
This project contributes to research within the DMRC's research programs.

Related Publications
- Burgess, Jean (2022) Everyday data cultures: beyond Big Critique and the technological sublime. AI and Society.
- Carah, Nicholas, Angus, Daniel, & Burgess, Jean (2022) Tuning machines: an approach to exploring how Instagram’s machine vision operates on and through digital media’s participatory visual cultures. Cultural Studies.
- Matamoros-Fernandez, Ariadna, Gray, Joanne, Bartolo, Louisa, Burgess, Jean, & Suzor, Nicolas (2021) What’s ‘Up Next’? Investigating Algorithmic Recommendations on YouTube Across Issues and Over Time. Media and Communication, 9(4), 234–249.
- Gillett, Rosalie & Suzor, Nicolas (2022) Incels on Reddit: A study in social norms and decentralised moderation. First Monday, 27(6), Article number: 10654.
- Suzor, Nicolas & Gillett, Rosalie (2022) Self-regulation and discretion. In Flew, Terry & Martin, Fiona R. (Eds.) Digital Platform Regulation: Global Perspectives on Internet Governance. Springer, pp. 259-279.
- Stardust, Zahra, Gillett, Rosalie, & Albury, Kath (2022) Surveillance does not equal safety: Police, data and consent on dating apps. Crime, Media, Culture.
- Nelson, Lucinda, Suzor, Nicolas, Gillett, Rosalie, & Matamoros-Fernandez, Ariadna (2021) QUT Digital Media Research Centre submission in response to the inquiry into serious vilification and hate crimes. Queensland Parliament.
Funders

- Australian Government through the Australian Research Council – ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S)