Identifying and responding to online abuse and harassment through machine learning models

Online abuse is a pressing social issue that presents extremely complex challenges. Our team provides the capabilities to address a unique set of interrelated questions that are necessary to begin addressing this ‘wicked challenge’.

Research impacts

This project develops sophisticated new machine learning models to identify harassment and abuse on social media. This is a pressing problem: the modern internet is rife with harassment, abuse, misogyny, and exploitation.

To date, social media platforms have not been able to develop solutions that adequately protect people from harm online. Through new transdisciplinary collaborations, we will develop world-leading capability to identify patterns of abuse on social media networks.

Highlights of this research are:

  • A research team that brings a unique combination of deep expertise in internet regulation, digital media, and violence against women together with strong skills in data science and visualisation
  • The development of an empirically validated classification model that can detect and measure abuse and harassment in near real-time
  • A system that can monitor abuse over time, evaluate the effectiveness of different strategies and interventions, and guide future policy
  • Engagement through social media platforms and civil society organisations to develop best practice policies for addressing online abuse and harassment
  • Research findings to engage policy-makers (including the Australian Communications and Media Authority (ACMA), the eSafety Commissioner, and the Department of Communications and the Arts)
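
To illustrate the kind of classification capability described above, the sketch below trains a minimal bag-of-words naive Bayes model that scores short texts as abusive or not. This is only an illustrative toy, not the project's actual model: the corpus, labels, function names, and choice of naive Bayes are all assumptions, since the document does not specify the modelling approach.

```python
from collections import Counter
import math

# Illustrative toy corpus with labels (1 = abusive, 0 = benign).
# Real training data would be an empirically validated, annotated dataset.
TRAIN = [
    ("you are worthless and everyone hates you", 1),
    ("get out of here nobody wants you", 1),
    ("shut up you stupid idiot", 1),
    ("thanks for sharing this great article", 0),
    ("lovely photo have a nice day", 0),
    ("interesting point i agree with you", 0),
]

def train(examples):
    """Fit per-class word counts and class priors for multinomial naive Bayes."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in examples:
        priors[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def score(text, counts, priors, vocab):
    """Return the log-odds that `text` is abusive, with add-one smoothing.

    Positive values mean the abusive class is more likely.
    """
    log_odds = math.log(priors[1] / priors[0])
    total1 = sum(counts[1].values()) + len(vocab)
    total0 = sum(counts[0].values()) + len(vocab)
    for word in text.lower().split():
        if word not in vocab:
            continue  # ignore unseen words rather than guessing
        p1 = (counts[1][word] + 1) / total1
        p0 = (counts[0][word] + 1) / total0
        log_odds += math.log(p1 / p0)
    return log_odds

counts, priors, vocab = train(TRAIN)
print(score("you stupid idiot", counts, priors, vocab))      # > 0: flagged as abusive
print(score("great article thanks", counts, priors, vocab))  # < 0: flagged as benign
```

A production system of the kind the project describes would replace this toy with a validated classifier trained on annotated social media data, and would add the monitoring and evaluation layers listed above.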

By helping to make online social media platforms safer, we seek to contribute to a more resilient society by fostering strong and inclusive communities.

Project team


Project funding

  • QUT Catapult Grant (2017-2018)