Project Summary
If you have been harmed by bad automated decision-making, from robots to loan assessments, what can you do to right the wrong? What can the law do to help you? A growing number of public controversies about discriminatory, unpredictable and dangerous automated decision-making have raised questions about the most effective methods of accountability.
Through qualitative interviews with stakeholders (including class action and pro bono lawyers), this project seeks to identify the opportunities, enablers and barriers for public interest litigation to promote accountability and fairness in automated decision-making.
Project Team
Project Outputs
Research Publications
- Fraser, Henry, Snoswell, Aaron, & Simcock, Rhyle (2022) AI Opacity and Explainability in Tort Litigation. In Proceedings of the 2022 5th ACM Conference on Fairness, Accountability, and Transparency (FAccT 2022). Association for Computing Machinery (ACM), New York, NY, pp. 185–196.
News
- Snoswell, Aaron, Fraser, Henry, & Simcock, Rhyle (2022) When self-driving cars crash, who's responsible? Courts and insurers need to know what's inside the 'black box'. The Conversation, 25 May 2022.
Project Funding
- Australian Government through the Australian Research Council – ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S)
