The aim is to create a computational model that addresses the inability of Automated Vehicles (AVs), powered by Artificial Intelligence, to self-explain their behaviours. This project applies novel multidisciplinary methodologies in a real-world self-driving setting to formalise the essence of driving explanations. It explores when, why, and how a driver seeks an explanation, and what type of automated explanation is truly human-interpretable.
This project will result in the design and evaluation of a novel human-centric eXplainable Automated Vehicle (XAV) framework grounded in human risk perception and self-regulated learning, which improves users’ comprehension of the rationale underlying vehicle decisions.
Expected outcomes include an acceptable, transparent, and ethical explanation system that helps humans understand the AV’s decision-making.
Funding / Grants
- ARC Discovery Project (2022 - 2025)