The aim of this research is to develop a framework for multiple Unmanned Aerial Vehicles (UAVs) that balances information sharing, exploration, localization, mapping, and other planning objectives, allowing a team of UAVs to navigate complex environments in time-critical situations. The project expects to generate new knowledge in UAV navigation through an innovative approach that combines Simultaneous Localization and Mapping (SLAM) algorithms with Partially Observable Markov Decision Processes (POMDPs) and deep reinforcement learning.
The team aims to develop a framework for the motion planning, coordination and execution of time-critical missions in which multiple Unmanned Aerial Vehicles (UAVs), commonly known as drones, must collectively navigate, explore, and make decisions in an unknown, GPS-denied and cluttered environment. For multiple UAVs to autonomously explore and map an environment under such conditions, they need to localise themselves, collectively build a map, make decisions and observations, and share acquired information with the other UAVs. Our approach frames the problem as a Partially Observable Markov Decision Process (POMDP): a multi-agent sequential decision process under uncertainty for missions with multiple objectives.
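At the heart of any POMDP-based approach is the belief state: since a UAV cannot directly observe where it is in a GPS-denied environment, it maintains a probability distribution over states and updates it with each action and noisy sensor reading via a Bayes filter. The sketch below illustrates this core belief update on a toy two-cell world; the transition and observation probabilities are made-up illustrative numbers, not part of the project's actual framework.

```python
# Minimal POMDP belief-update sketch (illustrative only).
# b'(s') is proportional to O[s'][o] * sum_s T[s][a][s'] * b(s).

def belief_update(belief, action, observation, T, O):
    """One Bayes-filter step: predict through the transition model,
    correct with the observation likelihood, then normalise."""
    n = len(belief)
    # Prediction: push the current belief through the transition model.
    predicted = [sum(T[s][action][sp] * belief[s] for s in range(n))
                 for sp in range(n)]
    # Correction: weight each predicted state by the observation likelihood.
    unnorm = [O[sp][observation] * predicted[sp] for sp in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Toy model: two cells, one action ("move"), two observations.
T = [  # T[s][a][s']: transition probabilities
    [[0.2, 0.8]],  # from cell 0, "move" reaches cell 1 with prob 0.8
    [[0.1, 0.9]],  # from cell 1, the UAV mostly stays put
]
O = [  # O[s'][o]: a noisy sensor that is correct 90% of the time
    [0.9, 0.1],
    [0.1, 0.9],
]

b = [0.5, 0.5]                                   # uniform prior
b = belief_update(b, action=0, observation=1, T=T, O=O)
# After moving and sensing "cell 1", belief concentrates on cell 1.
```

In the full multi-agent setting the state space couples all UAVs' poses and the shared map, which is why exact updates like this become intractable and approximate solvers and learned policies are needed.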
Real-world impact
Such a framework should provide significant benefits, such as more responsive search and rescue inside collapsed buildings or underground mines, and faster target detection and mapping under the tree canopy.
Other Team Members
- A/Prof Jonghyuk Kim (ANU)
- Prof Sven Koenig (University of Southern California, USA)