
Project dates: 2022–2025
Why it matters
Current reinforcement learning approaches for robotic navigation, manipulation, and interaction typically learn from low-level sensor data such as raw pixels. This is sample-inefficient, makes learning complex tasks infeasible, and is prone to overfitting to the training environment. At the same time, novel object-oriented semantic SLAM algorithms are capable of building semantically rich and meaningful graph-based maps of an environment.
Overview
We propose to investigate how such graph-based maps, containing both semantic and geometric information about objects in the environment, can be utilised to learn complex robotic tasks that require navigation, exploration, and interaction with the environment. Our guiding hypothesis for this project is that learning from rich semantic-geometric representations, rather than from low-level sensor data, can strongly benefit learning complex tasks.
We expect to show that we can significantly improve sample efficiency, considerably reduce overfitting to the training environment, and aid transfer between different environments or robots. For this project we can build on our prior work in semantic SLAM, preparatory work that supports our hypothesis that rich semantic information aids task learning, and recent initial results demonstrating the utility of reinforcement learning from semantic graph-based maps using graph convolutional networks.
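To make the idea concrete, the sketch below (illustrative only, not the project's implementation) shows one way a graph convolutional policy could consume a semantic-geometric object graph: each node carries an object's class and position, edges encode spatial relations between objects, and two rounds of neighbourhood aggregation produce a map-level embedding from which action logits are predicted. All names, feature choices, and dimensions are hypothetical assumptions for this example.

```python
# Illustrative sketch only: a GCN-style policy over a semantic-geometric object graph.
# Node features, edge construction, and the action space are hypothetical placeholders.
import torch
import torch.nn as nn


def normalised_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalise an adjacency matrix with self-loops: D^-1/2 (A + I) D^-1/2."""
    a_hat = adj + torch.eye(adj.size(0))
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt


class GCNPolicy(nn.Module):
    """Two graph-convolution layers, mean pooling over object nodes, and an action head."""

    def __init__(self, node_dim: int, hidden_dim: int, num_actions: int):
        super().__init__()
        self.w1 = nn.Linear(node_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_hat = normalised_adjacency(adj)
        h = torch.relu(a_hat @ self.w1(node_feats))  # propagate semantic/geometric node features
        h = torch.relu(a_hat @ self.w2(h))           # second round of neighbourhood aggregation
        graph_emb = h.mean(dim=0)                    # pool object nodes into a map-level embedding
        return self.head(graph_emb)                  # action logits for the RL agent


# Example: 5 objects, each described by a one-hot class label (8 classes) plus a 3-D position.
num_objects, num_classes = 5, 8
node_feats = torch.cat(
    [torch.eye(num_classes)[torch.randint(num_classes, (num_objects,))],
     torch.rand(num_objects, 3)],
    dim=1,
)
adj = (torch.rand(num_objects, num_objects) > 0.5).float()  # e.g. "objects are spatially close"
adj = ((adj + adj.T) > 0).float()                           # make the relation graph undirected
logits = GCNPolicy(node_dim=num_classes + 3, hidden_dim=32, num_actions=4)(node_feats, adj)
```

Because the policy operates on objects and their relations rather than on pixels, the same network can in principle be applied to maps of different sizes and to environments containing different object layouts, which is the intuition behind the expected gains in sample efficiency and transfer.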
Funding / Grants
- ARC Discovery Project (DP22)