Fully Funded PhD Positions (2x) on Mapping the World: Understanding the Environment through Spatio-Temporal Implicit Representations

Overview

Accurately mapping large-scale infrastructure assets (power poles, bridges, buildings, whole suburbs and cities) remains exceptionally challenging for robots.

The problem becomes even harder when we ask robots to map structures with intricate geometry or when the appearance or the structure of the environment changes over time, for example due to corrosion or construction activity.

The difficulty increases further when sensor data from a range of different sensors (e.g. lidars and cameras, but also more specialised hardware such as gas sensors) needs to be integrated, and when that data is gathered by multiple heterogeneous agents (e.g. robots, drones, or human-operated sensor platforms of different kinds).

Extracting insights and knowledge from the created maps is another ongoing challenge, especially when the requested insights are of a semantic or similarly high-level nature, or are not even fully known at the time the representation is created.

In this PhD project, you will develop new algorithms that enable robots to better map, represent, and understand the world around them.


You will tackle this problem in close collaboration with researchers from the QUT Centre for Robotics, our industry partner Emesent, and researchers from the University of Sydney, the Australian National University, and the Australian Robotic Inspection and Asset Management Hub (ARIAM).

ARIAM is a 5-year, $10 million research project with the University of Sydney, the QUT Centre for Robotics, the Australian National University, and over 10 industry partners. You will have the opportunity to work with researchers from the involved institutions and participate in a range of exciting professional development activities.

You will also be part of the QUT Centre for Robotics, which offers a vibrant research culture with a variety of social and professional activities, ranging from PhD boardgame nights to short courses on professional skills such as presenting, academic writing, managing your time, preparing a CV, or preparing for job interviews.

Research activities

This project aims to investigate novel algorithms that can efficiently construct and maintain an implicit neural field representation from diverse sensor data, such as lidars and cameras, but also more specialised hardware such as gas sensors. The resulting representation is spatio-temporal: it not only represents the 3D spatial structure of an environment but also incorporates a temporal dimension, making it possible to integrate sensor data captured at different points in time. The project will further investigate how sensor data from multiple heterogeneous robotic platforms can be fused into the implicit representation.
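
To make the core idea concrete, here is a minimal sketch (assuming PyTorch; the class name, the plain MLP architecture, and the output heads are illustrative assumptions, not the project's actual design). A neural field is a network mapping a continuous query coordinate to scene properties; the spatio-temporal variant simply adds time as a fourth input dimension:

    import torch
    import torch.nn as nn

    class SpatioTemporalField(nn.Module):
        """Illustrative neural field: maps a 4D query (x, y, z, t) to an
        occupancy value plus a feature vector that downstream heads
        (e.g. semantics, gas concentration) could decode."""

        def __init__(self, hidden: int = 256, feature_dim: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(4, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1 + feature_dim),
            )

        def forward(self, xyzt: torch.Tensor):
            out = self.net(xyzt)                   # (N, 1 + feature_dim)
            occupancy = torch.sigmoid(out[:, :1])  # probability of occupied space
            features = out[:, 1:]                  # latent features for semantic heads
            return occupancy, features

    # Query the same 3D point at two different times:
    field = SpatioTemporalField()
    point = torch.tensor([[1.0, 2.0, 0.5]])
    occ_t0, _ = field(torch.cat([point, torch.zeros(1, 1)], dim=1))  # t = 0
    occ_t1, _ = field(torch.cat([point, torch.ones(1, 1)], dim=1))   # t = 1

In practice such a field would be fitted to posed sensor observations, e.g. via differentiable rendering or direct supervision; the sketch only illustrates the 4D query interface.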

Furthermore, the project aims to develop new algorithms that extract high-level, semantically meaningful information and insights from the resulting representation of the environment, including identifying relevant changes in the environment over time. The nature of these insights will vary across concrete applications.

We will recruit two PhD students with a strong background in machine learning and robotic vision. Each PhD student will be responsible for working towards one of the two aims described above.

Within the first 3 months, both students will conduct literature reviews and formulate concrete research questions and a research plan under the guidance of the supervisory team and a representative of Emesent.

Following this initial stage, PhD1 will investigate the efficacy of implicit representations such as neural fields for integrating multi-modal sensor data in large-scale environments. Possible goals include (a small code sketch of the multi-modal idea follows the list):

  • extension of existing neural field representations to include multi-modal sensor data beyond the ubiquitous cameras and dense depth data (e.g. sparse depth, gas sensors and others)
  • development of approaches to allow for incremental updates of such a representation
  • development of a new neural field representation that explicitly handles time as an input dimension, thus resulting in a 4D spatio-temporal representation.
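
As an illustration of the first goal, the sketch below (again assuming PyTorch; the backbone, head names, and toy supervision are assumptions made for brevity) shows one common pattern: a shared backbone encodes a query location, and lightweight per-modality heads decode the quantities each sensor observes, so sparse modalities such as gas readings can supervise the field only at the locations where they were actually measured:

    import torch
    import torch.nn as nn

    class MultiModalField(nn.Module):
        """Illustrative multi-modal neural field: a shared backbone maps a
        3D query point to a latent feature; per-modality heads decode it
        into quantities observed by different sensors."""

        def __init__(self, hidden: int = 256):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.heads = nn.ModuleDict({
                "occupancy": nn.Linear(hidden, 1),  # from lidar / sparse depth
                "rgb": nn.Linear(hidden, 3),        # from cameras
                "gas": nn.Linear(hidden, 1),        # from a gas sensor
            })

        def forward(self, xyz: torch.Tensor):
            feat = self.backbone(xyz)
            return {name: head(feat) for name, head in self.heads.items()}

    # Each sensor supervises only its own head, at the points it observed:
    field = MultiModalField()
    pred = field(torch.rand(8, 3))  # 8 query points (toy random data)
    gas_loss = nn.functional.mse_loss(pred["gas"], torch.rand(8, 1))

The design choice here is that modalities share geometry through the common backbone while remaining independently supervisable, which is one plausible way to accommodate sensors with very different sampling densities.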

Simultaneously, PhD2 will investigate how high-level semantic knowledge and insights can be extracted from a large-scale implicit representation. Specific goals will be developed in close consultation with Emesent, but could include (a toy change-detection sketch follows the list):

  • detecting change in the scene (geometry, semantics, or appearance) over time
  • developing a learning-based approach to distinguish relevant from irrelevant change
  • predicting change over time, including extrapolating into the future.
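
As a toy illustration of the change-detection goal (assuming PyTorch and the 4D query interface sketched earlier; the fixed threshold is a deliberately naive stand-in for the learned relevance models this project would develop):

    import torch
    import torch.nn as nn

    def detect_change(field: nn.Module, points: torch.Tensor,
                      t0: float, t1: float, threshold: float = 0.3) -> torch.Tensor:
        """Illustrative change detection: query a spatio-temporal field
        (input (x, y, z, t), output a pre-sigmoid occupancy score) at the
        same 3D points for two timestamps and flag points whose predicted
        occupancy differs by more than a threshold."""
        def query(t: float) -> torch.Tensor:
            tcol = torch.full((points.shape[0], 1), t)
            return torch.sigmoid(field(torch.cat([points, tcol], dim=1)))

        return (query(t1) - query(t0)).abs() > threshold  # boolean change mask

    # Toy usage with a randomly initialised stand-in field:
    field = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
    points = torch.rand(100, 3)  # 100 sample locations
    changed = detect_change(field, points, t0=0.0, t1=1.0)
    print(f"{changed.sum().item()} of {len(points)} points flagged as changed")

Distinguishing relevant from irrelevant change would replace the fixed threshold with a learned classifier over the field's features, and prediction could query the field at future timestamps.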

About Us

The QUT Centre for Robotics (QCR) conducts world-leading research at scale in intelligent robotics; translates fundamental research into commercial and societal outcomes; is a leader in education, training, and development of talent to meet the growing demand for expertise in robotics and autonomous systems; and provides leadership in technology policy development and societal debate. Established in 2020, the centre builds on the momentum of a decade of investment in robotics research and translation at QUT, funded by QUT, the ARC, the Queensland Government, CRCs, and industry. QCR comprises over 100 researchers and engineers.

QCR researchers collaborate with universities and industry around the world, including MIT, Harvard, and Oxford, as well as Boeing, Thales, DST, Airservices Australia, CASA, JARUS, TRAFI, Google DeepMind, Google AI, Amazon Robotics, Caterpillar, Rheinmetall, the US Air Force, and NASA's Jet Propulsion Laboratory.

We are proud of our large, modern lab space and research environment, and we have a fantastic collection of equipment to support your research, including many mobile robot platforms and robotic arms.

The centre supports a flexible working environment and a diverse and inclusive atmosphere, and we encourage applications from women and from Aboriginal and Torres Strait Islander people.

Outcomes

This project will generate new knowledge and insights into the efficacy of implicit representations such as neural fields for mapping large-scale environments from multi-modal and multi-agent data. It will deliver reports, publications, datasets, and research-grade implementations and source code for the developed methods.

These outputs could be translated into new capabilities for the project partner's business, giving the industry partner a first-to-market advantage. In addition, the project trains two PhD researchers with expert knowledge in an area of key importance to the industry partner.

Skills and experience

Applicants should have a strong background in machine learning and robotic vision, as well as strong programming skills in Python. Good teamwork, verbal and written communication skills, and strong intrinsic motivation are essential.

Scholarships: Two fully-funded positions available

We have two fully funded PhD positions ($40,000 per year, tax free) available for this project, starting immediately (May 2023).

Please check your eligibility before contacting the principal supervisor, Prof Niko Suenderhauf.