Autonomous vehicles (AVs) require a highly accurate estimate of their current location. This is achieved through an algorithmic pipeline whose components have been individually refined over decades of research: image-retrieval-based place recognition, local feature matching, pose estimation with minimal solvers, robust outlier rejection (most notably RANSAC), pose refinement, and bundle adjustment. However, these components do not always communicate with each other effectively. This is especially true between the components based on appearance (color) and those based on geometry (triangulated 3D points); that is, the geometric stages do not always 'see' the color-based features they are processing. This PhD project will develop novel ways of injecting appearance information into geometric estimation techniques to improve the overall localization estimates.
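To make the pipeline stages concrete, the robust-estimation step (a minimal solver inside a RANSAC loop, followed by refinement on the inliers) can be sketched in miniature for a 2D rigid transform. This is an illustrative toy, not the AV pipeline itself: the function names are invented here, and the 2-point minimal sample applies only to the 2D case (real pose estimation uses, e.g., 5-point or PnP solvers).

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares 2D rotation + translation from point correspondences
    (Kabsch-style closed form via SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def ransac_rigid_2d(src, dst, iters=200, thresh=0.05, seed=None):
    """RANSAC: repeatedly fit a minimal sample (2 points suffice for the
    3-DoF 2D rigid transform), keep the hypothesis with most inliers,
    then refine on all inliers -- 'pose refinement' in miniature."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)
        R, t = estimate_rigid_2d(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = estimate_rigid_2d(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers
```

Note how the appearance information (which features were matched, and how confidently) never enters this geometric loop; every correspondence is treated identically. That gap is exactly the kind of disconnect the project targets.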
The student will have the opportunity to gain a deep understanding of how currently deployed AVs perform vision-based localization. The project lies at the intersection of robotics, computer vision, and machine learning, taking the student on an exciting journey through key concepts of all three fields.