[The industry partner for this project has not yet been publicly announced]
Mapping, localization, SLAM, and visual place recognition research has advanced rapidly over the past decade, especially with the advent of modern deep learning techniques and more capable sensing. Emerging applications such as consumer-facing robotics require high localization performance across a challenging range of environment types and conditions, yet typically face far tighter sensor and compute budgets than, say, autonomous vehicles. The requirements of these consumer-facing autonomous systems also differ notably from those of on-road autonomous vehicles in several respects, providing novel opportunities to develop localization systems that are not feasible in those other domains.

This project will develop a novel hierarchical multi-process fusion localization framework that explicitly characterizes and leverages three varying characteristics of localization techniques in order to fuse them optimally: the distribution of their localization hypotheses; their appearance- and viewpoint-invariance properties; and the resulting differences in the conditions and locations within an environment where each system works well or fails. We will develop alternative localization systems that extract more consumer-relevant performance, including longevity in consumer environments, minimal human interaction, and power- and compute-efficient local processing that preserves privacy, from drastically cheaper and more lightweight sensing and compute configurations.
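To illustrate the kind of fusion the framework envisions, the sketch below combines the hypothesis distributions of several localization methods, weighting each by a condition-dependent reliability score. Everything here is a hypothetical toy, including the method names, the fixed weights, and the simple weighted-sum rule; the actual framework's fusion mechanism is hierarchical and is not specified in this summary.

```python
import numpy as np


def fuse_hypotheses(hypotheses, reliabilities):
    """Fuse per-method localization hypothesis distributions.

    hypotheses: dict mapping method name -> probability distribution
        over N candidate places (array-like, sums to 1).
    reliabilities: dict mapping method name -> scalar weight
        reflecting how well that method performs under the current
        conditions (hypothetical; a real system would estimate these).
    Returns the fused distribution and the index of the best place.
    """
    n = len(next(iter(hypotheses.values())))
    fused = np.zeros(n)
    for name, dist in hypotheses.items():
        # Weight each method's belief by its estimated reliability.
        fused += reliabilities.get(name, 0.0) * np.asarray(dist, dtype=float)
    fused /= fused.sum()  # renormalize to a valid distribution
    return fused, int(np.argmax(fused))


# Toy example: an appearance-invariant method and a viewpoint-invariant
# method disagree; condition-dependent weights (e.g. night-time favours
# the appearance-invariant method) break the tie.
hyps = {
    "appearance_net": np.array([0.1, 0.7, 0.2]),
    "viewpoint_net": np.array([0.6, 0.3, 0.1]),
}
weights = {"appearance_net": 0.8, "viewpoint_net": 0.2}
fused, best = fuse_hypotheses(hyps, weights)
```

In this toy case the fused distribution is [0.2, 0.62, 0.18], so place index 1 is selected even though one method preferred place 0: the method known to be more reliable in the current conditions dominates.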