This PhD aims to investigate methods for enabling robots to intelligently move their perception systems to improve their view of a target object.
Typically, robots capture images of their environment and then decide how to act: grasping an item, moving to a location, and so on. Sometimes, however, a robot must gather more information before it can make a good decision. How can a robot decide where to move its sensors (e.g. a camera) so that it learns more about its environment?
One application area is highly occluded environments, such as natural agricultural settings, where leaves, branches and other obstacles often block the view of a target.
Taking inspiration from optimal control and visual servoing techniques, can new methods be developed that enable robots to intelligently maneuver their perception systems in order to improve their view of a target and hence improve the likelihood of succeeding at their tasks?
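As a point of reference for the visual servoing techniques mentioned above, the sketch below shows the classic image-based visual servoing (IBVS) control law, v = -λ L⁺(s - s*), which maps the error between current and desired image features to a camera velocity command. This is a standard textbook formulation, not a method proposed by this project; the gain, feature points, and depth values are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalised image point.

    Relates the point's image-plane velocity to the camera's spatial
    velocity twist (vx, vy, vz, wx, wy, wz). Standard IBVS result.
    """
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,     -(1 + x**2),  y],
        [0.0,      -1.0 / Z,  y / Z, 1 + y**2,  -x * y,      -x],
    ])

def ibvs_velocity(points, desired, depths, gain=0.5):
    """Classic IBVS law: v = -gain * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Illustrative example: four feature points that the camera should
# drive towards their desired image locations.
current = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
desired = [(0.12, 0.12), (-0.12, 0.12), (-0.12, -0.12), (0.12, -0.12)]
v = ibvs_velocity(current, desired, depths=[1.0] * 4)
print(v)  # 6-vector camera velocity command
```

A research direction in this vein would extend such reactive laws with occlusion-aware objectives, so that the commanded motion also improves visibility of the target rather than only reducing feature error.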