Deep Learning for Grasping and Manipulation


Why it matters

Recent advances in deep learning have driven impressive progress in many areas of computer vision, including classification, detection, and segmentation. While all of these areas are relevant to robotics applications, robotics also presents many unique challenges that require new approaches.

Objectives

Visual feedback is crucial for performing a range of grasping and manipulation tasks in unstructured environments. In particular, we are looking at:

  • Improving robotic hand-eye coordination through learning
  • Learning reaching behaviours with reinforcement learning
  • Extracting grasp points from images
  • Combining deep learning with well-established approaches such as visual servoing
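To give a sense of the visual-servoing side of the last objective, the classical image-based control law computes a camera velocity v = -λ L⁺ (s − s*) from the error between current and desired image features. Below is a minimal NumPy sketch of that law for point features; the feature coordinates, depths, and gain are illustrative assumptions, not project code, and a learned component would typically replace or augment the hand-derived interaction matrix.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for a single point feature
    at normalized image coordinates (x, y) observed at depth Z.
    Maps the 6-DoF camera velocity to the feature's image velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classical IBVS control law: v = -lam * pinv(L) @ (s - s*).

    features, desired: lists of (x, y) normalized image coordinates.
    depths: estimated depth Z for each current feature.
    Returns a 6-vector (vx, vy, vz, wx, wy, wz) of camera velocity.
    """
    # Stack the 2x6 interaction matrices of all features into one matrix.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error
```

When the observed features coincide with the desired ones, the error is zero and the commanded velocity vanishes; otherwise the law drives the features toward their desired image positions at a rate set by the gain λ.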

Project Milestones


Other Team Members

PhD Students: Fangyi Zhang, Adam Tow, Douglas Morrison

To quickly test ideas, researchers use the Baxter robot, as it has a very friendly interface.