Deep Learning for Grasping and Manipulation

Project Overview

Recent advances in deep learning have driven impressive progress in many areas of computer vision, including classification, detection, and segmentation. While all of these areas are relevant to robotics, robotic applications also present unique challenges that require new approaches.

Project Objectives

Visual feedback is crucial for performing a range of grasping and manipulation tasks in unstructured environments. In particular, we are investigating:

  • Improving robotic hand-eye coordination through learning
  • Learning reaching behaviours with reinforcement learning
  • Extracting grasp points from images
  • Combining deep learning with well-established approaches such as visual servoing
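To illustrate the last objective, the sketch below shows the classical image-based visual servoing (IBVS) control law that deep-learned feature extractors can be combined with: given current and desired image-plane features, the camera velocity is computed as v = -λ L⁺ (s - s*), where L is the interaction matrix. This is a generic textbook formulation, not the project's own code; the function names and the fixed gain are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalised image
    point (x, y) at depth Z, relating feature motion to the 6-DoF
    camera velocity (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS law: v = -gain * L^+ (s - s*).

    features / desired: lists of (x, y) normalised image points,
    depths: estimated depth Z for each current feature point.
    Returns a 6-vector camera velocity command."""
    # Stack one 2x6 interaction matrix per feature point.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    # Feature error s - s*, flattened to a vector.
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    # Least-squares velocity via the Moore-Penrose pseudo-inverse.
    return -gain * np.linalg.pinv(L) @ error
```

In a learned variant, the feature points (or the error itself) would come from a neural network rather than hand-tuned trackers, while the control law stays the same.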

Project Milestones


Team

Other Team Members

PhD Students: Fangyi Zhang, Adam Tow, Douglas Morrison

Partners

Publications

  • Adam Tow, Niko Sünderhauf, Sareh Shirazi, Michael Milford, Jürgen Leitner (2017) What Would You Do? Acting by Learning to Predict.
  • Fangyi Zhang, Jürgen Leitner, Michael Milford, Peter Corke (2017) Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies.
  • Quentin Bateux, Eric Marchand, Jürgen Leitner, Francois Chaumette, Peter Corke (2017) Visual Servoing from Deep Neural Networks.

To quickly test ideas, researchers use the Baxter robot, as it has a very friendly interface.