ACRV Picking Benchmark

[Image: the APB object set]

Overview

To support the scientific process, published results need to be reproducible and comparable. In robotics, with its variety of hardware platforms and multitude of sensing equipment, this is a very hard problem. For object manipulation in particular, there is currently no easy way to compare results.

Objectives

We propose a physical benchmark for robotic picking: its overall design, objects, configuration, and guidance on appropriate technologies to solve it. Challenges are an important way to drive progress, but they occur only occasionally, and their test conditions are difficult to replicate outside the challenge itself.

This benchmark is motivated by our experience in the recent Amazon Picking Challenge and comprises a commonly available shelf, 42 objects, a set of stencils, and standardized task setups. A major focus throughout the design of this benchmark was to maximise reproducibility: it defines a number of carefully chosen scenarios with precise instructions on how to place, orient, and align objects with the help of printable stencils.
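As a purely illustrative sketch, one way such a standardized task setup could be encoded for automated evaluation is shown below. The structure, field names, and object names are assumptions for illustration only, not the benchmark's official task description format.

    # Illustrative sketch only: a hypothetical encoding of an APB-style task setup.
    # Field names, bin identifiers, and object names are assumptions, not the
    # benchmark's actual specification.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ObjectPlacement:
        name: str          # object identifier from the 42-object set
        bin_id: str        # shelf bin label
        stencil_slot: int  # position/orientation marker on the printed stencil

    @dataclass
    class TaskSetup:
        task_id: str
        placements: List[ObjectPlacement] = field(default_factory=list)
        targets: List[str] = field(default_factory=list)  # objects to be picked

    # Hypothetical example: pick one item from a two-object bin.
    setup = TaskSetup(
        task_id="example_01",
        placements=[
            ObjectPlacement("toothbrush", "C", 1),
            ObjectPlacement("tennis_ball", "C", 2),
        ],
        targets=["tennis_ball"],
    )
    print(setup)

Encoding setups this way would let different labs run identical scenarios and report directly comparable results, which is the benchmark's central aim.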

To make the benchmark as accessible as possible to the research community, a white IKEA shelf is used for all picking tasks. Furthermore, we carefully curated the set of 42 objects to ensure global availability and to reduce the chance of import restrictions.

Milestones

  • Paper and dataset published at ICRA 2017
  • Workshop on benchmarking robotic object manipulation held at IROS 2017 (September 2017)
  • ICRA 2018 workshop planned


