ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition
- Daniela Massiceti,
- Luisa Zintgraf,
- John Bronskill,
- Lida Theodorou,
- Matthew Tobias Harris,
- Ed Cutrell,
- Cecily Morrison,
- Katja Hofmann,
- Simone Stumpf
Object recognition has made great advances in the last decade, but it still predominantly relies on many high-quality training examples per object category. In contrast, learning new objects from only a few examples could enable many impactful applications, from robotics to user personalization. Most few-shot learning research, however, has been driven by benchmark datasets that lack the high variation these applications will face when deployed in the real world. To close this gap, we present the ORBIT dataset and benchmark, grounded in a real-world application of teachable object recognizers for people who are blind/low-vision. The dataset contains 3,822 videos of 486 objects recorded by people who are blind/low-vision on their mobile phones, and the benchmark reflects a realistic, highly challenging recognition problem, providing a rich playground to drive research in robustness to few-shot, high-variation conditions. We set the first state-of-the-art on the benchmark and show that there is massive scope for further innovation, holding the potential to impact a broad range of real-world vision applications, including tools for the blind/low-vision community. The dataset is available at https://doi.org/10.25383/city.14294597 and the code to run the benchmark at https://github.com/microsoft/ORBIT-Dataset.
Publication Downloads
ORBIT Dataset
August 25, 2021
The ORBIT dataset is a collection of videos of objects in clean and cluttered scenes recorded by people who are blind/low-vision on a mobile phone. The dataset is presented with a teachable object recognition benchmark task which aims to drive few-shot learning on challenging real-world data.
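The teachable object recognition setting described above can be illustrated with a minimal episode-sampling sketch: a user's own videos of each object are split into a small "support" (teaching) set and a "query" set to be recognized. This is a hypothetical illustration of the task structure only, not the API of the ORBIT codebase; all names and data below are made up.

```python
import random

def sample_episode(videos_by_object, n_support=1, seed=0):
    """Sample a toy few-shot episode for one user: for each of the
    user's objects, pick n_support videos as the support (teaching)
    set and keep the rest as the query set to be recognized.
    Illustrative only; real ORBIT episodes operate on video frames
    with per-user splits defined by the benchmark."""
    rng = random.Random(seed)
    support, query = [], []
    for obj, vids in videos_by_object.items():
        vids = list(vids)
        rng.shuffle(vids)
        support += [(v, obj) for v in vids[:n_support]]
        query += [(v, obj) for v in vids[n_support:]]
    return support, query

# Hypothetical per-user data: a few short clips of each personal object.
user_videos = {
    "keys":   ["keys_clip_1", "keys_clip_2", "keys_clip_3"],
    "wallet": ["wallet_clip_1", "wallet_clip_2"],
}
support, query = sample_episode(user_videos, n_support=1)
```

A recognizer is then adapted ("taught") on the support clips and evaluated on the query clips, per user, which is what makes the benchmark a few-shot, high-variation problem.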