DVS Benchmark Datasets for Object Tracking, Action Recognition, and Object Recognition
Yuhuang Hu, Hongjie Liu, Michael Pfeiffer and Tobi Delbruck
Institute of Neuroinformatics, University of Zürich and ETH Zürich, Zurich, Switzerland
Four targeted frame-based datasets
| Source Dataset | Task |
|---|---|
| VOT 2015 Dataset | Single-target object tracking |
| Tracking Dataset | Single-target object tracking |
| UCF-50 Dataset | Action recognition |
| Caltech-256 Dataset | Object recognition |
Statistics of converted DVS datasets
| Name | Domain | Nr. Recordings | Avg. Length/Recording (s) | Max. Event Rate (keps) | Avg. Event Rate (keps) |
|---|---|---|---|---|---|
How to Get Datasets
The dataset can be obtained in two ways: via the Resilio Sync file-sharing service (recommended) or via the direct download links. Please note that we pay for the network traffic of hosting the dataset, so please use the Resilio Sync service if you can. The dataset is provided in two formats: 1. the raw AEDAT format, which is compatible with jAER; and 2. an HDF5 format that can easily be used from other programming tools.
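As a rough illustration of working with the raw AEDAT format, the sketch below decodes a buffer of AEDAT 2.0 DVS128 events. The layout assumed here (8 bytes per event, big-endian, a 32-bit address word followed by a 32-bit timestamp in microseconds, with y in address bits 8–14, x in bits 1–7, and polarity in bit 0) is the common jAER DVS128 convention and should be verified against the jAER documentation before use:

```python
import struct
import numpy as np

def decode_aedat2_dvs128(data: bytes):
    """Decode raw AEDAT 2.0 event bytes (header '#' lines already stripped).

    Assumed layout (jAER DVS128 convention, not confirmed by this dataset's
    docs): big-endian 32-bit address word, then 32-bit timestamp in us.
    """
    raw = np.frombuffer(data, dtype=">u4").reshape(-1, 2)
    addr, ts = raw[:, 0], raw[:, 1]
    x = (addr >> 1) & 0x7F     # 7-bit x address
    y = (addr >> 8) & 0x7F     # 7-bit y address
    pol = addr & 0x1           # polarity bit
    return x, y, pol, ts

# Tiny synthetic buffer: one event at (x=10, y=20, pol=1), t=1000 us.
addr_word = (20 << 8) | (10 << 1) | 1
buf = struct.pack(">II", addr_word, 1000)
x, y, pol, ts = decode_aedat2_dvs128(buf)
print(x[0], y[0], pol[0], ts[0])  # → 10 20 1 1000
```

For real recordings, the ASCII header lines beginning with `#` at the top of the `.aedat` file must be skipped before decoding the binary payload.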
Through Resilio Sync (RECOMMENDED)
All datasets can be downloaded through the personal file-sharing service Resilio Sync (formerly BitTorrent Sync). Use this link to access the datasets.
Please view the dataset listing via this URL.
2020-06-06: Thanks to an issue report, we found that the basketball sequence in the VOT dataset has incorrect ground-truth labels for the last 46 frames. This issue is caused by the recording itself. We encourage users to use only the first 5768190 events (the first 678 frames) of this recording.
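If the basketball events have been loaded as arrays (e.g. from the HDF5 release), restricting the recording to the trustworthy portion is a simple slice. The sketch below uses placeholder arrays; the actual HDF5 key names and array shapes in the release may differ:

```python
import numpy as np

# First 5768190 events (678 frames) of the basketball sequence have
# correct ground truth; later events should be discarded.
N_GOOD = 5768190

# Placeholder event arrays standing in for the real HDF5 contents
# (hypothetical names; not the dataset's actual keys).
timestamps = np.arange(6_000_000, dtype=np.uint32)
addresses = np.zeros(6_000_000, dtype=np.uint32)

timestamps = timestamps[:N_GOOD]
addresses = addresses[:N_GOOD]
print(len(timestamps))  # → 5768190
```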
- Precise control of recording logging with Python.
- A user interface for showing the video or images during the recording routine.
- An experiment configuration system (JSON-style).
- Post-recording signal analysis and selection tools.
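To make the JSON-style configuration concrete, a hypothetical experiment configuration might look like the following. All field names here are invented for illustration and do not reflect the tool's actual schema:

```json
{
  "experiment": "vot2015_basketball",
  "output_dir": "recordings/basketball",
  "display": { "show_video": true, "fps": 30 },
  "logging": { "format": "aedat" }
}
```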
A Closer Look
Questions about these datasets should be directed to:
Hu, Y., Liu, H., Pfeiffer, M., and Delbruck, T. (2016). DVS Benchmark Datasets for Object Tracking, Action Recognition and Object Recognition. Frontiers in Neuroscience, 10:405.
Hu, Y. (2016). Generation of Benchmarks for Visual Recognition with Spiking Neural Networks. NSC Short Project Report. Zürich, Switzerland: University of Zürich and ETH Zürich.
For more technical information, see this Google Doc.
This research is supported by the European Commission project VISUALISE (FP7-ICT-600954), SeeBetter (FP7-ICT-270324), and the Samsung Advanced Institute of Technology.
We gratefully acknowledge the creators of the original datasets.
This dataset is hosted as part of the INI Sensors Group Databases