Omnidirectional Stereo Dataset
We present synthetic datasets for omnidirectional stereo. We virtually implement a camera rig with four mounted fisheye cameras and render the datasets using Blender.
Contact: Changhee Won (changhee.won@multipleye.co)
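The camera calibration of the rig is provided in the config.yaml file of each dataset. As a rough illustration of how such a wide-angle lens projects, below is a minimal sketch of an equidistant fisheye model (r = f·θ), a common approximation for 220° lenses; all parameters here are hypothetical placeholders, not the datasets' actual calibration.

```python
import numpy as np

def equidistant_fisheye_project(pts, f=160.0, cx=400.0, cy=400.0, fov_deg=220.0):
    """Project 3D points (N, 3) in the camera frame to pixels with an
    equidistant fisheye model, r = f * theta. All parameters here are
    hypothetical placeholders; the real calibration is in config.yaml."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)         # angle from the optical axis
    phi = np.arctan2(y, x)                        # azimuth around the axis
    r = f * theta                                 # equidistant mapping
    u = cx + r * np.cos(phi)
    v = cy + r * np.sin(phi)
    visible = theta <= np.deg2rad(fov_deg / 2.0)  # inside the 220-degree cone
    return np.stack([u, v], axis=1), visible

# A point 30 degrees off-axis projects at radius f * (pi/6) from the center.
pts = np.array([[np.tan(np.pi / 6), 0.0, 1.0]])
uv, vis = equidistant_fisheye_project(pts)
print(uv, vis)  # -> approximately [[483.8, 400.0]] [ True]
```

Note that with an equidistant model, points up to 110° from the optical axis (i.e., behind the image plane) still project to valid pixels, which is what makes four such cameras sufficient for full 360° coverage.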
Synthetic Urban Datasets
Each dataset consists of 1,000 sequential frames of city landscapes, which we split into two parts: the first 700 frames for training and the last 300 for testing.
Download
Sunny (3.91GB) | Cloudy (3.32GB) | Sunset (3.49GB) | 640x160 GT inverse depth (276.3MB) | config.yaml
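Assuming the frames in each archive are numbered sequentially (the directory layout and file naming below are hypothetical; the actual ones follow the downloaded archives), the 700/300 split can be reproduced by frame index:

```python
from pathlib import Path

# Hypothetical layout: one directory per camera, frames numbered sequentially.
root = Path("omni_urban/sunny/cam1")
frames = sorted(root.glob("*.png"))

train_frames = frames[:700]   # first 700 frames for training
test_frames = frames[700:]    # last 300 frames for testing
print(len(train_frames), len(test_frames))
```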
[Example frames for each condition (Sunny / Cloudy / Sunset): the four input fisheye images (front with 220° FOV, right, rear, left), the omnidirectional depth map, the inverse depth map, and the reference panorama.]
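Note that the urban datasets provide ground truth as 640x160 inverse depth, while OmniHouse and OmniThings below provide 640x320 depth. Converting between the two representations is a pointwise reciprocal; a minimal sketch, assuming the ground truth has already been loaded as a float array (the concrete file format is defined by the release):

```python
import numpy as np

def depth_to_inverse(depth, eps=1e-6):
    """Pointwise reciprocal; eps guards against zero (e.g. sky) pixels."""
    return 1.0 / np.maximum(depth, eps)

def inverse_to_depth(inv_depth, eps=1e-6):
    return 1.0 / np.maximum(inv_depth, eps)

inv_gt = np.random.rand(160, 640).astype(np.float32) + 0.01  # placeholder array
depth = inverse_to_depth(inv_gt)
assert np.allclose(depth_to_inverse(depth), inv_gt, rtol=1e-4)
```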
OmniHouse Dataset
OmniHouse consists of synthesized indoor scenes reproduced using models from the SUNCG dataset [2] and a few additional models. We collected 451 house models and provide 2,048 frames for training and 512 for testing.
Download
OmniHouse (9.48GB) | 640x320 GT depth (1.46GB) | config.yaml
[Example frames: the four input fisheye images (front with 220° FOV, right, rear, left), the omnidirectional depth map, the inverse depth map, and the reference panorama.]
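The omnidirectional depth maps are defined on an equirectangular grid: each pixel corresponds to a viewing ray from the rig center, and the stored (inverse) depth is measured along that ray. A minimal sketch of the pixel-to-ray mapping for a 640x320 map, assuming a standard equirectangular convention (longitude left to right, latitude top to bottom; the datasets' exact axis convention is an assumption here):

```python
import numpy as np

def equirect_rays(width=640, height=320):
    """Unit viewing rays for every pixel of an equirectangular depth map.
    Assumes longitude spans [-pi, pi) left to right and latitude spans
    [pi/2, -pi/2] top to bottom; the actual convention may differ."""
    u = (np.arange(width) + 0.5) / width          # horizontal coordinate in (0, 1)
    v = (np.arange(height) + 0.5) / height        # vertical coordinate in (0, 1)
    lon = u * 2.0 * np.pi - np.pi                 # longitude in [-pi, pi)
    lat = np.pi / 2.0 - v * np.pi                 # latitude in [pi/2, -pi/2]
    lon, lat = np.meshgrid(lon, lat)
    rays = np.stack([np.cos(lat) * np.cos(lon),   # x
                     np.cos(lat) * np.sin(lon),   # y
                     np.sin(lat)], axis=-1)       # z (up)
    return rays                                   # shape (320, 640, 3)

rays = equirect_rays()
points = rays * 5.0  # e.g. back-project a constant 5 m depth to 3D points
print(rays.shape, np.allclose(np.linalg.norm(rays, axis=-1), 1.0))
```

Multiplying each ray by its ground-truth depth back-projects the map to a 3D point cloud around the rig.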
OmniThings Dataset
OmniThings consists of scenes with randomly generated objects placed around the camera rig. We collected 33,474 3D object models from ShapeNet [3] and provide 9,216 frames for training and 1,024 for testing.
Download
OmniThings (37.34GB) | 640x320 GT depth (5.81GB) | config.yaml
[Example frames: the four input fisheye images (front with 220° FOV, right, rear, left), the omnidirectional depth map, the inverse depth map, and the reference panorama.]
Papers
Changhee Won, Jongbin Ryu, and Jongwoo Lim, "End-to-End Learning for Omnidirectional Stereo Matching with Uncertainty Prior", in TPAMI 2020. [paper] [video] [code]
Changhee Won, Jongbin Ryu, and Jongwoo Lim, "OmniMVS: End-to-End Learning for Omnidirectional Stereo Matching", in ICCV 2019. [paper] [video]
Changhee Won, Jongbin Ryu, and Jongwoo Lim, "SweepNet: Wide-baseline Omnidirectional Depth Estimation", in ICRA 2019. [paper] [video] [code]
Citation
@article{won2020end,
  title={End-to-End Learning for Omnidirectional Stereo Matching with Uncertainty Prior},
  author={Won, Changhee and Ryu, Jongbin and Lim, Jongwoo},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)},
  year={2020},
}

@inproceedings{won2019sweepnet,
  title={{SweepNet}: Wide-baseline Omnidirectional Depth Estimation},
  author={Won, Changhee and Ryu, Jongbin and Lim, Jongwoo},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  pages={6073--6079},
  year={2019},
}
License
These datasets are released under the Creative Commons license (CC BY-NC-SA 3.0) and are free to use for non-commercial purposes, including research.
References
[1] Zhang et al., "Benefit of Large Field-of-View Cameras for Visual Odometry", in ICRA 2016. [link]
[2] Song et al., "Semantic Scene Completion from a Single Depth Image", in CVPR 2017. [link]
[3] Chang et al., "ShapeNet: An Information-Rich 3D Model Repository", arXiv 2015. [link]
© 2018 CVLab at HYU