Visual Odometry with a Single-Camera Stereo Omnidirectional System

Demonstration video using sequences from Grand Central Terminal (GCT):

Source code

The demonstration code can be found in the vo_single_camera_sos git repository.

Calibration Files

Data sets

Each sequence has two sub-folders:

  • omni, pertaining to the omnistereo data
  • rgbd, pertaining to the RGB-D camera data

Remarks about the “ground-truth” data used as reference

  1. Ground-truth poses follow the TUM trajectory format, so each line is a space-separated record (see the parsing sketch after this list):

    • timestamp $t_x$ $t_y$ $t_z$ $q_x$ $q_y$ $q_z$ $q_w$
  2. For each sequence, gt_TUM.txt contains the raw ground-truth data obtained from the motion-capture system. These poses are therefore given wrt our VICON mocap frame, $[\rm{V}]$.

  3. After running demo_vo_*.py on a sequence, the resulting files are written to the results subfolder within the sequence path:

    • estimated_frame_poses_TUM.txt contains the estimated poses ${}_{[{\rm{C}_i}]}^{[{\rm{K}_0}]}{\bf{\tilde T}}$ of the sequence wrt the initial camera frame, $[\rm{K}_0]$.
    • gt_associated_frame_poses_TUM.txt contains the associated ground-truth poses for the registered frames. They have already been transformed into the camera frame, $[\rm{C}]$, via the appropriate hand-eye transformation, so each pose is given as ${}_{[{\rm{C}_i}]}^{[{\rm{K}_0}]}{\bf{T}}$.
  4. For the real-life sequences of the RGB-D camera, the required hand-eye transformation can be downloaded from this link: rgbd_hand_eye_transformation.txt.
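
For reference, the following is a minimal sketch of how such TUM-format pose files could be parsed into homogeneous transformation matrices. The helper name, file paths, and the use of SciPy are illustrative assumptions, not part of the released demo code.

```python
# Minimal sketch: parse a TUM-format trajectory file
# (timestamp tx ty tz qx qy qz qw) into timestamps and 4x4 pose matrices.
# Illustrative only; the released demo scripts may read these files differently.
import numpy as np
from scipy.spatial.transform import Rotation


def load_tum_trajectory(path):
    """Return (timestamps, poses), where poses[i] is a 4x4 matrix for line i."""
    timestamps, poses = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip comments and blank lines
            t, tx, ty, tz, qx, qy, qz, qw = map(float, line.split())
            T = np.eye(4)
            # SciPy expects the scalar-last [x, y, z, w] order, matching TUM.
            T[:3, :3] = Rotation.from_quat([qx, qy, qz, qw]).as_matrix()
            T[:3, 3] = [tx, ty, tz]
            timestamps.append(t)
            poses.append(T)
    return np.array(timestamps), poses


# Example usage (hypothetical paths inside a sequence's results subfolder):
# stamps_est, T_est = load_tum_trajectory("results/estimated_frame_poses_TUM.txt")
# stamps_gt, T_gt = load_tum_trajectory("results/gt_associated_frame_poses_TUM.txt")
```

Once both trajectories are loaded this way, they can be compared directly, for example to compute an absolute trajectory error against the associated ground truth.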

Synthetic sequences

To run the demo_vo_sos.py script with the synthetic data set, it suffices to obtain the corresponding GUMS calibration file, gums-calibrated.pkl (a minimal loading sketch follows the table below).

| Name | # Frames |
| --- | --- |
| Office-0 | 1508 |
| Office-1 | 965 |
| Office-2 | 880 |
| Office-3 | 1240 |
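
As a quick sanity check, the calibration file can be inspected with Python's pickle module. This is only a sketch assuming a plain pickle file; the attributes of the loaded GUMS object depend on the vo_single_camera_sos code and are not described here.

```python
# Minimal sketch: inspect the GUMS calibration file shipped with the data set.
# ASSUMPTION: gums-calibrated.pkl is a plain Python pickle. Unpickling a custom
# class instance may require the vo_single_camera_sos modules to be importable.
import pickle

with open("gums-calibrated.pkl", "rb") as f:
    gums_model = pickle.load(f)

print(type(gums_model))  # the calibrated omnistereo (GUMS) model object
```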

Real-life sequences

  • To run the demo_vo_sos.py script with the real-life SOS data set, it suffices to obtain the corresponding GUMS calibration file, gums-calibrated.pkl.

  • To run the demo_vo_rgbd.py script with the real-life RGB-D data set, the hand-eye transformation file rgbd_hand_eye_transformation.txt is also required (see the sketch after this list).
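
The sketch below illustrates how a hand-eye transform could be composed with a mocap pose to express the ground truth in the camera frame, as is done for gt_associated_frame_poses_TUM.txt. The assumption that rgbd_hand_eye_transformation.txt stores a 4x4 homogeneous matrix, and the composition order shown, are mine; verify both against the actual file and the demo code.

```python
# Minimal sketch: apply a hand-eye transformation to a mocap pose so the
# ground truth is expressed in the camera frame.
# ASSUMPTION: the text file holds a 4x4 homogeneous matrix (row-major).
import numpy as np

T_hand_eye = np.loadtxt("rgbd_hand_eye_transformation.txt").reshape(4, 4)


def mocap_to_camera(T_V_rig, T_hand_eye):
    """Compose a mocap pose of the rig (wrt frame [V]) with the hand-eye
    transform to obtain the corresponding camera pose.
    NOTE: the composition order depends on how the transform is defined."""
    return T_V_rig @ T_hand_eye


# Example with an identity mocap pose (placeholder):
T_cam = mocap_to_camera(np.eye(4), T_hand_eye)
```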

Conventional motion

| Name | # Frames |
| --- | --- |
| Square Small | 619 |
| Square Smooth | 1325 |
| Spinning | 770 |
| Vertical | 459 |
| Free Style | 611 |
| Hallway | 5636 |

Moving under special conditions

| Name | # Frames |
| --- | --- |
| Into Wall - Regular | 1041 |
| Into Wall - Slow | 1400 |
| Into Wall - Fast | 896 |
| Into Wall - Curvy | 838 |
| Into Dark - Straight | 998 |
| Into Dark - Turning | 1260 |

Moving in dynamic environments

| Name | # Frames |
| --- | --- |
| Slow Dynamic | 390 |
| Fast Dynamic | 518 |
| GCT Clock | 2179 |
| GCT Stairs | 3625 |

Static rigs in dynamic environments

| Prox. [m] | # People | File Link | # Frames |
| --- | --- | --- | --- |
| 1 | 1 | static_dynamic_1_1.zip | 691 |
| 1 | 2 | static_dynamic_1_2.zip | 759 |
| 1 | 4 | static_dynamic_1_4.zip | 791 |
| 2 | 1 | static_dynamic_2_1.zip | 679 |
| 2 | 2 | static_dynamic_2_2.zip | 673 |
| 2 | 4 | static_dynamic_2_4.zip | 799 |
| 3 | 1 | static_dynamic_3_1.zip | 720 |
| 3 | 2 | static_dynamic_3_2.zip | 815 |
| 3 | 4 | static_dynamic_3_4.zip | 772 |
| Var | 2 | static_dynamic_freestyle.zip | 939 |
| Var | Var | GCT_static.zip | 1904 |

Citation

When using this dataset in your research, please cite:

@ARTICLE{Jaramillo2019MVAP,
  author = {Carlos Jaramillo and Liang Yang and Pablo Munoz and Yuichi Taguchi and Jizhong Xiao},
  title = {Visual Odometry with a Single-Camera Stereo Omnidirectional System},
  journal = {Springer Machine Vision and Applications (MVAP)},
  year = {2019}
}

All datasets on this page are copyrighted by Carlos Jaramillo and published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.
This means that you must attribute the work in the manner specified by the authors, you may not use this work for commercial purposes, and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license.

Carlos Jaramillo
Senior Autonomy Engineer

My research interests include mobile robotics, computer vision and machine learning.
