Robust Monocular Visual Odometry Trajectory Estimation in Urban Environments

Full Text (PDF, 711 KB), pp. 12-18

Author(s)

Ahmed Abdu 1,*, Hakim A. Abdo 2, Al-Alimi Dalal 3

1. Faculty of Information Engineering, China University of Geosciences, Wuhan, China

2. Babasaheb Ambedkar Marathwada University, Aurangabad, India

3. Faculty of Computer Science, China University of Geosciences, Wuhan, China

* Corresponding author.

DOI: https://doi.org/10.5815/ijitcs.2019.10.02

Received: 18 Jun. 2019 / Revised: 15 Jul. 2019 / Accepted: 26 Jul. 2019 / Published: 8 Oct. 2019

Index Terms

Trajectory estimation, Monocular Visual Odometry, ORB, feature points, feature matching, PnP

Abstract

Visual SLAM (Simultaneous Localization and Mapping) is widely used in autonomous robots and vehicles for autonomous navigation. Trajectory estimation is one component of Visual SLAM: the camera pose must be estimated so that image measurements can be aligned with real-world locations. In this paper, we present a new framework for trajectory estimation aided by monocular Visual Odometry. The proposed method combines feature-point extraction and matching based on ORB (Oriented FAST and Rotated BRIEF) with pose recovery via PnP (Perspective-n-Point). We used a Matlab® dynamic model and an OpenCV/C++ computer vision platform to implement a highly robust monocular Visual Odometry mechanism for trajectory estimation in outdoor environments. The proposed method shows that meaningful depth estimates can be extracted and that frame-to-frame rotations and translations can be estimated successfully, even over large, texture-less views. The best key points are selected from the ORB detector according to their key-point response values, and using these selected key points reduces trajectory-estimation error. Finally, the robustness and high performance of the proposed method were verified on image sequences from the public KITTI dataset.
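
As a concrete illustration of the pipeline described above, the sketch below shows how ORB detection, response-based key-point selection, Hamming-distance matching, and RANSAC-based PnP fit together using OpenCV's C++ API. This is a minimal sketch, not the authors' implementation: the function names, the key-point budgets (2000 detected, 1000 retained), and the omitted triangulation step are illustrative assumptions.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Detect ORB key points in a frame, keep only the strongest ones by
// response value (the selection criterion named in the abstract), and
// compute their rBRIEF descriptors.
static void detectStrongOrb(const cv::Mat& frame,
                            std::vector<cv::KeyPoint>& kps, cv::Mat& des,
                            int keep = 1000)                  // assumed budget
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);             // assumed budget
    orb->detect(frame, kps);
    cv::KeyPointsFilter::retainBest(kps, keep);   // response-based selection
    orb->compute(frame, kps, des);
}

// Match binary descriptors between consecutive frames with Hamming
// distance; cross-checking rejects one-directional (likely wrong) matches.
static std::vector<cv::DMatch> matchFrames(const cv::Mat& des1,
                                           const cv::Mat& des2)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(des1, des2, matches);
    return matches;
}

// Recover camera rotation (rvec) and translation (tvec) from 2D-3D
// correspondences with RANSAC-based PnP. pts3d come from triangulating
// matched features in earlier frames (omitted here), pts2d are their
// pixel locations in the current frame, K is the 3x3 intrinsic matrix.
static void poseFromPnP(const std::vector<cv::Point3f>& pts3d,
                        const std::vector<cv::Point2f>& pts2d,
                        const cv::Mat& K, cv::Mat& rvec, cv::Mat& tvec)
{
    cv::solvePnPRansac(pts3d, pts2d, K, cv::noArray(), rvec, tvec);
}
```

Cross-checked brute-force matching with cv::NORM_HAMMING is the standard pairing for ORB's binary descriptors, and the RANSAC loop inside cv::solvePnPRansac discards outlier correspondences that would otherwise corrupt the frame-to-frame pose estimate.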

Cite This Paper

Ahmed Abdu, Hakim A. Abdo, Al-Alimi Dalal, "Robust Monocular Visual Odometry Trajectory Estimation in Urban Environments", International Journal of Information Technology and Computer Science (IJITCS), Vol. 11, No. 10, pp. 12-18, 2019. DOI: 10.5815/ijitcs.2019.10.02
