Parallel Implementation of a Video-based Vehicle Speed Measurement System for Municipal Roadways

Full Text (PDF, 1042 KB), pp. 25-37


Author(s)

Abdorreza Joe Afshany 1,*, Ali Tourani 1, Asadollah Shahbahrami 1, Saeed Khazaee 2, Alireza Akoushideh 3

1. Department of Computer Engineering, University of Guilan, Rasht, Iran

2. Centre for Pattern Recognition and Machine Intelligence, Concordia University, Montreal, Canada

3. Shahid-Chamran College, Technical and Vocational University, Tehran, Iran

* Corresponding author.

DOI: https://doi.org/10.5815/ijisa.2019.11.03

Received: 21 Apr. 2019 / Revised: 20 May 2019 / Accepted: 7 Jun. 2019 / Published: 8 Nov. 2019

Index Terms

Parallelism, speed measurement, video processing, intelligent transportation systems

Abstract

Nowadays, Intelligent Transportation Systems (ITS) are known as powerful solutions for handling traffic-related issues. ITS are used in various applications such as traffic signal control, vehicle counting, and automatic license plate detection. In particular, video cameras are employed in ITS to provide useful information after their outputs are processed; such systems are known as Video-based Intelligent Transportation Systems (V-ITS). Among the various applications of V-ITS, automatic vehicle speed measurement is a fast-growing field due to its numerous benefits. In this regard, visual appearance-based methods are a common type of video-based speed measurement approach, but they are computationally intensive because they repeatedly search for distinctive visual features of vehicles, such as the license plate, in consecutive frames. In this paper, a parallelized version of an appearance-based speed measurement method is presented that runs in real time and requires lower computational costs. To achieve this, data-level parallelism was applied to three computationally intensive modules of the method with low inter-module dependencies, using NVIDIA's CUDA platform. The parallelization was performed by distributing the method's constituent modules over multiple processing elements, which resulted in higher throughput and massive parallelism. Experimental results show that the CUDA-enabled implementation calculates each vehicle's speed about 1.81 times faster than the original sequential approach. In addition, the parallelized kernels of the mentioned modules achieve speed-ups of 21.28, 408.71, and 188.87 when executed individually. These experiments highlight the vital role of computational cost in developing video-based speed measurement systems for real-time applications.
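To make the notion of data-level parallelism concrete, the following is a minimal, hypothetical CUDA sketch in which one GPU thread is assigned to each pixel of a video frame. The thresholding operation is only a stand-in for a per-pixel stage of a video-processing module (the kernel name, frame size, and threshold value are illustrative assumptions, not the authors' actual implementation).

// Hypothetical sketch of data-level parallelism on a video frame with CUDA.
// One thread per pixel; thresholding stands in for a real per-pixel stage.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void thresholdKernel(const unsigned char* in, unsigned char* out,
                                int width, int height, unsigned char thresh)
{
    // Each thread computes its own pixel coordinates from block/thread indices.
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        int idx = y * width + x;
        out[idx] = (in[idx] > thresh) ? 255 : 0;
    }
}

int main()
{
    const int width = 1920, height = 1080;        // assumed Full HD frame
    const size_t bytes = (size_t)width * height;  // one byte per grayscale pixel

    // Host buffers filled with synthetic data for the sketch.
    unsigned char* hIn  = new unsigned char[bytes];
    unsigned char* hOut = new unsigned char[bytes];
    for (size_t i = 0; i < bytes; ++i) hIn[i] = (unsigned char)(i % 256);

    // Device buffers and host-to-device transfer.
    unsigned char *dIn, *dOut;
    cudaMalloc(&dIn, bytes);
    cudaMalloc(&dOut, bytes);
    cudaMemcpy(dIn, hIn, bytes, cudaMemcpyHostToDevice);

    // Launch a 2-D grid so the whole frame is covered by 16x16 thread blocks.
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    thresholdKernel<<<grid, block>>>(dIn, dOut, width, height, 128);
    cudaDeviceSynchronize();

    // Copy the result back and print one value as a sanity check.
    cudaMemcpy(hOut, dOut, bytes, cudaMemcpyDeviceToHost);
    printf("first output pixel: %d\n", hOut[0]);

    cudaFree(dIn); cudaFree(dOut);
    delete[] hIn; delete[] hOut;
    return 0;
}

In this style of implementation, the speed-up over a sequential loop comes from the frame being split across thousands of lightweight threads, which is the same data-level strategy the abstract describes for the three computationally intensive modules.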

Cite This Paper

Abdorreza Joe Afshany, Ali Tourani, Asadollah Shahbahrami, Saeed Khazaee, Alireza Akoushideh, "Parallel Implementation of a Video-based Vehicle Speed Measurement System for Municipal Roadways", International Journal of Intelligent Systems and Applications (IJISA), Vol. 11, No. 11, pp. 25-37, 2019. DOI: 10.5815/ijisa.2019.11.03
