Vehicle Object Tracking Based on Fusing of Deep learning and Re-Identification


Author(s)

Huynh Nhat Duy 1, Vo Hoai Viet 1,*

1. Department of Computer Vision, University of Science, VNU-HCM

* Corresponding author.

DOI: https://doi.org/10.5815/ijem.2024.02.03

Received: 30 Oct. 2023 / Revised: 19 Dec. 2023 / Accepted: 19 Feb. 2024 / Published: 8 Apr. 2024

Index Terms

Vehicle Object Tracking, Surveillance Systems, Single-Object Tracking, SiamMask, Vehicle-ReID

Abstract

Object tracking is a popular problem for automatic surveillance systems as well as for the research community. The task is to predict the position of an object in the current frame given its position in the previous frame. To compare and experiment with several deep-learning-based object tracking methods, and to propose an improvement that combines them, we carried out this research in several steps. First, we surveyed studies related to deep-learning-based object tracking models. Second, we examined image and video datasets suitable for evaluation. Third, we implemented and experimented with several existing deep-learning-based object tracking methods in order to evaluate their results. Fourth, based on the implemented tracking models, we proposed a combination of these methods. Finally, we summarized the results obtained and evaluated each object tracking model. The results show that tracking based on the SiamMask model achieves the highest TO scores, 0.961356383 on the VOT dataset and 0.969301864 on the UAV123 dataset, but its likelihood of error is also high. Although the combined method obtains a few scores that are lower than those of SiamMask-based tracking, it is more stable, with TME scores of 16.29691993 on the VOT dataset and 10.16578548 on the UAV123 dataset. The Vehicle Re-Identification method does not achieve overwhelming scores overall; however, it obtains the best TME scores, 11.55716097 on the VOT dataset and 4.576163526 on the UAV123 dataset.
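To make the fusion idea in the abstract concrete, the following minimal sketch (not the authors' code) shows how a single-object tracker's per-frame prediction can be verified with a re-identification embedding: the predicted box is cropped, embedded, and compared against the template of the initial target, and low similarity flags the frame for re-detection. The SiamMask tracker and the vehicle-ReID network are replaced by stand-ins (a box-preserving "tracker" and a colour-histogram embedder), and the names `fused_track`, `tracker_step`, and `sim_threshold` are illustrative assumptions, so the example runs with NumPy alone.

```python
# Illustrative sketch of fusing single-object tracking with ReID verification.
# Real deep models (SiamMask, a vehicle-ReID CNN) are replaced by stand-ins.
import numpy as np


def crop(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Cut the (x, y, w, h) box out of an H x W x 3 frame."""
    x, y, w, h = box
    return frame[y:y + h, x:x + w]


def reid_embedding(patch: np.ndarray) -> np.ndarray:
    """Stand-in for a vehicle-ReID network: an L2-normalised colour histogram."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(8, 8, 8),
                             range=((0, 256),) * 3)
    vec = hist.ravel().astype(np.float64)
    return vec / (np.linalg.norm(vec) + 1e-12)


def fused_track(frames, init_box, tracker_step, sim_threshold=0.5):
    """Track with `tracker_step` and verify each prediction by ReID similarity.

    tracker_step(frame, prev_box) -> new_box stands in for the SiamMask update.
    When cosine similarity to the initial template falls below the threshold,
    the prediction is flagged so a re-detection step could take over.
    """
    template = reid_embedding(crop(frames[0], init_box))
    box, results = init_box, []
    for frame in frames[1:]:
        box = tracker_step(frame, box)
        sim = float(template @ reid_embedding(crop(frame, box)))
        results.append((box, sim, sim >= sim_threshold))
    return results


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)
              for _ in range(5)]
    # Stand-in tracker: keep the previous box (a real SiamMask head would
    # regress a new box and mask around the target each frame).
    identity_tracker = lambda frame, prev_box: prev_box
    for box, sim, ok in fused_track(frames, (100, 80, 60, 40), identity_tracker):
        print(box, round(sim, 3), "verified" if ok else "re-detect")
```

In this arrangement the tracker supplies the frame-to-frame motion hypothesis while the ReID embedding acts as an appearance gate, which is one way a combined method can trade a small amount of raw overlap score for greater stability.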

Cite This Paper

Huynh Nhat Duy, Vo Hoai Viet, "Vehicle Object Tracking Based on Fusing of Deep learning and Re-Identification", International Journal of Engineering and Manufacturing (IJEM), Vol.14, No.2, pp. 34-45, 2024. DOI:10.5815/ijem.2024.02.03

References

[1]Wang, Q., et al. Fast Online Object Tracking and Segmentation: A Unifying Approach. IEEE, 2020, pp. 1328–38.
[2]He, Shuting & Luo, Hao & Chen, Weihua & Zhang, Miao & Zhang, Yuqi & Wang, Fan & Li, Hao & Jiang, Wei. (2020). Multi-Domain Learning and Identity Mining for Vehicle Re-Identification. DOI:10.1109/CVPRW50498.2020.00299.
[3]Z. Soleimanitaleb, M. A. Keyvanrad and A. Jafari, "Object Tracking Methods: A Review," 2019 9th International Conference on Computer and Knowledge Engineering (ICCKE), 2019, pp. 282-288, doi: 10.1109/ICCKE48569.2019.8964761.
[4]K. R. Reddy, K. H. Priya and N. Neelima, "Object Detection and Tracking -- A Survey," 2015 International Conference on Computational Intelligence and Communication Networks (CICN), 2015, pp. 418-421, doi: 10.1109/CICN.2015.317.
[5]Held, David & Thrun, Sebastian & Savarese, Silvio. (2016). Learning to Track at 100 FPS with Deep Regression Networks.
[6]X. Hou, Y. Wang and L. Chau, "Vehicle Tracking Using Deep SORT with Low Confidence Track Filtering," 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2019, pp. 1-6, doi: 10.1109/AVSS.2019.8909903.
[7]Vo Hoai Viet, Huynh Nhat Duy, "Object Tracking: An Experimental and Comprehensive Study on Vehicle Object in Video", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.14, No.1, pp. 64-81, 2022. DOI: 10.5815/ijigsp.2022.01.06.
[8]Ravi Kumar Jatoth, Sampad Shubhra, Ejaz Ali, "Performance Comparison of Kalman Filter and Mean Shift Algorithm for Object Tracking", IJIEEB, vol.5, no.5, pp.17-24, 2013. DOI: 10.5815/ijieeb.2013.05.03.
[9]D. Comaniciu and P. Meer, "Mean shift analysis and applications," 1999 IEEE International Conference on Computer Vision (ICCV), vol. 2, pp. 1197-1203, doi: 10.1109/ICCV.1999.790416.
[10]H. Wang, X. Wang, L. Yu and F. Zhong, "Design of Mean Shift Tracking Algorithm Based on Target Position Prediction," 2019 IEEE International Conference on Mechatronics and Automation (ICMA), 2019, pp. 1114-1119, doi: 10.1109/ICMA.2019.8816295.
[11]Lim Chot Hun, Ong Lee Yeng, Lim Tien Sze and Koo Voon Chet (June 8th, 2016). Kalman Filtering and Its Real‐Time Applications, Real-time Systems, Kuodi Jian, IntechOpen, DOI: 10.5772/62352.
[12]Feng Xiao; Mingyu Song; Xin Guo; Fengxiang Ge. Adaptive Kalman filtering for target tracking. 2016 IEEE/OES China Ocean Acoustics (COA), 2016, pp. 1-5, doi: 10.1109/COA.2016.7535797.
[13]Oğuzhan Gültekİn, Bilge Günsel, "Robust object tracking by variable rate kernel particle filter", 2018 26th Signal Processing and Communications Applications Conference (SIU), 2018, pp. 1-4, doi: 10.1109/SIU.2018.8404479.
[14]Marina A. Zanina, Vitalii A. Pavlov, Sergey V. Zavjalov, Sergey V. Volvenko, "TLD Object Tracking Algorithm Improved with Particle Filter". 2018 41st International Conference on Telecommunications and Signal Processing (TSP), 2018, pp. 1-4, doi: 10.1109/TSP.2018.8441515.
[15]Z. Liang, C. Liang, Y. Zhang, H. Mu and G. Li, "Tracking of Moving Target Based on SiamMask for Video SAR System," 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), 2019, pp. 1-4, doi: 10.1109/ICSIDP47821.2019.9173432.
[16]J. Redmon, S. Divvala, R. Girshick and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779-788, doi: 10.1109/CVPR.2016.91.
[17]Matej Kristan, Jiri Matas, Aleš Leonardis, Tomáš Vojı́ř, Roman Pflugfelder, Gustavo Fernández, Georg Nebehay, Fatih Porikli and Luka Čehovin, "A Novel Performance Evaluation Methodology for Single-Target Trackers", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 11, pp. 2137-2155, 1 Nov. 2016, doi: 10.1109/TPAMI.2016.2516982.
[18]Kristan, Matej & Matas, Jiri & Leonardis, Ales & Felsberg, Michael & Pflugfelder, Roman & Kamarainen, Joni-Kristian & Chang, Hyung & Danelljan, Martin & Čehovin Zajc, Luka & Lukežič, Alan & Drbohlav, Ondrej & Kapyla, Jani & Häger, Gustav & Yan, Song & Yang, Jinyu & Zhang, Zhongqun & Fernandez Dominguez, Gustavo & Abdelpakey, Mohamed & Bhat, Goutam & Zhu, Xue-Feng. (2021). The Ninth Visual Object Tracking VOT2021 Challenge Results. 2711-2738. 10.1109/ICCVW54120.2021.00305.
[19]Wu, Yi & Lim, Jongwoo & Yang, Ming-Hsuan. (2015). Object Tracking Benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence. 37. 1-1. 10.1109/TPAMI.2014.2388226.
[20]Matthias Mueller, Neil Smith, and Bernard Ghanem, "A Benchmark and Simulator for UAV Tracking", ECCV, 2016.
[21]X. Liu, Y. Dong and Z. Deng, "Deep Highway Multi-Camera Vehicle Re-ID with Tracking Context," 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), 2020, pp. 2090-2093, doi: 10.1109/ITNEC48623.2020.9085008.
[22]Z. Jamali, J. Deng, J. Cai, M. U. Aftab and K. Hussain, "Minimizing Vehicle Re-Identification Dataset Bias Using Effective Data Augmentation Method," 2019 15th International Conference on Semantics, Knowledge and Grids (SKG), 2019, pp. 127-130, doi: 10.1109/SKG49510.2019.00030.
[23]M. Wu, Y. Qian, C. Wang and M. Yang, "A Multi-Camera Vehicle Tracking System based on City-Scale Vehicle Re-ID and Spatial-Temporal Information," 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2021, pp. 4072-4081, doi: 10.1109/CVPRW53098.2021.00460.
[24]Hao Luo, Youzhi Gu, Xingyu Liao, Shenqi Lai, and Wei Jiang. Bag of tricks and a strong baseline for deep person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019.
[25]K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image Recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778, doi: 10.1109/CVPR.2016.90.
[26]Xu, Yinda & Wang, Zeyu & Li, Zuoxin & Yuan, Ye & Yu, Gang. (2020). SiamFC++: Towards Robust and Accurate Visual Tracking with Target Estimation Guidelines. Proceedings of the AAAI Conference on Artificial Intelligence. 34. 12549-12556. 10.1609/aaai.v34i07.6944.
[27]Li, Daqun & Yu, Yi & Chen, Xiaolin. (2019). Object tracking framework with Siamese network and re-detection mechanism. EURASIP Journal on Wireless Communications and Networking. 2019. 10.1186/s13638-019-1579-x.
[28]Li, Bo & Wu, Wei & Wang, Qiang & Zhang, Fangyi & Xing, Junliang & Yan, Junjie. (2019). SiamRPN++: Evolution of Siamese Visual Tracking With Very Deep Networks. 4277-4286. 10.1109/CVPR.2019.00441.
[29]B. Li, J. Yan, W. Wu, Z. Zhu and X. Hu, "High Performance Visual Tracking with Siamese Region Proposal Network," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 8971-8980, doi: 10.1109/CVPR.2018.00935.
[30]K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image Recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778, doi: 10.1109/CVPR.2016.90.
[31]L. M. Brown, A. W. Senior, Ying-li Tian, Jonathan Connell, Arun Hampapur, Chiao-Fe Shu, Hans Merkl, Max Lu, “Performance Evaluation of Surveillance Systems Under Varying Conditions”, IEEE Int'l Workshop on Performance Evaluation of Tracking and Surveillance, Colorado, Jan 2005.
[32]F. Bashir, F. Porikli. “Performance evaluation of object detection and tracking systems”, IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), June 2006.
[33]Sven Ubik; Jiří Pospíšilík. Video Camera Latency Analysis and Measurement. IEEE Transactions on Circuits and Systems for Video Technology (Volume: 31, Issue: 1, Jan. 2021): 140 - 147. DOI: 10.1109/TCSVT.2020.2978057.