Vo Hoai Viet

Workplace: University of Science, Ho Chi Minh City, 700000, Vietnam

E-mail: vhviet@fit.hcmus.edu.vn

Research Interests: Programming Language Theory, Image Processing, Image Manipulation, 2D Computer Graphics, Computer Graphics and Visualization, Computer Vision, Computer systems and computational processes

Biography

Vo Hoai Viet has been a Lecturer and Senior Researcher at the University of Science, VNU-HCMC, Vietnam, since 2012. He currently works on Computer Vision at the University of Science, VNU-HCMC. His research interests include Digital Image Processing, Programming Languages, Computer Graphics, Computer Vision, and Machine Learning.

Author Articles
Vehicle Object Tracking Based on Fusing of Deep learning and Re-Identification

By Huynh Nhat Duy, Vo Hoai Viet

DOI: https://doi.org/10.5815/ijem.2024.02.03, Pub. Date: 8 Apr. 2024

Object tracking is a popular problem for automatic surveillance systems as well as for the research community. The task is to predict the position of an object in the current frame, given its position in the previous frame. To compare and experiment with several deep learning-based object tracking methods, and to suggest improvements that combine them, we took the following steps in this research. First, we surveyed studies related to deep learning-based object tracking models. Second, we examined image and video datasets for evaluation purposes. Third, to assess the results obtained from existing models, we experimented with several object tracking approaches based on deep learning networks. Fourth, building on the implemented object tracking models, we proposed a combination of these methods. Finally, we summarized the results and evaluated each object tracking model. The results show that tracking based on the SiamMask model achieves the highest TO score, 0.961356383 on the VOT dataset and 0.969301864 on the UAV123 dataset, but its likelihood of errors is also high. Although a few of the combined method's scores are lower than those of SiamMask-based tracking, the combined method is more stable, with a TME score of 16.29691993 on the VOT dataset and 10.16578548 on the UAV123 dataset. The Vehicle Re-Identification method's scores are not overwhelming overall; however, it achieves the best TME score, 11.55716097 on the VOT dataset and 4.576163526 on the UAV123 dataset.
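
As an illustration only (not the authors' pipeline), the sketch below shows one generic way to fuse a tracker with re-identification, in the spirit of the combined method above: the tracker's output is accepted while its appearance still matches a stored re-ID template; otherwise the target is re-acquired from candidate boxes by embedding similarity. The names `embed`, `crop_fn`, `candidates`, and `sim_thresh` are hypothetical placeholders.

```python
import numpy as np

def embed(crop):
    """Hypothetical stand-in for a re-identification CNN: a real system
    would return an L2-normalised appearance embedding of the crop."""
    v = crop.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def cosine(a, b):
    # Cosine similarity of two L2-normalised vectors.
    return float(np.dot(a, b))

def fused_step(tracker_box, candidates, template_emb, frame, crop_fn,
               sim_thresh=0.5):
    """One fusion step: trust the tracker while its output still looks
    like the target; otherwise re-acquire via the re-ID embedding."""
    track_emb = embed(crop_fn(frame, tracker_box))
    if cosine(track_emb, template_emb) >= sim_thresh or not candidates:
        return tracker_box  # tracker output accepted
    # Tracker has likely drifted: pick the candidate box (e.g. from a
    # detector) whose embedding is most similar to the stored template.
    sims = [cosine(embed(crop_fn(frame, b)), template_emb) for b in candidates]
    return candidates[int(np.argmax(sims))]
```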

Object Tracking: An Experimental and Comprehensive Study on Vehicle Object in Video

By Vo Hoai Viet, Huynh Nhat Duy

DOI: https://doi.org/10.5815/ijigsp.2022.01.06, Pub. Date: 8 Feb. 2022

Tracking objects in camera footage or video is very important for automated surveillance systems. Along with the development of techniques and scientific research in object tracking, automatic surveillance systems have gradually improved. The input is a frame containing the object to be tracked, together with the object's location in that frame; the output is a prediction of the object's position in the next frame. This paper presents a comparison and experimental study of several traditional object tracking methods, along with suggested improvements that combine them. First, we surveyed related studies on traditional object tracking models. Second, we examined image and video datasets for evaluation purposes. Third, we experimented with several traditional object tracking approaches, evaluated the existing models to identify what current models do and do not achieve, and proposed improvements based on combinations of traditional methods. Finally, we aggregated these results to evaluate each type of object tracking model. The results show that the Particle Filter method has the highest CDT, with a TO score of 0.907971 on the VOT dataset and 0.866259 on the UAV123 dataset. However, the most stable are the two hybrid methods: the Particle Filter based on Mean Shift has a TF score of 31.1 on the VOT dataset, and the Kalman Filter based on Mean Shift has a TME score of 28.8233 on the UAV dataset. Because, as the experiments showed, low-level features cannot represent all the information about a tracked object, we conclude that combining deep learning networks and incorporating high-level features into the tracking model can bring better performance in the future.
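
As a concrete illustration of one of the hybrids above, the sketch below implements a plausible reading of "Kalman Filter based on Mean Shift" with OpenCV: mean shift over a hue back-projection supplies the position measurement, and a constant-velocity Kalman filter smooths it. The function names, noise covariance, and histogram settings are our assumptions, not the paper's code.

```python
import cv2
import numpy as np

def make_kalman(cx, cy):
    """Constant-velocity Kalman filter over (x, y, vx, vy), measuring (x, y)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2  # assumed tuning
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    return kf

def track(video_path, init_box):
    """Yield smoothed (x, y) target centres; init_box = (x, y, w, h)."""
    x, y, w, h = init_box
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    roi_hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [180], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    kf = make_kalman(x + w / 2, y + h / 2)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = (x, y, w, h)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        kf.predict()  # motion prior for the current frame
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, window = cv2.meanShift(back, window, crit)  # appearance measurement
        measurement = np.array([[window[0] + w / 2],
                                [window[1] + h / 2]], np.float32)
        state = kf.correct(measurement)  # fuse prediction and measurement
        yield float(state[0, 0]), float(state[1, 0])
    cap.release()
```

A Particle Filter hybrid would follow the same structure, replacing the Kalman update with weighted resampling of particles over the same back-projection.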

Spatial-Temporal Shape and Motion Features for Dynamic Hand Gesture Recognition in Depth Video

By Vo Hoai Viet, Nguyen Thanh Thien Phuc, Pham Minh Hoang, Liu Kim Nghia

DOI: https://doi.org/10.5815/ijigsp.2018.09.03, Pub. Date: 8 Sep. 2018

Human-Computer Interaction (HCI) is one of the most interesting and challenging research topics in the computer vision community. Among HCI methods, hand gesture is a natural way of human-computer interaction and is the focus of many researchers. It allows humans to use hand movements to interact with machines easily and conveniently. With the advent of depth sensors, many new techniques have been developed and have achieved a great deal. In this work, we propose a set of features extracted from depth maps for dynamic hand gesture recognition. We extract HOG2 to represent the shape and appearance of the hand in gesture representation. Moreover, to capture the movement of the hands, we propose a new feature named HOF2, extracted based on an optical flow algorithm. These spatial-temporal descriptors are easy to understand and implement but perform very well in multi-class classification. They also have a low computational cost, making them suitable for real-time recognition systems. Furthermore, we apply Robust PCA to reduce the feature dimension and build robust, compact gesture descriptors. The results are evaluated with a cross-validation scheme using an SVM classifier, which shows good outcomes on the challenging MSR Hand Gestures Dataset and VIVA Challenge Dataset, with 95.51% and 55.95% accuracy, respectively.
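
To make the two descriptors concrete, the following is a minimal sketch of the HOG2/HOF2 construction as described above: per-frame HOG (or magnitude-weighted flow-orientation histogram) vectors are stacked over time, and HOG is applied a second time to the stack. The cell sizes, bin counts, and Farneback flow parameters are illustrative assumptions, not the paper's settings, and the Robust PCA and SVM stages are omitted.

```python
import cv2
import numpy as np
from skimage.feature import hog

def hog2(frames):
    """HOG2-style descriptor: per-frame HOG vectors are stacked over time,
    then HOG is applied again to the (time x feature) matrix as an image.
    Assumes >= 8 single-channel frames so the second-level cells fit."""
    per_frame = np.stack([hog(f, orientations=9, pixels_per_cell=(8, 8),
                              cells_per_block=(2, 2)) for f in frames])
    return hog(per_frame, orientations=9, pixels_per_cell=(4, 4),
               cells_per_block=(2, 2))

def hof(prev, curr, bins=9):
    """Magnitude-weighted histogram of dense optical-flow orientations.
    Frames are assumed 8-bit single-channel (convert depth maps first)."""
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def hof2(frames, bins=9):
    """HOF2-style descriptor: per-pair flow histograms stacked over time,
    then HOG applied to the stack, mirroring the HOG2 construction."""
    stack = np.stack([hof(a, b, bins)
                      for a, b in zip(frames[:-1], frames[1:])])
    return hog(stack, orientations=9, pixels_per_cell=(2, 2),
               cells_per_block=(1, 1))
```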

Other Articles