Van Thinh Le

Work place: Faculty of Electronics and Informatics Engineering, Mien Trung Industrial and Trade College, Phu Yen 620000, Vietnam

E-mail: levanthinh@tic.edu.vn


Research Interests: Computer systems and computational processes, Computational Learning Theory, Hardware Security, Information Security, Network Security, Database Management System, Multimedia Information System, Information-Theoretic Security

Biography

Van Thinh Le was born in Phu Yen, Viet Nam, in 1976. He received the B.E. degree in Computer Science from Ha Noi Technology University, Viet Nam, in 2000, and the Master's degree in Computer Science from Da Nang Technology University, Viet Nam, in 2011, and has been working toward the Ph.D. degree in the School of Computer Science and Engineering, Southeast University, PR China, since 2014. In 2002 he joined the Department of Computer Science, Mien Trung Industrial and Trade College, as a Lecturer. He has published more than 12 peer-reviewed papers. His current research interests include multimedia security, machine learning, and database systems.

Author Articles
Detecting Video Inter-Frame Forgeries Based on Convolutional Neural Network Model

By Xuan Hau Nguyen, Yongjian Hu, Muhmmad Ahmad Amin, Khan Gohar Hayat, Van Thinh Le, Dinh Tu Truong

DOI: https://doi.org/10.5815/ijigsp.2020.03.01, Pub. Date: 8 Jun. 2020

In today's era of rapidly spreading information, videos are easily captured and can go viral in a short time, and video tampering has become easier due to editing software, so the authenticity of videos has become more essential. Inter-frame forgeries are the most common type of video forgery and are difficult to detect with the naked eye. Until now, several algorithms based on handcrafted features have been proposed for detecting inter-frame forgeries, but their accuracy and processing speed remain challenging. In this paper, we propose a method for detecting video inter-frame forgeries based on convolutional neural network (CNN) models, obtained by retraining available CNN models trained on the ImageNet dataset. The proposed method builds on state-of-the-art CNN models, which are retrained to exploit spatial-temporal relationships in a video to detect inter-frame forgeries robustly, and we also propose a confidence score, used in place of the raw output score of these networks, to increase the accuracy of the proposed method. Through the experiments, the detection accuracy of the proposed method is 99.17%. This result shows that the proposed method has significantly higher efficiency and accuracy than other recent methods. A minimal code sketch of this retraining setup is given after this listing.

[...] Read more.
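The abstract above describes retraining a CNN that was pre-trained on ImageNet so that it classifies video inputs as original or inter-frame forged, and replacing the raw network output with a confidence score. The following Python sketch illustrates that general transfer-learning setup only; the backbone (ResNet-18), the two-class head, the softmax-based confidence value, and the dummy data are all assumptions for illustration, not the paper's exact architecture or score definition.

# Minimal sketch of fine-tuning an ImageNet-pretrained backbone for
# two-class (original vs. inter-frame forged) prediction. The backbone,
# input representation, and confidence definition are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_forgery_detector(num_classes: int = 2) -> nn.Module:
    # Load an ImageNet-pretrained backbone and replace its classifier head.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def confidence_score(logits: torch.Tensor) -> torch.Tensor:
    # One simple bounded confidence value: softmax probability of the
    # "forged" class (hypothetical; the paper defines its own score).
    return torch.softmax(logits, dim=1)[:, 1]

if __name__ == "__main__":
    model = build_forgery_detector()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Dummy batch standing in for spatial-temporal (e.g. frame-difference) inputs.
    x = torch.randn(4, 3, 224, 224)
    y = torch.randint(0, 2, (4,))

    logits = model(x)
    loss = criterion(logits, y)
    loss.backward()
    optimizer.step()
    print(confidence_score(logits.detach()))

In practice the retrained network would be run over windows of consecutive frames and the per-window confidence values aggregated into a video-level decision; those details are not shown here.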
Three-dimensional Region Forgery Detection and Localization in Videos

By Xuan Hau Nguyen, Yongjian Hu, Muhmmad Ahmad Amin, Khan Gohar Hayat, Van Thinh Le, Dinh Tu Truong

DOI: https://doi.org/10.5815/ijigsp.2019.12.01, Pub. Date: 8 Dec. 2019

Nowadays, with the extensive use of cameras in many areas of life, millions of videos are uploaded to the internet every day. In addition, with rapidly developing video editing software, it has become easier to forge any video. These applications have made it challenging to detect forged videos, especially videos containing duplicated three-dimensional (3-D) regions. Recently, there has been increased interest in detecting forged videos, but studies on detecting videos with duplicated 3-D regions are very limited. Our research focuses on this weakness and proposes a new method that detects and locates duplicated 3-D regions in videos more efficiently, based on the phase correlation of the residuals of 3-D regions. To evaluate the efficiency of the proposed method, we experimented with two realistic datasets, VFDD-3D and REWIND-3D. The experimental results prove that the proposed method is efficient and robust for detecting the duplication of small 3-D regions and of frame sequences, and the localization of duplication forgery in videos in particular shows impressive results. A small code sketch of the phase-correlation step is given after this listing.

[...] Read more.
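The abstract above relies on phase correlation between 3-D (frames x height x width) residual blocks: a duplicated block produces a sharp correlation peak close to 1, while unrelated blocks do not. The Python sketch below shows only that core phase-correlation step on synthetic data; block extraction, residual computation, and the decision thresholds used in the paper are not reproduced, and the helper names are hypothetical.

# Minimal sketch of 3-D phase correlation between two spatio-temporal blocks.
import numpy as np

def phase_correlation_3d(block_a, block_b):
    # The normalized cross-power spectrum keeps only phase information;
    # its inverse FFT has a sharp peak when one block is a shifted copy.
    fa = np.fft.fftn(block_a)
    fb = np.fft.fftn(block_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifftn(cross_power).real
    peak_index = np.unravel_index(np.argmax(corr), corr.shape)
    return float(corr[peak_index]), peak_index  # peak value and offset

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.standard_normal((8, 32, 32))           # 8 frames of 32x32 residuals
    duplicated = np.roll(original, (2, 5, 5), (0, 1, 2))   # shifted copy of the block
    unrelated = rng.standard_normal((8, 32, 32))

    print(phase_correlation_3d(original, duplicated))  # peak near 1.0
    print(phase_correlation_3d(original, unrelated))   # much lower peak

A peak value near 1.0 flags a candidate duplicated region; comparing that peak against a threshold and mapping the block coordinates back to frame indices and pixel positions would give the detection and localization described in the abstract.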
Other Articles