A Mono Master Shrug Matching Algorithm for Examination Surveillance

Full Text (PDF, 445 KB), pp. 81-86


Author(s)

Sandhya Devi G 1,*, Prasad Reddy P V G D 1, Suvarna Kumar G 2, Vijay Chaitanya B 2

1. Department of Computer Science & Systems Engineering, Andhra University, India

2. Department of Computer Science & Engineering, MVGR College of Engineering, India

* Corresponding author.

DOI: https://doi.org/10.5815/ijitcs.2015.01.10

Received: 21 Apr. 2014 / Revised: 4 Aug. 2014 / Accepted: 6 Oct. 2014 / Published: 8 Dec. 2014

Index Terms

Gesture Recognition, Template Matching, Video Surveillance, Suspicious Activity Detection

Abstract

This paper proposes a novel approach to shrug recognition from Gesticulation Penetrated Images (GPI) based on template matching. Shrugs are characterized by image templates, which are used to compare and match candidate shrugs. The proposed technique uses a single template to identify matches among the candidates and is therefore termed mono master shrug matching. It requires no prior knowledge of movements, motion estimation, or tracking, and it takes a distinctive approach to isolating individual shrugs from a given video. In addition, the method computes features invariant to photometric and geometric variations of the video in order to represent the shrugs in a lexicon; this descriptor is extracted from the standard deviation of the gesticulation penetrated images of a shrug. Matching employs a histogram-based tracker that computes the deviation of each candidate shrug from the template shrug. Extensive experiments on a highly complex and diverse dataset establish the efficacy of the proposed method.
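The pipeline the abstract describes can be sketched in a few lines: build a per-pixel standard-deviation descriptor from a stack of frames, then score candidates against the single master template by a histogram distance. This is only a minimal illustration under stated assumptions, not the authors' exact implementation: the function names, the choice of chi-square as the histogram distance, and the threshold are all hypothetical stand-ins.

```python
import numpy as np

def gpi_descriptor(frames):
    # Per-pixel standard deviation over a (T, H, W) stack of grayscale
    # frames -- a rough stand-in for the paper's standard-deviation
    # descriptor of Gesticulation Penetrated Images (GPI).
    frames = np.asarray(frames, dtype=np.float64)
    return frames.std(axis=0)

def histogram_deviation(desc_a, desc_b, bins=32):
    # Chi-square distance between the normalized intensity histograms of
    # two descriptors (a hypothetical choice of histogram distance,
    # bounded in [0, 1] for disjoint supports).
    lo = min(desc_a.min(), desc_b.min())
    hi = max(desc_a.max(), desc_b.max())
    ha, _ = np.histogram(desc_a, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(desc_b, bins=bins, range=(lo, hi))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + 1e-10))

def match_shrug(template_frames, candidate_clips, threshold=0.2):
    # Flag candidate clips whose descriptor deviates little from the
    # single (mono master) template; threshold is illustrative only.
    t_desc = gpi_descriptor(template_frames)
    return [histogram_deviation(t_desc, gpi_descriptor(c)) <= threshold
            for c in candidate_clips]
```

A clip identical to the template yields zero deviation and matches, while a static clip (zero motion, hence a zero std-dev image) produces a near-disjoint histogram and is rejected.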

Cite This Paper

Sandhya Devi G, Prasad Reddy P V G D, Suvarna Kumar G, Vijay Chaitanya B, "A Mono Master Shrug Matching Algorithm for Examination Surveillance", International Journal of Information Technology and Computer Science (IJITCS), vol. 7, no. 1, pp. 81-86, 2015. DOI: 10.5815/ijitcs.2015.01.10
