IJEME Vol. 2, No. 5, 29 May 2012
Keywords: Cross-disciplinary integration, digital learning, human-like characters, motion capture
This study aims to integrate digital technology with the animation production process. By experiencing and learning how body joints move, with digital technology as the learning aid, students can create results comparable to motion-captured body movements. These skills can then be applied to animation design, with the hope of helping students in their future employment. This paper focuses on digital learning and technology to establish body-movement production principles and an integrative framework. The study results can inform the training of front-line talent for the digital content industry. The main contribution lies in the research and development of training methods and in linking them with digital learning techniques, which can provide direction for upcoming courses in the cultural industries. Human-like character animation, which is comparatively difficult to represent in 3D computer animation, serves as the running example, and the variations in results achieved by different production processes are discussed. Feasible training directions are also provided as references for learning digital technologies.
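As an illustrative sketch only (the paper itself includes no code, and the function and data names below are hypothetical), the contrast between hand-authored and motion-captured movement can be shown with keyframe interpolation of a single joint angle: an animator specifies a few sparse keyframes, while motion capture records a dense sample at every frame.

```python
def lerp(a, b, t):
    """Linear interpolation between angles a and b at parameter t in [0, 1]."""
    return a + (b - a) * t

def sample_joint(keyframes, frame):
    """Sample a joint angle at `frame` from sparse (frame, angle) keyframes.

    keyframes: list of (frame_number, angle_degrees) pairs, sorted by frame.
    """
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, a0), (f1, a1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return lerp(a0, a1, t)

# Hand-authored elbow keyframes: (frame number, angle in degrees).
elbow = [(0, 0.0), (12, 90.0), (24, 45.0)]

print(sample_joint(elbow, 6))   # halfway between 0 and 90 degrees -> 45.0
print(sample_joint(elbow, 18))  # halfway between 90 and 45 degrees -> 67.5
```

Motion-captured data would instead supply a measured angle at every frame; comparing the two sampled curves is one concrete way students can study how production process affects the resulting motion.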
Sheng-Chih Chen, Wei-Kuang Chen, Tsai-Sheng Kao, Jui-I Hsu, "Retrieval of Motion Capture Data Aids Efficient Digital Learning", IJEME, vol. 2, no. 5, pp. 14-23, 2012. DOI: 10.5815/ijeme.2012.05.03