What is the Truth: A Survey of Video Compositing Techniques


Author(s)

Mahmoud Afifi 1,*, Khaled F. Hussain 2

1. Department of Information Technology, Assiut University, Egypt

2. Department of Computer Science, Assiut University, Egypt

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2015.08.02

Received: 3 Apr. 2015 / Revised: 25 Apr. 2015 / Accepted: 10 Jun. 2015 / Published: 8 Jul. 2015

Index Terms

Video compositing, video processing, image inpainting, image processing

Abstract

Video compositing is considered one of the most important steps in the post-production process. It combines several videos, which may have been recorded at different times or in different locations, into a single final video. Computer-generated footage and visual effects are merged with real footage using video compositing techniques. Many movies contain highly realistic composited shots that audiences cannot tell apart from real ones. Many techniques are used to achieve such realistic compositing results. In this paper, a survey of video compositing techniques, a comparison among these techniques, and many examples of video compositing using existing techniques are presented.
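
At its core, most compositing work rests on per-pixel alpha ("over") blending, which combines a foreground element with a background plate through a matte. The minimal sketch below illustrates that operation only; it is a generic example written for this page, not code from the paper, and the NumPy arrays, frame sizes, and the composite_over helper are assumptions made purely for illustration.

    import numpy as np

    def composite_over(foreground, background, alpha):
        # Per-pixel "over" blend: out = alpha * fg + (1 - alpha) * bg.
        # foreground, background: float arrays of shape (H, W, 3) in [0, 1].
        # alpha: float matte of shape (H, W) in [0, 1]; 1 means fully foreground.
        a = alpha[..., np.newaxis]  # broadcast the matte over the color channels
        return a * foreground + (1.0 - a) * background

    # Illustrative usage with random frames standing in for real footage.
    fg = np.random.rand(480, 640, 3)
    bg = np.random.rand(480, 640, 3)
    matte = np.zeros((480, 640))
    matte[100:300, 200:400] = 1.0   # hypothetical rectangular matte
    frame = composite_over(fg, bg, matte)

In practice the matte comes from keying, rotoscoping, or background subtraction rather than a hand-drawn rectangle, but the blending step itself has this simple form.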

Cite This Paper

Mahmoud Afifi, Khaled F. Hussain, "What is the Truth: A Survey of Video Compositing Techniques", IJIGSP, vol. 7, no. 8, pp. 13-27, 2015. DOI: 10.5815/ijigsp.2015.08.02

Reference

[1]Paul Debevec. The light stages and their applications to photoreal digital actors. In SIGGRAPH Asia, Singapore, November 2012.

[2]Barbara Robertson. What's old is new again. Computer Graphics World, 32(1), 2009.

[3]Bin Li. Terra cotta warrior. Ann Arbor, Rochester Institute of Technology, United States, 2012.

[4]Steve Wright. Digital compositing for film and video. CRC Press, 2013.

[5]Hongcheng Wang, Ning Xu, Ramesh Raskar, and Narendra Ahuja. Videoshop: A new framework for spatio-temporal video editing in gradient domain. Graphical models, 69(1):57–70, 2007.

[6]Mahmoud Afifi, Khaled F. Hussain, Hosny M. Ibrahim, and Nagwa M. Omar. Video face replacement system using a modified poisson blending technique. Intelligent Signal Processing and Communication Systems (ISPACS), 2014 International Symposium, 205-210, 1-4 Dec. 2014.

[7]Alvy Ray Smith. Alpha and the history of digital compositing. URL: http://www.alvyray.com/Memos/7_alpha. pdf, zuletzt abgerufen am, 24:2010, 1995.

[8]Jian Sun, Jiaya Jia, Chi-Keung Tang, and Heung-Yeung Shum. Poisson matting. ACM Transactions on Graphics (ToG), 23(3):315–321, 2004.

[9]Anat Levin, Dani Lischinski, and Yair Weiss. A closed-form solution to natural image matting. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(2):228–242, 2008.

[10]Kaiming He, Jian Sun, and Xiaoou Tang. Guided image filtering. In Computer Vision–ECCV 2010, pages 1–14. Springer, 2010.

[11]Jiangyu Liu, Jian Sun, and Heung-Yeung Shum. Paint selection. In ACM Transactions on Graphics (ToG), volume 28, page 69. ACM, 2009.

[12]Jian Sun, Sing Bing Kang, Zong-Ben Xu, Xiaoou Tang, and Heung-Yeung Shum. Flash cut: Foreground extraction with flash and no-flash image pairs. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pages 1–8. IEEE, 2007.

[13]AS INCORP. Adobe photoshop user guide. 2002.

[14]Yannick Benezeth, P-M Jodoin, Bruno Emile, Hélène Laurent, and Christophe Rosenberger. Review and evaluation of commonly-implemented background subtraction algorithms. In Pattern Recognition, 2008. ICPR 2008. 19th International Conference on, pages 1–4. IEEE, 2008.

[15]Mahmoud Afifi, Mostafa Korashy, Ebram K.William, Ali H. Ahmed, and Khaled F.Hussain. Cut off your arm: A medium-cost system for integrating a 3d object with a real actor. International Journal of Image, Graphics and Signal Processing (IJIGSP), 6(11):10–16, 2014.

[16]Christopher Richard Wren, Ali Azarbayejani, Trevor Darrell, and Alex Paul Pentland. Pfinder: Real-time tracking of the human body. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 19(7):780–785, 1997.

[17]Chris Stauffer and W Eric L Grimson. Adaptive background mixture models for real-time tracking. In Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on., volume 2. IEEE, 1999.

[18]Olivier Barnich and Marc Van Droogenbroeck. Vibe: A universal background subtraction algorithm for video sequences. Image Processing, IEEE Transactions on, 20(6):1709–1724, 2011.

[19]Yaser Sheikh, Omar Javed, and Takeo Kanade. Background subtraction for freely moving cameras. In Computer Vision, 2009 IEEE 12th International Conference on, pages 1219–1225. IEEE, 2009.

[20]Hollywood camera work - visual effects for directors. http://www.hollywoodcamerawork.us/vfx_index.html. Accessed: 2015-04-10.

[21]Petro Vlahos. Comprehensive electronic compositing system, July 11 1978. US Patent 4,100,569.

[22]David F Fellinger. Method and apparatus for applying correction to a signal used to modulate a background video signal to be combined with a foreground video signal, April 13 1993. US Patent 5,202,762.

[23]Alvy Ray Smith and James F Blinn. Blue screen matting. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 259–268. ACM, 1996.

[24]Yung-Yu Chuang, Brian Curless, David H Salesin, and Richard Szeliski. A bayesian approach to digital matting. In Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, volume 2, pages II–264. IEEE, 2001.

[25]Adobe Creative Team. Adobe After Effects CS4 classroom in a book. Peachpit Press, 2010.

[26]Lisa Brenneis. Final Cut Pro 3 for Macintosh: Visual QuickPro Guide. Peachpit Press, 2002.

[27]Zhengyou Zhang. Microsoft Kinect sensor and its effect. MultiMedia, IEEE, 19(2):4–10, 2012.

[28]Jungong Han, Ling Shao, Dong Xu, and Jamie Shotton. Enhanced computer vision with microsoft kinect sensor: A review. 2013.

[29]Tim Dobbert. Matchmoving: the invisible art of camera tracking. John Wiley & Sons, 2012.

[30]G Mallikarjuna Rao and Ch Satyanarayana. Visual object target tracking using particle filter: A survey. International Journal of Image, Graphics and Signal Processing (IJIGSP), 5(6):1250, 2013.

[31]Bruce D Lucas, Takeo Kanade, et al. An iterative image registration technique with an application to stereo vision. In IJCAI, volume 81, pages 674–679, 1981.

[32]Carlo Tomasi and Takeo Kanade. Detection and tracking of point features. School of Computer Science, Carnegie Mellon Univ., 1991.

[33]Jianbo Shi and Carlo Tomasi. Good features to track. In Computer Vision and Pattern Recognition, 1994. Proceedings CVPR'94., 1994 IEEE Computer Society Conference on, pages 593–600. IEEE, 1994.

[34]Dorin Comaniciu, Visvanathan Ramesh, and Peter Meer. Kernel-based object tracking. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 25(5):564–577, 2003.

[35]Pu Xiaorong and Zhou Zhihu. A more robust mean shift tracker on joint colorcltp histogram. International Journal of Image, Graphics and Signal Processing (IJIGSP), 4(12):34, 2012.

[36]Allan D Jepson, David J Fleet, and Thomas F El-Maraghi. Robust online appearance models for visual tracking. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 25(10):1296–1311, 2003.

[37]Alper Yilmaz, Omar Javed, and Mubarak Shah. Object tracking: A survey. Acm computing surveys (CSUR), 38(4):13, 2006.

[38]Nadia Magnenat-Thalmann and Daniel Thalmann. Handbook of virtual humans. John Wiley & Sons, 2005.

[39]Huiyu Zhou and Huosheng Hu. Human motion tracking for rehabilitation—a survey. Biomedical Signal Processing and Control, 3(1):1 – 18, 2008.

[40]S. Obdrzalek, G. Kurillo, F. Ofli, R. Bajcsy, E. Seto, H. Jimison, and M. Pavel. Accuracy and robustness of kinect pose estimation in the context of coaching of elderly population. In Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE, pages 1188–1193, Aug 2012.

[41]Lulu Chen, Hong Wei, and James Ferryman. A survey of human motion analysis using depth imagery. Pattern Recognition Letters, 34(15):1995–2006, 2013.

[42]Thibaut Weise, Sofien Bouaziz, Hao Li, and Mark Pauly. Realtime performance based facial animation. ACM Transactions on Graphics (TOG), 30(4):77, 2011.

[43]Hirokazu Kato and Mark Billinghurst. Marker tracking and hmd calibration for a video-based augmented reality conferencing system. In Augmented Reality, 1999.(IWAR'99) Proceedings. 2nd IEEE and ACM International Workshop on, pages 85–94. IEEE, 1999.

[44]Vincent Lepetit and Pascal Fua. Monocular model-based 3d tracking of rigid objects: A survey. Foundations and trends in computer graphics and vision,1(CVLAB-ARTICLE-2005-002):1–89, 2005.

[45]Enrico Costanza, Andreas Kunz, and Morten Fjeld. Mixed reality: A survey. Springer, 2009.

[46]Hirokazu Kato, Mark Billinghurst, Ivan Poupyrev, Kenji Imamoto, and Keihachiro Tachibana. Virtual object manipulation on a table-top ar environment. In Augmented Reality, 2000.(ISAR 2000). Proceedings. IEEE and ACM International Symposium on, pages 111–119, 2000.

[47]Istvan Barakonyi, Tamer Fahmy, and Dieter Schmalstieg. Remote collaboration using augmented reality videoconferencing. In Proceedings of Graphics interface 2004, pages 89–96. Canadian Human-Computer Communications Society, 2004.

[48]Hirofumi Saito and Jun'ichi Hoshino. A match moving technique for merging cg and human video sequences. In Acoustics, Speech, and Signal Processing, 2001. Proceedings.(ICASSP'01). 2001 IEEE International Conference on, volume 3, pages 1589–1592. IEEE, 2001.

[49]Lee Lanier. Digital Compositing with Nuke. Taylor & Francis, 2012.

[50]Todor Georgiev. Photoshop healing brush: a tool for seamless cloning. In Workshop on Applications of Computer Vision (ECCV 2004), pages 1–8, 2004.

[51]Patrick Pérez, Michel Gangnet, and Andrew Blake. Poisson image editing. In ACM Transactions on Graphics (TOG), volume 22, pages 313–318. ACM, 2003.

[52]Jiaya Jia, Jian Sun, Chi-Keung Tang, and Heung-Yeung Shum. Drag-and-drop pasting. In ACM Transactions on Graphics (TOG), volume 25, pages 631–637. ACM, 2006.

[53]Meng Ding and Ruo-Feng Tong. Content-aware copying and pasting in images. The Visual Computer, 26(6-8):721–729, 2010. 

[54]Michael W Tao, Micah K Johnson, and Sylvain Paris. Error-tolerant image compositing. International journal of computer vision, 103(2):178–189, 2013.

[55]Sameh Zarif, Ibrahima Faye, and Dayang Rohaya. Fast and efficient video completion using object prior position. In Advances in Visual Informatics, pages 241–252. Springer, 2013.

[56]Arjan Gijsenij, Theo Gevers, and Joost Van De Weijer. Computational color constancy: Survey and experiments. Image Processing, IEEE Transactions on, 20(9):2475–2489, 2011.

[57]Johannes von Kries. Influence of adaptation on the effects produced by luminous stimuli. Sources of color vision, pages 109–119, 1970.

[58]Paul Heckbert. Color image quantization for frame buffer display, volume 16. ACM, 1982.

[59]Hani M Ibrahem. An efficient and simple switching filter for removal of high density salt-and-pepper noise. International Journal of Image, Graphics and Signal Processing (IJIGSP), 5(12):1, 2013.

[60]Peter Schallauer and Roland Morzinger. Rapid and reliable detection of film grain noise. In Image Processing, 2006 IEEE International Conference on, pages 413–416, 2006.

[61]Aaron Hertzmann, Charles E Jacobs, Nuria Oliver, Brian Curless, and David H Salesin. Image analogies. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 327–340, 2001.

[62]M-E Nilsback and Andrew Zisserman. A visual vocabulary for flower classification. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages 1447–1454, 2006.

[63]Mahmoud Afifi, Khaled F. Hussain, Hosny M. Ibrahim, and Nagwa M. Omar. A Low-cost System for Generating Near-realistic Virtual Actors. 3D Research, Springer, DOI: 10.1007/s13319-015-0050-y, 6(2):1-21, 2015.

[64]Philip HS Torr and DavidWMurray. The development and comparison of robust methods for estimating the fundamental matrix. International journal of computer vision, 24(3):271–300, 1997.

[65]Paul Slinger, Seyed Ali Etemad, and Ali Arya. Intelligent toolkit for procedural animation of human behaviors. In Proceedings of the 2009 Conference on Future Play on@ GDC Canada, pages 27–28, 2009.