Infrared and Visible Image Fusion (IVF) Using Latent Low-Rank Representation and Deep Feature Extraction Network



Author(s)

Teku Sandhya Kumari 1,*, Gundala Sujatha 1, Boddeda Sravya 1, Hari Jyothula 2

1. Vignan’s Institute of Engineering for Women, Department of Electronics and Communication Engineering, Visakhapatnam, Andhra Pradesh, India

2. Ministry of Education, Abu Dhabi, UAE

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2024.03.03

Received: 3 Apr. 2023 / Revised: 20 May 2023 / Accepted: 11 Jul. 2023 / Published: 8 Jun. 2024

Index Terms

Infrared images, Visible images, Image fusion, Latent Low-Rank Representation, VGG-19 network, Dense network

Abstract

The combination of visible and infrared images from different sensors can provide a more detailed and informative image. Visible images capture environmental details and texture, while infrared sensors detect thermal radiation and produce grayscale images with high contrast. These images are useful for distinguishing targets from the background in challenging conditions, such as at night or in inclement weather. When the two types of images are fused, they yield high-contrast images with rich texture and target details. In this paper, an effective image fusion technique has been developed that utilizes the Latent Low-Rank Representation (LatLRR) method, which decomposes the source images into latent low-rank and salient parts to capture common and unique information, respectively. The proposed network design incorporates the dense network and VGG-19 architectures for deep feature extraction from the latent low-rank and salient parts, which minimizes distortion while preserving crucial texture and detail in the output. Weighted-average fusion strategies are used to combine these latent low-rank and salient parts, and the resulting fused features are used for feature reconstruction to generate a fused low-rank part and a fused salient part. These parts are integrated to yield the fused output image. The proposed approach outperforms existing state-of-the-art methods in both visual quality and objective evaluation metrics.
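For concreteness, the PyTorch sketch below illustrates the kind of pipeline the abstract describes; it is not the authors' implementation. Its assumptions: the LatLRR solver is left as a placeholder callable (latlrr_decompose, returning the low-rank and salient parts of an image); the dense-network branch is simplified to plain 0.5/0.5 averaging of the low-rank parts; and the salient-part weights are the softmax-normalized l1-norms of truncated VGG-19 activations, with the relu2_2 cut-off an illustrative choice rather than the paper's exact configuration.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

@torch.no_grad()
def vgg_weight_maps(sal_a, sal_b, cut=9):
    """Softmax-normalized l1-norm of truncated VGG-19 activations as weights.

    sal_a, sal_b: (H, W) grayscale salient parts in [0, 1].
    cut=9 truncates the network after relu2_2 (an illustrative choice).
    """
    vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:cut].eval()
    mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
    acts = []
    for p in (sal_a, sal_b):
        x = (p.expand(1, 3, *p.shape) - mean) / std  # grayscale -> normalized 3-channel batch
        feat = vgg(x)                                # (1, C, h, w) deep features
        act = feat.abs().sum(dim=1, keepdim=True)    # l1-norm activity map
        act = F.interpolate(act, size=p.shape, mode="bilinear",
                            align_corners=False)     # back to source resolution
        acts.append(act)
    w = torch.softmax(torch.cat(acts, dim=0), dim=0)  # per-pixel soft weights
    return w[0, 0], w[1, 0]

@torch.no_grad()
def fuse(ir, vis, latlrr_decompose):
    """Fuse one registered infrared/visible pair ((H, W) tensors in [0, 1]).

    latlrr_decompose is a placeholder for any LatLRR solver; it must return
    the (low_rank, salient) parts of a source image.
    """
    lr_ir, sal_ir = latlrr_decompose(ir)
    lr_vis, sal_vis = latlrr_decompose(vis)

    # Low-rank (common) parts: plain weighted averaging, standing in for
    # the paper's dense-network branch.
    fused_lr = 0.5 * lr_ir + 0.5 * lr_vis

    # Salient (unique) parts: weights driven by deep-feature activity.
    w_ir, w_vis = vgg_weight_maps(sal_ir, sal_vis)
    fused_sal = w_ir * sal_ir + w_vis * sal_vis

    # Reconstruction: recombine the fused parts into the output image.
    return (fused_lr + fused_sal).clamp(0.0, 1.0)

Given registered grayscale tensors ir and vis and any LatLRR implementation, fuse(ir, vis, latlrr_decompose) returns the fused image. The paper additionally performs feature reconstruction on the fused deep features; the sketch collapses that stage into a direct recombination of the two fused parts.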

Cite This Paper

Teku Sandhya Kumari, Gundala Sujatha, Boddeda Sravya, Hari Jyothula, "Infrared and Visible Image Fusion (IVF) Using Latent Low-Rank Representation and Deep Feature Extraction Network", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol. 16, No. 3, pp. 30-38, 2024. DOI: 10.5815/ijigsp.2024.03.03
