Covid-19 Automatic Detection from CT Images through Transfer Learning

Full Text (PDF, 737KB), pp. 84-95


Author(s)

B. Premamayudu 1,*, Chavala Bhuvaneswari 1

1. VFSTR Deemed to be University, Guntur, Andhra Pradesh, India

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2022.05.07

Received: 31 Mar. 2022 / Revised: 18 Apr. 2022 / Accepted: 27 May 2022 / Published: 8 Oct. 2022

Index Terms

Deep Transfer Learning, COVID-19, Classification, Pre-trained Features.

Abstract

Identification of COVID-19 can help the community and the patient contain the disease and plan treatment at the right time. Deep neural network models are widely used to analyze COVID-19 medical images for automatic detection and to give radiologists decision support in producing accurate reports. This paper proposes deep transfer learning on chest CT scan images for the detection and diagnosis of COVID-19. The VGG19, InceptionResNetV3, InceptionV3, and DenseNet201 networks were used for automatic detection of COVID-19 from CT scan images (the SARS-CoV-2 CT-scan dataset). Four deep transfer learning models were developed, tested, and compared. The main objective of this paper is to use pre-trained features and converge them with target features to improve classification accuracy. DenseNet201 showed the best performance, with a classification accuracy of 99.98% over 300 epochs. The experiments also show that deeper networks struggle to train adequately and are less consistent when data is limited. The DenseNet201 model adopted for COVID-19 identification from lung CT scans was intensively optimized with optimal hyperparameters and performs at a noteworthy level, with 99.2% precision, 100% recall, 99.2% specificity, and a 99.2% F1 score.
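As a concrete illustration of this transfer-learning setup, the sketch below builds a binary COVID-19 / non-COVID-19 classifier on a DenseNet201 base pre-trained on ImageNet: the base is first frozen so only a new classification head is trained, then unfrozen at a low learning rate so the pre-trained features converge with the target CT features. This is a minimal sketch assuming TensorFlow/Keras; the data/train and data/val directory layout, head layers, learning rates, and epoch counts are illustrative assumptions, not the paper's exact configuration.

# Transfer-learning sketch for COVID-19 CT classification (TensorFlow/Keras).
# Assumes CT images arranged as data/{train,val}/{covid,non_covid}/*.png;
# paths and hyperparameters are illustrative, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

IMG_SIZE = (224, 224)
BATCH = 32

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")

# Pre-trained convolutional base: ImageNet weights, classifier head removed.
base = DenseNet201(include_top=False, weights="imagenet",
                   input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False  # stage 1: freeze the pre-trained features

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = base(x, training=False)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # COVID vs. non-COVID
model = models.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: unfreeze the base and fine-tune at a much lower learning rate
# so the pre-trained features adapt to (converge with) the CT domain.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)

With precision and recall tracked during validation, metrics such as specificity and the F1 score reported in the abstract can be derived from the resulting confusion matrix.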

Cite This Paper

B. Premamayudu, Chavala Bhuvaneswari, "Covid-19 Automatic Detection from CT Images through Transfer Learning", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.14, No.5, pp. 84-95, 2022. DOI: 10.5815/ijigsp.2022.05.07

References

[1]M. Abdel-Basset, V. Chang, and R. Mohamed, “HSMA_WOA: A hybrid novel Slime mould algorithm with whale optimization algorithm for tackling the image segmentation problem of chest X-ray images,” Appl. Soft Comput. J., vol. 95, p. 106642, 2020, doi: 10.1016/j.asoc.2020.106642.

[2]Y. H. Jin et al., “A rapid advice guideline for the diagnosis and treatment of 2019 novel coronavirus (2019-nCoV) infected pneumonia (standard version),” Med. J. Chinese People’s Lib. Army, vol. 45, no. 1, pp. 1–20, 2020, doi: 10.11855/j.issn.0577-7402.2020.01.01.

[3]Y. Fang et al., “Sensitivity of Chest CT for COVID-19: Comparison to RT-PCR,” Radiology, vol. 296, no. 2, pp. E115–E117, 2020.

[4]X. Xie, Z. Zhong, W. Zhao, C. Zheng, F. Wang, and J. Liu, “Chest CT for Typical Coronavirus Disease 2019 (COVID-19) Pneumonia: Relationship to Negative RT-PCR Testing,” Radiology, vol. 296, no. 2, pp. E41–E45, 2020, doi: 10.1148/radiol.2020200343.

[5]J. Zhang et al., “Viral Pneumonia Screening on Chest X-Rays Using Confidence-Aware Anomaly Detection,” IEEE Trans. Med. Imaging, vol. 40, no. 3, pp. 879–890, 2021, doi: 10.1109/TMI.2020.3040950.

[6]H. Hou et al., Appl. Intell., 2020, [Online]. Available: http://arxiv.org/abs/2003.13865.

[7]J. P. Cohen, P. Morrison, and L. Dao, “COVID-19 Image Data Collection,” 2020, [Online]. Available: http://arxiv.org/abs/2003.11597.

[8]W. Cai, J. Yang, G. Fan, L. Xu, B. Zhang, and R. Liu, “Chest CT findings of coronavirus disease 2019 (COVID-19),” J. Coll. Physicians Surg. Pakistan, vol. 30, no. 1, pp. S53–S55, 2020, doi: 10.29271/jcpsp.2020.Supp1.S53.

[9]M. V. Moreno et al., “Applicability of Big Data Techniques to Smart Cities Deployments,” IEEE Trans. Ind. Informatics, vol. 13, no. 2, pp. 800–809, 2017, doi: 10.1109/TII.2016.2605581.

[10]M. Abdel-Baset, V. Chang, and A. Gamal, “Evaluation of the green supply chain management practices: A novel neutrosophic approach,” Comput. Ind., vol. 108, pp. 210–220, 2019, doi: 10.1016/j.compind.2019.02.013.

[11]B. Huang et al., “Deep Reinforcement Learning for Performance-Aware Adaptive Resource Allocation in Mobile Edge Computing,” Wirel. Commun. Mob. Comput., vol. 2020, 2020, doi: 10.1155/2020/2765491.

[12]V. Chang, “Computational Intelligence for Medical Imaging Simulations,” J. Med. Syst., vol. 42, no. 1, pp. 1–12, 2018, doi: 10.1007/s10916-017-0861-x.

[13]X. Li, Y. Wang, B. Zhang, and J. Ma, “PSDRNN: An Efficient and Effective HAR Scheme Based on Feature Extraction and Deep Learning,” IEEE Trans. Ind. Informatics, vol. 16, no. 10, pp. 6703–6713, 2020, doi: 10.1109/TII.2020.2968920.

[14]B. Naik, M. S. Obaidat, J. Nayak, D. Pelusi, P. Vijayakumar, and S. H. Islam, “Intelligent Secure Ecosystem Based on Metaheuristic and Functional Link Neural Network for Edge of Things,” IEEE Trans. Ind. Informatics, vol. 16, no. 3, pp. 1947–1956, 2020, doi: 10.1109/TII.2019.2920831.

[15]M. Ma and Z. Mao, “Deep-Convolution-Based LSTM Network for Remaining Useful Life Prediction,” IEEE Trans. Ind. Informatics, vol. 17, no. 3, pp. 1658–1667, 2021, doi: 10.1109/TII.2020.2991796.

[16]S. Wang et al., “A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19),” Eur. Radiol., vol. 31, pp. 6096–6104, 2021.

[17]A. Narin, C. Kaya, and Z. Pamuk, “Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks,” arXiv Prepr. arXiv:2003.10849, 2020, [Online]. Available: https://arxiv.org/abs/2003.10849.

[18]E. Soares, P. Angelov, S. Biaso, M. H. Froes, and D. K. Abe, “SARS-CoV-2 CT-scan dataset: A large dataset of real patients CT scans for SARS-CoV-2 identification,” medRxiv, p. 2020.04.24.20078584, 2020, [Online]. Available: https://www.medrxiv.org/content/10.1101/2020.04.24.20078584v3.

[19]K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., pp. 1–14, 2015.

[20]C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, inception-ResNet and the impact of residual connections on learning,” 31st AAAI Conf. Artif. Intell. AAAI 2017, pp. 4278–4284, 2017.

[21]C. Szegedy et al., “Going deeper with convolutions,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 07-12-June, pp. 1–9, 2015, doi: 10.1109/CVPR.2015.7298594.

[22]C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2016-December, pp. 2818–2826, 2016, doi: 10.1109/CVPR.2016.308.

[23]G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” Proc. - 30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 2261–2269, 2017, doi: 10.1109/CVPR.2017.243.

[24]K. Raveendra and J. Ravi, “Performance Evaluation of Face Recognition System by Concatenation of Spatial and Transformation Domain Features,” International Journal of Computer Network and Information Security, vol. 13, no. 1, pp. 47–60, 2021.

[25]A. K. Chaudhuri, A. Ray, D. K. Banerjee, and A. Das, “A Multi-Stage Approach Combining Feature Selection with Machine Learning Techniques for Higher Prediction Reliability and Accuracy in Cervical Cancer Diagnosis,” International Journal of Intelligent Systems and Applications, vol. 13, no. 5, pp. 46–63, 2021.