Implementation of Transfer Learning Using VGG16 on Fruit Ripeness Detection

Full Text (PDF, 490KB), PP.52-61


Author(s)

Jasman Pardede 1,*, Benhard Sitohang 2, Saiful Akbar 2, Masayu Leylia Khodra 2

1. Department of Informatics Engineering, Institut Teknologi Nasional Bandung, Bandung, Indonesia

2. School of Electrical Engineering and Informatics, Institut Teknologi Bandung (ITB), Bandung, Indonesia

* Corresponding author.

DOI: https://doi.org/10.5815/ijisa.2021.02.04

Received: 12 Jun. 2020 / Revised: 27 Sep. 2020 / Accepted: 6 Dec. 2020 / Published: 8 Apr. 2021

Index Terms

Fruit ripeness, transfer learning, MLP, overfitting, accuracy

Abstract

In previous studies, researchers determined fruit ripeness classification using feature descriptors based on color features (RGB, HSL, HSV, and L*a*b*). However, the performance obtained in those experiments remained below expectations: the maximum accuracy was only 76%. Today, transfer learning techniques have been applied successfully in many real-world applications. For this reason, the researchers propose a transfer learning technique using the VGG16 model. The proposed architecture uses VGG16 without its top layer; the top layer is replaced by a Multilayer Perceptron (MLP) block containing a Flatten layer, a Dense layer, and a regularizer. The output of the MLP block uses the softmax activation function. Three regularizers are considered in the MLP block, namely Dropout, Batch Normalization, and kernel regularizers; each is intended to reduce overfitting. The proposed architecture was evaluated on a fruit ripeness dataset created by the researchers. The experimental results show that the proposed architecture achieves better performance and that the choice of regularizer strongly influences system performance. The best performance was obtained by the MLP block with Dropout of 0.5, which increased accuracy by 18.42%; Batch Normalization and kernel regularizers increased accuracy by 10.52% and 2.63%, respectively. This study shows that deep learning with transfer learning consistently outperforms machine learning with traditional feature extraction for fruit ripeness detection, and that Dropout is the most effective technique for reducing overfitting in transfer learning.
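The architecture the abstract describes (a VGG16 base without its top layer, replaced by an MLP block with Flatten, Dense, a regularizer, and a softmax output) could be sketched in Keras roughly as follows. This is an illustrative sketch, not the authors' code: the number of classes and the width of the Dense layer are assumptions not stated in the abstract.

```python
# Sketch of the proposed transfer-learning architecture (assumed details:
# num_classes=3 and a 256-unit Dense layer are illustrative, not from the paper).
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import VGG16

def build_ripeness_model(num_classes=3, weights="imagenet",
                         input_shape=(224, 224, 3)):
    # Convolutional base: include_top=False drops VGG16's original classifier.
    base = VGG16(weights=weights, include_top=False, input_shape=input_shape)
    base.trainable = False  # transfer learning: freeze the pretrained features

    # MLP block replacing the top layer: Flatten -> Dense -> regularizer.
    x = layers.Flatten()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)  # Dropout 0.5: the best-performing regularizer
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(base.input, outputs)
```

Batch Normalization or a kernel regularizer on the Dense layer would be swapped in at the Dropout position to reproduce the other two configurations compared in the paper.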

Cite This Paper

Jasman Pardede, Benhard Sitohang, Saiful Akbar, Masayu Leylia Khodra, "Implementation of Transfer Learning Using VGG16 on Fruit Ripeness Detection", International Journal of Intelligent Systems and Applications (IJISA), Vol.13, No.2, pp.52-61, 2021. DOI:10.5815/ijisa.2021.02.04

Reference

[1]V.E. Nambi, K. Thangavel, A. Manickavasagan, and S. Sharir, “Comprehensive ripeness-index for prediction of ripening level in mangoes by multivariate modelling of ripening behavior”, Int. Agrophys, vol. 31, pp. 35-44, 2017
[2]M. Dadwal and V.K. Banga, “Color Image Segmentation for Fruit Ripeness Detection: A Review”, 2nd International Conference on Electrical, Electronics and Civil Engineering (ICEECE'2012), 2012
[3]S. Naik and B. Patel, “Machine Vision based Fruit Classification and Grading - A Review”, International Journal of Computer Applications, Vol. 170(9), July 2017.
[4]A. Bhargava and A. Bansal, “Fruits and vegetables quality evaluation using computer vision: A review”, Journal of King Saud University – Computer and Information Sciences, 2018.
[5]Y. Onishi, T. Yoshida, K. Kurita, et al., “An automated fruit harvesting robot by using deep learning”. Robomech J 6, 13 (2019) doi:10.1186/s40648-019-0141-2
[6]I. Hussain, Q. He, and Z. Chen, “Automatic Fruit Recognition Based on DCNN For Commercial Source Trace System”, International Journal on Computational Science & Applications (IJCSA), vol. 8(2), pp. 1-14.
[7]H. Muresan and M. Oltean, “Fruit recognition from images using deep learning”, Acta Univ. Sapientiae, Informatica, 2018, pp. 26-42.
[8]L. Danev, “Fruit and vegetable classification from live video”, MInf Project Report, University of Edinburgh, 2017.
[9]F.M.A. Mazen and A.A. Nashat, “Ripeness Classification of Bananas Using an Artificial Neural Network”, Arabian Journal for Science and Engineering, Springer, 2019.
[10]S.R. Dubey and A.S. Jalal, “Fusing Color and Texture Cues to Categorize the Fruit Diseases from Images”, Computer Vision and Pattern Recognition, https://arxiv.org/abs/1405.4930, 2014.
[11]H.M. Zawbaa, M. Abbass, M. Hazman, and A.E. Hassenian, “Automatic Fruit Image Recognition System Based on Shape and Color Features”, Advanced Machine Learning Technologies and Applications. AMLTA 2014.
[12]S. Arivazhagan, R.N. Shebiah, S.S. Nidhyanandhan, and L. Ganesan, “Fruit Recognition using Color and Texture Features”, Journal of Emerging Trends in Computing and Information Sciences, vol. 1, no. 2, 2010.
[13]J. Pardede, M.G. Husain, A.N. Hermana, and S.A. Rumapea, “Fruit Ripeness Based on RGB, HSV, HSL, L*a*b* Color Feature Using SVM”, International Conference of Computer Science and Information Technology, 2019.
[14]A. Patil and A. Zore, “Deep Learning based Computer Vision: A Review”, IJIRT, vol. 5, no. 6, 2018.
[15]M.Z. Alom, T.M. Taha, C. Yakopcic, S. Westberg et al., “A State-of-the-Art Survey on Deep Learning Theory and Architectures”, Electronics, vol. 8, no. 3, 2019.
[16]F. Chollet, “Deep Learning with Python”, 2018.
[17]K. Weiss, T.M. Khoshgoftaar, and D.D.Wang, “A survey of transfer learning”, Journal of Big Data, vol. 3, no.9, 2016.
[18]S. J. Pan and Q. Yang, “A Survey on Transfer Learning”, IEEE Transactions on Knowledge and Data Engineering, Volume 22 Issue 10, October 2010, Pages 1345-1359, 2010.
[19]M. Shaha and M. Pawar, “Transfer Learning for Image Classification”, 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), 2018.
[20]N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone et al., “Parameter-Efficient Transfer Learning for NLP”, Machine Learning, arXiv: 1902.00751, 2019.
[21]H.-C. Shin, H.R. Roth, M. Gao, L. Lu et al., “Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning”, IEEE Transactions on Medical Imaging, vol. 35, no. 5, 2016.
[22]J.Y. Lee, F. Dernoncourt, and P. Szolovits, “Transfer Learning for Named-Entity Recognition with Neural Networks”, Computation and Language, arXiv:1705.06273, 2017.
[23]X. Ying, “An Overview of Overfitting and its Solutions”, Journal of Physics: Conference Series, Intelligent system and control technology, vol. 1168, no. 2, 2019.
[24]S. Salman and X. Liu, “Overfitting Mechanism and Avoidance in Deep Neural Networks”, Machine Learning, arXiv: 1901.06566, 2019.
[25]C. Zhang, O. Vinyals, R. Munos, and S. Bengio, “A Study on Overfitting in Deep Reinforcement Learning”, Machine Learning, arXiv: 1804.06893, 2018.
[26]B. Wu, Z. Liu, Z. Yuan, G. Sun et al., “Reducing Overfitting in Deep Convolutional Neural Networks Using Redundancy Regularizer”, Artificial Neural Networks and Machine Learning – ICANN 2017, Lecture Notes in Computer Science, 2017.
[27]N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever et al., “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, Journal of Machine Learning Research, vol. 15, pp. 1929-1958, 2014.
[28]S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”, Machine Learning, arXiv: 1502.03167, 2015.
[29]A. Khan, A. Sohail, U. Zahoora, and A.S. Qureshi, “A Survey of the Recent Architectures of Deep Convolutional Neural Networks”, Computer Vision and Pattern Recognition, arXiv: 1901.06032, 2019.
[30]S. Shuvaev, H. Giaffar, and A.A. Koulakov, “Representations of Sound in Deep Learning of Audio Features from Music”, arXiv:1712.02898, 2017.
[31]W. Liu, Z. Wang, X. Liu, N. Zeng et al., “A survey of deep neural network architectures and their applications”, Neurocomputing, vol. 234, pp. 11-26, 2017.
[32]J. Pardede, B. Sitohang, S. Akbar, M.L. Khodra, “Improving the Performance of CBIR Using XGBoost Classifier with Deep CNN-Based Feature Extraction”, International Conference on Data and Software Engineering, 2019.
[33]R. Raina, A. Battle, H. Lee, B. Packer et al., “Self-taught learning: Transfer Learning from Unlabeled Data”, Proceedings of the 24th international conference on Machine learning, pp. 759-766, 2007.
[34]C. Tan, F. Sun, T. Kong, W. Zhang et al., “A Survey on Deep Transfer Learning”, Machine Learning, arXiv: 1808.01974, 2018.
[35]Z. Huang, Z. Pan, and B. Lei, “Transfer Learning with Deep Convolutional Neural Network for SAR Target Classification with Limited Labeled Data”, remote sensing, vol. 9, no. 8, 2017.
[36]A. Voulodimos, N. Doulamis, A. Doulamis, and E. Protopapadakis, “Deep Learning for Computer Vision: A Brief Review”, Computational Intelligence and Neuroscience, 2018.
[37]J. Zhang, W. Li, P. Ogunbona, and D. Xu, “Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective”, Computer Vision and Pattern Recognition, arXiv: 1705.04396, 2017.
[38]S. Francis, J.V. Landeghem, and M.F. Moens, “Transfer Learning for Named Entity Recognition in Financial and Biomedical Documents”, Information, vol. 10, 2019.
[39]M. Hussain, J.J. Bird, and D.R. Faria, “A Study on CNN Transfer Learning for Image Classification”, Advances in Intelligent Systems and Computing, vol. 840. Springer, Cham, 2018.
[40]J. Pardede and M.G. Husada, “Comparison of VSM, GVSM, and LSI in Information Retrieval for Indonesian Text”, Jurnal Teknologi, vol. 78, no. 5-6, pp. 51-56, 2016.