A Conic Radon-based Convolutional Neural Network for Image Recognition

Full Text (PDF, 730KB), pp. 1-12


Author(s)

Dhekra El Hamdi 1,*, Ines Elouedi 1, Mai K. Nguyen 2, Atef Hamouda 1

1. Laboratoire d’Informatique, Programmation, Algorithmique et Heuristiques (LIPAH), Faculté des Sciences de Tunis, Université de Tunis El Manar, 1068 Tunis, Tunisia

2. Laboratoire Equipes de Traitement de l’Information et Système (ETIS), CY Cergy Paris Université/ENSEA/ CNRS UMR 8051, F-95000 Cergy-Pontoise, France

* Corresponding author.

DOI: https://doi.org/10.5815/ijisa.2023.01.01

Received: 21 May 2022 / Revised: 25 Jul. 2022 / Accepted: 7 Oct. 2022 / Published: 8 Feb. 2023

Index Terms

Image recognition, Conic Radon Transform, Convolutional Neural Networks

Abstract

This article presents a new approach for image recognition that combines the Conic Radon Transform (CRT) with Convolutional Neural Networks (CNN).
To evaluate this approach on pattern recognition tasks, we built a Radon descriptor that enhances features extracted by the linear, circular, and parabolic Radon transforms. The main idea is to exploit the Conic Radon transform to define a robust image descriptor. Specifically, the Radon transform is first applied to the image; the extracted features are then combined with the image and fed as input to the convolutional layers. Experimental evaluation demonstrates that our descriptor, which joins the extraction of features of different shapes with convolutional neural networks, achieves satisfactory results on publicly available datasets such as ETH80 and FLAVIA. Our approach recognizes objects with an accuracy of 96% on the ETH80 dataset, and it yields accuracy competitive with state-of-the-art methods on the FLAVIA dataset, reaching 98%. We also carried out experiments on the traffic sign dataset GTSRB. In this work we deliberately use simple CNN models in order to focus on the utility of our descriptor, and we propose a new lightweight network for traffic signs that does not require a large number of parameters. The objective is to achieve optimal accuracy while reducing network parameters, so that the approach can be adopted in real-time applications. It classifies traffic signs with a high accuracy of 99%.
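The pipeline described in the abstract — projecting the image along families of curves and appending the projections to the image as extra CNN input — can be illustrated with a minimal sketch of one conic family. This is not the authors' implementation: the parabola parameterization y = c(x − w/2)² + b and the parameter grids below are assumptions made for illustration only.

```python
import numpy as np

def parabolic_radon(image, curvatures, offsets):
    """Sum image intensities along the parabolas y = c*(x - w/2)**2 + b.

    A discrete sketch of one conic family of the Conic Radon Transform;
    the paper's exact parameterization and sampling may differ.
    """
    h, w = image.shape
    xs = np.arange(w)
    sinogram = np.zeros((len(curvatures), len(offsets)))
    for i, c in enumerate(curvatures):
        for j, b in enumerate(offsets):
            ys = np.rint(c * (xs - w / 2) ** 2 + b).astype(int)
            inside = (ys >= 0) & (ys < h)  # keep points that fall inside the image
            sinogram[i, j] = image[ys[inside], xs[inside]].sum()
    return sinogram

# Toy example: an image containing a single bright parabola.
img = np.zeros((64, 64))
xs = np.arange(64)
ys = np.rint(0.02 * (xs - 32.0) ** 2 + 10).astype(int)
img[ys, xs] = 1.0

feat = parabolic_radon(img, curvatures=[0.01, 0.02, 0.03], offsets=[5, 10, 15])
# The response peaks at the (curvature, offset) pair that drew the parabola.
peak = np.unravel_index(np.argmax(feat), feat.shape)  # -> (1, 1), i.e. c=0.02, b=10
```

In the approach the abstract describes, such sinograms — together with linear and circular projections (a linear Radon transform is available, for example, as `skimage.transform.radon`) — would be resized and stacked with the original image as additional input channels for the convolutional layers.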

Cite This Paper

Dhekra El Hamdi, Ines Elouedi, Mai K. Nguyen, Atef Hamouda, "A Conic Radon-based Convolutional Neural Network for Image Recognition", International Journal of Intelligent Systems and Applications (IJISA), Vol. 15, No. 1, pp. 1-12, 2023. DOI: 10.5815/ijisa.2023.01.01
