Deep Learning in Character Recognition Considering Pattern Invariance Constraints

Full Text (PDF, 629 KB), pp. 1-10


Author(s)

Oyebade Kayode Oyedotun 1,*, Ebenezer Obaloluwa Olaniyi 1, Adnan Khashman 2

1. Near East University / Electrical & Electronic Engineering, Lefkosa, via Mersin-10, Turkey; Member, Centre of Innovation for Artificial Intelligence (CiAi)

2. Founding Director, Centre of Innovation for Artificial Intelligence (CiAi), British University of Nicosia, Girne, Mersin 10, Turkey

* Corresponding author.

DOI: https://doi.org/10.5815/ijisa.2015.07.01

Received: 6 Oct. 2014 / Revised: 11 Feb. 2015 / Accepted: 3 Apr. 2015 / Published: 8 Jun. 2015

Index Terms

Deep Learning, Character Recognition, Pattern Invariance, Yoruba Vowels

Abstract

Character recognition is a field of machine learning that has been under research for several decades. The success of neural networks in pattern recognition, and hence in character recognition, is well established. Research has long shown that a network with a single hidden layer can approximate any function; this, together with the difficulties of training deep networks, meant that deep architectures received little attention for many years. Recently, breakthroughs in training deep networks through various pre-training schemes have led to a resurgence of interest in them, with deep networks significantly outperforming shallow ones in several pattern recognition contests; moreover, the more elaborate distributed representation of knowledge across the different hidden layers accords with findings on the biological visual cortex. This work reviews some of the most successful pre-training approaches for initializing deep networks, such as stacked autoencoders and deep belief networks, and compares them on the basis of achieved error rates. In parallel, it investigates the performance of deep networks on some common problems associated with pattern recognition systems, namely translational invariance, rotational invariance, scale mismatch, and noise. A database of Yoruba vowel characters is used for these experiments.
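To make the two ideas in the abstract concrete, the sketch below illustrates, in Python, greedy layer-wise pre-training with stacked autoencoders and the kind of input perturbations (translation, noise) against which invariance can be tested. This is a minimal illustration, not the authors' code: all layer sizes, learning rates, image dimensions, and perturbation amounts are assumptions and do not reproduce the paper's configuration or data.

# Minimal sketch (not the paper's code) of greedy layer-wise pre-training with
# stacked autoencoders, plus simple invariance-style perturbations of an input.
# Layer sizes, learning rate, epochs, and perturbation amounts are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, epochs=100, lr=0.5):
    """Train one sigmoid autoencoder on X; return encoder weights, bias, codes."""
    n_samples, n_in = X.shape
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # encoder weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))   # decoder weights
    b2 = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                  # hidden code
        R = sigmoid(H @ W2 + b2)                  # reconstruction of X
        dR = (R - X) * R * (1.0 - R)              # grad of squared error at output
        dH = (dR @ W2.T) * H * (1.0 - H)          # backpropagated to hidden layer
        W2 -= lr * (H.T @ dR) / n_samples
        b2 -= lr * dR.mean(axis=0)
        W1 -= lr * (X.T @ dH) / n_samples
        b1 -= lr * dH.mean(axis=0)
    return W1, b1, sigmoid(X @ W1 + b1)

# Placeholder data standing in for flattened 32x32 character images.
X = rng.random((200, 1024))

# Greedy layer-wise pre-training: each autoencoder reconstructs the codes of
# the layer below; the learned encoders then initialise a deep network that
# would be fine-tuned with backpropagation on the labelled character data.
stack, inputs = [], X
for n_hidden in (256, 64):                        # assumed hidden-layer sizes
    W, b, inputs = train_autoencoder(inputs, n_hidden)
    stack.append((W, b))

# Invariance-style perturbations of one test image, in the spirit of the
# translation, rotation, scale, and noise constraints studied in the paper.
img = X[0].reshape(32, 32)
shifted = np.roll(img, shift=2, axis=1)           # translate 2 pixels horizontally
noisy = np.clip(img + rng.normal(0.0, 0.1, img.shape), 0.0, 1.0)  # additive noise

A deep belief network would replace the autoencoder stack with restricted Boltzmann machines trained by contrastive divergence, but the layer-wise initialisation idea is the same.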

Cite This Paper

Oyebade K. Oyedotun, Ebenezer O. Olaniyi, Adnan Khashman, "Deep Learning in Character Recognition Considering Pattern Invariance Constraints", International Journal of Intelligent Systems and Applications (IJISA), vol. 7, no. 7, pp. 1-10, 2015. DOI: 10.5815/ijisa.2015.07.01
