Comparative Study of High Speed Back-Propagation Learning Algorithms

Full Text (PDF, 372KB), pp. 34-40


Author(s)

Saduf 1,*, Mohd. Arif Wani 1

1. Dept. of Computer Sciences, University of Kashmir, Srinagar, J&K, India

* Corresponding author.

DOI: https://doi.org/10.5815/ijmecs.2014.12.05

Received: 10 Aug. 2014 / Revised: 26 Sep. 2014 / Accepted: 2 Nov. 2014 / Published: 8 Dec. 2014

Index Terms

ANN, gain, momentum, error saturation, local minima.

Abstract

Back propagation is one of the best-known training algorithms for the multilayer perceptron. However, its rate of convergence tends to be relatively slow, which in turn makes it computationally expensive. Over the years, many modifications have been proposed to improve the efficiency and convergence speed of the back propagation algorithm. The main emphasis of this paper is on investigating the performance of improved versions of the back propagation algorithm in training neural networks. All of them are assessed on different training sets, and a comparative analysis is made. Results of computer simulations with standard benchmark problems such as XOR, 3-bit parity, modified XOR, and IRIS are presented. The training performance of these algorithms is evaluated in terms of percentage accuracy and convergence speed.
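The variants compared in the paper all build on standard back propagation; the index terms (momentum, gain) and the XOR benchmark suggest the baseline sketched below, which is batch back propagation with a momentum term trained on XOR. This is an illustrative sketch only, not the paper's implementation: the 2-2-1 topology, sigmoid activations, learning rate, momentum coefficient, and epoch count are all assumptions chosen for demonstration.

```python
# Minimal sketch: batch back propagation with momentum on the XOR benchmark.
# The 2-2-1 topology and the hyperparameters below are illustrative
# assumptions, not the settings used in the paper.
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-2-1 network with small random initial weights; the bias is folded in
# as an extra weight row fed by a constant 1 input.
W1 = rng.uniform(-0.5, 0.5, size=(3, 2))   # input (+bias) -> hidden
W2 = rng.uniform(-0.5, 0.5, size=(3, 1))   # hidden (+bias) -> output

eta, alpha = 0.5, 0.9                      # learning rate, momentum coefficient
dW1_prev = np.zeros_like(W1)
dW2_prev = np.zeros_like(W2)

for epoch in range(10000):
    # Forward pass (constant-1 bias unit appended to each layer's input).
    Xb = np.hstack([X, np.ones((4, 1))])
    H = sigmoid(Xb @ W1)
    Hb = np.hstack([H, np.ones((4, 1))])
    Y = sigmoid(Hb @ W2)

    # Backward pass: deltas for the quadratic error E = 0.5 * sum((T - Y)^2).
    delta_out = (Y - T) * Y * (1 - Y)
    delta_hid = (delta_out @ W2[:2].T) * H * (1 - H)  # bias row not propagated

    # Momentum update: dW = -eta * gradient + alpha * previous dW.
    dW2 = -eta * Hb.T @ delta_out + alpha * dW2_prev
    dW1 = -eta * Xb.T @ delta_hid + alpha * dW1_prev
    W2 += dW2
    W1 += dW1
    dW2_prev, dW1_prev = dW2, dW1

print(np.round(Y, 3))  # outputs should approach [0, 1, 1, 0]
```

With alpha set to 0, this reduces to plain gradient descent; the momentum term reuses the previous weight change to damp oscillations and speed convergence, which is the kind of modification the compared algorithms refine.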

Cite This Paper

Saduf, Mohd. Arif Wani, "Comparative Study of High Speed Back-Propagation Learning Algorithms", International Journal of Modern Education and Computer Science (IJMECS), vol. 6, no. 12, pp. 34-40, 2014. DOI: 10.5815/ijmecs.2014.12.05
