On the Root-Power Mean Aggregation Based Neuron in Quaternionic Domain

Full Text (PDF, 1160 KB), pp. 11-26


Author(s)

Sushil Kumar 1,*, Bipin K. Tripathi 1

1. Harcourt Butler Technical University/Department of Computer Science & Engineering, Kanpur, 208002, India

* Corresponding author.

DOI: https://doi.org/10.5815/ijisa.2018.07.02

Received: 27 Jun. 2017 / Revised: 5 Aug. 2017 / Accepted: 15 Sep. 2017 / Published: 8 Jul. 2018

Index Terms

Quasi-arithmetic means, Root-power means in quaternionic domain (ℍ), Quaternionic-valued multilayer perceptron, Quaternionic-valued backpropagation, Quaternionic resilient propagation, 3D face recognition

Abstract

This paper presents a new artificial neuron structure based on root-power means (RPM) for quaternionic-valued signals, together with an efficient learning procedure for neural networks built from quaternionic-valued root-power mean neurons (ℍ-RPMN). The main aim of this neuron is to demonstrate the potential of a nonlinear aggregation operation on quaternionic-valued signals within the neuron cell. The RPM spans the entire spectrum of aggregation between minima and maxima, and its degree of compensation varies naturally with a single exponent, so that various existing neuron models arise as special cases. Further, the quaternionic resilient propagation algorithm (ℍ-RPROP) with an error-dependent weight backtracking step significantly accelerates training and exhibits better approximation accuracy. A wide spectrum of benchmark problems is considered to evaluate the performance of the proposed quaternionic root-power mean neuron with the ℍ-RPROP learning algorithm.
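The abstract does not reproduce the aggregation rule itself, so as a point of orientation: the weighted root-power mean in its standard real-valued form is

M_p(x_1, \dots, x_n) = \left( \sum_{k=1}^{n} w_k \, x_k^{\,p} \right)^{1/p},

which, for weights summing to 1, recovers the weighted arithmetic mean at p = 1, the harmonic mean at p = -1, the geometric mean as p \to 0, and the minimum and maximum in the limits p \to -\infty and p \to +\infty. The sketch below illustrates this compensation spectrum with the mean applied component-wise ("split") to quaternion inputs; the function names, the (0, 1] input scaling, and the split treatment are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def rpm_aggregate(x, w, p):
    """Weighted root-power mean, one value per quaternion component.

    x : (n, 4) array of n quaternion inputs (1, i, j, k parts),
        assumed scaled into (0, 1] so fractional powers stay real.
    w : (n,) nonnegative weights, assumed to sum to 1.
    p : nonzero exponent controlling the degree of compensation.
    """
    return ((w[:, None] * x ** p).sum(axis=0)) ** (1.0 / p)

# p -> -inf approaches the component-wise minimum, p -> +inf the
# maximum; p = 1 is the weighted sum of a conventional neuron.
x = np.array([[0.2, 0.9, 0.5, 0.7],
              [0.6, 0.1, 0.5, 0.3]])
w = np.array([0.5, 0.5])
for p in (-40.0, -1.0, 1.0, 2.0, 40.0):
    print(f"p = {p:6.1f} ->", np.round(rpm_aggregate(x, w, p), 3))

For the learning rule, the abstract's "error-dependent weight backtracking" matches the behaviour of the iRPROP+ family: per-parameter step sizes grow while the gradient sign is stable, shrink when it flips, and a sign flip undoes the previous weight change only if the error actually increased. A hedged real-valued sketch follows, with each quaternion weight treated as four independent real parameters (a common split-style simplification, not necessarily the authors' exact ℍ-RPROP):

def irprop_plus_step(w, g, g_prev, dw_prev, step, err, err_prev,
                     eta_plus=1.2, eta_minus=0.5,
                     step_min=1e-6, step_max=50.0):
    """One iRPROP+-style update over an array of real parameters."""
    same = g * g_prev > 0
    flip = g * g_prev < 0
    # Grow steps where the gradient sign is stable, shrink on a flip.
    step = np.where(same, np.minimum(step * eta_plus, step_max), step)
    step = np.where(flip, np.maximum(step * eta_minus, step_min), step)
    dw = np.where(flip, 0.0, -np.sign(g) * step)
    if err > err_prev:                      # backtrack only on worse error
        dw = np.where(flip, -dw_prev, dw)
    g = np.where(flip, 0.0, g)              # suppress adaptation next step
    return w + dw, g, dw, step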

Cite This Paper

Sushil Kumar, Bipin K. Tripathi, "On the Root-Power Mean Aggregation Based Neuron in Quaternionic Domain", International Journal of Intelligent Systems and Applications (IJISA), Vol. 10, No. 7, pp. 11-26, 2018. DOI: 10.5815/ijisa.2018.07.02
