Matrix-based Kernel Method for Large-scale Data Set


Author(s)

Weiya Shi 1,*

1. School of Information Science and Engineering, Henan University of Technology, Zhengzhou, China

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2010.02.01

Received: 5 Sep. 2010 / Revised: 12 Oct. 2010 / Accepted: 16 Nov. 2010 / Published: 8 Dec. 2010

Index Terms

Matrix, kernel, autocorrelation, computation

Abstract

In many kernel methods, one of the important steps is the formation of the kernel matrix. Because the size of the kernel matrix scales with the number of data points, it is infeasible to store and compute the kernel matrix for a large-scale data set. To overcome the computational and storage problems for large-scale data sets, a new framework, the matrix-based kernel method, is proposed. By first dividing the large-scale data set into small subsets, we can treat the autocorrelation matrix of each subset as a special computational unit. A novel polynomial-matrix kernel function is then adopted to compute the similarity between data matrices in place of vectors. The proposed method greatly reduces the size of the kernel matrix, which makes its computation feasible. The effectiveness is demonstrated by experimental results on artificial and real data sets.
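The sketch below illustrates the general idea described in the abstract: split the data into subsets, summarize each subset by its autocorrelation matrix, and evaluate a polynomial kernel between these matrices so the kernel matrix has one row and column per subset rather than per sample. The trace-based kernel form (tr(C_i^T C_j) + c)^d and the function and parameter names (matrix_based_kernel, poly_matrix_kernel, n_subsets) are illustrative assumptions; the exact polynomial-matrix kernel used in the paper may differ.

import numpy as np

def subset_autocorrelation(X):
    # Autocorrelation matrix of one subset X (n_samples x n_features).
    return X.T @ X / X.shape[0]

def poly_matrix_kernel(Ci, Cj, degree=2, coef0=1.0):
    # Assumed polynomial-matrix kernel between two autocorrelation matrices:
    # (trace(Ci^T Cj) + coef0) ** degree. The paper's exact definition may differ.
    return (np.trace(Ci.T @ Cj) + coef0) ** degree

def matrix_based_kernel(X, n_subsets=10, degree=2, coef0=1.0):
    # Divide the data set into subsets, compute each subset's autocorrelation
    # matrix, and build the reduced (n_subsets x n_subsets) kernel matrix.
    subsets = np.array_split(X, n_subsets)
    C = [subset_autocorrelation(S) for S in subsets]
    K = np.empty((n_subsets, n_subsets))
    for i in range(n_subsets):
        for j in range(i, n_subsets):
            K[i, j] = K[j, i] = poly_matrix_kernel(C[i], C[j], degree, coef0)
    return K

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((10000, 50))     # 10,000 samples, 50 features
    K = matrix_based_kernel(X, n_subsets=20)
    print(K.shape)                           # (20, 20) instead of (10000, 10000)

In this sketch the kernel matrix shrinks from 10,000 x 10,000 to 20 x 20, which is what makes storage and subsequent eigen-decomposition or training tractable for large-scale data.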

Cite This Paper

Weiya Shi, "Matrix-based Kernel Method for Large-scale Data Set", IJIGSP, vol.2, no.2, pp.1-10, 2010. DOI: 10.5815/ijigsp.2010.02.01
