Classification Performance Using Principal Component Analysis and Different Values of the Ratio R


  • Jasmina Novakovic, Faculty of Computer Science, Megatrend University Belgrade, Bulevar Umetnosti 29, 11000 Belgrade, Serbia
  • Sinisa Rankov, Megatrend University Belgrade, Bulevar Umetnosti 29, 11000 Belgrade, Serbia


feature extraction, linear feature extraction methods, principal component analysis, classification algorithms, classification accuracy


A comparison of several classification algorithms combined with feature extraction on real datasets is presented. Principal Component Analysis (PCA) is used for feature extraction with different values of the ratio R, evaluated and compared using four types of classifiers on two real benchmark data sets. The accuracy of the classifiers is influenced by the choice of the ratio R. There is no single best value of the ratio R: for different datasets and different classifiers, the accuracy curves as a function of the number of features used may differ significantly. In our experiments, feature extraction is especially effective for classification algorithms that have no inherent feature selection or feature extraction built in, such as nearest-neighbour methods or some types of neural networks.
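The abstract pairs PCA feature extraction, controlled by the ratio R, with classifiers such as nearest-neighbour methods. Assuming R denotes the fraction of total variance retained by the selected principal components (a common convention; the function names below are illustrative, not from the paper), a minimal sketch of the pipeline is:

```python
import numpy as np

def pca_by_ratio(X, R):
    """Project X onto the fewest principal components whose
    cumulative explained-variance ratio reaches R (0 < R <= 1)."""
    Xc = X - X.mean(axis=0)                 # centre the data
    cov = np.cov(Xc, rowvar=False)          # covariance matrix of the features
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # reorder to descending variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratios = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(ratios, R) + 1)  # smallest k with ratio >= R
    return Xc @ eigvecs[:, :k], k

def nn_classify(train_X, train_y, test_X):
    """1-nearest-neighbour classifier (Euclidean distance)."""
    dists = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    return train_y[np.argmin(dists, axis=1)]
```

Varying R changes the number of extracted features k, which in turn shifts the accuracy curve of a downstream classifier; the paper's finding is that the best R is dataset- and classifier-dependent.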


