In data analysis, reducing the dimensionality of data is important: it helps us understand the data, extract new knowledge from it, and decrease computational cost. Principal Component Analysis (PCA) [1, 7, 19] has been applied in many areas as a method of dimensionality reduction. Nonlinear Principal Component Analysis (NLPCA) [1, 7, 19] was originally introduced as a nonlinear generalization of PCA. Both methods were tested on artificial and natural datasets, including data sampled from F(x) = sin(x) + x, the Lorenz attractor, and sunspot data. The experimental results have been analyzed and compared. Generally speaking, NLPCA explains more variance than a neural network PCA (NN PCA) in lower dimensions. However, as the dimension increases, the NLPCA approximation eventually loses its advantage. Finally, we introduce a new combination of NN PCA and NLPCA, and analyze and compare its performance.
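The following minimal sketch (not the thesis code) illustrates the kind of comparison described above: linear PCA versus an autoassociative bottleneck network standing in for NLPCA, evaluated by the fraction of variance each reconstruction explains on points sampled from F(x) = sin(x) + x. It assumes NumPy and scikit-learn; the network architecture, sample size, and training settings are illustrative assumptions, not the ones used in this work.

    # Sketch: PCA vs. autoencoder-style NLPCA on data from F(x) = sin(x) + x.
    # Assumptions: numpy + scikit-learn; 2 -> 10 -> 1 -> 10 -> 2 bottleneck network.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    x = rng.uniform(-3.0, 3.0, size=500)
    data = np.column_stack([x, np.sin(x) + x])   # 2-D curve: (x, sin(x) + x)
    data -= data.mean(axis=0)                    # center the data, as PCA assumes

    def explained_variance(original, reconstruction):
        # Fraction of total variance captured by a reconstruction.
        residual = original - reconstruction
        return 1.0 - residual.var(axis=0).sum() / original.var(axis=0).sum()

    # Linear PCA with a single component.
    pca = PCA(n_components=1).fit(data)
    pca_recon = pca.inverse_transform(pca.transform(data))

    # NLPCA approximated by an autoassociative network with a 1-unit bottleneck,
    # trained to reproduce its own input.
    nlpca = MLPRegressor(hidden_layer_sizes=(10, 1, 10), activation="tanh",
                         max_iter=5000, random_state=0)
    nlpca.fit(data, data)
    nlpca_recon = nlpca.predict(data)

    print(f"PCA   explained variance: {explained_variance(data, pca_recon):.3f}")
    print(f"NLPCA explained variance: {explained_variance(data, nlpca_recon):.3f}")

On a curved one-dimensional manifold such as this, the nonlinear bottleneck typically recovers more variance with a single component than linear PCA, consistent with the low-dimensional behavior noted above.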