Extracting nonlinear features for multispectral images by FCMC and KPCA

https://doi.org/10.1016/j.dsp.2004.12.004

Abstract

Classification is an important task in scene interpretation and other applications of multispectral images, and feature extraction is a key step in classification. Extracting nonlinear features, rather than the same number of linear features in the original feature space, can substantially improve classification accuracy for multispectral images. This paper therefore proposes an approach based on fuzzy c-means clustering (FCMC) and kernel principal component analysis (KPCA) for the feature-extraction problem in multispectral images. The main contribution is an effective preprocessing method for classifying such images. Experimental results demonstrate that the proposed method is effective and efficient for analyzing multispectral images.
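To make the KPCA step concrete, below is a minimal NumPy sketch of kernel PCA with a Gaussian (RBF) kernel, the standard formulation from Schölkopf et al.: build the kernel matrix, center it in feature space, eigendecompose, and project. The `gamma` value and the toy pixel matrix are illustrative assumptions, not parameters from the paper, and the FCMC stage is omitted.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared Euclidean distances, then the Gaussian kernel.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kpca(X, n_components=2, gamma=1.0):
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # Center the kernel matrix in feature space:
    # Kc = K - 1n K - K 1n + 1n K 1n, with 1n = (1/n) * ones.
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigh returns eigenvalues in ascending order; take the largest ones.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors so the expansion coefficients are normalized.
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    # Nonlinear principal components of the training points.
    return Kc @ alphas

# Toy data standing in for multispectral pixels: 100 pixels x 4 bands.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
Z = kpca(X, n_components=3, gamma=0.5)
print(Z.shape)  # (100, 3): three nonlinear features per pixel
```

In the paper's pipeline, the extracted nonlinear components would then feed a classifier; the number of components and the kernel width are the main tuning knobs.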



This work was supported by the National Natural Science Foundation of China (Nos. 60472111 and 60405002).
