ABSTRACT
Improving the classification accuracy of hyperspectral images has been a challenging task for researchers in recent years. High dimensionality, that is, a large number of spectral bands relative to a comparatively small number of training samples, leads to a higher misclassification rate; this problem is known as the 'Hughes phenomenon'. To overcome it, many feature selection and feature extraction methods have been applied to reduce the dimensionality of hyperspectral images. In this paper, we introduce a new framework of class-wise Principal Component Analysis (PCA) for the dimensionality reduction of hyperspectral images. Our proposed approach comprises four major steps. The first step separates the pixels according to the classes present in the image. Secondly, features are extracted from each class with PCA. Thirdly, the same number of PCA features is retained from each class, chosen according to their maximum average variance. In the final step, an SVM classifier is used to obtain the classification results. Experimental results show that our proposed approach outperforms the baseline classification: the SVM classifier alone achieves an accuracy of 89.02%, while our proposed method achieves 95.44%.
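The four steps above can be sketched in code. This is a minimal, hedged interpretation rather than the authors' exact algorithm: it assumes that one PCA is fitted per class on that class's training pixels, that the same number of components `k` is kept for each class, and that every pixel is projected onto each class-specific basis with the results concatenated before SVM classification. The data here is synthetic, standing in for labeled hyperspectral pixels.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy stand-in for hyperspectral data: 300 pixels, 50 spectral bands, 3 classes.
rng = np.random.default_rng(0)
n_bands, n_classes = 50, 3
X = rng.normal(size=(300, n_bands))
y = rng.integers(0, n_classes, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1 + 2: separate training pixels by class and fit one PCA per class.
k = 5  # step 3: the same number of features is kept from every class
pcas = {c: PCA(n_components=k).fit(X_tr[y_tr == c]) for c in range(n_classes)}

def classwise_features(X):
    """Project pixels onto each class-specific PCA basis and concatenate,
    giving a (n_samples, k * n_classes) feature matrix."""
    return np.hstack([pcas[c].transform(X) for c in range(n_classes)])

# Step 4: classify the class-wise PCA features with an SVM.
clf = SVC(kernel="rbf").fit(classwise_features(X_tr), y_tr)
acc = clf.score(classwise_features(X_te), y_te)
print(f"toy accuracy: {acc:.2f}")
```

On random data the accuracy is near chance; the point is only the shape of the pipeline, in which the choice of `k` and of the per-class projection scheme are the assumptions noted above.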
Index Terms
- An Improved Class-wise Principal Component Analysis Based Feature Extraction Framework for Hyperspectral Image Classification