Signal Processing
Volume 112, July 2015, Pages 27-33

Low-rank representation for 3D hyperspectral images analysis from map perspective

https://doi.org/10.1016/j.sigpro.2014.06.018

Highlights

  • A novel framework combining the maximum a posteriori (MAP) model and low-rank representation (LRR) is proposed.

  • The use of LRR can model feature selectivity and obtain a more compact and discriminative representation.

  • The use of the MAP model enables us to exploit the connectivity of adjacent pixels in hyperspectral data.

Abstract

Hyperspectral images naturally stand as 3D data, which carry rich semantic information in remote sensing applications. To make full use of 3D hyperspectral images, signal processing and learning techniques have been widely exploited, and the basis is to divide given hyperspectral data into a set of semantic classes for analysis, i.e., segmentation. Segmenting given hyperspectral data is an important and challenging research theme. Recently, to reduce the amount of human labor required to label samples in hyperspectral image segmentation, many approaches have been proposed and have achieved good performance with only a few labeled samples. However, most of them fail to exploit the high spectral correlation across distinct bands and to utilize the spatial information of hyperspectral data. To overcome these drawbacks, a novel framework combining the maximum a posteriori (MAP) model and low-rank representation (LRR) is proposed. In this paper, the low-rank representation, treated as a set of latent variables, exploits the high spectral correlation across distinct bands and yields a more compact and discriminative representation. On the other hand, a novel MAP framework is derived by using the low-rank representation coefficients as latent variables, which increases the probability that neighboring pixels are assigned to the same class. The experimental results and quantitative analysis demonstrate that the proposed approach is effective and obtains high segmentation accuracy compared with state-of-the-art approaches.

Introduction

In remote sensing and other applications, hyperspectral images naturally have three dimensions, and these ‘big data’ can provide abundant semantic information if well-designed signal processing and learning algorithms are employed upon them [1], [2]. Classification or segmentation of 3D hyperspectral images is a perennial topic in machine learning and has received much attention in recent years [3], as it is the basic but most crucial step for semantic analysis. However, the special characteristics of hyperspectral data also bring some obstacles for hyperspectral image analysis, such as the Hughes phenomenon [4], as the dimension increases. Besides, boundaries of objects are often difficult to discriminate owing to the low spatial resolution and the existence of mixed pixels. These obstacles drive the development of new classification or segmentation approaches.

For hyperspectral classification or segmentation, many supervised approaches have been proposed and have performed well in the past. Classic approaches, including maximum likelihood (ML) [5], the nearest neighbor classifier [6], and artificial neural networks [7], have been exploited on hyperspectral data to assign a unique label to each pixel vector. Some sparsity-based supervised classification approaches have also been applied to hyperspectral classification in [8], [9]. All of these approaches require a large number of labeled samples.

In remote sensing, in order to obtain good classification or segmentation performance, a large number of labeled samples are usually required due to factors such as the abundant spectral bands and the Hughes phenomenon [4]. However, a sufficient number of labeled samples is very hard to obtain, because it is extremely difficult and expensive to identify and label samples in remote sensing, and sometimes it is not even feasible. Even though a large amount of research has been devoted to classification in remote sensing [10], [11], [12], the classification of high-dimensional hyperspectral data using limited labeled training samples is still an open research area [1], [13], [14], [27].

Recently, many researchers have devoted themselves to the study of hyperspectral image segmentation with only a limited number of training samples. The support vector machine (SVM) [15], [16], [17] is an effective approach which works well with a few labeled samples in high dimensions. However, it is sensitive to model selection [18]. Linear discriminant analysis (LDA) [19] is also a simple and effective classifier for hyperspectral images, combining easy implementation and clear physical interpretation with high accuracy. Despite its good performance, LDA cannot work when the number of features is higher than the number of training samples. Multinomial logistic regression via variable splitting and augmented Lagrangian (LORSAL), proposed in [20], is one of the latest approaches for hyperspectral segmentation. This approach first uses a multinomial logistic regression (MLR) model to learn the class posterior probability distributions. The information acquired in this step is then exploited to segment the hyperspectral image using a multilevel logistic prior that encodes the spatial information.
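To make these baselines concrete, the following minimal sketch (illustrative only, not taken from the paper; the 10-samples-per-class split and the SVM parameters are assumptions) trains pixel-wise SVM and LDA classifiers on a handful of randomly chosen labeled spectra using scikit-learn.

import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def split_few_labels(y, per_class=10):
    """Randomly keep `per_class` labeled pixels per class for training."""
    train_idx = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        train_idx.extend(rng.choice(idx, size=min(per_class, idx.size), replace=False))
    train_idx = np.asarray(train_idx)
    test_idx = np.setdiff1d(np.arange(y.size), train_idx)
    return train_idx, test_idx

def run_baselines(X, y):
    """X: (n_pixels, n_bands) spectra; y: integer class label per pixel."""
    tr, te = split_few_labels(y)
    svm = SVC(kernel="rbf", C=100.0, gamma="scale").fit(X[tr], y[tr])
    # Classic LDA becomes ill-posed when n_bands exceeds the number of
    # training samples, which is exactly the limitation noted above.
    lda = LinearDiscriminantAnalysis().fit(X[tr], y[tr])
    return svm.score(X[te], y[te]), lda.score(X[te], y[te])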

Despite the good performance achieved by the aforementioned supervised approaches, the high correlation between the abundant hyperspectral bands and the intrinsic structure of hyperspectral data are not considered. In this case, how to exploit this high correlation and the spatial information of hyperspectral data as key priors to effectively improve the segmentation accuracy is a noteworthy problem. Recently, low-rank representation (LRR) has been successfully applied in several areas of image processing [21], [22]. The goal of LRR is to obtain the lowest-rank representation of the data with respect to a given dictionary. LRR is regarded as a good technique to exploit the correlation of data lying in several subspaces. Furthermore, the information of hyperspectral data in different bands is highly correlated [1]. Hence, LRR can be an effective way to divide hyperspectral images into different classes. Nevertheless, the original LRR fails to utilize the spatial information of the data while recovering the subspaces. In this case, the maximum a posteriori (MAP) model is exploited to learn the prior that similar pixels belong to the same class.
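For reference, the LRR problem alluded to here is usually written in its widely used form (the dictionary $A$ is often taken to be the data matrix $X$ itself; the $\ell_{2,1}$ noise term and the weight $\lambda$ are the standard choices rather than anything specific to this paper) as

$\min_{Z,\,E}\ \|Z\|_{*} + \lambda\,\|E\|_{2,1} \quad \text{s.t.} \quad X = AZ + E,$

where $\|Z\|_{*}$ is the nuclear norm, a convex surrogate of $\mathrm{rank}(Z)$, and $\|E\|_{2,1}$ encourages the error to be concentrated on a few corrupted samples (columns).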

In this paper, we propose a mechanism for hyperspectral segmentation that overcomes the aforementioned limitations. The central idea of the proposed approach is to build a MAP model upon LRR. On the one hand, the use of LRR can model feature selectivity and obtain a more compact and discriminative representation. On the other hand, the use of the MAP model enables us to exploit the connectivity of adjacent pixels in hyperspectral data. The proposed approach contains two steps. First, the information contained in the hyperspectral data is transformed into latent variables by using LRR. Second, a novel MAP framework is derived by treating the LRR coefficients as latent variables, which increases the probability that neighboring pixels are assigned to the same class.
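The sketch below illustrates this two-step structure; it is not the authors' implementation. It substitutes the closed-form noiseless LRR solution (the shape-interaction matrix $Z^{*}=V_{r}V_{r}^{T}$) for the paper's LRR model, a multinomial logistic regression for the class-posterior step, and iterated conditional modes (ICM) with a Potts-style prior for the spatial MAP labeling; the function names and the values of `energy` and `beta` are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def lrr_features(X, energy=0.99):
    """Noiseless LRR with dictionary A = X has the closed-form solution
    Z* = V_r V_r^T, where X = U S V^T and r retains `energy` of the spectral
    energy. Each column/row of the symmetric Z* serves as a pixel feature.
    (Dense n x n: fine for a sketch, too large for big scenes.)"""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    Vr = Vt[:r]                     # (r, n_pixels)
    return Vr.T @ Vr                # (n_pixels, n_pixels)

def icm_map(probs, n_rows, n_cols, beta=1.0, n_iter=5):
    """Approximate MAP labels for unary costs -log p(y_i | z_i) plus a Potts
    prior charging `beta` for every 4-neighbor label disagreement."""
    labels = probs.argmax(axis=1).reshape(n_rows, n_cols)
    unary = -np.log(probs + 1e-12).reshape(n_rows, n_cols, -1)
    for _ in range(n_iter):
        for i in range(n_rows):
            for j in range(n_cols):
                cost = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n_rows and 0 <= nj < n_cols:
                        cost += beta * (np.arange(cost.size) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels

def segment(X, train_idx, train_labels, n_rows, n_cols, beta=1.0):
    Z = lrr_features(X)                               # step 1: LRR latent variables
    clf = LogisticRegression(max_iter=1000).fit(Z[train_idx], train_labels)
    probs = clf.predict_proba(Z)                      # step 2a: class posteriors
    return icm_map(probs, n_rows, n_cols, beta=beta)  # step 2b: spatial MAP labeling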

Extensive experiments are conducted on several real hyperspectral data sets, and in most cases the results show that the proposed approach performs better than state-of-the-art approaches. The results also show that the proposed approach is robust to noise.

The rest of the paper is organized as follows. Section 2 describes the problem formulation. The proposed approach is described in detail in Section 3. Section 4 presents the experimental results and discussions. Finally, Section 5 concludes the paper.

Section snippets

Problem formulation

Hyperspectral segmentation aims to divide a remote sensing image into different classes, each of which consists of many adjacent pixels. Let $s \in \{1, \dots, n\}$ index the $n$ pixels of a hyperspectral image, let $\zeta \equiv \{1, \dots, K\}$ denote the set of $K$ labels, and let $\mathbf{x} = (\mathbf{x}_1, \dots, \mathbf{x}_n) \in \mathbb{R}^{d \times n}$ represent an image of $d$-dimensional feature vectors; let $\mathbf{y} = \{y_1, \dots, y_n\} \in \zeta^n$ be the labels of the image and $D_L \equiv \{(\mathbf{x}_1, y_1), \dots, (\mathbf{x}_L, y_L)\} \in (\mathbb{R}^d \times \zeta)^L$ be the training set, where $L$ denotes the total number of available labeled samples. According to the definition,

Proposed approach

In this paper, we formulate a novel MAP framework in which the LRR coefficients are regarded as latent variables. In this section, the proposed approach for hyperspectral segmentation is described in detail.
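As a rough sketch of what such a formulation looks like (the exact likelihood and prior are specified in the paper itself; the Potts/multilevel-logistic prior below is an assumption motivated by the spatial-smoothness argument in the Introduction), the MAP labeling can be written as

$\hat{\mathbf{y}} = \arg\max_{\mathbf{y}\in\zeta^{n}}\ p(\mathbf{y}\mid Z) \ \propto\ \prod_{i=1}^{n} p(\mathbf{z}_{i}\mid y_{i})\,\exp\!\Big(\beta\sum_{(i,j)\in\mathcal{N}}\delta(y_{i}=y_{j})\Big),$

where $\mathbf{z}_{i}$ is the LRR coefficient vector of pixel $i$, $\mathcal{N}$ is the set of neighboring pixel pairs, $\delta(\cdot)$ is the indicator function, and $\beta>0$ controls how strongly adjacent pixels are pushed toward the same label.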

Experiment evaluation

In this section, we evaluate the performance of the proposed approach on two real hyperspectral data sets with a series of experiments, comparing it with support vector machines (SVM), linear discriminant analysis (LDA), and MLR via variable splitting and augmented Lagrangian (LORSAL). It should be noted that the experimental results are strongly affected by the selection of the training data. In order to make the proposed method general, the training data set is always selected at random. All the

Conclusions

Segmentation has long stood as a key step in the semantic understanding of 3D hyperspectral image data, and signal processing and learning techniques have been proposed to achieve satisfactory segmentation performance. In this paper, a novel framework combining the maximum a posteriori (MAP) model and low-rank representation (LRR) is proposed for 3D hyperspectral image segmentation with a limited number of randomly selected samples. The proposed approach fully utilizes the high correlations between

Acknowledgments

This work is supported by the National Basic Research Program of China (973 Program) (Grant no. 2011CB707104), and the National Natural Science Foundation of China (Grant nos. 61172143 and 61100079).

References (28)

  • A. Plaza et al.

    Recent advances in techniques for hyperspectral image processing

    Remote Sens. Environ.

    (2009)
  • X. Lu et al.

    Manifold regularized sparse NMF for hyperspectral unmixing

    IEEE Trans. Geosci. Remote Sens.

    (2013)
  • X. Lu et al.

    Double constrained NMF for hyperspectral unmixing

    IEEE Trans. Geosci. Remote Sens.

    (2014)
  • G.F. Hughes

    On the mean accuracy of statistical pattern recognizers

    IEEE Trans. Inf. Theory

    (1968)
  • X. Jia, Block-based maximum likelihood classification for hyperspectral remote sensing data, in: Proceedings of the...
  • S. Bo, Y. Jing, Specific class extraction from remote sensing imagery based on nearest neighbor classification, in:...
  • D.L. Civco

    Artificial neural networks for land-cover classification and mapping

    Int. J. Geogr. Inf. Syst.

    (1993)
  • Y. Chen, N.M. Nasrabadi, T.D. Tran, Classification for hyperspectral imagery based on sparse representation, in:...
  • Q.S. ul Haq, L. Shi, L. Tao, S. Yang, Hyperspectral data classification via sparse representation in homotopy, in:...
  • D. Lu et al.

    A survey of image classification methods and techniques for improving classification performance

    Int. J. Remote Sens.

    (2007)
  • R. Ji et al.

    Spectral-spatial constraint hyperspectral image classification

    IEEE Trans. Geosci. Remote Sens.

    (2014)
  • P. Ghamisi et al.

    Spectral-spatial classification of hyperspectral images based on hidden Markov random fields

    IEEE Trans. Geosci. Remote Sens.

    (2013)
  • L. Zhang et al.

    On combining multiple features for hyperspectral remote sensing image classification

    IEEE Trans. Geosci. Remote Sens.

    (2012)
  • B. Du et al.

    A manifold learning based feature extraction method for hyperspectral classification

    International Conference on Information Science and Technology (ICIST)

    (2012)