Review
Coral reef image classification employing Improved LDP for feature extraction

https://doi.org/10.1016/j.jvcir.2017.09.008

Highlights

  • ILDP is proposed, which reduces the histogram bin size.

  • ILDP reduces time complexity and improves classification accuracy.

  • For effective classification, three classifiers, namely KNN, CNN and SVM, are used.

  • The effectiveness of ILDP is demonstrated by comparison with existing approaches in terms of accuracy and time complexity.

  • The suitability of the proposed work for both texture and coral data sets is justified.

  • Experiments and comparative analysis are conducted on five coral and four texture data sets.

Abstract

This paper presents a scheme for feature extraction that can be applied to the classification of corals in submarine coral reef images. In coral reef image classification, texture features are extracted using the proposed Improved Local Derivative Pattern (ILDP). ILDP determines diagonal directional pattern features based on local derivative variations, which can capture full information. For classification, three classifiers are used, namely Convolutional Neural Network (CNN), K-Nearest Neighbor (KNN) with four distance metrics, namely Euclidean distance, Manhattan distance, Canberra distance and Chi-Square distance, and Support Vector Machine (SVM) with three kernel functions, namely Polynomial, Radial basis function and Sigmoid kernel. The accuracy of the proposed method is compared with Local Binary Pattern (LBP), Local Tetra Pattern (LTrP), Local Derivative Pattern (LDP) and Robust Local Ternary Pattern (RLTP) on five coral data sets and four texture data sets. Experimental results indicate that the ILDP feature extraction method, when tested on five coral data sets, namely EILAT, RSMAS, EILAT2, MLC2012 and SDMRI, and four texture data sets, namely KTH-TIPS, UIUCTEX, CURET and LAVA, achieves the highest overall classification accuracy and the minimum execution time compared to the other methods.

Introduction

Submarine imagery is an aspect of marine science. Object identification in uneven submarine surroundings is not an easy task for several reasons. Coral reefs are some of the most diverse and precious ecosystems on the Earth [1], [2], [3], [4], [5]. Like every other ecosystem, coral reefs naturally change over time. Healthy coral reefs are home to over one million diverse aquatic species. They provide revenue in the order of billions of dollars and millions of jobs in over one hundred countries around the world. Submarine natural scene coral images present several challenges [6] that may vary considerably from one data set to another. The common problems concerning coral images are the following: class imbalance is a general problem, as some coral species are extremely rare; submarine coral images differ in scale, orientation and lighting; loss of red channel information [7], [8] is a common artifact in submarine imaging; and many coral classes are difficult to model. So, submarine image classification with feature extraction is not a simple task. This accentuates the necessity to classify the image with the help of its texture so as to reflect the actual information collected from the real world.

The rest of the paper is structured as follows: Section 1.1 discusses the contribution of the proposed work. A survey of coral image enhancement techniques, feature extraction techniques and classification techniques is given in Section 1.2. Section 1.3 presents an overview of the proposed work. Section 2 presents the proposed feature extraction approach, Improved Local Derivative Pattern (ILDP). The experimental results are presented in Section 3. Finally, the conclusion and future work are discussed in Section 4.

The contributions of the proposed work are as follows:

  • (i)

    An improvement over LDP, termed ILDP, is proposed, which reduces the histogram bin size, thereby reducing the time complexity and improving the recognition rate.

  • (ii)

    For effective classification, three classifiers, namely K-Nearest Neighbor (KNN), Convolutional Neural Network (CNN) and Support Vector Machine (SVM), are used and the results are compared.

  • (iii)

    The effectiveness of ILDP is demonstrated by comparing it with existing approaches in terms of accuracy and time complexity on five coral and four texture data sets.

  • (iv)

    The suitability of the proposed work for both texture and coral data sets is justified with experiments and a comparative analysis against state-of-the-art approaches.

Coral image classification is addressed in three stages, so the related research is presented in order of occurrence, namely Coral Image Enhancement, Feature Extraction and Classification.

Pre-processing is the first step of coral image enhancement. Image enhancement is needed to improve the classification accuracy. The related papers pertaining to coral image enhancement are as follows: Blanchet et al. [6] have used Histogram Equalization for enhancing submarine images. Kevin et al. [9] have proposed a software package written in Visual Basic, CPCe (Coral Point Count with Excel extensions), for coral image analysis using the random point count methodology. These techniques are used for preliminary image analysis such as enhancement, edge detection and segmentation. Beijbom et al. [10] have used coral images in color spaces such as RGB, LAB and HSV for enhancements such as intensity stretching and color channel stretching. Eduardo et al. [11] have used a normalization process to adjust the range of pixel intensity values of coral images so as to increase contrast. Mohammad et al. [12] have used normalization to remove the influence of global illumination in coral images. Shihavuddin et al. [13] have considered Contrast Limited Adaptive Histogram Specification (CLAHS) as an important enhancement technique which provides better results for image enhancement. Dead corals and sand have similar chromaticity and differ only in brightness, so Shiela et al. [4] have combined histogram back propagation with a color matching technique to improve the results. Judging the best enhancement method for a given coral data set is a challenging task. Most importantly, none of the enhancement methods addresses the red channel information loss, which is however necessary for extracting useful color features.

A dominant dictionary-based texture descriptor, the texton, is proposed as a feature by Beijbom et al. [10]. Padmavathi et al. [14] have used Kernel Principal Component Analysis (KPCA) and PCA-SIFT (Principal Component Analysis-Scale Invariant Feature Transform) for dimension reduction and feature extraction of submarine images, respectively. Shihavuddin et al. [13] have employed Completed Local Binary Pattern (CLBP), Grey Level Co-occurrence Matrix (GLCM) with twenty-two features, Gabor filter responses, and opponent angle and hue channel color histograms as feature descriptors. Eduardo et al. [11] have used a bank of Gabor Wavelet filters to extract texture feature descriptors with learning classifiers from the OpenCV library. Shiela et al. [15] have determined the living and the non-living count of corals by extracting texture features using the LBP descriptor. Pican et al. [16] have used GLCM with six features and a Kohonen Self-Organizing Map (SOM) for texture feature extraction. GLCM offers twenty-four types of features, and suitable features have to be chosen for each image.

Blanchet et al. [6] have used CLBP as a texture descriptor and hue and opponent angle histograms as color descriptors for describing submarine coral images. Oscar et al. [17] have represented texture with a bag of words using the Scale Invariant Feature Transform (SIFT), which has four major stages, each of them a time-consuming process. Clement et al. [18] and Soriano et al. [19] have extracted texture features using LBP. According to Hedley et al. [20], ground truth comparisons have revealed high error estimates, with accuracy rarely surpassing 60%. Mohammad et al. [12] have proposed two mapping methods using CLBP. Stokes et al. [21] have considered color and texture descriptors: an RGB histogram is used for color features, and the Discrete Cosine Transform (DCT) is used for texture. Anand Mehta et al. [22] have employed an approach that does not require any explicit feature extraction; the Support Vector classifier implicitly performs feature extraction by means of a kernel defined as the dot product of two non-linearly mapped patterns. Though the feature representations available in the literature are accepted, none has reported performance to a satisfactory level on full-scale natural coral scene image data sets. Hence there is still a need for a feature extraction technique that could better aid the classification process.

Image-based coral classification is done by extracting color and texture features and then classifying them. Anand Mehta et al. [22] and Bewley et al. [23] have classified coral reefs by their texture features using SVM. Three kernel functions, namely Polynomial, Radial basis function and Sigmoid kernel, are used. Anand Mehta et al. [22] have obtained 95% accuracy while classifying three coral species, but only a small number of samples has been used to train and test the classifiers. Dictionary-based methods are further investigated by Bewley et al. [24] using small patches characterized by Principal Component Analysis (PCA) dimensionality-reduced intensity values. Their results, however, suggest that a simple LBP representation remains competitive with such methods. Shiela et al. [4], [15] have classified coral images using a feed-forward back-propagation NN with a rule-based decision tree classifier into three benthic types: living coral, dead coral and sand. Shiela et al. [15] have obtained an overall recognition rate between 60% and 77%. Stokes et al. [21] have used Probability Density Weighted Mean Distance (PDWMD) and Euclidean distance for classification on a data set with eighteen classes. Clement et al. [18] have applied a log-likelihood measure on image blocks to find the best matched texture, with an accuracy of 77%. Soriano et al. [19] have classified corals with the KNN rule, using log-likelihood as the distance metric, and have reported an accuracy of 80%. Mahmood et al. [25] have used a Convolutional Neural Network (CNN) with texton and color features for classification.

Padmavathi et al. [14] have classified submarine images using a Probabilistic Neural Network (PNN), which provides better results than SIFT-based [26], [27] algorithms on a data set with three classes. Mohammad et al. [12] have classified corals and textures using KNN with K = 1 and have reported an accuracy of 90.35%. Shihavuddin et al. [13] have classified corals using techniques such as KNN, Neural Network (NN), SVM and Probability Density Weighted Mean Distance (PDWMD) and have reported an accuracy of 85.5%. Marine habitat is classified by Oscar et al. [17] using a voting-of-best-matches method with 95% confidence bounds. Their classification is achieved through voting for the best match. In their method, each image is classified as belonging to one class, and sub-image level classification is not addressed. Beijbom et al. [10] have classified coral reef images using SVM with a Radial Basis Function kernel. The method has reported an accuracy between 67% and 83% for a nine-class data set of natural images with over one hundred thousand labelled points. Blanchet et al. [6] have obtained an accuracy of 78.7% using three state-of-the-art feature representations, namely LBP combined with color information, textons, and a CNN-based feature. Eduardo et al. [11] have classified corals using nine machine learning algorithms, namely Decision Trees, Random Forest, Extremely Randomised Trees, Boosting, Gradient Boosted Trees, Normal Bayes Classifier, Expectation Maximisation, NN and SVM. On comparison, the Decision Trees algorithm has yielded the most accurate results, and SVM has performed poorly. Jose [28] has classified coral images using Euclidean distance and has reported an accuracy of 80.5%. The classification techniques used for coral data sets can replace many hours of labour by marine biologists dedicated to coral reef studies. However, more work has to be done on coral reef images to improve classification accuracy.

To overcome the gaps in submarine coral image classification, the best enhancement technique has to be used to increase classification accuracy. CLAHE is the best enhancement technique for coral images that are affected by loss of light, loss of color and artifacts. Contrast Stretching (CS) is the best technique for coral images that are affected by low contrast and non-uniform lighting. The best feature extraction technique has to be used for extracting features with minimum execution time. LBP is an efficient technique for feature extraction, but it only covers the neighboring pixels of an image. Since an efficient method that covers the derivative directions of neighboring pixels is needed, the ILDP technique is proposed, which efficiently covers all pixels of an image in the diagonal directions and executes in minimum time. Classification is important, and each coral image must be classified accurately into its class. For coral image classification, SVM, CNN and KNN are known to provide efficient classification accuracy.

The existing techniques for coral reef image classification have been experimented on only their respective data sets; their suitability has not been tested on other data sets. All the earlier techniques have used preset classification schemes for all kinds of submarine coral image data sets. In general, coral image characteristics such as image size, number of images, resolution and color availability vary from one data set to another. So, without a comparison across all standard data sets, it is not possible to assess the correctness and competence of these techniques.

Fig. 1 shows the complete map of the proposed work. The proposed work has 3 stages: Coral Image Enhancement, Feature Extraction using ILDP and Coral Image Classification.

Image enhancement improves the visibility of coral image features by countering various effects of the medium such as blurring, color transfer and light scattering. This image enhancement step helps to improve the accuracy of the coral image classification process [13]. Two image enhancement methods are adopted (Fig. 2). Submarine images tend to be of low contrast, and some features are not visible due to artifacts. CLAHE is a suitable method to enhance such blurred images, as it limits the amplification of noise that would aggravate those artifacts [29]. Contrast Stretching provides a high-contrast coral image by stretching and remapping pixel values [30].

The first stage of image enhancement is achieved with Contrast Limited Adaptive Histogram Equalization (CLAHE) [29]. It locally improves the contrast of images by dividing the image into several sub-regions and transforming the intensity values of each sub-region independently to conform to a specified target histogram [29]. This scheme works very successfully for submarine coral images of any size or resolution, though it is much more computationally intensive than histogram equalization.

The second stage of image enhancement is achieved with Contrast Stretching (CS) [30], also called normalization. It is a straightforward enhancement technique that attempts to improve the contrast of a submarine coral image by stretching its range of intensity values. It is often referred to as Dynamic Range Adjustment (DRA). The results of the image enhancement step are presented in Fig. 2.
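To make the two-stage enhancement concrete, the following is a minimal sketch using OpenCV; the clip limit, the tile size and the choice of applying CLAHE on the luminance channel are illustrative assumptions, not the exact settings used in this work.

```python
# Illustrative two-stage enhancement: CLAHE followed by contrast stretching.
# Clip limit, tile size and the LAB/luminance choice are assumptions for this sketch.
import cv2

def enhance_coral_image(image_bgr, clip_limit=2.0, tile_grid=(8, 8)):
    # Stage 1: CLAHE on the luminance channel to improve local contrast.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l = clahe.apply(l)
    enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    # Stage 2: contrast stretching (normalization) remaps intensities to the full 8-bit range.
    return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX)
```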

Feature extraction provides important information about the textural arrangement of surfaces and its relationship to the surrounding submarine environment. For coral images, features are extracted using texture and color descriptors. LBP is used for feature extraction in coral images [19], [24]. For extracting features efficiently, a novel texture pattern ILDP is proposed. This method extracts features diagonally along four (45°, 135°, 225° and 315°) directions so that it covers all neighboring pixels efficiently as shown in Fig. 3. The speciality of ILDP is that all pixels of the image are considered while reducing the bin size of histogram used in the classification stage.
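The exact ILDP encoding is defined in Section 2. Purely as an illustration of a diagonal derivative pattern of the kind described above, the sketch below compares the signs of first-order derivatives along the four diagonal directions and histograms the resulting codes; the offsets, bit assignment and bin count are assumptions, not the authors' definition.

```python
# Sketch of a diagonal derivative pattern in the spirit of LDP/ILDP (illustrative only).
import numpy as np

DIAGONALS = [(-1, 1), (-1, -1), (1, -1), (1, 1)]  # 45, 135, 225 and 315 degrees

def diagonal_pattern_histogram(gray, bins=16):
    """Encode, per pixel, whether the first-order derivative changes sign along each
    of the four diagonal directions, then histogram the resulting 4-bit codes."""
    g = gray.astype(np.int32)
    h, w = g.shape
    code = np.zeros((h - 4, w - 4), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(DIAGONALS):
        center = g[2:h-2, 2:w-2]
        n1 = g[2+dy:h-2+dy, 2+dx:w-2+dx]            # first diagonal neighbour
        n2 = g[2+2*dy:h-2+2*dy, 2+2*dx:w-2+2*dx]    # second diagonal neighbour
        d_center = center - n1                       # first-order derivative at the centre
        d_neigh = n1 - n2                            # first-order derivative at the neighbour
        code |= (d_center * d_neigh < 0).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)                 # normalized histogram feature vector
```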

Discriminating between coral images can be a very time-consuming task and is prone to human error. Automatic classification offers a great increase in speed as well as greater uniformity among decisions. Three image classification methods are adopted. For smaller coral data sets, the KNN classifier has the best performance [13]. However, as the data sets grow larger, the efficiency of this technique decreases, owing to its storage requirements, slower classification response and lower noise tolerance. SVM can be the more appropriate classifier for bigger data sets [13].

The first classification method is SVM [6], [23], [31] with the Radial Basis Function kernel. SVMs are binary classifiers that estimate the optimum separating hyperplane maximizing the margin between two classes. Given a training set of examples $V = \{(s_i, l_i),\ i = 1, 2, \ldots, L\}$, where $s_i \in \mathbb{R}^q$ and $l_i \in \{-1, 1\}$, a new test sample $x$ is classified using Eq. (1):
$$f(x) = \operatorname{sign}\left(\sum_{i=1}^{L} \alpha_i\, l_i\, k(x_i, x) + b\right)\tag{1}$$
where $\alpha_i$ are the Lagrange multipliers of the dual optimization problem, $b$ is a bias or threshold parameter, and $k$ is a kernel function. A training sample $x_i$ with $\alpha_i > 0$ is called a support vector, and the separating hyperplane maximizes the margin with respect to these support vectors. The Radial Basis Function (RBF) kernel [22], [31], [32], the Polynomial kernel [22] and the Linear kernel [31] are used in the experiments, where $\sigma$ is manually selected and $r$ is fixed as a polynomial degree of 2 for all experiments. The RBF kernel, the Polynomial kernel and the Linear kernel are given by Eq. (2), Eq. (3) and Eq. (4), respectively:
$$k(x_i, x_j) = e^{-\frac{1}{2\sigma^2}\|x_i - x_j\|^2}\tag{2}$$
$$k(x_i, x_j) = [x_i \cdot x_j + 1]^r\tag{3}$$
$$k(x_i, x_j) = x_i \cdot x_j\tag{4}$$
where $x_i$ and $x_j$ are features extracted using ILDP (described elaborately in the subsequent section) on the training and the testing samples, respectively.
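A minimal sketch of how the SVM stage could be set up with scikit-learn is given below; the hyperparameter values and the mapping of $\sigma$ in Eq. (2) to gamma = 1/(2σ²) are assumptions for illustration, not the authors' settings.

```python
# Sketch of the SVM stage over ILDP histogram features (illustrative hyperparameters).
import numpy as np
from sklearn.svm import SVC

def train_svm(train_hists, train_labels, kernel="rbf", sigma=1.0, degree=2):
    if kernel == "rbf":
        clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2))  # Eq. (2)
    elif kernel == "poly":
        clf = SVC(kernel="poly", degree=degree, coef0=1.0)       # Eq. (3)
    else:
        clf = SVC(kernel="linear")                               # Eq. (4)
    clf.fit(np.asarray(train_hists), np.asarray(train_labels))
    return clf

# Usage: clf = train_svm(X_train, y_train, kernel="poly"); predictions = clf.predict(X_test)
```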

The second classification method used is the KNN [33] classifier with four distance metrics, namely Euclidean distance [28], Manhattan distance [34], Canberra distance [34] and Chi-Square distance [35]. It finds the correlation between quantitative and continuous variables. An image is classified by a majority vote of its neighbors, with the image being assigned to the class most common among its k nearest neighbors. It is reported to be faster than most other means of determining correlation. If $x = (x_1, x_2, \ldots, x_n)$ and $u = (u_1, u_2, \ldots, u_n)$, then the Euclidean distance between the feature vectors $x$ and $u$ is given by Eqs. (5), (6). This is used to estimate the similarity for the nearest neighbor classifier. This measure has an intuitive motivation in that it estimates the nearest neighbor between two histograms. Its computational complexity is very low as it involves only straightforward operations. The Manhattan distance between the feature vectors $x$ and $u$ is given by Eq. (7), the Canberra distance by Eq. (8), and the Chi-Square distance by Eq. (9):
$$d(x, u) = \sqrt{(x_1 - u_1)^2 + (x_2 - u_2)^2 + \cdots + (x_n - u_n)^2}\tag{5}$$
$$d(x, u) = \sqrt{\sum_{i=1}^{n}(x_i - u_i)^2}\tag{6}$$
$$d(x, u) = \sum_{i=1}^{n}|x_i - u_i|\tag{7}$$
$$d(x, u) = \sum_{i=1}^{n}\frac{|x_i - u_i|}{|x_i| + |u_i|}\tag{8}$$
$$d(x, u) = \sum_{i=1}^{n}\frac{(x_i - u_i)^2}{x_i + u_i}\tag{9}$$
where $x$ and $u$ are the training and the testing samples generated from the histograms of the features extracted by ILDP.
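The sketch below illustrates the KNN stage over ILDP histograms with the four distance metrics of Eqs. (5)-(9); the zero-bin guards and the function names are illustrative additions.

```python
# Sketch of k-NN classification over ILDP histograms with the four distance metrics.
import numpy as np

def euclidean(x, u):                        # Eqs. (5), (6)
    return np.sqrt(np.sum((x - u) ** 2))

def manhattan(x, u):                        # Eq. (7)
    return np.sum(np.abs(x - u))

def canberra(x, u):                         # Eq. (8); empty bins contribute zero
    denom = np.abs(x) + np.abs(u)
    return np.sum(np.abs(x - u) / np.where(denom == 0, 1, denom))

def chi_square(x, u):                       # Eq. (9); empty bins contribute zero
    denom = x + u
    return np.sum((x - u) ** 2 / np.where(denom == 0, 1, denom))

def knn_predict(test_hist, train_hists, train_labels, distance=chi_square, k=1):
    dists = np.array([distance(test_hist, h) for h in train_hists])
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(np.asarray(train_labels)[nearest], return_counts=True)
    return labels[np.argmax(counts)]        # majority vote among the k nearest neighbours
```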

The third classification method used is CNN [36], [37], which is particularly well-adapted to classifying images. Local receptive fields, shared weights and pooling are the steps followed in a CNN for classification. A small localized region of the input image is connected to each hidden neuron; that region is called the local receptive field of the hidden neuron. The local receptive field is slid over the entire input image, and for each local receptive field there is a different hidden neuron in the first hidden layer. The weights and biases are shared between these neurons. A big advantage of sharing weights and biases is that it greatly reduces the number of parameters involved in a convolutional network. The max-pooling technique is used for pooling; it is a way of condensing the information from the convolutional layer. The training data is used to train the network's weights and biases so that the network classifies the input images.
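The architecture of the CNN used here is not detailed in this section; the following PyTorch sketch merely illustrates the three ingredients named above (local receptive fields with shared weights, and max-pooling) on assumed 64x64 patches, with layer sizes chosen only for the example.

```python
# Minimal CNN sketch (illustrative architecture, not the authors' network).
import torch
import torch.nn as nn

class SmallCoralCNN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5),   # 5x5 local receptive fields with shared weights
            nn.ReLU(),
            nn.MaxPool2d(2),                   # max-pooling condenses the feature maps
            nn.Conv2d(16, 32, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 13 * 13, num_classes)

    def forward(self, x):                      # x: (batch, 3, 64, 64) coral patches
        x = self.features(x)
        return self.classifier(x.flatten(1))
```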

For an entire image, the SDV is considered as the feature vector. The histogram of the SDV, as specified in Eq. (10), corresponds to the feature vector of the test sample, which is compared with the histograms of the SDVs of the training samples during classification:
$$H(I) = H(\mathrm{SDV})\tag{10}$$

Section snippets

The proposed work

In this section, the proposed approach ILDP for feature extraction is discussed elaborately.

Experiment

In this section, experimental results and analysis made with the proposed ILDP for feature extraction are presented for five coral data sets, namely EILAT [42], RSMAS [43], EILAT2 [42], MLC 2012 [10] and SDMRI and four texture data sets, namely, KTH-TIPS [44], UIUCTEX [45], CURET [35], [46] and LAVA that are publicly available. Implementation is carried out using MATLAB 2010a. A concise review of the image data sets used in this work is presented in Table 1. Fig. 8 shows the example image

Conclusion

ILDP feature extraction approach is efficient for submarine coral reef images and texture data sets. ILDP template extracts local information by encoding various spatial relationships that capture more detailed information. The combination of the proposed feature extractor and the classifiers SVM, CNN and KNN attains the best results and are presented here. The accuracy and the efficiency of the proposed method are compared with LBP, LTrP, LDP and RLTP. The existing approaches are compared in

Acknowledgments

The authors would like to thank Oscar Beijbom for making MLC 2012 data set publicly available on the web, ASM Shihavuddin for providing data sets such as EILAT, EILAT2, RSMAS, LAVA, KTH-Tips and UIUCTEX data sets and J.K. Patterson Edward for providing Suganthi Devadason Marine Research Institute (SDMRI) data set.

References (63)

  • Haocheng Wen, Yonghong Tian, Tiejun Huang, Wen Gao, Single underwater image enhancement with a new optical model, in:...
  • O. Beijbom, P.J. Edmunds, D.I. Kline, B.G. Mitchell, D. Kriegman, Automated annotation of coral reef survey images, in:...
  • Eduardo Tusa, Alan Reynolds, David M. Lane, Hyxia Villegas, Antonio Bosnjak, Implementation of a fast coral detector...
  • Mohammad Hossein Shakoor et al., A Novel Advanced Local Binary Pattern for Image-based Coral Reef Classification, Multimedia Tools and Applications (2017)
  • A. Shihavuddin et al., Image-based coral reef classification and thematic mapping, Remote Sensing (2013)
  • G. Padmavathi, M. Muthukumar, S. Thakur, Kernel principal component analysis feature detection and classification for...
  • Ma Shiela Angeli Marcos et al., Automated benthic counting of living and non-living components in Ngedarrak Reef, Palau via subsurface underwater video, Environ. Monit. Assess. (2008)
  • N. Pican, E. Trucco, M. Ross, D.M. Lane, Y. Petillot, I. Tena Ruiz, Texture analysis for seabed classification:...
  • Oscar Pizarro et al., Towards image-based marine habitat classification, Oceans (2008)
  • Ryan Clement, Matthew Dunbabin, Gordon Wyeth, Toward robust image detection of crown-of-thorns starfish for autonomous...
  • Maricor Soriano, Sheila Marcos, Caesar Saloma, Image classification of coral reef components from underwater color...
  • J. Hedley et al., Spectral unmixing of coral reef benthos under ideal conditions, Coral Reefs (2004)
  • M. Dale Stokes et al., Automated processing of coral reef benthic images, Limnol. Oceanogr.: Methods (2009)
  • A. Mehta, E. Ribeiro, J. Gilner, R. van Woesik, Coral reef texture classification using support vector machines, in:...
  • M.S. Bewley, B. Douillard, N. Nourani-Vatani, A. Friedman, O. Pizarro, S.B. Williams, Automated species detection: an...
  • M.S. Bewley, N. Nourani-Vatani, D. Rao, B. Douillard, O. Pizarro, S.B. Williams, Hierarchical classification in AUV...
  • A. Mahmood, M. Bennamoun, S. An, F. Sohely, F. Boussaid, R. Hovey, G. Kendrick, R.B. Fisher, Coral classification with...
  • Silvia Silva da Costa Botelho et al., Appearance-based odometry and mapping with feature descriptors for underwater robots, J. Brazil Comput. Soc. (2009)
  • Yan Ke, Rahul Sukthankar, PCA-SIFT: a more distinctive representation for local image descriptors, in: Proceedings of...
  • A. Diaz Jose, E. Torres Raul, Classification of underwater color images with applications in the study of deep coral...
  • Ravindra Pal Singh et al., Histogram equalization: a strong technique for image enhancement, Int. J. Signal Process., Image Process. Pattern Recogn. (2015)