A new content-based image retrieval technique using color and texture information☆
Graphical abstract
Graphical abstract figure: the binary image (2D signal) R and its reconstructed image; the directional subbands and lowpass subband of a 3-level Contourlet decomposition of the Zoneplate image.
Highlights
- Capture the color characteristics using Zernike chromaticity distribution moments.
- Extract the texture features using an invariant image descriptor in the Contourlet domain.
- Combine the color and texture information for color image retrieval.
Introduction
Nowadays, with the growing number of digital images available on the Internet, efficient indexing and searching have become essential for large image archives. Traditional annotation relies heavily on manual labor to label images with keywords, which unfortunately can hardly describe the diversity and ambiguity of image contents. Hence, content-based image retrieval (CBIR) [1] has drawn substantial research attention over the last decade. CBIR usually indexes images by low-level visual features such as color and texture. These visual features cannot completely characterize semantic content, but they are easier to integrate into mathematical formulations [2]. Extracting good visual features that compactly represent a query image is therefore one of the important tasks in CBIR.
Color is widely regarded as one of the most expressive visual features, and as such it has been extensively studied in the context of CBIR, leading to a rich variety of descriptors. Conventional color features used in CBIR include the color histogram, the color correlogram, and the dominant color descriptor (DCD) [1], [3], [4]. A simple color similarity between two images can be measured by comparing their color histograms. The color histogram, a common color descriptor, records the occurrence frequencies of colors in the image. The color correlogram describes the probability of finding color pairs at a fixed pixel distance and thus provides spatial information; it therefore yields better retrieval accuracy than the color histogram [3]. The DCD is an MPEG-7 color descriptor. It describes the salient color distributions in an image or a region of interest, and provides an effective, compact, and intuitive representation of the colors present in an image. However, DCD similarity matching does not fit human perception very well, and it can produce incorrect ranks for images with similar color distributions [5]. In Ref. [6], Yang et al. presented a color quantization method for dominant color extraction, called the linear block algorithm (LBA), and showed that LBA is efficient in both color quantization and computation. To retrieve more similar images from digital image databases (DBs) effectively, Lu et al. [7] use the color distributions (the mean value and the standard deviation) to represent the global characteristics of the image, and use the image bitmap to represent its local characteristics, increasing the accuracy of the retrieval system. Aptoula et al. [8] presented three morphological color descriptors: one makes use of granulometries computed independently for each subquantized color, and two employ the principle of multiresolution histograms for describing color, using morphological levelings and watersheds, respectively.
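To make the conventional color descriptors above concrete, the following sketch (not taken from the paper) computes a quantized RGB color histogram and compares two histograms by intersection. The function names and the choice of 4 bins per channel are illustrative assumptions.

```python
import numpy as np

def color_histogram(image, bins_per_channel=4):
    """Quantized RGB color histogram; image is an (H, W, 3) uint8 array."""
    # Quantize each channel into bins_per_channel levels.
    q = (image.astype(np.int64) * bins_per_channel) // 256
    # Map each pixel's (r, g, b) bin triple to a single histogram index.
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()  # normalize to occurrence frequencies

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return np.minimum(h1, h2).sum()
```

The color correlogram extends this idea by counting co-occurrences of color pairs at a fixed pixel distance instead of single-pixel frequencies, which is how it captures the spatial information the plain histogram lacks.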
Textures are psycho-physically perceived by the human visual system (HVS), particularly with respect to the orientation and scale of texture patterns. Texture is also an important visual feature that refers to the innate surface properties of an object and their relationship to the surrounding environment. Many objects in an image can be distinguished solely by their textures, without any other information. Conventional texture features used for CBIR include statistical texture features based on the gray-level co-occurrence matrix (GLCM), the Markov random field (MRF) model, the simultaneous auto-regressive (SAR) model, the Wold decomposition model, and the edge histogram descriptor (EHD). Recently, BDIP (block difference of inverse probabilities) and BVLC (block variation of local correlation coefficients) features have been proposed, which effectively measure local brightness variations and local texture smoothness, respectively [9]. These features have been shown to yield better retrieval accuracy than the compared conventional features. Kokare et al. [10] designed a new set of 2D rotated wavelets using Daubechies eight-tap coefficients to improve image retrieval accuracy; the 2D rotated wavelet filters, which are non-separable and oriented, improve the characterization of diagonally oriented textures. In Ref. [11], He et al. presented a novel method that uses non-separable wavelet filter banks to extract features of texture images for texture image retrieval. Compared with traditional tensor-product wavelets (such as the db wavelets), the new method can capture more direction and edge information of texture images. Tzagkarakis et al. [12] described the design of a rotation-invariant texture retrieval system that exploits the non-Gaussian heavy-tailed behavior of the distributions of the subband coefficients, representing the texture information via a steerable pyramid. Han et al. [13] proposed scale-invariant Gabor representations, where each representation requires only a few summations on the conventional Gabor filter impulse responses; the texture features are then extracted from these new representations for scale-invariant texture image retrieval.
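As a hedged illustration of the GLCM-based statistical texture features mentioned above, the sketch below builds a gray-level co-occurrence matrix for a single pixel offset and derives two classic Haralick-style statistics. The function names and the 8-level quantization are illustrative assumptions, not the paper's method.

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy).
    gray is a 2-D uint8 array; intensities are quantized to `levels` bins."""
    g = (gray.astype(np.int64) * levels) // 256
    h, w = g.shape
    a = g[:h - dy, :w - dx].ravel()  # reference pixels
    b = g[dy:, dx:].ravel()          # neighbor pixels at the offset
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)          # count co-occurring level pairs
    return m / m.sum()

def glcm_stats(m):
    """Two classic GLCM statistics: contrast and energy."""
    i, j = np.indices(m.shape)
    contrast = ((i - j) ** 2 * m).sum()  # 0 for a perfectly uniform image
    energy = (m ** 2).sum()              # 1 when all mass is in one cell
    return contrast, energy
```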
Most early studies on CBIR used only a single feature among the various visual features. However, it is hard to attain satisfactory retrieval results with a single feature because, in general, an image contains various visual characteristics. Recently, active research on image retrieval using a combination of color and texture features has been performed [1], [2], [14]. In Ref. [15], two-dimensional or one-dimensional histograms of the CIELab chromaticity coordinates are chosen as color features, and variances extracted by discrete wavelet frames analysis are chosen as texture features. In the scheme of Ref. [16], an efficient approach for querying and retrieval by multiple visual features is proposed; it employs three specialized histograms (distance, angle, and color histograms) to store feature-based information. Choraś et al. [17] developed an original CBIR methodology that uses Gabor filtration, in which texture features based on thresholded Gabor features and color features based on histograms are calculated. Lin et al. [18] proposed three image features for use in image retrieval. The first is based on color distribution and is called the adaptive color histogram (ACH). The second and third, called the adaptive motifs co-occurrence matrix (AMCOM) and the gradient histogram for adaptive motifs (GHAM), are based on color and texture features, respectively. Chun et al. [19] proposed a CBIR method based on a combination of multiresolution color and texture features: color autocorrelograms of the hue and saturation component images in HSV color space are used as color features, and BDIP and BVLC moments of the value component image are adopted as texture features. In Ref. [20], an automatic content-based video shot indexing framework is proposed that employs five types of MPEG-7 low-level visual features (color, texture, etc.). Kebapci et al. [21] presented a content-based image retrieval system for plant images, intended especially for the house plant identification problem; the suitability of various well-known color and texture features was studied, and some new texture matching techniques were introduced. Hiremath et al. [22], [23] presented novel retrieval frameworks for combining multiple kinds of image information, in which local color and texture descriptors are captured in a coarse grid-based segmentation framework.
In this paper, we propose a new content-based image retrieval technique using Zernike chromaticity distribution moments and a rotation- and scale-invariant Contourlet texture feature, which achieves higher retrieval efficiency. The rest of this paper is organized as follows. Section 2 presents the extraction of Zernike chromaticity distribution color moments. Section 3 describes the Contourlet transform and the rotation- and scale-invariant texture representation. Section 4 describes the similarity measure used for image retrieval. Simulation results in Section 5 show the performance of our scheme. Finally, Section 6 concludes the paper.
Section snippets
The Zernike chromaticity distribution moment
In general, color is one of the most dominant and distinguishable low-level visual features for describing an image. Many CBIR systems employ color to retrieve images, such as the QBIC system and VisualSEEK. In the proposed image retrieval method, we capture the characteristics of the color content of an image by using Zernike chromaticity distribution moments computed directly from the chromaticity space. It is shown that the set of Zernike chromaticity distribution moments can provide a compact, fixed-length…
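The paper's exact formulation is not reproduced in this snippet, but the core computation can be sketched as taking Zernike moments of a 2-D distribution f (e.g. a chromaticity histogram) sampled on the unit disk. All function names here are illustrative assumptions, and the standard Zernike definition is used.

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_{n,|m|}(rho)."""
    r = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        r = r + c * rho ** (n - 2 * s)
    return r

def zernike_moment(f, n, m):
    """|Z_nm| of a square 2-D distribution f sampled on [-1, 1]^2,
    restricted to the unit disk (standard Zernike moment definition)."""
    N = f.shape[0]
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0
    v = radial_poly(rho, n, m) * np.exp(1j * m * theta)  # basis V_nm
    h = 2.0 / (N - 1)                                    # grid spacing
    z = (n + 1) / np.pi * (f * np.conj(v) * mask).sum() * h * h
    return abs(z)
```

Taking magnitudes |Z_nm| is what makes the resulting color descriptor rotation-invariant in the chromaticity plane, and a small set of low-order moments gives the compact, fixed-length vector described above.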
The rotation-scale invariant Contourlet texture feature
Most natural surfaces exhibit texture, which is an important low-level visual feature, so texture recognition is a natural part of many computer vision systems. In this paper, we propose a new rotation- and scale-invariant texture representation for image retrieval based on the Contourlet transform.
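The actual Contourlet-domain descriptor is developed later in the paper. As a simplified stand-in, the sketch below measures the energies of a few oriented derivative responses and then makes the orientation-energy vector invariant to circular shifts, which is a common way to approximate rotation invariance over directional subbands; it is not the paper's transform, and all names are illustrative.

```python
import numpy as np

def directional_energies(image, n_dirs=4):
    """Mean absolute response to directional derivatives at n_dirs angles:
    a crude stand-in for the energies of Contourlet directional subbands."""
    g = np.asarray(image, dtype=float)
    # Central differences along x and y, cropped to a common shape.
    gx = (g[:, 2:] - g[:, :-2])[1:-1, :]
    gy = (g[2:, :] - g[:-2, :])[:, 1:-1]
    energies = []
    for k in range(n_dirs):
        a = np.pi * k / n_dirs
        resp = np.cos(a) * gx + np.sin(a) * gy  # derivative along angle a
        energies.append(np.abs(resp).mean())
    return np.array(energies)

def rotation_invariant(energies):
    """Magnitude of the DFT of the orientation-energy vector: unchanged by
    circular shifts, i.e. by rotations that permute the directions."""
    return np.abs(np.fft.fft(energies))
```

Scale invariance can be approached analogously across decomposition levels, e.g. by normalizing or reordering the per-level energies.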
Similarity measure
After the color and texture feature vectors are extracted, the retrieval system combines these feature vectors, calculates the similarity between the combined feature vector of the query image and that of each target image in an image DB, and retrieves a given number of the most similar target images.
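A minimal sketch of such a combined matching step, assuming pre-normalized feature vectors and an illustrative weighted L1 distance (the paper's exact similarity measure and weighting may differ):

```python
import numpy as np

def combined_distance(query, target, w=0.5):
    """Weighted sum of per-feature L1 distances between (color, texture)
    feature pairs; w balances color against texture. Assumes both feature
    vectors were normalized to comparable ranges beforehand."""
    (qc, qt), (tc, tt) = query, target
    return w * np.abs(qc - tc).sum() + (1 - w) * np.abs(qt - tt).sum()

def retrieve(query, db, k=5, w=0.5):
    """Indices of the k database entries most similar to the query."""
    dists = [combined_distance(query, entry, w) for entry in db]
    return np.argsort(dists)[:k]
```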
Experimental results
In this paper, we propose a new and effective color image retrieval scheme combining color and texture information, which achieves higher retrieval efficiency. To evaluate the performance of the proposed algorithm, we conduct an extensive set of CBIR experiments comparing the proposed algorithm to several state-of-the-art image retrieval approaches [16], [23].
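Retrieval accuracy in such experiments is typically reported with precision and recall at a cutoff rank; a generic sketch of these metrics (not the paper's exact evaluation protocol) is:

```python
def precision_at_k(retrieved_ids, relevant_ids, k):
    """Fraction of the top-k retrieved images that are relevant."""
    relevant = set(relevant_ids)
    return sum(1 for r in retrieved_ids[:k] if r in relevant) / k

def recall_at_k(retrieved_ids, relevant_ids, k):
    """Fraction of the relevant images that appear in the top k."""
    relevant = set(relevant_ids)
    return len(relevant & set(retrieved_ids[:k])) / len(relevant)
```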
Conclusion
CBIR is an active research topic in image processing, pattern recognition, and computer vision. In this paper, a CBIR method has been proposed that uses the combination of Zernike chromaticity distribution moments and a rotation- and scale-invariant Contourlet texture descriptor. Experimental results showed that the proposed method yielded higher retrieval accuracy than the other conventional methods with no greater feature vector dimension. In addition, the proposed method almost always showed…
Acknowledgement
This work was supported by the National Natural Science Foundation of China under Grant Nos. 61272416, 60873222, 60773031, the Open Foundation of State Key Laboratory of Information Security of China under Grant No. 04-06-1, and Liaoning Research Project for Institutions of Higher Education of China under Grant Nos. 2008351 & L2010230.
Xiang-Yang Wang is currently a professor with the Multimedia and Information Security Laboratory, School of Computer and Information Technology, Liaoning Normal University, China. His research interests lie in the areas of information security, image processing, pattern recognition, and computer vision.
References (29)
- et al., Performance evaluation and optimization for content-based image retrieval, Pattern Recognition (2006)
- et al., Effective image retrieval using dominant color descriptor and fuzzy support vector machine, Pattern Recognition (2009)
- et al., A fast MPEG-7 dominant color extraction with new similarity measure for image retrieval, Journal of Visual Communication and Image Representation (2008)
- et al., Texture image retrieval using rotated wavelet filters, Pattern Recognition Letters (2007)
- et al., Texture image retrieval based on non-tensor product wavelet filter banks, Signal Processing (2009)
- et al., Rotation-invariant and scale-invariant Gabor features for texture image retrieval, Image and Vision Computing (2007)
- et al., A histogram-based approach for object-based query-by-shape-and-color in image and video databases, Image and Vision Computing (2005)
- et al., An effective image retrieval scheme using color, texture and shape features, Computer Standards & Interfaces (2011)
- Invariant pattern recognition: a review, Pattern Recognition (1996)
- et al., Image retrieval: ideas, influences, and trends of the new age, ACM Computing Surveys (2008)
- Color image retrieval technique based on color features and image bitmap, Information Processing and Management
- Morphological description of color images for content-based image retrieval, IEEE Transactions on Image Processing
Hong-Ying Yang is currently a professor with the School of Computer and Information Technology at the Liaoning Normal University, China. Her research interests include signal processing and communications, digital multimedia data hiding.
Dong-Ming Li received the B.E. degree from the School of Computer and Information Technology, Liaoning Normal University, China, in 2009, where he is currently pursuing the M.S.E. degree. His research interests include image retrieval and signal processing.
☆ Reviews processed and recommended for publication to Editor-in-Chief by Deputy Editor Dr. Ferat Sahin.
1 Tel.: +86 0411 85992415.