A novel method for automated correction of non-uniform/poor illumination of retinal images without creating false artifacts

https://doi.org/10.1016/j.jvcir.2018.01.005

Highlights

  • A novel illumination correction method for color fundus images.

  • A novel color restoration method to minimize the creation of false colors/artifacts.

  • Extensive subjective experiments to assess whether the correction creates false colors or artifacts.

  • Objective experiments to demonstrate the significance of the proposed method.

Abstract

Retinal images are frequently corrupted by unwanted variations in brightness that arise from imperfections in the image acquisition process. This inhomogeneous illumination across the retina can limit the pathological information that can be gained from the image, and can lead to serious difficulties when performing image processing tasks that require qualitative as well as quantitative analysis of the features present in the image. From that perspective, we propose a novel two-step approach for non-uniform and/or poor illumination correction in the context of retinal imaging. A subjective experiment was conducted to ensure that the proposed method does not create visually noticeable false colors or artifacts on the images, especially in areas that did not suffer from non-uniform/poor illumination prior to correction. An objective experiment on 25,872 retinal images was performed to demonstrate the significance of the proposed method for automated pathology detection/classification.

Introduction

Certain eye diseases such as age-related macular degeneration (AMD) [1] and diabetic retinopathy (DR) [2] are becoming increasingly prevalent [3]. AMD and DR can cause blindness if not treated in due time. Early detection is key to treating AMD and DR and to preventing blindness. Color fundus photography [4], a non-invasive examination of the eye, is considered an efficient modality for screening and diagnosing several eye diseases, including DR and AMD. The widespread availability of color fundus cameras and the easily manageable data format have made this imaging technique popular [5].

Retinal images obtained in a screening program are acquired at different sites, using different cameras operated by qualified people with varying levels of experience [5]. This results in large variations in image quality [6] and a relatively high percentage of images with poor illumination. Studies have shown that poor illumination can impede human grading in about 10–15% of retinal images [7]. For automated methods, non-uniform and/or poor illumination (see Fig. 1) can significantly affect grading performance [8], [9]. Methods for automated correction of non-uniform/poor illumination are therefore of utmost importance. While a great number of methods exist for automated correction of non-uniform/poor illumination, the majority of them interfere with the color appearance and create false colors. In retinal imaging, where different pathologies are primarily identified by color, the creation of false colors by automated methods can compromise the entire grading. From that perspective, in this paper we propose a novel two-step approach for illumination correction of color fundus images that does not create false colors or artifacts. We experimentally identify the best color space transform model [10] to split the luminosity channel from the RGB image, so that the illumination correction can be performed in the luminosity channel only, without affecting the chromaticity (i.e. color) values. A novel color restoration method is also proposed to restore the color of the original image, ensuring that the illumination correction does not create false colors/artifacts on the image, especially in areas that did not suffer from non-uniform or poor illumination prior to correction.
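
To make the luminance/chromaticity split concrete, the following is a minimal sketch of the general idea, not the paper's actual pipeline: the RGB image is converted to a candidate color space (CIELAB is used here purely for illustration, since the paper compares several spaces), a correction is applied to the luminance channel only, and the image is converted back with the chromaticity channels untouched. The function name and the input file name are hypothetical.

```python
# Minimal sketch: correct only the luminance channel of a fundus image.
# CIELAB is used purely as an illustrative candidate color space;
# the paper evaluates several spaces to find the best one.
import numpy as np
from skimage import io, color

def correct_luminance_only(rgb_image, correct_fn):
    """Apply an illumination-correction function to the luminance (L) channel,
    leaving the chromaticity (a*, b*) channels untouched."""
    lab = color.rgb2lab(rgb_image)                  # L in [0, 100]
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    L_corrected = correct_fn(L)                     # e.g. a background-subtraction correction
    lab_corrected = np.stack([L_corrected, a, b], axis=-1)
    return color.lab2rgb(lab_corrected)             # chromaticity preserved by construction

# Example usage with a trivial placeholder correction (global mean re-centering):
if __name__ == "__main__":
    img = io.imread("fundus.png")[..., :3] / 255.0  # hypothetical input file
    out = correct_luminance_only(img, lambda L: np.clip(L - L.mean() + 50.0, 0.0, 100.0))
```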

The performance of multiple color space transformations was investigated. Both subjective and objective experiments were performed to evaluate the performance of the proposed method.

Specific contributions of the paper include:

  • 1.

    Identification of the best color space transform for performing illumination correction in the context of retinal imaging, and adaptation of the background subtraction model accordingly to process color fundus photographs.

  • 2.

    A novel color restoration method.

  • 3.

    Extensive subjective experiments to verify that the proposed illumination correction does not create false artifacts, at least in the areas that did not suffer from non-uniform or poor illumination prior to correction.

  • 4.

    Objective experiments on a dataset of 25,872 retinal images to show that the proposed method contributes to better detection/classification of pathologies by automated methods when applied as a pre-processing technique.

Several methods for non-uniform illumination and shade correction have been described in the literature. Popular techniques include linear filtering [11], homomorphic filtering [12] and surface fitting [13]. Linear filtering assumes that only an additive shading component has distorted the image and that it can be estimated by filtering the acquired image with a low-pass filter. Homomorphic filtering assumes that only a multiplicative shading component is present in the acquired image, and estimates it by low-pass filtering the acquired image in the log domain. The surface fitting method assumes that the intensity variations of the background can be estimated by fitting a shading model [14]. Usually, a second-order polynomial is used as the model function for least-squares fitting. This function may represent either the multiplicative or the additive shading component and is then used to estimate the shading-free image. Histogram equalization [15], gamut mapping and gamma correction [16], and the Retinex approach [17], [18] are some of the other commonly used methods for illumination correction.
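
As a concrete illustration of the homomorphic filtering idea described above, the sketch below estimates a multiplicative shading field by low-pass filtering a single channel in the log domain and then divides it out. This is a generic textbook-style sketch, not the method proposed in this paper, and the Gaussian width sigma is an assumed parameter.

```python
# Minimal sketch of homomorphic (multiplicative) shading correction on one channel.
# The Gaussian sigma is an assumed parameter, not taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_correction(channel, sigma=60.0, eps=1e-6):
    """Estimate a slowly varying multiplicative shading field by low-pass
    filtering in the log domain, then divide it out."""
    log_img = np.log(channel.astype(np.float64) + eps)
    log_shading = gaussian_filter(log_img, sigma=sigma)   # low-pass = illumination estimate
    shading = np.exp(log_shading)
    corrected = channel / (shading + eps)
    # Rescale so that the corrected channel keeps the original mean brightness.
    return corrected * (channel.mean() / (corrected.mean() + eps))
```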

In the context of automated analysis of retinal images, Narasimha-Iyer et al. [19] proposed an illumination correction method that combines the advantages of homomorphic filtering and surface fitting, and also exploits retina-specific information. In [9], Foracchia et al. proposed a method for automated luminosity and contrast normalization of retinal images. The method estimates the luminosity and contrast variability in the background part of the image and then compensates for them. Grisan et al. [20] proposed a model-based approach for the correction of retinal images. The method uses the hue, saturation, value (HSV) color space to better decouple the luminance and chromatic information, and then fits an illumination model on a proper subregion (the retinal background) of the saturation and value channels. Leahy et al. [21] applied Laplace interpolation and a multiplicative image formation model for illumination correction of retinal images. In [22], Zheng et al. used the sparsity property of the image gradient distribution for illumination correction of retinal fundus images. Kolar et al. [23] proposed a non-uniform illumination correction method for color fundus images relying on B-spline approximation of the illumination surface.

In [24], Varnousfaderani et al. proposed a method to remove non-uniform illumination in retinal images and improve their contrast based on a reference image. The method uses the LUV color space for normalization. Experiments show that this method significantly increases the accuracy of a computer-based DR grading system.

While a large number of illumination correction methods have been proposed to help automated analysis of retinal images, none of them analysed whether such a correction might change the appearance or color of the overall image, specifically in the areas of the image that do not suffer from non-uniform illumination. In DR or AMD, where each pathology is primarily associated with color, changing the color will have significant consequences for the overall pathology analysis. Most automated retinal image analysis methods that used non-uniform illumination correction as pre-processing performed the correction only on the green channel of the image and then analysed pathology based on the green channel data alone [25], [26]. While a few attempts [20], [24] have been made to split off the luminance or brightness channel of the image and perform illumination correction on that channel only, little reasoning was provided for the choice of a particular model for splitting the luminance channel. We differ from these studies by testing our method on a range of color spaces in an attempt to find the best one in the context of retinal imaging. We also perform a subjective evaluation to analyze the effect of any artifacts created by the method.

Section snippets

Colour spaces

A color space, also known as a color model (or color system), is an abstract mathematical model that describes all realizable color combinations and relates numbers to actual colors [10]. A color space is a useful conceptual tool for understanding the color capabilities of a particular device or digital file. Color spaces are typically divided into two different types, namely, device-dependent color spaces and device-independent color spaces. Device-dependent color spaces express color relative to

Proposed illumination correction

The proposed illumination correction consists of two phases (a minimal illustrative sketch follows the list):

  • Background subtraction based illumination correction on the luminance or brightness channel.

  • Color restoration.
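
The snippets shown here do not spell out the exact background subtraction model or the color restoration scheme, so the following is only a minimal sketch of the two-phase idea under stated assumptions: the background of the luminance channel is estimated with a large median filter (an assumption), subtracted out, and the original color is then restored by scaling the RGB channels with the luminance ratio (also an assumption, not necessarily the authors' color restoration method).

```python
# Minimal two-phase sketch under stated assumptions (median-filter background
# estimate and luminance-ratio color restoration are illustrative choices only).
import numpy as np
from scipy.ndimage import median_filter
from skimage import color

def illumination_correction_sketch(rgb, background_size=101):
    """rgb: float image in [0, 1]. Returns an illumination-corrected RGB image."""
    lab = color.rgb2lab(rgb)
    L = lab[..., 0]                                        # luminance channel, [0, 100]

    # Phase 1: background-subtraction based correction of the luminance channel.
    background = median_filter(L, size=background_size)   # slowly varying illumination
    L_corrected = np.clip(L - background + background.mean(), 0.0, 100.0)

    # Phase 2: color restoration - scale the original RGB by the luminance ratio,
    # so chromaticity is approximately preserved and well-illuminated regions
    # (where the ratio is close to 1) are changed as little as possible.
    ratio = L_corrected / np.maximum(L, 1e-3)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```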

Subjective experiment

This experiment was conducted to identify the best color space transform model for splitting the luminance channel in the context of retinal imaging, and to analyze whether the proposed method creates false colors/artifacts on the images.

Discussions and conclusion

Illumination correction is an important pre-processing step for the grading of retinal pathology. While a large number of methods have been proposed, they did not give enough consideration to analysing the effect of the proposed correction on the overall color appearance of the image, or to whether the method would create false colors. To minimize the effect of color change, a few attempts have been made to convert the RGB image to luminance and chrominance channels and then perform

Funding information

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

References (48)

  • J. Cunha-Vaz, B. Rui, S. Torcato, et al., Computer-Aided Detection of Diabetic Retinopathy Progression. Digital...
  • A.D. Fleming et al., Automated assessment of diabetic retinal image quality based on clarity and field definition, Investig. Ophthalmol. Vis. Sci. (2006)
  • A.A.A. Youssif, A.Z. Ghalwash, A.S. Ghoneim, A comparative evaluation of preprocessing methods for automatic detection...
  • E.S. Varnousfaderani, S. Yousefi, A. Belghith, M.H. Goldbaum, Luminosity and contrast normalization in color retinal...
  • M.D. Fairchild, Color Appearance Models (2005)
  • J.C. Russ, The Image Processing Handbook (1995)
  • R. Guillemaud, Uniformity correction with homomorphic filtering on region of interest, Int. Conf. Image Process. (1998)
  • M.D. Vlachos et al., Non-uniform illumination correction in infrared images based on a modified fuzzy c-means algorithm, J. Biomed. Graphics Comput. (2012)
  • R.C. Gonzalez et al., Digital Image Processing (2008)
  • G. Finlayson et al., Improving gamut mapping color constancy, IEEE Trans. Image Process. (2000)
  • D.J. Jobson et al., Properties and performance of a center/surround retinex, IEEE Trans. Image Process. (1997)
  • B. Li, S. Wang, Y. Geng, Image enhancement based on Retinex and lightness decomposition, in: 18th IEEE International...
  • H. Narasimha-Iyer et al., Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy, IEEE Trans. Biomed. Eng. (2006)
  • E. Grisan, A. Giani, E. Ceseracciu, A. Ruggeri, Model-based illumination correction in retinal images, in: 3rd IEEE...
This paper has been recommended for acceptance by Dr Zicheng Liu.