1 Introduction

Glaucoma refers to a group of eye diseases associated with optic nerve damage (i.e. glaucomatous optic neuropathy), a common pathology that can lead to blindness.

The guidelines of the International Council of Ophthalmology (ICO) recommend that all patients be checked for glaucoma risk factors during ophthalmic examinations [1]. Among these, elevated intra-ocular pressure (IOP) is considered one of the most indicative parameters to monitor [2], and the structure mainly responsible for its regulation, the trabecular meshwork (TM), is one of the major targets of clinical assessment.

The iridocorneal angle (or simply angle) is the region between the periphery of the iris and the sclera-cornea junction. The TM is a layered structure located on the side wall of the angle between the scleral spur (a collagen ridge of scleral tissue) and Schwalbe's line (a ring demarcating the border between the trabecula and the cornea), as illustrated on the left of Fig. 1.

Fig. 1.

Left: Iridocorneal angle in a gonioscopic image. The TM is located between Schwalbe's line and the scleral spur and is divided into a pigmented (or functional) and a nonpigmented (or nonfunctional) band. Right: Scheie's pigmentation grading system of the TM. Reprinted with permission. Arch Ophthalmol. 1957;58:510–512. Copyright 1957, American Medical Association. All Rights Reserved.

IOP is mainly regulated by the outflow of aqueous humor from the anterior chamber, through the TM into the Schlemm’s canal, and finally into the episcleral veins [3].

Depending on the patient's specific angle conformation, the iris can be separated from the trabecula (open angle) or folded onto it (closed angle), respectively permitting or limiting the normal outflow of aqueous humor and thus determining the overall capacity to control IOP. According to this anatomical categorization, glaucomatous eyes are mainly classified as open-angle or closed-angle [2].

Secondary Open-Angle Glaucoma (SOAG) is related to the dispersion of pigment particles originating from the rubbing of the iris epithelium against the lens zonules [4, 5]. These particles follow the aqueous humor flow and accumulate on the TM, yielding the pigment dispersion syndrome (PDS) [6], which is typical of pathologies like pigmentary glaucoma. Pigmentation may also result from the deposition of membrane-like material, which is distinctive of the pseudoexfoliation syndrome.

Gonioscopy permits the inspection of the iridocorneal angle and is indicated as a mandatory procedure by the ICO for glaucoma assessment. Other techniques for the evaluation of the anterior chamber angle, like ultrasound biomicroscopy or anterior segment OCT, provide an anatomical quantification of the angle width. However, they do not provide any chromatic information, which is also extremely important for a comprehensive and correct diagnosis.

The pigmentation of the TM was first described in 1957 by Scheie [7], who also proposed a pigmentation grading system consisting of five categories, from grade 0 (or None), when no TM pigmentation is perceivable, up to grade IV, when the TM is severely pigmented. Scheie's pigmentation grading system is illustrated on the right of Fig. 1. Usually, pigmentation is not uniform, and tends to be heavier in the inferior part of the eye.

In this paper, a study on an automatic pigmentation grading system for the trabecular meshwork is presented. The proposed algorithm makes use of machine learning in order to provide a classification comparable with Scheie's pigmentation grading system.

The paper is organized as follows. Section 2 presents some insights on the data preparation. Section 3 introduces the processing algorithm. Results are presented in Sect. 4. Finally, conclusions are given in Sect. 5 together with some ideas for the future development of this work.

2 Data Preparation

This study was conducted on a dataset of 317 images of Japanese eyes already diagnosed with glaucoma. All images were collected at the Matsue Red Cross Hospital (Matsue, Japan) in 2016, during one of the clinical trials of the prototype of a newly introduced gonioscopy device (GS-1, Nidek Co., Ltd.). The study adhered to the tenets of the Declaration of Helsinki. The institutional review board of the Matsue Red Cross Hospital reviewed and approved the research. All subjects provided written informed consent.

Acquired images present high variability in illumination conditions, which depends on several factors, such as the power of the illumination LED used during acquisition and the actual reflectivity of the patient's iridocorneal angle. In addition, the pigmentation pattern of the TM may change significantly between different sectors of the same eye. Hence, an initial preprocessing step is necessary to reduce input data variability.

In each image, a region of interest (ROI) was manually defined by a skilled operator so as to include the scleral spur and the TM up to Schwalbe's line. This ROI (marked in red in Fig. 2(a)) contains the information considered during the manual assessment of the Scheie pigmentation grade, which was also given by the clinician and assumed as ground truth.

Given the relatively small size of the available dataset, it was decided to reduce the original five-class Scheie system to a simplified three-class system by grouping together grades 0 and I, and grades III and IV, respectively, while grade II was kept unchanged and common to both grading systems. According to this grouping, the reduced system has grades labeled 0/I, II, and III/IV.
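As a minimal illustration, the five-to-three grouping can be expressed as a lookup table; only the grouping itself comes from the text, while the function name and label strings below are our own:

```python
# Mapping from the original Scheie grades to the reduced three-class system.
# Label strings are hypothetical; only the grouping follows the paper.
SCHEIE_TO_REDUCED = {
    "0": "0/I", "I": "0/I",            # grades 0 and I merged
    "II": "II",                        # grade II unchanged
    "III": "III/IV", "IV": "III/IV",   # grades III and IV merged
}

def reduce_grade(scheie_grade: str) -> str:
    """Return the reduced three-class label for an original Scheie grade."""
    return SCHEIE_TO_REDUCED[scheie_grade]
```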

The rationale behind this decision was that the clinical distinction between grade 0 and grade I, or between grade III and grade IV, given by an expert is often very subjective, because it strongly depends on the illumination conditions during the examination. On the contrary, grade II, having a characteristic pattern with a heavily pigmented central band, is easily identifiable and thus more reliable.

During the selection of the dataset, images were excluded either when too many iris processes or extended synechiae covered the TM (thus preventing the assessment of the pigmentation grade), or when the image quality was considered inadequate because of insufficient brightness or the presence of saturated regions (thus preventing any reliable color analysis).

3 Algorithm Description

The proposed algorithm relies on a machine learning (ML) classifier to attribute the pigmentation grade to the TM portion inside the selected ROI. As usual in ML, an image preprocessing step (implemented as a Matlab script) is first applied to extract the set of relevant features, which are then fed into the classifier.

3.1 Image Preprocessing

Acquired images present high variability in brightness and pigmentation patterns. Thus, an image preprocessing step is required to reach an illumination-independent representation and to simplify the set of features that will be processed by the ML algorithm.

As a preliminary step, RGB colors are converted to the Lab color space. This operation separates the lightness component L from the two chromatic opponent components a and b.
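The paper does not specify how the conversion is implemented; a self-contained sketch of the standard sRGB-to-CIELAB conversion (D65 reference white) for a single pixel could look as follows:

```python
def rgb_to_lab(r8, g8, b8):
    """Convert one 8-bit sRGB pixel to CIELAB (D65 reference white)."""
    def srgb_to_linear(c):
        # Undo the sRGB gamma encoding
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (srgb_to_linear(c) for c in (r8, g8, b8))
    # Linear RGB -> XYZ (sRGB primaries)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    def f(t):
        # CIELAB nonlinearity with the linear toe for small t
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    # Normalize by the D65 white point before applying f
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```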

To address illumination variability, an auxiliary ROI (aux ROI) is considered over the cornea, just anterior to Schwalbe's line. The auxiliary ROI can therefore be automatically placed given the position of the operator-selected ROI (main ROI). Figure 2(a) shows some examples of the placement of the main ROI (in red) and the aux ROI (in blue).

Importantly, the cornea is not pigmented and has a rather homogeneous color (white or slightly yellowish), which makes it suitable for defining a reference point in the Lab color space. The aux ROI median value \((L_{aux}, a_{aux}, b_{aux})\) was chosen as this reference point. As will be seen later, this value is used to obtain an illumination-independent representation of the colors in the main ROI.
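In code, the reference point is simply a channel-wise median over the aux ROI pixels; the function name and the boolean-mask representation of the ROI below are our assumptions:

```python
import numpy as np

def aux_reference(lab, aux_mask):
    """Channel-wise median Lab value over the auxiliary corneal ROI,
    used as the illumination reference point (L_aux, a_aux, b_aux).
    `lab` is an (H, W, 3) Lab image, `aux_mask` an (H, W) boolean mask."""
    pixels = lab[aux_mask]               # shape (n_pixels, 3)
    return tuple(np.median(pixels, axis=0))
```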

To cope with pattern variability, a key observation from the right of Fig. 1 is that, as a first approximation, assessing the pigmentation grade may only require characterizing the color gradient along the radial direction with respect to the pupil's center, rather than any specific pigmentation pattern of the TM.

To account for this geometrical behavior, pixels in the main ROI are grouped into a discrete number of arched stripes according to their distance from the estimated pupil's center. Then, each pixel is replaced by the median Lab value of the stripe it belongs to. The effect of this operation is shown in Fig. 2(b).

The radially-dependent median operator simplifies the pattern while maintaining the color gradient along the radial direction. In addition, the median operator may also filter out sporadic pigmentation spots, as in the top of Fig. 2(b), or even isolated iris processes possibly entering the main ROI.
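A minimal sketch of this radially-dependent median, assuming the ROI is given as a boolean mask and distances are binned into equal-width stripes (function name, stripe count, and binning strategy are our assumptions, since the paper does not detail them):

```python
import numpy as np

def radial_median(lab_roi, mask, center, n_stripes=8):
    """Replace each masked pixel with the median Lab value of the arched
    stripe it falls in; stripes are bins of the distance from the pupil's
    center `center` = (row, col)."""
    ys, xs = np.nonzero(mask)
    r = np.hypot(ys - center[0], xs - center[1])
    # Equal-width radial bins between the closest and farthest ROI pixel
    edges = np.linspace(r.min(), r.max(), n_stripes + 1)
    stripe = np.clip(np.digitize(r, edges) - 1, 0, n_stripes - 1)
    out = lab_roi.copy()
    for s in range(n_stripes):
        sel = stripe == s
        if sel.any():
            # Median over the stripe, computed per Lab channel
            out[ys[sel], xs[sel]] = np.median(lab_roi[ys[sel], xs[sel]], axis=0)
    return out
```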

Fig. 2.

Preprocessing results for a grade 0/I TM (top), a grade II TM (center), a grade III/IV TM (bottom). (a) Main ROI (in red) and aux ROI (in blue) before preprocessing. (b) Resulting main ROI gradient after preprocessing. Beginning and ending of the bisecting segment are represented by the red and blue squares, respectively. (c) (Above) illumination profile in the original image, with the \(L_{aux}\) marked as a dashed line. (Below) the corresponding illumination-independent profile and its linear interpolation (dashed line). The X axis is normalized so that “0” represents the beginning of the bisecting segment close to the iris, whereas “1” represents the end of the bisecting segment toward the cornea. Observe that illumination-independent profiles of grade 0/I (top) are characterized by a linear behavior and a small subtended area, while profiles of grade II (center) are characterized by a highly nonlinear behavior, and those of grade III/IV (bottom) are characterized by a linear behavior and a large subtended area. (Color figure online)

Once the color description of the ROIs has been reduced, it is possible to extract the radial color gradient (or “profile”) from the main ROI and to create an illumination-independent representation exploiting the reference point given by the aux ROI median value defined above.

3.2 Illumination-Independent Profiles

Consider the segment bisecting the processed main ROI (i.e. the one connecting the red square to the blue square in Fig. 2(b)).

Let \(\{L_i\}, \;i=1,\dots ,N\) be the sampled values of L along the segment in N points. Also let \(L_{aux}\) be the median L value in the auxiliary ROI. Then, an illumination-independent description of the information carried by L is given by:

$$\begin{aligned} \hat{L}_i = L_i - L_{aux}\;\;\;\; i = 1,\dots ,N. \end{aligned}$$
(1)

Examples of these profiles are shown in Fig. 2(c) for grades 0/I, II, and III/IV, respectively.
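Sampling the profile and applying Eq. (1) can be sketched as follows; the function name and the (row, col) endpoint convention are our assumptions:

```python
import numpy as np

def illumination_independent_profile(lab_roi, start, end, l_aux, n=16):
    """Sample L along the segment from `start` to `end` (row, col) at n
    equispaced points, then subtract the corneal reference L_aux (Eq. 1)."""
    rows = np.linspace(start[0], end[0], n).round().astype(int)
    cols = np.linspace(start[1], end[1], n).round().astype(int)
    L = lab_roi[rows, cols, 0].astype(float)   # L channel along the segment
    return L - l_aux
```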

The L profile along this segment was sampled in \(N=16\) equispaced points, and the sampled values are included in the set of features forwarded to the ML classifier.

The illumination-independent profiles also hint at additional features that can be extracted from the data. First of all, grade II has a peculiar dark band in the center of the TM (corresponding to the position of the functional TM, i.e. the part of the TM located ahead of the Schlemm’s canal), and two less pigmented TM regions (corresponding to the nonfunctional TM). This results in a nonlinear behavior of the L profile which is not typical of grades 0/I or III/IV, both having a rather uniform shade and, hence, an almost linear gradient. This suggested that a linearity index could be included in the feature set; the mean squared error between the actual profile and its linear fitting was chosen:

$$\begin{aligned} \text{ mse } = \frac{1}{N}\sum _{i=1}^N \left( \hat{L}_i - \bar{L}_i\right) ^2, \end{aligned}$$
(2)

where \(\bar{L}_i\) is the linear fitting of \(\hat{L}_i\). Finally, another index for better discriminating grade 0/I from grade III/IV (both having a linear illumination-independent profile) was found to be the area subtended by the profile itself. In fact, since grade III/IV is darker than the cornea (due to its heavy pigmentation), its profile should be rather distant from the reference value \(L_{aux}\) extracted from the auxiliary ROI. Instead, the ROI of a grade 0/I shows a color similar to that of the cornea, so the area subtended by its profile is smaller than that of a grade III/IV. Hence, the following integral indices were also introduced:

$$\begin{aligned} \text{ Area }(\hat{L}) = \sum _{i=1}^N \hat{L}_i \ , \ \text{ Area }(\hat{a}) = \sum _{i=1}^N \hat{a}_i \ , \ \text{ Area }(\hat{b}) = \sum _{i=1}^N \hat{b}_i . \end{aligned}$$
(3)
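The linearity index (2) and the integral indices (3) translate directly into code; this sketch assumes the three illumination-independent profiles are given as NumPy arrays, and the function name is ours:

```python
import numpy as np

def profile_features(L_hat, a_hat, b_hat):
    """Linearity index (Eq. 2) and integral indices (Eq. 3) of the
    illumination-independent profiles."""
    n = len(L_hat)
    x = np.arange(n)
    # Linear fitting of L_hat (degree-1 least squares)
    slope, intercept = np.polyfit(x, L_hat, 1)
    mse = np.mean((L_hat - (slope * x + intercept)) ** 2)
    # Areas subtended by the three profiles
    return mse, L_hat.sum(), a_hat.sum(), b_hat.sum()
```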

3.3 Classification Using Random Forests

The correspondence between the reduced pigmentation grading system and the extracted image indices is obtained by means of the random forest classifier [8] implemented in the R statistical framework [9].

The dataset of 317 manually-graded images was divided into a training set with 80% of the available images and a validation set with the remaining 20%. The three classes were reasonably balanced, accounting for 25.2% (grade 0/I), 36.9% (grade II), and 37.9% (grade III/IV) of the images, respectively. A total of 2000 trees were grown in the forest during the training phase, obtaining the error rates reported in Fig. 3(a).
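The paper uses the R `randomForest` package; as a rough stand-in, the same setup can be sketched with scikit-learn. The feature matrix and labels below are random placeholders, not the paper's data; only the dataset size, split, and tree count follow the text:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(317, 24))                       # placeholder feature matrix
y = rng.choice(["0/I", "II", "III/IV"], size=317)    # placeholder grade labels

# 80/20 training/validation split, as in the paper
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# 2000 trees, as in the paper; the out-of-bag score gives a training-phase
# error estimate analogous to Fig. 3(a)
forest = RandomForestClassifier(n_estimators=2000, oob_score=True, random_state=0)
forest.fit(X_tr, y_tr)
importances = forest.feature_importances_            # per-feature importance
```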

4 Results

The performance of the trained random forest was tested on a validation set of 64 images never used during training. The confusion matrices for training and validation are reported in Table 1.

Table 1. Confusion matrix during (a) ML training (b) ML validation

During validation, the maximum class error was 11.54% (grade II), whereas the average grade error was 7.81%, corresponding to a success rate of 92.19%. Interestingly, grade 0/I was always correctly predicted, and wrong predictions always occurred between grades II and III/IV.
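For completeness, the overall success rate is the trace of the confusion matrix divided by its total; the matrix values in the usage example below are hypothetical, not those of Table 1:

```python
import numpy as np

def success_rate(conf):
    """Overall success rate from a confusion matrix
    (rows: true class, columns: predicted class)."""
    return np.trace(conf) / conf.sum()

# Hypothetical 3x3 matrix for grades 0/I, II, III/IV (not the paper's data)
example = np.array([[10, 0, 0],
                    [0, 9, 1],
                    [0, 1, 9]])
```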

Fig. 3.

(a) Error rates of the random forest computed during the training phase. (b) Variable importance for the trained random forest: prism facet; angle of the bisecting segment with respect to the horizontal axis of the eye; L, a, and b median values of the auxiliary ROI; sampled illumination-independent L profile along the ROI bisecting segment, as illustrated in Sect. 3.2; the integral indices given in (3); and the mean squared error between the illumination-independent L profile and its linear fitting, as given by (2). The most significant variable for discriminating among the grades was the linearity index (mse).

A relevant outcome of the random forest training is the characterization of the importance of the input features, understood as their discriminative capability in producing the correct classification.

Results obtained in this study are reported in Fig. 3(b), where variables are sorted in descending order according to their estimated importance.

This figure suggests that the best discriminant feature is the linearity index (mse). This is not completely unexpected, since there is an evident change of behavior in the illumination-independent L profile when passing from grade 0/I to grade II, and from grade II to grade III/IV.

Other relevant features are the facet and angle indices. These two variables identify the eye sector from which the image of the iridocorneal angle was acquired. Again, this is not surprising, since pigmentation tends to be more evident in the inferior part of the eye: gravity conveys more aqueous humor, and hence pigment, toward the bottom of the anterior chamber. Interestingly, this fact emerges statistically from the training of the random forest. Finally, the terminal section of the processed L profiles (namely, L15, L14, L16, and L13) was also important. These indices characterize the nonfunctional TM near the sclera, which is the TM region that changes most significantly as a function of the pigmentation grade.

5 Conclusions

The color information of the TM was analyzed in iridocorneal-angle images. An appropriate image preprocessing makes it possible to define a feature set that can be classified by means of machine learning. A random forest classifier achieved a success rate of 92.19% with respect to the classification of an expert grader. Future work will address the development of an automatic five-class Scheie grading system, for which further acquisitions are necessary to extend the dataset.