A hybrid method for 3D mosaicing of OCT images of macula and Optic Nerve Head
Introduction
Retinal Optical Coherence Tomography (OCT) is a medical imaging modality similar to sonography, except that it uses light waves rather than sound waves to acquire cross-sectional images of the retina. Three-dimensional OCT images render information about the layers of the retina, the macula, and the Optic Nerve Head (ONH) [1]. In addition to cross-sectional images of the retinal layers, some OCT imaging systems also provide a fundus image [2], a two-dimensional image of the retina's interior surface that represents the structure of the retina and optic disc and provides information about the macula, foveal avascular zone, and posterior pole. 3D-OCT images contain multiple cross-sectional images called B-scans (Fig. 1), each comprising a series of scan-lines (A-scans) along the path of the light propagating into the tissue of interest. 3D-OCT images can provide information about intra-retinal layers and 3D pathologies within the retina, which helps ophthalmologists improve the diagnosis of eye diseases. Since manual analysis of 3D OCT data is an expensive, subjective, and time-consuming process, many semi- or fully automatic OCT image analysis algorithms have been introduced [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13]. Mosaicing of OCT images of the macula and ONH is an automatic OCT analysis procedure that improves ophthalmologists' judgment by enabling simultaneous analysis of information across an eye-wide field of view. OCT systems are often unable to provide aligned scans of the macula and ONH at the same time; therefore, B-scan mosaicing is required to display them together in one frame and create a wide-field OCT image.
Image mosaicing generates a high-resolution panorama or “stitched image” [14], [15], [16], [17] by merging two or more overlapping images of the same scene. This yields an image with a larger Field of View (FOV), which is important in many image processing applications, since the FOV of cameras is generally much narrower than the extent of the human retina [18]. The problem arises in many areas of computer vision and image processing, ranging from satellite imagery and industrial applications (such as investigation, mapping, teleoperation, maintenance, and inspection) to medical imaging [19]. In medical applications, large panoramic images help clinicians improve their visual assessment of the region of interest and its surroundings [18], [20], [21].
The mosaicing process is usually carried out in three stages: 1) registration [22], 2) reprojection [20], [23], and 3) blending [24]. In the registration step, one input image is taken as the reference and another as the target, and the two are matched to identify their overlapping areas; this step yields the parameters for aligning the target to the reference, and the overlapping areas to be mosaiced are extracted. Several different methods have been developed for registration [25], including pixel-based algorithms [26], [27], frequency-domain algorithms [28], low-level feature-based algorithms (such as edge and corner detection and matching approaches) [29], and high-level feature-based methods (such as matching based on identified parts of the object or on relations between features, i.e., graph-theoretic methods) [30]. In the reprojection step, the obtained parameters are applied to the target image. Finally, the overlapping areas of the two images are combined in the blending step [31], [32].
Table 1 provides an overview of mosaicing methods for retinal and OCT images. In Ref. [33], Yang and Huang present an automatic retinal image mosaicing method based on the similarity of blood vessels. In this approach, characteristics of blood vessels are extracted via mathematical morphology and maximum-entropy algorithms. In the registration step, optimal parameters are obtained using a global optimization scheme such as a genetic algorithm [34], [35]. Afterwards, a fade-by-fade method is used for blending.
In Ref. [36], a method for mosaicing several color and fluorescein retinal images is presented, wherein the registration step consists of extracting features by local classification based on image gradients and segmentation models. A suitable cost function is considered for model fitting by a gradient-based method, features are matched between pairs of images using the maximum mutual information criterion presented in Ref. [37], and an affine transform is used for image registration [38]. The reference frame is selected by minimizing the registration error via the Floyd-Warshall all-pairs shortest path algorithm [39]. The mosaic of color and fluorescein retinal images is then obtained via alpha-blending.
In Ref. [40], a mosaicing method is presented for fundus images based on Principal Component Analysis applied to the Scale Invariant Feature Transform (PCA-SIFT) [41]. The quadratic transformation is estimated by an M-estimator, and the affine transformation model is finalized using the RANdom SAmple Consensus (RANSAC) algorithm [42], [43], a randomized estimator that increases the accuracy of estimation by rejecting outliers. Finally, the weighted-mean method is used for blending.
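To make the role of RANSAC concrete, the following is a minimal NumPy sketch (illustrative only, not the implementation of Ref. [40]) of robust affine estimation from point correspondences: random minimal samples of three correspondences are fitted, the model with the largest consensus set is kept, and the final transform is refit on its inliers.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform: solves [x y 1] @ A = [x' y']
    for the 3x2 parameter matrix A."""
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A

def ransac_affine(src, dst, n_iter=500, tol=2.0, seed=0):
    """Fit an affine model robustly: sample minimal 3-point subsets,
    keep the model with the most inliers, refit on all its inliers."""
    rng = np.random.default_rng(seed)
    ones = np.ones((len(src), 1))
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        A = fit_affine(src[idx], dst[idx])
        residual = np.linalg.norm(np.hstack([src, ones]) @ A - dst, axis=1)
        inliers = residual < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

Points whose residual under the best model exceeds `tol` are the rejected outliers; the inlier threshold and iteration count are tuning parameters.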
In Ref. [44], a hierarchical mosaicing algorithm is introduced for neonatal retinal images. In this work, registration is performed by selecting features using the Hessian matrix, and bilateral matching is utilized to match features. Maximum Likelihood Estimation SAmple Consensus (MLESAC) [45] is used to estimate the affine transformation and reject outliers. A 6-level Laplacian pyramid is then constructed for each registered image, and a mask is derived for blending the overlapping regions. For the blended region, the Laplacian pyramid is constructed by applying the mask at each level [46], and the non-overlapping regions are retained in the final mosaic.
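Multi-band blending of this kind can be sketched compactly. The code below is an illustrative simplification (assuming grayscale float images with power-of-two dimensions, not the exact construction of Ref. [44]): it builds Laplacian pyramids of both images, blends each band with a progressively smoothed copy of the mask, and collapses the result.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def down(img):
    """Blur, then decimate by two."""
    return gaussian_filter(img, 1.0)[::2, ::2]

def up(img, shape):
    """Nearest-neighbor upsample to `shape`, then blur."""
    big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return gaussian_filter(big[:shape[0], :shape[1]], 1.0)

def laplacian_pyramid(img, levels):
    """Band-pass levels plus a low-pass residual at the coarsest scale."""
    pyr = []
    for _ in range(levels - 1):
        small = down(img)
        pyr.append(img - up(small, img.shape))
        img = small
    pyr.append(img)
    return pyr

def pyramid_blend(a, b, mask, levels=6):
    """Blend a and b band by band, weighting by the mask smoothed to
    each level's scale, then collapse the combined pyramid."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    m, out_pyr = mask.astype(float), []
    for band_a, band_b in zip(la, lb):
        out_pyr.append(m * band_a + (1.0 - m) * band_b)
        m = down(m)  # soften the seam at coarser scales
    out = out_pyr[-1]
    for band in reversed(out_pyr[:-1]):
        out = up(out, band.shape) + band
    return out
```

Blending per band avoids the visible seams that a single hard mask produces, because low-frequency content is mixed over a wide transition zone while fine detail is mixed over a narrow one.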
In another study, researchers processed video slit-lamp bio-microscopic fundus image sequences to produce wide-field, high-quality fundus image mosaics. Minimizing the Sum of Squared Differences (SSD) is used to compose the image from video, searching over translations only. Blending is performed by decomposing each fundus slit image into a Laplacian pyramid. Mosaicing of slit-lamp data is useful because virtually all optometric practices have one or more slit-lamp bio-microscopes, even when a fundus camera is not available [47].
In Ref. [48], wide-field mosaics of speckle-variance OCT images are produced for visualizing the retinal vasculature. Data are gathered at different positions of the retina around the foveal avascular zone, and patient fixation positions are selected to ensure approximately 50% lateral overlap between data sets. In this approach, a B-spline-based free-form deformation method is used for registration, and the overlapping parts of the wide-field mosaics of the three retinal layers are obtained by averaging.
In Ref. [49], phase correlation is employed for global 3D alignment of a pair of overlapping OCT volume scans. Phase correlation is a Fourier transform-based method that can be used to calculate translational offset between two images [50]. Each overlapping pair of B-scans is considered separately and the angle of rotation between them is found using 2D phase correlation. Finally, the B-scans are stitched together by feathering or center-weighted image blending.
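For the translational part, phase correlation can be written in a few lines of NumPy. This sketch recovers the integer (dy, dx) shift between two equally sized images from the peak of the inverse-transformed, magnitude-normalized cross-power spectrum (circular shifts assumed; Ref. [49] additionally handles rotation and blending):

```python
import numpy as np

def phase_correlation(ref, tgt):
    """Estimate the (dy, dx) translation such that tgt is approximately
    ref shifted by (dy, dx), via the normalized cross-power spectrum."""
    R = np.fft.fft2(tgt) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(R).real     # impulse at the translation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map offsets from [0, N) to the signed range [-N/2, N/2)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)
```

Because only the spectral phase is retained, the correlation surface is a sharp impulse, which makes the peak easy to locate even under uniform illumination changes.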
Considering the advantages and disadvantages of the reviewed methods (Table 1), some of these techniques are not applicable to our datasets (e.g., [47] is designed for video slit-lamp bio-microscopic fundus image sequences, and [48] for speckle-variance OCT images). Others are not accurate enough for our 3D OCT mosaicing problem (e.g., [33], [40], [49], which use features that are not scale/rotation invariant). Our method tries to benefit from the advantages of the existing methods. For example, for registration of the ONH OCT projection to the color fundus image, where the main features are found in the vessels, a SURF + RANSAC method is used (similar to [36], [40], [44]). For registration of the macular OCT projection to the color fundus image, however, a multi-step correlation-based method is used in order to reduce computational complexity and provide a more accurate registration.
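The multi-step correlation idea can be illustrated with a generic two-step normalized cross-correlation (NCC) search (a hypothetical simplification, not the paper's exact algorithm): a coarse-stride scan localizes the template approximately, and a stride-1 scan around the coarse optimum refines it, reducing the number of NCC evaluations compared with an exhaustive search.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_template(ref, tpl, step=4):
    """Two-step search for tpl inside ref: a coarse scan with stride
    `step`, then a stride-1 scan around the best coarse position."""
    H, W = ref.shape
    h, w = tpl.shape

    def scan(ys, xs):
        best, best_score = (0, 0), -2.0
        for y in ys:
            for x in xs:
                score = ncc(ref[y:y + h, x:x + w], tpl)
                if score > best_score:
                    best_score, best = score, (y, x)
        return best

    y0, x0 = scan(range(0, H - h + 1, step), range(0, W - w + 1, step))
    ys = range(max(0, y0 - step), min(H - h, y0 + step) + 1)
    xs = range(max(0, x0 - step), min(W - w, x0 + step) + 1)
    return scan(ys, xs)
```

The coarse stride trades a small risk of missing a narrow correlation peak for a roughly step²-fold reduction in evaluations, which is why correlation surfaces are usually smoothed before such searches.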
In this paper, we propose a novel method for mosaicing B-scans of macular 3D-OCT and 3D-OCT of the ONH using both fundus and OCT images. The proposed method consists of two main steps: 1) OCT projection registration (x-y plane mosaicing) and 2) B-scan mosaicing (x-z plane mosaicing). Together, these steps provide a new framework for 3D OCT mosaicing based on two 2D mosaicing procedures in the x-y and x-z planes. In the first step, we register the projection images of the macular 3D OCT and the 3D OCT of the ONH. The main goal of this step is to find the corresponding B-scans and their overlapping areas. For this purpose, blood vessels are identified in the color fundus image as well as in the projection images and used for registration. The vessel image of the fundus serves as the reference image, and the vessel images of the macula and ONH projections are the target images. A multi-step correlation algorithm is employed to register the macular OCT projection to the color fundus image, while the ONH OCT projection and color fundus image are registered using SURF and RANSAC as follows. First, blob features of the ONH projection and the fundus image are extracted as interest points using the Speeded-Up Robust Features (SURF) algorithm [51]. These features are matched by the symmetric nearest-neighbor method with SSD as the metric. SURF is invariant to scale and rotation and, unlike other feature extraction methods (such as SIFT and corner detectors), it uses integral images, which reduces computational complexity and speeds up registration. Outliers are then discarded using RANSAC. Next, in the primary reprojection step, the obtained registration parameters are applied to all x-y slices of the macular and ONH OCT images, resulting in new 3D-OCTs of the macula and ONH. For B-scan mosaicing, an appropriate model is obtained by matching corresponding A-scans of the overlapping areas and is in turn applied to all ONH-based B-scans.
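The symmetric nearest-neighbor matching used for the SURF features admits a compact sketch. Under the SSD metric, a descriptor pair is accepted only when each is the other's nearest neighbor, a mutual-consistency check that removes many ambiguous matches before RANSAC. The toy descriptors in the usage below are hypothetical.

```python
import numpy as np

def symmetric_nn_matches(desc_a, desc_b):
    """Mutual nearest-neighbor matching under the SSD metric: keep
    pair (i, j) only if j is i's nearest descriptor in B and i is
    j's nearest descriptor in A."""
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=-1)
    nn_ab = d2.argmin(axis=1)   # nearest B index for each A descriptor
    nn_ba = d2.argmin(axis=0)   # nearest A index for each B descriptor
    return [(i, int(j)) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

For real SURF descriptors (64- or 128-dimensional) the same logic applies unchanged; only the descriptor arrays grow.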
The rest of this paper is structured as follows. In Section 2, the proposed method for mosaicing of 3D OCT images is explained in detail. In Section 3, the experimental results are presented and finally, the paper is concluded in Section 4.
Methods
Registration of the ONH-based and macular OCT images faces two major challenges: 1) the need for an accurate model of the 3D nature of OCT data, and 2) the lack of sufficient overlap between the 3D OCTs. To address these issues, the mosaicing of 3D macular OCT and OCT images of the ONH is performed in two main steps, as illustrated in Fig. 2: first, OCT projection registration, which performs registration in the x-y plane as shown in Fig. 1, and second, B-scan mosaicing for registration in
Results
The proposed algorithm was applied to OCT images of 45 healthy eyes acquired with the 3D-OCT-1000 Topcon system. The three-dimensional OCT images obtained by this device are 650 × 512 × 128 voxels, each with a spatial resolution of 3.54 × 46.88 × 11.72 μm, and the 2D color fundus image of the retina is 1536 × 1612 pixels. We first present the results for registration of the OCT projections using the proposed method, and then illustrate the algorithm's performance for mosaicing of B-scans.
Conclusions
This paper presents a fully automatic method for generating mosaics from 3D OCT images of the macula and ONH. The mosaicing is performed in two main steps: OCT projection registration and B-scan mosaicing; the former aligns the data in the x-y plane, and the latter in the x-z plane without disturbing the x-y alignment. In the OCT projection registration step, vessels of the 3D OCT images are registered to vessels of their corresponding color fundus image. The obtained registration parameters for
Disclosures
No conflicts of interest, financial or otherwise, are declared by the authors.
Acknowledgements
The authors would like to thank Dr. MR Akhlaghi and Dr. AR Dehghani from the Feiz Eye Hospital for their contribution in clinical evaluation of this mosaicing algorithm.
References (60)
- et al., Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map, Med. Image Anal. (2013)
- et al., Genetic algorithm for affine point pattern matching, Pattern Recognit. Lett. (2003)
- et al., MLESAC: a new robust estimator with application to estimating image geometry, Comput. Vis. Image Underst. (2000)
- et al., Speeded-up robust features (SURF), Comput. Vis. Image Underst. (2008)
- et al., Recognition of boards using wood fingerprints based on a fusion of feature detection methods, Comput. Electron. Agric. (2015)
- et al., Optic nerve head (ONH) topographic analysis by stratus OCT in normal subjects: correlation to disc size, age, and ethnicity, J. Glaucoma (Jun-Jul 2010)
- et al., Spectral-domain optical coherence tomography: a comparison of modern high-resolution retinal imaging systems, Am. J. Ophthalmol. (2010)
- et al., Thickness mapping of eleven retinal layers segmented using the diffusion maps method in normal eyes, J. Ophthalmol. (2015)
- et al., Three dimensional data-driven multi scale atomic representation of optical coherence tomography, IEEE Trans. Med. Imaging (2015)
- et al., Segmentation of the optic disc in 3-D OCT scans of the optic nerve head, IEEE Trans. Med. Imaging (Jan 2010)
- Optical coherence tomography noise reduction using anisotropic local bivariate gaussian mixture prior in 3D complex wavelet domain, Int. J. Biomed. Imaging
- Multimodal retinal vessel segmentation from spectral-domain optical coherence tomography and fundus photography, IEEE Trans. Med. Imaging
- Comparison of macular OCTs in right and left eyes of normal people
- An accurate multimodal 3D vessel segmentation method based on brightness variations on OCT layers and curvelet domain fundus image analysis, IEEE Trans. Biomed. Eng.
- Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema, Biomed. Opt. Express
- Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images, Biomed. Opt. Express
- Vessel-based registration of fundus and optical coherence tomography projection images of retina using a quadratic registration model, IET Image Process.
- A system for real-time panorama generation and display in tele-immersive applications, IEEE Trans. Multimedia
- A multicamera setup for generating stereo panoramic video, IEEE Trans. Multimedia
- Smart rebinning for the compression of concentric mosaic, IEEE Trans. Multimedia
- Generating panorama photos
- Industrial applications of image mosaicing and stabilization
- Review on mosaicing techniques in image processing
- Mosaicing of endoscopic placenta images
- A review of recent advances in registration techniques applied to minimally invasive therapy, IEEE Trans. Multimedia
- Image alignment by piecewise planar region matching, IEEE Trans. Multimedia
- Image mosaicing and super-resolution
- A review of image mosaicing techniques
- High-accuracy subpixel image registration based on phase-only correlation, IEICE Trans. Fundam. Electron. Commun. Comput. Sci.