1 Introduction

Nowadays, stenting after angioplasty has become one of the principal treatment options for coronary artery disease (CAD) [11]. Stents are tiny tube-like devices designed to support the vessel wall; they are implanted in the coronary arteries via the percutaneous coronary intervention (PCI) procedure [12]. Among the various types of stents, the Bioresorbable Vascular Scaffold (BVS) offers temporary radial support and is fully absorbed at a later stage [5]. However, malapposition, defined as the separation of a stent strut from the vessel wall [6], may occur when a BVS is implanted improperly and can potentially result in stent thrombosis. Therefore, it is crucial to analyze and evaluate BVS strut malapposition accurately.

Fig. 1. An IVOCT image with immediately implanted BVS struts. One strut contour (white) can be segmented based on the four corners (green) labeled by an expert.

Intravascular optical coherence tomography (IVOCT), with a radial resolution of about 10 \(\upmu \)m, is the most suitable imaging technique for accurate BVS analysis [11]. An IVOCT image with immediately implanted BVS struts in the Cartesian coordinate system is shown in Fig. 1, in which apposed and malapposed BVS struts can be clearly distinguished. However, this recognition is mainly performed manually by experts, which is labor intensive and time consuming given the large number of struts in IVOCT pullbacks. Thus, automatic analysis is highly desirable.

Few articles on automatic BVS strut analysis have been published previously. Wang et al. proposed a method [11] for detecting the box-shaped contour of a strut based on gray-level and gradient features; its generalization is limited because it relies on many empirically set thresholds. Lu et al. proposed a framework [13] that separates the task into two steps: machine-learning-based detection of the strut region of interest (ROI), followed by strut contour segmentation based on dynamic programming (DP). However, the DP algorithm must search over all image points to obtain the energy-minimizing active contour. Along this energy-minimizing path, the segmentation is sometimes misled by fractures inside the strut and blood artifacts around the strut contour, which can make the segmented contour inaccurate.

Because stents are manufactured devices with regular geometry, BVS struts present a distinct box shape in IVOCT images. As shown in the left part of Fig. 1, one strut can be represented by the four corners labeled in green by experts, and the strut contour, shown as a white rectangle, can be segmented from these four corners. Taking this prior knowledge into account, this paper proposes a novel corner-based method that transforms the problem of segmenting the strut contour into detecting the four corners of the strut. The advantage of this formulation is that the segmentation result is shielded from interfering structures, such as fractures inside the strut and blood artifacts around the strut contour, during the segmentation process. Specifically, a cascaded AdaBoost classifier based on Haar-like features is trained to detect the corners of struts. Experimental results show that the corner detection is accurate and the contour segmentation is effective.

2 Method

An overview of the proposed method is presented in Fig. 2. The segmentation method can be summarized in two main steps: (1) training a classifier for corner detection; (2) segmentation based on the detected corners. Each step is described in detail in the following subsections.

2.1 Training a Classifier for Corner Detection

A wide variety of detectors and descriptors have been proposed in the literature [1, 4, 7,8,9]. The simple Haar-like feature prototypes described in [10] consist of edge, line and diagonal-line features and have been applied successfully to face detection; Haar-like features also prove to be sensitive to corner structure. In our method, 11 feature prototypes of five kinds, shown in Fig. 2(b), are designed according to the strut structure: two edge features, two line features, one diagonal-line feature, four corner features and two T-junction features. These prototypes are combined to extract Haar-like features capable of detecting corners effectively. However, the total number of Haar-like features associated with each sliding window is very large, so the feature dimensionality must be reduced. Owing to the efficient learning algorithm and strong generalization bounds of AdaBoost [10], the classifier is trained with AdaBoost on the Haar-like features, which implicitly selects a small discriminative subset of them.
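
To make the feature-extraction step concrete, the sketch below evaluates a two-rectangle (edge) Haar-like prototype with an integral image, the standard mechanism behind such features [10]. The function names and window size are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of evaluating an edge-type Haar-like feature with an
# integral image. Names and the 11x11 window size are illustrative.
import numpy as np

def integral_image(img):
    """ii[y, x] is the sum of all pixels above and to the left of (y, x), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y, x, h, w):
    """Sum of pixels in the h-by-w rectangle with top-left corner (y, x)."""
    total = ii[y + h - 1, x + w - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0 and x > 0:
        total += ii[y - 1, x - 1]
    return total

def edge_feature(ii, y, x, h, w):
    """Vertical edge prototype: left half minus right half (w must be even)."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

# Usage: one feature value inside an 11x11 window of a polar IVOCT image.
window = np.random.rand(11, 11)   # stand-in for real image data
ii = integral_image(window)
print(edge_feature(ii, 0, 0, 11, 10))
```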

During training, each image is transformed into polar coordinates based on the lumen center, as shown in Fig. 2(a). Samples are selected from the corners labeled by an expert, displayed as green points in Fig. 2(a). Since a bias of about 1–3 pixels in the labeled points is acceptable, each labeled corner is extended to its eight-neighborhood, shown as the eight purple points around the green center point. Positive samples are windows centered on any of these nine points of a corner; negative samples are windows whose centers do not coincide with any of the nine labeled points.
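
The sampling rule can be sketched as follows, under the stated 1–3 pixel tolerance. The function name is hypothetical, and the random negative-sampling strategy is an assumption; the paper does not specify how negative centers are drawn.

```python
# Illustrative sketch: positive windows are centered on a labeled corner
# or any of its eight neighbors; all other window centers yield negatives.
import numpy as np

def sample_windows(polar_img, corner, win=11, n_neg=20, seed=0):
    rng = np.random.default_rng(seed)
    half = win // 2
    cy, cx = corner  # labeled corner, assumed to lie away from the border
    # The nine allowable centers: the labeled point plus its 8-neighborhood.
    centers = {(cy + dy, cx + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)}
    def crop(y, x):
        return polar_img[y - half:y + half + 1, x - half:x + half + 1]
    positives = [crop(y, x) for (y, x) in centers]
    negatives = []
    h, w = polar_img.shape
    while len(negatives) < n_neg:
        y = int(rng.integers(half, h - half))
        x = int(rng.integers(half, w - half))
        if (y, x) not in centers:   # center must miss all nine points
            negatives.append(crop(y, x))
    return positives, negatives
```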

As shown in Fig. 2(c), s stages of strong classifiers are trained and cascaded to speed up corner detection. The early stages, which have simple structures, remove most sub-windows with obviously negative features; later stages, with more complex structures, remove the false positives that are harder to reject. The cascaded AdaBoost classifier is therefore constructed from strong classifiers of gradually increasing complexity at subsequent stages, as sketched below.
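
A minimal sketch of how such a cascade evaluates one sliding window, assuming each stage exposes a strong-classifier score and a rejection threshold (this interface is an assumption); the accumulated score \(\sum S_{i}\) is the quantity used later to rank candidate corners.

```python
# Cascaded evaluation: a window is rejected by the first stage that
# scores it below threshold; only windows passing all s stages survive.
def cascade_score(window, stages):
    """Return (passed, total_score) for one sliding window.

    `stages` is an ordered list of (strong_classifier, threshold) pairs,
    simple and permissive first, complex and strict last.
    """
    total = 0.0
    for clf, thresh in stages:
        s_i = clf(window)        # weighted vote of the stage's weak learners
        if s_i < thresh:
            return False, total  # early rejection saves most of the work
        total += s_i             # accumulated score used to rank candidates
    return True, total
```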

Fig. 2. The framework of our novel method of segmenting struts using four corners.

2.2 Segmentation Based on Detected Corners

As the right part of Fig. 2 shows, segmentation results are obtained based on the four corners of each strut. This step consists of three main parts, described as follows.

(1) Image transformation and detection ROI extraction. Cartesian IVOCT images are transformed into polar images using a previously developed method [2, 3]. Since the lumen in polar coordinates is nearly horizontal, BVS struts in polar images appear more rectangular and approximately parallel to the lumen. The detection ROIs in polar images are determined using the method in [13]; they are shown as yellow rectangles in Fig. 2(d).

(2) Sliding over the ROI to obtain candidate corners. As Fig. 2(e)① shows, a sliding window of size N moves over the ROI with step K to detect corners. Each window is classified by the cascaded AdaBoost classifier using the selected Haar-like features. Assuming the cascade has s stages in total, the sum \(\sum \limits _{i=1}^{s}S_{i}\), where \(S_{i}\) denotes the score of the \(i\)-th stage (\(i=1,2,\ldots ,s\)), measures how likely the center of an input window is to be a corner. A window center is accepted as a candidate corner only when the window passes all s stages of the cascade. The detected candidate corners are shown as orange points in Fig. 2(e)②.

(3) Post-processing and segmentation. As shown in Fig. 2(d), the upper and lower sides of a strut are approximately parallel to the lumen in polar coordinates. Given the box-like shape of the strut, the four corners representing it should lie in four independent areas. As shown in Fig. 2(e)③, the yellow ROI rectangle is divided into four parts about its center. Each candidate point carries the score obtained from the cascaded AdaBoost classifier described above, and the top-scoring point in each of the four parts is chosen as one of the four corners of the strut (see the sketch after this list). The selected corners are displayed in red in Fig. 2(e)③.

After all four corners of the struts in the polar image are obtained, as shown in Fig. 2(f), the segmentation result for each strut is produced by connecting its four corners in sequence, represented by a magenta rectangle. Finally, the segmented contours are transformed back to the Cartesian coordinate system as the final segmentation results.
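
The quadrant-based corner selection of step (3) can be sketched as follows; the candidate format and function names are assumptions for illustration, not the paper's implementation.

```python
# Sketch of the post-processing: candidate corners inside the ROI are
# split into four quadrants about the ROI center, and the highest-scoring
# candidate in each quadrant is kept as one corner of the strut.
def pick_four_corners(candidates, roi):
    """candidates: iterable of (y, x, score); roi: (y0, x0, height, width)."""
    y0, x0, h, w = roi
    cy, cx = y0 + h / 2.0, x0 + w / 2.0
    best = {}                                  # quadrant -> best candidate
    for (y, x, score) in candidates:
        quad = (y >= cy, x >= cx)              # one of four independent areas
        if quad not in best or score > best[quad][2]:
            best[quad] = (y, x, score)
    if len(best) < 4:
        return None                            # a quadrant had no candidate
    # Order as top-left, top-right, bottom-right, bottom-left so that
    # connecting the points in sequence traces the strut contour.
    order = [(False, False), (False, True), (True, True), (True, False)]
    return [best[q][:2] for q in order]
```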

3 Experiments

3.1 Materials and Parameter Settings

In our experiment, 15 baseline pullbacks consisting of 3903 IVOCT images from 15 different patients were acquired with an FD-OCT system (C7-XR system, St. Jude, St. Paul, Minnesota). In total, 1617 effective images contained struts. All BVS struts in the effective images were manually labeled by an expert with four corners each; the total number of struts exceeded 4000. Ten pullbacks comprising 1137 effective images were used as the training set, and the remaining 5 pullbacks with 480 effective images were used as the test set.

During the experiment, the parameters mentioned above were set as follows: the sliding window size was \(N=11\), about two-thirds of a BVS strut's width (16 pixels), which ensures that at most one corner falls inside the window. The sliding step was set to \(K=3\) pixels, fine enough to localize detected corners accurately. The number of cascade stages was set empirically to \(s=23\), providing enough simple classifiers at the early stages and enough complex classifiers at the late stages.

3.2 Evaluation Criteria

To quantitatively evaluate the performance of corner detection, the corner position error (CPE), defined as the distance between a detected corner and the corresponding ground truth, is calculated as:

$$\begin{aligned} CPE = \sqrt{ (x_{D} - x_{G})^{2}+ (y_{D} - y_{G})^{2} } \end{aligned}$$
(1)

where \((x_{D}, y_{D})\) and \((x_{G}, y_{G})\) are the locations of the detected corner and the corresponding ground truth, respectively.
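
Eq. (1) translates directly into code; a minimal sketch in pixel units (multiply by the pixel spacing to report the error in micrometers):

```python
# Direct transcription of Eq. (1): Euclidean distance between a detected
# corner and its ground-truth location.
import math

def corner_position_error(detected, ground_truth):
    (x_d, y_d), (x_g, y_g) = detected, ground_truth
    return math.hypot(x_d - x_g, y_d - y_g)

print(corner_position_error((12, 30), (10, 28)))  # 2.828... pixels
```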

To assess the segmentation method quantitatively, Dice's coefficient was applied to measure the degree of area overlap between the ground truth and the segmentation result. Dice's coefficient is defined as follows in our experiment:

$$\begin{aligned} Dice = 2 \times \frac{\mid {S_{D}\bigcap S_{G}}\mid }{\mid {S_{D}}\mid +\mid {S_{G}}\mid } \end{aligned}$$
(2)

where \(S_{G}\) and \(S_{D}\) represent the areas of the ground truth and the segmentation result, respectively.
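
On binary masks, Eq. (2) amounts to twice the overlap divided by the total area of the two regions; a minimal sketch:

```python
# Direct transcription of Eq. (2) on binary segmentation masks.
import numpy as np

def dice_coefficient(seg_mask, gt_mask):
    seg, gt = seg_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0  # both empty: perfect
```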

3.3 Results

To make a fair comparison between the proposed method and that of Lu et al. [13], both methods were run on the same data set. Specifically, the two methods were tested on the same segmentation region, namely the detection rectangle described above. Qualitative and quantitative evaluations were performed on the segmentation results.

Fig. 3. The first row displays segmentation results (cyan contours) using the method of Lu et al. The corresponding segmentation results of our proposed method (magenta contours) are shown in the second row. Ground-truth contours labeled by an expert using corners are shown in white. ① and ② in \(a_{2}\)–\(d_{2}\) indicate fractures and blood artifacts, respectively.

Table 1. The quantitative evaluation results.

Qualitative Results. The qualitative results in Fig. 3 show final segmentation results using the method of Lu et al. [13] in the first row and our proposed method in the second row. The green corners and white contours are the ground truth labeled by an expert. Red corners are the corner detection results, and the magenta contours are the segmentation results of our proposed method. Blood artifacts and fractures are indicated in the magnified white rectangles, and the cyan contours are segmentation results of the DP algorithm [13]. As the white rectangles in Fig. 3\(a_{2}\)–\(d_{2}\) show, struts with fractures or blood artifacts, which the contour-based DP method fails to segment accurately (Fig. 3\(a_{1}\)–\(d_{1}\)), are segmented well by our proposed method. These qualitative results illustrate that our method performs better and is more robust than the state of the art.

Quantitative Results. The quantitative results are presented in Table 1. The 2nd to 4th columns give the numbers of images, struts and corners evaluated in the 5 data sets, respectively. The average corner position error (CPE) over the 5 data sets, listed in the 5th column, is \(31.96\pm 19.19\) \(\upmu \)m, i.e. about \(3.20\pm 1.92\) pixels, for immediately implanted struts. Considering the allowable bias of the labeled corners, this error is reasonable, showing that the detection method is accurate and effective. The average Dice coefficients of Lu et al. and of our proposed method, in the 6th and 7th columns, are 0.84 and 0.85 over the 5 data sets, respectively. An improvement of about 0.01 in average Dice is meaningful in clinical practice and is difficult to achieve for small-object segmentation. This suggests that our proposed method is more accurate and robust than the DP method used by Lu et al. [13].

4 Conclusion

In this paper, we proposed a novel method for automatic BVS strut segmentation in 2D IVOCT image sequences. To the best of our knowledge, this is the first work to use the prior knowledge of the struts' box shape to segment the strut contour from four corners. Our method transforms the strut segmentation problem into corner detection, which shields the segmentation results from the influence of blood artifacts around the strut contour and fractures inside the strut. Specifically, corners are detected by a cascaded AdaBoost classifier based on enriched Haar-like features, and segmentation results are obtained from the detected corners. Qualitative and quantitative evaluations suggest that the corner detection is accurate and that our segmentation method is more effective and robust than the DP method. Automatic analysis of strut malapposition is feasible based on these segmentation results. Future work will mainly focus on further improving the segmentation results through post-processing.