Abstract
Real-time motion assessment of the cervical spine provides an understanding of its mechanics and reveals abnormalities in its motion patterns. In this paper we propose a vertebral segmentation approach to automatically identify the vertebral landmarks for cervical joint motion analysis using videofluoroscopy. Our method matches a template to the vertebral bodies, identified using two parallel segmentation approaches, and validates the results through comparison to manually annotated landmarks. The algorithm identified the vertebral corners with an average detection error under five pixels in the C3–C6 vertebrae, with the lowest average error of 1.65 pixels in C4. C7 yielded the largest average error of 6.15 pixels. No significant difference was observed between the intervertebral angles computed using the manually annotated and automatically detected landmarks (\(p>0.05\)). The proposed method does not require large amounts of data for training, eliminates the necessity for manual annotations, and allows for real-time intervertebral motion analysis of the cervical spine.
1 Introduction
Individual motion contributions of the cervical vertebrae provide valuable information about natural neck movement and reveal abnormalities associated with spinal injuries or medical conditions [1]. Digital videofluoroscopy is an imaging modality which allows real-time in vivo analysis of unrestricted cervical motion, which is otherwise not possible with static radiographic images. Cervical range of motion has been investigated in whiplash-associated disorders [1], in neck pain [1, 2], as well as in healthy subjects [1, 3, 4], and it has been shown to be significantly decreased in whiplash and neck pain [1]. New evidence indicates that the cervical joints' contributions to the range of motion, previously thought to be regular and continuous [3], can in fact be opposite to the direction of movement [4,5,6], and that the vertebral motion patterns to and from the end ranges of movement are not mirror images of each other [5, 7]. Rotational and translational cervical motion analysis is therefore of considerable importance.
Motion analysis of the cervical spine requires an annotation of landmarks on vertebral corners [3]. The majority of studies analyzing cervical joint motion employed manual and semi-automated approaches for landmark annotations [3, 4, 6, 8, 9, 11]. Manual methods have been shown to be highly reliable [4, 6, 12], but also time-consuming, and thus impractical for large data analysis. Automatic vertebral tracking studies have used template matching [11, 13], Active Appearance Models [14,15,16], or feature tracking algorithms [17]. However, these methods still require manual identification of vertebral landmarks in the first frames of the videos. Fully automatic landmark identification has been successful in the lumbar spine [8, 10, 18, 19], due to the larger size and better visibility of the vertebral bodies, or when using imaging modalities providing higher contrast and spatial resolution, such as Computed Tomography [15, 20, 21] or X-ray [14, 16, 22]. Nonetheless, these approaches have not been successful when applied to the cervical vertebrae in fluoroscopic images, due to their smaller size, small field-of-view, lower image quality, and considerable presence of motion blur.
In this paper we propose a procedure for automatic identification and segmentation of cervical vertebrae in videofluoroscopic sequences. It allows an accurate computation of vertebral landmarks necessary for a real-time cervical motion analysis, and eliminates the prerequisite for manual annotation (Fig. 1a) of the C3–C7 vertebrae.
2 Method
2.1 Experimental Procedure
Four young adult subjects were included in this study: two women (age: 23.5 ± 0.71 years; height: 167.5 ± 17.7 cm; weight: 73.8 ± 26.6 kg) and two men (age: 25.0 ± 1.4 years; height: 184.5 ± 6.4 cm; weight: 77.5 ± 6.4 kg). Exclusion criteria were: neck disorders, any neck symptoms up to three months prior to the study, and possible pregnancy. Fluoroscopic video sequences (Fig. 1a) were acquired at 25 frames per second, with a resolution of 576 \(\times \) 768 pixels, using the Philips BV Libra mobile diagnostic fluoroscopic image acquisition and viewing system. For each subject, two average-quality fluoroscopic sequences were recorded: one at the onset of flexion, and one at the onset of extension. The average source-to-participant distance (C7 spinous process) was 76 cm, and the average exposure of 45-kV, 208-mA, 6.0-ms X-ray pulses yielded 0.12 mSv per individual motion from upright to end-range (PCXMC software, STUK, Helsinki, Finland). Subjects were asked to sit in a normal upright position and perform movement in the sagittal plane, from the neutral position to the end-range of movement. They all wore plastic glasses with two small metal bearings on each side, attached to the glasses by metal wires; these served as external markers of the occiput, visible under fluoroscopy (Fig. 1a).
(a) Manually annotated corners on the cervical spine, from C3 to C7, with visible external markers. (b) Marking order for the corners, as well as the posterior and anterior midpoints (red) which form the mid-planes used for joint angle calculations, illustrated on the C5, C6, and C7 vertebrae. (Color figure online)
2.2 Automatic Identification of Vertebral Landmarks
The automatic vertebral landmark identification algorithm consisted of the following steps (Fig. 2): template matching; two parallel segmentation methods, using contrast-limited adaptive histogram equalization and gradient magnitude approaches; registration of the segmented vertebrae to the template; and identification of the vertebral corners as landmarks.
Template Matching: A binary template was created to represent an average shape of the cervical vertebrae (Fig. 5a). Videofluoroscopic sequences of subjects in neutral position were preprocessed with a local range filter. Canny edge detection (sensitivity thresholds [0.02, 0.05]) and a morphological closing (spherical structuring element, radius of 2 pixels) were then performed. Next, the binary template was matched to the preprocessed image at every location, and the candidate locations where the template matched the vertebrae were identified by means of the following criteria: Dice similarity coefficient (DSC) \(>0.34\) (Eq. 1); average pixel intensity range [100, 150]; entropy threshold \(>1.99\); gray-level co-occurrence matrix properties: contrast range [0.045, 0.12], correlation range [0.93, 0.98], energy range [0.2, 0.34], and homogeneity range [0.94, 0.98].
The Dice similarity coefficient was computed as

$$DSC = \frac{2\left| X \cap Y \right| }{\left| X \right| + \left| Y \right| } \qquad (1)$$

where \(\left| X \right| \) was the number of pixels in the template image and \(\left| Y \right| \) the number of pixels in the candidate locations. The identified candidate locations were then edge- and contrast-enhanced using a power law transformation (\(\gamma =1.1\), \(c=1\)). Finally, a quadratic anisotropic diffusion filter was applied. At the end of this step, regions-of-interest (ROIs) around the vertebral bodies were identified for segmentation using two parallel approaches (Fig. 2), both of which were applied only to these ROIs.
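The overlap criterion of Eq. 1 can be sketched as a small numpy function; the function name and mask representation are illustrative, not from the original implementation:

```python
import numpy as np

def dice_coefficient(template: np.ndarray, candidate: np.ndarray) -> float:
    """Dice similarity coefficient (Eq. 1) between two binary masks.

    |X| and |Y| are the foreground pixel counts of the template and the
    candidate location; the numerator counts their overlapping pixels.
    """
    template = template.astype(bool)
    candidate = candidate.astype(bool)
    intersection = np.logical_and(template, candidate).sum()
    total = template.sum() + candidate.sum()
    if total == 0:
        return 0.0  # both masks empty: define DSC as 0 to avoid 0/0
    return 2.0 * intersection / total
```

A candidate location would then be accepted when `dice_coefficient(...) > 0.34`, alongside the intensity, entropy, and GLCM criteria listed above.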
Segmentation 1 - Contrast-Limited Adaptive Histogram Equalization: First, a contrast-limited adaptive histogram equalization (CLAHE) algorithm was applied in order to enhance the contrast in the identified gray-scale candidate ROIs (Fig. 3). The candidate locations were sharpened to enhance the contrast of the edges. Next, adaptive thresholding was applied to 3-by-3 neighborhoods of the vertebral ROIs to filter the noise, while simultaneously preserving the edges. The gray-level co-occurrence matrix was calculated once more and adaptive thresholding was applied to the scaled image. The resulting images were processed in three parallel pathways (Fig. 3). In (1), a fourth order Butterworth bandpass filter was applied (cut-off frequencies: [5, 71]), and the residual noise was removed through binarization (threshold = 0.99). The holes in the binarized objects were filled using morphological filling. In (2), no image filtering was applied before binarization and morphological hole filling. In (3), the vertebral edges were computed using Canny edge detection (sensitivity thresholds [0.02, 0.05]). The three images were fused together, so that the pixels constituting the vertebral edges were kept in the fused images if and only if they had the same value of 1 (white) at the same pixel locations. Finally, this step concluded with morphological opening and then closing. The results of Segmentation 1 are illustrated in Fig. 5b.
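The fusion rule of the three pathways (a pixel survives only if all three outputs mark it as an edge) can be expressed as a logical conjunction; this is a minimal sketch, with the function name chosen for illustration:

```python
import numpy as np

def fuse_pathways(bandpass_bin: np.ndarray,
                  raw_bin: np.ndarray,
                  canny_edges: np.ndarray) -> np.ndarray:
    """Keep an edge pixel only where all three binary pathway outputs
    agree (all equal 1 at the same location)."""
    fused = np.logical_and.reduce([bandpass_bin.astype(bool),
                                   raw_bin.astype(bool),
                                   canny_edges.astype(bool)])
    return fused.astype(np.uint8)
```

Morphological opening and closing would then be applied to the fused mask, as described above.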
Segmentation 2 - Gradient Magnitude: In Segmentation 2 (Fig. 4), a gradient magnitude was applied to the fluoroscopic images, filtered with a quadratic anisotropic diffusion filter. The images were then filtered using an edge-preserving, local Laplacian filter (\(\sigma =0.9\), \(\alpha =0.1\)). The vertebral ROIs were then sharpened to enhance the contrast along the edges (radius = 3; sharpening strength = 2, minimum contrast threshold = 0). Next, adaptive thresholding was applied for binarization, and morphological opening and closing for filling the holes and bridging the edges in the segmented vertebrae. The results of Segmentation 2 are illustrated in Fig. 5c.
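The gradient magnitude step can be sketched with central differences in plain numpy (the paper does not specify the exact operator, so this is an assumption for illustration):

```python
import numpy as np

def gradient_magnitude(image: np.ndarray) -> np.ndarray:
    """Gradient magnitude of a 2-D grayscale image using
    central differences (one-sided at the borders)."""
    gy, gx = np.gradient(image.astype(float))  # derivatives along rows, cols
    return np.hypot(gx, gy)
```

High values of the result trace the vertebral edges, which the subsequent sharpening and adaptive thresholding then binarize.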
Template Registration: At the beginning of this step, each vertebral ROI was segmented using the two aforementioned segmentation procedures. In order to quantitatively determine which of them provided the best results, the template image (Fig. 5a) was matched once again with the vertebral boundaries by means of affine registration (Fig. 5d and e). The registration was optimized by means of mean squared error, with a regular step gradient descent configuration, initial step length of 0.01, and 1000 iterations. The segmentation result with the highest DSC (Fig. 6a) was selected.
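The selection between the two segmentation results reduces to scoring each registered mask against the template and keeping the best; a minimal sketch (the helper name and the injected `dice` callable are illustrative):

```python
def select_best_segmentation(template, registered_masks, dice):
    """Score each registered segmentation against the template with the
    supplied Dice function and return (index, score) of the best match."""
    scores = [dice(template, mask) for mask in registered_masks]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```

In the pipeline above, `registered_masks` would hold the affine-registered outputs of Segmentation 1 and Segmentation 2 for one vertebral ROI.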
Corner Detection: The corners of the segmented vertebrae (Fig. 6a) were located by determining the largest Euclidean distance between all the points of the vertebral boundary (Fig. 6b). The four corners obtained in this process were then selected as vertebral landmarks (Fig. 6c). The results of the corner detection are shown in red in Fig. 7, superimposed on a fluoroscopic image with manually annotated vertebral corners (blue).
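The distance-based corner search starts from the farthest pair of boundary points (a diagonal of the vertebral body); a simplified sketch of that first step, assuming the boundary is given as an (N, 2) point array:

```python
import numpy as np

def farthest_point_pair(points: np.ndarray):
    """Return indices of the two boundary points with the largest
    Euclidean separation, i.e. a diagonal of the vertebral outline."""
    diffs = points[:, None, :] - points[None, :, :]
    d2 = np.einsum('ijk,ijk->ij', diffs, diffs)  # squared pairwise distances
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    return i, j
```

The remaining two corners would be found analogously among the boundary points farthest from this diagonal; that refinement is omitted here.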
2.3 Manual Annotation of Vertebral Corners
For the purpose of validating the algorithm, vertebral corners were also manually annotated on the fluoroscopic images in C3–C7 (Fig. 1a). Additionally, intervertebral joint angles were computed using the automatically detected and manually annotated vertebral corners. The marking procedure is described in detail in Plocharski et al. [12]. Briefly, four corners were manually marked on the C3–C6 vertebrae at points where lines through soft or cancellous corners intersect with the outer edges of the compact bone. Figure 1b illustrates the placement and the order of the markings. Due to the fact that C7 is often partially obscured in fluoroscopic recordings, it was only marked with two points on the superior cancellous corners under the superior vertebral plate [12]. In order to compute the cervical joint angles, we incorporated the vertebral landmark methodology developed by Frobin et al. [3]. A line connecting the posterior and anterior midpoints, defined as equidistant points between corners 1 and 4, and 2 and 3 respectively (red points in Fig. 1b) formed a mid-plane, which was used for angle computation between two adjacent vertebrae (angles \(\theta _1\) and \(\theta _2\), Fig. 1b). The C6/C7 angle was computed between the C6 mid-plane and a line going through the two corners of C7. All angles were calculated as four-quadrant inverse tangents of the determinant and dot product of the two direction vectors, measured counterclockwise from the posterior to the anterior midpoints in the range from \(0^{\circ }\) to \(180^{\circ }\) [12].
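The four-quadrant angle computation described above can be written directly from the determinant and dot product of the two mid-plane direction vectors; note that `arctan2` returns a signed angle in (−180°, 180°], so mapping into the paper's 0°–180° range is left to the caller:

```python
import numpy as np

def intervertebral_angle(v1: np.ndarray, v2: np.ndarray) -> float:
    """Signed angle in degrees between two 2-D mid-plane direction
    vectors, computed as atan2(det(v1, v2), v1 . v2)."""
    det = v1[0] * v2[1] - v1[1] * v2[0]   # 2-D cross product (determinant)
    dot = float(np.dot(v1, v2))
    return float(np.degrees(np.arctan2(det, dot)))
```

Here `v1` and `v2` would be the posterior-to-anterior midpoint vectors of two adjacent vertebrae (Fig. 1b).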
3 Results
Figure 7 illustrates the automatically identified (red) and manually annotated (blue) vertebral corners on C3–C7. For the C3–C6 vertebrae, the automatic detection method provided locations in close proximity to the manual annotations. A few inaccurate detections of the first and fourth corners were observed in C3 (Fig. 7a and g), and of the second and fourth corners in C6 (Fig. 7a and d). Corner detection in C7 yielded somewhat inferior results, especially for the second corner (Fig. 7d, h), likely due to an absence of clear vertebral edges. For each vertebral corner, we compared the point coordinates of the automatically identified corners and the corresponding manual annotations. The error was calculated as the average Euclidean distance between the two corresponding corners in the (\(n=8\)) fluoroscopic images (Eq. 2):
$$E = \frac{1}{n}\sum _{i=1}^{n}\sqrt{(x_{A,i}-x_{M,i})^2 + (y_{A,i}-y_{M,i})^2} \qquad (2)$$

where \((x_{A},y_{A})\) was the automatically detected corner, and \((x_{M},y_{M})\) was the manually annotated one. Table 1 lists the mean errors and standard deviations in pixels. Errors smaller than five pixels were deemed acceptable. Additionally, a one-tailed t-test was computed for every automatically identified corner to test the null hypothesis that the average detection error was equal to or smaller than five pixels. The p-values for all tests are shown in Table 1. Statistical analysis was performed in SPSS (IBM Statistics, v.25). All data in Tables 1 and 2 were initially tested for normality using the Shapiro-Wilk test. Normality of the data was confirmed (\(p>0.05\)). The statistical analysis indicates that the average corner detection errors were not significantly larger than five pixels. Table 2 lists the intervertebral angles, computed using the approach illustrated in Fig. 1b, using both the manually annotated and the automatically detected vertebral corners. A paired-sample t-test was computed for each joint to determine if the angles obtained using the two approaches differed significantly. No significant difference was found between the two methods (\(p>0.05\) for all cervical joints).
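The error metric of Eq. 2 is a one-line numpy computation; the function name and array layout (corners stacked as (n, 2) coordinate pairs) are illustrative:

```python
import numpy as np

def mean_detection_error(auto_pts: np.ndarray, manual_pts: np.ndarray) -> float:
    """Average Euclidean distance (pixels) between automatically detected
    and manually annotated corners over n frames (Eq. 2)."""
    return float(np.mean(np.linalg.norm(auto_pts - manual_pts, axis=-1)))
```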
4 Discussion
Vertebral landmarks are a requirement for range of motion analysis, which is a crucial tool for understanding the spine joint mechanics [1]. The time-consuming process of manual landmark annotations is still a prerequisite of the state-of-the-art automatic vertebral tracking algorithms [11, 17, 19, 20]. In this paper we propose a method to automatically identify and segment the C3–C7 vertebral bodies in videofluoroscopic images, and to detect the vertebral landmarks necessary for cervical joint motion analysis. We compare this automatic detection method with manually annotated vertebral landmarks.
Results in Table 1 showed an average detection error under five pixels in the C3–C6 vertebrae, with the lowest average error of \(1.65\pm 1.60\) pixels in the C4 vertebra. The one-sample, one-tailed t-test on each of the average detection errors of the four corners in the C3–C7 vertebrae revealed that the errors were not significantly greater than five pixels. Given the spatial resolution of 576 \(\times \) 768 pixels, five pixels corresponded to \(0.9\%\) of the height and \(0.7\%\) of the width of the images. However, the average errors for the C7 vertebra were larger than those for the other vertebrae. A possible explanation is the partial occlusion, lower contrast, and lack of well-defined edges of C7. The joint angle results for C3–C7 were also not significantly different from the angles computed with the manually annotated corners (\(p>0.05\)). This suggests that the presented method is suitable for large-scale analysis of cervical joint motion using automatic tracking algorithms.
Comparison of these results with other work is difficult, since no similar vertebral landmark detection methods for the cervical spine in videofluoroscopic sequences were found in the literature. A similar study by Xu et al. [14] used a combination of Haar-like features and Active Appearance Model training algorithms for automatic segmentation of cervical vertebrae in X-ray images, obtaining a lowest average error of 4.79 pixels. Al-Arif et al. reported a lowest average median error of 2.08 mm using Haar-like features in radiographic images [20], and a lowest average error of 0.7688 mm using Active Shape Models with a Random Classification Forest in X-ray images [16]. Automatic approaches to detect and label vertebral landmarks have also been developed using deep learning [23, 24]. However, they require large data sets and high-quality imaging modalities, such as CT or MRI, and are thus not directly comparable to fluoroscopic sequences of the cervical spine.
The following limitations of this study need to be addressed. First, a larger number of participants would be beneficial. Second, the fluoroscopic images were of relatively good quality, and thus we did not evaluate the ability of our approach to automatically identify the cervical corners in images with higher degrees of blurring. However, the aim of this approach was vertebral detection at the onset of movement, with stationary subjects in a neutral position, and thus motion blur was not expected to occur. Finally, our approach did not aim to detect C1 or C2. C1 does not have a vertebral body and is seldom used in vertebral analyses, while C2 is often obscured and its corners are frequently not visible.
5 Conclusion
The proposed method to automatically detect and segment the cervical vertebrae allows a computation of the vertebral landmarks for a real-time intervertebral motion analysis in videofluoroscopy. It also eliminates the necessity for a manual annotation of the C3–C7 vertebrae for automatic landmark tracking. Additionally, our approach does not require large datasets necessary for training the algorithm to be able to detect the vertebrae, as is the case in deep learning approaches.
References
Stenneberg, M.S., et al.: To what degree does active cervical range of motion differ between patients with neck pain, patients with whiplash, and those without neck pain? A systematic review and meta-analysis. Arch. Phys. Med. Rehabil. 98(7), 1407–1434 (2017)
Qu, N., Lindstrøm, R., Hirata, R.P., Graven-Nielsen, T.: Origin of neck pain and direction of movement influence dynamic cervical joint motion and pressure pain sensitivity. Clin. Biomech. 61, 120–128 (2019)
Frobin, W., Leivseth, G., Biggemann, M., Brinckmann, P.: Sagittal plane segmental motion of the cervical spine. A new precision measurement protocol and normal motion data of healthy adults. Clin. Biomech. 17(1), 21–31 (2002)
Wang, X., Lindstroem, R., Plocharski, M., Østergaard, L.R., Graven-Nielsen, T.: Cervical flexion and extension includes anti-directional cervical joint motion in healthy adults. Spine J. 18(1), 147–154 (2018)
Wu, S.K., Kuo, L.C., Lan, H.C.H., Tsai, S.W., Su, F.C.: Segmental percentage contributions of cervical spine during different motion ranges of flexion and extension. Clin. Spine Surg. 23(4), 278–284 (2010)
Wang, X., Lindstroem, R., Plocharski, M., Østergaard, L.R., Graven-Nielsen, T.: Repeatability of cervical joint flexion and extension within and between days. J. Manipulative Physiol. Ther. 41(1), 10–18 (2018)
Anderst, W.J., Donaldson, W.F., Lee, J.Y., Kang, J.D.: Cervical spine intervertebral kinematics with respect to the head are different during flexion and extension motions. J. Biomech. 46(8), 1471–1475 (2013)
Breen, A.C., Teyhen, D.S., Mellor, F.E., Breen, A.C., Wong, K.W., Deitz, A.: Measurement of intervertebral motion using quantitative fluoroscopy: report of an international forum and proposal for use in the assessment of degenerative disc disease in the lumbar spine. Advances in Orthopedics (2012)
Lecron, F., Benjelloun, M., Mahmoudi, S.: Cervical spine mobility analysis on radiographs: a fully automatic approach. Comput. Med. Imaging Graph. 36(8), 634–642 (2012)
Ahmadi, A., Maroufi, N., Behtash, H., Zekavat, H., Parnianpour, M.: Kinematic analysis of dynamic lumbar motion in patients with lumbar segmental instability using digital videofluoroscopy. Eur. Spine J. 18(11), 1677–1685 (2009)
Nøhr, A.K., et al.: Semi-automatic method for intervertebral kinematics measurement in the cervical spine. In: Sharma, P., Bianchi, F.M. (eds.) SCIA 2017. LNCS, vol. 10270, pp. 302–313. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59129-2_26
Plocharski, M., Lindstroem, R., Lindstroem, C.F., Østergaard, L.R.: Motion analysis of the cervical spine during extension and flexion: reliability of the vertebral marking procedure. Med. Eng. Phys. 61, 81–86 (2018)
Cerciello, T., Romano, M., Bifulco, P., Cesarelli, M., Allen, R.: Advanced template matching method for estimation of intervertebral kinematics of lumbar spine. Med. Eng. Phys. 33(10), 1293–1302 (2011)
Xu, X., Hao, H.W., Yin, X.C., Liu, N., Shafin, S.H.: Automatic segmentation of cervical vertebrae in X-ray images. In: The 2012 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, June 2012
Klinder, T., Ostermann, J., Ehm, M., Franz, A., Kneser, R., Lorenz, C.: Automated model-based vertebra detection, identification, and segmentation in CT images. Med. Image Anal. 13(3), 471–482 (2009)
Al Arif, S.M.M.R., Gundry, M., Knapp, K., Slabaugh, G.: Improving an active shape model with random classification forest for segmentation of cervical vertebrae. In: Yao, J., Vrtovec, T., Zheng, G., Frangi, A., Glocker, B., Li, S. (eds.) CSI 2016. LNCS, vol. 10182, pp. 3–15. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-55050-3_1
Nauman, M., et al.: Automatic tracking of cervical spine using fluoroscopic sequences. In: Intelligent Systems Conference (IntelliSys) 2017, pp. 592–598. IEEE, September 2017
Sa, R., Owens, W., Wiegand, R., Chaudhary, V.: Fast scale-invariant lateral lumbar vertebrae detection and segmentation in X-ray images. In: 2016 IEEE 38th Annual International Conference of the Engineering in Medicine and Biology Society (EMBC), pp. 1054–1057. IEEE, August 2016
Wang, L., Zhang, Y., Lin, X., Yan, Z.: Study of lumbar spine activity regularity based on Kanade-Lucas-Tomasi algorithm. Biomed. Sig. Process. Control 49, 465–472 (2019)
Al Arif, S.M.R., Asad, M., Knapp, K., Gundry, M., Slabaugh, G.: Cervical vertebral corner detection using Haar-like features and modified hough forest. In: 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 417–422. IEEE, November 2015
Zhang, G., Shao, Y., Kim, Y., Guo, W.: Vertebrae detection algorithm in CT scout images. In: Tan, T., et al. (eds.) IGTA 2016. CCIS, vol. 634, pp. 230–237. Springer, Singapore (2016). https://doi.org/10.1007/978-981-10-2260-9_26
Mahmoudi, S.A., Lecron, F., Manneback, P., Benjelloun, M., Mahmoudi, S.: GPU-based segmentation of cervical vertebra in X-ray images. In: 2010 IEEE International Conference on Cluster Computing Workshops and Posters (CLUSTER WORKSHOPS), pp. 1–8. IEEE, September 2010
Yang, D., et al.: Automatic vertebra labeling in large-scale 3D CT using deep image-to-image network with message passing and sparsity regularization. In: Niethammer, M., et al. (eds.) IPMI 2017. LNCS, vol. 10265, pp. 633–644. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59050-9_50
Shi, D., Pan, Y., Liu, C., Wang, Y., Cui, D., Lu, Y.: Automatic localization and segmentation of vertebral bodies in 3D CT volumes with deep learning. In: Proceedings of the 2nd International Symposium on Image Computing and Digital Medicine, pp. 42–46. ACM, October 2018
Acknowledgements
Ethical approval: The study was conducted according to the Declaration of Helsinki and approved by The Scientific Ethical Committee for the Region of North Jutland (N20140004).
Ethics declarations
Conflicts of Interests
None.
Funding
None.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Jakobsen, I.M.G., Plocharski, M. (2019). Automatic Detection of Cervical Vertebral Landmarks for Fluoroscopic Joint Motion Analysis. In: Felsberg, M., Forssén, PE., Sintorn, IM., Unger, J. (eds) Image Analysis. SCIA 2019. Lecture Notes in Computer Science(), vol 11482. Springer, Cham. https://doi.org/10.1007/978-3-030-20205-7_18
Print ISBN: 978-3-030-20204-0
Online ISBN: 978-3-030-20205-7