Abstract
Radiologists routinely detect and size lesions in CT to stage cancer and assess tumor burden. To potentially aid their efforts, multiple lesion detection algorithms have been developed using a large public dataset called DeepLesion (32,735 lesions, 32,120 CT slices, 10,594 studies, 4,427 patients, 8 body part labels). However, this dataset contains missing measurements and lesion tags, and exhibits a severe imbalance in the number of lesions per label category. In this work, we utilize a limited subset of DeepLesion (6%; 1,331 lesions, 1,309 slices) containing lesion annotations and body part label tags to train a VFNet model to detect and tag lesions. We address the class imbalance by conducting three experiments: (1) balancing the data by body part labels, (2) balancing the data by the number of lesions per patient, and (3) balancing the data by lesion size. In contrast to a randomly sampled (unbalanced) data subset, our results indicated that balancing the body part labels consistently increased sensitivity for lesions ≥1 cm in classes with low data quantities (Bone: 80% vs. 46%; Kidney: 77% vs. 61%; Soft Tissue: 70% vs. 60%; Pelvis: 83% vs. 76%). Similar trends were seen for three other models tested (Faster R-CNN, RetinaNet, FoveaBox). Balancing the data by lesion size also helped the VFNet model improve recall for all classes relative to the unbalanced dataset. We also provide a structured reporting guideline for a “Lesions” subsection to be entered into the “Findings” section of a radiology report. To our knowledge, we are the first to report the class imbalance in DeepLesion, and we have taken data-driven steps to address it in the context of joint lesion detection and tagging.
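The paper does not ship code, but the first experiment (balancing by body part label) amounts to class-balanced resampling of the training set. Below is a minimal, hypothetical sketch of one plausible reading of that step: minority label classes are oversampled with replacement until every class matches the largest one. The function name `balance_by_label` and the toy data are illustrative, not from the paper.

```python
import random
from collections import defaultdict

def balance_by_label(samples, seed=0):
    """Oversample each label class to the size of the largest class.

    `samples` is a list of (item, label) pairs; returns a new list in
    which every label appears equally often.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for item, label in samples:
        by_label[label].append((item, label))
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)                        # keep every original sample
        balanced.extend(rng.choices(group, k=target - len(group)))  # resample minority
    rng.shuffle(balanced)
    return balanced

# Toy example: "bone" is under-represented relative to "lung",
# mirroring the label imbalance described in the abstract.
data = [("slice1", "lung"), ("slice2", "lung"), ("slice3", "lung"), ("slice4", "bone")]
balanced = balance_by_label(data)
counts = defaultdict(int)
for _, label in balanced:
    counts[label] += 1
```

After balancing, each label class contributes the same number of training samples; the same pattern could be applied per patient or per lesion-size bin for the other two experiments.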
References
Eisenhauer, E., et al.: New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur. J. Cancer 45(2), 228–247 (2009)
Schwartz, L., et al.: RECIST 1.1-update and clarification: from the RECIST committee. Eur. J. Cancer 62, 132–137 (2016)
Yan, K., et al.: Learning from multiple datasets with heterogeneous and partial labels for universal lesion detection in CT. IEEE TMI 40(10), 2759–2770 (2021)
Cai, J., et al.: Lesion harvester: iteratively mining unlabeled lesions and hard-negative examples at scale. IEEE TMI 40(1), 59–70 (2021)
Yang, J., et al.: AlignShift: bridging the gap of imaging thickness in 3D anisotropic volumes. In: Martel, A.L., et al. (eds.) MICCAI 2020. LNCS, vol. 12264, pp. 562–572. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59719-1_55
Yang, J., He, Y., Kuang, K., Lin, Z., Pfister, H., Ni, B.: Asymmetric 3D context fusion for universal lesion detection. In: de Bruijne, M., et al. (eds.) MICCAI 2021. LNCS, vol. 12905, pp. 571–580. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-87240-3_55
Han, L., et al.: SATr: slice attention with transformer for universal lesion detection. arXiv (2022)
Cai, J., et al.: Deep lesion tracker: monitoring lesions in 4D longitudinal imaging studies. In: IEEE CVPR (2020)
Tang, W., et al.: Transformer lesion tracker. arXiv (2022)
Yan, K., et al.: Holistic and comprehensive annotation of clinically significant findings on diverse CT images: learning from radiology reports and label ontology. In: IEEE CVPR (2019)
Yan, K., et al.: MULAN: multitask universal lesion analysis network for joint lesion detection, tagging, and segmentation. In: Shen, D., et al. (eds.) MICCAI 2019. LNCS, vol. 11769, pp. 194–202. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32226-7_22
Setio, A.A.A., et al.: Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the LUNA16 challenge. Med. Image Anal. 42, 1–13 (2017)
Bilic, P., et al.: The Liver Tumor Segmentation Benchmark (LiTS). CoRR (2019)
Roth, H.R., et al.: A new 2.5D representation for lymph node detection using random sets of deep convolutional neural network observations. In: Golland, P., Hata, N., Barillot, C., Hornegger, J., Howe, R. (eds.) MICCAI 2014. LNCS, vol. 8673, pp. 520–527. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10404-1_65
Zhang, H., et al.: VarifocalNet: an IoU-aware dense object detector. In: IEEE CVPR, pp. 8514–8523 (2021)
Ren, S., et al.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE PAMI 39(6), 1137–1149 (2017)
Lin, T.Y., et al.: Focal loss for dense object detection. In: IEEE ICCV, pp. 2999–3007 (2017)
Kong, T., et al.: FoveaBox: beyond anchor-based object detector. arXiv (2019)
Tian, Z., et al.: FCOS: fully convolutional one-stage object detection. In: IEEE ICCV, pp. 9627–9636 (2019)
Zhang, S., et al.: Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. CoRR (2019)
Solovyev, R., et al.: Weighted boxes fusion: ensembling boxes from different object detection models. Image Vis. Comput. 107, 104117 (2021)
Yan, K., et al.: DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning. J. Med. Imaging 5(3), 1–11 (2018)
Yan, K., et al.: Unsupervised body part regression via spatially self-ordering convolutional neural networks. In: IEEE ISBI, pp. 1022–1025 (2018)
Mattikalli, T., et al.: Universal lesion detection in CT scans using neural network ensembles. In: SPIE Medical Imaging: Computer-Aided Diagnosis, vol. 12033 (2022)
Acknowledgements
This work was supported by the Intramural Research Program of the National Institutes of Health (NIH) Clinical Center.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Erickson, P.D., Mathai, T.S., Summers, R.M. (2022). Class Imbalance Correction for Improved Universal Lesion Detection and Tagging in CT. In: Zamzmi, G., Antani, S., Bagci, U., Linguraru, M.G., Rajaraman, S., Xue, Z. (eds) Medical Image Learning with Limited and Noisy Data. MILLanD 2022. Lecture Notes in Computer Science, vol 13559. Springer, Cham. https://doi.org/10.1007/978-3-031-16760-7_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-16759-1
Online ISBN: 978-3-031-16760-7
eBook Packages: Computer Science (R0)