
Class Imbalance Correction for Improved Universal Lesion Detection and Tagging in CT

  • Conference paper

Medical Image Learning with Limited and Noisy Data (MILLanD 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13559)


Abstract

Radiologists routinely detect and size lesions in CT to stage cancer and assess tumor burden. To potentially aid their efforts, multiple lesion detection algorithms have been developed with a large public dataset called DeepLesion (32,735 lesions, 32,120 CT slices, 10,594 studies, 4,427 patients, 8 body part labels). However, this dataset contains missing measurements and lesion tags, and exhibits a severe imbalance in the number of lesions per label category. In this work, we utilize a limited subset of DeepLesion (6%, 1331 lesions, 1309 slices) containing lesion annotations and body part label tags to train a VFNet model to detect lesions and tag them. We address the class imbalance by conducting three experiments: 1) Balancing data by the body part labels, 2) Balancing data by the number of lesions per patient, and 3) Balancing data by the lesion size. In contrast to a randomly sampled (unbalanced) data subset, our results indicated that balancing the body part labels always increased sensitivity for lesions \(\ge \)1 cm for classes with low data quantities (Bone: 80% vs. 46%, Kidney: 77% vs. 61%, Soft Tissue: 70% vs. 60%, Pelvis: 83% vs. 76%). Similar trends were seen for three other models tested (FasterRCNN, RetinaNet, FoveaBox). Balancing data by lesion size also helped the VFNet model improve recalls for all classes in contrast to an unbalanced dataset. We also provide a structured reporting guideline for a “Lesions” subsection to be entered into the “Findings” section of a radiology report. To our knowledge, we are the first to report the class imbalance in DeepLesion, and have taken data-driven steps to address it in the context of joint lesion detection and tagging.
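The paper does not publish code, but the first balancing strategy it describes (equalizing the number of training lesions per body part label) can be sketched as simple oversampling with replacement. The function and field names below (`balance_by_label`, `body_part`) are illustrative assumptions, not the authors' implementation:

```python
import random
from collections import defaultdict

def balance_by_label(samples, label_key="body_part", seed=0):
    """Oversample minority classes (with replacement) so that every
    label ends up with as many entries as the most frequent one."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for s in samples:
        groups[s[label_key]].append(s)
    target = max(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(g)                              # keep all originals
        balanced.extend(rng.choices(g, k=target - len(g)))  # pad minority classes
    rng.shuffle(balanced)
    return balanced

# Toy example mirroring the imbalance reported in DeepLesion:
lesions = (
    [{"body_part": "lung"}] * 8
    + [{"body_part": "bone"}] * 2
    + [{"body_part": "kidney"}] * 1
)
balanced = balance_by_label(lesions)
```

After balancing, each label contributes the same number of training samples, which is the condition the body-part-label experiment tests against the unbalanced random subset.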



Acknowledgements

This work was supported by the Intramural Research Program of the National Institutes of Health (NIH) Clinical Center.

Author information

Correspondence to Tejas Sudharshan Mathai.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1172 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Erickson, P.D., Mathai, T.S., Summers, R.M. (2022). Class Imbalance Correction for Improved Universal Lesion Detection and Tagging in CT. In: Zamzmi, G., Antani, S., Bagci, U., Linguraru, M.G., Rajaraman, S., Xue, Z. (eds) Medical Image Learning with Limited and Noisy Data. MILLanD 2022. Lecture Notes in Computer Science, vol 13559. Springer, Cham. https://doi.org/10.1007/978-3-031-16760-7_17

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-16760-7_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16759-1

  • Online ISBN: 978-3-031-16760-7

  • eBook Packages: Computer Science, Computer Science (R0)
