Abstract
Cell detection is a common task in computational pathology and is often fundamental to downstream tasks that aid in predicting prognosis or treatment response. The Overlapped Cell on Tissue Dataset for Histopathology (OCELOT) challenge aimed to explore ways of improving automated cell detection algorithms by leveraging surrounding tissue information. We developed two cell detection algorithms for this challenge, both of which leverage surrounding tissue context to enhance performance. The first is fed an additional input representing a cancer area probability heatmap predicted by a tissue segmentation model. The second is fed the cancer area probability heatmap together with a heatmap of cell locations predicted by a separate model. Submitting our first algorithm, we achieved a mean F1 score of 74.73 on the challenge validation set and second place with a mean F1 score of 72.21 on the challenge test set. Our algorithms do not require paired cell and tissue annotations for training, enabling them to enhance existing cell detection models where paired annotations may not exist.
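As a rough illustration of the first algorithm's input construction, the PyTorch sketch below concatenates a predicted cancer area probability heatmap with the RGB tile before cell detection. The class name, the assumption that the tissue model shares the cell image's field of view, and the 4-channel detector input are illustrative choices, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class TissueAwareCellDetector(nn.Module):
    """Hypothetical wrapper: feeds a cancer area probability heatmap to the
    cell detector as an extra input channel (RGB + 1 = 4 channels). The second
    variant described in the abstract would concatenate a further channel
    holding cell locations predicted by a separate model."""

    def __init__(self, tissue_segmenter: nn.Module, cell_detector: nn.Module):
        super().__init__()
        self.tissue_segmenter = tissue_segmenter  # frozen tissue model (assumed 1-channel logit output)
        self.cell_detector = cell_detector        # assumed to accept 4-channel input

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) RGB tile.
        with torch.no_grad():
            # (B, 1, H, W) cancer area probability heatmap in [0, 1].
            cancer_prob = torch.sigmoid(self.tissue_segmenter(image))
        # Concatenate the heatmap with the RGB channels: (B, 4, H, W).
        x = torch.cat([image, cancer_prob], dim=1)
        # Dense prediction of cell-centroid likelihood.
        return self.cell_detector(x)
```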
References
Atabansi, C.C., Nie, J., Liu, H., Song, Q., Yan, L., Zhou, X.: A survey of transformer applications for histopathological image analysis: new developments and future directions. Biomed. Eng. Online 22(1), 96 (2023)
Chen, R.J., et al.: Scaling vision transformers to gigapixel images via hierarchical self-supervised learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16144–16155 (2022)
He, K., Chen, X., Xie, S., Li, Y., Dollár, P., Girshick, R.: Masked autoencoders are scalable vision learners. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000–16009 (2022)
Kadota, K., et al.: A grading system combining architectural features and mitotic count predicts recurrence in stage I lung adenocarcinoma. Mod. Pathol. 25(8), 1117–1127 (2012)
Macenko, M., et al.: A method for normalizing histology slides for quantitative analysis. In: 2009 IEEE International Symposium on Biomedical Imaging: from Nano to Macro, pp. 1107–1110. IEEE (2009)
Pai, R.K., et al.: Quantitative pathologic analysis of digitized images of colorectal carcinoma improves prediction of recurrence-free survival. Gastroenterology 163(6), 1531–1546 (2022)
Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems, vol. 32, pp. 8024–8035 (2019)
Ryu, J., et al.: OCELOT: overlapped cell on tissue dataset for histopathology. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 23902–23912 (2023)
Williams, D.S., et al.: Lymphocytic response to tumour and deficient DNA mismatch repair identify subtypes of stage II/III colorectal cancer associated with patient outcomes. Gut 68(3), 465–474 (2019). https://doi.org/10.1136/gutjnl-2017-315664
Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: simple and efficient design for semantic segmentation with transformers. Adv. Neural. Inf. Process. Syst. 34, 12077–12090 (2021)
Appendices
A Excluded Image IDs
The following ‘cell’ image IDs from the released training set were identified as containing under-annotated cells to varying degrees and were excluded from cell detection model development. The corresponding ‘tissue’ images were still used to train the tissue segmentation model.
IDs: 051, 074, 079, 129, 135, 138, 140, 144, 147, 152, 168, 172, 181, 201, 223, 233, 244, 249, 251, 252, 255, 256, 263, 267, 279, 286, 292, 294, 307, 315, 323, 325, 334, 341, 345, 352, 376, 393, 396, 397.
Macenko normalisation of the ‘cell’ images failed for the following image IDs because the images contain no tissue. These were also excluded from cell detection model development; the corresponding ‘tissue’ images were still used to train the tissue segmentation model. A minimal sketch of this exclusion step follows the ID list below.
IDs: 042, 217, 392.
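The snippet below sketches how this exclusion might be applied when building the cell detection training set; the ID sets simply mirror the lists above, and the helper name is illustrative rather than the authors' actual code.

```python
# IDs excluded from cell detection training (copied from the lists above).
UNDER_ANNOTATED_IDS = {
    "051", "074", "079", "129", "135", "138", "140", "144", "147", "152",
    "168", "172", "181", "201", "223", "233", "244", "249", "251", "252",
    "255", "256", "263", "267", "279", "286", "292", "294", "307", "315",
    "323", "325", "334", "341", "345", "352", "376", "393", "396", "397",
}
NO_TISSUE_IDS = {"042", "217", "392"}  # Macenko normalisation failed
EXCLUDED_CELL_IDS = UNDER_ANNOTATED_IDS | NO_TISSUE_IDS

def keep_for_cell_training(image_id: str) -> bool:
    """True if this 'cell' image is used to train the cell detection model.
    The paired 'tissue' image is kept for tissue segmentation regardless."""
    return image_id not in EXCLUDED_CELL_IDS

# Example usage.
assert keep_for_cell_training("050")
assert not keep_for_cell_training("051")
```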
B Output Crop Margin
Illustration of the output crop margin applied to the tissue segmentation model. Given a \(512\times 512\) pixel input tile (a), predictions are retained only for the inner \(384\times 384\) pixels (red box), discarding predictions within 64 pixels of the border (b). The red overlay corresponds to predicted cancer area, whilst blue corresponds to normal tissue or background. (Color figure online)
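As a rough sketch of this cropping step (assuming, as in the figure, a \(512\times 512\) tile and a 64-pixel margin), only the central \(384\times 384\) block of each tissue prediction would be retained when stitching tile predictions together; the function name is illustrative.

```python
import torch

def crop_output_margin(pred: torch.Tensor, margin: int = 64) -> torch.Tensor:
    """Discard predictions within `margin` pixels of each border.

    pred: (B, C, H, W) tissue segmentation output for a 512x512 tile;
    returns the central (H - 2*margin) x (W - 2*margin) region, i.e. 384x384.
    """
    return pred[..., margin:-margin, margin:-margin]

# Example: a 512x512 cancer-probability map is reduced to its inner 384x384.
tile_pred = torch.rand(1, 1, 512, 512)
assert crop_output_margin(tile_pred).shape == (1, 1, 384, 384)
```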