Abstract
State-of-the-art deep learning approaches yield human-like performance in numerous object detection and classification tasks. The foundation of their success is the availability of sufficiently large training datasets, which are expensive to create, especially in the field of medical imaging. Crowdsourcing has been used to create large datasets for a broad range of disciplines. This study explores the challenges and opportunities of crowd-algorithm collaboration for the object detection task of grading cytology whole-slide images. We compared the classical crowdsourcing performance of twenty participants with their results under crowd-algorithm collaboration; all participants performed both modes, in random order, on the same twenty images. Additionally, we introduced artificial systematic flaws into the precomputed annotations to estimate any bias towards accepting precomputed annotations. We gathered 9524 annotations on 800 images from twenty participants organised into four groups according to their level of expertise with cytology. On average, the crowd-algorithm mode improved the participants' classification accuracy by 7%, the mean average precision by 8% and the inter-observer Fleiss' kappa score by 20%, and reduced the time spent by 31%. However, two-thirds of the artificially introduced false labels were not recognised as such by the contributors. This study shows that crowd-algorithm collaboration is a promising approach to generating large datasets, provided that a carefully designed setup eliminates potential biases.
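The abstract reports inter-observer agreement as a Fleiss' kappa score. As a minimal sketch of how that statistic is computed, the following implements the standard Fleiss' kappa formula; the toy rating matrix and grade counts below are hypothetical illustrations, not data from the study:

```python
from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa for a subjects-by-categories count matrix.

    counts[i][j] = number of raters who assigned subject i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(counts)        # number of subjects (e.g. annotated cells)
    n = sum(counts[0])     # raters per subject
    k = len(counts[0])     # number of categories (e.g. grades)
    # mean per-subject agreement P_bar
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N
    # chance agreement P_e from the marginal category proportions
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 4 subjects, 5 raters, 3 grade categories
ratings = [
    [5, 0, 0],
    [4, 1, 0],
    [0, 5, 0],
    [1, 1, 3],
]
print(round(fleiss_kappa(ratings), 3))  # → 0.545
```

A kappa of 1.0 indicates perfect agreement and 0.0 agreement no better than chance, which is why the reported 20% improvement in kappa under crowd-algorithm collaboration signals more consistent labels across participants.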
© 2020 Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature
Cite this paper
Marzahl, C. et al. (2020). Is Crowd-Algorithm Collaboration an Advanced Alternative to Crowd-Sourcing on Cytology Slides?. In: Tolxdorff, T., Deserno, T., Handels, H., Maier, A., Maier-Hein, K., Palm, C. (eds) Bildverarbeitung für die Medizin 2020. Informatik aktuell. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-29267-6_5
Publisher Name: Springer Vieweg, Wiesbaden
Print ISBN: 978-3-658-29266-9
Online ISBN: 978-3-658-29267-6