
Crowd-powered experts: helping surgeons interpret breast cancer images

Published: 13 April 2014

Abstract

Crowdsourcing is often applied to replace the scarce or expensive labour of experts with that of untrained workers. In this paper, we argue that this objective might not always be desirable, and that we should instead aim to leverage the considerable work force of the crowd to support the highly trained expert. We demonstrate this alternative paradigm on the example of detecting malignant breast cancer in medical images. We compare the effectiveness and efficiency of experts to that of crowd workers, finding that experts perform significantly better but at greater cost. In a second series of experiments, we show how the comparatively cheap results produced by crowd workers can make experts both more efficient and more effective at the same time.
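The abstract does not specify how crowd judgements are combined before they reach the expert. A minimal sketch of one plausible approach, assuming binary malignant/benign labels and simple majority aggregation (the image names, threshold, and routing rule below are illustrative assumptions, not the paper's actual method):

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate crowd labels for one image by simple majority."""
    label, _ = Counter(labels).most_common(1)[0]
    return label

def agreement(labels):
    """Fraction of workers agreeing with the majority label."""
    _, top_count = Counter(labels).most_common(1)[0]
    return top_count / len(labels)

# Hypothetical crowd judgements per image: True = malignant.
crowd = {
    "img_01": [True, True, True, False, True],
    "img_02": [False, False, True, False, False],
    "img_03": [True, False, True, False, True],
}

# Route only low-agreement (ambiguous) images to the expert,
# so expert time is spent where the crowd is unsure.
THRESHOLD = 0.8
for_expert = [img for img, labels in crowd.items()
              if agreement(labels) < THRESHOLD]
```

Under this sketch only `img_03` (3-of-5 agreement) would be forwarded to the expert, while high-agreement images keep the crowd's label, which is one way cheap crowd labels could make expert review both faster and more focused.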




Published In

GamifIR '14: Proceedings of the First International Workshop on Gamification for Information Retrieval
April 2014
68 pages
ISBN:9781450328920
DOI:10.1145/2594776

Sponsors

  • University of Essex
  • Technische Universität Berlin
  • Microsoft Research

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. breast cancer
  2. cancer recognition
  3. crowdsourcing
  4. experts
  5. image annotation

Qualifiers

  • Research-article

Conference

GamifIR '14
Sponsor:
  • Technische Universität Berlin
  • Microsoft Research

Acceptance Rates

GamifIR '14 paper acceptance rate: 14 of 18 submissions (78%)


Cited By

  • (2024)The State of Pilot Study Reporting in Crowdsourcing: A Reflection on Best Practices and GuidelinesProceedings of the ACM on Human-Computer Interaction10.1145/36410238:CSCW1(1-45)Online publication date: 26-Apr-2024
  • (2023)Crowdsourcing and its applications to ophthalmologyExpert Review of Ophthalmology10.1080/17469899.2023.220093518:2(113-119)Online publication date: 12-Apr-2023
  • (2023)What is a related work? A typology of relationships in research literatureSynthese10.1007/s11229-022-03976-5201:1Online publication date: 9-Jan-2023
  • (2019)A Big Data Platform for Enhancing Life Imaging ActivitiesUtilizing Big Data Paradigms for Business Intelligence10.4018/978-1-5225-4963-5.ch002(39-71)Online publication date: 2019
  • (2018)An exploratory case study on letter-based, head-movement-driven communicationTechnology and Disability10.3233/TAD-16016329:4(153-161)Online publication date: 18-Apr-2018
  • (2018)Building a qualified annotation dataset for skin lesion analysis trough gamificationProceedings of the 2018 International Conference on Advanced Visual Interfaces10.1145/3206505.3206555(1-5)Online publication date: 29-May-2018
  • (2018)Cognitive Biases in CrowdsourcingProceedings of the Eleventh ACM International Conference on Web Search and Data Mining10.1145/3159652.3159654(162-170)Online publication date: 2-Feb-2018
  • (2018)Using Social Media for Biomonitoring: How Facebook, Twitter, Flickr and Other Social Networking Platforms Can Provide Large-Scale Biodiversity DataNext Generation Biomonitoring: Part 210.1016/bs.aecr.2018.06.001(133-168)Online publication date: 2018
  • (2017)Improving Consensus Scoring of Crowdsourced Data Using the Rasch Model: Development and Refinement of a Diagnostic InstrumentJournal of Medical Internet Research10.2196/jmir.798419:6(e222)Online publication date: 20-Jun-2017
  • (2016)Using Crowdsourcing for Scientific Analysis of Industrial Tomographic ImagesACM Transactions on Intelligent Systems and Technology10.1145/28973707:4(1-25)Online publication date: 12-Jul-2016
