DOI: 10.1145/2736277.2741097

Groupsourcing: Team Competition Designs for Crowdsourcing

Published: 18 May 2015

ABSTRACT

Many data processing tasks such as semantic annotation of images, translation of texts in foreign languages, and labeling of training data for machine learning models require human input and, on a large scale, can only be accurately solved using crowd-based online work. Recent work shows that frameworks where crowd workers compete against each other can drastically reduce crowdsourcing costs and outperform conventional reward schemes where the payment of online workers is proportional to the number of accomplished tasks ("pay-per-task"). In this paper, we investigate how team mechanisms can be leveraged to further improve the cost efficiency of crowdsourcing competitions. To this end, we introduce strategies for team-based crowdsourcing, ranging from team formation processes where workers are randomly assigned to competing teams, through strategies involving self-organization where workers actively participate in team building, to combinations of team and individual competitions. Our large-scale experimental evaluation with more than 1,100 participants and a total of 5,400 hours of work spent by crowd workers demonstrates that our team-based crowdsourcing mechanisms are well accepted by online workers and lead to substantial performance boosts.
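To make the team mechanisms described above more concrete, the following is a minimal Python sketch of one possible design: workers are randomly assigned to competing teams, and the winning team's prize is split among its members in proportion to the tasks each completed. All function names, the payout rule, and the numbers in the usage example are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import defaultdict

def assign_teams(workers, num_teams=2, seed=None):
    """Randomly assign workers to competing teams (one of the team
    formation strategies sketched in the abstract)."""
    rng = random.Random(seed)
    shuffled = list(workers)
    rng.shuffle(shuffled)
    teams = defaultdict(list)
    for i, worker in enumerate(shuffled):
        teams[i % num_teams].append(worker)
    return dict(teams)

def split_prize(task_counts, prize):
    """Split a team prize among members in proportion to the number of
    tasks each completed (an assumed payout rule)."""
    total = sum(task_counts.values())
    if total == 0:
        return {w: 0.0 for w in task_counts}
    return {w: prize * c / total for w, c in task_counts.items()}

# Usage example with made-up workers and task counts.
teams = assign_teams(["w1", "w2", "w3", "w4", "w5", "w6"], num_teams=2, seed=7)
completed = {"w1": 30, "w2": 10, "w3": 5, "w4": 25, "w5": 0, "w6": 12}
winner = max(teams, key=lambda t: sum(completed[w] for w in teams[t]))
payouts = split_prize({w: completed[w] for w in teams[winner]}, prize=50.0)
print(teams, winner, payouts)
```

A self-organized variant would replace the random assignment with worker-initiated team building, and a combined design would additionally rank individuals within the winning team.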


Published in

WWW '15: Proceedings of the 24th International Conference on World Wide Web
May 2015, 1460 pages
ISBN: 9781450334693
Copyright © 2015 International World Wide Web Conference Committee (IW3C2)

Publisher

International World Wide Web Conferences Steering Committee
Republic and Canton of Geneva, Switzerland


        Qualifiers

        • research-article

        Acceptance Rates

WWW '15 paper acceptance rate: 131 of 929 submissions, 14%. Overall acceptance rate: 1,899 of 8,196 submissions, 23%.
