ABSTRACT
Many data processing tasks, such as semantic annotation of images, translation of texts in foreign languages, and labeling of training data for machine learning models, require human input and, at large scale, can only be solved accurately through crowd-based online work. Recent work shows that frameworks in which crowd workers compete against each other can drastically reduce crowdsourcing costs and outperform conventional reward schemes in which workers are paid in proportion to the number of accomplished tasks ("pay-per-task"). In this paper, we investigate how team mechanisms can be leveraged to further improve the cost efficiency of crowdsourcing competitions. To this end, we introduce strategies for team-based crowdsourcing, ranging from team formation processes in which workers are randomly assigned to competing teams, through self-organization strategies in which workers actively participate in team building, to combinations of team and individual competitions. Our large-scale experimental evaluation, with more than 1,100 participants and a total of 5,400 hours of work by crowd workers, demonstrates that our team-based crowdsourcing mechanisms are well accepted by online workers and lead to substantial performance boosts.
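The cost argument behind competition-based schemes can be illustrated with a minimal sketch. The functions, numbers, and prize structure below are hypothetical (the paper's exact reward designs are not given in this abstract): the point is simply that a pay-per-task budget grows linearly with completed work, while a team competition with a fixed prize pool caps the requester's cost regardless of output volume.

```python
# Illustrative sketch (not the paper's exact scheme): total requester cost
# under a pay-per-task reward vs. a fixed-budget team competition.

def pay_per_task_cost(tasks_per_worker, rate_per_task):
    """Conventional scheme: each worker is paid per accomplished task,
    so cost scales linearly with the amount of completed work."""
    return sum(n * rate_per_task for n in tasks_per_worker)

def team_competition_cost(team_scores, prizes):
    """Competition scheme: a fixed prize pool is split among the
    top-ranked teams, so cost is capped at the prize budget."""
    ranked = sorted(team_scores, reverse=True)
    # Pay at most one prize per team, highest prize to highest score.
    return sum(prizes[:len(ranked)])

# Hypothetical numbers: 12 workers in 3 teams, $0.05 per task.
tasks_per_worker = [40, 55, 30, 25, 60, 45, 35, 50, 20, 42, 38, 48]
ppt = pay_per_task_cost(tasks_per_worker, 0.05)
comp = team_competition_cost([210, 160, 118], [10.0, 6.0, 4.0])
print(f"pay-per-task: ${ppt:.2f}, team competition: ${comp:.2f}")
```

With these assumed figures the pay-per-task cost (about $24.40 for 488 tasks) exceeds the fixed $20.00 prize pool, and the gap widens as workers complete more tasks, which is the intuition the competition designs exploit.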
Index Terms
- Groupsourcing: Team Competition Designs for Crowdsourcing