DOI: 10.1145/2494188.2494210

Games with a Purpose or Mechanised Labour?: A Comparative Study

Published: 04 September 2013

ABSTRACT

Mechanised labour and games with a purpose are the two most popular human computation genres, frequently employed to support research activities in fields as diverse as natural language processing, the semantic web, and databases. Research projects typically rely on one genre or the other, and therefore there is a general lack of understanding of how the two genres compare and of whether and how they could be used together to offset their respective weaknesses. This paper addresses these open questions. First, it identifies the differences between the two genres, primarily in terms of cost, speed, and result quality, based on existing studies in the literature. Second, it reports on a comparative study in which the same task is performed through both genres and the results are compared. The study's findings demonstrate that the two genres are highly complementary, which not only makes them suitable for different types of projects, but also opens new opportunities for building cross-genre human computation solutions that exploit the strengths of both genres simultaneously.
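To make the comparison dimensions concrete, the sketch below shows one way such a comparison could be computed: per-item labels collected through each genre are aggregated by majority vote and scored against a gold standard, alongside cost per label and throughput. This is a minimal illustration only; the toy data, the evaluate helper, and the majority-vote aggregation are assumptions made here for exposition and do not reflect the paper's actual task, datasets, or evaluation procedure.

    from collections import Counter

    def evaluate(batch, gold, total_cost, elapsed_hours):
        """Score one batch of crowd labels: majority-vote accuracy, cost per label, throughput.

        batch: item id -> list of labels from individual contributors (hypothetical)
        gold:  item id -> reference label (hypothetical)
        """
        # Aggregate each item's labels by simple majority vote.
        majority = {item: Counter(labels).most_common(1)[0][0]
                    for item, labels in batch.items()}
        n_labels = sum(len(labels) for labels in batch.values())
        correct = sum(majority.get(item) == label for item, label in gold.items())
        return {
            "accuracy": correct / len(gold),
            "cost_per_label": total_cost / n_labels,      # worker payments for mechanised labour, ~0 for a GWAP
            "labels_per_hour": n_labels / elapsed_hours,  # rough speed proxy
        }

    # Toy, made-up judgements: 3 items, 3 contributors per item.
    gold  = {"i1": "A", "i2": "B", "i3": "A"}
    mturk = {"i1": ["A", "A", "B"], "i2": ["B", "B", "B"], "i3": ["A", "B", "B"]}
    gwap  = {"i1": ["A", "A", "A"], "i2": ["B", "A", "B"], "i3": ["A", "A", "B"]}

    print("mechanised labour:  ", evaluate(mturk, gold, total_cost=0.45, elapsed_hours=2))
    print("game with a purpose:", evaluate(gwap, gold, total_cost=0.0, elapsed_hours=24))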

Published in

i-Know '13: Proceedings of the 13th International Conference on Knowledge Management and Knowledge Technologies
September 2013, 271 pages
ISBN: 9781450323000
DOI: 10.1145/2494188
Copyright © 2013 ACM

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Qualifiers

    • research-article

    Acceptance Rates

i-Know '13 Paper Acceptance Rate: 27 of 87 submissions, 31%
Overall Acceptance Rate: 77 of 238 submissions, 32%
