DOI: 10.1145/3183713.3193563

Crowdsourcing Analytics With CrowdCur

Published: 27 May 2018

ABSTRACT

We propose to demonstrate CrowdCur, a system that allows platform administrators, requesters, and workers to conduct various analytics of interest. CrowdCur includes a worker curation component that relies on explicit feedback elicitation to best capture workers' preferences, a task curation component that monitors task completion and aggregates task statistics, and an OLAP-style component for querying and combining analytics by worker, by task type, etc. Administrators can fine-tune their system's performance. Requesters can compare platforms and better choose which workers to target. Workers can compare themselves to others and find the tasks and requesters that suit them best.
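As a rough illustration of the OLAP-style analytics the abstract describes, the Python sketch below combines per-worker and per-task-type statistics from a flat log of task completions. The schema and all names in it (completions, worker_id, task_type, accuracy, seconds) are hypothetical stand-ins, not CrowdCur's actual data model or API.

```python
# Hypothetical sketch of CrowdCur-style OLAP analytics over a task-completion
# log. The schema below is assumed for illustration, not taken from the paper.
import pandas as pd

completions = pd.DataFrame([
    {"worker_id": "w1", "task_type": "image-label", "requester": "r1",
     "accuracy": 0.92, "seconds": 34},
    {"worker_id": "w1", "task_type": "sentiment",   "requester": "r2",
     "accuracy": 0.81, "seconds": 21},
    {"worker_id": "w2", "task_type": "image-label", "requester": "r1",
     "accuracy": 0.88, "seconds": 40},
])

# "By worker": each worker's task volume and mean accuracy, the kind of
# summary a worker could use to compare themselves to others.
by_worker = completions.groupby("worker_id").agg(
    tasks=("task_type", "count"),
    mean_accuracy=("accuracy", "mean"),
)

# "By task type" (drilled down by worker): completion statistics a requester
# could use to choose which workers to target for a given task type.
by_task_type = completions.groupby(["task_type", "worker_id"]).agg(
    mean_seconds=("seconds", "mean"),
    mean_accuracy=("accuracy", "mean"),
)

print(by_worker)
print(by_task_type)
```

In this reading, each grouping key plays the role of one OLAP dimension (worker, task type, requester), and rolling up or drilling down amounts to adding or removing keys from the groupby.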


Published in

SIGMOD '18: Proceedings of the 2018 International Conference on Management of Data
May 2018, 1874 pages
ISBN: 9781450347037
DOI: 10.1145/3183713
Copyright © 2018 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



                  Acceptance Rates

SIGMOD '18 paper acceptance rate: 90 of 461 submissions, 20%. Overall acceptance rate: 785 of 4,003 submissions, 20%.
