ABSTRACT
We propose to demonstrate CrowdCur, a system that allows platform administrators, requesters, and workers to conduct various analytics of interest. CrowdCur includes a worker curation component that relies on explicit feedback elicitation to best capture workers' preferences, a task curation component that monitors task completion and aggregates task statistics, and an OLAP-style component to query and combine analytics by worker, by task type, etc. Administrators can fine-tune their platform's performance, requesters can compare platforms and better choose which workers to target, and workers can compare themselves to others and find the tasks and requesters that suit them best.
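To make the OLAP-style component concrete, the sketch below shows what "querying analytics by worker or by task type" could look like over a task-completion log. This is an illustrative assumption, not CrowdCur's actual implementation: the log schema, the `rollup` helper, and the sample data are all hypothetical.

```python
from collections import defaultdict

# Hypothetical task-completion log: (worker, task_type, completed, rating).
# In CrowdCur such statistics would come from the task curation component.
LOG = [
    ("w1", "image-label", 1, 4.0),
    ("w1", "survey",      1, 3.5),
    ("w2", "image-label", 0, 0.0),
    ("w2", "image-label", 1, 5.0),
]

def rollup(log, key):
    """Aggregate completion counts and mean rating grouped by the given
    column index (0 = worker, 1 = task type), mimicking an OLAP group-by."""
    # For each group: [completed count, rating sum, number of rated tasks]
    stats = defaultdict(lambda: [0, 0.0, 0])
    for row in log:
        s = stats[row[key]]
        s[0] += row[2]
        if row[2]:          # only completed tasks carry a rating
            s[1] += row[3]
            s[2] += 1
    return {k: {"completed": c, "avg_rating": (r / n if n else None)}
            for k, (c, r, n) in stats.items()}

by_worker = rollup(LOG, 0)     # analytics "by worker"
by_task_type = rollup(LOG, 1)  # analytics "by task type"
```

Here `rollup(LOG, 0)` reports, for example, that worker `w1` completed 2 tasks with an average rating of 3.75, while `rollup(LOG, 1)` aggregates the same log by task type; a real OLAP engine would support arbitrary combinations of such dimensions.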