Abstract
With the advent and growing use of crowdsourcing labor markets for a variety of applications, optimizing the quality of the results produced is of prime importance. This quality typically depends on the performance of crowd workers. In this paper, we investigate treating crowd workers as 'learners' in a novel learning environment, characterized by a short-lived learning phase and the immediate application of learned concepts. We draw motivation from crowd workers' desire to perform well in order to maintain a good reputation while successfully attaining monetary rewards. We therefore investigate training workers on specific microtasks of different types, exploiting (i) implicit training, where workers receive corrective training when they give erroneous responses to questions with previously known answers, and (ii) explicit training, where workers are required to complete a training phase before they attempt the task itself. We evaluated our approach on 4 different types of microtasks with a total of 1,200 workers, each subjected to one of the proposed training strategies or to a no-training baseline. The results show that workers who undergo training improve their performance by up to 5% and reduce their task completion time by up to 41%. In addition, crowd training eliminated malicious workers and yielded a cost-benefit gain of up to nearly 15%.
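To make the two training strategies from the abstract concrete, the sketch below shows one way a microtask pipeline could implement them: implicit training delivers corrective feedback whenever a worker answers a gold question (one with a known answer) incorrectly, while explicit training gates the paid task behind a dedicated training phase. This is a minimal illustration under our own assumptions, not the authors' implementation; the class and function names (Question, Worker, implicit_training, explicit_training) and the 0.8 pass threshold are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Question:
    prompt: str
    answer: str
    is_gold: bool = False   # gold question: the correct answer is known up front
    feedback: str = ""      # corrective explanation shown during training


@dataclass
class Worker:
    worker_id: str
    correct: int = 0
    attempted: int = 0


def implicit_training(worker, task_questions, get_response, show_feedback):
    """Interleave gold questions with regular ones; whenever the worker answers
    a gold question incorrectly, show corrective feedback immediately."""
    for q in task_questions:
        response = get_response(worker, q)
        worker.attempted += 1
        if response == q.answer:
            worker.correct += 1
        elif q.is_gold:
            # erroneous response to a question with a known answer:
            # deliver the training intervention right away
            show_feedback(worker, q.feedback)


def explicit_training(worker, training_questions, get_response, show_feedback,
                      pass_threshold=0.8):
    """Run a dedicated training phase before the actual task; return True if the
    worker may proceed. The 0.8 threshold is an illustrative choice, not a value
    taken from the paper."""
    score = 0
    for q in training_questions:
        response = get_response(worker, q)
        if response == q.answer:
            score += 1
        else:
            show_feedback(worker, q.feedback)
    return score / len(training_questions) >= pass_threshold


if __name__ == "__main__":
    # Tiny simulated run: a worker who always answers "A"
    qs = [Question("Q1?", "A", is_gold=True, feedback="The correct answer is A."),
          Question("Q2?", "B", is_gold=True, feedback="The correct answer is B.")]
    w = Worker("w1")
    passed = explicit_training(w, qs, lambda _w, _q: "A",
                               lambda _w, msg: print(msg))
    print("proceed to task:", passed)
```

In a real deployment, get_response and show_feedback would be bound to the crowdsourcing platform's task interface; here they are injected as callables so the sketch stays self-contained.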
Notes
- 1.
- 2.
- 3.
- 4.
A person who deploys a microtask in order to gather responses from the crowd. Also called a ‘requester’.
- 5.
- 6.
Acknowledgements
This work was carried out partially in the context of the DURAARK project, funded by the European Commission within the 7th Framework Programme (Grant Agreement No. 600908).
Copyright information
© 2015 Springer International Publishing Switzerland
About this paper
Cite this paper
Gadiraju, U., Fetahu, B., Kawase, R. (2015). Training Workers for Improving Performance in Crowdsourcing Microtasks. In: Conole, G., Klobučar, T., Rensing, C., Konert, J., Lavoué, E. (eds) Design for Teaching and Learning in a Networked World. EC-TEL 2015. Lecture Notes in Computer Science, vol 9307. Springer, Cham. https://doi.org/10.1007/978-3-319-24258-3_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-24257-6
Online ISBN: 978-3-319-24258-3