
Training Workers for Improving Performance in Crowdsourcing Microtasks

  • Conference paper

Design for Teaching and Learning in a Networked World (EC-TEL 2015)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 9307)


Abstract

With the advent and growing use of crowdsourcing labor markets for a variety of applications, optimizing the quality of the results produced is of prime importance. The quality of the results is typically a function of the performance of crowd workers. In this paper, we investigate the notion of treating crowd workers as ‘learners’ in a novel learning environment. This learning context is characterized by a short-lived learning phase and immediate application of learned concepts. We draw motivation from the desire of crowd workers to perform well in order to maintain a good reputation while successfully attaining monetary rewards. Thus, we delve into training workers in specific microtasks of different types. We exploit (i) implicit training, where workers are given training whenever they provide erroneous responses to questions with answers known a priori, and (ii) explicit training, where workers are required to go through a training phase before they attempt the task itself. We evaluated our approach on 4 different types of microtasks with a total of 1200 workers, who were subjected to either one of the proposed training strategies or a baseline of no training. The results show that workers who undergo training exhibit an improvement in performance of up to 5 %, and a reduction in task completion time of up to 41 %. Additionally, crowd training led to the elimination of malicious workers and a cost-benefit gain of nearly 15 %.
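The two strategies can be read as a simple gating protocol around gold questions (questions whose answers are known a priori). The sketch below is a minimal, hypothetical illustration and not the authors' implementation: the names GoldQuestion, Worker, ask, and the 0.8 pass threshold are all assumptions; the paper only specifies that implicit training delivers corrective feedback on erroneous gold answers during the task, while explicit training precedes the task.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class GoldQuestion:
    """A question with a known answer, plus feedback shown on a wrong response."""
    prompt: str
    answer: str
    feedback: str


@dataclass
class Worker:
    worker_id: str
    attempted: int = 0
    correct: int = 0


def implicit_training(worker: Worker, gold: List[GoldQuestion],
                      ask: Callable[[str], str]) -> None:
    """Interleave gold questions with the task; show corrective feedback
    only when the worker answers a gold question incorrectly."""
    for q in gold:
        response = ask(q.prompt)
        worker.attempted += 1
        if response == q.answer:
            worker.correct += 1
        else:
            # In-task training moment: the worker sees why the answer was wrong.
            print(f"[{worker.worker_id}] {q.feedback}")


def explicit_training(worker: Worker, gold: List[GoldQuestion],
                      ask: Callable[[str], str], pass_ratio: float = 0.8) -> bool:
    """Run a training phase before the task; the worker proceeds only if
    enough gold questions are answered correctly (pass_ratio is an assumption)."""
    correct = sum(ask(q.prompt) == q.answer for q in gold)
    return correct / len(gold) >= pass_ratio
```

In this reading, the baseline condition would simply skip both functions, which is what the no-training group in the evaluation corresponds to.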


Notes

  1. http://www.wired.co.uk/news/archive/2011-01/13/the-oxford-english-wiktionary.

  2. http://edition.cnn.com/2014/08/17/tech/nasa-earth-images-help-needed/.

  3. http://tinyurl.com/kl3mmme.

  4. A person who deploys a microtask in order to gather responses from the crowd. Also called a ‘requester’.

  5. http://www.crowdflower.com/.

  6. http://www.captcha.net/.



Acknowledgements

This work has been carried out partially in the context of the DURAARK project, funded by the European Commission within the 7th Framework Programme (Grant Agreement no: 600908).

Author information


Corresponding author

Correspondence to Ujwal Gadiraju.



Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Gadiraju, U., Fetahu, B., Kawase, R. (2015). Training Workers for Improving Performance in Crowdsourcing Microtasks. In: Conole, G., Klobučar, T., Rensing, C., Konert, J., Lavoué, E. (eds) Design for Teaching and Learning in a Networked World. EC-TEL 2015. Lecture Notes in Computer Science, vol 9307. Springer, Cham. https://doi.org/10.1007/978-3-319-24258-3_8


  • DOI: https://doi.org/10.1007/978-3-319-24258-3_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-24257-6

  • Online ISBN: 978-3-319-24258-3

  • eBook Packages: Computer Science (R0)
