
Effects of Increasing Working Opportunity on Result Quality in Labor-Intensive Crowdsourcing

  • Conference paper
  • In: Information for a Better World: Normality, Virtuality, Physicality, Inclusivity (iConference 2023)

Abstract

When selecting workers on microtask crowdsourcing platforms, requesters commonly identify qualified workers by examining their evaluation results on past tasks or by administering qualification tests. This practice sometimes overlooks workers who would be able to complete at least some of the tasks. Increasing working opportunities for such workers benefits not only the workers but also the requesters, who gain additional labor resources and can have tasks completed faster. In general, however, there is a trade-off between increasing working opportunity and obtaining high-quality task results: if requesters lower the skill threshold to admit more workers, the quality of the results may be undermined. In this paper, we address the problem of improving this trade-off in labor-intensive crowdsourcing by exploring different task assignment strategies. First, we apply Item Response Theory (IRT) to evaluate the skills of workers and the difficulty of tasks, and devise an assignment algorithm that minimizes the variance in the number of tasks assigned to each worker, so as to exploit the potential parallelism of crowdsourcing. Second, we address the problem that task difficulty is unknown in advance by exploring an approach that estimates it from machine learning (ML) outputs. We report experimental results that show the potential of this approach and discuss the conditions under which it is effective.
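To make the assignment idea concrete, below is a minimal Python sketch of one way such a strategy could look; it is an illustration only, not the authors' implementation. It assumes a one-parameter IRT model (see footnote 1), a hypothetical eligibility threshold p_min on the predicted success probability, and a greedy rule that always gives the next task to the eligible worker with the fewest assignments so far, which keeps the variance of per-worker task counts small. The function and variable names are invented for this sketch; in the setting where task difficulty is unknown in advance, the difficulties dictionary would be replaced by estimates derived from ML outputs.

import math
from collections import defaultdict

def p_correct(theta: float, b: float) -> float:
    """One-parameter (Rasch) ICC: probability that a worker with ability
    theta completes a task of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def assign_tasks(abilities: dict, difficulties: dict, p_min: float = 0.7) -> dict:
    """Greedily assign each task to an eligible worker while balancing workloads.

    A worker is eligible for a task if the predicted success probability is at
    least p_min; among eligible workers, the one with the fewest assignments so
    far is chosen, keeping the per-worker task counts as even as possible.
    """
    load = defaultdict(int)   # number of tasks assigned to each worker so far
    assignment = {}           # task id -> worker id
    # Hardest tasks first, so that scarce high-ability workers are not used up
    # on easy tasks that many workers could handle.
    for task, b in sorted(difficulties.items(), key=lambda kv: -kv[1]):
        eligible = [w for w, theta in abilities.items() if p_correct(theta, b) >= p_min]
        if not eligible:      # no worker clears the threshold for this task
            continue
        worker = min(eligible, key=lambda w: load[w])
        assignment[task] = worker
        load[worker] += 1
    return assignment

if __name__ == "__main__":
    # Toy, made-up parameters: three workers and four tasks.
    abilities = {"w1": 1.5, "w2": 0.2, "w3": -0.5}
    difficulties = {"t1": 0.5, "t2": 0.0, "t3": -0.8, "t4": -1.2}
    print(assign_tasks(abilities, difficulties))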


Notes

  1. In our paper, we use the one-parameter model of IRT. Therefore, each ICC is characterized by the worker ability \(\theta_{w_i}\) and the task difficulty \(b_j\) (see the formula sketch after these notes).

  2. The AI model can be the result of a multi-model ensemble.

  3. https://archive.ics.uci.edu/ml/datasets/News+Aggregator.
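
For reference, the one-parameter (Rasch) ICC referred to in footnote 1 is usually written in the following standard textbook form, where \(\theta_{w_i}\) is the ability of worker \(w_i\) and \(b_j\) the difficulty of task \(t_j\); the notation follows the footnote, and the formula is not reproduced from the paper itself:

\[ P_j(\theta_{w_i}) = \frac{1}{1 + \exp\bigl(-(\theta_{w_i} - b_j)\bigr)} \]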



Acknowledgment

This research was approved by the IRB of the University of Tsukuba. This work was partially supported by JSPS KAKENHI Grant Numbers 22H00508, 21H03552, and 22K17944.

Author information


Corresponding author

Correspondence to Kanta Negishi.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Negishi, K., Ito, H., Matsubara, M., Morishima, A. (2023). Effects of Increasing Working Opportunity on Result Quality in Labor-Intensive Crowdsourcing. In: Sserwanga, I., et al. Information for a Better World: Normality, Virtuality, Physicality, Inclusivity. iConference 2023. Lecture Notes in Computer Science, vol 13971. Springer, Cham. https://doi.org/10.1007/978-3-031-28035-1_19

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-28035-1_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-28034-4

  • Online ISBN: 978-3-031-28035-1

  • eBook Packages: Computer Science, Computer Science (R0)
