Vote Aggregation Techniques in the Geo-Wiki Crowdsourcing Game: A Case Study

  • Conference paper
  • In: Analysis of Images, Social Networks and Texts (AIST 2016)

Abstract

The Cropland Capture game (CCG) aims to map cultivated lands using around 170,000 satellite images. The contribution of this paper is threefold: (a) we improve the quality of the CCG’s dataset, (b) we benchmark state-of-the-art algorithms designed for vote aggregation in a crowdsourcing-like setting and compare the results with machine learning algorithms, and (c) we propose an explanation for the surprisingly similar accuracy of all examined algorithms. To accomplish (a), we detect image duplicates using the perceptual hash function pHash and filter out unidentifiable images with a blur detection algorithm. For (c), we suggest that when all workers are accurate and the task assignment in the dataset is highly irregular, state-of-the-art algorithms perform on par with Majority Voting. We increase the estimated consistency with expert opinions from 77% to 91%, and up to 96% if we restrict our attention to images with more than nine votes.
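
To make steps (a) and (c) concrete, the sketch below first groups duplicate images by their pHash values and then aggregates the remaining binary votes per image with Majority Voting. This is a minimal illustrative sketch, not the authors' pipeline: the Pillow and imagehash libraries, the zero Hamming-distance threshold, and the (image_id, label) vote layout are assumptions made here for clarity.

```python
from collections import Counter, defaultdict

from PIL import Image   # assumed dependency: Pillow
import imagehash        # assumed dependency: provides the pHash function


def deduplicate(image_paths, max_distance=0):
    """Map each duplicate image to a single representative, comparing pHash values."""
    representatives = {}   # pHash -> first path seen with that (near-)hash
    duplicates = {}        # duplicate path -> representative path
    for path in image_paths:
        h = imagehash.phash(Image.open(path))
        # ImageHash subtraction returns the Hamming distance between two hashes.
        match = next((k for k in representatives if h - k <= max_distance), None)
        if match is None:
            representatives[h] = path
        else:
            duplicates[path] = representatives[match]
    return duplicates


def majority_vote(votes):
    """Aggregate (image_id, label) pairs, label in {0, 1}, into one label per image."""
    by_image = defaultdict(list)
    for image_id, label in votes:
        by_image[image_id].append(label)
    # Majority Voting: the most frequent label wins; ties resolve arbitrarily.
    return {img: Counter(labels).most_common(1)[0][0]
            for img, labels in by_image.items()}


# Hypothetical usage:
# dupes = deduplicate(["img_001.png", "img_002.png"])
# labels = majority_vote([(1, 1), (1, 1), (1, 0), (2, 0)])   # -> {1: 1, 2: 0}
```

Raising max_distance merges near-duplicates at the risk of false collisions; the blur filter mentioned in the abstract is a separate step and is not reproduced here.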


Notes

  1. https://github.com/ashwin90/Penalty-based-clustering.



Acknowledgments

This research was supported by the Russian Science Foundation (grant no. 14-11-00109) and by the EU FP7-funded ERC CrowdLand project (grant no. 617754).

Author information


Correspondence to Artem Baklanov.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Baklanov, A. et al. (2017). Vote Aggregation Techniques in the Geo-Wiki Crowdsourcing Game: A Case Study. In: Ignatov, D., et al. Analysis of Images, Social Networks and Texts. AIST 2016. Communications in Computer and Information Science, vol 661. Springer, Cham. https://doi.org/10.1007/978-3-319-52920-2_4


  • DOI: https://doi.org/10.1007/978-3-319-52920-2_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-52919-6

  • Online ISBN: 978-3-319-52920-2

  • eBook Packages: Computer Science, Computer Science (R0)
