Effective Result Inference for Context-Sensitive Tasks in Crowdsourcing

  • Conference paper
Database Systems for Advanced Applications (DASFAA 2016)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 9642)

Abstract

Effective result inference is an important topic in crowdsourcing because workers may return incorrect results. Existing inference methods assign each task to multiple workers and aggregate their results to infer the final answer. However, these methods are ineffective for context-sensitive tasks (CSTs), e.g., handwriting recognition, for two reasons. First, each CST is hard, and a worker usually cannot correctly answer a whole CST, so a task-level inference strategy cannot achieve high-quality results. Second, a CST should not be divided into multiple subtasks, because the subtasks are correlated with each other under certain contexts; a subtask-level inference strategy therefore also fails to achieve high-quality results, as it neglects the correlation between subtasks. This calls for an effective result inference method for CSTs. To address this challenge, this paper proposes a smart assembly model (SAM), which assembles workers' complementary answers at the granularity of subtasks without losing context information. Furthermore, we devise an iterative decision model based on the partially observable Markov decision process (POMDP), which decides whether to ask more workers to obtain better results. Experimental results show that our method outperforms state-of-the-art approaches.

Notes

  1. http://www.zhubajie.com.

  2. http://www.cs.cmu.edu/~trey/zmdp/.


Acknowledgment

This work was supported in part by the China 973 Program (2015CB358700, 2014CB340304) and the National Natural Science Foundation of China (61370057). We thank Prof. Yongyi Mao from the University of Ottawa for his valuable suggestions.

Author information

Corresponding author: Hailong Sun.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Fang, Y., Sun, H., Li, G., Zhang, R., Huai, J. (2016). Effective Result Inference for Context-Sensitive Tasks in Crowdsourcing. In: Navathe, S., Wu, W., Shekhar, S., Du, X., Wang, X., Xiong, H. (eds) Database Systems for Advanced Applications. DASFAA 2016. Lecture Notes in Computer Science, vol 9642. Springer, Cham. https://doi.org/10.1007/978-3-319-32025-0_3

  • DOI: https://doi.org/10.1007/978-3-319-32025-0_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-32024-3

  • Online ISBN: 978-3-319-32025-0

  • eBook Packages: Computer Science, Computer Science (R0)
