
Comparing Human and Algorithm Performance on Estimating Word-Based Semantic Similarity

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 8852)

Abstract

Understanding natural language is an inherently complex task for computer algorithms, which makes crowdsourcing natural language tasks such as semantic similarity estimation a promising approach. In this paper, we investigate the performance of crowdworkers and compare them both to offline contributors and to state-of-the-art algorithms. We show that algorithms outperform single human contributors but still cannot compete with ratings aggregated from groups of contributors, and that this effect persists across different contributor populations. Finally, we give guidelines for easing the challenge of collecting word-based semantic similarity data from human contributors.
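The comparison described above can be sketched in code. The following is an illustrative sketch only, not the authors' implementation: it averages noisy per-word-pair similarity ratings from several contributors and measures rank agreement between the group average and an algorithm's scores via Spearman correlation, a standard evaluation for word-similarity benchmarks. All word pairs, ratings, and scores below are hypothetical.

```python
# Sketch (hypothetical data): compare group-averaged human similarity
# ratings against algorithm scores using Spearman rank correlation.
from statistics import mean

def spearman(xs, ys):
    """Spearman rank correlation with average ranks for ties."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        i = 0
        while i < len(order):
            j = i
            # group equal values so they share an average rank
            while j + 1 < len(order) and vs[order[j + 1]] == vs[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg_rank
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical ratings (scale 0-10) from four contributors per word pair.
ratings = {
    ("tiger", "cat"):     [7, 8, 6, 7],
    ("car", "flight"):    [4, 5, 3, 4],
    ("king", "cabbage"):  [0, 1, 0, 2],
}
# Hypothetical algorithm similarity scores for the same pairs.
algorithm_scores = {
    ("tiger", "cat"): 0.73,
    ("car", "flight"): 0.46,
    ("king", "cabbage"): 0.05,
}

pairs = list(ratings)
group = [mean(ratings[p]) for p in pairs]   # aggregate the crowd
algo = [algorithm_scores[p] for p in pairs]
print(round(spearman(group, algo), 2))      # prints 1.0 (same ranking)
```

Averaging before correlating is the key step: individual contributors are noisy, but the group mean recovers a stable ranking that an algorithm can be measured against.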




Author information

Correspondence to Nils Batram.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Batram, N., Krause, M., Dehaye, P.-O. (2015). Comparing Human and Algorithm Performance on Estimating Word-Based Semantic Similarity. In: Aiello, L., McFarland, D. (eds) Social Informatics. SocInfo 2014. Lecture Notes in Computer Science, vol 8852. Springer, Cham. https://doi.org/10.1007/978-3-319-15168-7_55

  • DOI: https://doi.org/10.1007/978-3-319-15168-7_55

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-15167-0

  • Online ISBN: 978-3-319-15168-7
