A Comparison of Domain Experts and Crowdsourcing Regarding Concept Relevance Evaluation in Ontology Learning

Conference paper in Multi-disciplinary Trends in Artificial Intelligence (MIWAI 2016).

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10053)

Abstract

Ontology learning helps to bootstrap and simplify the complex and expensive process of ontology construction by semi-automatically generating ontologies from data. As with other complex machine learning or NLP tasks, such systems always produce a certain ratio of errors, which makes manual refinement and pruning of the resulting ontologies necessary. Here, we compare the use of domain experts and paid crowdsourcing for verifying domain ontologies. We present extensive experiments with different settings and task descriptions in order to raise the rating quality in the task of relevance assessment of new concept candidates generated by the system. With proper task descriptions and settings, crowd workers can provide quality similar to that of human experts. In case of unclear task descriptions, crowd workers and domain experts often interpret the task at hand very differently; we analyze various types of discrepancy in interpretation.
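To make the evaluation setup concrete, the sketch below shows one minimal way such a comparison can be computed: redundant crowd judgments per concept candidate are aggregated by majority vote, and the aggregate is compared against expert labels using Cohen's kappa, a standard chance-corrected agreement measure. This is an illustration only, not the paper's actual pipeline; the concept names, label strings, and vote counts are hypothetical.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate redundant crowd judgments for one concept candidate.

    Ties are broken by whichever label was seen first (Counter order).
    """
    return Counter(labels).most_common(1)[0][0]

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two label sequences of equal length."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n)
        for c in set(rater_a) | set(rater_b)
    )
    if expected == 1.0:  # degenerate case: both raters use a single label
        return 1.0
    return (observed - expected) / (1.0 - expected)

# Hypothetical data: three crowd judgments per concept candidate,
# plus one expert judgment serving as the reference label.
crowd_votes = {
    "climate change": ["relevant", "relevant", "irrelevant"],
    "kyoto protocol": ["relevant", "relevant", "relevant"],
    "banana":         ["irrelevant", "irrelevant", "relevant"],
}
expert_labels = {
    "climate change": "relevant",
    "kyoto protocol": "relevant",
    "banana":         "irrelevant",
}

concepts = sorted(crowd_votes)
crowd = [majority_vote(crowd_votes[c]) for c in concepts]
expert = [expert_labels[c] for c in concepts]
print(f"kappa = {cohens_kappa(crowd, expert):.2f}")  # 1.00 for this toy data
```

Kappa values are conventionally read against the Landis and Koch bands, where scores above roughly 0.6 indicate substantial agreement between the two rater groups.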


Notes

  1. https://aic.ai.wu.ac.at/~wohlg/miwai2016.


Acknowledgments

The work presented in this paper is based on results from the project uComp, which received funding from EPSRC (EP/K017896/1), FWF (1097-N23), and ANR (ANR-12-CHRI-0003-03) in the framework of the CHIST-ERA ERA-NET.

Author information


Correspondence to Gerhard Wohlgenannt.



Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Wohlgenannt, G. (2016). A Comparison of Domain Experts and Crowdsourcing Regarding Concept Relevance Evaluation in Ontology Learning. In: Sombattheera, C., Stolzenburg, F., Lin, F., Nayak, A. (eds) Multi-disciplinary Trends in Artificial Intelligence. MIWAI 2016. Lecture Notes in Computer Science (LNAI), vol 10053. Springer, Cham. https://doi.org/10.1007/978-3-319-49397-8_21


  • DOI: https://doi.org/10.1007/978-3-319-49397-8_21


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-49396-1

  • Online ISBN: 978-3-319-49397-8

  • eBook Packages: Computer Science, Computer Science (R0)
