Abstract
Open Information Extraction (OIE) systems extract relational tuples from text without requiring the relations of interest to be specified in advance. Systems perform well on widely used metrics such as precision and yield, but a close look at system outputs shows a general lack of informativeness in facts deemed correct.
We propose a new evaluation protocol, based on question answering, that is closer to text understanding and end-user needs. Extracted information is judged on its capacity to automatically answer questions about the source text. As a showcase for our protocol, we devise a small corpus of question/answer pairs and evaluate available state-of-the-art OIE systems on it. Performance-wise, our results are in line with previous findings. Furthermore, we are able to estimate recall for the task, which is novel. We distribute our annotated data and automatic evaluation program.
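The protocol can be pictured as follows. This is a hedged, minimal sketch, not the authors' released evaluation program: the function names (`tuple_answers`, `qa_recall`) and the simple containment criterion are illustrative assumptions; the actual scoring involves the lenient word matching described in the notes below.

```python
# Hypothetical sketch of QA-based OIE evaluation: a system's extractions are
# scored by whether any extracted tuple contains the gold answer to each
# question asked about the source text.

def tuple_answers(extraction, answer):
    """An (arg1, rel, arg2) extraction answers a question when the gold
    answer string appears in one of its arguments (illustrative criterion)."""
    arg1, _rel, arg2 = extraction
    answer = answer.lower()
    return answer in arg1.lower() or answer in arg2.lower()

def qa_recall(extractions, qa_pairs):
    """Fraction of question/answer pairs answerable from the extractions."""
    if not qa_pairs:
        return 0.0
    answered = sum(
        any(tuple_answers(t, answer) for t in extractions)
        for _question, answer in qa_pairs
    )
    return answered / len(qa_pairs)
```

Under this view, an uninformative but technically true tuple contributes nothing: it never contains the answer to any question a reader would ask, which is exactly the failure mode the protocol is designed to expose.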
Notes
1. For automatic language modelling purposes, on the other hand, extracted facts are a great source of learning material, as demonstrated in [8].
2.
3.
4. This was added to the distributed software since publication; see https://github.com/knowitall/ollie.
5. Although they could refer to an American actor and a 1984 Atari video game.
6. The sentence is from https://en.wikipedia.org/wiki/Janet_Wu_(WCVB). Incidentally, https://en.wikipedia.org/wiki/Janet_Wu_(WHDH) is also an American television reporter who worked in the Boston area. We consider this an ironic coincidence, but stand by our arbitrary line.
7. Ruth Gabriel is a Spanish actress.
8. We examine the impact of this factor in Sect. 5.2.
9. Words that differ by at most one character are considered to match, as Mrs. and Mr. in Fig. 5.
10. See Table 2; the exact figure depends directly on the matching threshold.
11.
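The lenient word-matching rule from note 9 (two words match when they differ by at most one character, so that Mrs. matches Mr.) amounts to a Levenshtein distance of at most 1, which can be checked without a full dynamic-programming table. The sketch below is an assumed reconstruction of that rule, not the paper's actual matcher:

```python
# Hedged sketch of the note-9 matching rule: words differing by at most one
# character are considered to match. Equivalent to edit distance <= 1.

def within_one_edit(a, b):
    """True if a and b differ by at most one insertion, deletion,
    or substitution of a single character."""
    if abs(len(a) - len(b)) > 1:
        return False
    if a == b:
        return True
    # Find the first position where the two words disagree.
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    if len(a) == len(b):
        # Same length: the rest must agree after one substitution.
        return a[i + 1:] == b[i + 1:]
    # Different lengths: skip one character of the longer word.
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    return longer[i + 1:] == shorter[i:]
```

For example, `within_one_edit("Mrs.", "Mr.")` holds (one deletion), while unrelated words of the same length are rejected.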
References
Akbik, A., Löser, A.: Kraken: N-ary facts in open information extraction. In: Proceedings of Joint Workshop on Automatic Knowledge Base Construction and Web-Scale Knowledge Extraction, AKBC-WEKEX 2012, pp. 52–56. Association for Computational Linguistics, Stroudsburg (2012). http://dl.acm.org/citation.cfm?id=2391200.2391210
Bird, S., Klein, E., Loper, E.: Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O’Reilly, Beijing (2009). http://www.nltk.org/book
Del Corro, L., Gemulla, R.: ClausIE: clause-based open information extraction. In: Proceedings of 22nd International Conference on World Wide Web, WWW 2013, pp. 355–366. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva (2013). http://dl.acm.org/citation.cfm?id=2488388.2488420
Fader, A., Zettlemoyer, L., Etzioni, O.: Open question answering over curated and extracted knowledge bases. In: Proceedings of 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2014, pp. 1156–1165. ACM, New York (2014). http://doi.acm.org/10.1145/2623330.2623677
Schmitz, M., Bart, R., Soderland, S., Etzioni, O.: Open language learning for information extraction. In: Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL) (2012)
Mesquita, F., Schmidek, J., Barbosa, D.: Effectiveness and efficiency of open relation extraction. In: Proceedings of 2013 Conference on Empirical Methods in Natural Language Processing, pp. 447–457. Association for Computational Linguistics, October 2013
Soderland, S., Gilmer, J., Bart, R., Etzioni, O., Weld, D.S.: Open information extraction to KBP relations in 3 hours. In: Proceedings of 6th Text Analysis Conference, TAC 2013, 18–19 November 2013, Gaithersburg, Maryland, USA. NIST (2013). http://www.nist.gov/tac/publications/2013/participant.papers/UWashington.TAC2013.proceedings.pdf
Stanovsky, G., Dagan, I., Mausam: Open IE as an intermediate structure for semantic tasks. In: Proceedings of 53rd Annual Meeting of the Association for Computational Linguistics and 7th International Joint Conference on Natural Language Processing, Short Papers, vol. 2, pp. 303–308. Association for Computational Linguistics, Beijing, July 2015. http://www.aclweb.org/anthology/P15-2050
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this paper
Léchelle, W., Langlais, P. (2018). An Informativeness Approach to Open IE Evaluation. In: Gelbukh, A. (ed.) Computational Linguistics and Intelligent Text Processing. CICLing 2016. Lecture Notes in Computer Science, vol. 9624. Springer, Cham. https://doi.org/10.1007/978-3-319-75487-1_40