Abstract
While general conversational intelligence (GCI) can be considered one of the core aspects of artificial general intelligence (AGI), there currently exists minimal overlap between the disciplines of AGI and natural language processing (NLP). Only a few AGI architectures can comprehend and generate natural language. Most NLP systems rely either on hardcoded, specialized rules and frameworks that cannot generalize to the complex domains of human language, or on heavily trained deep neural network models that cannot be interpreted or controlled. In this paper, we propose an interpretable “Contextual Generator” architecture for question answering (QA), built as an extension of the recently published “Generator” algorithm for sentence generation, that produces grammatically valid answers to queries structured as lists of seed words. We demonstrate the potential of this architecture to perform automated, closed-domain QA by detailing results on queries from SingularityNET’s “small world” POC-English corpus and from the Stanford Question Answering Dataset. Overall, our work may bring a greater degree of GCI to proto-AGI NLP pipelines. The proposed QA architecture is open-source and available on GitHub under the MIT License at https://github.com/aigents/aigents-java-nlp.
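To make the abstract's idea of "queries structured as lists of seed words" concrete, the following is a minimal Python sketch, not the authors' Java implementation: it reduces a question to content seed words and picks the closed-domain corpus sentence with the greatest seed-word overlap. The stopword list, the toy corpus sentences, and the overlap scoring are illustrative assumptions; the actual Contextual Generator uses Link Grammar to guarantee grammatical validity rather than retrieving whole sentences.

```python
# Illustrative sketch of seed-word-driven, closed-domain QA.
# Hypothetical stopword list and corpus; not the authors' method.
STOPWORDS = {"a", "an", "the", "is", "are", "what", "who", "does", "do"}

def seed_words(question):
    """Reduce a natural-language question to a list of content seed words."""
    tokens = question.lower().replace("?", "").split()
    return [t for t in tokens if t not in STOPWORDS]

def answer(question, corpus):
    """Return the corpus sentence sharing the most seed words with the query."""
    seeds = set(seed_words(question))
    return max(corpus,
               key=lambda s: len(seeds & set(s.lower().rstrip(".").split())))

corpus = [
    "Mom is a parent.",
    "Dad is a parent.",
    "A parent likes a child.",
]
print(seed_words("What is mom?"))      # -> ['mom']
print(answer("What is mom?", corpus))  # -> 'Mom is a parent.'
```

A retrieval baseline like this cannot compose novel grammatical answers; the paper's contribution is precisely that the Generator assembles answers word-by-word under Link Grammar constraints instead.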
Copyright information
© 2022 Springer Nature Switzerland AG
About this paper
Cite this paper
Ramesh, V., Kolonin, A. (2022). Unsupervised Context-Driven Question Answering Based on Link Grammar. In: Goertzel, B., Iklé, M., Potapov, A. (eds.) Artificial General Intelligence. AGI 2021. Lecture Notes in Computer Science, vol. 13154. Springer, Cham. https://doi.org/10.1007/978-3-030-93758-4_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-93757-7
Online ISBN: 978-3-030-93758-4
eBook Packages: Computer Science (R0)