Enhancements on a Pipeline Approach for Abstract Meaning Representation Parsing

  • Conference paper
  • In: Engineering Applications of Neural Networks (EANN 2022)

Abstract

Abstract Meaning Representation (AMR) parsing attempts to extract a structured representation of a sentence’s meaning. This paper enhances an existing processing pipeline for AMR parsing, drawing on state-of-the-art solutions in dependency parsing. The existing Concept Identification module is improved through Pointer-Generator Networks, and the use of embeddings brings a further considerable gain. An alternative approach based on the Transformer architecture is also explored, although it requires large datasets to predict concepts accurately. For predicting the relations between concepts, the proposed pipeline combines the Heads Selection task and a now trainable Arcs Labelling task into a joint Relation Identification module, which improves the overall performance of edge prediction. These improvements yield a fully trainable model that can be improved further through end-to-end training.
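
The abstract describes merging heads selection with a now trainable arcs-labelling step into a single, jointly trained Relation Identification module. As a rough illustration only (the abstract does not fully specify the paper's architecture), the PyTorch sketch below shows one standard way to build such a joint module, using deep biaffine scoring in the style of Dozat and Manning; the class and parameter names (`JointRelationIdentification`, `arc_dim`, `label_dim`, `num_labels`) are hypothetical.

```python
import torch
import torch.nn as nn


class JointRelationIdentification(nn.Module):
    """Sketch of a joint heads-selection + arc-labelling module (hypothetical).

    Given contextual representations of the predicted concepts, a biaffine
    scorer produces, for every concept, scores over candidate heads (heads
    selection) and, for every (head, dependent) pair, scores over relation
    labels (arc labelling). Both parts are trainable jointly.
    """

    def __init__(self, hidden_dim=256, arc_dim=128, label_dim=128, num_labels=100):
        super().__init__()
        # Separate head/dependent projections, as in deep biaffine attention.
        self.arc_head = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.arc_dep = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.lab_head = nn.Sequential(nn.Linear(hidden_dim, label_dim), nn.ReLU())
        self.lab_dep = nn.Sequential(nn.Linear(hidden_dim, label_dim), nn.ReLU())
        # Biaffine tensors: one score per (dep, head) pair for arcs,
        # one score per (dep, head, relation) triple for labels.
        self.U_arc = nn.Parameter(torch.empty(arc_dim, arc_dim + 1))
        self.U_lab = nn.Parameter(torch.empty(num_labels, label_dim + 1, label_dim + 1))
        nn.init.xavier_uniform_(self.U_arc)
        nn.init.xavier_uniform_(self.U_lab)

    @staticmethod
    def _with_bias(x):
        # Append a constant 1 feature so the biaffine form includes a bias term.
        return torch.cat([x, x.new_ones(*x.shape[:-1], 1)], dim=-1)

    def forward(self, concepts):
        # concepts: (batch, n, hidden_dim) contextual concept representations.
        d_arc = self.arc_dep(concepts)                    # (b, n, arc_dim)
        h_arc = self._with_bias(self.arc_head(concepts))  # (b, n, arc_dim + 1)
        # arc_scores[b, i, j]: score of concept j being the head of concept i.
        arc_scores = torch.einsum('bid,dh,bjh->bij', d_arc, self.U_arc, h_arc)

        d_lab = self._with_bias(self.lab_dep(concepts))   # (b, n, label_dim + 1)
        h_lab = self._with_bias(self.lab_head(concepts))  # (b, n, label_dim + 1)
        # label_scores[b, i, j, r]: score of relation r on the edge j -> i.
        label_scores = torch.einsum('bid,rdh,bjh->bijr', d_lab, self.U_lab, h_lab)
        return arc_scores, label_scores


# Toy usage: batch of 2 sentences, 7 predicted concepts each.
model = JointRelationIdentification()
concepts = torch.randn(2, 7, 256)
arc_scores, label_scores = model(concepts)
pred_heads = arc_scores.argmax(dim=-1)  # (2, 7): a head index per concept
```

Training such a module jointly would typically sum a cross-entropy loss over each concept's head distribution with a cross-entropy loss over the labels of the gold arcs; whether the paper uses exactly this loss cannot be inferred from the abstract alone.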

Author information

Corresponding author

Correspondence to Alexandru Frasie.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Frasie, A., Vezan, N.T., Marian, G., Macicasan, F., Lemnaru, C. (2022). Enhancements on a Pipeline Approach for Abstract Meaning Representation Parsing. In: Iliadis, L., Jayne, C., Tefas, A., Pimenidis, E. (eds.) Engineering Applications of Neural Networks. EANN 2022. Communications in Computer and Information Science, vol. 1600. Springer, Cham. https://doi.org/10.1007/978-3-031-08223-8_41

  • DOI: https://doi.org/10.1007/978-3-031-08223-8_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08222-1

  • Online ISBN: 978-3-031-08223-8

  • eBook Packages: Computer Science, Computer Science (R0)
