
Improving AMR parsing by exploiting the dependency parsing as an auxiliary task

Published in: Multimedia Tools and Applications

Abstract

Abstract meaning representations (AMRs) represent sentence semantics as rooted, labeled, directed acyclic graphs. Although there is a strong correlation between the AMR graph of a sentence and its dependency tree, recent neural AMR parsers neglect to exploit this dependency structure information. In this paper, we explore a novel approach to exploiting dependency structures for AMR parsing. Unlike traditional pipeline models, we treat dependency parsing as an auxiliary task for AMR parsing under a multi-task learning framework, sharing neural network parameters and selectively extracting syntactic representations through an attention mechanism. In particular, to balance the gradients between the two tasks and keep the focus on AMR parsing, we present a new dynamic weighting scheme for the loss function. Experimental results on the LDC2015E86 and LDC2017T10 datasets show that our dependency-auxiliary AMR parser significantly outperforms both the baseline and its pipeline counterpart, demonstrating that neural AMR parsers can be greatly improved by effective methods of integrating syntax.
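The abstract names three concrete mechanisms: a shared encoder for the two tasks, an attention mechanism through which the AMR task selectively extracts syntactic representations, and a dynamically weighted joint loss. The PyTorch sketch below illustrates how such a setup could be wired together. It is not the authors' implementation: the module choices, dimensions, stub prediction heads, and the particular weighting schedule are all illustrative assumptions.

# Minimal PyTorch sketch of the multi-task setup described in the abstract.
# NOT the authors' implementation: module names, dimensions, and the
# dynamic-weighting formula are illustrative assumptions.
import torch
import torch.nn as nn

class DependencyAuxiliaryAMRParser(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared BiLSTM encoder: its parameters are updated by both tasks.
        self.shared_encoder = nn.LSTM(emb_dim, hidden_dim,
                                      batch_first=True, bidirectional=True)
        # Task-specific encoder for the auxiliary dependency parser.
        self.dep_encoder = nn.LSTM(2 * hidden_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
        # Attention that lets the AMR task selectively extract
        # syntactic representations from the dependency encoder states.
        self.syntax_attn = nn.MultiheadAttention(2 * hidden_dim, num_heads=4,
                                                 batch_first=True)
        # Stub prediction heads standing in for the real AMR graph
        # predictor and biaffine dependency scorer.
        self.amr_head = nn.Linear(4 * hidden_dim, hidden_dim)
        self.dep_head = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, tokens):
        shared, _ = self.shared_encoder(self.embed(tokens))
        dep_repr, _ = self.dep_encoder(shared)
        # Shared states attend over syntactic states (queries = shared).
        syntax, _ = self.syntax_attn(shared, dep_repr, dep_repr)
        amr_features = torch.cat([shared, syntax], dim=-1)
        return self.amr_head(amr_features), self.dep_head(dep_repr)

def joint_loss(amr_loss, dep_loss, step, total_steps):
    # One plausible dynamic weighting (an assumed schedule, not the
    # paper's formula): decay the auxiliary weight over training so the
    # gradients are balanced early but AMR parsing dominates later.
    aux_weight = max(0.1, 1.0 - step / total_steps)
    return amr_loss + aux_weight * dep_loss

Under this kind of schedule, the auxiliary dependency loss mainly shapes the shared encoder early in training; decaying its weight is one way to stop dependency gradients from dominating once the AMR task has a useful syntactic signal.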


Notes

  1. https://stanfordnlp.github.io/CoreNLP/

  2. Since BERT is not used in our parser, we quote the score of the model without BERT embeddings from [26] for direct comparison in Table 6.

References

  1. Ballesteros M, Al-Onaizan Y (2017) AMR parsing using stack-LSTMs. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp 1269–1275

  2. Banarescu L, Bonial C, Cai S, Georgescu M, Griffitt K, Hermjakob U, Knight K, Koehn P, Palmer M, Schneider N (2013) Abstract meaning representation for sembanking. In: Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pp 178–186

  3. Cheng X, Roth D (2013) Relational inference for wikification. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp 1787–1796

  4. Damonte M, Cohen SB, Satta G (2017) An incremental parser for abstract meaning representation. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pp 536–546

  5. Dozat T, Manning CD (2017) Deep biaffine attention for neural dependency parsing. In: Proceedings of the 5th International Conference on Learning Representations

  6. Dozat T, Qi P, Manning CD (2017) Stanford’s graph-based neural dependency parser at the CoNLL 2017 shared task. In: Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pp 20–30

  7. Flanigan J, Thomson S, Carbonell J, Dyer C, Smith NA (2014) A discriminative graph-based parser for the abstract meaning representation. In: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, vol 1, pp 1426–1436

  8. Foland W, Martin JH (2017) Abstract meaning representation parsing using LSTM recurrent neural networks. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp 463–472

  9. Goodman J, Vlachos A, Naradowsky J (2016) Noise reduction and targeted exploration in imitation learning for abstract meaning representation parsing. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, vol 1, pp 1–11

  10. Hershcovich D, Abend O, Rappoport A (2018) Multitask parsing across semantic representations. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pp 373–385

  11. Konstas I, Iyer S, Yatskar M, Choi Y, Zettlemoyer L (2017) Neural AMR: sequence-to-sequence models for parsing and generation. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp 146–157

  12. Kuncoro A, Ballesteros M, Kong L, Dyer C, Neubig G, Smith NA (2017) What do recurrent neural network grammars learn about syntax? In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pp 1249–1258

  13. Lyu C, Titov I (2018) AMR parsing as graph prediction with latent alignment. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pp 397–407

  14. Peters M, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer L (2018) Deep contextualized word representations. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, pp 2227–2237

  15. Puzikov Y, Kawahara D, Kurohashi S (2016) M2L at SemEval-2016 task 8: AMR parsing with neural networks. In: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pp 1154–1159

  16. Qiu J, Liu Y, Chai Y, Si Y, Su S, Wang L, Wu Y (2019) Dependency-based local attention approach to neural machine translation. Comput Mater Cont 59(2):547–562


  17. Strubell E, Verga P, Andor D, Weiss D, McCallum A (2018) Linguistically-informed self-attention for semantic role labeling. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp 5027–5038

  18. van Noord R, Bos J (2017) Neural semantic parsing by character-based translation: Experiments with abstract meaning representations. Comput Linguist Netherlands J 7:93–108


  19. Wang C, Xue N, Pradhan S (2015a) A transition-based algorithm for AMR parsing. In: Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics, pp 366–375

  20. Wang C, Xue N, Pradhan S (2015b) Boosting transition-based AMR parsing with refined actions and auxiliary analyzers. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pp 857–862

  21. Wang C, Xue N (2017) Getting the most out of AMR parsing. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp 1257–1268

  22. Wang S, Zhang L, Zhang Y, Sun J, Pang C, Tian G, Cao N (2018) Natural language semantic construction based on cloud database. Comput Mater Cont 57(3):603–619


  23. Werling K, Angeli G, Manning CD (2015) Robust subgraph generation improves abstract meaning representation parsing. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, vol 1, pp 982–991

  24. Xu K, Wu L, Wang Z, Yu M, Chen L, Sheinin V (2018) Exploiting rich syntactic information for semantic parsing with graph-to-sequence model. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp 918–924

  25. Yang K, Wang Y, Zhang W, Yao J, Le Y (2019) Keyphrase generation based on self-attention mechanism. Comput Mater Cont 61(2):569–581


  26. Zhang S, Ma X, Duh K, Van Durme B (2019) AMR parsing as sequence-to-graph transduction. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

  27. Zhou J, Xu F, Uszkoreit H, Qu W, Li R, Gu Y (2016) AMR parsing with an incremental joint model. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp 680–689


Acknowledgements

This research is supported by the National Natural Science Foundation of China (projects 61772278 and 61472191), the National Social Science Foundation of China (project 18BYY127), and the project for Jiangsu Higher Institutions’ Excellent Innovative Team for Philosophy and Social Sciences (2017STD006).

Author information


Correspondence to Junsheng Zhou or Weiguang Qu.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wu, T., Zhou, J., Qu, W. et al. Improving AMR parsing by exploiting the dependency parsing as an auxiliary task. Multimed Tools Appl 80, 30827–30838 (2021). https://doi.org/10.1007/s11042-020-09967-3
