
Exploration and comparison of diverse approaches for integrating syntactic knowledge into AMR parsing

Applied Intelligence

Abstract

Abstract Meaning Representation (AMR) parsing aims to translate sentences into semantic AMR graphs and has recently been empowered by pre-trained Transformer models (e.g., BART). We argue that explicitly encoding syntactic knowledge benefits AMR parsing, since the AMR graph of a sentence shares substructures with its syntactic dependency tree. However, the effect of integrating syntactic dependency knowledge into pre-trained Transformer-based AMR parsers, as well as how best to infuse it, remains unclear. We therefore conduct a systematic study of incorporating dependency trees into pre-trained Transformers for AMR parsing. Specifically, we propose and compare two distinct syntax-infused AMR parsers for injecting dependency structures: the Syntax-Aware Self-Attention (SASA) network, which extends the self-attention mechanism of the Transformer encoder with syntax-aware constraints, and the Syntax-Aware Graph Attention (SAGAT) network, which augments the pre-trained Transformer by encoding the syntax with a graph attention network. Extensive experiments on several benchmarks demonstrate that both syntax-infused parsers achieve substantial and interpretable improvements. In particular, SASA significantly outperforms the strong baseline by up to 1.2 and 0.9 Smatch points on the AMR 2.0 and AMR 3.0 datasets, respectively. Furthermore, the proposed models alleviate the performance degradation caused by long-distance dependencies (LDDs), particularly in long and complex sentences. They also achieve new state-of-the-art results on out-of-distribution and low-resource benchmarks. The source code is available at https://github.com/Hi-ANONYMOUS/Syntax-Aware_AMR.
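To make the SASA idea concrete, the sketch below illustrates one common way to impose syntax-aware constraints on self-attention: attention scores are masked so that each token attends only to itself and its immediate neighbours in the dependency tree. This is a minimal sketch under assumed conventions, not the authors' implementation; the paper's actual constraint scheme (and the SAGAT graph-attention variant) may use softer biases or wider neighbourhoods, and dependency_mask and syntax_aware_attention are hypothetical names.

    # Minimal sketch of syntax-aware self-attention (assumed hard
    # parent/child masking; the paper's SASA constraints may differ).
    import torch

    def dependency_mask(heads):
        """Boolean (n, n) mask: True where two tokens are linked in the
        dependency tree (parent/child) or on the diagonal (self-links)."""
        n = len(heads)
        mask = torch.eye(n, dtype=torch.bool)
        for child, head in enumerate(heads):
            if head >= 0:                  # -1 marks the root token
                mask[child, head] = True
                mask[head, child] = True
        return mask

    def syntax_aware_attention(q, k, v, dep_mask):
        """Scaled dot-product attention restricted to dependency neighbours.
        q, k, v: (heads, n, d_head); dep_mask: (n, n) boolean."""
        d = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d ** 0.5     # (heads, n, n)
        scores = scores.masked_fill(~dep_mask, float("-inf"))
        return scores.softmax(dim=-1) @ v               # (heads, n, d_head)

    # Toy example: "dogs chase cats", with "chase" as the root.
    heads = [1, -1, 1]                   # dogs -> chase, cats -> chase
    mask = dependency_mask(heads)
    q = k = v = torch.randn(4, 3, 16)    # 4 heads, 3 tokens, d_head = 16
    out = syntax_aware_attention(q, k, v, mask)
    print(out.shape)                     # torch.Size([4, 3, 16])

Since the diagonal is always unmasked, every softmax row has at least one finite score. In practice such a hard mask would typically be applied in only a subset of heads or layers, so the model keeps unconstrained attention paths alongside the syntax-guided ones.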


Data Availability

The data that support the findings of this study are available from the Linguistic Data Consortium (LDC), but restrictions apply to their availability: they were used under licence for the current study and so are not publicly available. The data are available at https://catalog.ldc.upenn.edu/LDC2017T10 and https://catalog.ldc.upenn.edu/LDC2020T02, with the permission of the LDC.

Notes

  1. https://stanfordnlp.github.io/stanza/depparse.html

  2. https://catalog.ldc.upenn.edu/LDC2017T10

  3. https://catalog.ldc.upenn.edu/LDC2020T02

  4. https://amr.isi.edu/download/2016-03-14/amr-release-test-bio.txt

  5. https://amr.isi.edu/download/amr-bank-struct-v3.0.txt


Acknowledgements

This research was sponsored by the Foundation of the Science and Technology Research Project (Grant No. GJJ2200868) of the Jiangxi Education Department.

Author information


Contributions

Yikemaiti Sataer: Conceptualization, Methodology, Software, Writing-Original Draft. Zhiqiang Gao: Validation, Supervision, Writing-Review. Yunlong Fan: Resources, Investigation. Bin Li: Formal analysis, Validation. Miao Gao: Writing-Editing, Visualization. Chuanqi Shi: Resources, Software.

Corresponding author

Correspondence to Zhiqiang Gao.

Ethics declarations

Conflicts of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sataer, Y., Gao, Z., Fan, Y. et al. Exploration and comparison of diverse approaches for integrating syntactic knowledge into AMR parsing. Appl Intell 53, 30757–30777 (2023). https://doi.org/10.1007/s10489-023-05120-3

