Abstract
Supervised spelling error correction models have achieved outstanding performance on resource-rich languages. However, these models are difficult to apply directly to Vietnamese spelling correction due to corpus scarcity. To address this issue, we first construct a basic high-quality Vietnamese Spelling Correction (ViSC) corpus via automatic speech recognition (ASR) generation and human annotation. We then propose a part-of-speech and confusion-set double-constrained method that mimics the practical error distribution and uses these constraints as external knowledge to guide large language models (LLMs) in constructing diverse pseudo data. Finally, we use the pseudo corpora to pre-train spelling error correction models and the ViSC corpus to fine-tune them. Experiments on the benchmark dataset show that our corpus construction method consistently outperforms various baselines, yielding state-of-the-art results across all Vietnamese-specific pre-trained language model-enhanced spelling correction models. Detailed analysis demonstrates that part-of-speech tags and confusion sets are complementary and play a significant role in controlling stable and diverse corpus generation. In-depth comparison experiments reveal that proper use of the pseudo corpus is essential for improving Vietnamese spelling error correction. We release our code and the constructed corpus at https://github.com/DarkFanta3y/VSEC_corpus to facilitate future research.
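To make the double-constraint idea concrete, the following is a minimal Python sketch, not the authors' implementation. It assumes a small hand-written confusion set keyed by (word, POS tag) pairs and injects errors by direct substitution, whereas the paper uses the constraints as external knowledge in LLM prompts; all names (CONFUSION_SET, inject_errors) and the example entries are hypothetical.

    import random

    # Hypothetical confusion set keyed by (word, POS tag); the paper derives its
    # confusion sets from ASR errors and human annotation, which is not shown here.
    CONFUSION_SET = {
        ("dành", "V"): ["giành", "rành"],
        ("xuất", "V"): ["suất", "xuât"],
    }

    def inject_errors(tokens, pos_tags, error_rate=0.15, seed=None):
        """Corrupt a clean, POS-tagged Vietnamese sentence into a pseudo source.

        A token is replaced only when its (surface form, POS tag) pair appears
        in the confusion set, so every substitution respects both constraints.
        """
        rng = random.Random(seed)
        noisy = list(tokens)
        for i, (tok, pos) in enumerate(zip(tokens, pos_tags)):
            candidates = CONFUSION_SET.get((tok, pos))
            if candidates and rng.random() < error_rate:
                noisy[i] = rng.choice(candidates)
        return noisy

    # One (noisy, clean) pseudo pair for pre-training a correction model.
    clean = ["tôi", "dành", "thời_gian", "để", "học"]
    tags = ["P", "V", "N", "E", "V"]
    print(inject_errors(clean, tags, error_rate=1.0, seed=0), "->", clean)

In the paper's pipeline, such pairs would instead come from prompting an LLM with the part-of-speech and confusion-set constraints; the sketch only illustrates how the two constraints jointly restrict which tokens may be corrupted.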
Acknowledgments
This work was supported by the National Natural Science Foundation of China (U21B2027, 62366027, 62266028, 62306129), Yunnan Provincial Major Science and Technology Special Plan Projects (202103AA080015, 202202AD080003, 202203AA080004), Yunnan Fundamental Research Projects (202401BC070021, 202301AS070047, 202401CF070121), Kunming University of Science and Technology’s “Double First-rate” Construction Joint Project (202201BE070001-021, 202301BE070001-027), Yunnan High and New Technology Industry Project (201606).
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Li, Y., Chen, X., Liu, X., Dong, L., Yu, Z., Mao, C. (2025). Part-of-Speech and Confusion-Set Constrained Language Model for Vietnamese Spelling Correction Corpus Construction. In: Wong, D.F., Wei, Z., Yang, M. (eds) Natural Language Processing and Chinese Computing. NLPCC 2024. Lecture Notes in Computer Science, vol. 15362. Springer, Singapore. https://doi.org/10.1007/978-981-97-9440-9_15
DOI: https://doi.org/10.1007/978-981-97-9440-9_15
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-9439-3
Online ISBN: 978-981-97-9440-9