Abstract
We discuss the importance of domain-specific language models in statistical machine translation systems. Both sentence structure and phrase choice differ across domains, so a language model trained on general-domain data, or on data from another domain, cannot provide good accuracy. Moreover, different texts within the same domain may each have their own specific focus; hence, a language model trained on data from a default domain may not yield a significant improvement. In this paper, we adapt our system dynamically based on how well the available data matches the input text. Instead of directly selecting a pre-trained language model, we prepare a prioritized language model according to the situation. The proposed model is evaluated on Hindi-English translation, where it shows a significant improvement in the translated output in terms of BLEU score. Our evaluation shows that automatic domain adaptation to predict a better language model improves translation quality.
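The abstract does not spell out how "better matches with the input text" are computed. As a minimal sketch only, the Python below assumes tf-idf cosine similarity between the input sentence and each domain corpus drives the prioritization; the domain names, toy corpora, and helper functions are hypothetical illustrations, not the paper's actual implementation.

```python
import math
from collections import Counter

# Hypothetical domain corpora; in practice these would be, e.g.,
# health, tourism, and news portions of the training data.
DOMAIN_CORPORA = {
    "health":  "the patient was given a dose of the prescribed medicine",
    "tourism": "the temple near the lake attracts many visitors every year",
    "news":    "the minister announced the new policy in parliament today",
}

def tf_idf_vector(text, df, n_docs):
    """Build a sparse tf-idf vector (term -> weight) for one text."""
    tf = Counter(text.split())
    return {t: f * math.log((1 + n_docs) / (1 + df[t]))
            for t, f in tf.items()}

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def prioritize_domains(input_text, corpora):
    """Rank domain LMs by tf-idf similarity to the input text and
    normalize the scores into interpolation weights."""
    # Document frequency: number of domain corpora containing each term.
    df = Counter(t for doc in corpora.values() for t in set(doc.split()))
    n = len(corpora)
    vectors = {d: tf_idf_vector(doc, df, n) for d, doc in corpora.items()}
    query = tf_idf_vector(input_text, df, n)
    scores = {d: cosine(query, v) for d, v in vectors.items()}
    total = sum(scores.values()) or 1.0
    return {d: s / total for d, s in scores.items()}

weights = prioritize_domains("the doctor prescribed a medicine", DOMAIN_CORPORA)
print(weights)  # the health LM should receive the largest weight
```

The resulting weights could then serve as interpolation coefficients over the per-domain n-gram models, so that the language model of the best-matching domain dominates the translation-time score.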
Ethics declarations
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Copyright information
© 2023 Springer Nature Switzerland AG
About this paper
Cite this paper
Banik, D., Ekbal, A., Bhattacharyya, P. (2023). Two-Phased Dynamic Language Model: Improved LM for Automated Language Translation. In: Gelbukh, A. (ed.) Computational Linguistics and Intelligent Text Processing. CICLing 2019. Lecture Notes in Computer Science, vol. 13451. Springer, Cham. https://doi.org/10.1007/978-3-031-24337-0_19
DOI: https://doi.org/10.1007/978-3-031-24337-0_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-24336-3
Online ISBN: 978-3-031-24337-0
eBook Packages: Computer Science, Computer Science (R0)