Abstract
Pre-trained language models (PLMs) for Tagalog fall into two categories: monolingual models and multilingual models. However, existing monolingual models are trained only on a small-scale Wikipedia corpus, and multilingual models fail to capture the Tagalog-specific knowledge needed for various downstream tasks. We therefore pre-train three models on a much larger corpus: BERT-uncased-base, ELECTRA-uncased-base, and RoBERTa-base. At the pre-training stage, we construct a large-scale news text corpus for pre-training in addition to the existing open-source corpora. Experimental results show that our pre-trained models achieve consistently competitive results on various Tagalog-specific natural language processing (NLP) tasks, including part-of-speech (POS) tagging, hate speech classification, dengue classification, and natural language inference (NLI). Among these, the POS tagging dataset is self-constructed, aiming to alleviate the shortage of labeled resources for Tagalog. We will release all pre-trained models and datasets to the community, hoping to facilitate the future development of Tagalog NLP applications.
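The paper itself includes no code, but the usage pattern the abstract describes, namely loading one of the released Tagalog encoders and fine-tuning it on a downstream task, can be sketched with the Hugging Face transformers library. The following is a minimal sketch, not the authors' confirmed setup; the checkpoint name "tagalog-bert-base-uncased" is a placeholder to be replaced with the identifier of the released model, and the binary label setup mirrors a task like hate speech classification.

# Minimal sketch (not from the paper): load a pre-trained Tagalog encoder
# and attach a classification head for a downstream task such as hate
# speech classification. The checkpoint name below is a hypothetical
# placeholder, not the authors' confirmed identifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "tagalog-bert-base-uncased"  # placeholder checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # e.g. hate speech vs. non-hate speech
)

# Tokenize a Tagalog sentence and run a single forward pass.
inputs = tokenizer("Magandang umaga sa inyong lahat.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index

Fine-tuning on the labeled task data would then proceed with a standard cross-entropy objective, for instance via the transformers Trainer API.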
References
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT 2019, pp. 4171–4186 (2019)
Cui, Y., Che, W., Liu, T., Qin, B., Yang, Z., Wang, S., Hu, G.: Pre-training with whole word masking for Chinese BERT. CoRR (2019)
de Vries, W., van Cranenburgh, A., Bisazza, A., Caselli, T., van Noord, G., Nissim, M.: BERTje: a Dutch BERT model. CoRR (2019)
Vu, X.S., Vu, T., Tran, S.N., Jiang, L.: ETNLP: a visual-aided systematic approach to select pre-trained embeddings for a downstream task. In: Proceedings of the International Conference on Recent Advances in Natural Language Processing, pp. 1285–1294 (2019)
Martin, L., et al.: CamemBERT: a tasty French language model. In: Annual Meeting of the Association for Computational Linguistics, pp. 7203–7219 (2020)
Lample, G., Conneau, A.: Cross-lingual language model pretraining. CoRR (2019)
Conneau, A., et al.: Unsupervised cross-lingual representation learning at scale. In: Annual Meeting of the Association for Computational Linguistics, pp. 8440–8451 (2020)
Cruz, J.C.B., Cheng, C.: Evaluating language model finetuning techniques for low-resource languages. CoRR (2019)
Cruz, J.C.B., Cheng, C.: Establishing baselines for text classification in low-resource languages. CoRR (2020)
Cruz, J.C.B., Resabal, J.K., Lin, J., Velasco, D.J., Cheng, C.: Investigating the true performance of transformers in low-resource languages: a case study in automatic corpus creation. CoRR (2020)
Xue, L., et al.: mT5: a massively multilingual pre-trained text-to-text transformer. CoRR (2021)
Liu, Y., et al.: RoBERTa: a robustly optimized BERT pretraining approach. CoRR (2019)
Cheng, C., Rabo, S.: TPOST: a template-based, n-gram part-of-speech tagger for Tagalog. J. Res. Sci. Comput. Eng. 3(1) (2004)
Reyes, C.D.E., Suba, K.R.S., Razon, A.R., Naval Jr., P.C.: SVPOST: a part-of-speech tagger for Tagalog using support vector machines. In: Proceedings of the 11th Philippine Computing Science Congress (2011)
Olivo, J.F.T., Hari, P.J.T., dela Fuente, M.B.: CRFPOST: part-of-speech tagger for Filipino texts using conditional random fields. In: Proceedings of the 2nd International Conference on Algorithms, Computing and Artificial Intelligence, pp. 444–449 (2019)
Clark, K., Luong, M.T., Le, Q.V., Manning, C.D.: ELECTRA: pre-training text encoders as discriminators rather than generators. In: Proceedings of the International Conference on Learning Representations (2020)
Vaswani, A., et al.: Attention is all you need. CoRR (2017)
Suárez, P.O., Romary, L., Sagot, B.: A monolingual approach to contextualized word embeddings for mid-resource languages. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020)
Grave, E., Bojanowski, P., Gupta, P., Joulin, A., Mikolov, T.: Learning word vectors for 157 languages. In: Proceedings of the 11th Language Resources and Evaluation Conference, European Language Resources Association (2018)
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., et al.: Google's neural machine translation system: bridging the gap between human and machine translation. CoRR (2016)
Sennrich, R., Haddow, B., Birch, A.: Neural machine translation of rare words with subword units. CoRR (2015)
Acknowledgement
This work was supported by the National Natural Science Foundation of China (No. 61572145), the Major Projects of Guangdong Education Department for Foundation Research and Applied Research (No. 2017KZDXM031) and National Social Science Foundation of China (No. 17CTQ045). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Jiang, S., Fu, Y., Lin, X., Lin, N. (2021). Pre-trained Language Models for Tagalog with Multi-source Data. In: Wang, L., Feng, Y., Hong, Y., He, R. (eds) Natural Language Processing and Chinese Computing. NLPCC 2021. Lecture Notes in Computer Science, vol. 13028. Springer, Cham. https://doi.org/10.1007/978-3-030-88480-2_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-88479-6
Online ISBN: 978-3-030-88480-2