Abstract
Despite their different applications, transformer-based language models such as BERT and GPT learn about language by predicting missing parts of text: BERT is pretrained with Masked Language Modelling, while GPT is trained to generate text that continues a given sequence. We explore such models for answering cloze questions in Portuguese, following different approaches. When answer options are not considered, the largest BERT model trained exclusively for Portuguese is the most accurate. But when the goal is to select the best option, top performance is achieved by computing the most probable resulting sentence, where GPT-2 fine-tuned for Portuguese beats BERT.
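To make the two approaches concrete, below is a minimal sketch in Python using the Hugging Face transformers library. The checkpoints (neuralmind/bert-base-portuguese-cased, i.e. BERTimbau, and pierreguillou/gpt2-small-portuguese) and the example question are illustrative assumptions, not necessarily the exact models and data evaluated in the paper.

```python
# Sketch of the two cloze-answering strategies from the abstract.
# Model names are assumed Portuguese checkpoints, not the paper's exact setup.
import torch
from transformers import (AutoModelForCausalLM, AutoModelForMaskedLM,
                          AutoTokenizer)

# --- Approach 1: predict the blank directly with a masked language model ---
bert_name = "neuralmind/bert-base-portuguese-cased"  # assumed checkpoint
bert_tok = AutoTokenizer.from_pretrained(bert_name)
bert = AutoModelForMaskedLM.from_pretrained(bert_name)

cloze = f"Lisboa é a {bert_tok.mask_token} de Portugal."  # illustrative question
inputs = bert_tok(cloze, return_tensors="pt")
mask_pos = (inputs.input_ids == bert_tok.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = bert(**inputs).logits
top_ids = logits[0, mask_pos].topk(5).indices
print(bert_tok.convert_ids_to_tokens(top_ids.tolist()))  # candidate fillers

# --- Approach 2: pick the option yielding the most probable sentence ---
gpt_name = "pierreguillou/gpt2-small-portuguese"  # assumed checkpoint
gpt_tok = AutoTokenizer.from_pretrained(gpt_name)
gpt = AutoModelForCausalLM.from_pretrained(gpt_name)

def avg_nll(text: str) -> float:
    """Average negative log-likelihood of `text` under the causal LM."""
    ids = gpt_tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = gpt(ids, labels=ids)  # loss = mean cross-entropy over tokens
    return out.loss.item()  # lower loss ~ more probable sentence

options = ["capital", "praia", "montanha"]  # illustrative distractor set
best = min(options, key=lambda o: avg_nll(f"Lisboa é a {o} de Portugal."))
print(best)
```

Averaging the negative log-likelihood normalises for option length; summing the token log-probabilities instead would favour options that tokenize into fewer pieces.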
Notes
- 5. In any case, we empirically checked that, in order to make a noticeable difference, this number would have to be at least one order of magnitude higher.
Acknowledgement
This work was partially funded by the project SmartEDU (CENTRO-01-0247-FEDER-072620), co-financed by the European Regional Development Fund (FEDER) through Portugal 2020 (PT2020) and the Regional Operational Programme Centro 2020; and by national funds through the FCT – Foundation for Science and Technology, I.P., within the scope of the project CISUC (UID/CEC/00326/2020) and by the European Social Fund, through the Regional Operational Programme Centro 2020.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Gonçalo Oliveira, H. (2021). Answering Fill-in-the-Blank Questions in Portuguese with Transformer Language Models. In: Marreiros, G., Melo, F.S., Lau, N., Lopes Cardoso, H., Reis, L.P. (eds) Progress in Artificial Intelligence. EPIA 2021. Lecture Notes in Computer Science, vol. 12981. Springer, Cham. https://doi.org/10.1007/978-3-030-86230-5_58
DOI: https://doi.org/10.1007/978-3-030-86230-5_58
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86229-9
Online ISBN: 978-3-030-86230-5
eBook Packages: Computer Science (R0)