Abstract
Language models based on Bidirectional Encoder Representations from Transformers (BERT) have achieved remarkable results on a wide range of Natural Language Processing (NLP) tasks. The legal domain, however, poses specific challenges for NLP because of its highly specialized language, which combines technical vocabulary, a formal style, frequent citation of laws, and meanings that depend on extensive domain knowledge. Language models pretrained on generic corpora may therefore be ill-suited to legal tasks: they lack the expertise needed to capture the nuances of legal language, which leads to inaccuracies and inconsistencies. This work describes the development of LegalBert-pt, a language model specialized for the Brazilian Portuguese legal domain. The model was pretrained on a large and diverse corpus of Brazilian legal texts and is released as open source, so it can be fine-tuned for specific tasks. Experiments evaluated the pretrained model both intrinsically and on two downstream tasks: named-entity recognition and text classification. LegalBert-pt outperformed the generic language model on all tasks, underscoring the importance of domain specialization for effective results in legal NLP.
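Since the abstract presents LegalBert-pt as an open-source BERT checkpoint meant to be fine-tuned, the following minimal sketch shows how such a checkpoint would typically be loaded and probed with the Hugging Face transformers library. The model identifier used here is a placeholder assumption, not the checkpoint's confirmed published name.

# Minimal sketch: loading a LegalBert-pt-style checkpoint with Hugging Face
# transformers. "some-org/legalbert-pt" is a placeholder id, not confirmed
# by the paper; substitute the actual released checkpoint name.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

MODEL_ID = "some-org/legalbert-pt"  # placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForMaskedLM.from_pretrained(MODEL_ID)

# Intrinsic probe: ask the model to fill a masked token in a legal sentence.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for pred in fill_mask("O réu foi condenado ao pagamento de [MASK] por danos morais."):
    print(pred["token_str"], round(pred["score"], 3))

For the downstream tasks reported in the paper, the same checkpoint would be wrapped with a task-specific head (for example, AutoModelForTokenClassification for named-entity recognition or AutoModelForSequenceClassification for text classification) and fine-tuned on labeled data.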
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Silveira, R., Ponte, C., Almeida, V., Pinheiro, V., Furtado, V. (2023). LegalBert-pt: A Pretrained Language Model for the Brazilian Portuguese Legal Domain. In: Naldi, M.C., Bianchi, R.A.C. (eds.) Intelligent Systems. BRACIS 2023. Lecture Notes in Computer Science, vol. 14197. Springer, Cham. https://doi.org/10.1007/978-3-031-45392-2_18