
Fine-Tuned Transformer Model for Sentiment Analysis

  • Conference paper
  • First Online:
Knowledge Science, Engineering and Management (KSEM 2020)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12275)

Abstract

Sentiment analysis is one of the important natural language processing (NLP) tasks. In recent years, transformer-based language models have become the new baseline for sentiment analysis. Trained with efficient unsupervised learning methods, these models have pushed the accuracy of sentiment analysis to a higher level. In this paper, we propose a set of novel transformation methods to tackle a sentiment analysis task. Specifically, we design four novel transformations in our transformer models. For comparison purposes, the new data manipulation method is implemented on two language models (BERT and RoBERTa) separately. The nonlinear transformation method is evaluated on a large-scale real-world dataset (the Weibo sentiment analysis dataset, https://www.datafountain.cn/competitions/423). The experimental results demonstrate the superiority of the proposed method.
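To make the setup concrete, the sketch below shows one way to fine-tune a BERT-style encoder for sentence-level sentiment classification with an extra nonlinear transformation inserted between the pooled [CLS] representation and the classifier. This is an illustration of the general recipe only, not the authors' method: the paper's four transformations are not reproduced here, and the model name (bert-base-chinese), the tanh projection, the example sentences, and the hyperparameters are placeholder assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentimentClassifier(nn.Module):
    """BERT-style encoder + placeholder nonlinear transformation + linear classifier."""
    def __init__(self, model_name="bert-base-chinese", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Placeholder for the paper's transformations: a simple tanh projection.
        self.transform = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(), nn.Dropout(0.1))
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]           # [CLS] token representation
        return self.classifier(self.transform(cls))

# One illustrative fine-tuning step with AdamW (decoupled weight decay).
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = SentimentClassifier()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)

texts = ["这部电影真好看", "服务太差了"]             # "This movie is great", "The service is terrible"
batch = tokenizer(texts, padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])                       # 1 = positive, 0 = negative
logits = model(batch["input_ids"], batch["attention_mask"])
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()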


Notes

  1. https://github.com/brightmart/roberta_zh (see the loading sketch below).
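Since the checkpoints in that repository follow BERT's architecture, they are typically loaded with the BERT classes of the Hugging Face transformers library once converted to PyTorch format. The snippet below is only a hedged sketch under that assumption; the local directory name is a hypothetical placeholder, not a path from the paper.

from transformers import BertModel, BertTokenizer

MODEL_DIR = "./roberta_zh_large"   # hypothetical local directory with converted weights

tokenizer = BertTokenizer.from_pretrained(MODEL_DIR)
model = BertModel.from_pretrained(MODEL_DIR)

inputs = tokenizer("今天天气不错", return_tensors="pt")   # "The weather is nice today"
hidden = model(**inputs).last_hidden_state
print(hidden.shape)                # (1, sequence_length, hidden_size)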

References

  1. Devlin, J., Chang, W.M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)

  2. Liu, Y., et al.: RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)

  3. Lan, Z., et al.: ALBERT: a lite BERT for self-supervised learning of language representations. In: International Conference on Learning Representations (2020)

  4. Clark, K., Luong, M.T., Le, Q.V., Manning, C.D.: ELECTRA: pre-training text encoders as discriminators rather than generators. In: International Conference on Learning Representations (2020)


  5. Socher, R., et al.: Recursive deep models for semantic compositionality over a sentiment treebank. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631–1642 (2013)


  6. Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems (2017)


  7. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)

  8. Bojanowski, P., Grave, E., Joulin, A., Mikolov, T.: Enriching word vectors with subword information. Trans. Assoc. Comput. Linguist. 5, 135–146 (2017)


  9. Pennington, J., Socher, R., Manning, C.D.: GloVe: global vectors for word representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014)


  10. Cho, K., et al.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2014)


  11. Yang, Z., et al.: XLNet: generalized autoregressive pretraining for language understanding. In: Advances in Neural Information Processing Systems 32 (2019)


  12. Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)

  13. Raffel, C., et al.: Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683 (2019)

  14. Xie, Q., et al.: Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848 (2019)

  15. Chen, Y., et al.: A2-Nets: double attention networks. In: Advances in Neural Information Processing Systems 31 (2018)

  16. Zhao, G., et al.: MUSE: parallel multi-scale attention for sequence to sequence learning. arXiv preprint arXiv:1911.09483 (2019)

  17. Sak, H., Senior, A., Beaufays, F.: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 (2014)

  18. Cui, Y., et al.: Pre-training with whole word masking for Chinese BERT. arXiv preprint arXiv:1906.08101 (2019)

  19. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: International Conference on Learning Representations (2019)


Acknowledgement

This research was supported by the National Natural Science Foundation of China (Grant No. 61877051) and the Natural Science Foundation Project of CQ, China (Grant Nos. cstc2018jscx-msyb1042 and cstc2018jscx-msybX0273). The authors would like to thank the anonymous reviewers for their constructive comments.

Author information


Corresponding author

Correspondence to Li Li.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, S., Shuai, P., Zhang, X., Chen, S., Li, L., Liu, M. (2020). Fine-Tuned Transformer Model for Sentiment Analysis. In: Li, G., Shen, H., Yuan, Y., Wang, X., Liu, H., Zhao, X. (eds) Knowledge Science, Engineering and Management. KSEM 2020. Lecture Notes in Computer Science (LNAI), vol. 12275. Springer, Cham. https://doi.org/10.1007/978-3-030-55393-7_30


  • DOI: https://doi.org/10.1007/978-3-030-55393-7_30

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-55392-0

  • Online ISBN: 978-3-030-55393-7

  • eBook Packages: Computer Science, Computer Science (R0)
