
RAST: A Reward Augmented Model for Fine-Grained Sentiment Transfer

  • Conference paper
Natural Language Processing and Chinese Computing (NLPCC 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13028)

Abstract

In this paper, we propose a novel model RAST (Reward Augmented Sentiment Transfer) for fine-grained sentiment transfer. Existing methods usually suffer from two major drawbacks: blurred sentiment distinction and unsatisfactory content preservation. To address these issues, we design two kinds of rewards to better control sentiment and content. Specifically, we develop a pairwise comparative discriminator that enforces clear distinctions between sentences generated for different sentiment intensities. Moreover, we utilize an effective sampling strategy to obtain pseudo-parallel sentences with minor changes to the input sentence, which enhances content preservation. Experiments on a benchmark dataset show that the proposed model outperforms several competitive approaches.
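The pairwise comparative idea in the abstract can be illustrated with a minimal sketch: a discriminator scores two generated sentences, and its training objective rewards it only when the sentence targeting the higher sentiment intensity outscores the other by a margin. The function below is a hypothetical illustration of such a pairwise ranking objective, not the paper's actual loss; the names and the margin value are assumptions.

```python
# Hypothetical sketch of a pairwise comparative objective: given scores for
# two sentences, penalize the pair unless the sentence targeting the higher
# sentiment intensity outscores the lower-intensity one by at least `margin`.

def margin_ranking_loss(score_high, score_low, margin=1.0):
    """Hinge-style ranking loss over a (high-intensity, low-intensity) pair.

    Zero when score_high exceeds score_low by the margin; otherwise grows
    linearly with the size of the ordering violation.
    """
    return max(0.0, margin - (score_high - score_low))

# A correctly ordered pair well past the margin incurs no loss.
print(margin_ranking_loss(2.5, 0.5))  # 0.0
# A mis-ordered pair is penalized in proportion to the violation.
print(margin_ranking_loss(0.2, 0.8))  # 1.6
```

Training a discriminator against such a pairwise objective pressures the generator to produce sentences whose sentiment intensities are cleanly separable, which is the "clear distinction" property the abstract describes.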

X. Hu and H. Zhang—Equal contribution.



Acknowledgments

The work was supported in part by the National Science Foundation of China under Grant No. 61872369, Beijing Academy of Artificial Intelligence (BAAI), and the National Science Foundation of the United States of America under Grant No. IIS-1747614.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Hu, X., Zhang, H., Zhao, W.X., Li, Y., Gao, J., Wen, J.R. (2021). RAST: A Reward Augmented Model for Fine-Grained Sentiment Transfer. In: Wang, L., Feng, Y., Hong, Y., He, R. (eds) Natural Language Processing and Chinese Computing. NLPCC 2021. Lecture Notes in Computer Science, vol 13028. Springer, Cham. https://doi.org/10.1007/978-3-030-88480-2_16

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-88480-2_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88479-6

  • Online ISBN: 978-3-030-88480-2

  • eBook Packages: Computer Science (R0)
