Abstract
In this paper, we propose RAST (Reward Augmented Sentiment Transfer), a novel model for fine-grained sentiment transfer. Existing methods usually suffer from two major drawbacks: blurred sentiment distinction and unsatisfactory content preservation. To address these issues, we design two kinds of rewards to better control sentiment and content. Specifically, we develop a pairwise comparative discriminator that encourages the model to generate sentences with clear distinctions between different sentiment intensities. Moreover, we employ an effective sampling strategy to obtain pseudo-parallel sentences that differ only minimally from the input sentence, thereby enhancing content preservation. Experiments on a benchmark dataset show that the proposed model outperforms several competitive approaches.
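The pairwise comparative idea can be illustrated with a minimal sketch: the discriminator rewards a pair of generated sentences when their predicted sentiment intensities are ordered consistently with their target intensities. The names below (`pairwise_reward`, `intensity`) and the toy cue-word scorer are illustrative assumptions, not the paper's actual discriminator.

```python
def intensity(sentence: str) -> float:
    """Toy sentiment-intensity scorer: fraction of positive cue words.
    A real system would use a learned classifier instead."""
    positive = {"great", "good", "amazing", "excellent", "love"}
    words = sentence.lower().split()
    return sum(w in positive for w in words) / max(len(words), 1)

def pairwise_reward(sent_a: str, sent_b: str,
                    target_a: float, target_b: float) -> float:
    """Return 1.0 when the generated pair preserves the target ordering
    (the sentence with the higher target intensity also scores higher),
    else 0.0."""
    pred_order = intensity(sent_a) - intensity(sent_b)
    target_order = target_a - target_b
    return 1.0 if pred_order * target_order > 0 else 0.0
```

In this toy setup, a pair whose predicted ordering matches the target ordering earns the reward, which is the comparative signal that sharpens the distinction between intensity levels.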
X. Hu and H. Zhang—Equal contribution.
Acknowledgments
The work was supported in part by the National Science Foundation of China under Grant No. 61872369, Beijing Academy of Artificial Intelligence (BAAI), and the National Science Foundation of the United States of America under Grant No. IIS-1747614.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Hu, X., Zhang, H., Zhao, W.X., Li, Y., Gao, J., Wen, J.R. (2021). RAST: A Reward Augmented Model for Fine-Grained Sentiment Transfer. In: Wang, L., Feng, Y., Hong, Y., He, R. (eds) Natural Language Processing and Chinese Computing. NLPCC 2021. Lecture Notes in Computer Science, vol 13028. Springer, Cham. https://doi.org/10.1007/978-3-030-88480-2_16
DOI: https://doi.org/10.1007/978-3-030-88480-2_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-88479-6
Online ISBN: 978-3-030-88480-2
eBook Packages: Computer Science (R0)