
Mitigating sentimental bias via a polar attention mechanism

  • Regular Paper
  • Published in: International Journal of Data Science and Analytics

Abstract

Fairness in machine learning has received increasing attention in recent years. This study focuses on a particular type of machine learning fairness, namely sentimental bias, in text sentiment analysis. Sentimental bias arises when words (or phrases) are distributed differently across positive and negative corpora. As a result, an excessive proportion of words carries negative/positive sentiment in learned models. This study proposes a new attention mechanism, called polar attention, to mitigate sentimental bias. It consists of two modules, namely polar flipping and distance measurement. The first module explicitly models word sentimental polarity and prevents neutral words from flipping to positive or negative. The second module attends to negative/positive words. In the experiments, three benchmark data sets are used, and supplementary testing sets are compiled. Experimental results verify the effectiveness of the proposed method.
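The full implementation is not included in this preview. Purely as an illustration of the two-module idea the abstract describes, the following is a minimal NumPy sketch: the learned polar direction `polarity_query`, the `neutral_threshold`, and the |score|-based weighting are all assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def polar_attention(embeddings, polarity_query, neutral_threshold=0.1):
    """Illustrative sketch of a two-module polar attention.

    Module 1 (polar flipping): compute a signed polarity score per word
    and clamp near-zero scores to exactly zero, so neutral words cannot
    drift toward positive or negative polarity.
    Module 2 (distance measurement): attend to words in proportion to
    their distance from the neutral point (here, |score|).
    """
    # Signed polarity score per word (projection onto an assumed polar direction)
    scores = embeddings @ polarity_query                      # shape: (n_words,)
    # Polar flipping: zero out weakly polarized (neutral) words
    scores = np.where(np.abs(scores) < neutral_threshold, 0.0, scores)
    # Distance measurement: attention weights from distance to neutrality
    weights = softmax(np.abs(scores))
    # Sentence representation: attention-weighted sum of word vectors
    return weights @ embeddings, scores

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))   # 5 words, 8-dim embeddings
q = rng.normal(size=8)          # stand-in for a learned polar direction
sent_vec, word_scores = polar_attention(emb, q)
print(sent_vec.shape)  # (8,)
```

The clamping step is the key difference from vanilla attention: neutral words contribute a score of exactly zero, so training pressure cannot assign them a spurious polarity.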


Figures 1–7 appear in the full article (not included in this preview).


Notes

  1. https://github.com/Tju-AI/two-stage-labeling-for-the-sentiment-orientations



Acknowledgements

This work was supported by the Tianjin NSF (19JCZDJC31300), the AI Key Project of Tianjin (19ZXZNGX0050), and a Frontier Science and Technology Innovation Project (2019QY2404).

Author information


Corresponding author

Correspondence to Ou Wu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Yang, T., Yao, R., Yin, Q. et al. Mitigating sentimental bias via a polar attention mechanism. Int J Data Sci Anal 11, 27–36 (2021). https://doi.org/10.1007/s41060-020-00231-3

