
Word Embedding Composition for Data Imbalances in Sentiment and Emotion Classification

Published in Cognitive Computation

Abstract

Text classification often faces the problem of imbalanced training data. This is true in sentiment analysis and is particularly prominent in emotion classification, where multiple emotion categories are very likely to produce naturally skewed training data. Different sampling methods have been proposed to improve classification performance by reducing the imbalance ratio between training classes. However, data sparseness and the small-disjunct problem remain obstacles to generating new samples for minority classes when the data are skewed and limited. Methods that produce meaningful samples for smaller classes, rather than simple duplicates, are essential to overcoming this problem. In this paper, we present an oversampling method based on word embedding compositionality which produces meaningful balanced training data. We first use a large corpus to train a continuous skip-gram model, forming a word embedding model that maintains the syntactic and semantic integrity of the word features. A compositional algorithm based on recursive neural tensor networks then constructs sentence vectors from the word embedding model. Finally, we use the SMOTE algorithm as an oversampling method to generate samples for the minority classes, producing a fully balanced training set. Evaluation results on two quite different tasks show that both the feature composition method and the oversampling method are important in obtaining improved classification results. Our method effectively addresses the data imbalance issue and consequently achieves improved results for both sentiment and emotion classification.
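The pipeline described above has three stages: train skip-gram embeddings, compose sentence vectors, and oversample minority classes with SMOTE. The following is a minimal sketch of that pipeline, not the authors' implementation: it assumes gensim and imbalanced-learn as stand-ins for the tools referenced in the notes (the original word2vec release and the authors' own code), and it averages word vectors as a deliberately simple placeholder for the RNTN composition, which is sketched separately afterwards. The toy corpus, labels, and all parameter values are illustrative.

```python
import numpy as np
from gensim.models import Word2Vec          # assumed stand-in for word2vec
from imblearn.over_sampling import SMOTE    # assumed SMOTE implementation

# Toy tokenized corpus with naturally skewed labels (0 = majority class,
# 1 = minority class); real experiments use the evaluation sets in the notes.
corpus = [
    ["the", "movie", "was", "good"], ["a", "fine", "film"],
    ["great", "acting", "overall"], ["i", "liked", "the", "plot"],
    ["good", "fun", "to", "watch"], ["a", "pleasant", "surprise"],
    ["warm", "and", "charming"], ["solid", "and", "enjoyable"],
    ["dull", "and", "boring"], ["a", "bad", "film"],
    ["i", "hated", "it"], ["poor", "plot", "and", "acting"],
]
labels = [0] * 8 + [1] * 4

# Stage 1: train a continuous skip-gram model (sg=1) on the corpus.
emb = Word2Vec(sentences=corpus, vector_size=50, window=5, min_count=1, sg=1)

# Stage 2: compose one vector per sentence. Plain averaging is used here
# only as a placeholder; the paper composes with a recursive neural tensor
# network over the parse tree instead.
def sentence_vector(tokens, wv):
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)

X = np.vstack([sentence_vector(s, emb.wv) for s in corpus])
y = np.array(labels)

# Stage 3: SMOTE interpolates between minority-class sentence vectors to
# synthesize new samples until the training set is fully balanced.
X_bal, y_bal = SMOTE(k_neighbors=2, random_state=0).fit_resample(X, y)
print(np.bincount(y_bal))   # both classes now have 8 samples
```

The composition stage is where the method departs from simple averaging. One recursive neural tensor network composition step combines two child vectors a and c into a parent vector p = tanh([a; c]^T V^[1:d] [a; c] + W[a; c] + b), applied bottom-up along the parse tree. The sketch below shows a single such step with random, untrained parameters; the actual model is trained on labeled trees, so this only illustrates the shape of the computation.

```python
import numpy as np

d = 50                                               # embedding dimensionality
rng = np.random.default_rng(0)
V = rng.normal(scale=0.01, size=(d, 2 * d, 2 * d))   # one tensor slice per output dim
W = rng.normal(scale=0.01, size=(d, 2 * d))          # standard recursive weight matrix
b = np.zeros(d)

def compose(a, c):
    """One RNTN node: p = tanh([a;c]^T V^[1:d] [a;c] + W [a;c] + b)."""
    x = np.concatenate([a, c])                       # stacked children, shape (2d,)
    tensor_term = np.einsum("i,kij,j->k", x, V, x)   # x^T V_k x for each slice k
    return np.tanh(tensor_term + W @ x + b)

# Composing two (untrained) word vectors into a phrase vector:
parent = compose(rng.normal(size=d), rng.normal(size=d))
```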



Notes

  1. http://tcci.ccf.org.cn/conference/2013/dldoc/evsam02.zip.

  2. http://tcci.ccf.org.cn/conference/2013/dldoc/evdata02.zip.

  3. http://ml.nec-labs.com/senna/.

  4. https://code.google.com/p/word2vec/.

  5. http://tcci.ccf.org.cn/conference/2013/dldoc/evres02.

  6. http://www.psych.ualberta.ca/~westburylab/downloads/westburylab.wikicorp.download.html.

  7. http://nlp.stanford.edu/software/corenlp.shtml.

  8. http://tcci.ccf.org.cn/conference/2013/dldoc/evres02.


Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 61300112, 61370165, 61203378), the Natural Science Foundation of Guangdong Province (No. S2013010014475), the MOE Specialized Research Fund for the Doctoral Program of Higher Education (No. 20122302120070), the Open Projects Program of the National Laboratory of Pattern Recognition, Shenzhen International Co-operation Research Funding (No. GJHZ20120613110641217), Shenzhen Development and Reform Commission Grant No. [2014]1507, Shenzhen Peacock Plan Research Grant KQCX20140521144507925, and Baidu Collaborative Research Funding.

Author information

Correspondence to Qin Lu.


Cite this article

Xu, R., Chen, T., Xia, Y. et al. Word Embedding Composition for Data Imbalances in Sentiment and Emotion Classification. Cogn Comput 7, 226–240 (2015). https://doi.org/10.1007/s12559-015-9319-y
