
Gender-Specific Classifiers in Phoneme Recognition and Academic Emotion Detection

Conference paper in Neural Information Processing (ICONIP 2016), part of the Lecture Notes in Computer Science book series (LNTCS, volume 9950).


Abstract

Gender-specific classifiers are shown to outperform general classifiers. In calibrated experiments designed to demonstrate this, two datasets were used to build male-specific and female-specific classifiers. The first dataset is used to predict vowel phonemes from speech signals, and the second is used to predict negative emotions from brainwave (EEG) signals. A Multi-Layer Perceptron (MLP) is first trained as a general classifier on the combined data from both male and female users. This general classifier recognizes vowel phonemes with a baseline accuracy of 91.09%, while the general classifier for EEG signals has an average baseline accuracy of 58.70%. The experiments show that performance improves significantly when the classifiers are trained to be gender-specific, that is, when there is a separate classifier for male users and a separate classifier for female users. For the vowel phoneme recognition dataset, the average accuracy increases to 94.20% for male-only users and 95.60% for female-only users. For the EEG dataset, the accuracy increases to 65.33% for male-only users and 70.50% for female-only users. Recall and precision show the same trend. A further probe using Self-Organizing Maps (SOM) visualizes the distribution of sub-clusters among male and female users.
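To make the experimental protocol described in the abstract concrete, the following is a minimal sketch in Python, not the authors' implementation: it pools all samples to train a general MLP baseline, then trains one MLP per gender subgroup and compares held-out accuracy. The synthetic data, the 20-dimensional features, the network size, and the use of scikit-learn's MLPClassifier are all assumptions standing in for the paper's speech and EEG feature sets.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the real features: X holds feature vectors
# (speech-frame or EEG features), y holds class labels (vowel phonemes
# or emotion categories), and `gender` is a hypothetical per-sample flag.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, n_classes=4, random_state=0)
gender = rng.integers(0, 2, size=len(y))  # 0 = male, 1 = female

def fit_and_score(X, y):
    """Train an MLP on an 80/20 split and return held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                        random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# Baseline: one general classifier trained on all users pooled.
print("general    :", fit_and_score(X, y))

# Gender-specific: a separate classifier for each subgroup.
for flag, name in [(0, "male-only  "), (1, "female-only")]:
    mask = gender == flag
    print(name + ":", fit_and_score(X[mask], y[mask]))

Because the synthetic gender flag here carries no real structure, the split models will not necessarily win on this data; the sketch demonstrates only the comparison protocol, not the improvement reported in the paper.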



Author information


Correspondence to Judith Azcarraga.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Azcarraga, A., Talavera, A., Azcarraga, J. (2016). Gender-Specific Classifiers in Phoneme Recognition and Academic Emotion Detection. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol 9950. Springer, Cham. https://doi.org/10.1007/978-3-319-46681-1_59


  • DOI: https://doi.org/10.1007/978-3-319-46681-1_59

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46680-4

  • Online ISBN: 978-3-319-46681-1

  • eBook Packages: Computer Science (R0)
