
Emotion-Aided Multi-modal Personality Prediction System

  • Conference paper
Neural Information Processing (ICONIP 2022)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1793)


Abstract

Cyber forensics, personalized services, and recommender systems require automatic personality prediction. This paper develops a multi-modal personality prediction system that operates on videos and considers three modalities: text, audio, and video. Since a user's emotional state helps reveal personality, we build an emotion-aided personality prediction system in a multi-modal setting. Using the IBM Tone Analyzer, the existing ChaLearn-2017 dataset is enriched with emotion labels, which serve as an additional feature set in the proposed neural architecture. Features from the video, audio, and text are extracted using CNN architectures, and the emotion labels are concatenated with the extracted features before being fed to the sigmoid layer. Experiments are carried out on our enriched dataset. The obtained results show that concatenating emotion labels as an additional feature set yields comparable results.
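The fusion step described in the abstract — per-modality CNN features concatenated with emotion labels before a final sigmoid layer — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, the number of emotion labels, and the random dense layer are all hypothetical placeholders.

```python
import numpy as np

def sigmoid(x):
    """Element-wise logistic function for the final trait-scoring layer."""
    return 1.0 / (1.0 + np.exp(-x))

def predict_traits(text_feat, audio_feat, video_feat, emotion_labels, W, b):
    """Late fusion: concatenate the three modality feature vectors with the
    emotion labels, then apply one dense layer with a sigmoid activation to
    produce a score in (0, 1) for each of the Big-Five personality traits."""
    fused = np.concatenate([text_feat, audio_feat, video_feat, emotion_labels])
    return sigmoid(W @ fused + b)

rng = np.random.default_rng(0)
# Hypothetical sizes: 128-d features per modality, 7 emotion labels, 5 traits.
text_f, audio_f, video_f = (rng.normal(size=128) for _ in range(3))
emotions = np.array([1, 0, 0, 1, 0, 0, 0], dtype=float)  # multi-hot tone labels
W = rng.normal(scale=0.01, size=(5, 128 * 3 + 7))        # untrained dense layer
b = np.zeros(5)

scores = predict_traits(text_f, audio_f, video_f, emotions, W, b)
print(scores.shape)  # (5,) — one score per trait
```

In the paper's pipeline the modality features come from trained CNN encoders and the dense layer is learned end-to-end; the point of the sketch is only the fusion order: emotion labels are appended after feature extraction, immediately before the sigmoid output.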


Notes

  1. https://www.ibm.com/watson/services/tone-analyzer/.


Acknowledgement

Dr. Sriparna Saha gratefully acknowledges the Young Faculty Research Fellowship (YFRF) Award, supported by Visvesvaraya Ph.D. Scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia) for carrying out this research.

Author information


Corresponding author

Correspondence to Chanchal Suman.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Suman, C., Saha, S., Bhattacharyya, P. (2023). Emotion-Aided Multi-modal Personality Prediction System. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Communications in Computer and Information Science, vol 1793. Springer, Singapore. https://doi.org/10.1007/978-981-99-1645-0_24

Download citation

  • DOI: https://doi.org/10.1007/978-981-99-1645-0_24


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-1644-3

  • Online ISBN: 978-981-99-1645-0

  • eBook Packages: Computer Science, Computer Science (R0)
