
ICDAR 2021 Competition on Multimodal Emotion Recognition on Comics Scenes

  • Conference paper
  • Included in: Document Analysis and Recognition – ICDAR 2021 (ICDAR 2021)

Abstract

This paper describes the “Multimodal Emotion Recognition on Comics Scenes” competition presented at the ICDAR 2021 conference. The competition tackles the problem of emotion recognition in comic scenes (panels). Emotions were assigned manually by multiple annotators to each comic scene of a subset of a public large-scale dataset of golden age American comics. As a multimodal analysis task, the competition asks participants to extract the emotions of comic characters in comic scenes from visual information, the text in speech balloons or captions, and onomatopoeia. Participants competed on CodaLab.org from December 16\(^{th}\), 2020 to March 31\(^{st}\), 2021. The challenge attracted 145 registrants; 21 teams joined the public test phase, and 7 teams competed in the private test phase. In this paper, we present the motivation, dataset preparation, and task definition of the competition, together with an analysis of the participants' performance and the submitted methods. We believe the competition has drawn attention from the document analysis community, in both computer vision and natural language processing, to the task of emotion recognition in documents.
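As a rough illustration of the multi-label setting described above, the sketch below shows how per-scene emotion predictions could be scored with a macro-averaged ROC AUC using scikit-learn (the notes below link to scikit-learn's multilabel classification and AUC documentation). The eight-emotion label set and all numbers are hypothetical placeholders for illustration, not the competition's actual data or its official scoring script.

    # A minimal, hypothetical sketch (not the official scoring script):
    # score multi-label emotion predictions for comic scenes with a
    # macro-averaged ROC AUC, as computed by scikit-learn.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical emotion label set, for illustration only.
    EMOTIONS = ["angry", "disgust", "fear", "happy",
                "sad", "surprise", "neutral", "other"]

    # Ground truth: one row per comic scene, one column per emotion (multi-hot).
    y_true = np.array([
        [1, 0, 0, 1, 0, 0, 1, 0],
        [0, 1, 0, 0, 1, 0, 0, 1],
        [0, 0, 1, 0, 0, 1, 0, 0],
        [1, 0, 0, 0, 0, 0, 1, 0],
    ])

    # Predicted probabilities from some multimodal model (image + text features).
    y_score = np.array([
        [0.9, 0.1, 0.2, 0.8, 0.1, 0.2, 0.7, 0.1],
        [0.2, 0.7, 0.1, 0.2, 0.8, 0.1, 0.2, 0.6],
        [0.1, 0.2, 0.8, 0.1, 0.2, 0.7, 0.1, 0.2],
        [0.8, 0.1, 0.1, 0.3, 0.1, 0.2, 0.9, 0.1],
    ])

    # One AUC per emotion, then their unweighted mean (macro average).
    per_label = roc_auc_score(y_true, y_score, average=None)
    for name, auc in zip(EMOTIONS, per_label):
        print(f"{name}: {auc:.3f}")
    print("macro AUC:", roc_auc_score(y_true, y_score, average="macro"))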


Notes

  1. http://bit.ly/kaggle-challenges-in-representation-learning.
  2. https://emoreccom.univ-lr.fr.
  3. https://competitions.codalab.org/competitions/27884.
  4. https://obj.umiacs.umd.edu/comics/index.html.
  5. https://github.com/collab-uniba/EmotionDatasetMSR18.
  6. http://bit.ly/scikit-learn-multilabel-clf.
  7. https://competitions.codalab.org/competitions/27884.
  8. http://bit.ly/scikit-learn-auc.
  9. https://competitions.codalab.org/competitions/30954.


Author information

Correspondence to Nhu-Van Nguyen.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Nguyen, NV., Vu, XS., Rigaud, C., Jiang, L., Burie, JC. (2021). ICDAR 2021 Competition on Multimodal Emotion Recognition on Comics Scenes. In: Lladós, J., Lopresti, D., Uchida, S. (eds) Document Analysis and Recognition – ICDAR 2021. ICDAR 2021. Lecture Notes in Computer Science(), vol 12824. Springer, Cham. https://doi.org/10.1007/978-3-030-86337-1_51


  • DOI: https://doi.org/10.1007/978-3-030-86337-1_51


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86336-4

  • Online ISBN: 978-3-030-86337-1

  • eBook Packages: Computer Science, Computer Science (R0)
