Abstract
This paper describes the “Multimodal Emotion Recognition on Comics Scenes” competition held at the ICDAR 2021 conference. The competition tackles the problem of emotion recognition in comic scenes (panels). Emotions were assigned manually by multiple annotators to each comic scene in a subset of a large-scale public dataset of golden age American comics. As a multimodal analysis task, the competition asks participants to recognize the emotions of comic characters in each scene from visual information, the text in speech balloons or captions, and onomatopoeia. Participants competed on CodaLab.org from December 16\(^{th}\) 2020 to March 31\(^{st}\) 2021. The challenge attracted 145 registrants; 21 teams joined the public test phase, and 7 teams competed in the private test phase. In this paper we present the motivation, dataset preparation, and task definition of the competition, together with an analysis of the participants’ performance and their submitted methods. We believe the competition has drawn the attention of the document analysis community, in both computer vision and natural language processing, to the task of emotion recognition in documents.
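The multimodal setup the abstract describes, visual features from a panel combined with text from balloons and captions, can be sketched as a simple late-fusion multi-label classifier. Everything below is an illustrative assumption rather than the competition's actual method: the feature dimensions, the eight emotion classes, the random toy data, and the plain linear scoring head are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not from the paper): 8 emotion classes,
# illustrative feature dimensions, random stand-in data.
N_SCENES, IMG_DIM, TXT_DIM, N_EMOTIONS = 200, 16, 12, 8

img_feats = rng.normal(size=(N_SCENES, IMG_DIM))  # e.g. panel image embeddings
txt_feats = rng.normal(size=(N_SCENES, TXT_DIM))  # e.g. balloon-text embeddings
labels = rng.integers(0, 2, size=(N_SCENES, N_EMOTIONS)).astype(float)

# Late fusion: concatenate the per-modality features into one vector per scene.
fused = np.concatenate([img_feats, txt_feats], axis=1)

# One-vs-rest linear head trained by a few gradient steps on a sigmoid
# cross-entropy loss -- a stand-in for any real classifier.
W = np.zeros((fused.shape[1], N_EMOTIONS))
for _ in range(100):
    probs = 1.0 / (1.0 + np.exp(-fused @ W))
    W -= 0.1 * fused.T @ (probs - labels) / N_SCENES

# One probability per (scene, emotion) pair, as a multi-label task requires.
scores = 1.0 / (1.0 + np.exp(-fused @ W))
assert scores.shape == (N_SCENES, N_EMOTIONS)
```

Late fusion is only one of several designs participants could plausibly use; joint transformer-based encoders over image and text are a common alternative for this kind of task.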
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Nguyen, NV., Vu, XS., Rigaud, C., Jiang, L., Burie, JC. (2021). ICDAR 2021 Competition on Multimodal Emotion Recognition on Comics Scenes. In: Lladós, J., Lopresti, D., Uchida, S. (eds) Document Analysis and Recognition – ICDAR 2021. ICDAR 2021. Lecture Notes in Computer Science(), vol 12824. Springer, Cham. https://doi.org/10.1007/978-3-030-86337-1_51
Print ISBN: 978-3-030-86336-4
Online ISBN: 978-3-030-86337-1