Abstract
Figure question answering (FQA) is a recently proposed multimodal task related to visual question answering (VQA): given a scientific-style figure and a related question, the machine must reason over the figure to answer the question. The Relation Network (RN), the baseline approach for FQA, computes representations of the relations between object pairs in the image and uses them to predict the answer. We improve the RN model with a variety of attention mechanisms and propose a novel algorithm, the Multi-Attention Relation Network (MARN), which consists of a CBAM module, an LSTM module, and an attention relation module. The CBAM module applies attention during image feature extraction to produce a more informative feature map, and the attention relation module lets each object pair contribute differently to the reasoning. Experiments show that MARN greatly outperforms the RN model and other state-of-the-art methods on the FigureQA and DVQA datasets.
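To make the architecture concrete, below is a minimal PyTorch sketch of the two attention components described in the abstract: a CBAM-style channel-and-spatial attention block applied to the convolutional feature map, and a relation head in which each object pair receives its own attention weight before the relations are aggregated. All layer sizes, the scoring function for pair attention, and the assumption that the question vector `q` comes from an LSTM encoder are illustrative choices, not the authors' exact implementation.

```python
# Hedged sketch of MARN-style components, assuming a CNN backbone produces a
# feature map (B, C, H, W) and an LSTM produces a question embedding (B, q_dim).
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """CBAM-style attention: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                 # channel attention from avg pool
        mx = self.mlp(x.amax(dim=(2, 3)))                  # ... and from max pool
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))          # spatial attention map

class AttentionRelationModule(nn.Module):
    """Relation head where each object pair gets its own attention weight."""
    def __init__(self, obj_dim, q_dim, hidden=256, n_answers=2):  # 2 = yes/no (FigureQA-style)
        super().__init__()
        pair_dim = 2 * obj_dim + q_dim
        self.g = nn.Sequential(nn.Linear(pair_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.attn = nn.Linear(pair_dim, 1)                 # scores how much each pair matters
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, n_answers))

    def forward(self, feat, q):                            # feat: (B, C, H, W), q: (B, q_dim)
        b, c, h, w = feat.shape
        objs = feat.view(b, c, h * w).permute(0, 2, 1)     # each spatial cell is an "object"
        n = objs.size(1)
        oi = objs.unsqueeze(2).expand(b, n, n, c)          # object i of every pair
        oj = objs.unsqueeze(1).expand(b, n, n, c)          # object j of every pair
        qe = q.view(b, 1, 1, -1).expand(b, n, n, q.size(-1))
        pairs = torch.cat([oi, oj, qe], dim=-1)            # (B, N, N, 2C + q_dim)
        alpha = torch.softmax(self.attn(pairs).view(b, -1), dim=1)  # per-pair attention
        rel = self.g(pairs).view(b, n * n, -1)             # per-pair relation features
        pooled = (alpha.unsqueeze(-1) * rel).sum(dim=1)    # attention-weighted aggregation
        return self.f(pooled)                              # answer logits
```

The key difference from the plain RN baseline is the `alpha` term: instead of summing all pair relations with equal weight, the softmax-normalised scores let question-relevant pairs dominate the pooled representation.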
References
Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C.L., Parikh, D.: VQA: Visual question answering. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 2425–2433 (2015)
Kafle, K., Price, B., Cohen, S., Kanan, C.: DVQA: Understanding data visualizations via question answering. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5648–5656 (2018)
Kahou, S.E., Michalski, V., Atkinson, A., Kadar, A., Trischler, A., Bengio, Y.: FigureQA: An annotated figure dataset for visual reasoning (2017). arXiv preprint arXiv:1710.07300
Santoro, A., Raposo, D., Barrett, D.G., Malinowski, M., Pascanu, R.: A simple neural network module for relational reasoning (2017). arXiv preprint arXiv:1706.01427
Woo, S., Park, J., Lee, J.-Y., Kweon, I.S.: CBAM: Convolutional block attention module. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11211, pp. 3–19. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01234-2_1
Goyal, Y., Khot, T., Summers-Stay, D., Batra, D., Parikh, D.: Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6325–6334 (2017). doi: https://doi.org/10.1109/CVPR.2017.670
Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K.: Visual genome: connecting language and vision using crowdsourced dense image annotations. Int. J. Comput. Vis. 123(1), 32–73 (2017)
Kafle, K., Kanan, C.: Answer-type prediction for visual question answering. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4976–4984 (2016)
Andreas, J., Rohrbach, M., Darrell, T., Klein, D.: Deep compositional question answering with neural module networks. Comput. Sci. 27 (2015)
Methani, N., Ganguly, P., Khapra, M., Kumar, P.: PlotQA: Reasoning over scientific plots. In: Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1516–1525 (2020)
Chaudhry, R., Shekhar, S., Gupta, U., Maneriker, P., Bansal, P., Joshi, A.: LEAF-QA: Locate, encode and attend for figure question answering. In: Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 3501–3510 (2020)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: Proceedings of the International Conference on Learning Representations (2015)
Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L.: CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1988–1997 (2017)
Reddy, R., Ramesh, R.: FigureNet: A deep learning model for question-answering on scientific plots. In: Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2019)
Jialong, Z., Guoli, W., Taofeng, X., Qingfeng, W.: An affinity-driven relation network for figure question answering. In: Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2020)
Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI) 20, 1254–1259 (1998)
Rensink, R.A.: The dynamic representation of scenes. Vis. Cogn. 7, 17–42 (2000)
Larochelle, H., Hinton, G.E.: Learning to combine foveal glimpses with a third-order Boltzmann machine. Neural Inf. Process. Syst. (NIPS) (2010)
Wang, F., et al.: Residual attention network for image classification. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017). arXiv preprint arXiv:1704.06904
Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks (2017). arXiv preprint arXiv:1709.01507
Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: Proceedings of the International Conference on Machine Learning (2015)