DOI: 10.1145/3581783.3612850

Answer-Based Entity Extraction and Alignment for Visual Text Question Answering

Published: 27 October 2023

Abstract

As a variant of visual question answering (VQA), visual text question answering (VTQA) provides a text-image pair for each question, where the text describes the corresponding image through named entities. Consequently, the ability to perform multi-hop reasoning over named entities between text and image becomes critically important, yet existing models pay relatively little attention to this aspect. We therefore propose the Answer-Based Entity Extraction and Alignment model (AEEA) to enable comprehensive understanding and support multi-hop reasoning. The core of AEEA lies in two components: AKECMR and an answer-aware predictor. The former emphasizes the alignment of modalities and effectively distinguishes intra-modal from inter-modal information, while the latter makes full use of the intrinsic semantic information contained in answers during training. Our model outperforms the baseline by 2.24% on the test-dev set and 1.06% on the test set, securing third place in VTQA2023 (English).
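The abstract names the two components but gives no implementation detail here. Purely as a rough illustration of the two ideas it describes, the PyTorch sketch below shows (a) a reasoning block that keeps intra-modal self-attention separate from inter-modal cross-attention, and (b) an answer-aware training objective that supplements the usual classification loss with soft targets derived from answer-embedding similarity. All names (CrossModalBlock, answer_aware_loss), dimensions, and weighting choices are hypothetical assumptions for illustration and are not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalBlock(nn.Module):
    """Illustrative reasoning block (not the paper's AKECMR): intra-modal
    and inter-modal attention are kept as separate information flows."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.self_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt2img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img2txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_txt = nn.LayerNorm(dim)
        self.norm_img = nn.LayerNorm(dim)

    def forward(self, txt, img):
        # Intra-modal: each modality first attends to itself.
        t, _ = self.self_txt(txt, txt, txt)
        v, _ = self.self_img(img, img, img)
        # Inter-modal: text entities query image regions and vice versa,
        # so intra- and inter-modal information stay explicitly separated.
        t2, _ = self.img2txt(t, v, v)   # text attends to image regions
        v2, _ = self.txt2img(v, t, t)   # image attends to text entities
        txt = self.norm_txt(txt + t + t2)
        img = self.norm_img(img + v + v2)
        return txt, img

def answer_aware_loss(logits, labels, answer_emb, alpha=0.5):
    """Hypothetical answer-aware objective: standard cross-entropy plus a
    soft term that rewards predictions whose answer embeddings are
    semantically close to the gold answer's embedding."""
    ce = F.cross_entropy(logits, labels)
    # Soft targets from answer-to-answer embedding similarity.
    answer_emb = F.normalize(answer_emb, dim=-1)       # (num_answers, dim)
    sim = answer_emb @ answer_emb.t()                  # (num_answers, num_answers)
    soft_targets = F.softmax(sim[labels], dim=-1)      # (batch, num_answers)
    kl = F.kl_div(F.log_softmax(logits, dim=-1), soft_targets,
                  reduction="batchmean")
    return ce + alpha * kl

For instance, txt of shape (batch, num_entities, 512) and img of shape (batch, num_regions, 512) could be passed through a stack of such blocks before pooling and answer classification; the KL term nudges the model toward answers that are semantically close to the gold answer even when the exact class is missed, which is one plausible reading of "utilizing the intrinsic semantic information contained in answers during training".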


Cited By

  • (2024) Dialogue cross-enhanced central engagement attention model for real-time engagement estimation. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 3187-3195. DOI: 10.24963/ijcai.2024/353. Online publication date: 3 Aug 2024.
  • (2024) Relation-Aware Heterogeneous Graph Network for Learning Intermodal Semantics in Textbook Question Answering. IEEE Transactions on Neural Networks and Learning Systems, 35(9), 11872-11883. DOI: 10.1109/TNNLS.2024.3385436. Online publication date: Sep 2024.
  • (2024) A common-specific feature cross-fusion attention mechanism for KGVQA. International Journal of Data Science and Analytics. DOI: 10.1007/s41060-024-00536-7. Online publication date: 13 Apr 2024.


      Published In

      MM '23: Proceedings of the 31st ACM International Conference on Multimedia
      October 2023
      9913 pages
      ISBN:9798400701085
      DOI:10.1145/3581783


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 27 October 2023


      Author Tags

1. answer semantics
2. entity extraction, feature alignment
3. visual text question answering challenge (VTQA 2023)

      Qualifiers

      • Research-article

      Funding Sources

      • Natural Science Foundation of China
      • Anhui Province Key Research and Development Program

      Conference

      MM '23
      Sponsor:
      MM '23: The 31st ACM International Conference on Multimedia
      October 29 - November 3, 2023
Ottawa, ON, Canada

      Acceptance Rates

      Overall Acceptance Rate 2,145 of 8,556 submissions, 25%


