
Question Difficulty Estimation with Directional Modality Association in Video Question Answering

  • Conference paper
Advances and Trends in Artificial Intelligence. Theory and Practices in Artificial Intelligence (IEA/AIE 2022)

Abstract

The questions in question-answering (QA) tasks vary in difficulty, and a number of methods have been proposed to estimate the difficulty level of a question. However, the existing methods estimate difficulty from text information only, and thus lose the information carried by other modalities when a QA task is intrinsically multi-modal. To solve this problem, this paper proposes a novel question difficulty estimator for multi-modal QA. The proposed estimator is designed for question difficulty estimation in video QA, but is not limited to it: it is capable of managing both text and video, where a video is treated as a sequence of images. In addition, it models the directional influence of one modality on the other with the Directional Modality Association Transformer (DiMAT). Inspired by the transformer, the directional influence in DiMAT is expressed through a directional attention layer and a feed-forward network layer. The representations of the directional influence are then used together with the representations of each modality to determine the difficulty of a question. Experiments on two benchmark video QA data sets show that the proposed estimator outperforms state-of-the-art modality interaction models, which demonstrates its effectiveness.
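The paper's own DiMAT layers are not reproduced on this page. As a minimal sketch of the general idea the abstract describes, the snippet below implements plain directional (asymmetric) cross-modal attention in numpy: queries from one modality attend over the features of the other, so text-to-video and video-to-text influences are computed separately. All names, dimensions, and the random features are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def directional_attention(query_mod, context_mod):
    """One direction of cross-modal attention: tokens of `query_mod`
    attend over `context_mod` (e.g. text -> video). Projections and
    multi-head structure are omitted for brevity."""
    d_k = query_mod.shape[-1]
    scores = query_mod @ context_mod.T / np.sqrt(d_k)   # (n_q, n_ctx)
    return softmax(scores, axis=-1) @ context_mod       # (n_q, d)

rng = np.random.default_rng(0)
text = rng.standard_normal((4, 8))    # 4 text tokens, feature dim 8
video = rng.standard_normal((6, 8))   # 6 video frames, feature dim 8

# The two directions are computed independently, so the influence of
# video on text can differ from the influence of text on video.
text_to_video = directional_attention(text, video)
video_to_text = directional_attention(video, text)
print(text_to_video.shape, video_to_text.shape)  # (4, 8) (6, 8)
```

In a full transformer-style block, each directional output would additionally pass through a feed-forward layer with residual connections before being combined with the per-modality representations for the difficulty prediction.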

This paper received the Best Student Award, with the registration fee paid by ACM SIGAI as a co-sponsor of IEA/AIE 2022.



Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A4A1018607), by the Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korea government (MSIT) under Grant 2021-0-02068 (Artificial Intelligence Innovation Hub), and by the IITP grant funded by the Korea government (MSIT) (No. 2013-0-00109, WiseKB: big data based self-evolving knowledge base and reasoning platform).

Author information

Corresponding author

Correspondence to Seong-Bae Park.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Kim, B.-M., Park, S.-B. (2022). Question Difficulty Estimation with Directional Modality Association in Video Question Answering. In: Fujita, H., Fournier-Viger, P., Ali, M., Wang, Y. (eds) Advances and Trends in Artificial Intelligence. Theory and Practices in Artificial Intelligence. IEA/AIE 2022. Lecture Notes in Computer Science, vol. 13343. Springer, Cham. https://doi.org/10.1007/978-3-031-08530-7_24


  • DOI: https://doi.org/10.1007/978-3-031-08530-7_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08529-1

  • Online ISBN: 978-3-031-08530-7

  • eBook Packages: Computer Science, Computer Science (R0)
