Abstract
Recently, Medical Visual Question Answering (VQA) has become an active area of research with the introduction of several publicly available benchmark datasets and the organization of challenges. As in many competitions, the quest for success has driven the use of increasingly complex neural networks. Winning strategies generally leverage multi-scale architectures and model ensembling to achieve state-of-the-art performance. However, several studies have established that simpler architectures can learn more meaningful features and avoid over-parameterization. In particular, MixUp-based image augmentation with a simple VGG16 network yielded a significant improvement in performance for medical VQA. Inspired by this finding, we propose a modified version, VQAMixUp, that leverages both images and questions for augmenting VQA datasets. Combined with a few enhanced training strategies, VQAMixUp helps simple models (with \(\approx 65\)% fewer parameters) achieve state-of-the-art performance on the benchmark ImageCLEF-VQA-MED validation datasets.
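The abstract does not spell out the mixing scheme, but classic MixUp forms convex combinations of pairs of training examples and their targets. Below is a minimal PyTorch-style sketch of how such an augmentation could be extended to VQA triples by mixing image tensors, question embeddings, and one-hot answer targets with a shared coefficient; the function name `vqa_mixup`, the parameter `alpha`, and the choice to mix question embeddings (rather than raw text) are illustrative assumptions, not the authors' VQAMixUp implementation.

```python
import torch

def vqa_mixup(images, question_embs, answer_onehots, alpha: float = 0.4):
    """Illustrative MixUp over (image, question, answer) triples.

    Samples a mixing coefficient lam ~ Beta(alpha, alpha) and convexly
    combines each example with a randomly chosen partner from the same
    batch, applying the same lam to images, question embeddings, and
    soft answer targets. This is an assumed extension of MixUp to VQA,
    not the paper's exact procedure.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_questions = lam * question_embs + (1.0 - lam) * question_embs[perm]
    mixed_answers = lam * answer_onehots + (1.0 - lam) * answer_onehots[perm]
    return mixed_images, mixed_questions, mixed_answers
```

During training, such a mixed batch would replace the original one and the classification loss would be computed against the resulting soft answer targets.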
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Singh, J., Mahapatra, D., Bathula, D.R. (2023). Medical VQA: MixUp Helps Keeping it Simple. In: Yan, W.Q., Nguyen, M., Stommel, M. (eds) Image and Vision Computing. IVCNZ 2022. Lecture Notes in Computer Science, vol 13836. Springer, Cham. https://doi.org/10.1007/978-3-031-25825-1_29
DOI: https://doi.org/10.1007/978-3-031-25825-1_29
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-25824-4
Online ISBN: 978-3-031-25825-1
eBook Packages: Computer Science, Computer Science (R0)