Abstract
Image caption generation is among the most rapidly growing research areas combining image processing with natural language processing (NLP) techniques. An effective combination of the two can revolutionize content creation, media analysis, and accessibility. This study proposes a novel model that generates image captions automatically by combining visual and linguistic features: visual features are extracted with a Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) network models linguistic features to generate the caption text. The Microsoft Common Objects in Context (MS COCO) dataset, with over 330,000 images and their corresponding captions, is used to train the proposed model. A comprehensive evaluation of several model combinations, including VGGNet + LSTM, ResNet + LSTM, GoogleNet + LSTM, VGGNet + RNN, AlexNet + RNN, and AlexNet + LSTM, was conducted across different batch sizes and learning rates, with performance assessed using the BLEU-2, METEOR, ROUGE-L, and CIDEr metrics. The proposed method demonstrated competitive performance, suggesting its potential for further exploration and refinement. These findings underscore the importance of careful hyperparameter tuning and model selection in image captioning tasks.
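Of the metrics named in the abstract, BLEU-2 is the simplest to state precisely: a brevity penalty times the geometric mean of clipped unigram and bigram precision. The sketch below is an illustrative sentence-level implementation under that definition (the example sentences are invented for demonstration); evaluation toolkits used in practice add smoothing and pool statistics at corpus level, so exact scores will differ.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu2(candidate, reference):
    """Sentence-level BLEU-2: geometric mean of clipped unigram and
    bigram precision, scaled by a brevity penalty (single reference)."""
    precisions = []
    for n in (1, 2):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        # Clipped precision: each candidate n-gram counts at most as
        # often as it appears in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0  # no smoothing in this sketch
    # Brevity penalty discourages captions shorter than the reference.
    bp = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

cand = "a dog runs on the grass".split()
ref = "a dog is running on the grass".split()
score = bleu2(cand, ref)  # ~0.60: high n-gram overlap, slight length penalty
```

A perfect match scores 1.0, and a caption sharing no unigrams with the reference scores 0.0, which matches the intuition that BLEU-2 rewards local word-order agreement rather than meaning.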
Funding
This research received no external funding.
Ethics declarations
Conflicts of interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
About this article
Cite this article
Mishra, A., Agrawal, A. & Bhasker, S. Hybrid explainable image caption generation using image processing and natural language processing. Int J Syst Assur Eng Manag 15, 4874–4884 (2024). https://doi.org/10.1007/s13198-024-02495-5