A Deep Structured Model for Video Captioning

V. Vinodhini, B. Sathiyabhama, S. Sankar, Ramasubbareddy Somula
Copyright: © 2020 | Volume: 12 | Issue: 2 | Pages: 13
ISSN: 1942-3888 | EISSN: 1942-3896 | EISBN13: 9781799806141 | DOI: 10.4018/IJGCMS.2020040103
Cite Article

MLA

Vinodhini, V., et al. "A Deep Structured Model for Video Captioning." IJGCMS, vol. 12, no. 2, 2020, pp. 44-56. http://doi.org/10.4018/IJGCMS.2020040103

APA

Vinodhini, V., Sathiyabhama, B., Sankar, S., & Somula, R. (2020). A Deep Structured Model for Video Captioning. International Journal of Gaming and Computer-Mediated Simulations (IJGCMS), 12(2), 44-56. http://doi.org/10.4018/IJGCMS.2020040103

Chicago

Vinodhini, V., et al. "A Deep Structured Model for Video Captioning," International Journal of Gaming and Computer-Mediated Simulations (IJGCMS) 12, no. 2 (2020): 44-56. http://doi.org/10.4018/IJGCMS.2020040103


Abstract

Video captions help viewers follow content in noisy environments or when the sound is muted, and they make material far more accessible to people with impaired hearing. Captions not only support content creators and translators but also boost search engine optimization. With the rapid growth of deep learning techniques, advanced areas such as computer vision and human-computer interaction play a vital role in this task. Numerous surveys of deep learning models have compared different methods, architectures, and metrics, yet generating video subtitles remains challenging, particularly with respect to activity recognition in video. This paper proposes a deep structured model that recognizes activities, classifies them automatically, and captions them within a single architecture. The first stage separates the foreground from the background by building a 3D convolutional neural network (CNN) model; a Gaussian mixture model is used to remove the backdrop. Classification is performed with long short-term memory (LSTM) networks, and a hidden Markov model (HMM) is used to generate high-quality data. A nonlinear activation function then performs the normalization step. Finally, the video caption is produced in natural language.
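The abstract's background-removal step (a Gaussian model per pixel that separates foreground from backdrop) can be illustrated with a toy sketch. The paper's actual implementation is not given here, so the class name, learning rate, and threshold below are illustrative assumptions; this simplified version keeps a single running Gaussian per pixel rather than a full mixture:

```python
class GaussianBackgroundModel:
    """Toy per-pixel background model: each pixel tracks a running mean and
    variance, and a pixel far from its mean (in standard deviations) is
    labelled foreground. A simplified stand-in for GMM background subtraction."""

    def __init__(self, width, height, alpha=0.05, threshold=2.5):
        self.alpha = alpha          # learning rate for the running statistics
        self.threshold = threshold  # cut-off in standard deviations
        self.mean = [[0.0] * width for _ in range(height)]
        self.var = [[225.0] * width for _ in range(height)]  # initial std = 15
        self.initialised = False

    def apply(self, frame):
        """Return a binary foreground mask for `frame` and update the model."""
        if not self.initialised:
            self.mean = [list(map(float, row)) for row in frame]
            self.initialised = True
        mask = []
        for y, row in enumerate(frame):
            mask_row = []
            for x, value in enumerate(row):
                diff = value - self.mean[y][x]
                std = self.var[y][x] ** 0.5
                is_fg = abs(diff) > self.threshold * std
                mask_row.append(1 if is_fg else 0)
                if not is_fg:
                    # Only background pixels update the running Gaussian,
                    # so transient foreground objects do not corrupt it.
                    self.mean[y][x] += self.alpha * diff
                    self.var[y][x] += self.alpha * (diff * diff - self.var[y][x])
            mask.append(mask_row)
        return mask
```

After a few frames of a static scene, a pixel that suddenly jumps in intensity is flagged as foreground while unchanged pixels stay background. Production systems typically use a full mixture of several Gaussians per pixel (e.g. the Stauffer-Grimson method) rather than this single-Gaussian simplification.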
