Video Semantic Analysis: The Sparsity Based Locality-Sensitive Discriminative Dictionary Learning Factor

Daniel Danso Essel, Ben-Bright Benuwa, Benjamin Ghansah
Copyright: © 2021 | Volume: 11 | Issue: 2 | Pages: 21
ISSN: 2155-6997 | EISSN: 2155-6989 | EISBN13: 9781799862031 | DOI: 10.4018/IJCVIP.2021040101
Cite Article

MLA

Essel, Daniel Danso, et al. "Video Semantic Analysis: The Sparsity Based Locality-Sensitive Discriminative Dictionary Learning Factor." IJCVIP, vol. 11, no. 2, 2021, pp. 1-21. http://doi.org/10.4018/IJCVIP.2021040101

APA

Essel, D. D., Benuwa, B., & Ghansah, B. (2021). Video Semantic Analysis: The Sparsity Based Locality-Sensitive Discriminative Dictionary Learning Factor. International Journal of Computer Vision and Image Processing (IJCVIP), 11(2), 1-21. http://doi.org/10.4018/IJCVIP.2021040101

Chicago

Essel, Daniel Danso, Ben-Bright Benuwa, and Benjamin Ghansah. "Video Semantic Analysis: The Sparsity Based Locality-Sensitive Discriminative Dictionary Learning Factor." International Journal of Computer Vision and Image Processing (IJCVIP) 11, no. 2 (2021): 1-21. http://doi.org/10.4018/IJCVIP.2021040101


Abstract

Sparse Representation (SR) and Dictionary Learning (DL) based classifiers have shown promising results in classification tasks, with impressive recognition rates on image data. In Video Semantic Analysis (VSA), however, the local structure of video data contains significant discriminative information required for classification, and, to the best of our knowledge, this has not been fully explored by recent DL-based approaches. Furthermore, similar sparse codes are not consistently obtained for video features belonging to the same video category. Based on the foregoing, a novel learning algorithm, Sparsity-based Locality-Sensitive Discriminative Dictionary Learning (SLSDDL), is proposed in this paper for VSA. In the proposed algorithm, a category-level discriminant loss function defined on the sparse coefficients is incorporated into the structure of the Locality-Sensitive Dictionary Learning (LSDL) algorithm. Finally, the sparse coefficients of a test video feature sample are solved by the optimized SLSDDL method, and the video semantic classification result is obtained by minimizing the error between the original and reconstructed samples. The experimental results show that the proposed SLSDDL significantly improves the performance of video semantic detection compared with state-of-the-art approaches. The proposed approach is also robust to diverse video environments, demonstrating its generality.
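The classification rule described above, reconstructing the test feature with class-specific dictionary atoms and assigning the class with the smallest residual, can be illustrated with a minimal sketch. The snippet below is an illustration under stated assumptions, not the authors' SLSDDL optimizer: a generic l1 sparse coder (scikit-learn's Lasso) stands in for the SLSDDL coefficient solver, and the function name classify_by_reconstruction, the class labels, and the random sub-dictionaries are hypothetical.

import numpy as np
from sklearn.linear_model import Lasso  # generic l1 sparse coder; stand-in for the SLSDDL solver

def classify_by_reconstruction(x, class_dictionaries, alpha=0.1):
    # Assign x to the class whose sub-dictionary reconstructs it with the
    # smallest residual, i.e. minimize ||x - D_c s_c|| over the classes c.
    best_label, best_err = None, np.inf
    for label, D in class_dictionaries.items():
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(D, x)                  # l1-penalised least squares: x ~ D s
        s = coder.coef_                  # sparse coefficients for this class
        err = np.linalg.norm(x - D @ s)  # reconstruction error
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# Toy usage with random, hypothetical sub-dictionaries (64-dim features, 32 atoms per class).
rng = np.random.default_rng(0)
dicts = {c: rng.standard_normal((64, 32)) for c in ("class_a", "class_b")}
test_feature = rng.standard_normal(64)
print(classify_by_reconstruction(test_feature, dicts))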
