ED-NET: Educational Teaching Video Classification Network

  • Conference paper
  • Part of the book: Computer Vision and Machine Intelligence
  • Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 586)

Abstract

Convolutional neural networks (CNNs) are widely used in computer vision problems. Video and image data dominate the Internet, which has led to the extensive use of deep learning (DL) models for tasks such as image recognition, image segmentation, and video classification. Encouraged by the strong performance of CNNs, we have developed ED-NET to classify videos as teaching or non-teaching. Alongside the model, we have built Teach-VID, a novel dataset of teaching videos collected through Gyaan, an online end-to-end e-learning platform developed by our organization, GahanAI. The goal is to prevent non-teaching videos from being played on our portal. The proposed models, together with the dataset, provide benchmarking results. Two models are presented: one based on 3D-CNN and the other on 2D-CNN combined with LSTM. The results suggest that both models can be used in real-time settings: the 3D-CNN model reaches an accuracy of 98.87%, and the 2D-CNN model reaches 96.34%. The loss curves of both models indicate neither overfitting nor underfitting. The proposed models and dataset can provide useful results for video classification of teaching versus non-teaching content.
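
This page does not reproduce the authors' implementation, but the abstract names two architecture families: a 3D-CNN and a 2D-CNN combined with an LSTM. The PyTorch sketch below is purely illustrative and is not the authors' code; the layer counts, channel widths, and input size (16-frame RGB clips) are assumptions chosen only to show the general shape of each family.

```python
# Hypothetical sketch of the two model families described in the abstract
# (not the published ED-NET implementation). Input clips have shape
# (batch, channels, frames, height, width).
import torch
import torch.nn as nn


class Simple3DCNN(nn.Module):
    """3D convolutions learn spatio-temporal features from the clip jointly."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),  # global pooling over time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (B, 3, T, H, W)
        return self.classifier(self.features(x).flatten(1))


class CNN2DLSTM(nn.Module):
    """A 2D CNN encodes each frame; an LSTM aggregates the frame features."""

    def __init__(self, num_classes: int = 2, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # one 32-d feature vector per frame
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):  # x: (B, 3, T, H, W)
        b, c, t, h, w = x.shape
        frames = x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        feats = self.encoder(frames).flatten(1).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.classifier(h_n[-1])  # classify from the last hidden state


if __name__ == "__main__":
    clip = torch.randn(2, 3, 16, 112, 112)  # two 16-frame RGB clips
    print(Simple3DCNN()(clip).shape)  # torch.Size([2, 2])
    print(CNN2DLSTM()(clip).shape)    # torch.Size([2, 2])
```

The design difference is the point of the comparison: the 3D-CNN convolves over space and time jointly, while the 2D-CNN + LSTM variant encodes frames independently and leaves temporal aggregation to the recurrent layer.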

Supported by GahanAI, Bengaluru, India.

Author information

Corresponding author

Correspondence to Anmol Gautam.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Gautam, A., Hazra, S., Verma, R., Maji, P., Balabantaray, B.K. (2023). ED-NET: Educational Teaching Video Classification Network. In: Tistarelli, M., Dubey, S.R., Singh, S.K., Jiang, X. (eds) Computer Vision and Machine Intelligence. Lecture Notes in Networks and Systems, vol 586. Springer, Singapore. https://doi.org/10.1007/978-981-19-7867-8_12
