User-Generated Content (UGC)/In-The-Wild Video Content Recognition

  • Conference paper
  • Intelligent Information and Database Systems (ACIIDS 2022)

Abstract

According to Cisco, IP traffic is expected to roughly triple over the five years from 2017 to 2022, and a large share of the IP video traffic generated by users consists of user-generated content (UGC). Although early UGC was typically characterised by amateur acquisition conditions and unprofessional processing, widely available knowledge and affordable equipment now allow users to create UGC of a quality practically indistinguishable from professional content. Because some UGC is indistinguishable from professional material, we are not interested in all UGC, but only in content whose quality clearly differs from professional production. For such content we use the term “in the wild”, a concept closely related to UGC and, in fact, a special case of it. In this paper, we show that it is possible to deliver a new objective model for recognising “in-the-wild” video content. Our model achieves an F-measure of 0.988. It is trained and tested on video sequence databases containing both professional and “in-the-wild” content. These results are obtained with the random forest learning method; notably, using the more explainable decision tree learning method does not cause a significant decrease in performance (an F-measure of 0.973).
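
As an illustration of the classification set-up described in the abstract, the following minimal sketch (not the authors' code or data) trains a random forest and a decision tree classifier and compares their F-measures, where the F-measure is the harmonic mean of precision and recall. It assumes scikit-learn; the feature matrix, labels, and train/test split are hypothetical placeholders for the per-sequence quality indicators extracted from the video databases.

    # Minimal sketch, assuming scikit-learn. X and y are placeholders for
    # per-sequence video quality indicators and labels ("in the wild" = 1,
    # professional = 0); they stand in for the real video databases.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))                 # placeholder feature matrix
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # placeholder labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)

    for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                      ("decision tree", DecisionTreeClassifier(random_state=0))]:
        clf.fit(X_train, y_train)
        f = f1_score(y_test, clf.predict(X_test))
        print(f"{name}: F-measure = {f:.3f}")

On real features, the same comparison exposes the trade-off reported in the paper: the random forest scores higher (0.988), while the single decision tree remains easier to inspect and explain (0.973).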

Supported by Polish National Centre for Research and Development (TANGO-IV-A/0038/2019-00).

Notes

  1. Video source: https://youtu.be/8GKKdnjoeH0, https://youtu.be/psKb_bSFUsU and https://youtu.be/lVuk2KXBlL8.

References

  1. Berthon, P., Pitt, L., Kietzmann, J., McCarthy, I.P.: CGIP: managing consumer-generated intellectual property. Calif. Manage. Rev. 57(4), 43–62 (2015)

  2. U. Cisco: Cisco annual internet report (2018–2023) white paper. Cisco, San Jose (2020)

  3. Ghadiyaram, D., Pan, J., Bovik, A.C., Moorthy, A.K., Panda, P., Yang, K.C.: In-capture mobile video distortions: a study of subjective behavior and objective algorithms. IEEE Trans. Circuits Syst. Video Technol. 28, 2061–2077 (2018). https://doi.org/10.1109/TCSVT.2017.2707479

  4. Guo, J., Gurrin, C.: Short user-generated videos classification using accompanied audio categories. In: Proceedings of the 2012 ACM International Workshop on Audio and Multimedia Methods for Large-Scale Video Analysis, pp. 15–20 (2012)

  5. Guo, J., Gurrin, C., Lao, S.: Who produced this video, amateur or professional? In: Proceedings of the 3rd ACM Conference on International Conference on Multimedia Retrieval, pp. 271–278 (2013)

  6. Hosu, V., et al.: The Konstanz natural video database (KoNViD-1k). In: 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1–6 (2017)

  7. Janowski, L., Papir, Z.: Modeling subjective tests of quality of experience with a generalized linear model. In: 2009 International Workshop on Quality of Multimedia Experience, pp. 35–40, July 2009. https://doi.org/10.1109/QOMEX.2009.5246979

  8. Kim, J.H., Seo, Y.S., Yoo, W.Y.: Professional and amateur-produced video classification for copyright protection. In: 2014 International Conference on Information and Communication Technology Convergence (ICTC), pp. 95–96. IEEE (2014)

  9. Koźbiał, A., Leszczuk, M.: Collection, analysis and summarization of video content. In: Choroś, K., Kopel, M., Kukla, E., Siemiński, A. (eds.) MISSI 2018. AISC, vol. 833, pp. 405–414. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-98678-4_41

  10. Krumm, J., Davies, N., Narayanaswami, C.: User-generated content. IEEE Pervasive Comput. 7(4), 10–11 (2008)

  11. Leszczuk, M.: Assessing task-based video quality — a journey from subjective psycho-physical experiments to objective quality models. In: Dziech, A., Czyżewski, A. (eds.) MCSS 2011. CCIS, vol. 149, pp. 91–99. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21512-4_11

  12. Leszczuk, M., Hanusiak, M., Farias, M.C.Q., Wyckens, E., Heston, G.: Recent developments in visual quality monitoring by key performance indicators. Multimedia Tools Appl. 75(17), 10745–10767 (2014). https://doi.org/10.1007/s11042-014-2229-2

  13. Li, D., Jiang, T., Jiang, M.: Quality assessment of in-the-wild videos. In: Proceedings of the 27th ACM International Conference on Multimedia (MM 2019), pp. 2351–2359 (2019)

  14. Egger, M., Schoder, D.: Who are we listening to? Detecting user-generated content (UGC) on the web. ECIS 2015 Completed Research Papers (2015)

  15. Mu, M., Romaniak, P., Mauthe, A., Leszczuk, M., Janowski, L., Cerqueira, E.: Framework for the integrated video quality assessment. Multimedia Tools Appl. 61(3), 787–817 (2012). https://doi.org/10.1007/s11042-011-0946-3

  16. Nawała, J., Leszczuk, M., Zajdel, M., Baran, R.: Software package for measurement of quality indicators working in no-reference model. Multimedia Tools Appl., December 2016. https://doi.org/10.1007/s11042-016-4195-3

  17. Nuutinen, M., Virtanen, T., Vaahteranoksa, M., Vuori, T., Oittinen, P., Hakkinen, J.: CVD 2014 - a database for evaluating no-reference video quality assessment algorithms. IEEE Trans. Image Process. 25, 3073–3086 (2016). https://doi.org/10.1109/TIP.2016.2562513

  18. Pinson, M.H., Boyd, K.S., Hooker, J., Muntean, K.: How to choose video sequences for video quality assessment. In: Proceedings of the Seventh International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM-2013), pp. 79–85 (2013)

  19. Romaniak, P., Janowski, L., Leszczuk, M., Papir, Z.: Perceptual quality assessment for H.264/AVC compression. In: 2012 IEEE Consumer Communications and Networking Conference (CCNC), pp. 597–602, January 2012. https://doi.org/10.1109/CCNC.2012.6181021

  20. Sinno, Z., Bovik, A.C.: Large-scale study of perceptual video quality. IEEE Trans. Image Process. 28, 612–627 (2019). https://doi.org/10.1109/TIP.2018.2869673

  21. Tu, Z., Chen, C.J., Wang, Y., Birkbeck, N., Adsumilli, B., Bovik, A.C.: Video quality assessment of user generated content: a benchmark study and a new model. In: 2021 IEEE International Conference on Image Processing (ICIP), pp. 1409–1413. IEEE, September 2021. https://doi.org/10.1109/ICIP42928.2021.9506189. https://ieeexplore.ieee.org/document/9506189/

  22. Wang, Y., Inguva, S., Adsumilli, B.: YouTube UGC dataset for video compression research. In: 2019 IEEE 21st International Workshop on Multimedia Signal Processing (MMSP), pp. 1–5. IEEE, September 2019. https://doi.org/10.1109/MMSP.2019.8901772. https://ieeexplore.ieee.org/document/8901772/

  23. Wikipedia Contributors: Precision and recall – Wikipedia, the free encyclopedia (2020). https://en.wikipedia.org/w/index.php?title=Precision_and_recall&oldid=965503278. Accessed 6 July 2020

  24. Yi, F., Chen, M., Sun, W., Min, X., Tian, Y., Zhai, G.: Attention based network for no-reference UGC video quality assessment. In: 2021 IEEE International Conference on Image Processing (ICIP), pp. 1414–1418. IEEE, September 2021. https://doi.org/10.1109/ICIP42928.2021.9506420. https://ieeexplore.ieee.org/document/9506420/

  25. Ying, Z., Mandal, M., Ghadiyaram, D., Bovik, A.: Patch-VQ: ‘patching up’ the video quality problem. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14019–14029, June 2021. http://arxiv.org/abs/2011.13544

  26. Zhang, M.: Swiss TV station replaces cameras with iphones and selfie sticks. Downloaded on 1 October 2015 (2015)

  27. Zhao, K., Zhang, P., Lee, H.M.: Understanding the impacts of user-and marketer-generated content on free digital content consumption. Decis. Support Syst. 154, 113684 (2022)

Author information

Corresponding author

Correspondence to Mikołaj Leszczuk.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Leszczuk, M., Janowski, L., Nawała, J., Grega, M. (2022). User-Generated Content (UGC)/In-The-Wild Video Content Recognition. In: Nguyen, N.T., Tran, T.K., Tukayev, U., Hong, T.P., Trawiński, B., Szczerbicki, E. (eds) Intelligent Information and Database Systems. ACIIDS 2022. Lecture Notes in Computer Science, vol. 13758. Springer, Cham. https://doi.org/10.1007/978-3-031-21967-2_29

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-21967-2_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21966-5

  • Online ISBN: 978-3-031-21967-2

  • eBook Packages: Computer Science, Computer Science (R0)
