Abnormal Action Recognition in Social Media Clips Using Deep Learning to Analyze Behavioral Change

  • Conference paper
  • First Online:
Good Practices and New Perspectives in Information Systems and Technologies (WorldCIST 2024)

Abstract

With the increasing popularity of social media platforms such as Instagram, there is a growing need for effective methods to detect and analyze abnormal actions in user-generated content. Deep learning, a family of machine learning methods based on artificial neural networks with representation learning, can capture the complex patterns this task requires. This article proposes a novel deep learning approach for detecting abnormal actions in social media clips, focusing on behavioral change analysis. The approach combines deep learning with textural, statistical, and edge features for semantic action detection in video clips; the operators used include the local gradient of video frames, frame-to-frame time differences, and the Sobel and Canny edge detectors. The method was evaluated on a large dataset of Instagram and Telegram clips and achieved an accuracy of about 86% in detecting abnormal actions. These results demonstrate the applicability of deep learning-based systems for detecting abnormal actions in social media clips.
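
The paper itself does not include code; the following is a minimal sketch of how the hand-crafted operators named in the abstract (Sobel local gradients, Canny edges, and frame-to-frame time differences) could be turned into per-frame feature vectors for a clip, using OpenCV and NumPy. The function names `extract_frame_features` and `clip_feature_sequence`, the statistical summaries chosen, and the Canny thresholds are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def extract_frame_features(prev_gray, gray):
    """Descriptors mentioned in the abstract for one frame pair (illustrative choices):
    Sobel gradient magnitude, Canny edge density, and the temporal difference
    with the previous frame, summarized as a small statistical feature vector."""
    # Local gradient via Sobel operators (horizontal and vertical derivatives).
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    grad_mag = cv2.magnitude(gx, gy)

    # Canny edge map; its density summarizes structural (edge) content.
    edges = cv2.Canny(gray, 100, 200)
    edge_density = float(np.count_nonzero(edges)) / edges.size

    # Time difference between consecutive frames captures motion intensity.
    diff = cv2.absdiff(gray, prev_gray).astype(np.float32)

    # Simple statistical summary (mean and standard deviation) of each map.
    return np.array([
        grad_mag.mean(), grad_mag.std(),
        edge_density,
        diff.mean(), diff.std(),
    ], dtype=np.float32)

def clip_feature_sequence(video_path):
    """Read a clip and return a (num_frames - 1, 5) array of per-frame features,
    which could then be fed to a temporal deep learning classifier."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if ok else None
    features = []
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        features.append(extract_frame_features(prev_gray, gray))
        prev_gray = gray
    cap.release()
    return np.stack(features) if features else np.empty((0, 5), np.float32)
```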


Acknowledgements

This article is partially a result of the project Sensitive Industry, co-funded by the European Regional Development Fund (ERDF), through the Operational Programme for Competitiveness and Internationalization (COMPETE 2020), under the PORTUGAL 2020 Partnership Agreement. The second author would like to thank “Fundação para a Ciência e Tecnologia” (FCT) for his Ph.D. grant with reference 2021.08660.BD.

Author information

Correspondence to João Manuel R. S. Tavares.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Gharahbagh, A.A., Hajihashemi, V., Ferreira, M.C., Machado, J.J.M., Tavares, J.M.R.S. (2024). Abnormal Action Recognition in Social Media Clips Using Deep Learning to Analyze Behavioral Change. In: Rocha, Á., Adeli, H., Dzemyda, G., Moreira, F., Poniszewska-Marańda, A. (eds) Good Practices and New Perspectives in Information Systems and Technologies. WorldCIST 2024. Lecture Notes in Networks and Systems, vol 990. Springer, Cham. https://doi.org/10.1007/978-3-031-60328-0_36
