
VisDmk: visual analysis of massive emotional danmaku in online videos

  • Original article

Abstract

Danmaku is a real-time commenting feature that overlays viewer comments on a video: each comment enters from the right edge like a bullet and scrolls horizontally until it slides off the left. The format has been gaining popularity in Asia, and research on analyzing massive danmaku data has grown rapidly in recent years. Danmaku data carries a wealth of valuable information, such as viewers' emotional expressions, attitudes, and opinions, which helps people quickly grasp a video's content and its effect on the audience. As the volume of danmaku grows over time, this information becomes more representative and comprehensive; however, extracting valuable danmaku from such huge amounts of data is challenging. In this paper, we therefore introduce VisDmk, an interactive visual analysis system that helps users analyze video content and effect. VisDmk incorporates five views: a projection view that exhibits the emotion distribution, a detail view for analyzing specific danmaku, an individual view that illustrates differences between viewers, a theme-aware view that identifies themes in different periods, and a video view for drawing inferences within a video. Case studies and a user observation study were conducted to evaluate the system.
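To make the kind of preprocessing behind these views concrete, the sketch below shows one plausible step: assigning each danmaku comment a coarse emotion label from a small lexicon and tallying labels over fixed-length time windows of the video, the sort of per-period summary a projection or theme-aware view could consume. This is a minimal illustrative sketch, not the paper's actual pipeline; the Danmaku fields, the toy EMOTION_LEXICON, and the 30-second window are assumptions made for the example.

```python
# Hypothetical sketch (not the paper's implementation): label each danmaku
# comment with a coarse emotion and count labels per fixed time window.

from collections import Counter, defaultdict
from dataclasses import dataclass

# Toy lexicon for illustration; a real system would use a full affective dictionary.
EMOTION_LEXICON = {
    "haha": "joy",
    "lol": "joy",
    "scary": "fear",
    "sad": "sadness",
    "angry": "anger",
    "wow": "surprise",
}

@dataclass
class Danmaku:
    video_time: float  # seconds into the video when the comment appears
    user_id: str
    text: str

def classify_emotion(text: str) -> str:
    """Return the majority emotion among matched lexicon words, or 'neutral'."""
    hits = Counter(
        emotion
        for word, emotion in EMOTION_LEXICON.items()
        if word in text.lower()
    )
    return hits.most_common(1)[0][0] if hits else "neutral"

def emotion_timeline(comments, window=30.0):
    """Count emotion labels per fixed-length time window of the video."""
    buckets = defaultdict(Counter)
    for c in comments:
        buckets[int(c.video_time // window)][classify_emotion(c.text)] += 1
    return dict(sorted(buckets.items()))

if __name__ == "__main__":
    sample = [
        Danmaku(12.0, "u1", "haha this is great"),
        Danmaku(15.5, "u2", "so scary"),
        Danmaku(61.0, "u1", "wow, did not expect that"),
    ]
    for window_index, counts in emotion_timeline(sample).items():
        print(window_index, dict(counts))
```

In a full system, the toy lexicon would be replaced by a proper affective dictionary or a learned classifier, and the per-window counts would be passed to the visualization layer.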



Acknowledgements

Thanks to Associate Professor Wei Liu of the University of Technology Sydney for his guidance.

Funding

This work was supported in part by a grant from the National Key R&D Program of China and the National Science Foundation of China under Grants 61802334 and 61902340, in part by the Natural Science Foundation of Hebei Province under Grant F2022203015, and in part by the Innovation Capability Improvement Plan Project of Hebei Province under Grant 22567637H.

Author information


Contributions

Not applicable

Corresponding author

Correspondence to Dongliang Guo.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethics approval

Not applicable

Consent to participate

All participants gave their written informed consent to the experimental procedure.

Consent for publication

The manuscript is approved by all authors for publication.

Availability of data and materials

Not applicable

Code availability

Not applicable

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: the information about the author Amit Kumar Singh was not correct.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (mp4 40570 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Cao, S., Guo, D., Cao, L. et al. VisDmk: visual analysis of massive emotional danmaku in online videos. Vis Comput 39, 6553–6570 (2023). https://doi.org/10.1007/s00371-022-02748-z


