Abstract
One of the most recent trends in the evaluation of immersive virtual environments is the incorporation of user metrics. In this article, we present a user study of a 360° hypervideo, supported by a dashboard built from detailed metrics of users' interactions with the production. Evaluating the quality of experience in this way is essential to monitor service quality from the consumers' perspective. We demonstrate a framework that captures user experiences in 360° environments and encodes them with the xAPI specification, which facilitates the development of analytics solutions centered on the user experience, and we show how the graphs and related data that compose the dashboard provide valuable information about how users navigate and interact with 360° video experiences, as well as the time they invest in them. The user study covers the visual perception, attention, tracking and interaction of users watching Proemaid, a 360° multimedia production, collected from an interactive 360° video player in the form of xAPI statements. Proemaid has been screened in a wide variety of contexts by stakeholders from government, technology and education, among others. The quantitative results and the qualitative analysis of the user study are therefore intended to sketch how users navigate, interact with and invest time in 360° hypervideo productions. We consider that these metrics will be valuable for specifying new omnidirectional storyboards for producers of 360° content. Finally, we propose directions for empirical research that highlight the great potential of this approach in many fields.
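The interaction data mentioned above are recorded as xAPI statements. As a rough illustration only, not taken from the article, the following Python sketch shows the general shape such a statement might take for a 360° player event: the actor, activity id and extension URIs are placeholders, while the verb is the standard ADL "experienced" verb.

```python
import json
import uuid
from datetime import datetime, timezone

# Illustrative only: the verb is the standard ADL "experienced" verb, but the
# actor, activity id and extension URIs below are placeholders, not the ones
# used by the Proemaid player described in the article.
statement = {
    "id": str(uuid.uuid4()),
    "actor": {"objectType": "Agent", "mbox": "mailto:viewer@example.org"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/hypervideo/proemaid",  # placeholder activity id
        "definition": {"name": {"en-US": "Proemaid 360-degree hypervideo"}},
    },
    "result": {
        "extensions": {
            # Hypothetical extensions: current camera orientation and video time.
            "http://example.org/xapi/extensions/yaw": 42.5,
            "http://example.org/xapi/extensions/pitch": -3.0,
            "http://example.org/xapi/extensions/video-time": 37.2,
        }
    },
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Serialize the statement; in practice it would be sent to a Learning Record
# Store and later aggregated into the dashboard's graphs.
print(json.dumps(statement, indent=2))
```

A statement like this records one sample of where the viewer is looking at a given moment of the video, which is the kind of raw material the dashboard described in the article aggregates into navigation and attention graphs.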
Acknowledgments
The authors want to thank the people who appear in the documentary for sharing their stories with us. We also want to thank PROEM-AID for their work where they are much needed, and especially Manuel Elviro Vidal for recording the scenes shown in the documentary in situ.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
About this article
Cite this article
del Molino, J., Bibiloni, T. & Oliver, A. Keys for successful 360° hypervideo design: A user study based on an xAPI analytics dashboard. Multimed Tools Appl 79, 22771–22796 (2020). https://doi.org/10.1007/s11042-020-09059-2