Abstract
To evaluate a system that automatically summarizes video files (image and audio) and text, both how the system works and the quality of its results should be considered. With this objective, the authors have performed two types of evaluation: objective and subjective. The objective assessment is performed mainly automatically, while the subjective assessment is based directly on the opinions of people, who evaluate the system by answering a set of questions that are then processed to obtain the targeted conclusions. One of the purposes of the described research is to narrow the space of possible summarization scenarios.
Meanwhile, in the light of the subjective results obtained, the researchers cannot unambiguously indicate a single scenario recommended as the only one for further development. However, they can state with certainty that further development of scenario 1, which received many negative evaluations among professionals, should be discontinued. Considering the results of the set of questions about the quality of the complete system, the end-users rated scenario 3 as excellent, with results above 70 on a scale of 0 to 100.
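To make the subjective scoring concrete, the short sketch below illustrates one plausible way questionnaire answers could be aggregated into a per-scenario score on a 0 to 100 scale. The 1 to 5 rating range, the simple mean, and all names in the code are illustrative assumptions for this sketch, not the exact procedure used in the paper.

# Illustrative sketch only: aggregate subjective questionnaire answers
# into a per-scenario quality score on a 0-100 scale.
# Assumptions (not from the paper): ratings are on a 1-5 scale and are
# averaged with a simple mean after normalization.

from statistics import mean

def normalize(rating, lo=1, hi=5):
    """Map a single answer on a lo..hi scale to the 0..100 range."""
    return 100.0 * (rating - lo) / (hi - lo)

def scenario_score(answers):
    """answers: list of per-question ratings (each lo..hi) from all respondents."""
    return mean(normalize(r) for r in answers)

if __name__ == "__main__":
    # Hypothetical answers collected for one summarization scenario.
    scenario_3_answers = [4, 5, 4, 3, 5, 4, 4]
    print(f"Scenario 3 quality: {scenario_score(scenario_3_answers):.1f} / 100")

Under these assumptions, a scenario whose normalized mean exceeds 70 would correspond to the "over 70 on a scale of 0 to 100" result reported for scenario 3.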
Acknowledgement
Research work funded by CHIST-ERA call 2014 (project AMIS under the topic Human Language Understanding: Grounding Language Learning). Research work by Michał Grega and Mikołaj Leszczuk funded by the National Science Center, Poland (project registration number 2015/16/Z/ST7/00559).