
Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 703)

Abstract

In this paper, we propose a novel V-LESS technique for generating event summaries from monocular videos, employing Linear Discriminant Analysis (LDA) as the machine learning approach. First, the video is broken into frames and the features of each frame are analyzed. These frames are then fed to the model, which uses LDA to classify them as active or inactive frames. Clusters are formed from the active frames, and the events are obtained from their key-frames, under the assumption that a key-frame is either the centroid of an event or the frame nearest to that centroid. Users can choose the number of key-frames without incurring additional computational overhead. Experimental results on two benchmark datasets show that our model outperforms state-of-the-art models on Precision and F-measure. It also successfully condenses the video content while retaining the interesting information as events. Its computational complexity indicates that V-LESS meets the requirements of real-time applications.
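The pipeline described in the abstract lends itself to a compact illustration. The following is a minimal sketch, assuming scikit-learn's LinearDiscriminantAnalysis and KMeans as stand-ins for the paper's exact classifier and clustering stage; the frame features, the labeled training frames (train_features, train_labels), and the event count n_events are hypothetical inputs, not details taken from the paper.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.cluster import KMeans

    def vless_summary(frame_features, train_features, train_labels, n_events):
        """Return one key-frame index per event (hypothetical V-LESS sketch)."""
        # Step 1: LDA classifies every frame as active (1) or inactive (0);
        # inactive frames are discarded.
        lda = LinearDiscriminantAnalysis()
        lda.fit(train_features, train_labels)
        active_idx = np.flatnonzero(lda.predict(frame_features) == 1)

        # Step 2: cluster the active frames into the requested number of
        # events (assumes at least n_events active frames remain).
        kmeans = KMeans(n_clusters=n_events, n_init=10, random_state=0)
        labels = kmeans.fit_predict(frame_features[active_idx])

        # Step 3: the key-frame of each event is the active frame nearest
        # to (or coinciding with) the cluster centroid.
        key_frames = []
        for event in range(n_events):
            members = active_idx[labels == event]
            dists = np.linalg.norm(
                frame_features[members] - kmeans.cluster_centers_[event], axis=1)
            key_frames.append(int(members[np.argmin(dists)]))
        return sorted(key_frames)

Under this reading, the user's choice of the number of key-frames maps directly to n_events, so varying it only changes the clustering step rather than adding a separate pass over the video.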


Notes

  1. http://www.open-video.org.

  2. https://sites.google.com/site/vsummsite/download.


Author information


Corresponding author

Correspondence to Krishan Kumar.


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Kumar, K., Shrimankar, D.D., Singh, N. (2018). V-LESS: A Video from Linear Event Summaries. In: Chaudhuri, B., Kankanhalli, M., Raman, B. (eds) Proceedings of 2nd International Conference on Computer Vision & Image Processing. Advances in Intelligent Systems and Computing, vol 703. Springer, Singapore. https://doi.org/10.1007/978-981-10-7895-8_30


  • DOI: https://doi.org/10.1007/978-981-10-7895-8_30


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-7894-1

  • Online ISBN: 978-981-10-7895-8

  • eBook Packages: Engineering (R0)
