Scene generative models for adaptive video fast forward

Abstract:

In this paper, we present a statistical generative model of scenes with multiple objects that can be efficiently used for tasks related to video search, browsing and retrieval. Instead of using a combination of weighted Euclidean distances as a shot similarity measure, we base the retrieval process on the likelihood of a video frame under the generative model trained on the query sequence. This allows for automatic separation and balancing of various causes of variability, such as occlusion, appearance change and motion. In previous work, this usually required complex user intervention. The likelihood models we study in this paper are based on appearances of multiple, possibly occluding objects in a video clip. Given a query, the video is played at a higher rate until similar frames are found. The playback speed is linked to the frame likelihood, so that the speed drops as likely target frames begin to appear.
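
To make the likelihood-driven fast-forward idea concrete, below is a minimal sketch, not the paper's actual object-based scene model: a plain Gaussian mixture over hypothetical per-frame feature vectors stands in for the generative model trained on the query clip, and the per-frame log-likelihood is mapped to a playback speed that falls toward normal speed as frames resemble the query. The function names (fit_query_model, playback_speeds) and the speed range are illustrative assumptions, not from the paper.

```python
# Sketch of likelihood-driven adaptive fast forward. A Gaussian mixture over
# per-frame feature vectors is used as a stand-in for the paper's generative
# model of multiple, possibly occluding objects.
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_query_model(query_features, n_components=3, seed=0):
    """Fit a stand-in generative model to feature vectors of the query frames."""
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(query_features)
    return gmm


def playback_speeds(model, video_features, max_speed=8.0, min_speed=1.0):
    """Map per-frame log-likelihood to a playback speed: the higher the
    likelihood (frame resembles the query), the closer to normal speed."""
    log_lik = model.score_samples(video_features)      # one value per frame
    lo, hi = log_lik.min(), log_lik.max()
    relevance = (log_lik - lo) / (hi - lo + 1e-12)     # normalise to [0, 1]
    # Speed drops from max_speed toward min_speed as relevance rises.
    return max_speed - (max_speed - min_speed) * relevance


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    query = rng.normal(0.0, 1.0, size=(50, 16))        # query-frame features
    video = rng.normal(0.5, 1.5, size=(500, 16))       # full-video features
    model = fit_query_model(query)
    speeds = playback_speeds(model, video)
    print(speeds[:10])
```

In this toy setup the feature vectors are random; in practice they would be extracted from decoded frames, and the speed schedule would be smoothed over time so playback does not jitter frame to frame.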
Date of Conference: 14-17 September 2003
Date Added to IEEE Xplore: 24 November 2003
Print ISBN: 0-7803-7750-8
Print ISSN: 1522-4880
Publisher: IEEE
Conference Location: Barcelona, Spain
