DOI: 10.1145/1631272.1631370

Temporal spectral residual: fast motion saliency detection

Published: 19 October 2009

Abstract

Saliency detection has attracted much attention in recent years. It aims to locate the semantically important regions of an image for further image understanding. In this paper, we address motion saliency detection for video content analysis. Inspired by the Spectral Residual approach to image saliency detection, we propose a new method, Temporal Spectral Residual, which operates on video slices along the X-T and Y-T planes and, combined with threshold selection and voting schemes, automatically separates moving foreground objects from the background. Unlike conventional background modeling methods that rely on complex mathematical models, the proposed method is based solely on Fourier spectrum analysis, making it simple and fast. Experiments on four typical videos with different dynamic backgrounds demonstrate the effectiveness of the proposed method.
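To make the pipeline described above concrete, the following Python sketch applies the Spectral Residual transform of Hou and Zhang [4] to every X-T and Y-T slice of a grayscale video volume and then combines the two saliency volumes with a simple threshold-and-vote rule. It is a minimal illustration, not the authors' implementation: the helper names, the 0.5 threshold, the 3x3 averaging window, the Gaussian smoothing width, and the requirement that both slice directions agree are all assumptions made for clarity.

```python
# Minimal sketch of the Temporal Spectral Residual idea (assumed parameters).
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter


def spectral_residual(image: np.ndarray) -> np.ndarray:
    """Spectral Residual saliency map of a single 2-D slice, following [4]."""
    spectrum = np.fft.fft2(image.astype(np.float64))
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    # Residual = log amplitude spectrum minus its local (3x3) average.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(saliency, sigma=2.5)  # smoothing width is an assumption


def temporal_spectral_residual(video: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binary motion-saliency masks for a grayscale video of shape (T, H, W).

    Spectral Residual is computed on every X-T slice (fixed row) and every
    Y-T slice (fixed column); a voxel is kept as foreground only if both
    normalized saliency values exceed `threshold` (an illustrative voting rule).
    """
    T, H, W = video.shape
    sal_xt = np.zeros((T, H, W), dtype=np.float64)
    sal_yt = np.zeros((T, H, W), dtype=np.float64)

    for y in range(H):                      # X-T slices: video[:, y, :]
        sal_xt[:, y, :] = spectral_residual(video[:, y, :])
    for x in range(W):                      # Y-T slices: video[:, :, x]
        sal_yt[:, :, x] = spectral_residual(video[:, :, x])

    # Normalize each saliency volume to [0, 1] before voting.
    sal_xt /= sal_xt.max() + 1e-8
    sal_yt /= sal_yt.max() + 1e-8
    return (sal_xt > threshold) & (sal_yt > threshold)


# Example usage (hypothetical input):
# masks = temporal_spectral_residual(gray_video)   # gray_video: (T, H, W) array
```

Because each slice is handled by a single 2-D FFT with no iterative model fitting, the slices can be processed independently and in parallel, which is consistent with the simplicity and speed the abstract emphasizes.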

References

[1] http://homepages.inf.ed.ac.uk/rbf/caviardata1/.
[2] A. Elgammal, R. Duraiswami, D. Harwood, and L. S. Davis. Background and foreground modeling using nonparametric kernel density estimation for visual surveillance. Proceedings of the IEEE, 2002.
[3] D. Gao and N. Vasconcelos. Discriminant saliency for visual recognition from cluttered scenes. In NIPS, 2004.
[4] X. Hou and L. Zhang. Saliency detection: A spectral residual approach. In CVPR, 2007.
[5] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 2000.
[6] L. Itti and C. Koch. Computational modelling of visual attention. Nature Reviews Neuroscience, 2001.
[7] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. PAMI, 1998.
[8] A. Mittal and N. Paragios. Motion-based background subtraction using adaptive kernel density estimation. In CVPR, 2004.
[9] A. Monnet, A. Mittal, N. Paragios, and V. Ramesh. Background modeling and subtraction of dynamic scenes. In ICCV, 2003.
[10] C. Stauffer and W. Grimson. Adaptive background mixture models for real-time tracking. In CVPR, 1999.
[11] O. Tuzel, F. Porikli, and P. Meer. A Bayesian approach to background modeling. In CVPR, 2005.
[12] D. Walther, L. Itti, M. Riesenhuber, T. Poggio, and C. Koch. Attentional selection for object recognition -- a gentle way. In 2nd Workshop on Biologically Motivated Computer Vision, 2002.
[13] J. Zhong and S. Sclaroff. Segmenting foreground objects from a dynamic textured background via a robust Kalman filter. In ICCV, 2003.



    Published In

    cover image ACM Conferences
    MM '09: Proceedings of the 17th ACM international conference on Multimedia
    October 2009
    1202 pages
    ISBN:9781605586083
    DOI:10.1145/1631272
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 19 October 2009


    Author Tags

    1. motion saliency detection
    2. temporal spectral residual
    3. video analysis

    Qualifiers

    • Short-paper

    Conference

MM '09: ACM Multimedia Conference
October 19-24, 2009
Beijing, China

    Acceptance Rates

Overall acceptance rate: 2,145 of 8,556 submissions, 25%


Bibliometrics & Citations

Article Metrics

• Downloads (last 12 months): 17
• Downloads (last 6 weeks): 0

Reflects downloads up to 01 Mar 2025

Cited By

• (2024) Metaverse Framework Designing for Energy Scheduling in Energy Internet of Things Considering Emergence. Digital Twin, 3(6). DOI: 10.12688/digitaltwin.17873.2. Online publication date: 26-Jul-2024.
• (2024) Endow SAM with Keen Eyes: Temporal-Spatial Prompt Learning for Video Camouflaged Object Detection. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19058-19067. DOI: 10.1109/CVPR52733.2024.01803. Online publication date: 16-Jun-2024.
• (2024) Hybrid time-spatial video saliency detection method to enhance human action recognition systems. Multimedia Tools and Applications, 83(30), 74053-74073. DOI: 10.1007/s11042-024-18126-x. Online publication date: 14-Feb-2024.
• (2023) Metaverse Framework Designing for Energy Scheduling in Energy Internet of Things Considering Emergence. Digital Twin, 3(6). DOI: 10.12688/digitaltwin.17873.1. Online publication date: 7-Aug-2023.
• (2022) A Gated Fusion Network for Dynamic Saliency Prediction. IEEE Transactions on Cognitive and Developmental Systems, 14(3), 995-1008. DOI: 10.1109/TCDS.2021.3094974. Online publication date: Sep-2022.
• (2022) A compact deep architecture for real-time saliency prediction. Signal Processing: Image Communication, 104, 116671. DOI: 10.1016/j.image.2022.116671. Online publication date: May-2022.
• (2021) Active Contour Model Using Fast Fourier Transformation for Salient Object Detection. Electronics, 10(2), 192. DOI: 10.3390/electronics10020192. Online publication date: 15-Jan-2021.
• (2019) Learning Coupled Convolutional Networks Fusion for Video Saliency Prediction. IEEE Transactions on Circuits and Systems for Video Technology, 29(10), 2960-2971. DOI: 10.1109/TCSVT.2018.2870954. Online publication date: Oct-2019.
• (2019) Real-Time Video Saliency Prediction Via 3D Residual Convolutional Neural Network. IEEE Access, 7, 147743-147754. DOI: 10.1109/ACCESS.2019.2946479. Online publication date: 2019.
• (2019) A motion and lightness saliency approach for forest smoke segmentation and detection. Multimedia Tools and Applications. DOI: 10.1007/s11042-019-08047-5. Online publication date: 9-Aug-2019.
