DOI: 10.1145/1101149.1101306
Article

Robust subspace analysis for detecting visual attention regions in images

Published: 06 November 2005

Abstract

Detecting visually attentive regions of an image is a challenging but useful problem in many multimedia applications. In this paper, we describe a method for extracting visually attentive regions in images using subspace estimation and analysis techniques. The image is represented in a 2D space through a polar transformation of its features, so that each region of the image lies in a 1D linear subspace. A new subspace estimation algorithm based on Generalized Principal Component Analysis (GPCA) is proposed. The robustness of the subspace estimation is improved by a weighted least-squares approximation, with weights computed from the distribution of the K nearest neighbors to reduce sensitivity to outliers. A new region attention measure is then defined to compute the visual attention of each region, considering both the feature contrast and the geometric properties of the regions. Experiments show that the method is effective and overcomes the scale dependency of other methods. Compared with existing visual attention detection methods, it directly measures global visual contrast at the region level rather than at the pixel level, and can correctly extract the attentive regions.
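
As a rough illustration of the subspace machinery described in the abstract, the sketch below fits a union of 1D linear subspaces (lines through the origin) to 2D points with a GPCA-style polynomial fit, and down-weights likely outliers using K-nearest-neighbor distances before the weighted least-squares step. This is a minimal sketch under our own assumptions, not the paper's implementation: the function names (knn_weights, fit_lines_gpca_2d), the exponential K-NN weighting, and the toy data are illustrative, and the actual method operates on polar-transformed image features and adds the region attention measure.

```python
import numpy as np

def knn_weights(points, k=10, sigma=None):
    """Down-weight likely outliers: points whose k-th nearest neighbour is far
    away get small weights.  (Illustrative choice; the paper derives its
    weights from the distribution of the K nearest neighbours.)"""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)                    # d[:, 0] is the zero distance to itself
    dk = d[:, k]                      # distance to the k-th nearest neighbour
    if sigma is None:
        sigma = np.median(dk) + 1e-12
    return np.exp(-(dk / sigma) ** 2)

def fit_lines_gpca_2d(points, n_lines, weights=None):
    """GPCA-style fit of n_lines one-dimensional subspaces (lines through the
    origin) to 2D points.  The union of the lines is the zero set of a
    degree-n homogeneous polynomial whose coefficient vector is the null
    direction of the (weighted) Veronese data matrix."""
    x, y = points[:, 0], points[:, 1]
    # Veronese embedding of degree n: monomials x^n, x^(n-1) y, ..., y^n
    V = np.stack([x ** (n_lines - j) * y ** j for j in range(n_lines + 1)], axis=1)
    if weights is not None:
        V = V * np.sqrt(weights)[:, None]          # weighted least squares
    _, _, vt = np.linalg.svd(V, full_matrices=False)
    c = vt[-1]                                     # smallest singular vector
    # With t = y/x the polynomial reads p(1, t) = sum_j c_j t^j, so each real
    # root t is the slope of one line y = t x (vertical lines are ignored).
    roots = np.roots(c[::-1])
    slopes = roots[np.abs(roots.imag) < 1e-8].real
    normals = np.stack([slopes, -np.ones_like(slopes)], axis=1)  # b with b.(x,y)=0
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)

# Toy usage: two noisy lines through the origin plus a handful of outliers.
rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, 200)
pts = np.concatenate([
    np.stack([t[:100],  2.0 * t[:100]], axis=1),   # line with slope  2.0
    np.stack([t[100:], -0.5 * t[100:]], axis=1),   # line with slope -0.5
]) + 0.01 * rng.standard_normal((200, 2))
pts = np.concatenate([pts, rng.uniform(-3.0, 3.0, (10, 2))])    # gross outliers
w = knn_weights(pts, k=8)
print(fit_lines_gpca_2d(pts, n_lines=2, weights=w))
```

On the toy data, the recovered unit normals should come out roughly proportional to (2, -1) and (-0.5, -1), matching the two generating slopes; the K-NN weights suppress the influence of the scattered outliers on the SVD fit.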





      Published In

      MULTIMEDIA '05: Proceedings of the 13th annual ACM international conference on Multimedia
      November 2005
      1110 pages
      ISBN:1595930442
      DOI:10.1145/1101149
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Author Tags

      1. GPCA
      2. subspace analysis
      3. visual attention

      Qualifiers

      • Article

      Conference

      MM05

      Acceptance Rates

MULTIMEDIA '05 Paper Acceptance Rate: 49 of 312 submissions, 16%
Overall Acceptance Rate: 1,291 of 5,076 submissions, 25%


      Cited By

• (2025) SIHENet: Semantic Interaction and Hierarchical Embedding Network for 360° Salient Object Detection. IEEE Transactions on Instrumentation and Measurement, 74:1-15. DOI: 10.1109/TIM.2024.3507047. Online publication date: 2025.
• (2024) Salient object detection: a mini review. Frontiers in Signal Processing, 4. DOI: 10.3389/frsip.2024.1356793. Online publication date: 10-May-2024.
• (2024) Context Proposals for video saliency segmentation. 2024 International Conference on Control, Automation and Diagnosis (ICCAD), 1-8. DOI: 10.1109/ICCAD60883.2024.10553765. Online publication date: 15-May-2024.
• (2024) Cross-scale resolution consistent network for salient object detection. IET Image Processing. DOI: 10.1049/ipr2.13136. Online publication date: 16-Jun-2024.
• (2023) Salient Objects in Clutter. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(2):2344-2366. DOI: 10.1109/TPAMI.2022.3166451. Online publication date: 1-Feb-2023.
• (2022) Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network. Electronics, 11(21):3637. DOI: 10.3390/electronics11213637. Online publication date: 7-Nov-2022.
• (2020) Salient Object Detection Techniques in Computer Vision—A Survey. Entropy, 22(10):1174. DOI: 10.3390/e22101174. Online publication date: 19-Oct-2020.
• (2019) Salient object detection: A survey. Computational Visual Media, 5(2):117-150. DOI: 10.1007/s41095-019-0149-9. Online publication date: 21-Jun-2019.
• (2019) Salient object detection employing regional principal color and texture cues. Multimedia Tools and Applications, 78(14):19735-19751. DOI: 10.1007/s11042-019-7153-z. Online publication date: 1-Jul-2019.
• (2018) Salient Structure Detection Using Depth-Wise Analysis. 2018 International Conference on Emerging Trends and Innovations In Engineering And Technological Research (ICETIETR), 1-4. DOI: 10.1109/ICETIETR.2018.8529126. Online publication date: Jul-2018.
