
Target-Driven Visual Words Representation via Conditional Random Field and Sparse Coding

  • Chapter
Robot Intelligence Technology and Applications 2

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 274))


Abstract

At any given moment, the human eye captures a large amount of information simultaneously. From this information, the human visual system is able to select the specific parts that are of interest. In recent years, there have been experimental, computational, and theoretical studies on imitating this mechanism, commonly referred to as sparse coding. When a visual stimulus is given, the human visual system efficiently activates only a minimal number of neurons, which increases the storage capacity of associative memories. The resulting set of activated and deactivated neurons is called a sparse code, and the process of producing it is called sparse coding. In this paper, the effectiveness of the proposed method is demonstrated on the Graz-02 dataset, and the visual words corresponding to the activated neurons are visualized as patch-level images. By displaying the active neurons represented by visual words, sparse coding can serve as a solution to top-down visual object detection.
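
The sparse coding step described in the abstract, in which only a few dictionary atoms ("visual words") are activated per image patch, can be illustrated with a minimal sketch. The dictionary D, the patch x, the regularization weight lam, and the ISTA solver below are illustrative assumptions for the standard L1-regularized sparse coding objective, not the authors' implementation; the chapter additionally couples the sparse codes with a conditional random field, which is omitted here.

import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Encode a patch x as a sparse combination of dictionary atoms D.

    Solves  min_a  0.5 * ||x - D a||^2 + lam * ||a||_1  with ISTA
    (iterative soft-thresholding). Only a few coefficients of `a`
    remain non-zero; these play the role of the activated neurons
    (visual words) described in the abstract. Illustrative sketch only.
    """
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                # gradient of the reconstruction term
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold
    return a

# Toy usage with a random dictionary of 64 hypothetical "visual words" for an 8x8 patch.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 64))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x = rng.standard_normal(64)
a = sparse_code(x, D, lam=0.5)
print("active visual words:", np.flatnonzero(np.abs(a) > 1e-6))

Printing the indices of the non-zero coefficients corresponds to displaying which visual words are active for the patch, which is the quantity the chapter visualizes as patch-level images.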

Author information

Corresponding author

Correspondence to Y.-H. Yoo.

Copyright information

© 2014 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Yoo, Y.H., Kim, J.H. (2014). Target-Driven Visual Words Representation via Conditional Random Field and Sparse Coding. In: Kim, JH., Matson, E., Myung, H., Xu, P., Karray, F. (eds) Robot Intelligence Technology and Applications 2. Advances in Intelligent Systems and Computing, vol 274. Springer, Cham. https://doi.org/10.1007/978-3-319-05582-4_60

  • DOI: https://doi.org/10.1007/978-3-319-05582-4_60

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-05581-7

  • Online ISBN: 978-3-319-05582-4

  • eBook Packages: Engineering (R0)
