
Capturing contextual relationship for effective media search


Abstract

One of the central problems in media search is the semantic gap between the low-level features computed automatically from media data and the human interpretation of those data. The notion of similarity is usually based on high-level abstractions, whereas low-level features do not always reflect human perception. In this paper, we assume that the semantics of media are determined by the contextual relationships within a dataset, and we introduce a method to capture this contextual information from a large media (especially image) dataset for effective search. Similarity search in an image database based on this contextual information shows encouraging experimental results.
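To make the idea of dataset-level contextual similarity concrete, the sketch below builds a k-nearest-neighbour affinity graph over the whole collection and propagates a query's affinities across that graph before ranking, so that two items can be judged similar because of the structure connecting them, not only their raw feature distance. This is a minimal illustration assuming a graph-based propagation scheme; the function names, parameters, and the propagation formulation are the editor's assumptions, not the paper's actual algorithm.

```python
# Minimal sketch: contextual similarity search via k-NN affinity graph
# and similarity propagation (illustrative assumption, not the paper's method).
import numpy as np

def knn_affinity(X, k=10, sigma=1.0):
    """Gaussian affinity restricted to each point's k nearest neighbours."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Keep only the k largest affinities per row, then symmetrise.
    drop = np.argsort(W, axis=1)[:, :-k]
    rows = np.arange(W.shape[0])[:, None]
    W[rows, drop] = 0.0
    return np.maximum(W, W.T)

def contextual_scores(W, query_idx, alpha=0.9):
    """Propagate the query's affinities over the dataset graph, so ranking
    reflects contextual (dataset-level) relationships rather than raw
    pairwise feature distances alone."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    S = W / np.sqrt(np.outer(d, d))          # symmetric normalisation
    y = np.zeros(W.shape[0])
    y[query_idx] = 1.0
    # Closed-form fixed point of F = alpha * S @ F + (1 - alpha) * y.
    return np.linalg.solve(np.eye(W.shape[0]) - alpha * S, (1 - alpha) * y)

# Toy usage: rank a small random "image feature" set against query item 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))                  # stand-in for low-level features
W = knn_affinity(X, k=5, sigma=1.0)
scores = contextual_scores(W, query_idx=0)
print(np.argsort(-scores)[:5])                # top-5 contextual matches
```

The design choice worth noting is that the query is compared to the dataset through paths in the affinity graph (the propagation step), which is one common way to let contextual structure override misleading low-level distances.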





Author information


Corresponding author

Correspondence to Guang-Ho Cha.


About this article

Cite this article

Cha, GH. Capturing contextual relationship for effective media search. Multimed Tools Appl 56, 351–364 (2012). https://doi.org/10.1007/s11042-010-0670-4

