A Bottom-Up Approach for Learning Visual Object Detection Models from Unreliable Sources

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 7476)

Abstract

The ability to learn models of computational vision from sample data has significantly advanced the field. Obtaining suitable training image sets, however, remains a challenging problem. In this paper we propose a bottom-up approach for learning object detection models from weakly annotated samples, i.e., images for which only category labels are given. By combining visual saliency and the distinctiveness of local image features, regions of interest are extracted completely automatically, without requiring detailed annotations. Using a bag-of-features representation of these regions, object recognition models can be trained for the given object categories. As weakly labeled sample images can easily be obtained from image search engines, our approach does not require any manual annotation effort. Experiments on data from the Visual Object Classes Challenge 2011 show that the proposed method achieves promising object detection results.
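
The abstract outlines a three-step pipeline: saliency-driven extraction of regions of interest, a bag-of-features encoding of those regions, and training of category models from weakly labeled images (one label per image, no bounding boxes). The sketch below illustrates that general pipeline, not the authors' implementation: it assumes OpenCV (with the contrib saliency module) and scikit-learn, and substitutes spectral-residual saliency, SIFT descriptors, and a random forest classifier as stand-ins for the paper's specific saliency/distinctiveness cues and models; all function names are hypothetical.

```python
# Minimal sketch of a saliency + bag-of-features pipeline (NOT the authors' code).
# Assumes: opencv-contrib-python (for cv2.saliency), numpy, scikit-learn.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier


def salient_roi(image_bgr):
    """Crop an image to a rough region of interest using spectral-residual
    saliency (a stand-in for the saliency/distinctiveness cues in the paper)."""
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = detector.computeSaliency(image_bgr)
    if not ok:
        return image_bgr
    ys, xs = np.nonzero(sal_map > sal_map.mean())
    if xs.size == 0:                       # no salient pixels: keep full image
        return image_bgr
    return image_bgr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]


def bof_histogram(image_bgr, vocabulary):
    """Encode a region as a normalized bag-of-features histogram of SIFT words."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
    hist = np.zeros(vocabulary.n_clusters)
    if desc is not None:
        for word in vocabulary.predict(desc.astype(np.float64)):
            hist[word] += 1
    return hist / max(hist.sum(), 1.0)


def train_weakly_supervised(images_bgr, category_labels, vocab_size=200):
    """Train a category classifier from weakly labeled images: extract ROIs,
    build a visual vocabulary by k-means, then fit a random forest on the
    resulting bag-of-features histograms."""
    rois = [salient_roi(img) for img in images_bgr]
    sift = cv2.SIFT_create()
    all_desc = []
    for roi in rois:
        _, d = sift.detectAndCompute(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), None)
        if d is not None:
            all_desc.append(d)
    vocabulary = KMeans(n_clusters=vocab_size, n_init=10, random_state=0).fit(
        np.vstack(all_desc).astype(np.float64))
    features = np.array([bof_histogram(roi, vocabulary) for roi in rois])
    classifier = RandomForestClassifier(n_estimators=100, random_state=0)
    classifier.fit(features, category_labels)
    return vocabulary, classifier
```

At test time, the same `salient_roi` and `bof_histogram` steps would be applied to a query image and the resulting histogram passed to the trained classifier; this keeps the annotation-free spirit of the approach, since no bounding boxes enter the training loop.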

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Nasse, F., Fink, G.A. (2012). A Bottom-Up Approach for Learning Visual Object Detection Models from Unreliable Sources. In: Pinz, A., Pock, T., Bischof, H., Leberl, F. (eds) Pattern Recognition. DAGM/OAGM 2012. Lecture Notes in Computer Science, vol 7476. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32717-9_49

  • DOI: https://doi.org/10.1007/978-3-642-32717-9_49

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-32716-2

  • Online ISBN: 978-3-642-32717-9

  • eBook Packages: Computer Science (R0)
