
Boosting Shift-Invariant Features

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNIP,volume 5748))

Abstract

This work presents a novel method for training shift-invariant features within a Boosting framework. Shift invariance is achieved by features that perform local convolutions followed by subsampling. Other systems using this type of feature, e.g. Convolutional Neural Networks, rely on complex feed-forward networks with multiple layers. In contrast, the proposed system adds features one at a time using smoothing-spline base classifiers. Feature training optimizes the base classifier cost, and Boosting sample-reweighting ensures that the features are both descriptive and independent. Our system has fewer design parameters than comparable systems, so adapting it to new problems is simple, and the stage-wise training makes it very scalable. Experimental results show the competitiveness of our approach.
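The overall idea can be pictured with a minimal sketch (Python/NumPy), which is not the authors' implementation: each feature is a local convolution followed by subsampling, and a stagewise boosting loop adds one such feature per round while reweighting the samples. For brevity the smoothing-spline base classifiers are replaced here by weighted regression stumps, the convolution kernels are assumed to come from a fixed candidate pool rather than being trained by optimizing the base-classifier cost, and labels are assumed to be in {-1, +1}; all function names and parameter choices are illustrative.

```python
# Sketch only: shift-invariant features via "local convolution + subsampling",
# added one at a time in a Gentle-AdaBoost-style stagewise loop.
import numpy as np
from scipy.signal import convolve2d

def shift_invariant_feature(img, kernel, pool=4):
    """Convolve locally, rectify, then subsample by max-pooling.
    The pooling step makes the response tolerant to small shifts."""
    resp = np.abs(convolve2d(img, kernel, mode="valid"))
    h, w = resp.shape
    h, w = h - h % pool, w - w % pool
    pooled = resp[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
    return pooled.ravel()

def fit_stump(x, y, w):
    """Weighted regression stump on one scalar feature
    (a stand-in for the smoothing-spline base classifiers)."""
    best = None
    for thr in np.quantile(x, np.linspace(0.1, 0.9, 9)):
        lo, hi = x < thr, x >= thr
        a = np.average(y[lo], weights=w[lo]) if lo.any() else 0.0
        b = np.average(y[hi], weights=w[hi]) if hi.any() else 0.0
        err = np.sum(w * (y - np.where(lo, a, b)) ** 2)
        if best is None or err < best[0]:
            best = (err, thr, a, b)
    _, thr, a, b = best
    return lambda x_new: np.where(x_new < thr, a, b)

def boost_features(images, labels, kernels, rounds=10):
    """Stagewise training: each round picks the kernel whose pooled response
    best reduces the weighted loss, then reweights the samples so the next
    feature focuses on what the current ensemble still gets wrong."""
    n = len(images)
    w = np.full(n, 1.0 / n)
    F = np.zeros(n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for k in kernels:
            # Averaging the pooled map to one scalar per image is a simplification.
            x = np.array([shift_invariant_feature(im, k).mean() for im in images])
            h = fit_stump(x, labels, w)
            err = np.sum(w * (labels - h(x)) ** 2)
            if best is None or err < best[0]:
                best = (err, k, h, x)
        _, k, h, x = best
        F += h(x)
        ensemble.append((k, h))
        w = np.exp(-labels * F)        # exponential-loss style reweighting
        w /= w.sum()
    return ensemble
```

The max-pooling step is what gives each feature its tolerance to small input shifts, while the reweighting step is what pushes later features to complement, rather than duplicate, earlier ones.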







Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hörnlein, T., Jähne, B. (2009). Boosting Shift-Invariant Features. In: Denzler, J., Notni, G., Süße, H. (eds) Pattern Recognition. DAGM 2009. Lecture Notes in Computer Science, vol 5748. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03798-6_13


  • DOI: https://doi.org/10.1007/978-3-642-03798-6_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-03797-9

  • Online ISBN: 978-3-642-03798-6

  • eBook Packages: Computer Science, Computer Science (R0)
