Abstract
In this paper, we consider the problem of unsupervised feature learning for spatio-temporal data streams, specifically video data. We focus on the problem of learning features invariant to image transformations and regard a video stream as a set of pairwise similar images. Many existing methods dealing with the problem of invariant feature extraction either try to build a model of the transformations present in the data or achieve invariance by adding a penalty to a reconstruction loss term. In contrast to this, we propose to learn invariant features by directly optimizing the temporal coherence of a hidden, and possibly deep, representation. We find that our approach is both fast and capable of learning deep feature representations invariant to complex image transformations. We furthermore show that features learned using our approach can be used to improve object recognition performance in still images (Caltech-101, STL-10).
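The full text is not included here, so the paper's exact objective is not reproduced. The following is only a minimal sketch of the general idea the abstract describes: adjacent video frames (a pairwise similar image pair) should map to nearby hidden codes, with a contrastive margin term (in the style of Hadsell, Chopra, and LeCun, CVPR 2006) to rule out the trivial constant representation. All function names (`encode`, `temporal_coherence_loss`) and the one-layer sigmoid encoder are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

def encode(x, W):
    """Toy one-layer encoder: hidden code h = sigmoid(W x).

    W is a (hidden_dim, input_dim) weight matrix; x is an input vector.
    This stands in for the possibly deep representation the paper optimizes.
    """
    return 1.0 / (1.0 + np.exp(-W @ x))

def temporal_coherence_loss(W, frame_a, frame_b, unrelated, margin=1.0):
    """Sketch of a temporal-coherence objective on a pair of video frames.

    Temporally adjacent frames (frame_a, frame_b) are pulled together in
    code space; an unrelated frame is pushed at least `margin` away
    (a contrastive hinge term, to avoid the constant-code solution).
    """
    h_a, h_b = encode(frame_a, W), encode(frame_b, W)
    h_u = encode(unrelated, W)
    pull = np.sum((h_a - h_b) ** 2)  # coherence: similar pair stays close
    push = max(0.0, margin - np.linalg.norm(h_a - h_u)) ** 2  # repel unrelated frame
    return pull + push

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 16))
x_t = rng.normal(size=16)
x_t1 = x_t + 0.01 * rng.normal(size=16)  # next frame: small transformation
x_far = rng.normal(size=16)              # unrelated frame from another clip
loss = temporal_coherence_loss(W, x_t, x_t1, x_far)
print(loss >= 0.0)  # prints True
```

In practice such a loss would be minimized over W by gradient descent across many frame pairs drawn from the video stream; this sketch only evaluates it once to show the shape of the objective.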
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Springenberg, J.T., Riedmiller, M. (2012). Learning Temporal Coherent Features through Life-Time Sparsity. In: Huang, T., Zeng, Z., Li, C., Leung, C.S. (eds) Neural Information Processing. ICONIP 2012. Lecture Notes in Computer Science, vol 7663. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34475-6_42
Print ISBN: 978-3-642-34474-9
Online ISBN: 978-3-642-34475-6