Abstract
Based on general findings from neuroscience and their algorithmic implementations using signal processing, information theory and machine learning techniques, this paper highlights the advantages of modelling a signal in a sparse and high-dimensional feature space. The emphasis is placed on the hierarchical organisation, very high dimensionality and sparseness of auditory information, which allow meaningful auditory objects to be learned without supervision from simple linear projections. When the dictionaries are learned using independent component analysis (ICA), specific spectro-temporal modulation patterns emerge that optimally represent speech, noise and tonal components. In a noisy isolated-word speech recognition task, the sparse and high-dimensional features show greater robustness to noise than a standard system based on a dense low-dimensional feature space. This suggests new ways of thinking about the recognition and classification of acoustic signals.
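As a rough illustration of the approach summarised above, the sketch below learns an ICA dictionary over spectro-temporal patches and projects them onto it, keeping only the large coefficients to obtain a sparse, high-dimensional code. It is not the authors' implementation: the gammatone-style front-end is replaced by a random stand-in, and the patch size, dictionary size, thresholding rule and the use of scikit-learn's FastICA are assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): ICA dictionary learning over
# spectro-temporal patches, followed by a sparse linear projection.
import numpy as np
from sklearn.decomposition import FastICA

def extract_patches(spectrogram, patch_len=25, hop=5):
    """Slice a (frequency x time) spectrogram into overlapping
    spectro-temporal patches, each flattened into a vector."""
    n_freq, n_time = spectrogram.shape
    patches = [
        spectrogram[:, t:t + patch_len].ravel()
        for t in range(0, n_time - patch_len + 1, hop)
    ]
    return np.asarray(patches)

def learn_ica_dictionary(patches, n_components=128):
    """Fit FastICA on the patches; the learned components play the role
    of a bank of spectro-temporal analysis filters (the dictionary)."""
    ica = FastICA(n_components=n_components, max_iter=500, random_state=0)
    ica.fit(patches)
    return ica

def sparse_features(ica, patches, threshold=1.0):
    """Project patches onto the ICA basis and zero out small coefficients,
    giving a sparse, high-dimensional representation."""
    codes = ica.transform(patches)            # simple linear projection
    codes[np.abs(codes) < threshold] = 0.0    # hard threshold for sparsity
    return codes

if __name__ == "__main__":
    # Random stand-in for a gammatone / modulation spectrogram of speech.
    rng = np.random.default_rng(0)
    spectrogram = np.abs(rng.standard_normal((64, 2000)))
    patches = extract_patches(spectrogram)
    ica = learn_ica_dictionary(patches)
    features = sparse_features(ica, patches)
    print(features.shape, np.mean(features != 0.0))  # dimensionality and density
```

In this sketch the dimensionality of the code is set by the number of ICA components and the sparsity by the threshold; in a recognition system such codes would replace dense low-dimensional features such as cepstral coefficients.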
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Brodeur, S., Rouat, J. (2013). Robust Hierarchical and Sparse Representation of Natural Sounds in High-Dimensional Space. In: Drugman, T., Dutoit, T. (eds) Advances in Nonlinear Speech Processing. NOLISP 2013. Lecture Notes in Computer Science, vol. 7911. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-38847-7_20
DOI: https://doi.org/10.1007/978-3-642-38847-7_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-38846-0
Online ISBN: 978-3-642-38847-7