A robust model for spatiotemporal dependencies
Introduction
Blind source separation (BSS) describes the task of recovering an unknown mixing process and underlying sources of an observed data set. It has numerous applications in fields ranging from signal and image processing to the separation of speech and radar signals to financial data analysis. Many BSS algorithms assume either independence (independent component analysis, ICA) or diagonal autocorrelations of the sources [7], [6]. Here, we extend BSS algorithms based on time-decorrelation [18], [12], [2], [20], [14], [17]. They rely on the fact that the data sets have non-trivial autocorrelations so that the unknown mixing matrix can be recovered by generalized eigenvalue decomposition.
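As a concrete illustration of this idea, the following sketch (our own, not taken from the paper; the mixing matrix, lag, and moving-average sources are arbitrary choices) recovers two mixed signals through a generalized eigendecomposition of a pair of lagged covariance matrices, in the spirit of AMUSE-type time-decorrelation algorithms:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
T = 5000

# Two toy sources with distinct autocorrelations (moving averages of white noise)
s = np.vstack([
    np.convolve(rng.standard_normal(T), np.ones(5) / 5, mode="same"),
    np.convolve(rng.standard_normal(T), np.ones(25) / 25, mode="same"),
])
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # "unknown" mixing matrix
x = A @ s                               # observed mixtures

def autocov(x, tau):
    """Symmetrized lag-tau autocovariance matrix of the centered signals."""
    xc = x - x.mean(axis=1, keepdims=True)
    c = xc[:, tau:] @ xc[:, : x.shape[1] - tau].T / (x.shape[1] - tau)
    return (c + c.T) / 2

# Generalized eigendecomposition of the pencil (R_x(tau), R_x(0)):
# the eigenvectors jointly decorrelate the mixtures at lag 0 and lag tau.
_, W = eigh(autocov(x, 5), autocov(x, 0))
y = W.T @ x  # recovered sources, up to permutation and scaling
```

Separation succeeds here because the two sources have clearly distinct normalized autocorrelations at the chosen lag; equal autocorrelations would make the generalized eigenvalues degenerate.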
Spatiotemporal BSS, in contrast to the more common spatial or temporal BSS, tries to achieve both spatial and temporal separation by optimizing a joint energy function. First proposed by Stone et al. [15], it is a promising method with potential applications in areas where data contain an inherent spatiotemporal structure, such as biomedicine or geophysics, including oceanography and climate dynamics. Stone's algorithm is based on the Infomax ICA algorithm [1], which due to its online nature involves some rather intricate parameter choices, particularly in the spatiotemporal version, where online updates are performed in both space and time. Commonly, spatiotemporal data sets are recorded in advance, so we can easily replace spatiotemporal online learning by batch optimization. This greatly reduces the number of parameters in the system and leads to more stable optimization algorithms. We focus on the so-called algebraic BSS algorithms [18], [2], [20], [3], reviewed for example in [16], which employ generalized eigenvalue decomposition and joint diagonalization for the factorization. The corresponding learning rules are essentially parameter-free and are known to be robust and efficient [4].
In this contribution, we extend Stone's approach by generalizing the time-decorrelation algorithms to the spatiotemporal case, thereby allowing us to exploit the inherent spatiotemporal structure of the data. In the experiments presented, we observe good performance of the proposed algorithm when applied to noisy, high-dimensional data sets acquired from functional magnetic resonance imaging (fMRI). We concentrate on fMRI because it is well suited to spatiotemporal decomposition: spatial activation networks are mixed with functional and structural temporal components.
Section snippets
Blind source separation
We consider the following temporal BSS problem: Let x(t) be a second-order stationary, zero-mean, m-dimensional stochastic process and A an m×n full-rank matrix such that x(t) = A s(t) + n(t). The n-dimensional source signals s(t) are assumed to have diagonal autocorrelations R_s(τ) := E[s(t+τ) s(t)ᵀ] for all τ, and the additive noise n(t) is modeled by a stationary, temporally and spatially white zero-mean process with variance σ². Only x(t) is observed, and the goal is to recover both the mixing matrix A and the sources s(t). Having found A, estimates of the sources can be obtained by (pseudo-)inversion.
Separation based on time-delayed decorrelation
For τ ≠ 0, the mixture autocorrelations factorize as R_x(τ) = A R_s(τ) Aᵀ, since the noise is temporally white.
This gives an indication of how to recover A from x(t). The correlation of the signal part of the mixtures may be calculated as R_x(0) − σ²I, provided that the noise variance σ² is known. After whitening of x(t), i.e. normalization with respect to R_x(0) − σ²I, we can assume A to be orthogonal.
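To make the whitening step concrete, here is a small numerical sketch (our own illustration; the mixing matrix, source models, and noise level are arbitrary choices) that subtracts the known noise variance from the zero-lag covariance and checks that the resulting effective mixing matrix is close to orthogonal:

```python
import numpy as np

rng = np.random.default_rng(1)
T, sigma2 = 20000, 0.01

# Unit-variance toy sources and a fixed, well-conditioned mixing matrix
s = np.vstack([np.convolve(rng.standard_normal(T), np.ones(k) / k, mode="same")
               for k in (3, 15)])
s /= s.std(axis=1, keepdims=True)
A = np.array([[1.0, 0.5], [-0.3, 0.8]])
x = A @ s + np.sqrt(sigma2) * rng.standard_normal((2, T))

# Signal-part covariance: subtract the known noise variance from R_x(0)
R0 = np.cov(x) - sigma2 * np.eye(2)

# Whitening transform V = R0^(-1/2) via the eigendecomposition of R0
d, E = np.linalg.eigh(R0)
V = E @ np.diag(d ** -0.5) @ E.T
z = V @ x  # whitened mixtures

# After whitening, the effective mixing matrix Q = V A is (nearly) orthogonal,
# so the remaining unmixing problem reduces to finding a rotation.
Q = V @ A
```

The orthogonality only holds up to finite-sample and noise-estimation error, which is why the subsequent rotation search operates on approximately, not exactly, diagonalizable matrices.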
Spatiotemporal structures
Real-world data sets often possess structure in addition to the simple factorization models treated above. For example, fMRI measurements carry both temporal and spatial indices, so a data entry can depend on a position r as well as on time t. More generally, we want to consider data sets depending on two indices r and t, where r can be any multidimensional (spatial) index and t indexes the time axis. In practice this generalized random process is realized by a finite set of observations, which can be arranged in a data matrix with one spatial and one temporal dimension.
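In a hypothetical matrix representation (the voxel and time dimensions below are invented for illustration, and the random entries carry no real structure), such a data set is stored with rows indexed by space and columns indexed by time, so lagged autocorrelations can be taken along either index:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_t, tau = 500, 200, 3

# Hypothetical spatiotemporal data matrix: rows indexed by voxel position r,
# columns by time t, as in an fMRI recording flattened over space.
X = rng.standard_normal((n_vox, n_t))

def lagged_autocorr(M, tau):
    """Symmetrized lag-tau autocorrelation along the column index of M."""
    Mc = M - M.mean(axis=1, keepdims=True)
    C = Mc[:, tau:] @ Mc[:, :-tau].T / (M.shape[1] - tau)
    return (C + C.T) / 2

R_temporal = lagged_autocorr(X, tau)    # (n_vox, n_vox): lags taken in time
R_spatial = lagged_autocorr(X.T, tau)   # (n_t, n_t): lags taken in space
```

Transposing the data matrix swaps the roles of the two indices, which is what allows one algorithm to impose source conditions both temporally and spatially.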
Algorithmic spatiotemporal BSS
Stone et al. [15] first proposed the model from Eq. (3), where a joint energy function is employed based on mutual entropy and Infomax. Apart from the many parameters used in the algorithm, the involved gradient descent optimization is susceptible to noise, local minima and inappropriate initializations, so we propose a novel, more robust algebraic approach in the following. It is based on the joint diagonalization of source conditions posed not only temporally but also spatially at the same time.
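As an illustration of the algebraic machinery involved, the following self-contained sketch (our own simplified implementation, not the authors' stSOBI code) jointly diagonalizes a set of symmetric matrices by Jacobi rotations, the core operation that such algorithms apply to sets of spatial and temporal autocorrelation matrices:

```python
import numpy as np

def joint_diagonalize(mats, sweeps=20):
    """Orthogonal joint diagonalization of symmetric matrices by Jacobi
    rotations, in the spirit of the Cardoso-Souloumiac Jacobi angles."""
    mats = [m.copy() for m in mats]
    n = mats[0].shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                # Closed-form angle minimizing the summed squared (p, q)
                # off-diagonal entries over all matrices: with u = m_pp - m_qq
                # and c = m_pq, the rotated off-diagonal is
                # c*cos(2t) - (u/2)*sin(2t).
                P = sum(m[p, q]**2 - (m[p, p] - m[q, q])**2 / 4 for m in mats)
                Q = sum(m[p, q] * (m[p, p] - m[q, q]) for m in mats)
                theta = (np.pi - np.arctan2(Q, P)) / 4
                if theta > np.pi / 4:      # keep the equivalent small angle
                    theta -= np.pi / 2
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = -s, s
                for k in range(len(mats)):
                    mats[k] = J.T @ mats[k] @ J
                V = V @ J
    return V, mats
```

On exactly jointly diagonalizable inputs M_k = Q D_k Qᵀ the sweeps recover an orthogonal V with Vᵀ M_k V diagonal; on real autocorrelation matrices, which are only approximately jointly diagonalizable, the procedure minimizes the residual off-diagonal energy instead.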
Results
BSS, mainly based on ICA, is nowadays a quite common tool in fMRI analysis [11], [10]. For this work, we analyzed the performance of stSOBI when applied to fMRI measurements. fMRI data were recorded from 10 healthy subjects performing a visual task. One hundred scans with five slices each were acquired, with five periods of rest and five photic stimulation periods; stimulation and rest periods comprised 10 repetitions each, i.e. 30 s.
Conclusion
We have proposed a novel spatiotemporal BSS algorithm named stSOBI. It is based on the joint diagonalization of both spatial and temporal autocorrelations. Sharing the properties of all algebraic algorithms, stSOBI is easy to use, robust (with only a single parameter) and fast (in contrast to the online algorithm proposed by Stone). The employed dimension reduction allows for the spatiotemporal decomposition of high-dimensional data sets such as fMRI recordings. The presented results for such recordings demonstrate the usefulness of the approach.
Acknowledgments
The authors gratefully acknowledge partial financial support by the DFG (GRK 638) and the BMBF (project ‘ModKog’). They would like to thank D. Auer from the MPI of Psychiatry in Munich, Germany, for providing the fMRI data, and A. Meyer-Bäse from the Department of Electrical and Computer Engineering, FSU, Tallahassee, USA for discussions concerning the fMRI analysis. The authors thank the anonymous reviewers for their helpful comments during preparation of this manuscript.
References (20)
- J.V. Stone et al., Spatiotemporal independent component analysis of event-related fMRI data using skewed probability density functions, NeuroImage (2002)
- A.J. Bell, T.J. Sejnowski, An information-maximisation approach to blind separation and blind deconvolution, Neural Comput. (1995)
- A. Belouchrani et al., A blind source separation technique based on second order statistics, IEEE Trans. Signal Process. (1997)
- J.-F. Cardoso, A. Souloumiac, Blind beamforming for nonGaussian signals, IEE Proc.—F (1993)
- J.-F. Cardoso, A. Souloumiac, Jacobi angles for simultaneous diagonalization, SIAM J. Mat. Anal. Appl. (1995)
- S. Choi et al., Blind separation of nonstationary sources in noisy mixtures, Electron. Lett. (2000)
- A. Cichocki, S. Amari, Adaptive Blind Signal and Image Processing (2002)
- A. Hyvärinen, J. Karhunen, E. Oja, Independent Component Analysis, Wiley,...
- A. Hyvärinen, E. Oja, A fast fixed-point algorithm for independent component analysis, Neural Comput. (1997)
- M. Joho et al., Overdetermined blind source separation using more sensors than source signals in a noisy mixture
Fabian J. Theis obtained M.Sc. degree in Mathematics and Physics at the University of Regensburg in 2000. He also received a Ph.D. degree in Physics from the same university in 2002 and a Ph.D. in Computer Science from the University of Granada in 2003. He worked as visiting researcher at the department of Architecture and Computer Technology (University of Granada, Spain), at the RIKEN Brain Science Institute (Wako, Japan) and at FAMU-FSU (Florida State University, USA). Currently, he is heading the ‘Signal Processing & Information Theory’ group at the Institute of Biophysics, University of Regensburg, Germany. His research interests include statistical signal processing, linear and nonlinear independent component analysis, overcomplete blind source separation and biomedical data analysis.
Peter Gruber was born in Bad Homburg, Germany, on April 12, 1976. He obtained a degree in Mathematics in 2002 at the University of Regensburg. He is currently working on his Ph.D. thesis at the Biophysics Department of the University of Regensburg. His research topics include statistical signal processing, linear and nonlinear independent component analysis and geometric measure theory.
Ingo Rudolf Keck was born in Nabburg, Germany, on June 15, 1974. He graduated in physics at the University of Regensburg, Germany, and received the Doctor Europeus degree from the University of Granada, Spain, in 2006.
He works as a researcher and postdoc on projects in biomedicine, biophysics and informatics at the Universities of Regensburg, Germany, and Granada, Spain. He has also worked as an assistant professor at the University of Granada. His interests lie in image and signal processing in biomedicine.
Elmar W. Lang received his Physics Diploma in 1977 and his Ph.D. in Physics in 1980 and habilitated in Biophysics in 1988 at the University of Regensburg. He is an Apl. Professor of Biophysics at the University of Regensburg, where he is heading the Computational Intelligence Group. Currently he serves as associate editor of Neurocomputing and Neural Information Processing—Letters and Reviews. His current research interests include biomedical signal and image processing, independent component analysis and blind source separation, neural networks for classification and pattern recognition and stochastic process limits in queuing applications.