ABSTRACT
The computational neuroscience community has found that neural population activity exhibits stable low-dimensional structure. Latent variable models built on statistical machine learning and deep neural networks have uncovered informative low-dimensional representations with promising performance and efficiency. To address the identifiability and interpretability issues caused by noise in neural spike trains, recent work has drawn on advances in representation learning to better capture the universality and variability of neural spikes. However, an important but less studied alternative is signal denoising, which may be simpler and more practical. In this work, we introduce a simple yet effective improvement that extracts the informative signal from noisy neural data by decomposing the latent space into one part relevant to the underlying neural patterns and one part irrelevant to them. We train our model in a self-supervised manner and show that it consistently improves the performance of the baseline model on a motor task dataset.
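The latent decomposition described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions (`N_NEURONS`, `D_SIGNAL`, `D_NOISE`) are hypothetical, the linear encoder/decoder stand in for a trained sequential autoencoder, and the self-supervised training loop is omitted. It only shows the architectural idea: split the latent vector into a pattern-relevant part and an irrelevant part, then reconstruct from the relevant part alone to denoise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 100 recorded neurons, an 8-dim latent space split
# into 6 "signal" dims (pattern-relevant) and 2 "noise" dims (irrelevant).
N_NEURONS, D_SIGNAL, D_NOISE = 100, 6, 2
D_LATENT = D_SIGNAL + D_NOISE

# Untrained linear maps as stand-ins for a learned encoder/decoder.
W_enc = rng.normal(size=(D_LATENT, N_NEURONS)) / np.sqrt(N_NEURONS)
W_dec = rng.normal(size=(N_NEURONS, D_LATENT)) / np.sqrt(D_LATENT)

def encode(x):
    """Map spike counts to a latent vector and split it into two parts."""
    z = W_enc @ x
    return z[:D_SIGNAL], z[D_SIGNAL:]          # (z_signal, z_noise)

def decode(z_signal, z_noise):
    """Reconstruct firing rates from both latent parts."""
    return W_dec @ np.concatenate([z_signal, z_noise])

def denoise(x):
    """Keep only the pattern-relevant part; zero out the irrelevant part."""
    z_signal, z_noise = encode(x)
    return decode(z_signal, np.zeros_like(z_noise))

# Synthetic spike counts in place of real neural data.
x = rng.poisson(lam=2.0, size=N_NEURONS).astype(float)
x_denoised = denoise(x)
```

In the actual model, the two partitions would be shaped during self-supervised training (e.g. by a reconstruction objective plus a constraint that pushes task-irrelevant variability into the noise partition), so that discarding `z_noise` removes noise rather than signal.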
Index Terms: Improving Latent Factor Analysis via Self-supervised Signal Extracting