DOI: 10.1145/3523286.3524586
Research article

Improving Latent Factor Analysis via Self-supervised Signal Extracting

Published: 31 May 2022

ABSTRACT

The computational neuroscience community has found that neural population activity has stable low-dimensional structure. Latent variable models based on statistical machine learning and deep neural networks have revealed informative low-dimensional representations with promising performance and efficiency. To address the issues of identifiability and interpretability caused by noise in neural spike trains, recent work has focused on borrowing advances from representation learning to better capture the universality and variability of neural spikes. However, an important but less studied solution to this issue is signal denoising, which may be simpler and more practical. In this work, we introduce a simple yet effective improvement that extracts the informative signal from noisy neural data by decomposing the latent space into one part that is relevant to the underlying neural patterns and one part that is irrelevant to them. We train our model in a self-supervised manner. We show that our model consistently improves the performance of the baseline model on a motor-task dataset.
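The core idea of the abstract — encoding noisy spike counts into a latent code, splitting that code into a task-relevant "signal" part and a task-irrelevant "noise" part, and reconstructing from the signal part only — can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the dimensions, the linear encoder/decoder weights, and the fixed signal/noise split are all hypothetical stand-ins for the trained networks described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 20 neurons, an 8-dim latent space whose
# first 4 dimensions are treated as task-relevant "signal" and whose
# last 4 are treated as task-irrelevant "noise".
n_neurons, d_latent, d_signal = 20, 8, 4

# Random linear maps stand in for the trained encoder/decoder networks.
W_enc = rng.normal(size=(d_latent, n_neurons)) / np.sqrt(n_neurons)
W_dec = rng.normal(size=(n_neurons, d_signal)) / np.sqrt(d_signal)

def encode(x):
    """Map a spike-count vector to the full latent code."""
    return W_enc @ x

def split(z):
    """Decompose the latent code into its signal and noise parts."""
    return z[:d_signal], z[d_signal:]

def decode_signal(z_signal):
    """Reconstruct a denoised rate estimate from the signal part only,
    discarding the noise part of the latent code."""
    return W_dec @ z_signal

# Fake Poisson spike counts play the role of one time bin of neural data.
x = rng.poisson(5.0, size=n_neurons).astype(float)
z = encode(x)
z_sig, z_noise = split(z)
rates = decode_signal(z_sig)
```

In the paper the two subspaces are learned with a self-supervised objective rather than fixed by index as above; the sketch only shows the shape of the decomposition, not how it is trained.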


Published in

BIC 2022: 2022 2nd International Conference on Bioinformatics and Intelligent Computing
January 2022
551 pages
ISBN: 9781450395755
DOI: 10.1145/3523286

        Copyright © 2022 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Qualifiers

        • research-article
        • Research
        • Refereed limited
