Learning Compact DNN Models for Behavior Prediction from Neural Activity of Calcium Imaging

Abstract

In this paper, we develop methods for efficient and accurate information extraction from calcium-imaging-based neural signals. The form of information extraction we investigate is the prediction of behavioral variables of the animals from which the calcium imaging signals are acquired. Specifically, we develop algorithms that systematically generate compact deep neural network (DNN) models for accurate and efficient calcium-imaging-based predictive modeling. We also develop a software tool, called NeuroGRS, that applies the proposed methods for compact DNN derivation with a high degree of automation. GRS stands for Greedy inter-layer order with Random Selection of intra-layer units, the central algorithm developed in this work for deriving compact DNN structures. Through extensive experiments using NeuroGRS and calcium imaging data, we demonstrate that our methods enable highly streamlined information extraction from calcium images of the brain with minimal loss in accuracy compared to much more computationally expensive approaches.
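
To make the idea named by GRS concrete, the following is a minimal sketch of that kind of structure search: a greedy pass over the layers of a DNN, with randomly selected subsets of intra-layer units evaluated at each step and the smallest acceptable layer width kept. This is an illustration written for this summary, not the NeuroGRS implementation; the evaluate callback, the candidate keep fractions, and the accuracy tolerance are assumptions introduced here.

import numpy as np

def random_unit_subsets(n_units, keep_fractions, rng):
    """Randomly sample one candidate subset of units to keep per fraction."""
    return [rng.choice(n_units, size=max(1, int(f * n_units)), replace=False)
            for f in keep_fractions]

def grs_prune(layer_widths, evaluate, tolerance=0.01, rng=None):
    """Sketch of a greedy-over-layers, random-within-layer structure search.

    layer_widths : widths of the hidden layers of the original model
    evaluate     : callable mapping a candidate list of widths to validation
                   accuracy (assumed to retrain the candidate model)
    tolerance    : allowed accuracy drop relative to the original model
    """
    rng = rng or np.random.default_rng(0)
    baseline = evaluate(list(layer_widths))
    current = list(layer_widths)
    for i in range(len(current)):                    # greedy inter-layer order
        best_width = current[i]
        for keep in random_unit_subsets(current[i], (0.25, 0.5, 0.75), rng):
            candidate = current[:i] + [len(keep)] + current[i + 1:]
            # accept the smallest structure whose accuracy stays within tolerance
            if len(keep) < best_width and evaluate(candidate) >= baseline - tolerance:
                best_width = len(keep)
        current[i] = best_width                      # commit before the next layer
    return current

In the setting of the paper, evaluate would correspond to retraining or fine-tuning the pruned model on the calcium imaging data and measuring validation accuracy on the behavior prediction task.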


Acknowledgements

This work was supported by the NIH NINDS (R01NS110421) and the BRAIN Initiative.

Author information

Correspondence to Rong Chen or Shuvra S. Bhattacharyya.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Experiment Results on Separate Datasets

The results for nn1, nn2, cnn1, and cnn2 are summarized in Tables 9, 10, 11, and 12, respectively. In each of these tables, the columns labeled Acc_X, FLOPs_X, and Params_X give the accuracy, FLOP count, and number of parameters of the model denoted by X. Here, X = O denotes the original model (without any pruning), while X = G, X = N, and X = R denote the pruned models derived by applying GRS, NWM, and RRS, respectively, to the original model.

Table 9 Results of comparison experiments with nn1 as the input model.
Table 10 Results of comparison experiments with nn2 as the input model.
Table 11 Results of comparison experiments with cnn1 as the input model.
Table 12 Results of comparison experiments with cnn2 as the input model.
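
The FLOPs_X and Params_X columns can be related to the layer dimensions of each model through standard counting formulas for fully connected and convolutional layers. The sketch below uses the common conventions of counting one multiply and one add per weight and including bias terms; these conventions are assumptions of this illustration and may differ from the exact counts reported in the tables.

def dense_counts(n_in, n_out):
    """Parameters and FLOPs for one fully connected layer (bias included)."""
    params = n_in * n_out + n_out
    flops = 2 * n_in * n_out            # one multiply and one add per weight
    return params, flops

def conv2d_counts(c_in, c_out, k_h, k_w, h_out, w_out):
    """Parameters and FLOPs for one 2-D convolutional layer (bias included)."""
    params = c_in * c_out * k_h * k_w + c_out
    flops = 2 * c_in * k_h * k_w * c_out * h_out * w_out
    return params, flops

# Example: pruning a 128-unit hidden layer (64 inputs) down to 64 units
# roughly halves both the parameter count and the FLOP count for that layer.
print(dense_counts(64, 128))   # (8320, 16384)
print(dense_counts(64, 64))    # (4160, 8192)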

About this article

Cite this article

Wu, X., Lin, D. T., Chen, R., et al. Learning Compact DNN Models for Behavior Prediction from Neural Activity of Calcium Imaging. J Sign Process Syst 94, 455–472 (2022). https://doi.org/10.1007/s11265-021-01662-2
