Abstract
In this paper, we develop methods for efficient and accurate information extraction from calcium-imaging-based neural signals. The particular form of information extraction we investigate involves predicting behavioral variables of the animals from which the calcium imaging signals are acquired. More specifically, we develop algorithms to systematically generate compact deep neural network (DNN) models for accurate and efficient calcium-imaging-based predictive modeling. We also develop a software tool, called NeuroGRS, to apply the proposed methods for compact DNN derivation with a high degree of automation. GRS stands for Greedy inter-layer order with Random Selection of intra-layer units, which describes the central algorithm developed in this work for deriving compact DNN structures. Through extensive experiments using NeuroGRS and calcium imaging data, we demonstrate that our methods enable highly streamlined information extraction from calcium images of the brain with minimal loss in accuracy compared to much more computationally expensive approaches.
Acknowledgements
This work was supported by the NIH NINDS (R01NS110421) and the BRAIN Initiative.
Appendix: Experiment Results on Separate Datasets
The results for nn1, nn2, cnn1, and cnn2 are summarized in Tables 9, 10, 11, and 12, respectively. In each of these tables, the columns labeled Acc_X, FLOPs_X, and Params_X give the accuracy, FLOP count, and number of parameters for the model denoted by X. Here, X = O denotes the original (unpruned) model, while X = G, X = N, and X = R denote the pruned models derived by applying GRS, NWM, and RRS, respectively, to the original model.
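As a rough illustration of how the Params_X and FLOPs_X columns relate to a model's structure, the helpers below compute the parameter count and a conventional FLOP count for a chain of fully connected layers. The layer sizes and function names are hypothetical examples for exposition, not the dimensions of nn1/nn2 or the paper's accounting method.

```python
def dense_params(sizes):
    # Parameters of a fully connected chain: a weight matrix (n_in x n_out)
    # plus a bias vector (n_out) per layer.
    return sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

def dense_flops(sizes):
    # One multiply and one add per weight, i.e. each multiply-accumulate
    # counted as 2 FLOPs (a common convention; exact counts vary by tool).
    return sum(2 * n_in * n_out for n_in, n_out in zip(sizes, sizes[1:]))
```

Under this convention, pruning units from a hidden layer reduces both counts for the two adjacent weight matrices, which is why the pruned columns (X = G, N, R) shrink relative to X = O.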
Wu, X., Lin, DT., Chen, R. et al. Learning Compact DNN Models for Behavior Prediction from Neural Activity of Calcium Imaging. J Sign Process Syst 94, 455–472 (2022). https://doi.org/10.1007/s11265-021-01662-2