
Gradient-Free Neural Network Training Based on Deep Dictionary Learning with the Log Regularizer

  • Conference paper
  • First Online:
Pattern Recognition and Computer Vision (PRCV 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 13022)


Abstract

Gradient-free neural network training is attracting increasing attention because it avoids the gradient vanishing problem that affects traditional gradient-based training. State-of-the-art gradient-free methods introduce a quadratic penalty or an equivalent approximation of the activation function to train the network without gradients, but they can hardly mine effective signal features, since the activation function is only a limited nonlinear transformation. In this paper, we first formulate neural network training as a deep dictionary learning model, which enables gradient-free training of the network. To further enhance the feature-extraction ability of gradient-free training, we introduce the logarithm function as a sparsity regularizer, which imposes accurate sparse activations on every hidden layer except the last. We then employ a proximal block coordinate descent method to update the variables of each layer in a forward manner and apply the log-thresholding operator to solve the resulting non-convex and non-smooth subproblems. Finally, numerical experiments on several publicly available datasets show that a sparse representation of the inputs is effective for gradient-free neural network training.
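
The abstract describes a proximal block coordinate descent scheme in which each hidden layer's activations are updated through a log-thresholding operator. The NumPy sketch below is a rough, hedged illustration of that step, not the paper's exact formulation: it assumes the penalty lam * log(eps + |z|), and the parameter names `lam` and `eps` as well as the toy layer update at the end are illustrative assumptions.

```python
import numpy as np

def log_threshold(x, lam, eps):
    """Element-wise proximal operator of the assumed log penalty z -> lam * log(eps + |z|).

    Solves  argmin_z 0.5 * (z - x)**2 + lam * log(eps + |z|)  for each entry by
    comparing the nonzero stationary point (the larger root of a quadratic)
    against z = 0 and keeping whichever gives the smaller objective value.
    """
    a = np.abs(x)
    disc = (a + eps) ** 2 - 4.0 * lam                      # discriminant of the stationarity quadratic
    root = np.where(disc >= 0.0,
                    0.5 * ((a - eps) + np.sqrt(np.maximum(disc, 0.0))),
                    0.0)
    root = np.maximum(root, 0.0)                           # magnitudes must stay nonnegative

    def obj(z):                                            # objective value at a candidate magnitude
        return 0.5 * (z - a) ** 2 + lam * np.log(eps + z)

    z = np.where(obj(root) < obj(np.zeros_like(a)), root, 0.0)
    return np.sign(x) * z                                  # restore the sign of the input

# Hypothetical single-layer update: sparsify one hidden activation with toy data.
rng = np.random.default_rng(0)
A_prev = rng.standard_normal((64, 128))   # activations from the previous layer
W = rng.standard_normal((32, 64))         # dictionary / weight matrix of the current layer
A_curr = log_threshold(W @ A_prev, lam=0.1, eps=1.0)
print("nonzero fraction:", np.count_nonzero(A_curr) / A_curr.size)
```

With such an operator, a forward layer-wise pass would sparsify each hidden activation before it is fed to the next layer's dictionary, mirroring the layer-wise sparse activations described in the abstract.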



Author information

Corresponding author

Correspondence to Zhenni Li.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Xie, Y., Li, Z., Zhao, H. (2021). Gradient-Free Neural Network Training Based on Deep Dictionary Learning with the Log Regularizer. In: Ma, H., et al. (eds) Pattern Recognition and Computer Vision. PRCV 2021. Lecture Notes in Computer Science, vol 13022. Springer, Cham. https://doi.org/10.1007/978-3-030-88013-2_46


  • DOI: https://doi.org/10.1007/978-3-030-88013-2_46

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88012-5

  • Online ISBN: 978-3-030-88013-2

  • eBook Packages: Computer Science, Computer Science (R0)
