
Edge-Wise One-Level Global Pruning on NAS Generated Networks

  • Conference paper
  • Pattern Recognition and Computer Vision (PRCV 2021)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13022)

Abstract

In recent years, neural architecture search (NAS) has been studied extensively in deep learning. Among its variants, cell-based search methods such as [23, 27, 32, 36] are among the most popular and widely discussed; they typically stack fewer cells during search and more during evaluation. Although this strategy reduces resource consumption during search, the mismatch in cell count inevitably introduces a degree of redundancy into the evaluation network. To mitigate this computational cost, we propose a novel algorithm called Edge-Wise One-Level Global Pruning (EOG-Pruning). The proposed approach prunes weak edges from the cell-based NAS-generated network globally, introducing an edge factor to represent the importance of each edge. By reducing the number of edges, it not only substantially improves the model's inference speed but also improves its accuracy. Experimental results show that networks pruned by EOG-Pruning achieve significant gains in both accuracy and CPU speedup at a 50% pruning rate on CIFAR. Specifically, we reduce the test error rate on CIFAR-100 by 1.58% for DARTS (2nd-order) and by 1.34% for PC-DARTS.
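
The mechanism sketched in the abstract — a learnable scalar attached to each edge, trained jointly with the network weights (one-level optimization), then thresholded globally across all cells — can be illustrated in a few lines of PyTorch. The sketch below is illustrative only: the names FactoredEdge and global_edge_prune are ours, not the paper's, and the paper's actual training schedule and pruning criterion may differ in detail.

    import torch
    import torch.nn as nn

    class FactoredEdge(nn.Module):
        # Wraps an edge operation with a learnable scalar "edge factor";
        # the factor's magnitude after training scores the edge's importance.
        def __init__(self, op):
            super().__init__()
            self.op = op
            self.factor = nn.Parameter(torch.ones(1))

        def forward(self, x):
            return self.factor * self.op(x)

    def global_edge_prune(edges, prune_rate=0.5):
        # Rank |factor| across ALL edges of the network (global, not per
        # cell) and zero out the weakest fraction.
        scores = torch.tensor([e.factor.abs().item() for e in edges])
        k = int(len(edges) * prune_rate)
        for i in scores.argsort()[:k].tolist():
            edges[i].factor.data.zero_()
            edges[i].factor.requires_grad_(False)  # freeze the pruned edge

    # Toy usage: eight parallel edges feeding one node, trained one-level
    # (weights and factors updated together), then pruned at 50%.
    edges = [FactoredEdge(nn.Conv2d(8, 8, 3, padding=1)) for _ in range(8)]
    x = torch.randn(2, 8, 16, 16)
    out = sum(e(x) for e in edges)   # a node sums its incoming edges
    out.mean().backward()            # factors receive gradients like weights
    global_edge_prune(edges, prune_rate=0.5)
    print(sum(float(e.factor) == 0.0 for e in edges), "edges pruned")

Because the ranking is global rather than per cell, weak edges compete across the entire network, which is how a single 50% pruning rate can be applied without a fixed per-cell edge budget.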

Supported by Beijing Natural Science Foundation (4202063), Fundamental Research Funds for the Central Universities (2020JBM020), Research Funding of the Electro-Optical Information Security Control Laboratory, National Key Research and Development Program of China under Grant 2019YFB2204200, and a BJTU-Kuaishou Research Grant.


References

  1. Baker, B., Gupta, O., Naik, N., Raskar, R.: Designing neural network architectures using reinforcement learning. In: International Conference on Learning Representations (ICLR) (2017)


  2. Bergstra, J., Bardenet, R., Bengio, Y., Kégl, B.: Algorithms for hyper-parameter optimization. In: Proceedings of the 24th International Conference on Neural Information Processing Systems (NIPS 2011), pp. 2546–2554. Curran Associates Inc., Red Hook, NY, USA (2011)


  3. Bi, K., Hu, C., Xie, L., Chen, X., Wei, L., Tian, Q.: Stabilizing DARTS with Amended Gradient Estimation on Architectural Parameters. arXiv e-prints arXiv:1910.11831 (2019)

  4. Bi, K., Xie, L., Chen, X., Wei, L., Tian, Q.: GOLD-NAS: Gradual, One-Level, Differentiable. arXiv e-prints arXiv:2007.03331 (2020)

  5. Cai, H., Zhu, L., Han, S.: ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware. arXiv e-prints arXiv:1812.00332 (2018)

  6. Chen, X., Xie, L., Wu, J., Tian, Q.: Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation. arXiv e-prints arXiv:1904.12760 (2019)

  7. Cubuk, E.D., Zoph, B., Mane, D., Vasudevan, V., Le, Q.V.: AutoAugment: learning augmentation strategies from data. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 113–123. IEEE, Long Beach, CA, USA, June 2019. https://doi.org/10.1109/CVPR.2019.00020

  8. Deng, J., Dong, W., Socher, R., Li, L., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009). https://doi.org/10.1109/CVPR.2009.5206848

  9. DeVries, T., Taylor, G.W.: Improved Regularization of Convolutional Neural Networks with Cutout. arXiv e-prints arXiv:1708.04552 (2017)

  10. Ghiasi, G., Lin, T.Y., Pang, R., Le, Q.V.: NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection. arXiv e-prints arXiv:1904.07392, April 2019

  11. Goyal, P., et al.: Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour. arXiv e-prints arXiv:1706.02677 (2017)

  12. Han, S., Mao, H., Dally, W.J.: Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. In: ICLR (2016)


  13. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016). https://doi.org/10.1109/CVPR.2016.90

  14. He, Y., Zhang, X., Sun, J.: Channel pruning for accelerating very deep neural networks. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 1398–1406 (2017). https://doi.org/10.1109/ICCV.2017.155

  15. Huang, G., Liu, Z., Maaten, L.V.D., Weinberger, K.Q.: Densely connected convolutional networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269 (2017). https://doi.org/10.1109/CVPR.2017.243

  16. Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images. Technical report, University of Toronto (2009)


  17. Larsson, G., Maire, M., Shakhnarovich, G.: FractalNet: ultra-deep neural networks without residuals. In: ICLR (2017)


  18. LeCun, Y., Denker, J.S., Solla, S.A.: Optimal brain damage. In: Touretzky, D.S. (ed.) Advances in Neural Information Processing Systems 2, pp. 598–605. Morgan-Kaufmann (1990). http://papers.nips.cc/paper/250-optimal-brain-damage.pdf

  19. Li, H., Kadav, A., Durdanovic, I., Samet, H., Graf, H.P.: Pruning filters for efficient ConvNets. In: International Conference on Learning Representations (ICLR) (2016)


  20. Li, K., Malik, J.: Learning to Optimize. arXiv e-prints arXiv:1606.01885 (2016)

  21. Li, L., Talwalkar, A.: Random Search and Reproducibility for Neural Architecture Search. arXiv e-prints arXiv:1902.07638 (2019)

  22. Liu, C., et al.: Progressive Neural Architecture Search. arXiv e-prints arXiv:1712.00559 (2017)

  23. Liu, H., Simonyan, K., Yang, Y.: DARTS: Differentiable Architecture Search. arXiv e-prints arXiv:1806.09055 (2018)

  24. Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., Zhang, C.: Learning efficient convolutional networks through network slimming. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2017)


  25. Luo, J.H., Wu, J., Lin, W.: ThiNet: a filter level pruning method for deep neural network compression. In: ICCV, pp. 5068–5076 (2017). https://doi.org/10.1109/ICCV.2017.541

  26. Pham, H., Guan, M.Y., Zoph, B., Le, Q.V., Dean, J.: Efficient neural architecture search via parameter sharing. In: ICML, pp. 4092–4101 (2018). http://proceedings.mlr.press/v80/pham18a.html

  27. Real, E., Aggarwal, A., Huang, Y., Le, Q.V.: Regularized evolution for image classifier architecture search. In: AAAI, pp. 4780–4789 (2019). https://doi.org/10.1609/aaai.v33i01.33014780

  28. Shin, R., Packer, C., Song, D.: Differentiable neural network architecture search. In: International Conference on Learning Representations, Workshop Track (2018). https://openreview.net/forum?id=BJ-MRKkwG

  29. Suganuma, M., Shirakawa, S., Nagao, T.: A genetic programming approach to designing convolutional neural network architectures. In: IJCAI, pp. 5369–5373 (2018). https://doi.org/10.24963/ijcai.2018/755

  30. Szegedy, C., et al.: Going deeper with convolutions. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9 (2015). https://doi.org/10.1109/CVPR.2015.7298594

  31. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: CVPR, pp. 2818–2826 (2016). https://doi.org/10.1109/CVPR.2016.308

  32. Xie, L., Yuille, A.: Genetic CNN. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 1388–1397 (2017). https://doi.org/10.1109/ICCV.2017.154

  33. Xu, Y., et al.: PC-DARTS: partial channel connections for memory-efficient architecture search. In: International Conference on Learning Representations (ICLR) (2020). https://openreview.net/forum?id=BJlS634tPr

  34. Yao, L., Xu, H., Zhang, W., Liang, X., Li, Z.: SM-NAS: Structural-to-Modular Neural Architecture Search for Object Detection. arXiv e-prints arXiv:1911.09929 (2019)

  35. Zoph, B., Le, Q.V.: Neural Architecture Search with Reinforcement Learning. arXiv e-prints arXiv:1611.01578 (2016)

  36. Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018



Author information

Corresponding author

Correspondence to Dong Wang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Feng, Q., Xu, K., Li, Y., Sun, Y., Wang, D. (2021). Edge-Wise One-Level Global Pruning on NAS Generated Networks. In: Ma, H., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2021. Lecture Notes in Computer Science, vol. 13022. Springer, Cham. https://doi.org/10.1007/978-3-030-88013-2_1

  • DOI: https://doi.org/10.1007/978-3-030-88013-2_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88012-5

  • Online ISBN: 978-3-030-88013-2

  • eBook Packages: Computer Science (R0)
