
Multi-Level Cell Progressive Differentiable Architecture Search to Improve Image Classification Accuracy

Published in the Journal of Signal Processing Systems.

Abstract

In recent years, neural architecture search (NAS) has made significant progress in image recognition. Among NAS approaches, differentiable methods offer clear advantages over other search methods in computational cost and accuracy for image classification. However, differentiable methods are usually built from a single cell type, which cannot efficiently extract features throughout the network. To address this problem, we propose a multi-level cell progressive differentiable method that allows cells to take different types according to their level in the network. In differentiable methods, the gap between the search network and the evaluation network is large and their correlation is low, so we design an algorithm to improve the distribution of architecture parameters. We also optimize the loss function and apply an additional-action regularization method to improve deep network performance. The method achieves good search and classification results on CIFAR10 and ImageNet (mobile setting).
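The abstract builds on differentiable architecture search, in which each edge of a cell computes a softmax-weighted mixture of candidate operations and the architecture parameters are learned by gradient descent, then discretized into a single operation per edge. The following is a minimal sketch of that continuous relaxation only; the operation set and toy operations are illustrative placeholders, not the paper's actual multi-level search space.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over architecture parameters."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Candidate operations on one edge of a cell (illustrative stand-ins,
# not the paper's operation set).
OPS = {
    "skip": lambda x: x,
    "conv_like": lambda x: 0.5 * x,   # stand-in for a convolution
    "zero": lambda x: np.zeros_like(x),
}

def mixed_op(x, alpha):
    """Continuous relaxation: weight each candidate operation by
    softmax(alpha) and sum the results."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, OPS.values()))

def discretize(alpha):
    """After search, keep the operation with the largest weight."""
    return list(OPS)[int(np.argmax(alpha))]

alpha = np.array([2.0, 0.5, -1.0])   # learnable architecture parameters
y = mixed_op(np.ones(4), alpha)      # forward pass through the mixture
selected = discretize(alpha)         # -> "skip" (largest weight)
```

During search, `alpha` is updated by gradient descent alongside the network weights; the paper's contribution concerns how these parameters are distributed and regularized across cells of different levels.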


Figures 1–7 (captions not available in this preview).



Author information


Correspondence to Yugang Shan or Jie Yuan.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, Z., Shan, Y. & Yuan, J. Multi-Level Cell Progressive Differentiable Architecture Search to Improve Image Classification Accuracy. J Sign Process Syst 93, 689–699 (2021). https://doi.org/10.1007/s11265-021-01647-1

