
A Max-Flow Based Approach for Neural Architecture Search

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13680)


Abstract

Neural Architecture Search (NAS) aims to automatically produce network architectures suited to specific tasks on given datasets. Unlike previous NAS strategies based on reinforcement learning, genetic algorithms, Bayesian optimization, and differentiable programming, we formulate the NAS task as a Max-Flow problem on a search space consisting of a Directed Acyclic Graph (DAG), and thus propose a novel NAS approach, called MF-NAS, which defines the search space and designs the search strategy in a fully graph-based manner. In MF-NAS, parallel edges with capacities are induced by combining different operations, including skip connections, convolutions, and pooling; the weights and capacities of these parallel edges are updated iteratively during the search. Moreover, we interpret MF-NAS from the perspective of non-parametric density estimation and show the relationship between the flow of a graph and the classification accuracy of the corresponding neural network architecture. We evaluate the proposed MF-NAS across different datasets, using the search spaces of DARTS/ENAS and NAS-Bench-201, and demonstrate its competitive efficacy.
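To make the formulation concrete, the sketch below (our illustration, not the authors' implementation) builds a toy cell-like DAG in which every candidate operation on an edge contributes a parallel edge with its own capacity, and then computes the maximum flow from the cell input to the cell output with a plain Edmonds-Karp routine. The operation names, capacity values, and the auxiliary-node encoding of parallel edges are assumptions made only for illustration; MF-NAS itself learns and updates the capacities during search.

```python
# Toy illustration of the max-flow view of a NAS cell: each candidate
# operation on an edge of the cell DAG becomes a parallel edge with its own
# capacity. Graph layout, operation names, and capacity values here are
# illustrative assumptions, not the capacities learned by MF-NAS.
from collections import defaultdict, deque


def edmonds_karp(capacity, source, sink):
    """Maximum flow on a graph given as capacity[u][v]; returns the flow value."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    # Ensure every node has an adjacency dict and every edge a reverse entry.
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0.0)
    flow = 0.0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Collect the path, find its bottleneck, and push that much flow.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck


# A tiny cell-like DAG: one input node, two intermediate nodes, one output node.
edges = [("in", "n1"), ("in", "n2"), ("n1", "n2"), ("n1", "out"), ("n2", "out")]
op_capacity = {"skip_connect": 0.2, "conv_3x3": 0.9, "max_pool_3x3": 0.4}  # assumed scores

# Parallel edges are encoded by routing each (edge, operation) pair through an
# auxiliary node, so a standard max-flow routine on a simple graph can be used.
capacity = defaultdict(dict)
for u, v in edges:
    for op, cap in op_capacity.items():
        aux = f"{u}->{v}:{op}"
        capacity[u][aux] = cap
        capacity[aux][v] = cap

print("max flow from 'in' to 'out':", edmonds_karp(capacity, "in", "out"))
```

For these toy capacities, every DAG edge carries a total parallel capacity of 1.5, so the routine reports a maximum flow of 3.0 (up to floating-point rounding). MF-NAS goes beyond this static picture by updating the edge weights and capacities iteratively during search and relating the resulting flow to the accuracy of the selected architecture.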


Notes

  1. If the architecture contains only normal cells, as in NAS-Bench-201, then \(m^* = m_{normal}^*\).
  2. For the search space in ENAS and DARTS, we set \(N = 4\), \(M = 2\), and \(K = 8\); for the search space in NAS-Bench-201, we set \(N = 3\) and \(K = 5\) without constraining \(M\) (see the sketch after these notes).
  3. Many architectures can attain the same best reward.
  4. More precisely, MF-NAS uses the search space of DARTS.
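As a back-of-the-envelope reading of the settings in Note 2 (assuming, as in common DARTS conventions, that \(N\) counts intermediate nodes, \(K\) the candidate operations per edge, and \(M\) the input edges retained per node; these interpretations are ours, not stated above), the snippet below counts how many parallel candidate edges such a cell contains.

```python
# Rough size of a DARTS-like cell search space. The roles assigned to N, M, K
# below are assumptions based on common DARTS conventions.
def candidate_parallel_edges(n_intermediate: int, k_ops: int, n_inputs: int = 2) -> int:
    """Each intermediate node may connect to the cell inputs and all earlier nodes."""
    dag_edges = sum(n_inputs + i for i in range(n_intermediate))
    return dag_edges * k_ops  # every DAG edge carries k_ops parallel operation edges


N, M, K = 4, 2, 8  # settings quoted in Note 2 for the DARTS/ENAS space
print(candidate_parallel_edges(N, K))  # 14 DAG edges * 8 ops = 112 parallel candidates
print(N * M)                           # edges kept per cell in the derived architecture: 8
```

With a single cell input (`n_inputs=1`), the same counting gives 6 DAG edges and 30 parallel candidates for the NAS-Bench-201 cell, consistent with its \(5^6 = 15{,}625\) architectures.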

References

  1. Baker, B., Gupta, O., Naik, N., Raskar, R.: Designing neural network architectures using reinforcement learning. In: ICLR (2017)
  2. Bein, W.W., Brucker, P., Tamir, A.: Minimum cost flow algorithms for series-parallel networks. Discrete Appl. Math. 10, 117–124 (1985)
  3. Bender, G., Kindermans, P., Zoph, B., Vasudevan, V., Le, Q.V.: Understanding and simplifying one-shot architecture search. In: ICML (2018)
  4. Bengio, E., Jain, M., Korablyov, M., Precup, D., Bengio, Y.: Flow network based generative models for non-iterative diverse candidate generation. In: NeurIPS (2021)
  5. Bengio, Y., Deleu, T., Hu, E.J., Lahlou, S., Tiwari, M., Bengio, E.: GFlowNet foundations. arXiv preprint arXiv:2111.09266 (2021)
  6. Bergstra, J., Bardenet, R., Bengio, Y., Kégl, B.: Algorithms for hyper-parameter optimization. In: NeurIPS (2011)
  7. Bi, K., Hu, C., Xie, L., Chen, X., Wei, L., Tian, Q.: Stabilizing DARTS with amended gradient estimation on architectural parameters. arXiv preprint arXiv:1910.11831 (2019)
  8. Bonilla, E.V., Chai, K.M.A., Williams, C.K.I.: Multi-task Gaussian process prediction. In: NeurIPS (2007)
  9. Xue, C., Hu, M., Hu, X., Li, C.-G.: Automated search space and search strategy selection for AutoML. Pattern Recognit. 124, 108474 (2022)
  10. Chen, X., Hsieh, C.J.: Stabilizing differentiable architecture search via perturbation-based regularization. In: ICLR (2020)
  11. Chen, X., Xie, L., Wu, J., Tian, Q.: Progressive differentiable architecture search: bridging the depth gap between search and evaluation. In: ICCV (2019)
  12. Chu, X., Wang, X., Zhang, B., Lu, S., Wei, X., Yan, J.: DARTS-: robustly stepping out of performance collapse without indicators. In: ICLR (2021)
  13. Coates, A., Ng, A.Y., Lee, H.: An analysis of single-layer networks in unsupervised feature learning. In: AISTATS (2011)
  14. Dong, X., Yang, Y.: One-shot neural architecture search via self-evaluated template network. In: ICCV (2019)
  15. Dong, X., Yang, Y.: Searching for a robust neural architecture in four GPU hours. In: CVPR (2019)
  16. Dong, X., Yang, Y.: NAS-Bench-201: extending the scope of reproducible neural architecture search. In: ICLR (2020)
  17. Griffin, G., Holub, A., Perona, P.: Caltech-256 object category dataset. Technical report, California Institute of Technology (2007)
  18. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
  19. Kandasamy, K., Neiswanger, W., Schneider, J., Póczos, B., Xing, E.P.: Neural architecture search with Bayesian optimisation and optimal transport. In: NeurIPS (2018)
  20. Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images. Technical report, University of Toronto (2009)
  21. de Laroussilhe, Q., Jastrzębski, S., Houlsby, N., Gesmundo, A.: Neural architecture search over a graph search space. CoRR (2018)
  22. Li, G., Qian, G., Delgadillo, I.C., Müller, M., Thabet, A., Ghanem, B.: SGAS: sequential greedy architecture search. In: CVPR (2020)
  23. Li, L., Talwalkar, A.: Random search and reproducibility for neural architecture search. In: UAI (2019)
  24. Li, L., Jamieson, K.G., DeSalvo, G., Rostamizadeh, A., Talwalkar, A.: Hyperband: a novel bandit-based approach to hyperparameter optimization. J. Mach. Learn. Res. 18, 185:1–185:52 (2017)
  25. Liu, C., et al.: Progressive neural architecture search. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11205, pp. 19–35. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01246-5_2
  26. Liu, H., Simonyan, K., Vinyals, O., Fernando, C., Kavukcuoglu, K.: Hierarchical representations for efficient architecture search. In: ICLR (2018)
  27. Liu, H., Simonyan, K., Yang, Y.: DARTS: differentiable architecture search. In: ICLR (2019)
  28. Mnih, V., et al.: Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015)
  29. Muresan, H., Oltean, M.: Fruit recognition from images using deep learning. Acta Universitatis Sapientiae Informatica (2018)
  30. Nguyen, V., Le, T., Yamada, M., Osborne, M.A.: Optimal transport kernels for sequential and parallel neural architecture search. In: ICML (2021)
  31. Nilsback, M., Zisserman, A.: Automated flower classification over a large number of classes. In: ICVGIP (2008)
  32. Pham, H., Guan, M.Y., Zoph, B., Le, Q.V., Dean, J.: Efficient neural architecture search via parameter sharing. In: ICML (2018)
  33. Real, E., Aggarwal, A., Huang, Y., Le, Q.V.: Regularized evolution for image classifier architecture search. In: AAAI (2019)
  34. Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Shi, H., Pi, R., Xu, H., Li, Z., Kwok, J., Zhang, T.: Bridging the gap between sample-based and one-shot neural architecture search with BONAS. In: NeurIPS (2020)
  36. Smith, S.L., Kindermans, P., Ying, C., Le, Q.V.: Don't decay the learning rate, increase the batch size. In: ICLR (2018)
  37. Snoek, J., Larochelle, H., Adams, R.P.: Practical Bayesian optimization of machine learning algorithms. In: NeurIPS (2012)
  38. Su, X., et al.: Prioritized architecture sampling with Monto-Carlo tree search. In: CVPR (2021)
  39. Swersky, K., Snoek, J., Adams, R.P.: Multi-task Bayesian optimization. In: NeurIPS (2013)
  40. Wang, L., Fonseca, R., Tian, Y.: Learning search space partition for black-box optimization using Monte Carlo tree search. In: NeurIPS (2020)
  41. Wang, R., Cheng, M., Chen, X., Tang, X., Hsieh, C.J.: Rethinking architecture selection in differentiable NAS. In: ICLR (2021)
  42. Wang, X., Lin, J., Zhao, J., Yang, X., Yan, J.: EAutoDet: efficient architecture search for object detection. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds.) ECCV 2022. LNCS, vol. 13680, pp. 668–684. Springer, Cham (2022)
  43. Wang, X., Xue, C., Yan, J., Yang, X., Hu, Y., Sun, K.: MergeNAS: merge operations into one for differentiable architecture search. In: IJCAI (2020)
  44. West, D.B., et al.: Introduction to Graph Theory, vol. 2. Prentice Hall, Upper Saddle River (1996)
  45. White, C., Neiswanger, W., Savani, Y.: BANANAS: Bayesian optimization with neural architectures for neural architecture search. In: AAAI (2021)
  46. Xie, S., Kirillov, A., Girshick, R.B., He, K.: Exploring randomly wired neural networks for image recognition. In: ICCV (2019)
  47. Xie, S., Zheng, H., Liu, C., Lin, L.: SNAS: stochastic neural architecture search. In: ICLR (2019)
  48. Xu, Y., et al.: PC-DARTS: partial channel connections for memory-efficient architecture search. In: ICLR (2020)
  49. Xue, C., Wang, X., Yan, J., Hu, Y., Yang, X., Sun, K.: Rethinking bi-level optimization in neural architecture search: a Gibbs sampling perspective. In: AAAI (2021)
  50. Zela, A., Elsken, T., Saikia, T., Marrakchi, Y., Brox, T., Hutter, F.: Understanding and robustifying differentiable architecture search. In: ICLR (2020)
  51. Zhou, H., Yang, M., Wang, J., Pan, W.: BayesNAS: a Bayesian approach for neural architecture search. In: ICML (2019)
  52. Zoph, B., Le, Q.V.: Neural architecture search with reinforcement learning. In: ICLR (2017)
  53. Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition. In: CVPR (2018)


Acknowledgments

J. Yan is supported by the National Key Research and Development Program of China under grant 2020AAA0107600. C.-G. Li is supported by the National Natural Science Foundation of China under grant 61876022.

Author information


Corresponding author

Correspondence to Chao Xue.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xue, C., Wang, X., Yan, J., Li, C.-G. (2022). A Max-Flow Based Approach for Neural Architecture Search. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13680. Springer, Cham. https://doi.org/10.1007/978-3-031-20044-1_39


  • DOI: https://doi.org/10.1007/978-3-031-20044-1_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20043-4

  • Online ISBN: 978-3-031-20044-1

  • eBook Packages: Computer Science, Computer Science (R0)
