Abstract
Autonomous Machine Learning (AML) refers to a learning system with the adaptive capability to evolve its structure and parameters on the fly, motivated by the need to balance the stability and plasticity of a learning system. Deep learning-based unsupervised learning is currently an active topic within AML, and multi-architecture optimization remains one of its most difficult problems. When existing algorithms must produce multiple models, they typically spend considerable time repeatedly configuring different search settings and hyperparameters to find an optimal model for each case. In this work, we propose a novel method for multi-model contrastive learning that obtains diverse unsupervised learning architectures, suitable for deployment under different resource budgets, from a single one-shot super-network. Experiments show that our algorithm improves upon the performance of existing methods.
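To make the idea concrete, the sketch below couples a weight-sharing super-network with a SimSiam-style contrastive objective: at each step, sub-networks of several widths are sampled from the shared weights and trained with the same contrastive loss, so models for different budgets emerge from one training run. This is a minimal illustrative sketch under our own assumptions (the `SliceableLinear` layer, the candidate widths, and the sandwich-style sampling are hypothetical stand-ins), not the authors' exact method.

```python
# Hedged sketch: one training step that couples a weight-sharing supernet
# with a SimSiam-style contrastive loss. All names and design choices here
# are illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SliceableLinear(nn.Linear):
    """Linear layer whose output width can be shrunk at run time (weight sharing)."""
    def forward(self, x, out_width=None):
        w, b = self.weight, self.bias
        if out_width is not None:          # keep only the first `out_width` units
            w, b = w[:out_width], b[:out_width]
        return F.linear(x, w, b)


class SuperEncoder(nn.Module):
    """Tiny supernet: the hidden width is the searchable dimension."""
    def __init__(self, in_dim=784, max_hidden=512, out_dim=128):
        super().__init__()
        self.fc1 = SliceableLinear(in_dim, max_hidden)
        # one projection head per candidate width (an illustrative choice)
        self.heads = nn.ModuleDict({str(w): nn.Linear(w, out_dim) for w in (128, 256, 512)})

    def forward(self, x, width):
        h = F.relu(self.fc1(x, out_width=width))
        return self.heads[str(width)](h)


def simsiam_loss(p, z):
    """Negative cosine similarity with a stop-gradient on the target branch."""
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()


def train_step(encoder, predictor, opt, view1, view2, widths=(128, 256, 512)):
    """Sandwich-style step: accumulate the contrastive loss over several sub-networks."""
    opt.zero_grad()
    total = 0.0
    for w in widths:                       # smallest, middle, and largest sub-network
        z1, z2 = encoder(view1, w), encoder(view2, w)
        p1, p2 = predictor(z1), predictor(z2)
        loss = 0.5 * (simsiam_loss(p1, z2) + simsiam_loss(p2, z1))
        loss.backward()                    # gradients accumulate in the shared weights
        total += loss.item()
    opt.step()
    return total / len(widths)


if __name__ == "__main__":
    enc = SuperEncoder()
    pred = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))
    opt = torch.optim.SGD(list(enc.parameters()) + list(pred.parameters()), lr=0.05)
    v1, v2 = torch.randn(32, 784), torch.randn(32, 784)  # stand-ins for two augmented views
    print(train_step(enc, pred, opt, v1, v2))
```

After training, any single width can be sliced out of the shared weights and deployed on its own, which is what allows one super-network to serve several budgets.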
Acknowledgement
This work was partially supported by the National Natural Science Foundation of China (61632011, 61876053, 62006062), the Shenzhen Foundational Research Funding (JCYJ20180507183527919), the China Postdoctoral Science Foundation (2020M670912), and the Joint Lab of HITSZ and China Merchants Securities.
Copyright information
© 2022 Springer Nature Switzerland AG
About this paper
Cite this paper
Xu, S., et al. (2022). Contrastive Learning for Multiple Models in One Supernet. In: Xu, R., Cai, C., Zhang, L.-J. (eds.) Cognitive Computing – ICCC 2021. Lecture Notes in Computer Science, vol. 12992. Springer, Cham. https://doi.org/10.1007/978-3-030-96419-1_2
DOI: https://doi.org/10.1007/978-3-030-96419-1_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-96418-4
Online ISBN: 978-3-030-96419-1