DOI: 10.1145/3583780.3614749

NP-SSL: A Modular and Extensible Self-supervised Learning Library with Neural Processes

Published: 21 October 2023

Abstract

Neural Processes (NPs) are a family of supervised density estimators devoted to probabilistic function approximation with meta-learning. Despite extensive research on the subject, the absence of a unified framework for NPs has led to varied architectural solutions across studies. This lack of consensus makes it challenging to reproduce and benchmark different NPs. Moreover, existing codebases mainly prioritize generative density estimation and rarely consider extending NPs to self-supervised representation learning, which has gained growing importance in data mining applications. To this end, we present NP-SSL, a modular and configurable framework with built-in support that requires minimal effort to 1) implement classical NP architectures; 2) customize specific components; 3) integrate hybrid training schemes (e.g., contrastive learning); and 4) extend NPs into a self-supervised learning toolkit that produces latent representations of data and facilitates diverse downstream predictive tasks. As an illustration, we discuss a case study that applies NP-SSL to model time-series data. We show that NP-SSL can handle different predictive tasks, such as imputation and forecasting, by a simple switch in data sampling, without significant changes to the underlying structure. We hope this study can reduce the workload of future research on leveraging NPs to tackle a broader range of real-world data mining applications. Code and documentation are available at https://github.com/zyecs/NP-SSL.
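Since the abstract summarizes the library rather than documenting its interface, the sketch below is not NP-SSL's actual API; every class and helper name in it is illustrative. It is a minimal, self-contained PyTorch rendering of the classical Conditional Neural Process (Garnelo et al., 2018), one of the architectures the abstract mentions, plus two hypothetical sampling helpers (split_imputation, split_forecasting) that show how imputation and forecasting can differ only in how context and target points are drawn.

```python
# Minimal Conditional Neural Process (CNP) sketch in PyTorch.
# NOT the NP-SSL API: all names here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        # Encoder maps each (x, y) context pair to a feature vector.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(), nn.Linear(r_dim, r_dim))
        # Decoder maps (target x, aggregated representation) to a Gaussian.
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + r_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, 2 * y_dim))  # predictive mean and raw scale

    def represent(self, x_ctx, y_ctx):
        # Permutation-invariant aggregation: encode pairs, then mean-pool.
        # This pooled vector is the latent representation a self-supervised
        # pipeline would hand to downstream predictors.
        return self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=-2)

    def forward(self, x_ctx, y_ctx, x_tgt):
        r = self.represent(x_ctx, y_ctx)                   # (B, r_dim)
        r = r.unsqueeze(-2).expand(*x_tgt.shape[:-1], -1)  # broadcast to targets
        mean, raw_scale = self.decoder(
            torch.cat([x_tgt, r], dim=-1)).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, F.softplus(raw_scale) + 1e-4)

# Switching tasks amounts to switching how indices split into context/target.
def split_imputation(n, n_ctx):
    perm = torch.randperm(n)   # random observed subset; targets are the gaps
    return perm[:n_ctx], perm[n_ctx:]

def split_forecasting(n, n_ctx):
    idx = torch.arange(n)      # past as context, future as target
    return idx[:n_ctx], idx[n_ctx:]

# One training step on a toy series: maximize target log-likelihood.
x = torch.linspace(0, 1, 50).view(1, 50, 1)
y = torch.sin(6 * x) + 0.1 * torch.randn_like(x)
model = CNP()
ctx, tgt = split_forecasting(50, 30)  # or split_imputation(50, 30)
dist = model(x[:, ctx], y[:, ctx], x[:, tgt])
loss = -dist.log_prob(y[:, tgt]).mean()
loss.backward()
```

Only the split_* call changes between the two tasks; the model, loss, and training step are untouched, which is the "simple switch in data sampling" the abstract describes.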

Published In

CIKM '23: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management
October 2023, 5508 pages
ISBN: 9798400701245
DOI: 10.1145/3583780

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. neural processes
2. probabilistic meta-learning
3. self-supervised learning

Qualifiers

• Short-paper

Conference

CIKM '23

Acceptance Rates

Overall Acceptance Rate: 1,861 of 8,427 submissions, 22%
