A glance at in-context learning

  • Letter

Frontiers of Computer Science



Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 62206048), Natural Science Foundation of Jiangsu Province (BK20220819), Young Elite Scientists Sponsorship Program of Jiangsu Association for Science and Technology (Tj-2022-027), and the Big Data Computing Center of Southeast University.

Author information

Corresponding author

Correspondence to Xu Yang.

Ethics declarations

Competing interests: The authors declare that they have no competing interests or financial conflicts to disclose.

About this article

Cite this article

Wu, Y., Yang, X. A glance at in-context learning. Front. Comput. Sci. 18, 185347 (2024). https://doi.org/10.1007/s11704-024-40013-9
