References
Radford A, Kim J W, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J, Krueger G, Sutskever I. Learning transferable visual models from natural language supervision. In: Proceedings of the 38th International Conference on Machine Learning. 2021, 8748–8763
Sun K, Luo X, Luo M Y. A survey of pretrained language models. In: Proceedings of International Conference on Knowledge Science, Engineering and Management. 2022, 442–456
Brown T B, Mann B, Ryder N, Subbiah M, Kaplan J D, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler D M, Wu J, Winter C, Hesse C, Chen M, Sigler E, Litwin M, Gray S, Chess B, Clark J, Berner C, McCandlish S, Radford A, Sutskever I, Amodei D. Language models are few-shot learners. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. 2020, 159
Hofstadter D R, Sander E. Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. New York: Basic Books, 2013
Xie S M, Raghunathan A, Liang P, Ma T. An explanation of in-context learning as implicit Bayesian inference. In: Proceedings of the 10th International Conference on Learning Representations. 2022
Yang X, Wu Y, Yang M, Chen H, Geng X. Exploring diverse in-context configurations for image captioning. In: Proceedings of the 37th Conference on Neural Information Processing Systems. 2023
Wang L, Li L, Dai D, Chen D, Zhou H, Meng F, Zhou J, Sun X. Label words are anchors: An information flow perspective for understanding in-context learning. In: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 9840–9855
Achiam J, Adler S, Agarwal S, Ahmad L, Akkaya I, Aleman F L, Almeida D, Altenschmidt J, Altman S, Anadkat S, et al. GPT-4 technical report. 2023, arXiv preprint arXiv:2303.08774
Li L, Peng J, Chen H, Gao C, Yang X. How to configure good in-context sequence for visual question answering. 2023, arXiv preprint arXiv:2312.01571
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant No. 62206048), Natural Science Foundation of Jiangsu Province (BK20220819), Young Elite Scientists Sponsorship Program of Jiangsu Association for Science and Technology (Tj-2022-027), and the Big Data Computing Center of Southeast University.
Ethics declarations
Competing interests The authors declare that they have no competing interests or financial conflicts to disclose.
Cite this article
Wu, Y., Yang, X. A glance at in-context learning. Front. Comput. Sci. 18, 185347 (2024). https://doi.org/10.1007/s11704-024-40013-9