Abstract
Advertising (ads) and recommendation services are important for companies to drive their business objectives and improve user loyalty. A key strategy behind these services is semantic modeling, which extracts useful knowledge from text. Large language models (LLMs) such as GPT-3 and LaMDA have strong natural language understanding capabilities, and their text embeddings achieve excellent performance on a range of NLP tasks. Despite this potential, there has been little discussion of whether LLM text embeddings can benefit ads and recommendation services. To explore the use of GPT embeddings for ads and recommendation, we propose three strategies that integrate LLM knowledge into basic pre-trained language models (PLMs) and improve their performance. These strategies treat the GPT embedding as a feature (EaaF) to enrich text semantics, as a regularization (EaaR) to guide the aggregation of text token embeddings, and as a pre-training task (EaaP) to replicate the capability of LLMs, respectively. Our experiments demonstrate that, by incorporating GPT embeddings, basic PLMs improve their performance on both ads and recommendation tasks. Our code is available at https://github.com/Wenjun-Peng/GPT4SM
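To make the first two strategies concrete, the following is a minimal PyTorch sketch (not the authors' released code): the module name `GPTEnhancedScorer`, the embedding dimensions, and the way the loss terms are combined are all illustrative assumptions. It scores a text by concatenating a PLM sentence embedding with a projected GPT embedding (EaaF) and adds a cosine-similarity regularizer that pulls the PLM embedding toward the GPT embedding (EaaR).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GPTEnhancedScorer(nn.Module):
    """Hypothetical sketch of EaaF + EaaR.

    plm_dim / gpt_dim are illustrative sizes, e.g. 768 for BERT-base
    and 1536 for an OpenAI text-embedding model.
    """
    def __init__(self, plm_dim=768, gpt_dim=1536, reg_weight=0.1):
        super().__init__()
        self.project = nn.Linear(gpt_dim, plm_dim)   # align GPT space with PLM space
        self.scorer = nn.Linear(plm_dim * 2, 1)      # EaaF: score concatenated features
        self.reg_weight = reg_weight

    def forward(self, plm_emb, gpt_emb, labels=None):
        gpt_proj = self.project(gpt_emb)
        # EaaF: use the projected GPT embedding as an extra feature.
        logits = self.scorer(torch.cat([plm_emb, gpt_proj], dim=-1)).squeeze(-1)
        loss = None
        if labels is not None:
            task_loss = F.binary_cross_entropy_with_logits(logits, labels.float())
            # EaaR: regularize the PLM embedding toward the GPT embedding.
            reg_loss = 1.0 - F.cosine_similarity(plm_emb, gpt_proj, dim=-1).mean()
            loss = task_loss + self.reg_weight * reg_loss
        return logits, loss

# Toy usage with random tensors standing in for real PLM and GPT embeddings.
model = GPTEnhancedScorer()
plm_emb = torch.randn(4, 768)
gpt_emb = torch.randn(4, 1536)
labels = torch.randint(0, 2, (4,))
logits, loss = model(plm_emb, gpt_emb, labels)
```

The EaaP strategy (using GPT embeddings as a pre-training target) would instead train the PLM to reproduce the GPT embedding before fine-tuning on the downstream ads or recommendation task.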
Acknowledgments
This work was supported by grants from the National Natural Science Foundation of China (Nos. 62222213 and 62072423) and the USTC Research Funds of the Double First-Class Initiative (No. YD2150002009).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Peng, W., Xu, D., Xu, T., Zhang, J., Chen, E. (2023). Are GPT Embeddings Useful for Ads and Recommendation? In: Jin, Z., Jiang, Y., Buchmann, R.A., Bi, Y., Ghiran, A.M., Ma, W. (eds) Knowledge Science, Engineering and Management. KSEM 2023. Lecture Notes in Computer Science, vol. 14120. Springer, Cham. https://doi.org/10.1007/978-3-031-40292-0_13
DOI: https://doi.org/10.1007/978-3-031-40292-0_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-40291-3
Online ISBN: 978-3-031-40292-0
eBook Packages: Computer Science, Computer Science (R0)