Are GPT Embeddings Useful for Ads and Recommendation?

  • Conference paper

Knowledge Science, Engineering and Management (KSEM 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14120)

Abstract

Advertising (ads) and recommendation services are important for companies to drive their business objectives and improve user loyalty. A key strategy for these services is semantic modeling, which extracts useful knowledge or information from text. Large language models (LLMs) such as GPT-3 and LaMDA have strong natural language understanding capabilities, and their text embeddings achieve excellent performance on a variety of NLP tasks. Despite this potential, there has been little discussion of whether the text embeddings of LLMs can help ads and recommendation services. To explore the use of GPT embeddings for ads and recommendation, we propose three strategies that integrate the knowledge of LLMs into basic pre-trained language models (PLMs) and improve their performance. These strategies treat the GPT embedding as a feature (EaaF) to enrich text semantics, as a regularization (EaaR) to guide the aggregation of text token embeddings, and as a pre-training task (EaaP) to replicate the capabilities of the LLM. Our experiments demonstrate that, by incorporating GPT embeddings, basic PLMs improve their performance on both ads and recommendation tasks. Our code is available at https://github.com/Wenjun-Peng/GPT4SM.
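
The three strategies admit a compact sketch. Below is a minimal, hypothetical PyTorch illustration of how a precomputed GPT embedding could be combined with a basic PLM under EaaF, EaaR, and EaaP; the class name, projection layers, pooling choice, and the 768/1536 dimensions are illustrative assumptions rather than the authors' implementation (see the repository above for the actual code).

```python
# Minimal sketch of EaaF / EaaR / EaaP, assuming a Hugging Face
# BERT-style encoder and a precomputed GPT embedding per text.
# All module names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GPTEnhancedPLM(nn.Module):
    def __init__(self, plm, plm_dim=768, gpt_dim=1536):
        super().__init__()
        self.plm = plm                                # basic PLM, e.g. BERT
        self.gpt_proj = nn.Linear(gpt_dim, plm_dim)   # align GPT space with PLM space
        self.pred_head = nn.Linear(plm_dim, gpt_dim)  # predicts the GPT embedding (EaaP)

    def forward(self, input_ids, attention_mask, gpt_emb):
        out = self.plm(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]             # [CLS] text representation

        # EaaF: use the (projected) GPT embedding as an extra feature,
        # concatenated with the PLM representation for the downstream head.
        feature = torch.cat([cls, self.gpt_proj(gpt_emb)], dim=-1)

        # EaaR: regularize the aggregated token embeddings toward the
        # GPT embedding (masked mean pooling, cosine-distance penalty).
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1)
        reg_loss = 1.0 - F.cosine_similarity(pooled, self.gpt_proj(gpt_emb)).mean()

        # EaaP: train the PLM to reproduce the GPT embedding directly,
        # replicating the LLM's capability before task fine-tuning.
        pretrain_loss = F.mse_loss(self.pred_head(cls), gpt_emb)

        return feature, reg_loss, pretrain_loss
```

Each term would typically apply at a different stage: the EaaP loss during pre-training, the EaaR penalty as an auxiliary fine-tuning loss, and the EaaF feature in the downstream prediction head.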

Notes

  1. https://api.openai.com/v1/embeddings
  2. https://openai.com/blog/new-and-improved-embedding-model/
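
For context, the following is a minimal sketch of how a GPT embedding might be retrieved from the endpoint in note 1. The model name (text-embedding-ada-002, the model introduced in the announcement in note 2) and the response shape follow OpenAI's public API documentation; they are assumptions here, not details taken from the paper.

```python
# Sketch of retrieving a GPT text embedding from the OpenAI endpoint in
# note 1. Model name and response shape follow OpenAI's public docs for
# text-embedding-ada-002 (note 2), not the paper's own code.
import os
import requests

def get_gpt_embedding(text: str) -> list[float]:
    resp = requests.post(
        "https://api.openai.com/v1/embeddings",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "text-embedding-ada-002", "input": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"][0]["embedding"]
```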

Acknowledgments

This work was supported by grants from the National Natural Science Foundation of China (Nos. 62222213 and 62072423) and by the USTC Research Funds of the Double First-Class Initiative (No. YD2150002009).

Author information

Corresponding author

Correspondence to Tong Xu.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Peng, W., Xu, D., Xu, T., Zhang, J., Chen, E. (2023). Are GPT Embeddings Useful for Ads and Recommendation? In: Jin, Z., Jiang, Y., Buchmann, R.A., Bi, Y., Ghiran, A.M., Ma, W. (eds) Knowledge Science, Engineering and Management. KSEM 2023. Lecture Notes in Computer Science (LNAI), vol. 14120. Springer, Cham. https://doi.org/10.1007/978-3-031-40292-0_13

  • DOI: https://doi.org/10.1007/978-3-031-40292-0_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40291-3

  • Online ISBN: 978-3-031-40292-0

  • eBook Packages: Computer Science, Computer Science (R0)
