DOI: 10.1145/3640457.3688043
Extended Abstract

Leveraging LLM generated labels to reduce bad matches in job recommendations

Published: 08 October 2024

Abstract

Negative signals are increasingly employed to enhance recommendation quality. However, explicit negative feedback is often sparse and may disproportionately reflect the preferences of more vocal users. Commonly used implicit negative feedback, such as impressions without positive interactions, also fails to accurately capture users’ true negative preferences, because users mainly pursue information they already consider interesting. In this work, we present an approach that leverages fine-tuned Large Language Models (LLMs) to evaluate recommendation quality and generate negative signals at scale while maintaining cost efficiency. We demonstrate significant improvements in our recommendation systems by deploying a traditional classifier trained on LLM-generated labels.
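The abstract describes a label-then-distill pattern: a fine-tuned LLM acts as an offline judge over candidate/job pairs to produce match-quality labels at scale, and a traditional classifier is then trained on those labels for cheap online scoring. The sketch below is an illustrative reconstruction of that pattern under assumptions, not the authors' implementation: the LLM judge is stubbed with a trivial keyword heuristic so the example runs end to end, and all data, function names, and the classifier choice are hypothetical.

```python
# Illustrative sketch (not the paper's code): an LLM judge labels
# (candidate, job) pairs as good/bad matches, and a lightweight classifier
# is trained on those labels so matches can be scored cheaply at serving time.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def llm_judge(candidate_profile: str, job_posting: str) -> int:
    """Placeholder for a fine-tuned LLM judge (hypothetical).

    In practice this would prompt the fine-tuned model to assess match
    quality and parse its verdict; here it is stubbed with a keyword-overlap
    heuristic purely so the example runs end to end.
    """
    overlap = set(candidate_profile.lower().split()) & set(job_posting.lower().split())
    return 1 if len(overlap) >= 2 else 0  # 1 = good match, 0 = bad match

# Hypothetical (candidate, job) pairs, e.g. sampled from recommendation logs.
pairs = [
    ("senior python backend engineer", "backend engineer python microservices"),
    ("registered nurse intensive care", "python data engineer spark"),
    ("truck driver class a license", "senior java developer fintech"),
    ("java developer spring boot", "java backend developer spring"),
]

# Step 1: generate labels at scale with the (stubbed) LLM judge.
labels = [llm_judge(profile, job) for profile, job in pairs]

# Step 2: train a cheap traditional classifier on the LLM-generated labels.
texts = [f"{profile} [SEP] {job}" for profile, job in pairs]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Step 3: score a new candidate/job pair online without invoking the LLM.
print(clf.predict_proba(["python engineer [SEP] nurse practitioner clinic"])[:, 1])
```

The design choice the abstract implies is that the LLM is only used offline to generate training labels, while the inexpensive distilled classifier filters bad matches at serving time.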

Supplemental Material

PDF File
Appendix on our data privacy practices.



    Published In

    RecSys '24: Proceedings of the 18th ACM Conference on Recommender Systems
    October 2024, 1438 pages

    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 08 October 2024


    Author Tags

    1. Large Language Models
    2. Recommender systems

    Qualifiers

    • Extended-abstract
    • Research
    • Refereed limited


    Acceptance Rates

    Overall Acceptance Rate 254 of 1,295 submissions, 20%

