DOI: 10.1145/2124295.2124314
research-article

Collaborative ranking

Published: 08 February 2012

ABSTRACT

Typical recommender systems use the root mean squared error (RMSE) between the predicted and actual ratings as the evaluation metric. We argue that RMSE is not an optimal choice for this task, especially when only a few (top) items are recommended to each user. Instead, we propose using a ranking metric, namely normalized discounted cumulative gain (NDCG), as a better evaluation metric for this task. Borrowing ideas from the learning-to-rank community for web search, we propose novel models that approximately optimize NDCG for the recommendation task. Our models are essentially variations on matrix factorization models in which we additionally learn the features associated with the users and the items for the ranking task. Experimental results on a number of standard collaborative filtering data sets validate our claims. The results also show the accuracy and efficiency of our models and the benefits of learning features for ranking.
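To make the proposed evaluation metric concrete, the following is a minimal sketch (not the authors' code) of NDCG@k for a single user, using the standard log2 rank discount: items are ordered by a model's predicted scores, and the discounted gain of that ordering is normalized by the gain of the ideal ordering (items sorted by true rating). Function names here are illustrative.

```python
import math

def dcg_at_k(ratings, k):
    # Discounted cumulative gain: each item's relevance is discounted
    # by log2 of its (1-based) rank position plus one.
    return sum(r / math.log2(i + 2) for i, r in enumerate(ratings[:k]))

def ndcg_at_k(true_ratings, predicted_scores, k=10):
    # Order items by predicted score (descending), then compare the DCG
    # of that ordering against the DCG of the ideal ordering.
    order = sorted(range(len(true_ratings)),
                   key=lambda i: predicted_scores[i], reverse=True)
    ranked = [true_ratings[i] for i in order]
    ideal = sorted(true_ratings, reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked, k) / ideal_dcg if ideal_dcg > 0 else 0.0
```

A model that ranks the user's items in the same order as the true ratings scores 1.0 regardless of the magnitude of its prediction errors, which is exactly why NDCG and RMSE can disagree about model quality.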


Index Terms

  1. Collaborative ranking

Published in

WSDM '12: Proceedings of the fifth ACM international conference on Web search and data mining
February 2012, 792 pages
ISBN: 9781450307475
DOI: 10.1145/2124295

        Copyright © 2012 ACM


        Publisher

        Association for Computing Machinery

        New York, NY, United States


Acceptance Rates

Overall acceptance rate: 498 of 2,863 submissions, 17%
