Abstract
We focus on the choice of gains in nDCG, and in particular on the mismatch between the large differences in the exponential gains assigned to high relevance labels and the not-so-small confusion, or inconsistency, between those labels in judged data. We show that better gains can be derived from data by measuring label inconsistency, to the point that virtually indistinguishable labels receive equal gains. The derived optimal gains yield a better nDCG objective for training Learning to Rank algorithms.
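To make the abstract's idea concrete, here is a minimal sketch of nDCG with a pluggable gain function. The exponential gain 2^l − 1 is the standard one the abstract refers to; the `flat_top_gain` values are made up purely to illustrate the paper's point (they are not the derived optimal gains): when two high labels are nearly indistinguishable in the data, giving them nearly equal gains makes nDCG almost indifferent to swapping them.

```python
import math

def dcg(labels, gain):
    """Discounted cumulative gain of a ranked list of relevance labels."""
    return sum(gain(l) / math.log2(rank + 2) for rank, l in enumerate(labels))

def ndcg(labels, gain=lambda l: 2 ** l - 1):
    """nDCG with a pluggable gain function; default is the standard exponential gain."""
    ideal = dcg(sorted(labels, reverse=True), gain)
    return dcg(labels, gain) / ideal if ideal > 0 else 0.0

# Standard exponential gains spread the top labels far apart: 2^4-1 = 15 vs 2^3-1 = 7.
exp_gain = lambda l: 2 ** l - 1

# Hypothetical "inconsistency-aware" gains that nearly merge labels 3 and 4
# (illustrative values only, not the gains derived in the paper).
flat_top_gain = lambda l: [0, 1, 3, 7, 7.5][l]

ranking = [3, 4, 2, 0, 1]            # a system ranking that swaps labels 4 and 3 at the top
print(ndcg(ranking, exp_gain))       # the swap is penalized heavily
print(ndcg(ranking, flat_top_gain))  # the same swap costs almost nothing
```

With exponential gains the confusion between labels 3 and 4 drags nDCG well below 1, while the near-equal gains leave it close to 1, which is the behavior the paper argues a training objective should have when those labels are inconsistent in the data.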
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Metrikov, P., Pavlu, V., Aslam, J.A. (2013). Optimizing nDCG Gains by Minimizing Effect of Label Inconsistency. In: Serdyukov, P., et al. Advances in Information Retrieval. ECIR 2013. Lecture Notes in Computer Science, vol 7814. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36973-5_78
DOI: https://doi.org/10.1007/978-3-642-36973-5_78
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-36972-8
Online ISBN: 978-3-642-36973-5