DOI: 10.1145/2684822.2685308
Research article

Listwise Approach for Rank Aggregation in Crowdsourcing

Published: 02 February 2015

Abstract

Inferring a gold-standard ranking over a set of objects, such as documents or images, is a key task in building test collections for applications like Web search and recommender systems. Crowdsourcing services provide an efficient and inexpensive way to collect judgments by distributing the labeling to sets of annotators. We therefore study the problem of finding a consensus ranking from crowdsourced judgments. In contrast to conventional rank aggregation methods, which minimize the distance between the predicted ranking and the input judgments from either a pointwise or a pairwise perspective, we argue that it is critical to measure this distance in a listwise way, which emphasizes the importance of position in a ranking. We therefore introduce a new listwise approach in which ranking-measure-based objective functions are optimized. We also incorporate annotator quality into the model, since the reliability of annotators can vary significantly in crowdsourcing. For optimization, we transform the problem into a Linear Sum Assignment Problem and solve it with an efficient algorithm, named CrowdAgg, that is guaranteed to find the optimal solution. Experimental results on two benchmark data sets from different crowdsourcing tasks show that our algorithm is substantially more effective, efficient, and robust than traditional methods.
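
The key computational step described above is casting the listwise, ranking-measure-based objective as a Linear Sum Assignment Problem (LSAP). The following is a minimal sketch of that reduction, not the paper's CrowdAgg algorithm: it assumes graded relevance labels from several annotators and hypothetical annotator-quality weights, builds a DCG-style item-by-position benefit matrix, and solves the assignment with SciPy's Hungarian-method solver. All variable and function names are illustrative.

    # Minimal sketch (assumed formulation, not the paper's CrowdAgg implementation):
    # aggregate crowdsourced graded judgments into a consensus ranking by casting
    # a DCG-style listwise objective as a Linear Sum Assignment Problem.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def consensus_ranking(labels, annotator_quality):
        """labels: (n_annotators, n_items) graded judgments (e.g., on a 0-2 scale).
        annotator_quality: (n_annotators,) non-negative reliability weights (assumed known here)."""
        n_items = labels.shape[1]
        w = np.asarray(annotator_quality, dtype=float)
        w = w / w.sum()
        # Quality-weighted gain per item, using the DCG-style gain 2^label - 1.
        item_gain = w @ (2.0 ** labels - 1.0)           # shape: (n_items,)
        # Position discounts 1 / log2(pos + 1) for positions 1..n_items.
        discount = 1.0 / np.log2(np.arange(2, n_items + 2))
        # Benefit of placing item i at position j (rows: items, columns: positions).
        benefit = np.outer(item_gain, discount)
        # Solve the LSAP: one-to-one item-to-position assignment of maximum total benefit.
        rows, cols = linear_sum_assignment(benefit, maximize=True)
        # cols[i] is the position assigned to item rows[i]; invert to get a ranked list.
        order = np.empty(n_items, dtype=int)
        order[cols] = rows
        return order  # item indices from best (position 1) to worst

    # Example: 3 annotators judging 4 items on a 0-2 scale.
    labels = np.array([[2, 1, 0, 1],
                       [2, 0, 1, 1],
                       [1, 1, 0, 2]])
    quality = np.array([1.0, 0.8, 0.5])
    print(consensus_ranking(labels, quality))            # prints [0 3 1 2]

For a benefit matrix that factors into an item gain times a position discount, as above, the optimal assignment reduces to sorting items by aggregated gain; the LSAP formulation becomes essential when the benefit of placing an item at a position depends on both jointly, as in the listwise objectives the paper optimizes.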



    Published In

    WSDM '15: Proceedings of the Eighth ACM International Conference on Web Search and Data Mining
    February 2015
    482 pages
    ISBN: 9781450333177
    DOI: 10.1145/2684822


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. crowdsourced labeling
    2. evaluation measures
    3. rank aggregation

    Qualifiers

    • Research-article

    Conference

    WSDM 2015

    Acceptance Rates

    WSDM '15 paper acceptance rate: 39 of 238 submissions (16%)
    Overall acceptance rate: 498 of 2,863 submissions (17%)



    Cited By

    • (2023) Robust and High-Accessibility Ranking Method for Crowdsourcing-Based Decision Making. Group Decision and Negotiation, 32(5):1211-1236, 27 Jun 2023. DOI: 10.1007/s10726-023-09840-2
    • (2020) Is Rank Aggregation Effective in Recommender Systems? An Experimental Analysis. ACM Transactions on Intelligent Systems and Technology, 11(2):1-26, 10 Jan 2020. DOI: 10.1145/3365375
    • (2020) New Model for Identifying Critical Success Factors Influencing BIM Adoption from Precast Concrete Manufacturers' View. Journal of Construction Engineering and Management, 146(4), Apr 2020. DOI: 10.1061/(ASCE)CO.1943-7862.0001773
    • (2017) How to Find the Best Rated Items on a Likert Scale and How Many Ratings Are Enough. Database and Expert Systems Applications, pages 351-359, 2 Aug 2017. DOI: 10.1007/978-3-319-64471-4_28
    • (2015) Aggregation of Crowdsourced Ordinal Assessments and Integration with Learning to Rank. Proceedings of the 24th ACM International Conference on Information and Knowledge Management (CIKM '15), pages 1391-1400, 17 Oct 2015. DOI: 10.1145/2806416.2806492
