DOI: 10.1145/3539618.3591801
SIGIR '23 Short Paper

A Preference Judgment Tool for Authoritative Assessment

Published: 18 July 2023

ABSTRACT

Preference judgments have been established as an effective method for offline evaluation of information retrieval systems, with advantages over graded or binary relevance judgments. Graded judgments assign each document a pre-defined grade level, while preference judgments involve assessing a pair of items presented side by side and indicating which is better. However, preference judgments may require a larger number of judgments, and the evaluation measures available for them are limited. In this study, we present a new preference judgment tool called JUDGO, designed for expert assessors and researchers. The tool is supported by a new heap-like preference judgment algorithm that assumes transitivity and allows for ties. An earlier version of the tool was employed by NIST to determine up to the top-10 best items for each of the 38 topics of the TREC 2022 Health Misinformation track, with over 2,200 judgments collected. The current version has been applied in a separate research study to collect almost 10,000 judgments, with multiple assessors completing each topic. The code and resources are available at https://judgo-system.github.io.
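The abstract only names the heap-like procedure; the full algorithm is described in the paper itself. As a rough, hypothetical illustration of how a max-heap driven by pairwise human judgments can surface a top-k list under a transitivity assumption, consider the Python sketch below. The judge callback, its "left"/"right"/"tie" verdicts, the tie handling, and the judgment cache are assumptions made for this example, not JUDGO's actual implementation.

```python
from typing import Callable, Dict, List, Tuple

# One pairwise preference judgment: returns "left", "right", or "tie".
Judge = Callable[[str, str], str]

def top_k_by_preference(docs: List[str], k: int, judge: Judge) -> List[str]:
    """Heap-style top-k selection driven by pairwise preference judgments.

    A max-heap is built with the human judgment as comparator, then popped k
    times. Transitivity is what lets this limited set of comparisons determine
    the top-k; a tie simply leaves the incumbent document in place.
    """
    cache: Dict[Tuple[str, str], str] = {}  # avoid asking about the same ordered pair twice

    def prefers(a: str, b: str) -> bool:
        """True iff a is strictly preferred to b (a tie keeps the incumbent b)."""
        if (a, b) not in cache:
            cache[(a, b)] = judge(a, b)
        return cache[(a, b)] == "left"

    heap = list(docs)

    def sift_down(i: int, n: int) -> None:
        # Standard max-heap sift-down, with comparisons delegated to the assessor.
        while True:
            left, right, best = 2 * i + 1, 2 * i + 2, i
            if left < n and prefers(heap[left], heap[best]):
                best = left
            if right < n and prefers(heap[right], heap[best]):
                best = right
            if best == i:
                return
            heap[i], heap[best] = heap[best], heap[i]
            i = best

    for i in range(len(heap) // 2 - 1, -1, -1):  # bottom-up heapify
        sift_down(i, len(heap))

    ranked: List[str] = []
    while heap and len(ranked) < k:
        ranked.append(heap[0])      # current best of the remaining documents
        heap[0] = heap[-1]          # move the last element to the root, then restore heap order
        heap.pop()
        sift_down(0, len(heap))
    return ranked
```

Under transitivity, building the heap costs O(n) comparisons and each extraction O(log n), which is why a top-10 can be assessed with far fewer judgments than a complete ranking of the pool would require.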

Supplemental Material

SIGIR23-dep3093.mp4 (MP4, 45.3 MB)


Published in

SIGIR '23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2023, 3567 pages
ISBN: 9781450394086
DOI: 10.1145/3539618

Copyright © 2023 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery
New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 792 of 3,983 submissions, 20%