Noise-Reduction for Automatically Transferred Relevance Judgments

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13390)

Abstract

The TREC Deep Learning tracks used MS MARCO Version 1 as their official training data until 2020 and switched to Version 2 in 2021. For Version 2, all previously judged documents were re-crawled. Interestingly, in the track’s 2021 edition, models trained on the new data were less effective than models trained on the old data. To investigate this phenomenon, we compare the predicted relevance probabilities of monoT5 for the two versions of the judged documents and find substantial differences. A further manual inspection reveals major content changes for some documents (e.g., the new version being off-topic). To analyze whether these changes may have contributed to the observed effectiveness drop, we conduct experiments with different document version selection strategies. Our results show that training a retrieval model on the “wrong” version can reduce the nDCG@10 by up to 75%.
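The effectiveness drop above is reported in terms of nDCG@10. As a minimal illustration of how that measure responds to a degraded ranking, the sketch below computes nDCG@10 with the linear-gain formulation of DCG; the relevance lists are hypothetical examples, not the paper's data, and TREC evaluations may use a different gain function:

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k results (linear gain)."""
    return sum(rel / math.log2(rank + 2)  # ranks are 0-based here
               for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# A ranking that places an off-topic (relevance-0) document first scores
# markedly lower than the ideal ordering of the same judgments.
print(round(ndcg_at_k([3, 2, 1, 0]), 3))  # ideal order -> 1.0
print(round(ndcg_at_k([0, 1, 2, 3]), 3))  # reversed order -> 0.614
```

Training on the "wrong" document version has an analogous effect: labels that no longer match the document content push relevant results down the ranking, which nDCG@10 penalizes sharply at the top positions.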


Notes

  1. https://paperswithcode.com/sota/ad-hoc-information-retrieval-on-trec-robust04

  2. https://github.com/webis-de/CLEF-22

  3. https://archive.readme.io/docs/memento

  4. https://github.com/grill-lab/trec-cast-tools

  5. https://github.com/castorini/pygaggle

  6. https://huggingface.co/castorini/monot5-3b-msmarco

  7. https://github.com/castorini/pygaggle

  8. https://github.com/allenai/ir_datasets


Author information


Correspondence to Maik Fröbe.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Fröbe, M., Akiki, C., Potthast, M., Hagen, M. (2022). Noise-Reduction for Automatically Transferred Relevance Judgments. In: Barrón-Cedeño, A., et al. Experimental IR Meets Multilinguality, Multimodality, and Interaction. CLEF 2022. Lecture Notes in Computer Science, vol 13390. Springer, Cham. https://doi.org/10.1007/978-3-031-13643-6_4

  • DOI: https://doi.org/10.1007/978-3-031-13643-6_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-13642-9

  • Online ISBN: 978-3-031-13643-6

  • eBook Packages: Computer Science, Computer Science (R0)
