
ranx: A Blazing-Fast Python Library for Ranking Evaluation and Comparison

Conference paper

In: Advances in Information Retrieval (ECIR 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13186)

Abstract

This paper presents ranx, a Python evaluation library for Information Retrieval built on top of Numba. ranx provides a user-friendly interface to the most common ranking evaluation metrics, such as MAP, MRR, and NDCG. Moreover, it offers a convenient way of managing the evaluation results, comparing different runs, performing statistical tests between them, and exporting LaTeX tables ready to be used in scientific publications, all in a few lines of code. The efficiency brought by Numba, a just-in-time compiler for Python code, makes the adoption of ranx convenient even for industrial applications.
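For readers unfamiliar with the metrics the abstract lists, the sketch below gives plain-Python implementations of the standard NDCG and MRR definitions. This is only an illustration of the metrics themselves, not ranx's actual code, which is Numba-compiled for speed; the function and variable names here are our own.

```python
import math

def dcg(relevances):
    # Discounted Cumulative Gain: graded relevance discounted by
    # log2(rank + 1), with ranks starting at 1.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(qrels, ranking, k=10):
    # qrels: dict mapping doc_id -> graded relevance for one query.
    # ranking: list of doc_ids ordered by the system's score.
    gains = [qrels.get(doc_id, 0) for doc_id in ranking[:k]]
    ideal = sorted(qrels.values(), reverse=True)[:k]
    ideal_dcg = dcg(ideal)
    return dcg(gains) / ideal_dcg if ideal_dcg > 0 else 0.0

def mrr(qrels, ranking):
    # Reciprocal rank of the first relevant document; 0 if none is retrieved.
    for rank, doc_id in enumerate(ranking, start=1):
        if qrels.get(doc_id, 0) > 0:
            return 1.0 / rank
    return 0.0
```

A system-level score, as computed by evaluation tools in this family, is simply the mean of these per-query values over all queries in the test collection.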


Notes

  1. https://github.com/usnistgov/trec_eval.

  2. https://github.com/AmenRa/ranx.


Author information

Correspondence to Elias Bassani.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Bassani, E. (2022). ranx: A Blazing-Fast Python Library for Ranking Evaluation and Comparison. In: Hagen, M., et al. Advances in Information Retrieval. ECIR 2022. Lecture Notes in Computer Science, vol 13186. Springer, Cham. https://doi.org/10.1007/978-3-030-99739-7_30


  • DOI: https://doi.org/10.1007/978-3-030-99739-7_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-99738-0

  • Online ISBN: 978-3-030-99739-7

