Collaborative Benchmarking Rule-Reasoners with B-Runner

  • Conference paper
  • In: Rules and Reasoning (RuleML+RR 2024)

Abstract

Conducting experimental analyses of rule reasoners is a standard task for validating novel algorithms and systems. Nevertheless, providing robust, verifiable, and reproducible experiments can still pose a significant challenge. This paper introduces B-Runner, an open library for collaborative benchmarking that focuses on deploying extensive tests for knowledge- and rule-based systems at low cost and with high robustness. B-Runner reduces benchmarking setup time while guaranteeing experiment repeatability. It also improves the scrutability of experimental protocols, thereby enhancing the fairness of system comparisons.

Notes

  1. Reproducibility is stricter than repeatability: not only does it ensure that the experiment can be re-run, but also that re-running it yields the same result.

  2. To illustrate, consider the factbase \(F=\{P(a)\}\), the rule \(\forall x. P(x)\rightarrow R(x)\), and the boolean query \(Q=\exists y. R(y)\). The chase yields \(\{P(a), R(a)\}\), on which Q answers true. Query rewriting yields the reformulation \(\exists y. R(y)\vee P(y)\), which answers true on F (see the first sketch after these notes).

  3. Typical errors are including/omitting optimization and/or parsing time in the measurement, writing to disk or to standard output during timed runs (logging, result export), and improper cold/warm measurements [14] (see the timing sketch after these notes).

  4. To illustrate, lines 6–8 of Fig. 2 can be written in JSON as the record {"scenario": {"s10M": {"data": "data10.dlgp", "rule": "ontology.dlgp", "workload": "queries.dlgp"}}} (parsed in the last sketch after these notes).

  5. In the example, the .dlgp extension stands for the Datalog-Plus language [4].
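
To make note 2 concrete, here is a minimal Python sketch of the two query-answering strategies on that toy example. It is illustrative only: it is not B-Runner code, and all function names are ours. The chase saturates the factbase forwards; rewriting reformulates the query backwards. Both make Q answer true.

# Illustrative sketch (not B-Runner code) of the two reasoning strategies
# in note 2: forward chaining (the chase) versus query rewriting.

factbase = {("P", "a")}       # F = {P(a)}
rules = [("P", "R")]          # forall x. P(x) -> R(x), as (body, head) pairs

def chase(facts, rules):
    """Saturate the factbase by applying the rules until fixpoint."""
    saturated = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            for pred, term in list(saturated):
                if pred == body and (head, term) not in saturated:
                    saturated.add((head, term))
                    changed = True
    return saturated

def rewrite(query_pred, rules):
    """Rewrite the atomic boolean query exists y. Q(y) into a union of
    atomic queries by applying the rules backwards until fixpoint."""
    reformulation = {query_pred}
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head in reformulation and body not in reformulation:
                reformulation.add(body)
                changed = True
    return reformulation

# Materialization: Q = exists y. R(y) holds on the saturated factbase.
saturated = chase(factbase, rules)
print(saturated)                                  # {('P', 'a'), ('R', 'a')} (order may vary)
print(any(pred == "R" for pred, _ in saturated))  # True

# Rewriting: Q becomes exists y. R(y) v P(y), which holds on F itself.
reformulated = rewrite("R", rules)
print(reformulated)                               # {'R', 'P'} (order may vary)
print(any(pred in reformulated for pred, _ in factbase))  # True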
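
The cold/warm pitfall of note 3 can likewise be made concrete. The sketch below is illustrative and not taken from B-Runner; the engine is a stand-in. Warm-up runs are discarded, and the timed runs cover only query execution, excluding parsing and result export.

import statistics
import time

def run_query(engine, query):
    """Stand-in for the system under test (hypothetical interface)."""
    return engine(query)

def benchmark(engine, query, warmups=3, runs=10):
    """Warm measurement: discard warm-up runs, then time only the query."""
    for _ in range(warmups):            # warm runs fill caches, trigger JITs
        run_query(engine, query)
    timings = []
    for _ in range(runs):               # measured runs: query execution only
        start = time.perf_counter()
        run_query(engine, query)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)   # median is robust to outliers

# Example with a trivial stand-in engine:
median_s = benchmark(lambda q: sum(range(100_000)), "Q")
print(f"median runtime: {median_s * 1000:.2f} ms")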
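
Finally, the record in note 4 is ordinary JSON once keys and values are quoted. The following sketch is illustrative, not B-Runner's actual loader; it only shows that the record parses with Python's standard library.

import json

record = '''{"scenario": {"s10M": {"data": "data10.dlgp",
                                   "rule": "ontology.dlgp",
                                   "workload": "queries.dlgp"}}}'''

config = json.loads(record)
for name, files in config["scenario"].items():
    # Each scenario names its factbase, rule set, and query workload.
    print(name, files["data"], files["rule"], files["workload"])
# prints: s10M data10.dlgp ontology.dlgp queries.dlgp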

References

  1. B-runner repository (2024). https://gitlab.inria.fr/rules/brunner

  2. Angele, K., Angele, J., Simsek, U., Fensel, D.: RUBEN: a rule engine benchmarking framework. In: Rule Challenge @ RuleML+RR 2022 (2022)

  3. Baget, J.F., et al.: InteGraal: a tool for data-integration and reasoning on heterogeneous and federated sources. In: BDA 2023. Montpellier, France (2023)

  4. Baget, J., Gutierrez, A., Leclère, M., Mugnier, M., Rocher, S., Sipieter, C.: Datalog+: formats and translations for existential rules. In: RuleML (2015)

  5. Barker, A., van Hemert, J.: Scientific workflow: a survey and research directions. In: Wyrzykowski, R., Dongarra, J., Karczewski, K., Wasniewski, J. (eds.) PPAM 2007. LNCS, vol. 4967, pp. 746–753. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-68111-3_78

  6. Beeri, C., Vardi, M.Y.: A proof procedure for data dependencies. J. ACM (1984)

  7. Benedikt, M., et al.: Benchmarking the chase. In: PODS (2017)

  8. Cohen-Boulakia, S., et al.: Scientific workflows for computational reproducibility in the life sciences: status, challenges and opportunities. Futur. Gener. Comput. Syst. 75, 284–298 (2017)

  9. König, M., Leclère, M., Mugnier, M., Thomazo, M.: Sound, complete and minimal UCQ-rewriting for existential rules. Semantic Web 6(5), 451–475 (2015)

  10. Lenzerini, M.: Data integration: a theoretical perspective. In: PODS (2002)

  11. Liang, S., Fodor, P., Wan, H., Kifer, M.: OpenRuleBench: an analysis of the performance of rule engines. In: WWW (2009)

  12. Liew, C.S., Atkinson, M.P., Galea, M., Ang, T.F., Martin, P., Hemert, J.I.V.: Scientific workflows: moving across paradigms. ACM Comput. Surv. (2016)

  13. Liu, J., Lu, S., Che, D.: A survey of modern scientific workflow scheduling algorithms and systems in the era of big data. In: 2020 IEEE International Conference on Services Computing (SCC), pp. 132–141. IEEE (2020)

  14. Manegold, S., Manolescu, I.: Performance evaluation in database research: principles and experience. In: EDBT (2009)

  15. Mugnier, M., Thomazo, M.: An introduction to ontology-based query answering with existential rules. In: RR Summer School (2014)

Author information

Corresponding author

Correspondence to Federico Ulliana.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Ulliana, F., Bisquert, P., Charoensit, A., Colin, R., Tornil, F., Yeche, Q. (2024). Collaborative Benchmarking Rule-Reasoners with B-Runner. In: Kirrane, S., Šimkus, M., Soylu, A., Roman, D. (eds) Rules and Reasoning. RuleML+RR 2024. Lecture Notes in Computer Science, vol 15183. Springer, Cham. https://doi.org/10.1007/978-3-031-72407-7_3

  • DOI: https://doi.org/10.1007/978-3-031-72407-7_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72406-0

  • Online ISBN: 978-3-031-72407-7

  • eBook Packages: Computer Science, Computer Science (R0)
