
A Survey of Benchmarks for Graph-Processing Systems

  • Chapter

Part of the book series: Data-Centric Systems and Applications ((DCSA))

Abstract

Benchmarking is a process that informs the public about the capabilities of systems under test, exposes expected and unexpected system bottlenecks, and promises to facilitate system tuning and the design of new systems. In this chapter, we survey benchmarking approaches for graph-processing systems. First, we describe the main features of a benchmark for graph-processing systems. Then, we systematically survey, across these features, a diverse set of benchmarks for RDF databases, graph databases, and parallel and distributed graph-processing systems, as well as data-only benchmarks. Our survey traces not only the important benchmarks but also their innovative approaches and how their core ideas evolved from earlier benchmarking efforts. Finally, we identify ongoing and future research directions for benchmarking initiatives.




Author information

Correspondence to Angela Bonifati.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Bonifati, A., Fletcher, G., Hidders, J., Iosup, A. (2018). A Survey of Benchmarks for Graph-Processing Systems. In: Fletcher, G., Hidders, J., Larriba-Pey, J. (eds) Graph Data Management. Data-Centric Systems and Applications. Springer, Cham. https://doi.org/10.1007/978-3-319-96193-4_6


  • DOI: https://doi.org/10.1007/978-3-319-96193-4_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-96192-7

  • Online ISBN: 978-3-319-96193-4

  • eBook Packages: Computer Science (R0)
