Abstract
Experimental evaluation and comparison of techniques, algorithms, approaches, or complete systems is crucial for assessing the practical impact of research results. The quality of published experimental results is often limited for several reasons, such as limited time, the unavailability of standard benchmarks, or a shortage of computing resources. Moreover, achieving an independent, consistent, complete, and insightful assessment of different alternatives in the same domain is a time- and resource-consuming task that must also be repeated periodically to remain up-to-date. In this paper, we coin the notion of Liquid Benchmarks: online, public services that provide collaborative platforms unifying the efforts of peer researchers from all over the world, simplifying the task of performing high-quality experimental evaluations and guaranteeing a transparent scientific crediting process.
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Sakr, S., Casati, F. (2011). Liquid Benchmarks: Towards an Online Platform for Collaborative Assessment of Computer Science Research Results. In: Nambiar, R., Poess, M. (eds) Performance Evaluation, Measurement and Characterization of Complex Systems. TPCTC 2010. Lecture Notes in Computer Science, vol 6417. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-18206-8_2
Print ISBN: 978-3-642-18205-1
Online ISBN: 978-3-642-18206-8