
Reverse-engineering conference rankings: what does it take to make a reputable conference?


Abstract

In recent years, several national and community-driven conference rankings have been compiled. These rankings are often taken as indicators of reputation and used for a variety of purposes, such as evaluating the performance of academic institutions and individual scientists, or selecting target conferences for paper submissions. Current rankings are based on a combination of objective criteria and subjective opinions that are collated and reviewed through largely manual processes. In this setting, the aim of this paper is to shed light on the following question: to what extent do existing conference rankings reflect objective criteria, specifically submission and acceptance statistics and bibliometric indicators? The paper considers three conference rankings in the field of Computer Science: an Australian national ranking, a Brazilian national ranking, and an informal community-built ranking. It is found that in all three cases bibliometric indicators are the most important determinants of rank. It is also found that in all rankings, top-tier conferences can be identified with relatively high accuracy from acceptance rates and bibliometric indicators. On the other hand, acceptance rates and bibliometric indicators fail to discriminate between mid-tier and bottom-tier conferences.
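As a rough, hypothetical sketch of the kind of analysis the abstract describes, the Python snippet below fits a decision-tree classifier that tries to predict a conference's tier from its acceptance rate and two bibliometric indicators. The feature set, the synthetic data, and the scikit-learn-based setup are illustrative assumptions only, not the authors' actual data or method.

# Minimal, hypothetical sketch: predicting a conference's tier from
# acceptance rate and bibliometric indicators with a decision tree.
# All features and data below are synthetic assumptions for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 300  # hypothetical number of conferences

# Hypothetical objective indicators per conference:
#   acceptance_rate  - fraction of submissions accepted
#   cites_per_paper  - average citations per published paper
#   h_index          - h-index computed over the conference's papers
acceptance_rate = rng.uniform(0.10, 0.60, n)
cites_per_paper = rng.gamma(shape=2.0, scale=5.0, size=n)
h_index = rng.integers(5, 80, n)

# Assign synthetic tiers (A/B/C) so that top-tier conferences tend to have
# low acceptance rates and high citation impact, loosely mirroring the
# finding that these indicators separate the top tier most cleanly.
score = (1 - acceptance_rate) * 2 + cites_per_paper / 10 + h_index / 40
tier = np.where(score > np.quantile(score, 0.8), "A",
       np.where(score > np.quantile(score, 0.4), "B", "C"))

X = np.column_stack([acceptance_rate, cites_per_paper, h_index])
X_train, X_test, y_train, y_test = train_test_split(
    X, tier, test_size=0.3, random_state=0, stratify=tier)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

A shallow tree is used so that the learned splits (for example, a threshold on acceptance rate) remain easy to inspect.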


Notes

  1. http://www.core.edu.au/.

  2. Subsequently, tiers A+ and A were merged and tier L was removed.

  3. http://www.arc.gov.au/era/era_2010.htm.

  4. http://www.latin.dcc.ufmg.br/perfilccranking/.

  5. http://www3.ntu.edu.sg/home/assourav/crank.htm.

  6. Multiple versions of this ranking circulate on the Web; its real origin and ranking methodology are unknown.

  7. Note that the taxonomy used to categorize conferences into subdisciplines differs slightly from the one used in Microsoft Academic Search.


Acknowledgments

The authors thank Luciano García-Bañuelos, Marju Valge, Svetlana Vorotnikova and Karina Kisselite for their input during the initial phase of this work. This work was partially funded by the EU FP7 project LiquidPublication (FET-Open grant number 213360).

Author information


Corresponding author

Correspondence to Peep Küngas.


Cite this article

Küngas, P., Karus, S., Vakulenko, S. et al. Reverse-engineering conference rankings: what does it take to make a reputable conference? Scientometrics 96, 651–665 (2013). https://doi.org/10.1007/s11192-012-0938-8

