Abstract
Conference publications have grown in importance both quantitatively and qualitatively, and researchers increasingly seek ways to distinguish the quality and impact of different conferences. Bibliometric indicators such as the conference impact factor have been proposed for this purpose. Meanwhile, professional associations in several countries have undertaken projects to build conference ranking systems that classify conferences according to quality measures. The lists maintained by the Computing Research and Education Association of Australasia (CORE) and the China Computer Federation (CCF) are two well-known and widely used conference rating systems of this kind. Beyond serving as guides for where researchers should publish their work, these lists can also shape researchers' publication decisions. In this paper, we examine how publication patterns in different countries have been influenced by these two lists as well as by other factors. A random-effects negative binomial regression model is used to quantify the impact of each factor.
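To make the modelling approach concrete, the sketch below fits a negative binomial regression to simulated publication counts using Python's statsmodels. All variable names and data are hypothetical assumptions, not the paper's dataset, and the pooled fit shown here is a simplification: the paper's random-effects panel specification would typically be estimated with dedicated panel tools (e.g., Stata's xtnbreg or R's glmmTMB).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: publication counts with illustrative covariates.
# All names below are assumptions for the sketch, not the paper's variables.
rng = np.random.default_rng(0)
n_obs = 200
df = pd.DataFrame({
    "in_core_list": rng.integers(0, 2, n_obs),   # venue listed by CORE (0/1)
    "in_ccf_list": rng.integers(0, 2, n_obs),    # venue listed by CCF (0/1)
    "rd_spending": rng.normal(0.0, 1.0, n_obs),  # illustrative country-level control
})

# Simulate over-dispersed counts from a negative binomial so the
# sketch runs end to end; the mean depends on the covariates above.
mu = np.exp(0.5 + 0.8 * df["in_core_list"] + 0.6 * df["in_ccf_list"])
df["pub_count"] = rng.negative_binomial(n=2, p=2.0 / (2.0 + mu))

# Pooled negative binomial regression (a simplification of the paper's
# random-effects panel model).
X = sm.add_constant(df[["in_core_list", "in_ccf_list", "rd_spending"]])
result = sm.NegativeBinomial(df["pub_count"], X).fit(disp=False)
print(result.summary())
```

The negative binomial family is the natural choice here because publication counts are typically over-dispersed (variance exceeding the mean), which a Poisson model would understate.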
Acknowledgements
This work was partially supported by the State Key Laboratory of Software Development Environment of China (No. SKLSDE-2017ZX-15), the National Social Science Foundation of China (No. 13&ZD190), and a Royal Society-Newton Advanced Fellowship Award.
Cite this article
Li, X., Rong, W., Shi, H. et al. The impact of conference ranking systems in computer science: a comparative regression analysis. Scientometrics 116, 879–907 (2018). https://doi.org/10.1007/s11192-018-2763-1