Abstract
Scientific journals receive an increasing number of submissions, and many of them are desk rejected without receiving detailed feedback from reviewers. In fact, the number of desk rejections has risen dramatically in the last decade. In this paper, we contribute to the literature by examining an editor’s incentives either to issue a desk decision based solely on their imperfect private information about the manuscript’s quality, or to send the paper to external peer reviewers, who can better reveal that quality. In our model, without external review, the journal editor receives an informative but imperfect signal of the manuscript’s quality. We focus on the case in which editors may differ in their decision accuracy, and highlight that even an editor with the best expertise and the best intentions may be unable to reach a perfect assessment of the manuscript’s quality. In our baseline model, the journal editor is not driven by financial interests but is nevertheless impurely altruistic, in that the editor has a reputation consideration that may be tied to authors’ observational learning of the editor’s decision pathway (i.e., the process by which the eventual editorial decision is reached, which may or may not involve external peer review). Also, the editor in our setting is imperfectly informed about the manuscript’s quality unless they send the manuscript out to review. Our paper shows that high-ability editors tend to send fewer papers to external review than they should, as a way to signal their ability. This is because external peer review and the editor’s decision expertise may substitute for each other.
References
Arinaminpathy, N., Deo, S., Singh, S., et al. (2019). Modelling the impact of effective private provider engagement on tuberculosis control in urban India. Scientific Reports, 9(1), 3810.
Arora, A., & Fosfuri, A. (2005). Pricing diagnostic information. Management Science, 51(7), 1092–1100.
Ayabakan, S., Bardhan, I. R., Zheng, Z., & Kirksey, K. (2017). The impact of health information sharing on duplicate testing. MIS Quarterly, 41(4), 1083–1103.
Azar, O. H. (2007). The slowdown in first-response times of economics Journals: Can it be beneficial? Economic Inquiry, 45(1), 179–187.
Bohannon, J. (2013). Who’s afraid of peer review? Science, 342(6154), 60–65. https://doi.org/10.1126/science.342.6154.60.
Bornmann, L. (2008). Scientific peer review: An analysis of the peer review process from the perspective of sociology of science theories. Human Architecture: Journal of the Sociology of Self-Knowledge, 6(2), 23–38.
Bornmann, L. (2011). Scientific peer review. Annual Review of Information Science and Technology, 45(1), 197–245.
Chamorro-Padial, J., Rodriguez-Sanchez, R., Fdez-Valdivia, J., & Garcia, J. A. (2019). An evolutionary explanation of assassins and zealots in peer review. Scientometrics, 120, 1373–1385. https://doi.org/10.1007/s11192-019-03171-3.
Clark, J., & Smith, R. (2015). Firm action needed on predatory journals. BMJ, 350, h210. https://doi.org/10.1136/bmj.h210.
Dai, T., & Singh, S. (2020). Conspicuous by its absence: Diagnostic expert testing under uncertainty. Marketing Science. https://doi.org/10.1287/mksc.2019.1201.
Dai, T., Wang, X., & Hwang, C. (2019). Clinical ambiguity and conflicts of interest in interventional cardiology decision-making. Johns Hopkins University Working Paper.
Davis, P. (2009). Open access publisher accepts nonsense manuscript for dollars. Scholarly Kitchen. Retrieved from http://scholarlykitchen.sspnet.org/2009/06/10/nonsense-for-dollars.
Doyle, J. J., Ewer, S. M., & Wagner, T. H. (2010). Returns to physician human capital: Evidence from patients randomized to physician teams. Journal of Health Economics, 29(6), 866–882.
Ellison, G. (2002). The slowdown of the economics publishing process. Journal of Political Economy, 110(5), 947–993.
Eriksson, S., & Helgesson, G. (2017). The false academy: Predatory publishing in science and bioethics. Medicine, Health Care and Philosophy, 20(2), 163–170. https://doi.org/10.1007/s11019-016-9740-3.
Garcia, J. A., Rodriguez-Sanchez, R., & Fdez-Valdivia, J. (2015). The author-editor game. Scientometrics, 104, 361–380. https://doi.org/10.1007/s11192-015-1566-x.
Garcia, J. A., Rodriguez-Sanchez, R., & Fdez-Valdivia, J. (2020a). Confirmatory bias in peer review. Scientometrics, 123, 517–533. https://doi.org/10.1007/s11192-020-03357-0.
Garcia, J. A., Rodriguez-Sanchez, R., & Fdez-Valdivia, J. (2020b). The author-reviewer game. Scientometrics, 124, 2409–2431. https://doi.org/10.1007/s11192-020-03559-6.
Grant, B. (2009). Elsevier published 6 fake journals. The Scientist. Retrieved from http://www.the-scientist.com/?articles.view/articleNo/27383/title/Elsevier-published-6-fake-journals/
Habibzadeh, F., & Simundic, A. M. (2017). Predatory journals and their effects on scientific research community. Biochemia Medica, 27(2), 270–272. https://doi.org/10.11613/BM.2017.028.
Harzing, A.-W. (2020). How to avoid a desk-reject in seven steps [1/8]. Harzing.com, Research in International Management. https://harzing.com/blog/2020/05/how-to-avoid-a-desk-reject-in-seven-steps.
Huisman, J., & Smits, J. (2017). Duration and quality of the peer review process: The author’s perspective. Scientometrics, 113, 633–650. https://doi.org/10.1007/s11192-017-2310-5.
Jiang, B., Ni, J., & Srinivasan, K. (2014). Signaling through pricing by service providers with social preferences. Marketing Science, 33(5), 641–654.
Laine, C., & Winker, M. A. (2017). Identifying predatory or pseudo-journals. Biochemia Medica, 27(2), 285–291. https://doi.org/10.11613/BM.2017.031.
Miklos-Thal, J., & Zhang, J. (2013). (De)marketing to manage consumer quality inferences. Journal of Marketing Research, 50(1), 55–69.
Research Information Network. (2008). Activities, costs and funding flows in the scholarly communications system in the UK. Retrieved from http://www.rin.ac.uk/our-work/communicating-and-disseminating-research/activitiescosts-and-funding-flows-scholarly-commu
Rosenbaum, L. (2017). The less-is-more crusade—Are we overmedicalizing or oversimplifying? The New England Journal of Medicine, 377(24), 2392–2397.
Ross-White, A., Godfrey, C. M., Sears, K. A., & Wilson, R. (2019). Predatory publications in evidence syntheses. Journal of the Medical Library Association: JMLA, 107(1), 57–61. https://doi.org/10.5195/jmla.2019.491.
Sarvary, M. (2002). Temporal differentiation and the market for second opinions. Journal of Marketing Research, 39(1), 129–136.
Shen, C., & Björk, B.-C. (2015). ‘Predatory’ open access: A longitudinal study of article volumes and market characteristics. BMC Medicine, 13, 230. https://doi.org/10.1186/s12916-015-0469-2.
Shumsky, R. A., & Pinker, E. J. (2003). Gatekeepers and referrals in services. Management Science, 49(7), 839–856.
Silver, D. (2016). Haste or waste? Peer pressure and the distribution of marginal returns to health care. Princeton University Working Paper. https://www.ucl.ac.uk/economics/sites/economics/files/jmp-david-silver.pdf
Wallace, J. (2012). PEER project: Final report. Retrieved from http://www.peerproject.eu/reports/
Ware, M., & Mabe, M. (2015). The STM report: An overview of scientific and scholarly journal publishing. The Hague: International Association of Scientific, Technical and Medical Publishers. http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf
Acknowledgements
This research was sponsored by the Spanish Board for Science, Technology, and Innovation under grant PID2020-112579GB-I00, and co-financed with European FEDER funds. We would like to thank the reviewers for their thoughtful comments and efforts towards improving our manuscript.
Appendices
Appendix A: Definitions for the benchmark scenario: the full-information case in which the editor’s type is common knowledge
Firstly, we consider the case in which the type-e editor receives a private signal \(s_e = 0\), for \(e = h, l\). In this setting, the editor compares the journal’s utility from three possible manuscript-decision pathways, that is, (1) \(t = 0, a = 1\), (2) \(t = 1\), and (3) \(t = 0, a = 0\), before issuing their editorial decision. The journal’s utility for the three possible manuscript-decision pathways is as follows:
$$U_J(t = 0, a = 1 \mid s_e) = b_e(\alpha | s_e)\, B - \left[ 1 - b_e(\alpha | s_e)\right] d,$$
$$U_J(t = 1 \mid s_e) = b_e(\alpha | s_e)\, B - c,$$
$$U_J(t = 0, a = 0 \mid s_e) = -\, b_e(\alpha | s_e)\, D,$$
where \(b_e (\alpha | s_e)\) is the type-e editor’s posterior belief that the manuscript’s quality is acceptable (\(\theta = 1\)), given the editor’s private signal \(s_e\); by Bayes’ rule, \(b_e(\alpha | s_e = 0) = \frac{\alpha (1-\rho _e)}{\alpha (1-\rho _e) + (1-\alpha )\rho _e}\).
Following Dai and Singh (2020), a comparison of the journal’s expected utility corresponding to the three possible manuscript decision pathways reveals:
(i) the editor does not send the paper to external peer reviewers and issues a desk-accept (\(t = 0\) and \(a = 1\)) if \(\alpha > {\overline{\alpha }}^e_{0}\), where \({\overline{\alpha }}^e_{0} = \frac{\rho _e(d-c)}{\rho _e(d-c) + (1-\rho _e) c}\);
(ii) the editor does not send the paper to external reviewers and issues a desk-reject (\(t = 0\) and \(a = 0\)) if \(\alpha \le {\underline{\alpha }}^e_{0}\), where \({\underline{\alpha }}^e_{0} = \frac{\rho _e c}{\rho _e c + (1-\rho _e)(B +D - c)}\); and
(iii) otherwise, the editor sends the manuscript out to external review (\(t = 1\)).
The proof for the case in which the editor receives a positive private signal \(s_e = 1\) proceeds in the same manner. The corresponding thresholds are \({\overline{\alpha }}^e_{1} = \frac{(1-\rho _e)(d-c)}{(1-\rho _e)(d-c) + \rho _e c}\) and \({\underline{\alpha }}^e_{1} = \frac{(1-\rho _e)c}{(1-\rho _e)c + \rho _e (B +D - c)}\).
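The decision rule above can be checked numerically. The following sketch is our own illustrative construction (not code from Dai and Singh 2020); the parameter names `rho_e`, `B`, `D`, `d`, and `c`, and the hypothetical example values, follow the appendix notation for the negative-signal case \(s_e = 0\).

```python
# Illustrative sketch of the type-e editor's decision rule for a
# negative private signal (s_e = 0), using the appendix notation:
#   rho_e : accuracy of the editor's private signal
#   B     : journal's payoff from accepting an acceptable manuscript
#   D     : journal's loss from rejecting an acceptable manuscript
#   d     : journal's loss from accepting an unacceptable manuscript
#   c     : cost of external peer review

def thresholds_s0(rho_e, B, D, d, c):
    """Return (alpha_low, alpha_high) for a negative signal s_e = 0."""
    alpha_high = rho_e * (d - c) / (rho_e * (d - c) + (1 - rho_e) * c)
    alpha_low = rho_e * c / (rho_e * c + (1 - rho_e) * (B + D - c))
    return alpha_low, alpha_high

def desk_decision_s0(alpha, rho_e, B, D, d, c):
    """Editor's choice given the prior alpha that the manuscript is acceptable."""
    alpha_low, alpha_high = thresholds_s0(rho_e, B, D, d, c)
    if alpha > alpha_high:
        return "desk-accept"      # t = 0, a = 1
    if alpha <= alpha_low:
        return "desk-reject"      # t = 0, a = 0
    return "external review"      # t = 1

# Hypothetical example: a fairly accurate editor and a moderate review cost.
print(desk_decision_s0(alpha=0.5, rho_e=0.8, B=10, D=6, d=8, c=2))
# prints "external review" (here alpha_low ≈ 0.364 and alpha_high ≈ 0.923)
```

For intermediate priors the editor prefers to pay \(c\) for a fully revealing external review; only for sufficiently extreme priors does the imperfect private signal suffice for a desk decision.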
Appendix B: Definitions for a real-life scenario
We define an upper bound \({\overline{B}}\) (derived in Dai and Singh 2020), beyond which the separating equilibrium would not exist, as
where \(w=(1-\phi )/\phi\) is the relative weight the editor puts on their own reputational payoff compared to the journal’s utility, with \(\phi \in [0,1]\).
A lower bound \({\underline{B}}\) (also derived in Dai and Singh 2020) exists such that if \(B < {\underline{B}}\), the low-ability editor’s expected payoff from sending the paper to external peer reviewers would be so low that they would have no incentive to send the manuscript out to external review; that is:
A unique separating equilibrium exists if and only if \(\alpha \in [{\underline{\alpha }},{\overline{\alpha }}]\) where
and
with
The proof of this result is similar to that of Proposition 3 in Dai and Singh (2020) and is therefore omitted for brevity.
Appendix C: Could a peer-reviewed journal receive an even lower expected utility from assigning a high-ability journal editor as opposed to a low-ability one?
Following Dai and Singh (2020), we compare the journal’s expected utility from assigning each type of editor in the separating equilibrium (i.e., either a high-ability editor who does not send the paper to external peer reviewers and issues a desk-decision consistent with their private signal, or a low-ability editor who sends the manuscript out to external review regardless of their private signal). We have derived the journal’s expected utility in Sect. 2.
Consider an academic journal in which the manuscript decision is issued by a low-ability editor. In this case, the editor sends the manuscript out to external review. If the manuscript’s quality is acceptable (\(\theta =1\)), which happens with probability \(\alpha\), the journal’s utility is \(B-c\). However, if the manuscript’s quality is unacceptable (\(\theta =0\)), which happens with probability \(1-\alpha\), the journal’s utility is \(-c\). Hence, the journal’s expected utility is \(U_l = \alpha B -c\) when the manuscript decision happens to be issued by a low-ability editor.
Now suppose the manuscript decision is issued by a high-ability editor. In the separating equilibrium of the real-life case, the high-ability editor does not send the paper to external peer reviewers and issues a desk-decision consistent with their private signal. Therefore, if the manuscript’s quality is acceptable (\(\theta =1\)), which happens with probability \(\alpha\), the journal’s utility is \(\rho _h B + (1-\rho _h)(-D)\). However, if the manuscript’s quality is unacceptable (\(\theta =0\)), which happens with probability \(1-\alpha\), the journal’s utility is \((1-\rho _h)(-d)\). Thus, the journal’s expected utility is \(U_h = \alpha [\rho _h B - (1-\rho _h)D] - (1-\alpha )(1-\rho _h)d\) when the manuscript decision happens to be issued by a high-ability editor.
By comparing the journal’s expected utilities \(U_l\) and \(U_h\), we find that the journal has a lower expected utility when the manuscript decision happens to be issued by a high-ability editor than when it is issued by a low-ability editor, i.e., \(U_h < U_l\), if and only if the cost of the external review \(c\) is low enough:
$$c < (1-\rho _h)\left[ \alpha (B+D) + (1-\alpha ) d \right] .$$
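This comparison can be verified numerically. The sketch below is our own (with hypothetical parameter values); it computes \(U_l\) and \(U_h\) exactly as defined above and confirms that \(U_h < U_l\) precisely when \(c\) falls below the bound \((1-\rho_h)[\alpha(B+D)+(1-\alpha)d]\), which follows by rearranging \(U_h < U_l\).

```python
# Numerical check of the comparison in Appendix C (our own sketch, with
# hypothetical parameter values).  U_l and U_h follow the definitions in
# the text:
#   U_l = alpha * B - c                                     (low-ability editor)
#   U_h = alpha*(rho_h*B - (1-rho_h)*D) - (1-alpha)*(1-rho_h)*d   (high-ability)

def journal_utilities(alpha, rho_h, B, D, d, c):
    U_l = alpha * B - c
    U_h = alpha * (rho_h * B - (1 - rho_h) * D) - (1 - alpha) * (1 - rho_h) * d
    return U_l, U_h

def high_ability_worse(alpha, rho_h, B, D, d, c):
    """True iff the journal is worse off with the high-ability editor."""
    U_l, U_h = journal_utilities(alpha, rho_h, B, D, d, c)
    return U_h < U_l

# Rearranging U_h < U_l gives c < (1 - rho_h) * (alpha*(B + D) + (1 - alpha)*d).
alpha, rho_h, B, D, d = 0.6, 0.9, 10, 6, 8
c_bound = (1 - rho_h) * (alpha * (B + D) + (1 - alpha) * d)
print(high_ability_worse(alpha, rho_h, B, D, d, c=c_bound / 2))  # True: cheap review
print(high_ability_worse(alpha, rho_h, B, D, d, c=2 * c_bound))  # False: costly review
```

Intuitively, when external review is cheap, the low-ability editor’s strategy of always buying a fully revealing review dominates the high-ability editor’s reliance on an imperfect private signal.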
Cite this article
Garcia, J.A., Rodriguez-Sánchez, R. & Fdez-Valdivia, J. The editor-manuscript game. Scientometrics 126, 4277–4295 (2021). https://doi.org/10.1007/s11192-021-03918-x