Confirmatory bias in peer review

Abstract

A reduction in a reviewer’s recommendation quality may be caused by limited time or by the cognitive overload that comes from redundancy, contradiction, and inconsistency in the research under review. Adaptive mechanisms used by reviewers facing information overload may include chunking single pieces of manuscript information into generic terms, unsystematically omitting research details, queuing information processing, and prematurely stopping the manuscript evaluation. How, then, would a reviewer optimize attention to the positive and negative attributes of a manuscript before making a recommendation? How do a reviewer’s characteristics, such as her prior belief about the manuscript quality and her manuscript evaluation cost, affect her attention allocation and final recommendation? To answer these questions, we use a probabilistic model in which a reviewer chooses the optimal evaluation strategy by trading off the value and cost of review information about the manuscript quality. We find that a reviewer may exhibit confirmatory behavior, under which she pays more attention to the type of manuscript attributes that favor her prior belief about the manuscript quality. Confirmatory bias can thus be an optimal behavior for reviewers who optimize attention to positive and negative manuscript attributes under information overload. We also show that the reviewer’s manuscript evaluation cost plays a key role in determining whether she exhibits confirmatory bias. Moreover, when the reviewer’s prior belief about the manuscript quality is low enough, the probability of obtaining a positive review signal decreases with the reviewer’s manuscript evaluation cost, for a sufficiently high cost.


Acknowledgements

This research was sponsored by the Spanish Board for Science, Technology, and Innovation under Grant TIN2017-85542-P, and co-financed with European FEDER funds. Sincere thanks are due to the reviewers for their constructive suggestions.

Author information

Corresponding author

Correspondence to J. A. Garcia.

Appendices

Appendix 1: Confirmatory bias in peer review

To find the optimal manuscript evaluation strategy, given by \(\delta _0\) and \(\delta _1\), the reviewer should trade off the value and cost of review information about the manuscript quality. The optimal evaluation strategy chosen by the reviewer therefore maximizes the expected utility of learning about the research quality:

$$\begin{aligned} \sup _{\delta _0, \delta _1} [ (\text{Value of review }) - (\text{Cost of review }) ] \end{aligned}$$

subject to \(\delta _0 + \delta _1 >1\), and where

$$\begin{aligned} \text{Value of review } = \Pr (S=1) \left[ U_1 \cdot \Pr ( X=1 \ | \ S=1) + U_0 \cdot \Pr ( X=0 \ | \ S=1) \right] \end{aligned}$$

and

$$\begin{aligned} \text{Cost of review } = \lambda I(X,S) \end{aligned}$$

Here, the value of review satisfies:

$$\begin{aligned} \text{Value of review }&= \Pr (S=1) \Pr ( X=1 \ | \ S=1) \ U_1 + \Pr (S=1) \Pr ( X=0 \ | \ S=1) \ U_0 \\ &= q \delta _1 U_1 + (1-q) (1- \delta _0)U_0 \end{aligned}$$

since (by Bayes’ theorem)

$$\begin{aligned} \Pr (S=1) \Pr ( X=1 \ | \ S=1) = \Pr (X=1) \Pr ( S=1 \ | \ X=1) = q \delta _1 \end{aligned}$$

and

$$\begin{aligned} \Pr (S=1) \Pr ( X=0 \ | \ S=1) = \Pr (X=0) \Pr ( S=1 \ | \ X=0) = (1-q) (1- \delta _0) \end{aligned}$$

where \(q = \Pr ( X=1)\), \(\delta _0 = \Pr ( S=0 \ | \ X=0)\), and \(\delta _1 = \Pr ( S=1 \ | \ X=1)\). Therefore, it follows that the expected value of review information increases as the review signal becomes more accurate (i.e., with both \(\delta _0\) and \(\delta _1\)) since

$$\begin{aligned} \frac{d (\text{Value of review })}{d \delta _1} = q U_1 > 0 \end{aligned}$$

and

$$\begin{aligned} \frac{d (\text{Value of review })}{d \delta _0} = - U_0 (1-q) > 0 \end{aligned}$$
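As a quick numerical illustration, the Python sketch below checks these two derivatives against finite differences; the values of \(q\), \(U_1\), \(U_0\), \(\delta _0\), and \(\delta _1\) are arbitrary examples, not taken from the paper.

```python
# Minimal sketch (illustrative values only): the value of review increases in both
# delta_0 and delta_1, matching the analytic derivatives q*U_1 and -U_0*(1-q).

def value_of_review(q, d0, d1, U1, U0):
    # Value of review = q*delta_1*U_1 + (1-q)*(1-delta_0)*U_0
    return q * d1 * U1 + (1 - q) * (1 - d0) * U0

q, U1, U0 = 0.4, 2.0, -3.0   # assumed prior and utilities (U_0 < 0 < U_1)
d0, d1 = 0.7, 0.6            # assumed signal accuracies
eps = 1e-6

fd_d1 = (value_of_review(q, d0, d1 + eps, U1, U0) - value_of_review(q, d0, d1, U1, U0)) / eps
fd_d0 = (value_of_review(q, d0 + eps, d1, U1, U0) - value_of_review(q, d0, d1, U1, U0)) / eps

print(fd_d1, q * U1)          # both about 0.8 (> 0)
print(fd_d0, -U0 * (1 - q))   # both about 1.8 (> 0)
```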

On the other hand, the Shannon mutual information (Cover and Thomas 2006) is given by:

$$\begin{aligned} I(X,S) = H(S) - H(S|X) \end{aligned}$$

where

$$\begin{aligned} H(S) = - \sum _s p(s) \log (p(s)) \end{aligned}$$

and

$$\begin{aligned} H(S|X) = - \sum _x p(x) \sum _s p(s|x) \log (p(s|x)) \end{aligned}$$

Therefore, it follows that

$$\begin{aligned} I(X,S)&= - [ q \delta _1 + (1-q) (1- \delta _0) ] \log [ q \delta _1 + (1-q) (1- \delta _0) ] \\&- [ q (1- \delta _1) + (1-q) \delta _0 ] \log [ q (1- \delta _1) + (1-q) \delta _0 ] \\&+ q [ \delta _1 \log \delta _1 + (1- \delta _1) \log (1- \delta _1) ] \\&+ (1-q) [ \delta _0 \log \delta _0 + (1- \delta _0) \log (1- \delta _0) ] \end{aligned}$$
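To make the trade-off concrete, the Python sketch below evaluates the objective \((\text{Value of review}) - \lambda \, I(X,S)\) on a grid over \((\delta _0, \delta _1)\) subject to \(\delta _0 + \delta _1 > 1\) and reports the best grid point; the parameter values \(q\), \(U_1\), \(U_0\), and \(\lambda\) are illustrative assumptions, not taken from the paper.

```python
import math

# Illustrative brute-force search for the reviewer's optimal (delta_0, delta_1).

def binary_entropy(p):
    # Entropy of a Bernoulli(p) variable, with the convention 0*log(0) = 0.
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * math.log(p) - (1 - p) * math.log(1 - p)

def mutual_information(q, d0, d1):
    # I(X,S) = H(S) - H(S|X) for the binary signal structure above.
    p_s1 = q * d1 + (1 - q) * (1 - d0)                       # Pr(S = 1)
    return binary_entropy(p_s1) - q * binary_entropy(d1) - (1 - q) * binary_entropy(d0)

def objective(q, d0, d1, U1, U0, lam):
    value = q * d1 * U1 + (1 - q) * (1 - d0) * U0            # value of review
    return value - lam * mutual_information(q, d0, d1)       # minus cost of review

q, U1, U0, lam = 0.4, 2.0, -3.0, 1.0                         # assumed values
grid = [i / 200 for i in range(1, 200)]
best = max(((objective(q, d0, d1, U1, U0, lam), d0, d1)
            for d0 in grid for d1 in grid if d0 + d1 > 1), key=lambda t: t[0])
print(best)   # best objective value and the corresponding (delta_0, delta_1)
```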

Following Jerath and Ren (2018), the objective function of this optimization problem is strictly concave. Therefore, the optimal solution is obtained from the first order condition of the optimization problem (the condition that the first derivative equals zero at the optimum).

Then, the value \(\delta _0^*\) of \(\delta _0\) that maximizes the objective function must satisfy

$$\begin{aligned} 0= \frac{d [(\text{Value of review }) - (\text{Cost of review }) ]}{d \delta _0} \end{aligned}$$

or equivalently

$$\begin{aligned} e^{ \frac{-U_0}{\lambda }} = \frac{1- q + q \frac{\delta _1}{1-\delta _0}}{1- q + q \frac{1-\delta _1}{\delta _0}}. \end{aligned}$$

Similarly, the value \(\delta _1^*\) of \(\delta _1\) that maximizes the objective function must satisfy

$$\begin{aligned} 0= \frac{d [ (\text{Value of review }) - (\text{Cost of review }) ]}{d \delta _1} \end{aligned}$$

or equivalently

$$\begin{aligned} e^{ \frac{U_1}{\lambda }} = \frac{q + (1- q) \frac{\delta _0}{1-\delta _1}}{q + (1- q) \frac{1-\delta _0}{\delta _1}}. \end{aligned}$$

Solving these two equations, we get

$$\begin{aligned} \delta _0^* = \frac{1}{1-e^{-\frac{U_1-U_0}{\lambda }} } \left[ 1- \frac{q}{1-q} \frac{e^{\frac{U_1-U_0}{\lambda }} - e^{\frac{-U_0}{\lambda }} }{e^{\frac{U_1-U_0}{\lambda }} \left( e^{\frac{-U_0}{\lambda }} -1 \right) } \right] \end{aligned}$$

and

$$\begin{aligned} \delta _1^* = \frac{1}{1-e^{-\frac{U_1-U_0}{\lambda }} } \left[ 1- \frac{1-q}{q} \frac{e^{\frac{-U_0}{\lambda }} - 1}{e^{\frac{U_1-U_0}{\lambda }} - e^{\frac{-U_0}{\lambda }} } \right] \end{aligned}$$
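As a sanity check, the Python sketch below (again with illustrative parameter values) plugs these closed-form expressions back into the two first order conditions above and verifies that both hold numerically.

```python
import math

# Check (illustrative values only) that the closed-form delta_0*, delta_1*
# satisfy the two first order conditions derived above.

q, U1, U0, lam = 0.4, 2.0, -3.0, 1.0          # assumed values
k, l = -U0 / lam, (U1 - U0) / lam             # k = -U_0/lambda, l = (U_1 - U_0)/lambda

d0 = (1 / (1 - math.exp(-l))) * (
    1 - (q / (1 - q)) * (math.exp(l) - math.exp(k)) / (math.exp(l) * (math.exp(k) - 1)))
d1 = (1 / (1 - math.exp(-l))) * (
    1 - ((1 - q) / q) * (math.exp(k) - 1) / (math.exp(l) - math.exp(k)))

# First order condition for delta_0: exp(-U_0/lambda) equals the ratio below.
foc0 = math.exp(-U0 / lam) - (1 - q + q * d1 / (1 - d0)) / (1 - q + q * (1 - d1) / d0)
# First order condition for delta_1: exp(U_1/lambda) equals the ratio below.
foc1 = math.exp(U1 / lam) - (q + (1 - q) * d0 / (1 - d1)) / (q + (1 - q) * (1 - d0) / d1)

print(d0, d1, d0 + d1 > 1)   # both in (0, 1), with delta_0 + delta_1 > 1
print(foc0, foc1)            # both residuals are numerically zero
```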

Then, taking into account the constraints \(\delta _0^* + \delta _1^* >1\), \(0< \delta _1^* < 1\), and \(0< \delta _0^* < 1\), it follows that

$$\begin{aligned} \frac{e^{\frac{-U_0}{\lambda }} - 1}{e^{\frac{U_1-U_0}{\lambda }} - 1 }< q < \frac{ 1 - e^{\frac{U_0}{\lambda }} }{1- e^{-\frac{U_1-U_0}{\lambda }} } \end{aligned}$$

and

$$\begin{aligned} \Pr (S=1)&= \Pr (S=1) \left[ \Pr ( X=1 \ | \ S=1) + \Pr ( X=0 \ | \ S=1) \right] \\&= \Pr (S=1) \Pr ( X=1 \ | \ S=1) + \Pr (S=1) \Pr ( X=0 \ | \ S=1) \\&= q \delta _1^* + (1-q) (1- \delta _0^*) \\&= \frac{q}{1- e^{-\frac{-U_0 }{\lambda }}} - \frac{1-q}{e^{\frac{U_1 }{\lambda }}-1} \\&= \frac{q}{1- e^{-k}} + \frac{1-q}{1- e^{l-k}} \end{aligned}$$

where \(k = \frac{-U_0 }{\lambda }\), \(l = \frac{U_1-U_0 }{\lambda }\).
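The last equality can be checked numerically; the Python sketch below (illustrative parameter values only) compares \(q \delta _1^* + (1-q)(1-\delta _0^*)\) with the closed-form expression in terms of \(k\) and \(l\).

```python
import math

# Check (illustrative values only) that q*delta_1* + (1-q)*(1-delta_0*) equals
# q/(1 - e^{-k}) + (1-q)/(1 - e^{l-k}) with k = -U_0/lambda, l = (U_1 - U_0)/lambda.

q, U1, U0, lam = 0.4, 2.0, -3.0, 1.0          # assumed values
k, l = -U0 / lam, (U1 - U0) / lam

d0 = (1 / (1 - math.exp(-l))) * (
    1 - (q / (1 - q)) * (math.exp(l) - math.exp(k)) / (math.exp(l) * (math.exp(k) - 1)))
d1 = (1 / (1 - math.exp(-l))) * (
    1 - ((1 - q) / q) * (math.exp(k) - 1) / (math.exp(l) - math.exp(k)))

direct = q * d1 + (1 - q) * (1 - d0)
closed = q / (1 - math.exp(-k)) + (1 - q) / (1 - math.exp(l - k))
print(direct, closed)   # both about 0.327 for these parameter values
```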

Appendix 2: Probability of positive review signal

From “Appendix 1”, we have that

$$\begin{aligned} \frac{d \Pr (S=1) }{d \lambda } = \frac{1}{\lambda } \left[ q \frac{- U_0 e^{U_0/\lambda }}{\lambda (e^{U_0/\lambda } -1)^2} - (1-q) \frac{U_1 e^{U_1/\lambda }}{\lambda (e^{U_1/\lambda } -1)^2} \right] \end{aligned}$$

Following Jerath and Ren (2018), and denoting

$$\begin{aligned} \sigma = \frac{ \frac{U_1 e^{U_1/\lambda }}{\lambda (e^{U_1/\lambda } -1)^2}}{ \frac{- U_0 e^{U_0/\lambda }}{\lambda (e^{U_0/\lambda } -1)^2}} \end{aligned}$$

it follows that if \(q/(1-q) > \sigma\), then \(\frac{d \Pr (S=1) }{d \lambda } >0\) and \(\Pr (S=1)\) increases with evaluation cost \(\lambda\); otherwise, if \(q/(1-q) < \sigma\), then \(\frac{d \Pr (S=1) }{d \lambda } <0\) and \(\Pr (S=1)\) decreases with evaluation cost \(\lambda\).

We have that \(\sigma \rightarrow \frac{-U_0}{U_1}\) as \(\lambda \rightarrow \infty\). Moreover, if \(U_1 > |U_0|\) then \(\sigma\) increases with \(\lambda\); similarly, if \(U_1 < |U_0|\) then \(\sigma\) decreases with \(\lambda\).

Therefore, when the gain of accepting a quality manuscript is larger than the loss of accepting a poor-quality work, \(U_1 > | U_0 |\), if \(q/(1-q) < \frac{-U_0}{U_1}\) (or equivalently, \(q < \frac{- U_0}{U_1 - U_0}\)), then there exists \(\hat{\lambda }\) such that for \(\lambda > \hat{\lambda }\) (i.e., when the evaluation cost is high enough) we have \(\frac{d \Pr (S=1) }{d \lambda } <0\), and \(\Pr (S=1)\) decreases with the evaluation cost \(\lambda\).

Furthermore, when the loss of accepting a poor-quality work is larger than the gain of accepting a quality manuscript, \(U_1 < | U_0 |\), if \(q/(1-q) < \frac{-U_0}{U_1}\) (or equivalently, \(q < \frac{- U_0}{U_1 - U_0}\)), then it must always be that \(q/(1-q) < \sigma\), which implies that \(\frac{d \Pr (S=1) }{d \lambda } <0\); thus the positive review probability \(\Pr (S=1)\) always decreases with the manuscript evaluation cost \(\lambda\).
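To illustrate the first of these two cases numerically, the Python sketch below (with illustrative values of \(U_1\), \(U_0\), and \(q\) that are not taken from the paper) evaluates \(\sigma\) and \(\frac{d \Pr (S=1)}{d \lambda }\) for \(U_1 > |U_0|\) and \(q < \frac{-U_0}{U_1-U_0}\), showing how the sign of the derivative flips once \(\lambda\) is large enough.

```python
import math

# Illustrative values with U_1 > |U_0| and q < -U_0/(U_1 - U_0) = 0.4.
U1, U0, q = 3.0, -2.0, 0.3

def block(u, lam):
    # u * e^{u/lambda} / (lambda * (e^{u/lambda} - 1)^2), the building block of sigma
    x = math.exp(u / lam)
    return u * x / (lam * (x - 1) ** 2)

def sigma(lam):
    return block(U1, lam) / (-block(U0, lam))

def dPr_dlam(lam):
    # Derivative of Pr(S=1) with respect to lambda, as derived above.
    return (1 / lam) * (q * (-block(U0, lam)) - (1 - q) * block(U1, lam))

for lam in (0.5, 1.0, 2.0, 4.0):
    print(lam, q / (1 - q) < sigma(lam), dPr_dlam(lam))
# For small lambda, q/(1-q) > sigma and the derivative is positive; past a
# threshold (here between 0.5 and 1), q/(1-q) < sigma and Pr(S=1) decreases
# with the evaluation cost lambda.
```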

Cite this article

Garcia, J.A., Rodriguez-Sánchez, R. & Fdez-Valdivia, J. Confirmatory bias in peer review. Scientometrics 123, 517–533 (2020). https://doi.org/10.1007/s11192-020-03357-0
