
Do applied statisticians prefer more randomness or less? Bootstrap or Jackknife?

  • Original Paper
  • Published in Statistics and Computing

Abstract

Bootstrap and Jackknife estimates of a population parameter \(\theta\), \(T_{n,B}^*\) and \(T_{n,J}\) respectively, are both used in statistical computations; \(n\) is the sample size and \(B\) is the number of Bootstrap samples. For any \(n_0\) and \(B_0\), Bootstrap samples add no new information about \(\theta\), since they consist of observations from the original sample, and when \(B_0<\infty\), \(T_{n_0,B_0}^*\) also carries resampling variability, an additional source of uncertainty that does not affect \(T_{n_0,J}\). Both facts are neglected in theoretical papers, whose results hold for the utopian \(T_{n,\infty}^*\) but not for \(B<\infty\). The consequence is that \(T_{n_0,B_0}^*\) is expected to have larger mean squared error (MSE) than \(T_{n_0,J}\); that is, \(T_{n_0,B_0}^*\) is inadmissible. The amount of inadmissibility may be very large when population parameters, e.g. the variance, are unbounded and/or with big data. A palliative remedy is to increase \(B\), the larger the better, but the MSE ordering remains unchanged for every \(B<\infty\). This is confirmed theoretically when \(\theta\) is the mean of a population, and is observed in the estimated total MSE for linear regression coefficients. In the latter case, the chance that the estimated total MSE with \(T_{n,B}^*\) improves on that with \(T_{n,J}\) decreases to 0 as \(B\) increases.
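The contrast between \(T_{n,B}^*\) and \(T_{n,J}\) can be illustrated with a small simulation. The following is a minimal Python sketch, not taken from the paper: it assumes a standard normal population (so the target \(\theta = 0\) is the mean), uses the sample-mean statistic, and the helper names bootstrap_mean and jackknife_mean are hypothetical. For this statistic the jackknife estimate coincides with the sample mean, while the finite-\(B\) bootstrap estimate carries extra resampling variability, so its Monte Carlo MSE is expected to be the larger of the two.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mean(x, B, rng):
    # T*_{n,B}: average of the means of B resamples drawn with replacement.
    n = len(x)
    return np.mean([rng.choice(x, size=n, replace=True).mean() for _ in range(B)])

def jackknife_mean(x):
    # T_{n,J}: average of the n leave-one-out sample means
    # (for the mean statistic this coincides with the sample mean).
    n = len(x)
    return np.mean((x.sum() - x) / (n - 1))

# Monte Carlo estimate of the two MSEs for theta = 0 (mean of N(0, 1)).
n, B, reps, theta = 30, 50, 2000, 0.0
boot_se, jack_se = [], []
for _ in range(reps):
    x = rng.standard_normal(n)
    boot_se.append((bootstrap_mean(x, B, rng) - theta) ** 2)
    jack_se.append((jackknife_mean(x) - theta) ** 2)

print("estimated MSE, bootstrap (B = 50):", np.mean(boot_se))
print("estimated MSE, jackknife:         ", np.mean(jack_se))
```

Increasing B in this sketch shrinks the gap, in line with "the larger the better" above, but for any finite B the bootstrap estimate retains the extra resampling term.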





Acknowledgements

Many thanks are due to Professor Ajay Jasra, Editor-in-Chief, the Associate Editor and a referee for useful suggestions that improved the presentation of this work. Many thanks are also due to a research assistant at Tsinghua University who helped with the simulations but preferred to remain anonymous.

Author information

Contributions

The manuscript was prepared by the author.

Corresponding author

Correspondence to Yannis G. Yatracos.

Ethics declarations

Competing interests

The author declares no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Part of this work was done when the author was at YMSC, Tsinghua University.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yatracos, Y.G. Do applied statisticians prefer more randomness or less? Bootstrap or Jackknife?. Stat Comput 34, 83 (2024). https://doi.org/10.1007/s11222-024-10388-7

