Abstract
Bootstrap and Jackknife estimates, \(T_{n,B}^*\) and \(T_{n,J}\), respectively, of a population parameter \(\theta \) are both used in statistical computations; \(n\) is the sample size and \(B\) is the number of Bootstrap samples. For any \(n_0\) and \(B_0\), Bootstrap samples add no new information about \(\theta \), since they consist of observations from the original sample, and when \(B_0<\infty \), \(T_{n_0,B_0}^*\) also includes resampling variability, an additional source of uncertainty that does not affect \(T_{n_0, J}\). These facts are neglected in theoretical papers whose results hold for the utopian \(T_{n, \infty }^*\) but not for \(B<\infty \). The consequence is that \(T^*_{n_0, B_0}\) is expected to have larger mean squared error (MSE) than \(T_{n_0,J}\); that is, \(T_{n_0,B_0}^*\) is inadmissible. The amount of inadmissibility can be very large when population parameters, e.g. the variance, are unbounded and/or with big data. A palliative remedy is to increase \(B\), the larger the better, but the ordering of the MSEs remains unchanged for every \(B<\infty \). This is confirmed theoretically when \(\theta \) is the mean of a population, and is observed in the estimated total MSE for linear regression coefficients; in the latter case, the probability that the estimated total MSE with \(T_{n,B}^*\) improves on that with \(T_{n,J}\) decreases to 0 as \(B\) increases.
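The contrast in the abstract can be illustrated with a minimal simulation sketch (function names and parameter values are illustrative, not the paper's): for \(\theta\) the population mean, the jackknife estimate of the standard error of the sample mean is a deterministic function of the data, whereas a bootstrap estimate based on a finite number \(B\) of resamples changes from one Monte Carlo run to the next, exhibiting the extra resampling variability discussed above.

```python
import random
import statistics


def jackknife_se_mean(x):
    # Jackknife estimate of the standard error of the sample mean.
    # It is computed from the n leave-one-out means, so it is
    # deterministic given the sample: no resampling randomness.
    n = len(x)
    total = sum(x)
    loo_means = [(total - xi) / (n - 1) for xi in x]
    m = sum(loo_means) / n
    return ((n - 1) / n * sum((lm - m) ** 2 for lm in loo_means)) ** 0.5


def bootstrap_se_mean(x, B, seed=None):
    # Bootstrap estimate of the same standard error with a finite
    # number B of resamples; different seeds give different values,
    # i.e. it carries additional Monte Carlo (resampling) variability.
    rng = random.Random(seed)
    n = len(x)
    boot_means = [statistics.fmean(rng.choices(x, k=n)) for _ in range(B)]
    return statistics.stdev(boot_means)


random.seed(0)
sample = [random.gauss(0, 1) for _ in range(50)]

jk = jackknife_se_mean(sample)            # identical on every run
bs1 = bootstrap_se_mean(sample, B=200, seed=1)
bs2 = bootstrap_se_mean(sample, B=200, seed=2)
# jk is fixed given the sample, while bs1 and bs2 differ:
# the finite-B bootstrap adds a source of uncertainty that the
# jackknife does not have.
```

For the mean, the jackknife standard error coincides with the classical estimate \(s/\sqrt{n}\); the sketch only visualizes the extra variability of the finite-\(B\) bootstrap, not the full MSE comparison developed in the paper.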
Acknowledgements
Many thanks are due to Professor Ajay Jasra, Editor-in-Chief, the Associate Editor and a referee for useful suggestions that improved the presentation of this work. Many thanks are also due to a research assistant at Tsinghua University who helped in the simulations but preferred to remain anonymous.
Contributions
The manuscript was prepared by the author.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Part of this work was done when the author was at YMSC, Tsinghua University.
About this article
Cite this article
Yatracos, Y.G. Do applied statisticians prefer more randomness or less? Bootstrap or Jackknife? Stat Comput 34, 83 (2024). https://doi.org/10.1007/s11222-024-10388-7