Bias Estimation for Neural Network Predictions

  • Conference paper
Artificial Neural Nets and Genetic Algorithms

Abstract

This paper looks at the problem of performance assessment when neural networks are used for classification tasks. It is well known that the predictions obtained from a trained neural network are subject to error, and that the estimated error rate itself suffers from both bias and variance.

In order to estimate these measures, it is customary to reserve some data as a test set. This is reasonable if data are plentiful, but when the data set is small it is likely to reduce the accuracy of the network's estimates, simply because there are not enough data left for adequate training. An alternative approach, which allows the use of all the data, is to employ the bootstrap method.

Here we give a brief introduction to the bootstrap, and then report on some computational experiments on artificial data sets in order to investigate the potential of this approach for the estimation of error bias.
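
To make the idea concrete, the sketch below (not taken from the paper) illustrates one standard way of using the bootstrap for this purpose, in the spirit of Efron's bias-corrected error estimate: the network is refitted on bootstrap resamples, and the average gap between each refitted network's error on the original sample and its error on the resample estimates the optimism of the apparent error rate. The small scikit-learn MLP and the synthetic two-class data are assumptions made purely for illustration, not the experimental set-up of the paper.

# Minimal sketch (illustrative, not the paper's code): bootstrap estimate
# of the optimism (bias) of a network's apparent error rate.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic two-class data, assumed here purely for illustration.
n = 100
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

def fit_net(X, y):
    # A small MLP stands in for whatever network is being assessed.
    return MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                         random_state=0).fit(X, y)

def error_rate(model, X, y):
    return np.mean(model.predict(X) != y)

# Apparent error: train and test on the same data (optimistically biased).
apparent = error_rate(fit_net(X, y), X, y)

# Bootstrap estimate of the optimism: average, over B resamples, of the
# difference between a resample-trained network's error on the original
# sample and its error on the resample itself.
B = 50
optimism = 0.0
for _ in range(B):
    idx = rng.integers(0, n, size=n)        # sample n cases with replacement
    net_b = fit_net(X[idx], y[idx])
    optimism += error_rate(net_b, X, y) - error_rate(net_b, X[idx], y[idx])
optimism /= B

print(f"apparent error rate : {apparent:.3f}")
print(f"estimated bias      : {optimism:.3f}")
print(f"corrected error rate: {apparent + optimism:.3f}")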

Copyright information

© 1995 Springer-Verlag/Wien

About this paper

Cite this paper

Reeves, C.R. (1995). Bias Estimation for Neural Network Predictions. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-7535-4_64

  • DOI: https://doi.org/10.1007/978-3-7091-7535-4_64

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-82692-8

  • Online ISBN: 978-3-7091-7535-4

  • eBook Packages: Springer Book Archive
