
Default Priors for Neural Network Classification

Published in: Journal of Classification

Abstract

Feedforward neural networks are a popular tool for classification, offering a method for fully flexible modeling. This paper examines the underlying probability model in order to understand, statistically, what is going on and thereby facilitate an intelligent choice of prior for a fully Bayesian analysis. The parameters turn out to be difficult or impossible to interpret, and yet a coherent prior requires a quantification of this inherent uncertainty. Several approaches are discussed, including flat priors, Jeffreys priors, and reference priors.
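To make the flat-prior case concrete, here is a minimal sketch (not taken from the paper; the network size, dataset, and sampler settings are all illustrative assumptions) of fully Bayesian classification with a one-hidden-layer feedforward network. Under a flat (improper) prior on the weights, the log-posterior reduces to the log-likelihood alone, so a simple random-walk Metropolis sampler can draw from the posterior and average its draws into a posterior-predictive classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two well-separated Gaussian blobs.
n = 60
X = np.vstack([rng.normal(-1.5, 0.5, (n, 2)), rng.normal(1.5, 0.5, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

H = 3                    # hidden units (tanh activation)
D = 2 * H + H + H + 1    # W1 (2xH) + b1 (H) + w2 (H) + b2  ->  13 parameters

def logits(theta, X):
    """Output logit of a one-hidden-layer network with flattened weights."""
    W1 = theta[:2 * H].reshape(2, H)
    b1 = theta[2 * H:3 * H]
    w2 = theta[3 * H:4 * H]
    b2 = theta[-1]
    return np.tanh(X @ W1 + b1) @ w2 + b2

def log_lik(theta):
    """Bernoulli log-likelihood, numerically stable via logaddexp."""
    z = logits(theta, X)
    return np.sum(y * z - np.logaddexp(0.0, z))

# Flat prior: log-posterior = log-likelihood + constant.
theta = np.zeros(D)
ll = log_lik(theta)
samples = []
for t in range(5000):
    prop = theta + 0.1 * rng.standard_normal(D)   # random-walk proposal
    ll_prop = log_lik(prop)
    if np.log(rng.uniform()) < ll_prop - ll:      # Metropolis accept/reject
        theta, ll = prop, ll_prop
    samples.append(theta)

# Posterior-predictive class probabilities, averaged over kept draws.
keep = np.array(samples[2500:])
probs = np.mean([1.0 / (1.0 + np.exp(-logits(s, X))) for s in keep], axis=0)
acc = np.mean((probs > 0.5) == y)
print(f"posterior-predictive training accuracy: {acc:.2f}")
```

Swapping in a Jeffreys or reference prior would replace the implicit zero prior term in the acceptance ratio with the corresponding log-prior; the sampling machinery itself is unchanged.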



Cite this article

Lee, H. Default Priors for Neural Network Classification. Journal of Classification 24, 53–70 (2007). https://doi.org/10.1007/s00357-007-0001-2
