Abstract
Feedforward neural networks are a popular tool for classification, offering a method for fully flexible modeling. This paper examines the underlying probability model in order to understand, statistically, what is going on and thereby facilitate an intelligent choice of prior for a fully Bayesian analysis. The parameters turn out to be difficult or impossible to interpret, and yet a coherent prior requires a quantification of this inherent uncertainty. Several approaches are discussed, including flat priors, Jeffreys priors, and reference priors.
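As a minimal illustration of one of the default priors the abstract mentions (not an example from the paper itself), the Jeffreys prior is proportional to the square root of the Fisher information. For a single Bernoulli success probability this works out to the Beta(1/2, 1/2) density, which the sketch below evaluates and checks numerically; the function name and the numerical check are this note's own, not the paper's.

```python
import math

def jeffreys_density(theta):
    # Jeffreys prior for a Bernoulli likelihood: p(theta) ∝ sqrt(I(theta)),
    # where the Fisher information is I(theta) = 1 / (theta * (1 - theta)).
    # Normalizing gives the Beta(1/2, 1/2) density, with constant 1/pi.
    return 1.0 / (math.pi * math.sqrt(theta * (1.0 - theta)))

# Midpoint-rule check that the density integrates to approximately 1 on (0, 1).
n = 100_000
total = sum(jeffreys_density((i + 0.5) / n) / n for i in range(n))
print(abs(total - 1.0) < 0.01)  # True: the prior is proper despite endpoint spikes
```

Unlike a flat prior, this density piles mass near 0 and 1, reflecting that extreme probabilities are easier to distinguish from their neighbors; for neural network weights, as the paper discusses, no such clean interpretation of the parameters is available.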
Cite this article
Lee, H. Default Priors for Neural Network Classification. Journal of Classification 24, 53–70 (2007). https://doi.org/10.1007/s00357-007-0001-2