Abstract:
In many cases, we observe some variables X that contain predictive information about a scalar variable of interest Y, with (X, Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X=x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X=x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
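To make the component density concrete, below is a minimal, hypothetical sketch of a hybrid-Pareto-style density: a Gaussian body below a junction point and a generalized Pareto tail above it, with the tail scale chosen so the density is continuous at the junction. The function name hybrid_pareto_pdf and the free junction parameter u are illustrative assumptions; in the paper the junction point and tail scale are derived from the Gaussian location, spread, and the tail-index parameter (and, in the conditional model, all component parameters are produced by a neural network given x), which this sketch does not reproduce.

```python
import numpy as np
from scipy.stats import norm, genpareto

def hybrid_pareto_pdf(y, mu, sigma, xi, u):
    """Illustrative hybrid-Pareto-style density (not the paper's exact
    parameterization): Gaussian body for y <= u, generalized Pareto tail
    for y > u, with tail index xi controlling tail heaviness.
    The junction point u is treated here as a free parameter."""
    y = np.asarray(y, dtype=float)
    # Gaussian density value at the junction point
    g_u = norm.pdf(u, loc=mu, scale=sigma)
    # Choose the GPD scale so the tail density equals g_u at y = u,
    # i.e. the overall density is continuous at the junction
    beta = 1.0 / g_u
    # Unnormalized mass: Gaussian mass below u plus GPD mass (= 1) above u
    Z = norm.cdf(u, loc=mu, scale=sigma) + 1.0
    body = norm.pdf(y, loc=mu, scale=sigma)
    tail = genpareto.pdf(y, c=xi, loc=u, scale=beta)
    return np.where(y <= u, body, tail) / Z

# Example: a heavy upper tail (xi > 0) with the junction one sigma above the mean
ys = np.linspace(-3.0, 10.0, 5)
print(hybrid_pareto_pdf(ys, mu=0.0, sigma=1.0, xi=0.3, u=1.0))
```

A mixture of such components, with mixing weights and per-component parameters output by a neural network taking x as input, gives a conditional density estimator of the kind described in the abstract.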
Published in: IEEE Transactions on Neural Networks (Volume 20, Issue 7, July 2009)