
Bayesian Quantile Estimation in Deconvolution

  • Conference paper
  • In: Studies in Theoretical and Applied Statistics (SIS 2021)

Part of the book series: Springer Proceedings in Mathematics & Statistics (PROMS, volume 406)


Abstract

Estimating quantiles of a population is a fundamental problem of high practical relevance in nonparametric statistics. This chapter addresses the problem of quantile estimation in deconvolution models with known error distributions taking a Bayesian approach. We develop the analysis for error distributions with characteristic functions decaying polynomially fast, the so-called ordinary smooth error distributions that lead to mildly ill-posed inverse problems. Using Fourier inversion techniques, we derive an inequality relating the sup-norm distance between mixture densities to the Kolmogorov distance between the corresponding mixing cumulative distribution functions. Exploiting this smoothing inequality, we show that a careful choice of the prior law acting as an efficient approximation scheme for the sampling density leads to adaptive posterior contraction rates to the regularity level of the latent mixing density, thus yielding a new adaptive quantile estimation procedure.
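For orientation, the sampling scheme described in the abstract can be written compactly as follows (a standard formulation of the deconvolution model; the symbol \(\beta \) and the index conventions are chosen here, not taken from the chapter):

```latex
% sampling scheme (notation chosen here):
Y_i = X_i + \varepsilon_i, \qquad i = 1, \dots, n, \qquad
f_Y = f_X * f_\varepsilon \quad (f_X \ \text{unknown}, \ f_\varepsilon \ \text{known});
% "ordinary smooth" error of order \beta > 0: polynomial decay of the
% characteristic function,
|\hat{f}_\varepsilon(t)| \asymp |t|^{-\beta} \quad \text{as } |t| \to \infty,
% which makes recovering f_X, and hence its quantiles, mildly ill-posed.
```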


References

  1. Al Labadi, L., Abdelrazeq, I.: On functional central limit theorems of Bayesian nonparametric priors. Stat. Methods Appl. 26(2), 215–229 (2017)


  2. Al Labadi, L., Zarepour, M.: On asymptotic properties and almost sure approximation of the normalized inverse-Gaussian process. Bayesian Anal. 8(3), 553–568 (2013)


  3. Conti, P.L.: Approximated inference for the quantile function via Dirichlet processes. Metron 62(2), 201–222 (2004)


  4. Dattner, I., Goldenshluger, A., Juditsky, A.: On deconvolution of distribution functions. Ann. Statist. 39(5), 2477–2501 (2011)


  5. Dattner, I., Reiß, M., Trabs, M.: Adaptive quantile estimation in deconvolution with unknown error distribution. Bernoulli 22(1), 143–192 (2016)


  6. Dedecker, J., Fischer, A., Michel, B.: Improved rates for Wasserstein deconvolution with ordinary smooth error in dimension one. Electron. J. Statist. 9(1), 234–265 (2015)


  7. Fan, J.: On the optimal rates of convergence for nonparametric deconvolution problems. Ann. Statist. 19(3), 1257–1272 (1991)


  8. Ferguson, T.S.: A Bayesian analysis of some nonparametric problems. Ann. Statist. 1, 209–230 (1973)


  9. Ghosal, S., van der Vaart, A.: Fundamentals of Nonparametric Bayesian Inference. Cambridge Series in Statistical and Probabilistic Mathematics, vol. 44. Cambridge University Press, Cambridge (2017)


  10. Giné, E., Nickl, R.: Rates of contraction for posterior distributions in \(L^r\)-metrics, \(1\le r\le \infty \). Ann. Statist. 39(6), 2883–2911 (2011)


  11. Giné, E., Nickl, R.: Mathematical Foundations of Infinite-Dimensional Statistical Models. Cambridge Series in Statistical and Probabilistic Mathematics, vol. 40. Cambridge University Press, New York (2016)


  12. Hall, P., Lahiri, S.N.: Estimation of distributions, moments and quantiles in deconvolution problems. Ann. Statist. 36(5), 2110–2134 (2008)


  13. Hjort, N.L., Petrone, S.: Nonparametric quantile inference using Dirichlet processes. In: Advances in Statistical Modeling and Inference. Ser. Biostatistics, vol. 3, pp. 463–492. World Sci. Publ., Hackensack, NJ (2007)


  14. Meister, A.: Deconvolution Problems in Nonparametric Statistics. Lecture Notes in Statistics, vol. 193. Springer, Berlin (2009)


  15. Rousseau, J., Scricciolo, C.: Wasserstein convergence in Bayesian deconvolution models. https://arxiv.org/abs/2111.06846 (2021)

  16. Scricciolo, C.: Adaptive Bayesian density estimation in \(L^p\)-metrics with Pitman-Yor or normalized inverse-Gaussian process kernel mixtures. Bayesian Anal. 9(2), 475–520 (2014)


  17. Scricciolo, C.: Bayesian Kantorovich deconvolution in finite mixture models. In: New Statistical Developments in Data Science. Springer Proceedings in Mathematics & Statistics, vol. 288, pp. 119–134. Springer, Cham (2019)



Acknowledgements

The author would like to thank the Editors and two anonymous referees for valuable comments and remarks. She is a member of the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).

Author information

Correspondence to Catia Scricciolo.


Appendix

The following lemma provides sufficient conditions on the true cumulative distribution function \(F_{0Y}\) and the prior law \(\varPi _n\) so that the posterior measure concentrates on Kolmogorov neighborhoods of \(F_{0Y}\). It is a modification of Lemma 1 in [17], pp. 123–125, and of Lemma B.1 in [15], pp. 24–25, with a weaker condition on the prior concentration rate: Kullback-Leibler type neighborhoods, which also involve the second moment of the log-ratio \(\log (f_{0Y}/f_Y)\), can be replaced by Kullback-Leibler neighborhoods.
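For reference, the distinction just drawn can be made explicit. In notation adapted from [9] (the name \(B_2\) is ours), the Kullback-Leibler neighborhood constrains only the first log-moment, while the Kullback-Leibler type neighborhood also constrains the second:

```latex
B_{\mathrm{KL}}(P_{0Y};\,\epsilon^2)
  = \Bigl\{ P_Y : \textstyle\int f_{0Y}\,\log\tfrac{f_{0Y}}{f_Y} \le \epsilon^2 \Bigr\},
\qquad
B_{2}(P_{0Y};\,\epsilon^2)
  = \Bigl\{ P_Y \in B_{\mathrm{KL}}(P_{0Y};\,\epsilon^2) :
    \textstyle\int f_{0Y}\,\bigl(\log\tfrac{f_{0Y}}{f_Y}\bigr)^{2} \le \epsilon^2 \Bigr\}.
```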

Lemma 1

Let \(F_{0Y}\) be a continuous cumulative distribution function. Let \(\varPi _n\) be a prior law on a set \(\mathscr {P}_1\subseteq \mathscr {P}_0\) of probability measures with continuous cumulative distribution functions. If, for a constant \(C>0\) and a positive sequence \(\epsilon _n\rightarrow 0\) such that \(n\epsilon _n^2\rightarrow \infty \), we have

$$\begin{aligned} \varPi _n(B_{\textrm{KL}}(P_{0Y};\,\epsilon _n^2))\gtrsim \exp {(-Cn\epsilon _n^2)}, \end{aligned}$$

then, for a sequence \(M_n:=\xi (1-\theta )^{-1}(C/2+L_n)^{1/2}\), with \(\theta \in (0,\,1)\), \(\xi >1\) and \(L_n\rightarrow \infty \) such that \(L_n^{1/2}\epsilon _n\rightarrow 0\), we have

$$\begin{aligned} \varPi _n(\Vert F_Y-F_{0Y}\Vert _\infty >M_n\epsilon _n\mid Y^{(n)})=o_{\textbf{P}}(1). \end{aligned}$$
(16)

Proof

By Lemma 6.26 of [9], p. 145, with \(P_{0Y}^n\)-probability at least \(1-L_n^{-1}\), we have

$$\begin{aligned} \int \limits _{\mathscr {P}_1}\prod _{i=1}^n \frac{f_Y}{f_{0Y}}(Y_i)\,\varPi _n(\textrm{d}\mu _Y)\gtrsim \exp {(-(C+2L_n)n\epsilon _n^2)}. \end{aligned}$$
(17)

Following the proofs of Lemma 1 in [17], pp. 123–125, Lemma B.1 in [15], pp. 24–25, and applying the lower bound in (17), the convergence statement in (16) holds true.    \(\square \)

Remark 1

Lemma 1 shows that, by taking \(L_n\) to be a slowly varying sequence, Kullback-Leibler type neighborhoods can be replaced by Kullback-Leibler neighborhoods at the cost of an extra factor in the rate that does not affect the power of n: the rate is then of the order \(L_n^{1/2}\epsilon _n\).

The next lemma assesses the order of the sup-norm of the bias of a cumulative distribution function with density in a Sobolev-type space. It is the sup-norm version of Lemma C.2 in [15], which, instead, considers the \(L^1\)-norm.

Lemma 2

Let \(F_{0X}\) be the cumulative distribution function of a probability measure \(\mu _{0X}\in \mathscr {P}_0\) with density \(f_{0X}\). Suppose that there exists \(\alpha >0\) such that \(\int _{\mathbb R}|t|^\alpha |\hat{f}_{0X}(t)|\,\textrm{d}t<\infty \). Let \(K\in L^1(\mathbb R)\) be symmetric, with \(\hat{K}\in L^1(\mathbb R)\) such that \(\hat{K}\equiv 1\) on \([-1,\,1]\). Then, for every \(b>0\),

$$\begin{aligned} \Vert F_{0X}*K_b-F_{0X}\Vert _\infty =O(b^{\alpha +1}). \end{aligned}$$

Proof

Let \(b_{F_{0X}}:=(F_{0X}*K_b-F_{0X})\) be the bias of \(F_{0X}\). By the same arguments used for the function \(G_{2,b}\) in [6], pp. 251–252, we have

$$\begin{aligned}\begin{aligned} \Vert b_{F_{0X}}\Vert _\infty :=\sup _{x\in \mathbb R}|b_{F_{0X}}(x)|&=\sup _{x\in \mathbb R} \left| \frac{1}{2\pi }\int \limits _{|t|>1/b}\exp {(-\imath t x)} \frac{[1-\hat{K}(bt)]}{(-\imath t)}\hat{f}_{0X}(t)\,\textrm{d}t\right| \\&\le \frac{1}{2\pi }\int \limits _{|t|>1/b} \frac{|1-\hat{K}(bt)|}{|(-\imath t)|}|\hat{f}_{0X}(t)|\,\textrm{d}t, \end{aligned}\end{aligned}$$

where the mapping \(t\mapsto [1-\hat{K}(bt)][\hat{f}_{0X}(t)\textbf{1}_{[-1,\,1]^c}(bt)/t]\) is in \(L^1(\mathbb R)\) by the assumption that \((|\cdot |^\alpha \hat{f}_{0X})\in L^1(\mathbb R)\). Note that

$$\begin{aligned}\begin{aligned} \Vert b_{F_{0X}}\Vert _\infty&\le \frac{1}{2\pi }\int \limits _{|t|>1/b} \frac{|1-\hat{K}(bt)|}{|(-\imath t)|^{\alpha +1}}\underbrace{|(-\imath t)^{\alpha }\hat{f}_{0X}(t)|}_ {=|\widehat{D^{\alpha }f_{0X}}(t)|}\,\textrm{d}t\\&< \frac{ b^{\alpha +1}}{2\pi }\int \limits _{|t|>1/b} [1+|\hat{K}(bt)|]|t|^\alpha |\hat{f}_{0X}(t)|\,\textrm{d}t\lesssim b^{\alpha +1}\int \limits _{|t|>1/b} |t|^\alpha |\hat{f}_{0X}(t)|\,\textrm{d}t \lesssim b^{\alpha +1} \end{aligned} \end{aligned}$$

because \(\Vert \hat{K}\Vert _\infty \le \Vert K\Vert _1<\infty \). The assertion follows.    \(\square \)
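A quick numerical illustration of the rate (not part of the proof): the Fourier-side bound on \(\Vert b_{F_{0X}}\Vert _\infty \) in the first display can be evaluated for a concrete, hypothetical pair satisfying the lemma's assumptions, namely a kernel with trapezoidal \(\hat{K}\) (equal to 1 on \([-1,\,1]\), vanishing outside \([-2,\,2]\)) and the Laplace density, whose characteristic function \(1/(1+t^2)\) meets the integrability condition for every \(\alpha <1\). The ratio of the bound to \(b^{2}\) should then stabilize as \(b\rightarrow 0\).

```python
import numpy as np

def K_hat(t):
    """Trapezoidal Fourier transform of a de la Vallee Poussin-type kernel:
    equal to 1 on [-1, 1], linear on 1 <= |t| <= 2, zero outside [-2, 2]."""
    return np.clip(2.0 - np.abs(t), 0.0, 1.0)

def bias_bound(b, t_max=5000.0, n=200001):
    """Fourier-side upper bound on ||F_0X * K_b - F_0X||_inf for the Laplace
    density (characteristic function 1 / (1 + t^2)):
        (1 / pi) * int_{1/b}^{inf} (1 - K_hat(b t)) / (t (1 + t^2)) dt,
    where the factor 2 from integrating over |t| > 1/b cancels half of the
    1 / (2 pi). The integrand decays like t^{-3}, so truncating at t_max is
    harmless."""
    t = np.linspace(1.0 / b, t_max, n)
    g = (1.0 - K_hat(b * t)) / (t * (1.0 + t * t))
    # composite trapezoidal rule
    val = float(np.sum((g[1:] + g[:-1]) * np.diff(t)) / 2.0)
    return val / np.pi

# The ratio bound / b^2 should stabilize as b -> 0 (for the Laplace density
# the integrability condition holds for every alpha < 1, while the bound
# itself decays like b^2):
for b in [0.2, 0.1, 0.05, 0.025]:
    print(f"b = {b:6.3f}   bound / b^2 = {bias_bound(b) / b ** 2:.4f}")
```

Substituting \(u=bt\) shows the ratio tends to \(1/(4\pi )\approx 0.0796\) for this particular pair, so the stabilization is visible already for moderate values of b.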

The following lemma establishes the order of the sup-norm of the bias of the cumulative distribution function of a Gaussian mixture, when the mixing distribution is any probability measure on the real line and the scale parameter is chosen as a multiple of the kernel bandwidth, up to a logarithmic factor. It is analogous to Lemma G.1 in [15], p. 46, which, instead, considers the \(L^1\)-norm. Both results rely upon the fact that a Gaussian density has exponentially decaying tails.

Lemma 3

Let \(F_{X}\) be the cumulative distribution function of \(\mu _X=\mu _H*\phi _\sigma \), with \(\mu _{H}\in \mathscr {P}\) and \(\sigma >0\). Let \(K\in L^1(\mathbb R)\) be symmetric, with \(\hat{K}\in L^1(\mathbb R)\) such that \(\hat{K}\equiv 1\) on \([-1,\,1]\). Given \(\alpha >0\) and a sufficiently small \(b>0\), for \(\sigma =O(2b|\log b^{\alpha +1}|^{1/2})\), we have

$$\begin{aligned} \Vert F_{X}*K_b-F_{X}\Vert _\infty =O(b^{\alpha +1}). \end{aligned}$$

Proof

Let \(b_{F_X}:=(F_X*K_b-F_X)\) be the bias of \(F_X\). Define, for every \(b,\,\sigma >0\), the function

$$\begin{aligned}\widehat{f_{b,\sigma }}(t):=\frac{1-\hat{K}(bt)}{t}\hat{\phi }(\sigma t/\sqrt{2})\textbf{1}_{[-1,\,1]^c}(bt), \quad t\in \mathbb R,\end{aligned}$$

since \(t\mapsto [\hat{\mu }_H(t)\hat{\phi }(\sigma t/\sqrt{2})]\widehat{f_{b,\sigma }}(t)\) is in \(L^1(\mathbb R)\), arguing as for \(G_{2,b}\) in [6], pp. 251–252, we have that

$$\begin{aligned}\begin{aligned} \Vert b_{F_X}\Vert _\infty :=\sup _{x\in \mathbb R}|b_{F_X}(x)|&=\sup _{x\in \mathbb R}\left| \frac{1}{2\pi }\int \limits _{|t|>1/b} \exp {(-\imath t x)}\frac{[1-\hat{K}(bt)]}{(-\imath t)} \hat{\mu }_H(t)\hat{\phi }(\sigma t)\,\textrm{d}t\right| \\&= \sup _{x\in \mathbb R}\left| \frac{1}{2\pi }\int \limits _{|t|>1/b} \exp {(-\imath t x)}\hat{\mu }_H(t)\hat{\phi }(\sigma t/\sqrt{2})\widehat{f_{b,\sigma }}(t)\,\textrm{d}t\right| \\&=\Vert \mu _H*\phi _{\sigma /\sqrt{2}}*f_{b,\sigma }\Vert _\infty , \end{aligned} \end{aligned}$$

where \(f_{b,\sigma }(\cdot ):=(2\pi )^{-1}\int _{\mathbb R}\exp {(-\imath t \cdot )}\widehat{f_{b,\sigma }}(t)\,\textrm{d}t\) because \(\widehat{f_{b,\sigma }}\in L^1(\mathbb R)\). Since \(\Vert \mu _H*\phi _{\sigma /\sqrt{2}}\Vert _1=1\) and \(\Vert f_{b,\sigma }\Vert _\infty \le \Vert \widehat{f_{b,\sigma }}\Vert _1<\infty \) for all \(\mu _H\in \mathscr {P}\) and \(\sigma >0\), by Young’s convolution inequality,

$$\begin{aligned} \Vert b_{F_X}\Vert _\infty =\Vert \mu _H*\phi _{\sigma /\sqrt{2}}*f_{b,\sigma }\Vert _\infty \le \Vert \mu _H*\phi _{\sigma /\sqrt{2}}\Vert _1\times \Vert f_{b,\sigma }\Vert _\infty =\Vert f_{b,\sigma }\Vert _\infty , \end{aligned}$$

where

$$\begin{aligned} \begin{aligned}\Vert f_{b,\sigma }\Vert _\infty \le \Vert \widehat{f_{b,\sigma }}\Vert _1&\le \int \limits _{|t|>1/b} \frac{1+|\hat{K}(bt)|}{|t|}\hat{\phi }(\sigma t/\sqrt{2})\,\textrm{d}t\\&\lesssim b \int \limits _{|t|>1/b} \hat{\phi }(\sigma t/\sqrt{2})\,\textrm{d}t \lesssim (b/\sigma )^2\hat{\phi }(\sigma /(\sqrt{2}b))\lesssim b^{\alpha +1} \end{aligned} \end{aligned}$$

because \(\Vert \hat{K}\Vert _\infty \le \Vert K\Vert _1<\infty \), the upper tail of a Gaussian distribution is bounded above by

$$\begin{aligned}\int \limits _{1/b}^\infty \hat{\phi }(\sigma t)\,\textrm{d}t\lesssim \frac{b}{\sigma ^2}\hat{\phi }(\sigma /b)\end{aligned}$$

and \((\sigma /b)^2=O(\log (1/b^{\alpha +1}))\) by assumption. The assertion follows.    \(\square \)
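The Gaussian tail bound in the last display can also be checked numerically (a sanity check under illustrative choices, not part of the proof): with \(\hat{\phi }(t)=e^{-t^2/2}\) and \(\sigma \) chosen as in the lemma for the hypothetical value \(\alpha =1\), the integral stays below \((b/\sigma ^2)\hat{\phi }(\sigma /b)\), with the ratio approaching 1 as \(\sigma /b\) grows, as expected from Mill's ratio.

```python
import math

def phi_hat(t):
    """Fourier transform of the standard Gaussian density."""
    return math.exp(-0.5 * t * t)

def gaussian_tail(b, sigma, n=100000):
    """Composite trapezoidal approximation of int_{1/b}^{inf} phi_hat(sigma t) dt.
    The integrand is numerically zero well before the truncation point hi."""
    lo, hi = 1.0 / b, 1.0 / b + 12.0 / sigma
    h = (hi - lo) / n
    s = 0.5 * (phi_hat(sigma * lo) + phi_hat(sigma * hi))
    s += sum(phi_hat(sigma * (lo + k * h)) for k in range(1, n))
    return s * h

def tail_bound(b, sigma):
    """Right-hand side of the displayed bound: (b / sigma^2) * phi_hat(sigma / b)."""
    return (b / sigma ** 2) * phi_hat(sigma / b)

alpha = 1.0  # illustrative regularity level
for b in [0.2, 0.1, 0.05]:
    # sigma as in the lemma: a multiple of b up to a logarithmic factor
    sigma = 2.0 * b * abs(math.log(b ** (alpha + 1.0))) ** 0.5
    tail, bd = gaussian_tail(b, sigma), tail_bound(b, sigma)
    print(f"b = {b:5.2f}  sigma = {sigma:.3f}  tail = {tail:.3e}  bound = {bd:.3e}")
```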


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Scricciolo, C. (2022). Bayesian Quantile Estimation in Deconvolution. In: Salvati, N., Perna, C., Marchetti, S., Chambers, R. (eds) Studies in Theoretical and Applied Statistics. SIS 2021. Springer Proceedings in Mathematics & Statistics, vol 406. Springer, Cham. https://doi.org/10.1007/978-3-031-16609-9_10
