Bayesian joint inference for multivariate quantile regression model with L\(_{1/2}\) penalty

  • Original paper
  • Published:
Computational Statistics

A Correction to this article was published on 13 November 2021

This article has been updated

Abstract

This paper considers a Bayesian approach for jointly estimating the marginal conditional quantiles of several dependent variables under a linear regression framework. The approach incorporates the dependence among the dependent variables into the regression model and allows the relationship between the dependent variables and a set of explanatory variables to vary across quantiles of their marginal conditional distributions. A Bayesian regularization approach with the L\(_{1/2}\) penalty is adopted to conduct high-dimensional variable selection. Simulation studies are conducted to evaluate the performance of the proposed method, and the estimation approach is illustrated with a real data set on energy efficiency with two responses.



Acknowledgements

The authors thank the editors and referees for their constructive comments and suggestions, which have greatly improved the paper. The research of Yu-Zhu Tian was partially supported by grants from the National Natural Science Foundation of China (grants 12061065 and 11861042) and the Science and Technology Program of Gansu Province (grant 21JR7RA135). The work of Man-Lai Tang was partially supported by grants from the Research Grants Council of the Hong Kong Special Administrative Region (UGC/FDS14/P01/16, UGC/FDS14/P02/18 and the Research Matching Grant Scheme (RMGS)) and a grant from the National Natural Science Foundation of China (grant 11871124).

Author information


Corresponding author

Correspondence to Yu-Zhu Tian.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised because a grant number in the acknowledgements section was published incorrectly.

Appendix

The posterior distribution (3.3) implies that the full conditional posterior distribution of \(\beta _{\tau }\) is proportional to

$$\begin{aligned}&\pi (\beta _{\tau }|\Theta _{-}) \propto L_{C}(Y,W|X,\beta _{\tau },D,\Psi )\cdot \pi (\beta _{\tau }|H)\\&\quad \propto \exp \Big \{-\frac{1}{2}\sum _{i=1}^{N}(Y_{i}-\beta _{\tau }X_{i}-D\theta W_{i})^{T}(W_{i}D\Sigma D)^{-1}(Y_{i}-\beta _{\tau }X_{i}-D\theta W_{i})\Big \}\\&\quad \quad \cdot \prod _{h=1}^{p}\prod _{s=1}^{k}\text {I}(|\beta _{h,s}|<H_{h,s}^{2})\\&\quad \propto \exp \Big \{-\frac{1}{2}\sum _{i=1}^{N}(\beta _{\tau }X_{i}-\eta _{i})^{T}(W_{i}D\Sigma D)^{-1}(\beta _{\tau }X_{i}-\eta _{i})\Big \} \\&\quad \quad \cdot \prod _{h=1}^{p}\prod _{s=1}^{k}\text {I}(|\beta _{h,s}|<H_{h,s}^{2})\\&\quad \propto \exp \Big \{-\frac{1}{2}tr\Big (\sum _{i=1}^{N}(\beta _{\tau }X_{i}-\eta _{i})^{T}(W_{i}D\Sigma D)^{-1}(\beta _{\tau }X_{i}-\eta _{i})\Big )\Big \} \\&\quad \quad \cdot \prod _{h=1}^{p}\prod _{s=1}^{k}\text {I}(|\beta _{h,s}|<H_{h,s}^{2})\\&\quad \propto \exp \Big \{-\frac{1}{2}tr\Big (\sum _{i=1}^{N}\Big [(\beta _{\tau }X_{i})^{T}(W_{i}D\Sigma D)^{-1}(\beta _{\tau }X_{i})\\&\quad \quad -2\eta _{i}^{T}(W_{i}D\Sigma D)^{-1}(\beta _{\tau }X_{i})\Big ]\Big )\Big \} \\&\quad \quad \cdot \prod _{h=1}^{p}\prod _{s=1}^{k}\text {I}(|\beta _{h,s}|<H_{h,s}^{2})\\&\quad \propto \exp \Big \{-\frac{1}{2}\Big (\sum _{i=1}^{N}tr\Big (W_{i}^{-1}X_{i}X_{i}^{T}\beta _{\tau }^{T}(D\Sigma D)^{-1}\beta _{\tau }\Big )\\&\quad \quad -2\sum _{i=1}^{N}tr\Big (W_{i}^{-1}X_{i}\eta _{i}^{T}(D\Sigma D)^{-1}\beta _{\tau }\Big )\Big )\Big \} \\&\quad \quad \cdot \prod _{h=1}^{p}\prod _{s=1}^{k}\text {I}(|\beta _{h,s}|<H_{h,s}^{2})\\&\quad \propto \exp \Big \{-\frac{1}{2}tr\Big [\Big (\sum _{i=1}^{N}W_{i}^{-1}X_{i}X_{i}^{T}\Big )\cdot \beta _{\tau }^{T}(D\Sigma D)^{-1}\beta _{\tau }\\&\quad \quad -2\Big (\sum _{i=1}^{N}W_{i}^{-1}X_{i}\eta _{i}^{T}\Big )\cdot (D\Sigma D)^{-1}\beta _{\tau }\Big ]\Big \} \\&\quad \quad \cdot \prod _{h=1}^{p}\prod _{s=1}^{k}\text {I}(|\beta _{h,s}|<H_{h,s}^{2})\\&\quad \propto \exp \Big \{-\frac{1}{2}tr\Big [\Big (\sum _{i=1}^{N}W_{i}^{-1}X_{i}X_{i}^{T}\Big )\cdot (\beta _{\tau }-M)^{T}(D\Sigma D)^{-1}(\beta _{\tau }-M)\Big ]\Big \}\\&\quad \quad \cdot \prod _{h=1}^{p}\prod _{s=1}^{k}\text {I}(|\beta _{h,s}|<H_{h,s}^{2})\\&\quad \propto \exp \Big \{-\frac{1}{2}tr\Big (\Phi ^{-1}\cdot (\beta _{\tau }-M)^{T}V^{-1}(\beta _{\tau }-M)\Big )\Big \} \cdot \prod _{h=1}^{p}\prod _{s=1}^{k}\text {I}(|\beta _{h,s}|<H_{h,s}^{2})\\&\quad \sim N_{p\times k}(M,\Phi \otimes V) \end{aligned}$$

where \(N_{p\times k}(M,\Phi \otimes V)\) denotes the \(p\times k\) matrix normal distribution with parameters \(M\), \(\Phi \) and \(V\), in which \(\Phi _{k\times k}=\Big (\sum _{i=1}^{N}W_{i}^{-1}X_{i}X_{i}^{T}\Big )^{-1}\), \(M_{p\times k}=\sum _{i=1}^{N}(W_{i}^{-1}\eta _{i}X_{i}^{T})\cdot \Phi \), \(V_{p\times p}=D\Sigma D\) and \(\eta _{i}=Y_{i}-D\theta W_{i}\).
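
For illustration, a minimal NumPy sketch of a draw from this matrix normal full conditional is given below; it is not code from the paper. The function name, the column-stacked layout of X \((k\times N)\), eta \((p\times N)\) and the weight vector W, and the rejection step used to impose the indicator constraint \(|\beta _{h,s}|<H_{h,s}^{2}\) are all illustrative assumptions.

```python
import numpy as np


def sample_beta_full_conditional(X, eta, W, V, H, rng, max_tries=1000):
    """Draw beta_tau (p x k) from N_{p x k}(M, Phi (x) V), truncated to |beta_{h,s}| < H_{h,s}^2.

    X   : (k, N) covariates, columns X_i
    eta : (p, N) working responses, columns eta_i = Y_i - D theta W_i
    W   : (N,)   latent weights W_i
    V   : (p, p) V = D Sigma D
    H   : (p, k) truncation bounds H_{h,s}
    """
    # Phi = (sum_i W_i^{-1} X_i X_i^T)^{-1}   (k x k)
    Phi = np.linalg.inv((X / W) @ X.T)
    # M = (sum_i W_i^{-1} eta_i X_i^T) Phi    (p x k)
    M = (eta / W) @ X.T @ Phi
    # Matrix normal draw: M + L_V Z L_Phi^T, Z ~ N(0, I), with L_V L_V^T = V and L_Phi L_Phi^T = Phi
    L_V, L_Phi = np.linalg.cholesky(V), np.linalg.cholesky(Phi)
    for _ in range(max_tries):
        beta = M + L_V @ rng.standard_normal(M.shape) @ L_Phi.T
        if np.all(np.abs(beta) < H ** 2):  # indicator constraint from the prior pi(beta_tau | H)
            return beta
    raise RuntimeError("rejection sampler for the truncated full conditional did not accept")
```

In a Gibbs sweep this draw would alternate with the \(\Psi \) and \(W_{i}\) updates derived below.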

The posterior distribution (3.3) implies that the full conditional posterior distribution of the correlation parameter matrix \(\Psi \) is proportional to

$$\begin{aligned}&\pi (\Psi |\Theta _{-}) \propto L_{C}(Y,W|X,\beta _{\tau },D,\Psi )\cdot \pi (\Psi )\\&\quad \propto \Big [\prod _{i=1}^{N}\frac{(2\pi )^{-p/2}}{|W_{i}D\nabla \Psi \nabla D|^{1/2}}\Big ]\cdot \exp \Big \{-\frac{1}{2}\sum _{i=1}^{N}\alpha _{i}^{T}(W_{i}D\nabla \Psi \nabla D)^{-1}\alpha _{i}\Big \}\\&\quad \quad \cdot \frac{|\Psi |^{-\frac{m_{0}+p+1}{2}}\exp \{-\frac{1}{2}tr(\Psi ^{-1}\Phi _{0})\}}{2^{\frac{m_{0}p}{2}}|\Phi _{0}|^{-\frac{m_{0}}{2}}\cdot \Gamma _{p}(\frac{m_{0}}{2})}\\&\quad \propto |\Psi |^{-\frac{N+m_{0}+p+1}{2}}\exp \Big \{-\frac{1}{2}\Big [\sum _{i=1}^{N}tr\Big (\alpha _{i}^{T}(W_{i}D\nabla \Psi \nabla D)^{-1}\alpha _{i}\Big )+tr(\Psi ^{-1}\Phi _{0})\Big ]\Big \}\\&\quad \propto |\Psi |^{-\frac{N+m_{0}+p+1}{2}}\exp \Big \{-\frac{1}{2}tr\Big [\Psi ^{-1}\Big ((\nabla D)^{-1}\cdot \sum _{i=1}^{N}(W_{i}^{-1}\alpha _{i}\alpha _{i}^{T})\cdot (D\nabla )^{-1}+\Phi _{0}\Big )\Big ]\Big \}\\&\quad \quad \sim IW\Big (N+m_{0}, (\nabla D)^{-1}\cdot \sum _{i=1}^{N}(W_{i}^{-1}\alpha _{i}\alpha _{i}^{T})\cdot (D\nabla )^{-1}+\Phi _{0}\Big ) \end{aligned}$$

where \(\alpha _{i}=Y_{i}-\beta _{\tau }X_{i}-D\theta W_{i}\).
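
The \(\Psi \) update can be sketched in the same spirit with scipy's inverse Wishart sampler. The function name and array layout below are assumptions; D and \(\nabla \) are taken to be the diagonal matrices of the model, and \(m_{0}\), \(\Phi _{0}\) are the prior hyperparameters.

```python
import numpy as np
from scipy.stats import invwishart


def sample_Psi_full_conditional(Y, X, W, beta, D, theta, nabla, m0, Phi0, rng):
    """Draw Psi ~ IW(N + m0, (nabla D)^{-1} [sum_i W_i^{-1} alpha_i alpha_i^T] (D nabla)^{-1} + Phi0).

    Y: (p, N), X: (k, N), W: (N,), beta: (p, k), D and nabla: (p, p) diagonal, theta: (p,).
    """
    N = Y.shape[1]
    # alpha_i = Y_i - beta X_i - D theta W_i, stored as the columns of a (p, N) array
    alpha = Y - beta @ X - np.outer(D @ theta, W)
    S = (alpha / W) @ alpha.T            # sum_i W_i^{-1} alpha_i alpha_i^T
    A = np.linalg.inv(nabla @ D)         # (nabla D)^{-1}; equals (D nabla)^{-1} since D, nabla are diagonal
    return invwishart.rvs(df=N + m0, scale=A @ S @ A + Phi0, random_state=rng)
```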

The posterior distribution (3.3) implies that the full conditional posterior distribution of the latent variable \(W_{i}\) is proportional to

$$\begin{aligned}&\pi (W_{i}|\Theta _{-})\\&\quad \propto \frac{1}{|W_{i}D\Sigma D|^{1/2}}\cdot \exp \Big \{-\frac{1}{2}(e_{i}-D\theta W_{i})^{T}(W_{i}D\Sigma D)^{-1}(e_{i}-D\theta W_{i})\Big \}\cdot \exp \{-W_{i}\}\\&\quad \propto |W_{i}|^{-\frac{p}{2}}\cdot \exp \Big \{-\frac{1}{2}(e_{i}-D\theta W_{i})^{T}(W_{i}D\Sigma D)^{-1}(e_{i}-D\theta W_{i})\Big \}\cdot \exp \{-W_{i}\}\\&\quad \propto |W_{i}|^{-\frac{p}{2}}\cdot \exp \Big \{-\frac{1}{2}\Big [e_{i}^{T}(D\Sigma D)^{-1}e_{i}\cdot W_{i}^{-1}+(\theta ^{T}\Sigma ^{-1}\theta +2)\cdot W_{i}\Big ]\Big \}\\&\quad \quad \sim \text {GIG}\Big (1-\frac{p}{2},e_{i}^{T}(D\Sigma D)^{-1}e_{i},\theta ^{T}\Sigma ^{-1}\theta +2\Big ),~i=1,\ldots ,N, \end{aligned}$$

where \(\text {GIG}(\lambda ,\chi ,\psi )\) denotes the generalized inverse Gaussian (GIG) distribution, whose pdf is

$$\begin{aligned} X\sim f(x)=\frac{\chi ^{-\lambda }(\sqrt{\chi \psi })^{\lambda }}{2K_{\lambda }(\sqrt{\chi \psi })}x^{\lambda -1}\exp \{-\frac{1}{2}(\chi x^{-1}+\psi x)\},~x>0 \end{aligned}$$

where \(K_{\lambda }\) denotes a modified Bessel function of the third kind with index \(\lambda \) and the parameters satisfy \(\chi>0,\psi >0\).
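
For completeness, a GIG\((\lambda ,\chi ,\psi )\) draw for \(W_{i}\) can be obtained from scipy's geninvgauss distribution, whose \((p,b,\text {scale})\) parameterization matches the density above with \(p=\lambda \), \(b=\sqrt{\chi \psi }\) and \(\text {scale}=\sqrt{\chi /\psi }\). The sketch below is illustrative only (the function name and inputs are assumptions); \(\chi \) and \(\psi \) are computed from the expressions in the \(W_{i}\) full conditional.

```python
import numpy as np
from scipy.stats import geninvgauss


def sample_W_full_conditional(e_i, D, Sigma, theta, rng):
    """Draw W_i ~ GIG(1 - p/2, chi, psi) with
    chi = e_i^T (D Sigma D)^{-1} e_i  and  psi = theta^T Sigma^{-1} theta + 2."""
    p = len(e_i)
    chi = float(e_i @ np.linalg.solve(D @ Sigma @ D, e_i))
    psi = float(theta @ np.linalg.solve(Sigma, theta)) + 2.0
    lam = 1.0 - p / 2.0
    # GIG(lam, chi, psi) corresponds to geninvgauss(p=lam, b=sqrt(chi*psi)) scaled by sqrt(chi/psi)
    return geninvgauss.rvs(lam, np.sqrt(chi * psi), scale=np.sqrt(chi / psi), random_state=rng)
```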

About this article

Cite this article

Tian, YZ., Tang, ML. & Tian, MZ. Bayesian joint inference for multivariate quantile regression model with L\(_{1/2}\) penalty. Comput Stat 36, 2967–2994 (2021). https://doi.org/10.1007/s00180-021-01158-4
