
Fast Bayesian Inference for Birnbaum-Saunders Distribution

Original paper, Computational Statistics

Abstract

In this work, we propose Bayesian estimators for the parameters of the Birnbaum-Saunders (BS) distribution based on conjugate, Jeffreys, and reference priors; they can be implemented quite efficiently through a Gibbs sampler. The Bayesian estimator based on the conjugate prior for the BS distribution is new. The performance of the proposed Bayesian paradigm under the three priors is demonstrated by simulation and by analyzing two sets of real data. Furthermore, it is shown through an extra real example that the Bayesian estimator can outperform the maximum likelihood estimator for the BS distribution when data are incomplete due to the progressive type-II censoring scheme. An R package called bibs has been uploaded to the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/bibs/index.html for computing the Bayesian estimators, corresponding standard errors, and credible intervals.


References

  • Achcar JA (1993) Inferences for the Birnbaum-Saunders fatigue life model using Bayesian methods. Comput Stat Data Anal 15(4):367–380


  • Aggarwala R (2001) Progressive interval censoring: some mathematical results with applications to inference. Commun Stat Theory Methods 30(8–9):1921–1935


  • Balakrishnan N, Aggarwala R (2000) Progressive censoring: theory, methods, and applications. Springer Science & Business Media, New York


  • Balakrishnan N, Kundu D (2019) Birnbaum-Saunders distribution: a review of models, analysis, and applications. Appl Stoch Models Bus Ind 35(1):4–49


  • Balakrishnan N, Zhu X (2013) A simple and efficient method of estimation of the parameters of a bivariate Birnbaum-Saunders distribution based on type-II censored samples. In: Kollo T (ed) Multivariate statistics: theory and applications. World Scientific, New Jersey, pp 34–47


  • Balakrishnan N, Zhu X (2014) On the existence and uniqueness of the maximum likelihood estimates of the parameters of Birnbaum-Saunders distribution based on type-I, type-II and hybrid censored samples. Statistics 48(5):1013–1032


  • Barros M, Paula GA, Leiva V (2008) A new class of survival regression models with heavy-tailed errors: robustness and diagnostics. Lifetime Data Anal 14(3):316–332


  • Barros M, Paula GA, Leiva V (2009) An R implementation for generalized Birnbaum-Saunders distributions. Comput Stat Data Anal 53(4):1511–1528


  • Bernardo JM (1979) Reference posterior distributions for Bayesian inference. J R Stat Soc Ser B 41(2):113–128


  • Birnbaum ZW, Saunders SC (1969) Estimation for a family of life distributions with applications to fatigue. J Appl Probab 6(2):328–347


  • Chib S, Greenberg E (1995) Understanding the Metropolis-Hastings algorithm. Am Stat 49(4):327–335


  • Congdon P (2007) Bayesian statistical modelling. Wiley, Chichester


  • Darijani S, Zakerzade H, Torabi H (2019) Goodness-of-fit tests for Birnbaum-Saunders distributions. J Iran Stat Soc 18(2):1–20


  • Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B 39(1):1–38


  • Desmond AF (1986) On the relationship between two fatigue-life models. IEEE Trans Reliab 35(2):167–169


  • Díaz-García JA (2005) A new family of life distributions based on the elliptically contoured distributions. J Stat Plan Inference 128(2):445–457


  • Dunn PK, Smyth GK (1996) Randomized quantile residuals. J Comput Graph Stat 5(3):236–244


  • Galea M, Leiva-Sánchez V, Paula G (2004) Influence diagnostics in log-Birnbaum-Saunders regression models. J Appl Stat 31(9):1049–1064


  • Gilks WR, Best NG, Tan K (1995) Adaptive rejection Metropolis sampling within Gibbs sampling. J R Stat Soc Ser C 44(4):455–472


  • Good IJ (1953) The population frequencies of species and the estimation of population parameters. Biometrika 40(3–4):237–264


  • Johnson RA, Wichern DW (1999) Applied multivariate analysis. Prentice-Hall, New Jersey


  • Johnson NL, Kotz S, Balakrishnan N (1995) Continuous Univariate Distributions, vol 2. Wiley, New York


  • Kundu D, Balakrishnan N, Jamalizadeh A (2010) Bivariate Birnbaum-Saunders distribution and associated inference. J Multivar Anal 101(1):113–125


  • Lange K (2010) Numerical analysis for statisticians. Springer, New York


  • Lawless J (1982) Statistical models and methods for lifetime data. Wiley, New York


  • Lemonte AJ (2016) A note on the Fisher information matrix of the Birnbaum-Saunders distribution. JSTA 15(2):196–205


  • Lemonte AJ, Cribari-Neto F, Vasconcellos KL (2007) Improved statistical inference for the two-parameter Birnbaum-Saunders distribution. Comput Stat Data Anal 51(9):4656–4681


  • McLachlan GJ, Krishnan T (2007) The EM algorithm and extensions. Wiley, Hoboken


  • Neal RM (2003) Slice sampling. Ann Stat 31(3):705–767


  • Ng H, Kundu D, Balakrishnan N (2003) Modified moment estimation for the two-parameter Birnbaum-Saunders distribution. Comput Stat Data Anal 43(3):283–298


  • Ng H, Kundu D, Balakrishnan N (2006) Point and interval estimation for the two-parameter Birnbaum-Saunders distribution based on type-II censored samples. Comput Stat Data Anal 50(11):3222–3242


  • Ortega EM, Cordeiro GM, Lemonte AJ (2012) A log-linear regression model for the \(\beta \)-Birnbaum-Saunders distribution with censored data. Comput Stat Data Anal 56(3):698–718


  • Owen WJ (2006) A new three-parameter extension to the Birnbaum-Saunders distribution. IEEE Trans Reliab 55(3):475–479


  • Padgett W, Tomlinson M (2003) Lower confidence bounds for percentiles of Weibull and Birnbaum-Saunders distributions. J Stat Comput Simul 73(6):429–443


  • Pradhan B, Kundu D (2013) Inference and optimal censoring schemes for progressively censored Birnbaum-Saunders distribution. J Stat Plan Inference 143(6):1098–1108


  • R Core Team (2018) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria

  • Rieck JR (1995) Parameter estimation for the Birnbaum-Saunders distribution based on symmetrically censored samples. Commun Stat Theory Methods 24(7):1721–1736


  • Sha N, Ng TL (2017) Bayesian inference for Birnbaum-Saunders distribution and its generalization. J Stat Comput Simul 87(12):2411–2429


  • Teimouri M (2021) bccp: an R package for life-testing and survival analysis. Comput Stat. https://doi.org/10.1007/s00180-021-01129-9


  • Teimouri M, Nadarajah S (2016) Bias corrected MLEs under progressive type-II censoring scheme. J Stat Comput Simul 86(14):2714–2726


  • Teimouri M, Hosseini SM, Nadarajah S (2013) Ratios of Birnbaum-Saunders random variables. Qual Technol Quant Manag 10(4):457–481


  • Tsionas EG (2001) Bayesian inference in Birnbaum-Saunders regression. Commun Stat Theory Methods 30(1):179–193


  • Wang BX (2012) Generalized interval estimation for the Birnbaum-Saunders distribution. Comput Stat Data Anal 56(12):4320–4326


  • Wang Z, Desmond AF, Lu X (2006) Modified censored moment estimation for the two-parameter Birnbaum-Saunders distribution. Comput Stat Data Anal 50(4):1033–1051


  • Wang M, Sun X, Park C (2016) Bayesian analysis of Birnbaum-Saunders distribution via the generalized ratio-of-uniforms method. Comput Stat 31(1):207–225


  • Xu A, Tang Y (2010) Reference analysis for Birnbaum-Saunders distribution. Comput Stat Data Anal 54:185–192


  • Xu A, Tang Y (2011) Bayesian analysis of Birnbaum-Saunders distribution with partial information. Comput Stat Data Anal 55(7):2324–2333



Funding

This research received no funding.

Author information

Correspondence to Mahdi Teimouri.

Ethics declarations

Conflict of interest

The author declares no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A. Proof of Lemma 2.1

It is known that the pdf of the BS distribution can be represented as a two-component mixture of inverse Gaussian (IG) distributions with equal mixing weights (Desmond 1986). Lemma 2.1 restates this property in terms of the generalized inverse Gaussian (GIG) distribution. To see this, we write

$$\begin{aligned} f_{\text {BS}}(t|\alpha ,\beta )=&\frac{\sqrt{\frac{t}{\beta }}+ \sqrt{\frac{\beta }{t}}}{2\sqrt{2\pi }\alpha t} \exp \biggl \{-\frac{1}{2\alpha ^2}\bigg ({\frac{\beta }{t}}+{\frac{t}{\beta }}-2\bigg )\biggr \},\nonumber \\ =&\frac{\exp \bigl \{\frac{1}{\alpha ^2}\bigr \}}{2\sqrt{2\pi }\alpha \sqrt{\beta t}} \exp \biggl \{-\frac{1}{2\alpha ^2}\bigg ({\frac{\beta }{t}}+{\frac{t}{\beta }}\bigg )\biggr \}\nonumber \\&+\frac{\sqrt{\beta }\exp \bigl \{\frac{1}{\alpha ^2}\bigr \}}{2\sqrt{2\pi }\alpha t^{3/2}} \exp \biggl \{-\frac{1}{2\alpha ^2}\bigg ({{\frac{\beta }{t}}+\frac{t}{\beta }}\bigg )\biggr \},\nonumber \\ =&\frac{\exp \bigl \{\frac{1}{\alpha ^2}\bigr \}\text {K}_{\frac{1}{2}}\bigl (\frac{1}{\alpha ^2}\bigr )}{\sqrt{2\pi }\alpha } f_{\text {GIG}}\Bigl (t\Bigl |\frac{1}{2},\frac{\beta }{\alpha ^2},\frac{1}{\alpha ^2\beta }\Bigr )\nonumber \\&+\frac{\exp \bigl \{\frac{1}{\alpha ^2}\bigr \}\text {K}_{-\frac{1}{2}}\bigl (\frac{1}{\alpha ^2}\bigr )}{\sqrt{2\pi }\alpha } f_{\text {GIG}}\Bigl (t\Bigl |-\frac{1}{2},\frac{\beta }{\alpha ^2},\frac{1}{\alpha ^2\beta }\Bigr ). \end{aligned}$$
(A1)

It should be noted that the modified Bessel function of the second kind, \(\text {K}_{\lambda }(\cdot )\), is an even function with respect to its order \(\lambda \). This means that \(\text {K}_{-1/2}\bigl (1/\alpha ^2\bigr )=\text {K}_{1/2}\bigl (1/\alpha ^2\bigr )\). It follows that

$$\begin{aligned} \frac{\exp \bigl \{\frac{1}{\alpha ^2}\bigr \}\text {K}_{\frac{1}{2}}\bigl (\frac{1}{\alpha ^2}\bigr )}{\sqrt{2\pi }\alpha }= \frac{\exp \bigl \{\frac{1}{\alpha ^2}\bigr \}\text {K}_{-\frac{1}{2}}\bigl (\frac{1}{\alpha ^2}\bigr )}{\sqrt{2\pi }\alpha }={{{\mathcal {K}}}}, \end{aligned}$$

where \({{{\mathcal {K}}}}\) is a positive constant. Substituting \({{{\mathcal {K}}}}\) in the right-hand side of (A1) and integrating both sides with respect to t yields

$$\begin{aligned} 1=\int _{0}^{\infty }{{{\mathcal {K}}}} f_{\text {GIG}}\Bigl (t\Bigl |\frac{1}{2},\frac{\beta }{\alpha ^2},\frac{1}{\alpha ^2\beta }\Bigr )dt+\int _{0}^{\infty }{{{\mathcal {K}}}} f_{\text {GIG}}\Bigl (t\Bigl |-\frac{1}{2},\frac{\beta }{\alpha ^2},\frac{1}{\alpha ^2\beta }\Bigr )dt={{{\mathcal {K}}}}+{{{\mathcal {K}}}}. \end{aligned}$$

This means that \({{{\mathcal {K}}}}=1/2\) and the result follows.
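Lemma 2.1 can be checked numerically. The sketch below (in Python with SciPy, not the paper's bibs R package) verifies that the BS density equals the equal-weight mixture of the two GIG components; it assumes SciPy's `geninvgauss(p, b, scale)` parameterization, which corresponds to \(\text {GIG}(\lambda ,\chi ,\psi )\) via \(p=\lambda \), \(b=\sqrt{\chi \psi }\), and \(\text {scale}=\sqrt{\chi /\psi }\).

```python
import numpy as np
from scipy.stats import geninvgauss

def bs_pdf(t, alpha, beta):
    # Birnbaum-Saunders density f_BS(t | alpha, beta)
    a = np.sqrt(t / beta) + np.sqrt(beta / t)
    return a / (2 * np.sqrt(2 * np.pi) * alpha * t) * np.exp(
        -(beta / t + t / beta - 2) / (2 * alpha**2))

def gig_pdf(t, lam, chi, psi):
    # GIG(lam, chi, psi) density via SciPy: p = lam, b = sqrt(chi*psi),
    # scale = sqrt(chi/psi) -- an assumed mapping, stated in the lead-in
    return geninvgauss(lam, np.sqrt(chi * psi),
                       scale=np.sqrt(chi / psi)).pdf(t)

alpha, beta = 0.7, 2.0
t = np.linspace(0.1, 10.0, 50)
chi, psi = beta / alpha**2, 1 / (alpha**2 * beta)
mix = 0.5 * gig_pdf(t, 0.5, chi, psi) + 0.5 * gig_pdf(t, -0.5, chi, psi)
assert np.allclose(bs_pdf(t, alpha, beta), mix)
```

The assertion holds for any admissible \((\alpha ,\beta )\), which is exactly the statement of Lemma 2.1.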

Appendix B. Proof of Lemma 2.2

From Lemma 2.1, we know that the BS distribution can be represented as a convex combination of two GIG distributions. Define a latent variable \(Z_i\sim {\text {Bernoulli}}(p=0.5)\) for each observed value \(t_i\) (for \(i=1,\cdots ,n\)) indicating that \(t_i\) comes from the first component if \(z_i=1\) and from the second component if \(z_i=0\). It then follows from Lemma 2.1 that \(T_i|Z_i=1 \sim \text {GIG}(1/2,\beta /\alpha ^2,1/(\alpha ^2\beta ))\) and \(T_i|Z_i=0 \sim \text {GIG}(-1/2,\beta /\alpha ^2,1/(\alpha ^2\beta ))\). For the complete-data likelihood, we can write

$$\begin{aligned} L_{c}(\alpha ,\beta |{\varvec{t}},{\varvec{z}})&=\prod _{i=1}^{n}\Bigl (\frac{1}{2}\Bigr )^{z_{i}} \Bigl (\frac{1}{2}\Bigr )^{1-z_{i}} \Bigl \{f_{T_{i}|Z_{i}}(t_i|Z_i=1,\alpha ,\beta )\Bigr \}^{z_{i}} \Bigl \{f_{T_{i}|Z_{i}}(t_i|Z_i=0,\alpha ,\beta )\Bigr \}^{1-z_{i}} \nonumber \\&=\biggl ( \frac{\exp \bigl \{\frac{1}{\alpha ^2}\bigr \}\text {K}_{\frac{1}{2}}\bigl (\frac{1}{\alpha ^2}\bigr )}{\sqrt{2\pi }\alpha }\biggr )^n\prod _{i=1}^{n} \Bigl [f_{\text {GIG}}\Bigl (t_i\Bigl |\frac{1}{2},\frac{\beta }{\alpha ^2},\frac{1}{\alpha ^2\beta }\Bigr )\Bigr ]^{z_i} \Bigl [f_{\text {GIG}}\Bigl (t_i\Bigl |-\frac{1}{2},\frac{\beta }{\alpha ^2},\frac{1}{\alpha ^2\beta }\Bigr )\Bigr ]^{1-z_i}\nonumber \\&=\biggl ( \frac{\exp \bigl \{\frac{1}{\alpha ^2}\bigr \}\text {K}_{\frac{1}{2}}\bigl (\frac{1}{\alpha ^2}\bigr )}{\sqrt{2\pi }\alpha }\biggr )^n\prod _{i=1}^{n} \biggl [\frac{1}{2\text {K}_{\frac{1}{2}}\bigl (\frac{1}{\alpha ^2}\bigr )}\frac{t_{i}^{-\frac{1}{2}}}{\sqrt{\beta }} \exp \biggl \{-\frac{\beta }{2\alpha ^2t_{i}}-\frac{t_{i}}{2\alpha ^2\beta }\biggr \}\biggr ]^{z_i} \biggl [\frac{\sqrt{\beta }}{2\text {K}_{-\frac{1}{2}}\bigl (\frac{1}{\alpha ^2}\bigr )}t_{i}^{-\frac{3}{2}} \exp \biggl \{-\frac{\beta }{2\alpha ^2t_{i}}-\frac{t_{i}}{2\alpha ^2\beta }\biggr \}\biggr ]^{1-z_i}\nonumber \\&= \frac{\exp \bigl \{\frac{n}{\alpha ^2}\bigr \}}{2^n(2\pi )^{n/2}\alpha ^n} \Bigl (\prod _{i=1}^{n} t_{i}^{z_i-\frac{3}{2}}\Bigr ) \beta ^{\frac{n}{2}-\sum _{i=1}^{n}z_i} \exp \biggl \{-\frac{\beta }{2\alpha ^2}\sum _{i=1}^{n}\frac{1}{t_i}-\frac{\sum _{i=1}^{n}t_i}{2\alpha ^2\beta }\biggr \}. \end{aligned}$$
(B1)

We note that \(z_i\) on the right-hand side of (B1) is drawn from the full conditional distribution given by

$$\begin{aligned} P\bigl (Z_i=1\big | t_i, \alpha ,\beta \bigr )&\propto f_{T_{i}|Z_{i}}(t_i|Z_i=1,\alpha ,\beta )P(Z_i=1),\\&=\frac{f_{\text {GIG}}\Bigl (t_i\Bigl |\frac{1}{2},\frac{\beta }{\alpha ^2},\frac{1}{\alpha ^2\beta }\Bigr )P(Z_i=1)}{ f_{\text {GIG}}\Bigl (t_i\Bigl |\frac{1}{2},\frac{\beta }{\alpha ^2},\frac{1}{\alpha ^2\beta }\Bigr )P(Z_i=1) + f_{\text {GIG}}\Bigl (t_i\Bigl |-\frac{1}{2},\frac{\beta }{\alpha ^2},\frac{1}{\alpha ^2\beta }\Bigr )P(Z_i=0)},\\&=\frac{\frac{(\beta t_{i})^{-\frac{1}{2}}}{2\text {K}_{-\frac{1}{2}}(\frac{1}{\alpha ^2})} \exp \bigl \{-\frac{\beta }{2\alpha ^2t_{i}}-\frac{t_{i}}{2\alpha ^2\beta }\bigr \} \times \frac{1}{2}}{ \frac{(\beta t_{i})^{-\frac{1}{2}}}{2\text {K}_{-\frac{1}{2}}(\frac{1}{\alpha ^2})} \exp \bigl \{-\frac{\beta }{2\alpha ^2t_{i}}-\frac{t_{i}}{2\alpha ^2\beta }\bigr \}\times \frac{1}{2} + \frac{\sqrt{\beta }t_{i}^{-\frac{3}{2}}}{2\text {K}_{\frac{1}{2}}(\frac{1}{\alpha ^2})} \exp \bigl \{-\frac{\beta }{2\alpha ^2t_{i}}-\frac{t_{i}}{2\alpha ^2\beta }\bigr \}\times \frac{1}{2}}. \end{aligned}$$

Some simple algebra shows

$$\begin{aligned} \displaystyle P\bigl (Z_i=z_i\big | t_i, \alpha ,\beta \bigr )= \left\{ \begin{array}{c} \displaystyle \frac{t_i}{t_i +\beta }, \ \mathrm {{if}} \ z_i = 1, \\ \displaystyle \frac{\beta }{t_i +\beta }, \ \mathrm {if} \ z_i=0. \end{array} \right. \end{aligned}$$
(B2)

The result follows.
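The closed form (B2) can be verified against the GIG densities directly. The Python sketch below (an illustration with SciPy, assuming the `geninvgauss(p, b, scale)` mapping \(p=\lambda \), \(b=\sqrt{\chi \psi }\), \(\text {scale}=\sqrt{\chi /\psi }\)) confirms that the normalized two-component posterior probability reduces to \(t_i/(t_i+\beta )\):

```python
import numpy as np
from scipy.stats import geninvgauss

alpha, beta = 1.2, 3.0
chi, psi = beta / alpha**2, 1 / (alpha**2 * beta)
b, s = np.sqrt(chi * psi), np.sqrt(chi / psi)   # b = 1/alpha^2, s = beta

t = np.array([0.5, 1.0, 2.0, 5.0])
f1 = geninvgauss(0.5, b, scale=s).pdf(t)    # first component  (Z = 1)
f0 = geninvgauss(-0.5, b, scale=s).pdf(t)   # second component (Z = 0)
p1 = 0.5 * f1 / (0.5 * f1 + 0.5 * f0)       # posterior P(Z = 1 | t)
assert np.allclose(p1, t / (t + beta))      # matches (B2)
```

The Bessel normalizers cancel since \(\text {K}_{1/2}=\text {K}_{-1/2}\), leaving only the ratio \(t_i/\beta \), which yields (B2).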

Appendix C. Algorithm for computing the Bayesian estimator using the reference prior

  1. Read \(M\), \(M_{0}\), and \({\varvec{t}}=\bigl (t_1,t_2,\cdots ,t_n\bigr )^{'}\). Define \(\text {st}=\sum _{i=1}^{n} t_i\) and \(\text {rst}=\sum _{i=1}^{n}1/t_i\), and set the initial values of \(\beta \) and \(\alpha \) as \(\beta _0=\sqrt{\text {st}/\text {rst}}\) and \(\alpha _0=\sqrt{\text {st}/(n\beta _0)+\beta _0\text {rst}/n-2}\), respectively;

  2. Set \(m=0\);

  3. For each \(t_i\), simulate \(z_i \sim \text {Bernoulli}(p_{i})\) where \(p_{i}=t_{i}/\bigl (\beta _{m}+t_{i}\bigr )\) (for \(i=1,2,\cdots ,n\)) and set \(\text {sz}=\sum _{i=1}^{n} z_{i}\);

  4. Update \(\beta \) as \(\beta _{m+1}\) by generating from \(\text {GIG}\bigl (\lambda = n/2 - \text {sz}, \chi = \text {st}/\alpha _{m}^{2}, \psi = \text {rst}/\alpha _{m}^{2}\bigr )\);

  5. Update \(\alpha \) as \(\alpha _{m+1}\) by generating from \(\sqrt{ \text {IG}\bigl ( n/2, \text {rate}\bigr )}\), where \(\text {IG}(a,b)\) denotes the inverse-gamma distribution with shape \(a\) and rate \(b\), and \(\text {rate}=\text {st}/(2\beta _{m+1}) + \beta _{m+1}\text {rst}/2 - n\);

  6. If \(m=M\), then go to the next step; otherwise set \(m=m+1\) and repeat the algorithm from step (3);

  7. Set \(\sum _{m=M_{0}}^{M} \alpha _{m}/(M-M_{0}+1)\) and \(\sum _{m=M_{0}}^{M} \beta _{m}/(M-M_{0}+1)\) as the Bayesian estimators of \(\alpha \) and \(\beta \), respectively, where \(M_{0}\) is the burn-in period.
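The steps above can be sketched in Python (an illustration, not the paper's bibs R package). It assumes SciPy's `geninvgauss(p, b, scale)` parameterization of the GIG law with \(p=\lambda \), \(b=\sqrt{\chi \psi }\), \(\text {scale}=\sqrt{\chi /\psi }\), and uses `scipy.stats.invgamma` for the inverse-gamma draw in step (5). The synthetic data use the standard BS stochastic representation \(T=\beta (1+2W^2+2W\sqrt{1+W^2})\) with \(W=\alpha Z/2\), \(Z\sim N(0,1)\).

```python
import numpy as np
from scipy.stats import geninvgauss, invgamma

def gibbs_bs_reference(t, M=1500, M0=500, seed=1):
    """Gibbs sampler of Appendix C under the reference prior 1/(alpha*beta)."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=float)
    n = len(t)
    st, rst = t.sum(), (1.0 / t).sum()
    beta = np.sqrt(st / rst)                                  # step (1)
    alpha = np.sqrt(st / (n * beta) + beta * rst / n - 2)
    draws = np.empty((M + 1, 2))
    for m in range(M + 1):
        sz = rng.binomial(1, t / (beta + t)).sum()            # step (3)
        lam, chi, psi = n / 2 - sz, st / alpha**2, rst / alpha**2
        beta = geninvgauss.rvs(lam, np.sqrt(chi * psi),       # step (4)
                               scale=np.sqrt(chi / psi), random_state=rng)
        rate = st / (2 * beta) + beta * rst / 2 - n           # step (5)
        alpha = np.sqrt(invgamma.rvs(n / 2, scale=rate, random_state=rng))
        draws[m] = alpha, beta
    return draws[M0:].mean(axis=0)                            # step (7)

# usage: synthetic BS(alpha = 0.5, beta = 2) sample of size 400
z = np.random.default_rng(42).standard_normal(400)
w = (0.5 / 2) * z
t = 2.0 * (1 + 2 * w**2 + 2 * w * np.sqrt(1 + w**2))
alpha_hat, beta_hat = gibbs_bs_reference(t)
```

With a sample of this size, the posterior means typically land close to the true \((\alpha ,\beta )\).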

Appendix D. Algorithm for computing the Bayesian estimator using the conjugate prior

  1. Read \(M\), \(M_{0}\), the hyperparameters \(\lambda _0\), \(\chi _0\), \(\psi _0\), \(\gamma _0\), \(\theta _0\), and \({\varvec{t}}=\bigl (t_1,t_2,\cdots ,t_n\bigr )^{'}\). Define \(\text {st}=\sum _{i=1}^{n} t_i\) and \(\text {rst}=\sum _{i=1}^{n}1/t_i\), and set the initial values of \(\beta \) and \(\alpha \) as \(\beta _0=\sqrt{\text {st}/\text {rst}}\) and \(\alpha _0=\sqrt{\text {st}/(n\beta _0)+\beta _0\text {rst}/n-2}\), respectively;

  2. Set \(m=0\);

  3. For each \(t_i\), simulate \(z_i \sim \text {Bernoulli}(p_{i})\) where \(p_{i}=t_{i}/\bigl (\beta _{m}+t_{i}\bigr )\) (for \(i=1,2,\cdots ,n\)) and set \(\text {sz}=\sum _{i=1}^{n} z_{i}\);

  4. Update \(\beta \) as \(\beta _{m+1}\) by generating from \(\text {GIG}\bigl (\lambda = n/2 - \text {sz} + \lambda _0, \chi = \text {st}/\alpha _{m}^{2} + \chi _0, \psi = \text {rst}/\alpha _{m}^{2} + \psi _0\bigr )\);

  5. Update \(\alpha \) as \(\alpha _{m+1}\) by generating from \(\sqrt{ \text {IG}\bigl ( n/2 + \gamma _0, \text {rate}\bigr )}\), where \(\text {IG}(a,b)\) denotes the inverse-gamma distribution with shape \(a\) and rate \(b\), and \(\text {rate}=\text {st}/(2\beta _{m+1}) + \beta _{m+1}\text {rst}/2 - n + \theta _0\);

  6. If \(m=M\), then go to the next step; otherwise set \(m=m+1\) and repeat the algorithm from step (3);

  7. Set \(\sum _{m=M_{0}}^{M} \alpha _{m}/(M-M_{0}+1)\) and \(\sum _{m=M_{0}}^{M} \beta _{m}/(M-M_{0}+1)\) as the Bayesian estimators of \(\alpha \) and \(\beta \), respectively, where \(M_{0}\) is the burn-in period.
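Relative to the reference-prior sampler, only the full conditionals shift by the hyperparameters. A hedged Python sketch of one conjugate-prior Gibbs update (hyperparameter names follow the text; the GIG draw again assumes SciPy's `geninvgauss(p, b, scale)` mapping):

```python
import numpy as np
from scipy.stats import geninvgauss, invgamma

def conjugate_update(t, alpha, beta, lam0, chi0, psi0, gam0, th0, rng):
    """One Gibbs update of (alpha, beta) under the conjugate prior:
    lam0, chi0, psi0 are the GIG hyperparameters for beta; gam0, th0 the
    inverse-gamma hyperparameters for alpha^2."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    st, rst = t.sum(), (1.0 / t).sum()
    sz = rng.binomial(1, t / (beta + t)).sum()          # step (3)
    lam = n / 2 - sz + lam0                             # step (4): shifted GIG
    chi = st / alpha**2 + chi0
    psi = rst / alpha**2 + psi0
    beta = geninvgauss.rvs(lam, np.sqrt(chi * psi),
                           scale=np.sqrt(chi / psi), random_state=rng)
    rate = st / (2 * beta) + beta * rst / 2 - n + th0   # step (5)
    alpha = np.sqrt(invgamma.rvs(n / 2 + gam0, scale=rate, random_state=rng))
    return alpha, beta

# usage with arbitrary illustrative hyperparameters on positive lifetimes
rng = np.random.default_rng(3)
t = rng.gamma(4.0, 0.5, size=100)
a, b = 0.5, float(np.median(t))
for _ in range(50):
    a, b = conjugate_update(t, a, b, 1.0, 0.1, 0.1, 2.0, 1.0, rng)
```

Setting \(\lambda _0=\chi _0=\psi _0=\gamma _0=\theta _0=0\) recovers the Appendix C updates, which is the usual sanity check for a conjugate construction.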

Appendix E. Algorithm for computing the Bayesian estimator using the Jeffreys prior

  1. Read \(M\), \(M_{0}\), and \({\varvec{t}}=\bigl (t_1,t_2,\cdots ,t_n\bigr )^{'}\). Define \(\text {st}=\sum _{i=1}^{n} t_i\) and \(\text {rst}=\sum _{i=1}^{n}1/t_i\), and set the initial values of \(\beta \) and \(\alpha \) as \(\beta _0=\sqrt{\text {st}/\text {rst}}\) and \(\alpha _0=\sqrt{\text {st}/(n\beta _0)+\beta _0\text {rst}/n-2}\), respectively;

  2. Set \(m=0\);

  3. For each \(t_i\), simulate \(z_i \sim \text {Bernoulli}(p_{i})\) where \(p_{i}=t_{i}/\bigl (\beta _{m}+t_{i}\bigr )\) (for \(i=1,2,\cdots ,n\)) and set \(\text {sz}=\sum _{i=1}^{n} z_{i}\);

  4. Update \(\beta \) as \(\beta _{m+1}\) by generating from \(\text {GIG}\bigl (\lambda = n/2 - \text {sz}, \chi = \text {st}/\alpha _{m}^{2}, \psi = \text {rst}/\alpha _{m}^{2}\bigr )\);

  5. Simulate x from the pdf \(\omega f_{\text {IG}}(x|\gamma _{1}^{*},\theta ^{*}) + (1-\omega ) f_{\text {IG}}(x|\gamma _{2}^{*},\theta ^{*})\), where \(f_{\text {IG}}(\cdot |a,b)\) denotes the inverse-gamma pdf with shape \(a\) and rate \(b\), \(\omega =1/\bigl (1+2\theta ^{*}/n\bigr )\), \(\gamma _{1}^{*}=n/2\), \(\gamma _{2}^{*}=(n-1)/2\), and \(\theta ^{*}=\sum _{i=1}^{n}t_i/(2\beta _{m+1})+\beta _{m+1}\sum _{i=1}^{n}t^{-1}_{i}/2-n\);

  6. If \(u<\sqrt{1 + z^{2}/4}/(1 + z/2)\), where \(u \sim \text {Uniform}(0,1)\) and \(z=\sqrt{x}\), then set \(\alpha _{m+1}=z\) and go to the next step; otherwise repeat the algorithm from step (5);

  7. If \(m=M\), then go to the next step; otherwise set \(m=m+1\) and repeat the algorithm from step (3);

  8. Set \(\sum _{m=M_{0}}^{M} \alpha _{m}/(M-M_{0}+1)\) and \(\sum _{m=M_{0}}^{M} \beta _{m}/(M-M_{0}+1)\) as the Bayesian estimators of \(\alpha \) and \(\beta \), respectively, where \(M_{0}\) is the burn-in period.
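The distinctive part of the Jeffreys-prior sampler is the rejection step for \(\alpha \). A Python sketch of steps (5)-(6) (an illustration: it reads the acceptance probability as \(\sqrt{1+z^2/4}/(1+z/2)\le 1\) for the candidate \(z=\sqrt{x}\), and uses `scipy.stats.invgamma` with its scale argument playing the role of the rate \(\theta ^{*}\)):

```python
import numpy as np
from scipy.stats import invgamma

def draw_alpha_jeffreys(n, theta, rng):
    """Draw alpha under the Jeffreys prior by rejection sampling from a
    two-component inverse-gamma mixture envelope (Appendix E, steps 5-6)."""
    w = 1.0 / (1.0 + 2.0 * theta / n)        # mixture weight omega
    g1, g2 = n / 2, (n - 1) / 2              # envelope shapes gamma_1*, gamma_2*
    while True:
        shape = g1 if rng.random() < w else g2
        x = invgamma.rvs(shape, scale=theta, random_state=rng)  # step (5)
        z = np.sqrt(x)                       # candidate for alpha
        # step (6): acceptance ratio, always <= 1
        if rng.random() < np.sqrt(1 + z**2 / 4) / (1 + z / 2):
            return z

# usage with illustrative values of n and theta*
rng = np.random.default_rng(7)
draws = np.array([draw_alpha_jeffreys(50, 30.0, rng) for _ in range(200)])
```

Since \(\sqrt{1+z^2/4}\le 1+z/2\) for \(z\ge 0\), the acceptance probability is valid and the loop terminates with probability one.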

Appendix F. Fisher information matrix

Here, we derive the Fisher information (FI) matrix for the BS distribution. First, we give expressions for the FI matrix when Bayesian inference is based on the conjugate, reference, and Jeffreys priors. Suppose the conjugate prior for the joint distribution of \(\alpha \) and \(\beta \) factorizes as \(\pi _c(\alpha ,\beta )=\pi _c(\alpha )\pi _c(\beta )\). Based on the conjugate prior, the log of the joint posterior pdf is

$$\begin{aligned} l^{{{(c)}}}(\alpha ,\beta |{\varvec{t}})&=\sum _{i=1}^{n} \log f_{\text {BS}}(t_i|\alpha ,\beta ) + \log \pi _c(\alpha ) +\log \pi _c(\beta )\nonumber \\&\propto \sum _{i=1}^{n}\log f_{\text {BS}}(t_i|\alpha ,\beta ) +(\lambda _0-1) \log \beta -\frac{\chi _0}{2\beta }-\frac{\psi _0 \beta }{2} - 2 (\gamma _0+1) \log \alpha -\frac{\theta _0}{\alpha ^2}. \end{aligned}$$

Let \({I}^{(c)}(\alpha ,\beta )=(I^{(c)}_{ij})_{i,j=1,2}\) where \(I^{(c)}_{11}(\alpha ,\beta )=-E(\partial ^2l^{{{(c)}}}(\alpha ,\beta |{\varvec{t}})/\partial {\alpha } \partial {\alpha })\), \(I^{(c)}_{22}(\alpha ,\beta )=-E(\partial ^2l^{{{(c)}}}(\alpha ,\beta |{\varvec{t}})/\partial {\beta } \partial {\beta })\), and \(I^{(c)}_{12}(\alpha ,\beta )=I^{(c)}_{21}(\alpha ,\beta )=-E(\partial ^2l^{{{(c)}}}(\alpha ,\beta |{\varvec{t}})/\partial {\alpha } \partial {\beta })\) are elements of the FI matrix. It follows that

$$\begin{aligned}&E\Bigl (\frac{\partial ^2l^{{{(c)}}}(\alpha ,\beta |{\varvec{t}})}{\partial \alpha \partial \alpha }\Bigr )=n K_{\alpha \alpha } + \frac{2 (\gamma _0+1)}{\alpha ^2} -\frac{6 \theta _0}{\alpha ^4},\nonumber \\&E\Bigl (\frac{\partial ^2l^{{{(c)}}}(\alpha ,\beta |{\varvec{t}})}{\partial \alpha \partial \beta }\Bigr )=n K_{\alpha \beta },\nonumber \\&E\Bigl (\frac{\partial ^2l^{{{(c)}}}(\alpha ,\beta |{\varvec{t}})}{\partial \beta \partial \beta }\Bigr )=n K_{\beta \beta } - \frac{\lambda _0-1}{\beta ^2} -\frac{\chi _0}{\beta ^3}, \end{aligned}$$

where

$$\begin{aligned} K_{\alpha \alpha }=&E\Bigl (\frac{\partial ^2 \log f_{\text {BS}}(t|\alpha ,\beta )}{\partial {\alpha } \partial {\alpha }}\Bigr )=\frac{2}{\alpha ^2}, \end{aligned}$$
(F1)
$$\begin{aligned} K_{\alpha \beta }=&K_{\beta \alpha }=E\Bigl (\frac{\partial ^2 \log f_{\text {BS}}(t|\alpha ,\beta )}{\partial {\alpha } \partial {\beta }}\Bigr )=0,\end{aligned}$$
(F2)
$$\begin{aligned} K_{\beta \beta }=&E\Bigl (\frac{\partial ^2 \log f_{\text {BS}}(t|\alpha ,\beta )}{\partial {\beta } \partial {\beta }}\Bigr )=\frac{1+\alpha (2\pi )^{-1/2}h(\alpha )}{\alpha ^2\beta ^2}, \end{aligned}$$
(F3)

with

$$\begin{aligned} h(\alpha )=\alpha \sqrt{\frac{\pi }{2}}-\pi \exp \Bigl \{\frac{2}{\alpha ^2}\Bigr \}\Bigl [1-\Phi \Bigl (\frac{2}{\alpha }\Bigr )\Bigr ], \end{aligned}$$
(F4)

are given by Lemonte (2016). It turns out that

$$\begin{aligned} {I}^{(c)}(\alpha ,\beta )= - \begin{pmatrix} \frac{2n}{\alpha ^2} + \frac{2 (\gamma _0+1)}{\alpha ^2} -\frac{6 \theta _0}{\alpha ^4}&{} 0\\ 0&{} n\frac{1+\alpha (2\pi )^{-1/2}h(\alpha )}{\alpha ^2\beta ^2}- \frac{\lambda _0-1}{\beta ^2} -\frac{\chi _0}{\beta ^3}\\ \end{pmatrix}. \end{aligned}$$
(F5)

Note that for \(0<\alpha <0.3\), \(h(\alpha )\) in (F4) is replaced with

$$\begin{aligned} h_{1}(\alpha )=\frac{\alpha }{2}\sqrt{\frac{\pi }{2}}\Bigl (1+\frac{\alpha ^2}{4}-\frac{3\alpha ^4}{16}+\frac{5\alpha ^6}{64}\Bigr ), \end{aligned}$$

as suggested by Lemonte (2016). Likewise, based on the reference prior \(\pi _{r}(\alpha ,\beta )=1/(\alpha \beta )\), we can write

$$\begin{aligned} l^{{{(r)}}}(\alpha ,\beta |{\varvec{t}})&\propto \sum _{i=1}^{n} \log f_{\text {BS}}(t_i|\alpha ,\beta ) + \log \pi _{r}(\alpha ,\beta ) \nonumber \\&=\sum _{i=1}^{n} \log f_{\text {BS}}(t_i|\alpha ,\beta ) - \log \alpha - \log \beta . \end{aligned}$$

Let \({I}^{(r)}(\alpha ,\beta )=(I^{(r)}_{ij})_{i,j=1,2}\) where \(I^{(r)}_{11}(\alpha ,\beta )=-E(\partial ^2l^{{{(r)}}}(\alpha ,\beta |{\varvec{t}})/\partial {\alpha } \partial {\alpha })\), \(I^{(r)}_{22}(\alpha ,\beta )=-E(\partial ^2l^{{{(r)}}}(\alpha ,\beta |{\varvec{t}})/\partial {\beta } \partial {\beta })\), and \(I^{(r)}_{12}(\alpha ,\beta )=I^{(r)}_{21}(\alpha ,\beta )=-E(\partial ^2l^{{{(r)}}}(\alpha ,\beta |{\varvec{t}})/\partial {\alpha } \partial {\beta })\) are elements of the FI matrix based on reference prior. It follows that

$$\begin{aligned}&E\Bigl (\frac{\partial ^2l^{{{(r)}}}(\alpha ,\beta |{\varvec{t}})}{\partial \alpha \partial \alpha }\Bigr )=n K_{\alpha \alpha } + \frac{1}{\alpha ^2},\nonumber \\&E\Bigl (\frac{\partial ^2l^{{{(r)}}}(\alpha ,\beta |{\varvec{t}})}{\partial \alpha \partial \beta }\Bigr )=n K_{\alpha \beta },\nonumber \\&E\Bigl (\frac{\partial ^2l^{{{(r)}}}(\alpha ,\beta |{\varvec{t}})}{\partial \beta \partial \beta }\Bigr )=n K_{\beta \beta } + \frac{1}{\beta ^2}, \end{aligned}$$

where \(K_{\alpha \alpha }\), \(K_{\alpha \beta }\), and \(K_{\beta \beta }\) are given in (F1), (F2), and (F3), respectively. Then more algebra shows

$$\begin{aligned} {I}^{(r)}(\alpha ,\beta )= - \begin{pmatrix} \frac{2n+1}{\alpha ^2}&{} 0\\ 0&{} n\frac{1+\alpha (2\pi )^{-1/2}h(\alpha )}{\alpha ^2\beta ^2} +\frac{1}{\beta ^2}\\ \end{pmatrix}. \end{aligned}$$
(F6)

Finally, for the case of the Jeffreys prior given in (5), we have

$$\begin{aligned} l^{{{(J)}}}(\alpha ,\beta |{\varvec{t}})&\propto \sum _{i=1}^{n} \log f_{\text {BS}}(t_i|\alpha ,\beta ) +\log \pi _{j}(\alpha ,\beta ) \nonumber \\&\propto \sum _{i=1}^{n} \log f_{\text {BS}}(t_i|\alpha ,\beta ) - \log \alpha - \log \beta +\frac{1}{2}\log \Bigl (\frac{1}{\alpha ^2}+\frac{1}{4}\Bigr ). \end{aligned}$$

Let \({I}^{(J)}(\alpha ,\beta )=(I^{(J)}_{ij})_{i,j=1,2}\) where \(I^{(J)}_{11}(\alpha ,\beta )=-E(\partial ^2l^{{{(J)}}}(\alpha ,\beta |{\varvec{t}})/\partial {\alpha } \partial {\alpha })\), \(I^{(J)}_{22}(\alpha ,\beta )=-E(\partial ^2l^{{{(J)}}}(\alpha ,\beta |{\varvec{t}})/\partial {\beta } \partial {\beta })\), and \(I^{(J)}_{12}(\alpha ,\beta )=I^{(J)}_{21}(\alpha ,\beta )=-E(\partial ^2l^{{{(J)}}}(\alpha ,\beta |{\varvec{t}})/\partial {\alpha } \partial {\beta })\) are elements of the FI matrix based on Jeffreys prior. It follows that

$$\begin{aligned}&E\Bigl (\frac{\partial ^2l^{{{(J)}}}(\alpha ,\beta |{\varvec{t}})}{\partial \alpha \partial \alpha }\Bigr )=n K_{\alpha \alpha } + \frac{32+20\alpha ^2+\alpha ^4}{\alpha ^2\bigl (4+\alpha ^2\bigr )^2},\nonumber \\&E\Bigl (\frac{\partial ^2l^{{{(J)}}}(\alpha ,\beta |{\varvec{t}})}{\partial \alpha \partial \beta }\Bigr )=n K_{\alpha \beta },\nonumber \\&E\Bigl (\frac{\partial ^2l^{{{(J)}}}(\alpha ,\beta |{\varvec{t}})}{\partial \beta \partial \beta }\Bigr )=n K_{\beta \beta } + \frac{1}{\beta ^2}. \end{aligned}$$

Then more algebra shows

$$\begin{aligned} {I}^{(J)}(\alpha ,\beta )= - \begin{pmatrix} \frac{2n}{\alpha ^2}+\frac{32+20\alpha ^2+\alpha ^4}{\alpha ^2(4+\alpha ^2)^2}&{} 0\\ 0&{} n\frac{1+\alpha (2\pi )^{-1/2}h(\alpha )}{\alpha ^2\beta ^2} +\frac{1}{\beta ^2}\\ \end{pmatrix}. \end{aligned}$$
(F7)
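The function \(h(\alpha )\) in (F4) can be evaluated stably for small \(\alpha \) by noting that \(\pi \exp \{2/\alpha ^2\}[1-\Phi (2/\alpha )]=(\pi /2)\,\text {erfcx}(\sqrt{2}/\alpha )\), where erfcx is the scaled complementary error function. The Python sketch below (assuming SciPy) implements \(h\), the small-\(\alpha \) series \(h_1\) quoted above, and the likelihood block of the FI matrix built from (F1)-(F3):

```python
import numpy as np
from scipy.special import erfcx

def h(alpha):
    # h(alpha) of (F4); the term pi*exp(2/a^2)*[1 - Phi(2/a)] is rewritten
    # as (pi/2)*erfcx(sqrt(2)/a), which avoids overflow for small alpha
    return alpha * np.sqrt(np.pi / 2) - (np.pi / 2) * erfcx(np.sqrt(2) / alpha)

def h1(alpha):
    # small-alpha expansion (0 < alpha < 0.3), as quoted from Lemonte (2016)
    a2 = alpha**2
    return (alpha / 2) * np.sqrt(np.pi / 2) * (
        1 + a2 / 4 - 3 * a2**2 / 16 + 5 * a2**3 / 64)

def fi_bs(alpha, beta, n):
    # likelihood block of the FI matrix: n * diag(2/alpha^2, K_bb magnitude),
    # with K_bb taken from (F3)
    k_bb = (1 + alpha * h(alpha) / np.sqrt(2 * np.pi)) / (alpha**2 * beta**2)
    return n * np.array([[2 / alpha**2, 0.0], [0.0, k_bb]])

FI = fi_bs(0.5, 2.0, 100)   # example evaluation at alpha = 0.5, beta = 2, n = 100
```

As a consistency check, \(h\) and \(h_1\) agree to a few digits near the switching threshold \(\alpha =0.3\), which is the regime where the series replacement is recommended.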
Table 7 Empirical bias and square root of mean squared error (RMSE) for the parameter \(\alpha \) obtained by five estimators: GRU, CP, JP, RP, and ML. Each simulation run is based on 1000 replicates of data sets of sizes 5, 10, 15, and 20. Parameters: \(\alpha =0.5, 1, 1.5, 2.5\), and \(\beta =0.5\)
Table 8 Empirical bias and square root of mean squared error (RMSE) for the parameter \(\alpha \) obtained by five estimators: GRU, CP, JP, RP, and ML. Each simulation run is based on 1000 replicates of data sets of sizes 5, 10, 15, and 20. Parameters: \(\alpha =0.5, 1, 1.5, 2.5\), and \(\beta =1\)
Table 9 Empirical bias and square root of mean squared error (RMSE) for the parameter \(\beta \) obtained by five estimators: GRU, CP, JP, RP, and ML. Each simulation run is based on 1000 replicates of data sets of sizes 5, 10, 15, and 20. Parameters: \(\alpha =0.5, 1, 1.5, 2.5\), and \(\beta =0.5\)
Table 10 Empirical bias and square root of mean squared error (RMSE) for the parameter \(\beta \) obtained by five estimators: GRU, CP, JP, RP, and ML. Each simulation run is based on 1000 replicates of data sets of sizes 5, 10, 15, and 20. Parameters: \(\alpha =0.5, 1, 1.5, 2.5\), and \(\beta =1\)


Cite this article

Teimouri, M. Fast Bayesian Inference for Birnbaum-Saunders Distribution. Comput Stat 38, 569–601 (2023). https://doi.org/10.1007/s00180-022-01234-3
