
Nonparametric confidence intervals for ranked set samples

  • Original Paper
  • Published in: Computational Statistics

Abstract

In this work, we propose several confidence interval methods based on ranked-set samples. First, we develop a bootstrap bias-corrected and accelerated (BCa) method for constructing confidence intervals from ranked-set samples. Usually, the acceleration constant of this method is computed by employing the jackknife. Here, we derive an analytical expression for the acceleration constant, which reduces the computational burden of the BCa bootstrap method. The other proposed confidence interval approaches are based on a monotone transformation along with a normal approximation. We also study the asymptotic properties of the proposed methods. Their performance is then compared with that of conventional methods. Through this empirical study, it is shown that the proposed confidence intervals can be successfully applied in practice. The usefulness of the proposed methods is further illustrated by analyzing real-life data on shrubs.
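As a rough illustration of the sampling scheme underlying these methods, the sketch below draws a balanced ranked-set sample (assuming perfect ranking) and computes a conventional percentile bootstrap interval for the mean, resampling within each rank stratum in the spirit of the RSS bootstrap literature. All function names and parameter choices here are our own illustrative assumptions; this is a baseline method, not the bias-corrected and accelerated procedure developed in the paper.

```python
import numpy as np

def ranked_set_sample(population, k, m, rng):
    """Balanced ranked-set sample: for each rank r = 1..k, repeat m times:
    draw k units at random and keep the r-th smallest (perfect ranking)."""
    rss = np.empty((k, m))
    for r in range(k):
        for j in range(m):
            units = rng.choice(population, size=k, replace=False)
            rss[r, j] = np.sort(units)[r]
    return rss

def percentile_ci(rss, alpha=0.05, B=2000, rng=None):
    """Percentile bootstrap CI for the mean, resampling within each rank stratum."""
    if rng is None:
        rng = np.random.default_rng(0)
    k, m = rss.shape
    means = np.empty(B)
    for b in range(B):
        resampled = np.stack([rng.choice(rss[r], size=m, replace=True)
                              for r in range(k)])
        means[b] = resampled.mean()
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(42)
population = rng.lognormal(mean=0.0, sigma=0.75, size=10_000)  # skewed population
rss = ranked_set_sample(population, k=3, m=10, rng=rng)        # n = k*m = 30
lo, hi = percentile_ci(rss, rng=rng)
```

In the balanced setting the RSS point estimate is simply the average of all \(km\) measurements; the proposed BCa and transformation-based intervals correct the skewness-induced coverage error that this naive percentile interval suffers from.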


Fig. 1


References

  • Ahn S, Lim J, Wang X (2014) The Student's \(t\) approximation to distributions of pivotal statistics from ranked set samples. J Korean Stat Soc 43:643–652

  • Al-Omari AI, Bouza CN (2014) Review of ranked set sampling: modifications and applications. Rev Investig Oper 35:215–240

  • Bohn LL, Wolfe DA (1992) Nonparametric two-sample procedures for ranked-set samples data. J Am Stat Assoc 87:552–561

  • Chen Z (2007) Ranked set sampling: its essence and some new applications. Environ Ecol Stat 14:355–363

  • Chen Z, Bai Z, Sinha B (2004) Ranked set sampling: theory and applications. Springer, New York

  • Chen H, Stasny EA, Wolfe DA (2006) Unbalanced ranked set sampling for estimating a population proportion. Biometrics 62:150–158

  • Cojbasic V, Loncar D (2011) One-sided confidence intervals for population variances of skewed distributions. J Stat Plan Inference 141:1667–1672

  • Dell TR, Clutter JL (1972) Ranked set sampling theory with order statistics background. Biometrics 28:545–555

  • Drikvandi R, Modarres R, Hui TP (2006) A bootstrap test for symmetry based on ranked set samples. Comput Stat Data Anal 55:1807–1814

  • Efron B (1979) Bootstrap methods: another look at the jackknife. Ann Stat 7:1–26

  • Efron B (1987) Better bootstrap confidence intervals. J Am Stat Assoc 82:171–185

  • Efron B, Tibshirani RJ (1993) An introduction to the bootstrap. Chapman & Hall, New York

  • Fligner MA, MacEachern SN (2006) Nonparametric two-sample methods for ranked-set sample data. J Am Stat Assoc 101:1107–1118

  • Frey J (2007) Distribution-free statistical intervals via ranked-set sampling. Can J Stat 35:585–596

  • Frey J (2014) Bootstrap confidence bands for the CDF using ranked-set sampling. J Korean Stat Soc 43:453–461

  • Ghosh K, Tiwari R (2004) Bayesian density estimation using ranked set samples. Environmetrics 15:711–728

  • Hall P (1988) Theoretical comparison of bootstrap confidence intervals. Ann Stat 16:927–953

  • Hall P (1992a) On the removal of skewness by transformation. J R Stat Soc B 54:221–228

  • Hall P (1992b) The bootstrap and Edgeworth expansion. Springer, New York

  • Hui TP, Modarres R, Zheng G (2004) Bootstrap confidence interval estimation of mean via ranked set sampling linear regression. J Stat Comput Simul 75:543–553

  • Johnson N (1978) Modified t-tests and confidence intervals for asymmetrical populations. J Am Stat Assoc 73:536–554

  • Li T, Balakrishnan N (2008) Some simple nonparametric methods to test for perfect ranking in ranked set sampling. J Stat Plan Inference 138:1325–1338

  • Linder D, Samawi H, Yu L, Chatterjee A, Huang Y, Vogel R (2015) On stratified bivariate ranked set sampling for regression estimators. J Appl Stat 42:2571–2583

  • McIntyre GA (1952) A method for unbiased selective sampling, using ranked sets. Aust J Agric Res 2:385–390

  • Modarres R, Hui TP, Zheng G (2006) Resampling methods for ranked set samples. Comput Stat Data Anal 51:1039–1050

  • Muttlak HA, McDonald LL (1990) Ranked set sampling with size-biased probability of selection. Biometrics 46:435–445

  • Ozturk O, Balakrishnan N (2009) An exact control-versus-treatment comparison test based on ranked set samples. Biometrics 65:1213–1222

  • Patil GP, Sinha AK, Taillie C (1999) Ranked set sampling: a bibliography. Environ Ecol Stat 6:91–98

  • Samawi H, Rochani H, Linder D, Chatterjee A (2017) More efficient logistic analysis using moving extreme ranked set sampling. J Appl Stat 44:753–766

  • Takahasi K, Wakimoto K (1968) On unbiased estimates of the population mean based on the sample stratified by means of ordering. Ann Inst Stat Math 20:1–31

  • Zhou XH, Gao S (2000) One-sided confidence intervals for means of positively skewed distributions. Am Stat 54:100–104


Acknowledgements

We express our sincere thanks to the Associate Editor and the anonymous reviewers for their useful comments and suggestions on an earlier version of this manuscript, which led to this improved one.

Author information

Corresponding author

Correspondence to Arpita Chatterjee.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 27 KB)

Supplementary material 2 (txt 0 KB)

Appendix A: Proofs

Proof of Theorem 2.1:

Let us define \(Y_{(r),i}=\frac{X_{(r),i}-\mu _{r}}{\sigma _r}\), for \(r=1,2,\ldots ,k\). Then, \(T_{RSS}\) can be expressed as

$$\begin{aligned} T_{RSS}=\frac{n\sum _{r=1}^{k} \sigma _{r}\bar{Y}_{r}}{\sqrt{\sum _{r=1}^{k}\sigma ^{2}_{r}\frac{s^{2}_{r,Y}}{\lambda _{r,n}}}}, \end{aligned}$$

where \(s^{2}_{r,Y}=m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y_{(r),i}-\bar{Y}_{r})^2.\) We can express

$$\begin{aligned} \sum _{r=1}^{k}\sigma ^{2}_{r}\frac{s^{2}_{r,Y}}{\lambda _{r,n}}= a_{n}\Biggl [1+\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)-\sum _{r=1}^{k}a_{r,n}\bar{Y}^{2}_{r}\Biggr ], \end{aligned}$$
(7.1)

where \(a_{r,n}=\frac{\lambda ^{-1}_{r,n}\sigma ^{2}_{r}}{a_{n}}\) and \(a_{n}=\sum _{r=1}^{k}\lambda ^{-1}_{r,n}\sigma ^{2}_{r}\). Using (7.1), \(T_{RSS}\) can be expressed as

$$\begin{aligned} T_{RSS}= & {} a^{-1/2}_{n} T_{1}, \end{aligned}$$
(7.2)

where

$$\begin{aligned} T_1= & {} \sqrt{n}\sum _{r=1}^{k} \sigma _{r}\bar{Y}_{r}\Biggl [1-\frac{1}{2}\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)+\frac{1}{2}\sum _{r=1}^{k}a_{r,n}\bar{Y}^{2}_{r}\nonumber \\&+ \frac{3}{8}\left\{ \sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\right\} ^{2}\nonumber \\&+\frac{3}{4}\Biggl \{\sum _{r=1}^{k} a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\Biggr \} \sum _{r=1}^{k}a_{r,n}\bar{Y}^{2}_{r}\Biggr ]+O_{p}(n^{-2}). \end{aligned}$$
(7.3)

In order to obtain the Edgeworth expansion of \(T_{RSS}\) given in Theorem 2.1, we first need to derive asymptotic expansions for the first three cumulants of \(T_{RSS}\), which are given in the following lemma.

Lemma 7.1

Under the assumptions of Theorem 2.1, we have:

  1. (1)

    \(E(T_{RSS})=-\frac{1}{2}n^{-1/2}\biggl (\sum _{r=1}^{k} \frac{\sigma ^{2}_{r}}{\lambda _{r}}\biggr )^{-3/2}\sum _{r=1}^{k}\frac{\gamma _{r}}{\lambda ^{2}_{r,n}}+O(n^{-3/2}),\)

  2. (2)

    \(E(T^{2}_{RSS})=1+2\biggl (\sum _{r=1}^{k} \frac{\sigma ^{2}_{r}}{\lambda _{r}}\biggr )^{-3}\biggl [\sum _{r=1}^{k}\frac{\gamma _{r}}{\lambda ^{2}_{r,n}}\biggr ]^{2}+ \biggl (\sum _{r=1}^{k} \frac{\sigma ^{2}_{r}}{\lambda _{r}}\biggr )^{-2}\biggl [\sum _{r=1}^{k}\frac{3\sigma ^{4}_{r}}{\lambda ^{3}_{r,n}}+\sum _{r=1}^{k}\sum _{r^{'}>r} \frac{\sigma ^{2}_{r} \sigma ^{2}_{r^{'}}}{\lambda ^{2}_{r}\lambda ^{2}_{r^{'}}}(\lambda _{r}+\lambda _{r^{'}})\biggr ] +O(n^{-2}),\)

  3. (3)

    \(E(T^{3}_{RSS})=-\frac{7}{2}n^{-1/2}\biggl (\sum _{r=1}^{k} \frac{\sigma ^{2}_{r}}{\lambda _{r}}\biggr )^{-3/2}\sum _{r=1}^{k}\frac{\gamma _{r}}{\lambda ^{2}_{r,n}}+O(n^{-3/2})\).

Proof of (1)

Note that \(\lambda _{r,n}\rightarrow {\lambda _{r}}\), and so \(\lambda ^{-1}_{r,n}=O(1)\) and \(a_{r,n}=O(1)\). Now, \(a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)=a_{r,n}m^{-1/2}_{r}O_{p}(1)=a_{r,n}(\lambda ^{-1}_{r,n})^{1/2} O_{p}(n^{-1/2})=O_{p}(n^{-1/2})\), and similarly we have \(\bar{Y}^{2}_{r}=O_{p}(n^{-1})\). These facts imply that

$$\begin{aligned} \Biggl \{\sum _{r=1}^{k} a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\Biggr \} \sum _{r=1}^{k}a_{r,n}\bar{Y}^{2}_{r}=O_{p}(n^{-3/2}). \end{aligned}$$

From (7.3), we have

$$\begin{aligned} E(T_1)= & {} -\frac{1}{2} E\Biggl [\sqrt{n}\sum _{r=1}^{k}\sigma _{r}\bar{Y}_{r}\Biggl \{\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\Biggr \}\Biggr ]\nonumber \\&+\frac{1}{2}E\Biggl [\sqrt{n}\sum _{r=1}^{k}\sigma _{r}\bar{Y}_{r}\sum _{r=1}^{k} a_{r,n}\bar{Y}^{2}_{r}\Biggr ]\nonumber \\&+\frac{3}{8}E\Biggl [\sqrt{n}\sum _{r=1}^{k}\sigma _{r}\bar{Y}_{r}\Biggl \{\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\Biggr \}^{2}\Biggr ]+O(n^{-3/2}).\qquad \quad \end{aligned}$$
(7.4)

Since \(E(Y_{(r^{'}),i} (Y^{2}_{(r),i}-1))=0\) for \(r\ne r^{'}\), we have

$$\begin{aligned} E\Biggl [\sqrt{n}\sum _{r=1}^{k}\sigma _{r}\bar{Y}_{r}\Biggl \{\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\Biggr \}\Biggr ]= & {} \sqrt{n}\sum _{r=1}^{k} \frac{\sigma _{r}a_{r,n}}{m^{2}_{r}}\sum _{i=1}^{m_{r}}E(Y^{3}_{(r),i})\nonumber \\= & {} n^{-1/2}a^{-1}_{n}\sum _{r=1}^{k}\frac{\gamma _{r}}{\lambda ^{2}_{r,n}}, \end{aligned}$$
(7.5)

where the last part follows from the fact that \(a^{-1}_{n}=(\frac{\sigma ^{2}_{r}}{\lambda _{r,n}})^{-1} a_{r,n}\) and \(E\{Y^{3}_{(r),i}\}=\sigma ^{-3}_{r}\gamma _{r}\). Using arguments similar to those above, we have

$$\begin{aligned} E\Biggl [\sqrt{n}\sum _{r=1}^{k}\sigma _{r}\bar{Y}_{r}\sum _{r=1}^{k} a_{r,n}\bar{Y}^{2}_{r}\Biggr ]=\sqrt{n}\sum _{r=1}^{k} a_{r,n}\sigma _{r} E(\bar{Y}^{3}_{r})= & {} n^{-3/2}\sum _{r=1}^{k}\frac{a_{r,n}\sigma _{r}}{\lambda ^{2}_{r,n}} E(Y^{3}_{(r)})\nonumber \\= & {} O(n^{-3/2}), \end{aligned}$$
(7.6)

since \(\lambda ^{-1}_{r,n}=O(1)\) and \(a_{r,n}=O(1)\). For notational simplification, set \(U_{(r),i}=Y^{2}_{(r),i}-1\). Then,

$$\begin{aligned} E\Biggl [\sqrt{n}\sum _{r=1}^{k}\sigma _{r}\bar{Y}_{r}\Biggl \{\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\Biggr \}^{2}\Biggr ]= & {} \sqrt{n}E\Biggl [\sum _{i,j,l=1}^{k}\sigma _{i}a_{j,n} a_{l,n}\bar{Y}_{i}\bar{U}_{j}\bar{U}_{l}\Biggr ]\nonumber \\= & {} \sqrt{n}E\Biggl [\sum _{r=1}^{k}\sigma _{r}a^{2}_{r,n} \bar{Y}_{r}\bar{U}^{2}_{r}\Biggr ], \end{aligned}$$
(7.7)

where the last part follows from the fact that for other choices of (ijl), \(E(\bar{Y}_{i}\bar{U}_{j}\bar{U}_{l})=0\). Further, we have

$$\begin{aligned} E\Biggl [\sum _{r=1}^{k}\sigma _{r}a^{2}_{r,n} \bar{Y}_{r}\bar{U}^{2}_{r}\Biggr ]=\sum _{r=1}^{k}\sigma _{r}a^{2}_{r,n} \frac{1}{m^{2}_{r}} E\biggl (Y_{(r)}(Y^{2}_{(r)}-1)^{2}\biggr )=O(n^{-2}). \end{aligned}$$

Now, from (7.7), we have

$$\begin{aligned} E\Biggl [\sqrt{n}\sum _{r=1}^{k}\sigma _{r}\bar{Y}_{r}\Biggl \{\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\Biggr \}^{2}\Biggr ]=O(n^{-3/2}). \end{aligned}$$
(7.8)

Hence, Eqs. (7.4)–(7.6) and (7.8) imply that

$$\begin{aligned} E(T_{RSS})=a^{-1/2}_{n}E(T_{1})=-\frac{n^{-1/2}}{2}\biggl (\sum _{r=1}^{k} \frac{\sigma ^{2}_{r}}{\lambda _{r,n}}\biggr )^{-3/2} \sum _{r=1}^{k}\frac{\gamma _{r}}{\lambda ^{2}_{r,n}}+O(n^{-3/2}). \end{aligned}$$

\(\square \)

Proof of (3):

From (7.3), we have

$$\begin{aligned} \begin{aligned} E(T^{3}_1)&=n^{3/2}E\Biggl [\biggl (\sum _{r=1}^{k} \sigma _{r}\bar{Y}_{r}\biggr )^{3}\Biggl \{1-\frac{3}{2}\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)-\frac{3}{2}\sum _{r=1}^{k}a_{r,n}\bar{Y}^{2}_{r}\\&\quad -\frac{9}{8}\biggl \{\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\biggr \}^{2}\Biggr \}\Biggr ]+O(n^{-3/2})\\&=n^{3/2}\biggl [A-\frac{3}{2}B-\frac{3}{2}C-\frac{9}{8}D\biggr ]+O(n^{-3/2}), \end{aligned} \end{aligned}$$
(7.9)

where

$$\begin{aligned} A= & {} E\biggl (\sum _{r=1}^{k} \sigma _{r}\bar{Y}_{r}\biggr )^{3},\quad B=E\biggl \{\biggl (\sum _{r=1}^{k} \sigma _{r}\bar{Y}_{r}\biggr )^{3}\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\biggr \},\\ C= & {} E\biggl \{\biggl (\sum _{r=1}^{k} \sigma _{r}\bar{Y}_{r}\biggr )^{3}\sum _{r=1}^{k}a_{r,n}\bar{Y}^{2}_{r}\biggr \} \end{aligned}$$

and

$$\begin{aligned} D=E\Biggl [\biggl (\sum _{r=1}^{k} \sigma _{r}\bar{Y}_{r}\biggr )^{3}\biggl \{\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\biggr \}^{2}\Biggr ]. \end{aligned}$$

Now,

$$\begin{aligned} A= & {} E\biggl (\sum _{r=1}^{k}\sigma ^{3}_{r}\bar{Y}^{3}_{r}+3\sum _{r=1}^{k}\sigma _{r}\bar{Y}_{r}\sum _{r^{'}\ne r=1}^{k} \sigma ^{2}_{r^{'}}\bar{Y}^{2}_{r^{'}}+\sum _{r\ne r^{'}\ne r^{''}=1}^{k}\sigma _{r}\sigma _{r^{'}}\sigma _{r^{''}}\bar{Y}_{r^{}}\bar{Y}_{r^{'}}\bar{Y}_{r^{''}}\biggr )\nonumber \\= & {} \sum _{r=1}^{k}\sigma ^{3}_{r}m^{-2}_{r}E(Y^{3}_{(r)})= n^{-2}\sum _{r=1}^{k}\frac{\gamma _{r}}{\lambda ^{2}_{r,n}}. \end{aligned}$$
(7.10)

Let us recall \(U_{(r),i}=Y^{2}_{(r),i}-1\), and so \(\bar{U}_{r}=m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\). Now,

$$\begin{aligned} B= & {} E\biggl \{\biggl (\sum _{r=1}^{k} \sigma _{r}\bar{Y}_{r}\biggr )^{3}\sum _{r=1}^{k}a_{r,n}\bar{U}_{r}\biggr \}\nonumber \\= & {} E\Biggl (\sum _{r=1}^{k} \sigma ^{3}_{r}a_{r,n}\bar{Y}^{3}_{r}\bar{U}_{r}\Biggl )+ 3\sum _{r=1}^{k}\sigma _{r} a_{r,n}E\biggl (\bar{Y}_{r} \bar{U}_{r}\biggr )\sum _{r^{'}\ne r=1}^{k} \frac{\sigma ^{2}_{r^{'}}}{m_{r^{'}}}. \end{aligned}$$
(7.11)

Again, after some algebraic manipulations, we obtain

$$\begin{aligned} E\biggl (\bar{Y}^{3}_{r}\bar{U}_{r}\biggr )= & {} m^{-4}_{r}\sum _{i,j,l,q=1}^{m_{r}}E\biggl \{Y_{(r),i}Y_{(r),j}Y_{(r),l}(Y^{2}_{(r),q}-1)\biggr \}\nonumber \\= & {} m^{-3}_{r}E(Y^{5}_{(r)})+3\frac{(m_{r}-1)}{m^{3}_{r}}E(Y^{3}_{(r)})\nonumber \\= & {} 3n^{-2}\frac{\sigma ^{-3}_{r}\gamma _{r}}{\lambda ^{2}_{r,n}}+O(n^{-3}) \end{aligned}$$
(7.12)

and

$$\begin{aligned} \sum _{r=1}^{k}\sigma _{r} a_{r,n}E\biggl (\bar{Y}_{r} \bar{U}_{r}\biggr )\sum _{r^{'}\ne r=1}^{k} \frac{\sigma ^{2}_{r^{'}}}{m_{r^{'}}}= & {} \sum _{r=1}^{k}\sigma _{r} a_{r,n}\frac{\sigma ^{-3}_{r}\gamma _{r}}{m_{r}}\sum _{r^{'}\ne r=1}^{k} \frac{\sigma ^{2}_{r^{'}}}{m_{r^{'}}}\nonumber \\= & {} n^{-2}\sum _{r=1}^{k} a_{r,n}\frac{\sigma ^{-2}_{r}\gamma _{r}}{\lambda _{r,n}}\sum _{r^{'}\ne r=1}^{k} \frac{\sigma ^{2}_{r^{'}}}{\lambda _{r^{'},n}}. \end{aligned}$$
(7.13)

From Eqs. (7.11)–(7.13), we obtain

$$\begin{aligned} B= & {} 3n^{-2}\biggl (\sum _{r=1}^{k}a_{r,n}\frac{\gamma _{r}}{\lambda ^{2}_{r,n}}+\sum _{r=1}^{k} a_{r,n}\frac{\sigma ^{-2}_{r}\gamma _{r}}{\lambda ^{2}_{r,n}}\sum _{r^{'}\ne r=1}^{k} \frac{\sigma ^{2}_{r^{'}}}{\lambda _{r^{'}}}\biggr )+O(n^{-3})=3n^{-2}\sum _{r=1}^{k}\frac{\gamma _{r}}{\lambda ^{2}_{r,n}}\nonumber \\&+O(n^{-3}), \end{aligned}$$
(7.14)

where the last part follows from the fact that \(\sum _{r=1}^{k}a_{r,n}=1\). Similarly, it can be shown that \(C=O(n^{-3})\) and \(D=O(n^{-3})\). Hence, Eqs. (7.9), (7.10) and (7.14) imply

$$\begin{aligned} E(T^{3}_{RSS})=a^{-3/2}_{n} E(T^{3}_{1}) =-n^{-1/2}\frac{7}{2}\biggl (\sum _{r=1}^{k} \frac{\sigma ^{2}_{r}}{\lambda _{r,n}}\biggr )^{-3/2}\sum _{r=1}^{k}\frac{\gamma _{r}}{\lambda ^{2}_{r,n}}+O(n^{-3/2}). \end{aligned}$$

\(\square \)

Proof of (2):

Now, from (7.3), we have

$$\begin{aligned} E(T^{2}_{1})= & {} E\Biggl [n\biggl (\sum _{r=1}^{k}\sigma _{r}\bar{Y}_{r}\biggr )^{2}\Biggl \{1-\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\biggr \}\Biggr ] +O(n^{-1})\nonumber \\= & {} nE\Biggl [\Biggl \{\sum _{r=1}^{k}\sigma ^{2}_{r}\bar{Y}^{2}_{r}+2\sum _{r^{'}>r=1}^{k}\sigma _{r}\sigma _{r^{'}}\bar{Y}_{r}\bar{Y}_{r^{'}}\Biggr \}\Biggl \{1-\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\Biggr \}\Biggr ]\nonumber \\&+O(n^{-1}). \end{aligned}$$
(7.15)

It is easy to prove that

$$\begin{aligned}&E\biggl (\sum _{r^{'}>r=1}^{k}\sigma _{r}\sigma _{r^{'}}\bar{Y}_{r}\bar{Y}_{r^{'}}\biggr )=0\quad \hbox {and}\quad E\Biggl [\Biggl \{\sum _{r^{'}>r=1}^{k}\sigma _{r}\sigma _{r^{'}}\bar{Y}_{r}\bar{Y}_{r^{'}}\Biggr \}\\&\quad \sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\Biggr ]=0. \end{aligned}$$

From (7.15) and after performing some tedious algebraic manipulations, we obtain

$$\begin{aligned} E(T^{2}_{1})= & {} \Biggl [nE\biggl (\sum _{r=1}^{k}\sigma ^{2}_{r}\bar{Y}^{2}_{r}\biggr )-nE\biggl \{\sum _{r=1}^{k}\sigma ^{2}_{r}\bar{Y}^{2}_{r}\biggr \}\biggl \{\sum _{r=1}^{k}a_{r,n} m^{-1}_{r}\sum _{i=1}^{m_{r}}(Y^{2}_{(r),i}-1)\biggr \}\Biggr ]\nonumber \\&+O(n^{-1})\nonumber \\= & {} a_{n}-n^{-1}\sum _{r=1}^{k}\frac{\sigma ^{2}_{r}a_{r,n}}{\lambda ^{2}_{r}}(\kappa _{r,Y}+2)+O(n^{-1})=a_{n}+O(n^{-1}), \end{aligned}$$
(7.16)

where \(\kappa _{r,Y}=E(Y^{4}_{(r)})\). Equations (7.2) and (7.16) imply that \(E(T^{2}_{RSS})=1+O(n^{-1})\). Now, Lemma 7.1, together with some algebraic calculations, gives the following asymptotic expansions for the first three cumulants, \(\kappa _{1}(T_{RSS})\), \(\kappa _{2}(T_{RSS})\) and \(\kappa _{3}(T_{RSS})\), of \(T_{RSS}\):

$$\begin{aligned} \kappa _{1}(T_{RSS})=-\frac{1}{2}n^{-1/2}\biggl (\sum _{r=1}^{k} \frac{\sigma ^{2}_{r}}{\lambda _{r}}\biggr )^{-3/2}\sum _{r=1}^{k}\frac{\gamma _{r}}{\lambda ^{2}_{r,n}}+O(n^{-3/2}),\\ \kappa _{2}(T_{RSS})=E(T^{2}_{RSS})-\{E(T_{RSS})\}^{2}=1+O(n^{-1}) \end{aligned}$$

and

$$\begin{aligned} \kappa _{3}(T_{RSS})=-\frac{7}{2}n^{-1/2}\biggl (\sum _{r=1}^{k} \frac{\sigma ^{2}_{r}}{\lambda _{r}}\biggr )^{-3/2}\sum _{r=1}^{k}\frac{\gamma _{r}}{\lambda ^{2}_{r,n}}+O(n^{-3/2}). \end{aligned}$$

The characteristic function of \(T_{RSS}\) is

$$\begin{aligned} E(e^{itT_{RSS}})=\exp \biggl \{it\kappa _{1}(T_{RSS})+\frac{1}{2}(it)^{2}\kappa _{2}(T_{RSS})+\frac{1}{6}(it)^{3}\kappa _{3}(T_{RSS})\biggr \} \end{aligned}$$
(7.17)

and so substituting the above asymptotic formulae for the cumulants in (7.17) and expanding the right-hand side as

$$\begin{aligned} e^{-\frac{{t}^{2}}{2}}\biggl \{1+n^{-1/2}p_{1}(it)+O(n^{-1})\biggr \} \end{aligned}$$

for a polynomial \(p_{1}\) and inverting the Fourier transform (for details, see Section 2.3 of Hall 1992b), we get the Edgeworth expansion of \(P(T_{RSS}\le x)\) as given in Theorem 2.1.

The derivation of the Edgeworth expansion of \(P(S_{RSS}\le x)\) is straightforward, hence omitted. \(\square \)
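The inversion step above can be checked numerically in a simpler setting. For the standardized mean of an i.i.d. sample, the classical one-term Edgeworth expansion is \(\Phi (x)-n^{-1/2}(\gamma /6)(x^{2}-1)\phi (x)\), where \(\gamma \) is the population skewness. The sketch below is our own generic illustration (it uses this standard i.i.d. polynomial, not the paper's \(p_{1}\)) and compares the corrected approximation with the plain normal approximation for exponential data.

```python
import math
import numpy as np

def edgeworth_cdf(x, n, skew):
    """One-term Edgeworth approximation to P(sqrt(n)(Xbar - mu)/sigma <= x)
    for an i.i.d. sample with skewness `skew`."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # N(0,1) density
    Phi = 0.5 * (1 + math.erf(x / math.sqrt(2)))         # N(0,1) cdf
    return Phi - phi * skew * (x * x - 1) / (6 * math.sqrt(n))

rng = np.random.default_rng(1)
n, reps = 20, 200_000
# Exponential(1): mu = sigma = 1, skewness gamma = 2.
stats = (rng.exponential(size=(reps, n)).mean(axis=1) - 1.0) * math.sqrt(n)
emp = float((stats <= 0.0).mean())   # Monte Carlo estimate of P(S_n <= 0)
edg = edgeworth_cdf(0.0, n, 2.0)     # skewness-corrected approximation
```

At \(x=0\) the correction raises the normal value \(0.5\) by \(\gamma /(6\sqrt{2\pi n})\), and the corrected value tracks the Monte Carlo probability far more closely than \(\Phi (0)\) does; this is exactly the \(O(n^{-1/2})\) coverage-error term that the proposed intervals remove.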

Proof of Theorem 3.1:

To facilitate the proof of Theorem 3.1, let \(S^{*}_{RSS}=({\bar{X}}^{*}_{RSS}-{\bar{X}}_{RSS})/{\hat{\tau }}\) be the bootstrap version of \(S_{RSS}\). Analogous to Theorem 2.1, we have

$$\begin{aligned} P\biggl \{S^{*}_{RSS}\le x|\mathcal {X}_{RSS}\biggr \}= & {} \Phi (x)+n^{-1/2}\hat{p}_{1}(x)\phi (x)+ O_{p}(n^{-1}),\nonumber \\ P\biggl \{T^{*}_{RSS}\le x|\mathcal {X}_{RSS}\biggr \}= & {} \Phi (x)+n^{-1/2}\hat{q}_{1}(x)\phi (x)+ O_{p}(n^{-1}), \end{aligned}$$
(7.18)

respectively, where \(\hat{p}_{1}(x)\) and \(\hat{q}_{1}(x)\) are as given in Theorem 2.1 except that population moments are now replaced by sample moments. Let \(t_{\xi }\), \(s_{\xi }\), \(\hat{t}_{\xi }\) and \(\hat{s}_{\xi }\) be the \(\xi \)th quantiles of \(P(T_{RSS}\le x)\), \(P(S_{RSS}\le x)\), \(P(T^{*}_{RSS}\le x|\mathcal {X}_{RSS})\) and \(P(S^{*}_{RSS}\le x|\mathcal {X}_{RSS})\), respectively. Then, the Cornish–Fisher expansions of these quantiles are given by

$$\begin{aligned} t_{\xi }= & {} z_{\xi }+n^{-1/2}q_{11}(z_{\xi })+ O(n^{-1}),\nonumber \\ s_{\xi }= & {} z_{\xi }+n^{-1/2}p_{11}(z_{\xi })+ O(n^{-1}),\nonumber \\ \hat{t}_{\xi }= & {} z_{\xi }+n^{-1/2}\hat{q}_{11}(z_{\xi })+ O_{p}(n^{-1}),\nonumber \\ \hat{s}_{\xi }= & {} z_{\xi }+n^{-1/2}\hat{p}_{11}(z_{\xi })+ O_{p}(n^{-1}); \end{aligned}$$
(7.19)

see the review of Cornish–Fisher expansion in Hall (1992b). The quantities \(\hat{q}_{11}(x)\) and \(\hat{p}_{11}(x)\) are the sample versions of \(q_{11}(x)=-q_{1}(x)\) and \(p_{11}(x)=-p_{1}(x)\), respectively. We now provide the proof for \(P(\mu \in I_{1,{BC_{a}}})\). Equation (7.18) implies that

$$\begin{aligned} \hat{G}_{RSS}({\bar{X}}_{RSS})= & {} P\biggl \{S^{*}_{RSS}\le 0|\mathcal {X}_{RSS} \biggr \}\nonumber \\= & {} \Phi (0)+n^{-1/2}\hat{p}_{1}(0)\phi (0)+O_{p}(n^{-1}). \end{aligned}$$
(7.20)

Therefore, from Eqs. (3.1) and (7.20), we have

$$\begin{aligned} \hat{d}= & {} \Phi ^{-1}\biggl \{\Phi (0)+n^{-1/2}\hat{p}_{1}(0)\phi (0)+O_{p}(n^{-1})\biggr \}\nonumber \\= & {} n^{-1/2}\hat{p}_{1}(0)+O_{p}(n^{-1}), \end{aligned}$$
(7.21)

with the last line following from a Taylor series expansion. Equation (7.19) implies that

$$\begin{aligned} \hat{u}_{l_{\hat{a}}}(\alpha )= & {} {\bar{X}}_{RSS}+ \hat{s}_{l_{\hat{a}}(\alpha )}{\hat{\tau }}={\bar{X}}_{RSS}+{\hat{\tau }}\biggl \{z_{l_{\hat{a}}(\alpha )}+n^{-1/2}\hat{p}_{11}(z_{l_{\hat{a}}(\alpha )})+ n^{-1}\hat{p}_{21}(z_{l_{\hat{a}}(\alpha )})\biggr \}\nonumber \\&+O_{p}(n^{-3/2}). \end{aligned}$$
(7.22)

Equations (3.2) and (7.20) imply that

$$\begin{aligned} z_{l_{\hat{a}}(\alpha )}=2\hat{d}+z_{\alpha }+\hat{a}(\hat{d}^{2}+2\hat{d}z_{\alpha }+z^{2}_{\alpha })+O_{p}(n^{-1})= & {} z_{\alpha }+n^{-1/2}(2\hat{p}_{1}(0)+\hat{c}z^{2}_{\alpha })\nonumber \\&+O_{p}(n^{-1}), \end{aligned}$$
(7.23)

where \(\hat{a}=n^{-1/2}\hat{c}\) and \(\hat{c}=\hat{\eta }^{-3/2}_{1}\hat{\eta }_2\). From (7.21)–(7.23), upon using a Taylor series expansion, we get

$$\begin{aligned} \hat{u}_{l_{\hat{a}}}(\alpha )= & {} {\bar{X}}_{RSS}+{\hat{\tau }}\biggl [z_{\alpha }+n^{-1/2}\{2\hat{p}_{1}(0)+\hat{c}z^{2}_{\alpha }+\hat{p}_{11}(z_{\alpha })\}\biggr ]+O_{p}(n^{-1})\nonumber \\= & {} {\bar{X}}_{RSS}+{\hat{\tau }}[z_{\alpha }+n^{-1/2}\hat{q}_{1}(z_{\alpha })]+O_{p}(n^{-1}), \end{aligned}$$
(7.24)

since \(2\hat{p}_{1}(0)+\hat{c}z^{2}_{\alpha }+\hat{p}_{11}(z_{\alpha })=\hat{q}_{1}(z_{\alpha })\). So, from (7.24), we have

$$\begin{aligned} P(\mu \in I^{*}_{1,{BC_{a}}})= & {} P\{ {\hat{\tau }}^{-1} ({\bar{X}}_{RSS}-\mu )+O_{p}(n^{-1})\ge -z_{\alpha }-n^{-1/2}q_{1}(z_{\alpha })\}\\ {}= & {} P\{ {\hat{\tau }}^{-1} ({\bar{X}}_{RSS}-\mu )\ge z_{1-\alpha }-n^{-1/2}q_{1}(z_{1-\alpha })\}+O(n^{-1}), \end{aligned}$$

where the last line is obtained by the delta method (see Section 2.7, Hall 1992b) and by using the fact that \(-z_{\alpha }=z_{1-\alpha }\). From Theorem 2.1, we then get

$$\begin{aligned} P(\mu \in I^{*}_{1,{BC_{a}}})= & {} 1- \biggl [ P\biggl \{ {\hat{\tau }}^{-1} ({\bar{X}}_{RSS}-\mu )\le z_{1-\alpha }-n^{-1/2}q_{1}(z_{1-\alpha })\biggr \}+O(n^{-1})\biggr ]\\= & {} 1- 1+\alpha +O(n^{-1})\\= & {} \alpha +O(n^{-1}). \end{aligned}$$

In a similar manner, we can show that \(P(\mu \in I^{*}_{0,{BC_{a}}})=P(\mu \in I^{*}_{2,{BC_{a}}})=\alpha +O(n^{-1})\). \(\square \)
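For concreteness, the level adjustment that the expansion in (7.23) linearizes is the standard z-scale BCa map of Efron (1987). The sketch below takes the bias-correction \(\hat{d}\) and acceleration \(\hat{a}\) as given inputs (in the paper, \(\hat{a}=n^{-1/2}\hat{c}\) is computed analytically rather than by the jackknife) and numerically confirms the first-order expansion used above; variable names are ours.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal

def bca_level(alpha, d_hat, a_hat):
    """BCa-adjusted bootstrap level l_a(alpha): the alpha-th bootstrap
    quantile is replaced by the l-th, where (Efron 1987)
    z_l = d + (d + z_alpha) / (1 - a * (d + z_alpha))."""
    z_a = N.inv_cdf(alpha)
    z_l = d_hat + (d_hat + z_a) / (1 - a_hat * (d_hat + z_a))
    return N.cdf(z_l)

# First-order check against (7.23):
# z_l = 2*d + z_alpha + a*(d + z_alpha)^2 + higher-order terms.
d_hat, a_hat, alpha = 0.02, 0.01, 0.95   # illustrative small values
z_a = N.inv_cdf(alpha)
z_l = N.inv_cdf(bca_level(alpha, d_hat, a_hat))
z_l_approx = 2 * d_hat + z_a + a_hat * (d_hat + z_a) ** 2
```

With \(\hat{d}=\hat{a}=0\) the map reduces to the identity and the interval collapses to the ordinary percentile interval, which is why the bias and acceleration terms carry the entire \(O(n^{-1/2})\) correction.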


Cite this article

Ghosh, S., Chatterjee, A. & Balakrishnan, N. Nonparametric confidence intervals for ranked set samples. Comput Stat 32, 1689–1725 (2017). https://doi.org/10.1007/s00180-017-0744-0
