
Abrupt change in mean using block bootstrap and avoiding variance estimation


Abstract

We deal with sequences of weakly dependent observations that are naturally ordered in time. Their constant mean is possibly subject to change at most once at some unknown time point. The aim is to test whether such an unknown change has occurred or not. The change point methods presented here rely on ratio type test statistics based on maxima of the cumulative sums. These detection procedures for the abrupt change in mean are also robustified by considering a general score function. The main advantage of the proposed approach is that the variance of the observations needs to be neither known nor estimated. The asymptotic distribution of the test statistic under the no change null hypothesis is derived. Moreover, we prove the consistency of the test under the alternatives. A block bootstrap method is developed in order to obtain better approximations for the test’s critical values. The validity of the bootstrap algorithm is shown. The results are illustrated through a simulation study, which demonstrates the computational efficiency of the procedures. A practical application to real data is presented as well.
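The abstract describes the detection procedure only verbally. As an illustration, the following minimal sketch computes a ratio-type CUSUM statistic of the kind discussed in the paper for the identity score \(\psi (x)=x\), so that the partial-sample location estimates reduce to sample means; the function name ratio_cusum_stat, the trimming default gamma=0.1, and all other choices are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def ratio_cusum_stat(y, gamma=0.1):
    """Ratio-type CUSUM statistic for at most one change in mean.

    Written for the identity score psi(x) = x, so the partial-sample
    M-estimates reduce to sample means.  Taking the ratio of the two
    CUSUM maxima cancels the unknown (long-run) variance, which therefore
    needs to be neither known nor estimated.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    lo = max(int(np.floor(gamma * n)), 2)            # trim both ends
    hi = min(int(np.ceil((1 - gamma) * n)), n - 2)
    ratios = []
    for k in range(lo, hi + 1):
        left, right = y[:k], y[k:]
        num = np.max(np.abs(np.cumsum(left - left.mean())))
        # backward partial sums over the second segment, centred by its own mean
        den = np.max(np.abs(np.cumsum((right - right.mean())[::-1])))
        ratios.append(num / den)
    return max(ratios)
```

Large values of the statistic indicate a change in mean; critical values can be taken either from the asymptotic distribution derived below or from the block bootstrap developed in the paper.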

References

  • Andrews DWK (1991) Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 59(3):817–858

  • Andrews DWK (1993) Tests for parameter instability and structural change with unknown change point. Econometrica 61(4):821–856

  • Andrews DWK, Ploberger W (1994) Optimal tests when a nuisance parameter is present only under the alternative. Econometrica 62(6):1383–1414

  • Andrews DWK, Lee I, Ploberger W (1996) Optimal changepoint tests for normal linear regressions. J Econom 70(1):9–38

  • Antoch J, Hušková M (2001) Permutation tests in change point analysis. Stat Probab Lett 53(1):37–46

  • Antoch J, Hušková M, Prášková Z (1997) Effect of dependence on statistics for determination of change. J Stat Plan Infer 60(2):291–310

  • Bazarova A, Berkes I, Horváth L (2015) Change point detection with stable AR(1) errors. In: Dawson D, Kulik R, Haye MO, Szyszkowicz B, Zhao Y (eds) Asymptotic laws and methods in stochastics, Fields Institute Communications, vol 76. Springer, Berlin, pp 179–193

  • Bradley RC (2005) Basic properties of strong mixing conditions. A survey and some open questions. Probab Surveys 2:107–144

  • Bulinskii AV (1987) Limit theorems under weak dependence conditions. In: Fourth international conference on probability theory and mathematical statistics. V.N.U. Sci Press, Utrecht, pp 307–326

  • Bulinskii AV (1989) On various conditions of mixing and asymptotic normality of random fields. Soviet Math Dokl 37(4):443–447

  • Chen Z, Tian Z (2014) Ratio tests for variance change in nonparametric regression. Statistics 48(1):1–16

  • Chen X, Wu Y (1989) Strong law for mixing sequence. Acta Math Appl Sin E 5(4):367–371

  • Csörgő M, Horváth L (1997) Limit theorems in change-point analysis. Wiley, Chichester

  • Doukhan P (1994) Mixing: properties and examples, Lecture Notes in Statistics, vol 85. Springer, New York

  • Fitzenberger B (1997) The moving block bootstrap and robust inference for linear least squares and quantile regression. J Econom 82:235–287

  • Hall P, Horowitz JL, Jing BY (1995) On blocking rules for the bootstrap with dependent data. Biometrika 82(3):561–574

  • Herrndorf N (1983) Stationary strongly mixing sequences not satisfying the central limit theorem. Ann Probab 11(3):809–813

  • Horváth L, Horváth Z, Hušková M (2008) Ratio tests for change point detection. In: Balakrishnan N, Peña EA, Silvapulle MJ (eds) Beyond parametrics in interdisciplinary research: Festschrift in Honor of Professor Pranab K. Sen, IMS Collections, Beachwood, Ohio, vol 1, pp 293–304

  • Hušková M (2004) Permutation principle and bootstrap in change point analysis. Fields Inst Commun 44:273–291

  • Hušková M (2007) Ratio type test statistics for detection of changes in time series. In: Gomes MI, Pinto Martins JA, Silva JA (eds) Bulletin of the International Statistical Institute, Proceedings of the 56th Session, Lisboa, vol 976, pp 3934–3937

  • Hušková M, Marušiaková M (2012) \(M\)-Procedures for detection of changes for dependent observations. Commun Stat B Simul 41(7):1032–1050

  • Ibragimov IA, Linnik YV (1971) Independent and stationary sequences of random variables. Wolters-Noordhoff, Groningen

  • Kim JY (2000) Detection of change in persistence of a linear time series. J Econom 95(1):97–116

  • Kim JY, Amador RB (2002) Corrigendum to detection of change in persistence of a linear time series. J Econom 109(2):389–392

  • Kirch C (2006) Resampling methods for the change analysis of dependent data. PhD thesis, University of Cologne, Germany

  • Lahiri S, Furukawa K, Lee YD (2007) A nonparametric plug-in rule for selecting optimal block lengths for block bootstrap methods. Stat Methodol 4(3):292–321

  • Kulperger RJ (1990) On the distribution of the maximum of Brownian bridges with application to regression with correlated errors. J Stat Comput Sim 34(2–3):97–106

  • Künsch HR (1989) The jackknife and the bootstrap for general stationary observations. Ann Stat 17(3):1217–1241

  • Leybourne SJ, Taylor AR (2006) Persistence change tests and shifting stable autoregressions. Econ Lett 91(1):44–49

  • Madurkayová B (2011) Ratio type statistics for detection of changes in mean. Acta Univ Carolinae Math Phys 52(1):47–58

  • Perron P (2006) Dealing with structural breaks. In: Hassani H, Mills TC, Patterson K (eds) Palgrave handbook of econometrics, volume 1: econometric theory. Palgrave Macmillan, Basingstoke, pp 278–352

  • Peštová B, Pešta M (2015) Testing structural changes in panel data with small fixed panel size and bootstrap. Metrika 78(6):665–689

  • Peštová B, Pešta M (2016) Erratum to: Testing structural changes in panel data with small fixed panel size and bootstrap. Metrika 79(2):237–238

  • Politis DN, Romano JP (1992) A circular block-resampling procedure for stationary data. In: LePage R, Billard L (eds) Exploring the limits of bootstrap. Wiley, New York, pp 263–270

  • Politis DN, White H (2004) Automatic block-length selection for the dependent bootstrap. Econom Rev 23:53–70

  • Rosenblatt M (1971) Markov processes: structure and asymptotic behavior. Springer, Berlin

  • Wang D, Guo P, Xia Z (2016) Detection and estimation of structural change in heavy-tailed sequence. Commun Stat A Theor. https://doi.org/10.1080/03610926.1006780

  • Zhao W, Tian Z, Xia Z (2010) Ratio test for variance change point in linear process with long memory. Stat Pap 51:397–407

  • Zhao W, Xia Z, Tian Z (2011) Ratio test to detect change in the variance of linear process. Statistics 45:189–198


Acknowledgements

The authors would like to thank the Associate Editor and two anonymous referees for their careful reading of the paper and for suggestions that improved it.

Author information

Corresponding author

Correspondence to Michal Pešta.

Additional information

Institutional support to Barbora Peštová was provided by RVO:67985807. The research of Michal Pešta was supported by the Czech Science Foundation Project GAČR No. 15-04774Y.

Proofs

Proof of Theorem 1

The proof is, in several steps, analogous to the proof of Theorem 1.1 in Horváth et al. (2008). Without loss of generality, we assume that \(\mu =0\). Let

$$\begin{aligned} Z_n(t)=\frac{1}{\sqrt{n}}\sum _{1\le j \le nt}\psi (\varepsilon _j)\quad \text{ and }\quad \widetilde{Z}_n(t)=\frac{1}{\sqrt{n}}\sum _{nt< j \le n}\psi (\varepsilon _j). \end{aligned}$$

By applying Theorem 1 from Doukhan (1994, Section 1.5.1) together with the subsequent remark, justified by Herrndorf (1983) and Bulinskii (1987, 1989), we get

$$\begin{aligned} \left( Z_n(t),\widetilde{Z}_n(t)\right) \xrightarrow [n\rightarrow \infty ]{\mathscr {D}^2[0,1]}\sigma (\psi )\,\left( \mathcal {W}(t),\widetilde{\mathcal {W}}(t)\right) , \end{aligned}$$
(10)

where \(\{\mathcal {W}(t),\,0\le t\le 1\}\) is a standard Wiener process and \(\widetilde{\mathcal {W}}(t)=\mathcal {W}(1)-\mathcal {W}(t)\). Consequently, Lemma 4.3 and Lemma 4.4 by Hušková and Marušiaková (2012) together with Assumptions A1–A5 lead to

$$\begin{aligned}&\sup _{1\le i \le nt} \left\{ n^{\kappa }\sqrt{\frac{[nt]}{i([nt]-i)}}\ \left| \sum _{1\le j \le i}\psi \left( Y_j-\widehat{\mu }_{1,[nt]}(\psi )\right) \right. \right. \nonumber \\&\quad \left. \left. -\left( \sum _{1\le j \le i}\psi (\varepsilon _j) -\frac{i}{[nt]}\sum _{1\le j \le nt}\psi (\varepsilon _j)\right) \right| \right\} \xrightarrow [n\rightarrow \infty ]{\mathsf {P}\,} 0, \end{aligned}$$

for some \(\kappa >0\), where [a] denotes the integer part of \(a\in \mathbb {R}\). Hence,

$$\begin{aligned}&\frac{1}{\sqrt{n}}\sup _{1<i\le nt}\left| \sum _{1\le j \le i}\psi \left( Y_j-\widehat{\mu }_{1,[nt]}(\psi )\right) \right| \\&\quad =\sup _{1\le i \le nt}\left| Z_n\left( \frac{i}{n}\right) -\frac{i}{[nt]}Z_n(t)\right| + o_{\mathsf {P}\,}(1),\quad n\rightarrow \infty . \end{aligned}$$

Similarly, we get

$$\begin{aligned}&\frac{1}{\sqrt{n}}\sup _{nt<i\le n}\left| \sum _{i\le j \le n}\psi \left( Y_j-\widehat{\mu }_{2,[nt]}(\psi )\right) \right| \\&\quad =\sup _{nt< i \le n}\left| \widetilde{Z}_n\left( \frac{i}{n}\right) -\frac{n-i}{n-[nt]}\widetilde{Z}_n(t)\right| + o_{\mathsf {P}\,}(1),\quad n\rightarrow \infty . \end{aligned}$$

With respect to (10), we get for all \(0<\gamma <1/2\)

$$\begin{aligned}&\Bigg (\frac{1}{\sqrt{n}}\sup _{1<i\le nt}\left| \sum _{1\le j \le i}\psi \left( Y_j-\widehat{\mu }_{1,[nt]}(\psi )\right) \right| ,\\&\quad \frac{1}{\sqrt{n}}\sup _{nt-1<i\le n-1}\left| \sum _{i+1\le j \le n}\psi \left( Y_j-\widehat{\mu }_{2,[nt]}(\psi )\right) \right| \Bigg )\\&\xrightarrow [n\rightarrow \infty ]{\mathscr {D}^2[\gamma ,1-\gamma ]}\sigma (\psi ) \left( \displaystyle \underset{0\le u\le t}{\sup }\left| \mathcal {W}(u)-\frac{u}{t}\mathcal {W}(t)\right| ,\right. \\&\left. \quad \underset{t\le u\le 1}{\sup }\left| \widetilde{\mathcal {W}}(u)-\frac{1-u}{1-t}\widetilde{\mathcal {W}}(t)\right| \right) . \end{aligned}$$

The continuous mapping theorem completes the proof:

$$\begin{aligned} \mathcal {R}_n(\psi ,\gamma )\xrightarrow [n\rightarrow \infty ]{\mathscr {D}} \sup _{\gamma \le t\le 1-\gamma }\frac{\underset{0\le u\le t}{\sup }\big |\mathcal {W}(u)-u/t\mathcal {W}(t)\big |}{\underset{t\le u\le 1}{\sup }\big |\widetilde{\mathcal {W}}(u)-(1-u)/(1-t)\widetilde{\mathcal {W}}(t)\big |}. \end{aligned}$$

\(\square \)
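Because \(\sigma (\psi )\) cancels in the ratio, the limit distribution in Theorem 1 is free of nuisance parameters, and its quantiles can be approximated by simulating the Wiener process on a fine grid. A minimal Monte Carlo sketch follows; the grid size, the number of replications, and the seed are arbitrary choices made here for illustration, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_ratio_draw(n_grid=500, gamma=0.1):
    """One draw of the limiting functional from Theorem 1.

    The Wiener process W is approximated by a scaled random walk on a grid,
    and W~(u) = W(1) - W(u); the scale sigma(psi) cancels in the ratio.
    """
    u = np.arange(1, n_grid + 1) / n_grid
    w = np.cumsum(rng.standard_normal(n_grid)) / np.sqrt(n_grid)
    w_tilde = w[-1] - w
    lo, hi = int(gamma * n_grid), int((1 - gamma) * n_grid)
    vals = []
    for k in range(lo, hi):
        t = u[k]
        num = np.max(np.abs(w[: k + 1] - (u[: k + 1] / t) * w[k]))
        den = np.max(np.abs(w_tilde[k:] - ((1 - u[k:]) / (1 - t)) * w_tilde[k]))
        vals.append(num / den)
    return max(vals)

# coarse approximation of the asymptotic 95% critical value
draws = np.array([limit_ratio_draw() for _ in range(1000)])
print(np.round(np.quantile(draws, 0.95), 3))
```

The coarse grid keeps the run time modest; a finer grid and more replications give a sharper approximation.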

Proof of Theorem 2

Let \(k>\tau +1\) and \(k=[\xi n]\) for some \(\zeta<\xi <1-\gamma \). Note that \(\tau =O(n)\) and \(k=O(n)\) as \(n\rightarrow \infty \). By the mean value theorem, we get

$$\begin{aligned} 0= & {} \sum _{i=1}^k\psi \left( Y_i-\widehat{\mu }_{1k}(\psi )\right) \nonumber \\= & {} \sum _{i=1}^k\psi \left( Y_i-\mu \right) +\left[ \sum _{i=1}^k\frac{\text{ d }}{\text{ d }\mu }\bigg .\psi \left( Y_i-\mu \right) \bigg |_{\mu =\mu ^*}\right] \left( \widehat{\mu }_{1k}(\psi )-\mu \right) , \end{aligned}$$
(11)

where \(\mu ^*\) lies between \(\mu \) and \(\widehat{\mu }_{1k}(\psi )\). The first sum in (11) can be expanded using Lemma 4.3 by Hušková and Marušiaková (2012) and Assumptions A1–A4 as

$$\begin{aligned} \sum _{i=1}^k\psi \left( Y_i-\mu \right)= & {} \sum _{i=1}^k\psi \left( \varepsilon _i+\delta _n\mathcal {I} \{i>\tau \}\right) \nonumber \\= & {} \sum _{i=1}^k\psi (\varepsilon _i)+\sum _{i=1}^k\mathsf {E}\,\psi \left( \varepsilon _i+\delta _n\mathcal {I}\{i>\tau \}\right) \nonumber \\&+\,o_{\mathsf {P}\,}\left( k^{\theta -\nu +1}\right) \end{aligned}$$
(12)

as \(k\rightarrow \infty \) for any \(\theta \in [-1/2,0]\) and \(\nu \in \left( 0,\eta /(3(2+\chi +\chi '))\right) \). A Taylor expansion of \(\psi \) in a neighborhood of 0, together with Assumption A4, provides

$$\begin{aligned} \sum _{i=1}^k\mathsf {E}\,\psi \left( \varepsilon _i+\delta _n\mathcal {I}\{i>\tau \}\right)= & {} \sum _{i=\tau +1}^k\mathsf {E}\,\psi \left( \varepsilon _i+\delta _n\right) =k(1-\zeta /\xi )\delta _n\lambda '(0)\nonumber \\&+\,k(1-\zeta /\xi )o(\delta _n),\quad k\rightarrow \infty . \end{aligned}$$
(13)

The second sum in (11) can be rewritten, using Lemma 4.4 by Hušková and Marušiaková (2012) and the Lipschitz property from Assumption A4, as

$$\begin{aligned} \sum _{i=1}^k\frac{\text{ d }}{\text{ d }\mu }\bigg .\psi \left( Y_i-\mu \right) \bigg |_{\mu =\mu ^*}&=\sum _{i=1}^k\frac{\text{ d }}{\text{ d }d}\bigg .\mathsf {E}\,\psi \left( \varepsilon _i+\delta _n\mathcal {I}\{i>\tau \}-d\right) \bigg |_{d=0}\nonumber \\&\quad +\,O_{\mathsf {P}\,}\left( k^{1/2+(1/2+\theta )/(3+\chi )}\right) \nonumber \\&=-\sum _{i=1}^k\lambda '(\delta _n\mathcal {I}\{i>\tau \})=-k\left( \lambda '(0)+O(\delta _n)\right) \nonumber \\&\quad +\,O_{\mathsf {P}\,}\left( k^{1/2+(1/2+\theta )/(3+\chi )}\right) \end{aligned}$$
(14)

as \(k\rightarrow \infty \) for any \(\theta \in [-1/2,0]\). Combining (11)–(14), we end up with

$$\begin{aligned} \widehat{\mu }_{1k}(\psi )-\mu&=\frac{\sum _{i=1}^k\psi (\varepsilon _i)+k(1-\zeta /\xi )\delta _n\lambda '(0)+k(1-\zeta /\xi )o(\delta _n)+o_{\mathsf {P}\,}\left( k^{\theta -\nu +1}\right) }{k\left( \lambda '(0)+O(\delta _n)\right) +O_{\mathsf {P}\,}\left( k^{1/2+(1/2+\theta )/(3+\chi )}\right) }\nonumber \\&=\frac{1}{k\lambda '(0)}\sum _{i=1}^k\psi (\varepsilon _i)+(1-\zeta /\xi )\delta _n\nonumber \\&\quad +\,o_{\mathsf {P}\,}\left( k^{\theta -\nu }\right) , \end{aligned}$$
(15)

since \(\delta _n=O(k^{\theta })\) for some \(\theta \in \left( -\frac{1}{2},\frac{\eta }{3(2+\chi +\chi ')}-\frac{1}{2}\right) \subset [-1/2,0]\).

Consequently, applying Lemma 4.3 by Hušková and Marušiaková (2012) again, we obtain

$$\begin{aligned} \max _{1\le i\le k}k^{\nu -\theta -1}\Bigg |\sum _{j=1}^i\left( \psi \left( Y_j-\widehat{\mu }_{1k}(\psi )\right) -\psi (\varepsilon _j)-\mathsf {E}\,\psi \left( Y_j-\widehat{\mu }_{1k}(\psi )\right) \right) \Bigg | \xrightarrow [n\rightarrow \infty ]{\mathsf {P}\,}0. \end{aligned}$$
(16)

Assumption A1 allows us to apply the law of large numbers for \(\alpha \)-mixing sequences (Chen and Wu 1989). Hence,

$$\begin{aligned} \sum _{j=1}^i\mathsf {E}\,\psi \left( Y_j-\widehat{\mu }_{1k}(\psi )\right)&=\sum _{j=1}^i\mathsf {E}\,\psi \left( \varepsilon _j+\delta _n\mathcal {I}\{j>\tau \}+\mu -\widehat{\mu }_{1k}(\psi )\right) \nonumber \\&=\sum _{j=1}^i\frac{1}{k}\sum _{l=1}^k\psi \left( \varepsilon _l+\delta _n\mathcal {I}\{j>\tau ,l>\tau \}\right. \nonumber \\&\left. \quad +\,\mu -\widehat{\mu }_{1k}(\psi )\right) +o_{\mathsf {P}\,}(i),\quad k\rightarrow \infty \end{aligned}$$
(17)

uniformly for \(i=1,\ldots ,k\). Furthermore, due to the mean value theorem and Assumption A4, we have

$$\begin{aligned}&\sum _{j=1}^i\frac{1}{k}\sum _{l=1}^k\psi \left( \varepsilon _l+\delta _n\mathcal {I}\{j>\tau ,l>\tau \}+\mu -\widehat{\mu }_{1k}(\psi )\right) \nonumber \\&\quad =\sum _{j=1}^i\frac{1}{k}\sum _{l=1}^k\psi \left( \varepsilon _l\right) +\sum _{j=1}^i\frac{1}{k}\sum _{l=1}^k\left[ \frac{\text{ d }}{\text{ d }e}\bigg .\psi \left( \varepsilon _l+e\right) \bigg |_{e=e^*}\right] \nonumber \\&\qquad \left( \mu -\widehat{\mu }_{1k}(\psi )+\delta _n\mathcal {I}\{j>\tau ,l>\tau \}\right) \nonumber \\&\quad =\frac{i}{k}\sum _{l=1}^k\psi \left( \varepsilon _l\right) -\frac{1}{k}\sum _{l=1}^k\left( \lambda '(0)+O(\delta _n)\right) \sum _{j=1}^i\nonumber \\&\qquad \left( \mu -\widehat{\mu }_{1k}(\psi )+\delta _n\mathcal {I}\{j>\tau ,l>\tau \}\right) , \end{aligned}$$
(18)

where \(e^*\) is between 0 and \(\mu -\widehat{\mu }_{1k}(\psi )+\delta _n\mathcal {I}\{j>\tau ,l>\tau \}\). Plugging (15) into (18) yields

$$\begin{aligned}&-\frac{1}{k}\sum _{l=1}^k\left( \lambda '(0)+O(\delta _n)\right) \sum _{j=1}^i\left( \mu -\widehat{\mu }_{1k}(\psi )+\delta _n\mathcal {I}\{j>\tau ,l>\tau \}\right) \nonumber \\&\quad =\lambda '(0)\left( \frac{i}{k\lambda '(0)}\sum _{s=1}^k\psi (\varepsilon _s)+i(1-\zeta /\xi ) \delta _n\right. \nonumber \\&\left. \qquad -\,i(1-(\tau +1)/i)(1-\zeta /\xi )\delta _n+i o_{\mathsf {P}\,}\left( k^{\theta -\nu }\right) \right) \nonumber \\&\quad =\frac{i}{k}\sum _{s=1}^k\psi (\varepsilon _s)+\lambda '(0)\delta _n(1-\zeta /\xi )(\tau +1)\nonumber \\&\qquad +\,o_{\mathsf {P}\,}\left( k^{\theta -\nu +1}\right) ,\quad k\rightarrow \infty . \end{aligned}$$
(19)

Let us take into account (16) together with (17), (18), and (19). Thus, we obtain

$$\begin{aligned}&k^{\nu -\theta -1}\Bigg |\sum _{j=1}^{\tau +1}\psi \left( Y_j-\widehat{\mu }_{1k}(\psi )\right) \nonumber \\&\qquad -\left( \sum _{j=1}^{\tau +1}\psi (\varepsilon _j)-\frac{\tau +1}{k}\sum _{l=1}^k \psi (\varepsilon _l)-\lambda '(0)\delta _n (1-\zeta /\xi )(\tau +1)\right) \Bigg |\\&\quad \le \max _{1\le i\le k}k^{\nu -\theta -1}\Bigg |\sum _{j=1}^i\psi \left( Y_j-\widehat{\mu }_{1k}(\psi )\right) \nonumber \\&\qquad -\,\left( \sum _{j=1}^i\psi (\varepsilon _j)-\frac{i}{k}\sum _{l=1}^k\psi (\varepsilon _l)-\lambda '(0)\delta _n(1-\zeta /\xi )(\tau +1)\right) \Bigg | \xrightarrow []{\mathsf {P}\,}0, \end{aligned}$$

as \(k\rightarrow \infty \). Let us choose \(\nu =\theta +1/2\). Thus, \(\theta \in \left( -\frac{1}{2},\frac{\eta }{3(2+\chi +\chi ')}-\frac{1}{2}\right) \) iff \(\nu \in \left( 0,\frac{\eta }{3(2+\chi +\chi ')}\right) \). Since \(\delta _n=O\left( k^{\theta }\right) \) as \(k\rightarrow \infty \) and

$$\begin{aligned} \frac{1}{\sqrt{k}}\left| \sum _{j=1}^{\tau +1}\psi (\varepsilon _j)-\frac{\tau +1}{k}\sum _{l=1}^k\psi (\varepsilon _l)\right| =O_{\mathsf {P}\,}(1),\quad n\rightarrow \infty \end{aligned}$$

according to the proof of Theorem 1 (requiring Assumption A5), we get

$$\begin{aligned}&\max _{1\le i\le k}\frac{1}{\sqrt{k}}\left| \sum _{j=1}^i\psi \left( Y_j-\widehat{\mu }_{1k}(\psi )\right) \right| \ge \frac{1}{\sqrt{k}}\left| \sum _{j=1}^{\tau +1}\psi \left( Y_j-\widehat{\mu }_{1k}(\psi )\right) \right| \xrightarrow [n\rightarrow \infty ]{\mathsf {P}\,}\infty . \end{aligned}$$

Note that there is no change in the means of \(Y_k,\ldots ,Y_n\). Again from the proof of Theorem 1, we have

$$\begin{aligned}&\max _{[\xi n]\le i\le n-1}\frac{1}{\sqrt{n}}\left| \sum _{j=i+1}^n\psi \left( Y_j-\widehat{\mu }_{2k}(\psi )\right) \right| \xrightarrow [n\rightarrow \infty ]{\mathscr {D}[\gamma ,1-\gamma ]}\sigma (\psi )\\&\quad \sup _{\xi \le u\le 1}\left| \widetilde{\mathcal {W}}(u)-\frac{1-u}{1-\xi }\widetilde{\mathcal {W}}(\xi )\right| , \end{aligned}$$

which completes the proof. \(\square \)
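Theorem 2 states that the ratio statistic diverges under a fixed change in mean, which yields the consistency of the test. A small simulation sketch of this behaviour is given below; it reuses the illustrative ratio_cusum_stat from the sketch after the Abstract and generates weakly dependent AR(1) errors, and the autoregressive coefficient, the shift size, and the change point location are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_errors(n, rho=0.4):
    """Weakly dependent (alpha-mixing) AR(1) noise."""
    e = np.empty(n)
    e[0] = rng.standard_normal()
    for i in range(1, n):
        e[i] = rho * e[i - 1] + rng.standard_normal()
    return e

for n in (200, 800, 3200):
    tau = n // 3                                       # change point after one third of the sample
    y = ar1_errors(n) + 1.0 * (np.arange(n) >= tau)    # mean shift of size one
    print(n, round(ratio_cusum_stat(y), 2))
# the printed values keep growing with n, in line with the divergence shown above,
# whereas under the null hypothesis the statistic converges in distribution (Theorem 1)
```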

Proof of Theorem 3

Let us denote \(\mathsf {P}\,^*(\cdot )\equiv \mathsf {P}\,(\cdot |Y_1,\ldots ,Y_n)\). Moreover, for a random sequence \(\{\beta _n\}_{n\in \mathbb {N}}\), we write \(\beta _n=o_{\mathsf {P}\,,\mathsf {P}\,^*}(1)\), \(n\rightarrow \infty \), if

$$\begin{aligned} \forall \epsilon>0,\,\forall \phi >0\,:\lim _{n\rightarrow \infty }\mathsf {P}\,\left[ \mathsf {P}\,^*\left[ \left| \beta _n\right| \ge \epsilon \right] \ge \phi \right] =0. \end{aligned}$$

Lemma 4.3 and Lemma 4.4 by Hušková and Marušiaková (2012), together with the mean value theorem applied as in the proof of Theorem 2, provide

$$\begin{aligned}&\sqrt{(l-1)K+k}\left( m_{L,K}^{\varvec{U}}(l,k)-\mu \right) \\&\quad =\frac{1}{\lambda '(0)\sqrt{(l-1)K+k}}\left[ \sum _{r=1}^{l-1}\sum _{s=1}^{K}\psi \left( \varepsilon _{U_r+s}\right) +\sum _{s=1}^{k}\psi \left( \varepsilon _{U_l+s}\right) \right] \\&\qquad +\delta _n\frac{\sqrt{(l-1)K+k}}{n}\left[ \sum _{r=1}^{l-1}\sum _{s=1}^{K}\mathcal {I}\{U_r+s>\tau \}+\sum _{s=1}^{k}\mathcal {I}\{U_l+s>\tau \}\right] \\&\qquad +o_{\mathsf {P}\,,\mathsf {P}\,^*}(1),\quad L\rightarrow \infty . \end{aligned}$$

Consequently, applying Lemma 4.3 by Hušková and Marušiaková (2012) again similarly as in the proof of Theorem 2, we obtain

$$\begin{aligned}&\max _{\begin{array}{c} (p,q)\in \varPi _{l,k,L,K} \end{array}}\Bigg |\sum _{i=1}^{p-1}\sum _{j=1}^{K}\psi \left( Y_{U_i+j}-m_{L,K}^{\varvec{U}}(l,k)\right) +\sum _{j=1}^q\psi \left( Y_{U_p+j}-m_{L,K}^{\varvec{U}}(l,k)\right) \Bigg . \end{aligned}$$
(20)
$$\begin{aligned}&\quad -\Bigg [\sum _{i=1}^{p-1}\sum _{j=1}^{K}\psi \left( \varepsilon _{U_i+j}\right) +\sum _{j=1}^q\psi \left( \varepsilon _{U_p+j}\right) \nonumber \\&\qquad -\frac{(p-1)K+q}{(l-1)K+k}\left( \sum _{r=1}^{l-1}\sum _{s=1}^{K}\psi \left( \varepsilon _{U_r+s}\right) +\sum _{s=1}^{k}\psi \left( \varepsilon _{U_l+s}\right) \right) \Bigg .\end{aligned}$$
(21)
$$\begin{aligned}&\quad \Bigg .\Bigg . -\lambda '(0)\delta _n\Bigg \{\sum _{i=1}^{p-1}\sum _{j=1}^{K}\mathcal {I}\{U_i+j>\tau \}+\sum _{j=1}^{q}\mathcal {I}\{U_p+j>\tau \}\Bigg .\end{aligned}$$
(22)
$$\begin{aligned}&\quad -\Bigg .\left( \frac{(p-1)K+q}{(l-1)K+k}\sum _{r=1}^{l-1}\sum _{s=1}^{K}\mathcal {I}\{U_r+s>\tau \}+\sum _{s=1}^{k}\mathcal {I}\{U_l+s>\tau \}\right) \Bigg \}\Bigg ]\Bigg |\nonumber \\&\qquad =o_{\mathsf {P}\,,\mathsf {P}\,^*}(\sqrt{KL}),\quad L\rightarrow \infty . \end{aligned}$$
(23)

Note that (20) contains \(S_{L,K}^{\varvec{U}}(p,q,l,k)\) and the expression in square brackets in (21)–(23) can be rewritten as

$$\begin{aligned}&\sum _{i=1}^{p-1}\sum _{j=1}^{K}\left( \psi \left( \varepsilon _{U_i+j}\right) -\lambda '(0)\delta _n\mathcal {I}\{U_i+j>\tau \}\right) \\&\quad +\sum _{j=1}^q\left( \psi \left( \varepsilon _{U_p+j}\right) -\lambda '(0)\delta _n\mathcal {I}\{U_p+j>\tau \}\right) \\&\quad -\frac{(p-1)K+q}{(l-1)K+k}\Bigg (\sum _{r=1}^{l-1}\sum _{s=1}^{K}\left( \psi \left( \varepsilon _{U_r+s}\right) -\lambda '(0)\delta _n\mathcal {I}\{U_r+s>\tau \}\right) \Bigg .\\&\quad +\Bigg .\sum _{s=1}^{k}\left( \psi \left( \varepsilon _{U_l+s}\right) -\lambda '(0)\delta _n\mathcal {I}\{U_l+s>\tau \}\right) \Bigg )=:T_{L,K}^{\varvec{U}}(p,q,l,k). \end{aligned}$$

Furthermore, we define

$$\begin{aligned}&\widetilde{T}_{L,K}^{\varvec{U}}(p,q,l,k):=\sum _{j=k+1}^{K}\left( \psi \left( \varepsilon _{U_l+j}\right) -\lambda '(0)\delta _n\mathcal {I}\{U_l+j>\tau \}\right) \mathcal {I}\{p\ge l+1\}\\&\quad +\sum _{i=l+1}^{p-1}\sum _{j=1}^{K}\left( \psi \left( \varepsilon _{U_i+j}\right) -\lambda '(0)\delta _n\mathcal {I}\{U_i+j>\tau \}\right) \mathcal {I}\{p\ge l+2\}\\&\quad +\sum _{j=1}^{q}\left( \psi \left( \varepsilon _{U_p+j}\right) -\lambda '(0)\delta _n\mathcal {I}\{U_p+j>\tau \}\right) \\&\quad -\frac{(p-l)K+q-k}{(L-l)K+K-k}\Bigg (\sum _{s=k+1}^{K}\left( \psi \left( \varepsilon _{U_l+s}\right) -\lambda '(0)\delta _n\mathcal {I}\{U_l+s>\tau \}\right) \Bigg .\\&\quad +\sum _{r=l+1}^{L}\sum _{s=1}^{K}\left( \psi \left( \varepsilon _{U_r+s}\right) -\lambda '(0)\delta _n\mathcal {I}\{U_r+s>\tau \}\right) \Bigg ). \end{aligned}$$

If \(\{\varepsilon _i, i\in \mathbb {N}\}\) is an \(\alpha \)-mixing sequence, then \(\{\psi (\varepsilon _i), i\in \mathbb {N}\}\) is \(\alpha \)-mixing as well, with mixing coefficients not exceeding those of \(\{\varepsilon _i, i\in \mathbb {N}\}\) (Bradley 2005, Theorem 5.2). The proof of Theorem 3.6.2 by Kirch (2006) for \(q(t)=1,\,t\in (0,1)\); \(e(i)=\psi (\varepsilon _i)\); and \(d=-\lambda '(0)\delta _n\), together with Remarks 3.5.4 and 3.5.5 from Kirch (2006), provides, conditionally on \(\varepsilon _1,\ldots ,\varepsilon _n\),

$$\begin{aligned}&\left( \max _{\begin{array}{c} (p,q)\in \varPi _{l,k,L,K} \end{array}} \frac{\left| T_{L,K}^{\varvec{U}}(p,q,l,k)\right| }{\sigma (\psi )\sqrt{LK}}, \max _{\begin{array}{c} (p,q)\in \widetilde{\varPi }_{l,k,L,K} \end{array}} \frac{\left| \widetilde{T}_{L,K}^{\varvec{U}}(p,q,l,k)\right| }{\sigma (\psi )\sqrt{LK}}\right) \nonumber \\&\quad \xrightarrow [L\rightarrow \infty ]{\mathscr {D}^2[\gamma ,1-\gamma ]} \left( \sup _{0\le u\le t}\left| \mathcal {W}(u)-u/t\mathcal {W}(t)\right| , \sup _{t\le u\le 1}\left| \widetilde{\mathcal {W}}(u)-(1-u)/(1-t)\widetilde{\mathcal {W}}(t)\right| \right) ,\nonumber \\ \end{aligned}$$
(24)

in probability \(\mathsf {P}\,\) along \(\varepsilon _1,\ldots ,\varepsilon _n\). In contrast to Kirch (2006), we drop the assumption that the random errors form a linear process (Kirch 2006, Remark 3.5.3), because it is only needed there to show that the original (not bootstrapped) statistic converges weakly under the null hypothesis. The assumptions of Theorem 3 and the null hypothesis (2) provide us with the asymptotic distribution of \(\mathcal {R}_n(\psi ,\gamma )\) from Theorem 1.

By the uniform \(\mathsf {P}\,\)-stochastic closeness in (20)–(23) and by (24), we get, conditionally on \(Y_1,\ldots ,Y_n\),

$$\begin{aligned}&\frac{1}{\sqrt{LK}}\Bigg ( \max _{\begin{array}{c} (p,q)\in \varPi _{l,k,L,K} \end{array}}\left| S_{L,K}^{\varvec{U}}(p,q,l,k)\right| ,\max _{\begin{array}{c} (p,q)\in \widetilde{\varPi }_{l,k,L,K} \end{array}}\left| \widetilde{S}_{L,K}^{\varvec{U}}(p,q,l,k)\right| \Bigg )\\&\quad \xrightarrow [L\rightarrow \infty ]{\mathscr {D}^2[\gamma ,1-\gamma ]} \sigma (\psi )\left( \sup _{0\le u\le t}\left| \mathcal {W}(u)-u/t\mathcal {W}(t)\right| ,\right. \nonumber \\&\left. \qquad \sup _{t\le u\le 1}\left| \widetilde{\mathcal {W}}(u)-(1-u)/(1-t)\widetilde{\mathcal {W}}(t)\right| \right) , \end{aligned}$$

in probability \(\mathsf {P}\,\) along \(Y_1,\ldots ,Y_n\). Finally, the assertion of Theorem 3 is straightforward, since the considered bootstrap statistic is a continuous function of the above vector of statistics. \(\square \)
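The bootstrap analysed above resamples blocks of consecutive observations with random starting indices \(U_r\). The following generic moving-block bootstrap sketch conveys the idea for the identity score and reuses the illustrative ratio_cusum_stat from the sketch after the Abstract; it is not the authors' exact algorithm, and the function name block_bootstrap_critical_value, its defaults, and the global centring step are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def block_bootstrap_critical_value(y, block_len, alpha=0.05, n_boot=500, gamma=0.1):
    """Moving-block bootstrap approximation of the (1 - alpha) critical value.

    The observations are centred (identity score) and resampled in blocks of
    length K = block_len with uniformly drawn starting indices, which keeps
    the short-range dependence within blocks; the ratio statistic is then
    recomputed on each bootstrap series and its empirical quantile returned.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    centred = y - y.mean()
    n_blocks = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        boot = np.concatenate([centred[s:s + block_len] for s in starts])[:n]
        stats[b] = ratio_cusum_stat(boot, gamma=gamma)
    return np.quantile(stats, 1 - alpha)
```

A level-\(\alpha \) test then rejects the null hypothesis when the observed ratio statistic exceeds this bootstrap critical value; the block length can be chosen, for instance, by the automatic rule of Politis and White (2004) or the plug-in rule of Lahiri et al. (2007).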

Cite this article

Peštová, B., Pešta, M. Abrupt change in mean using block bootstrap and avoiding variance estimation. Comput Stat 33, 413–441 (2018). https://doi.org/10.1007/s00180-017-0785-4
