Hilbertian spatial periodically correlated first order autoregressive models

Regular Article · Advances in Data Analysis and Classification

Abstract

In this article, we consider Hilbertian spatial periodically correlated autoregressive models. Such a spatial model assumes periodicity in its autocorrelation function, which makes it plausible for spatial functional data arising from phenomena with periodic structures, such as geological, atmospheric, meteorological and oceanographic data. Our study of these models covers model building, existence, a time-domain moving average representation, least squares parameter estimation, and prediction based on the autoregressive structured past data. We also fit a model of this type to real data consisting of invisible infrared satellite images.

References

  • Bosq D (2000) Linear processes in function spaces: theory and applications. Lecture Notes in Statistics, Springer, Berlin

  • Dunford N, Schwartz JT (1958) Linear operators, part I: general theory. Wiley-Interscience, Hoboken

  • Ferraty F, Vieu P (2006) Nonparametric functional data analysis. Springer, New York

  • Helson H, Lowdenslager D (1958) Prediction theory and Fourier series in several variables. Acta Math 99:165–202

  • Hurd HL, Miamee A (2007) Periodically correlated random sequences: spectral theory and practice. Wiley, Hoboken

  • Hurd HL, Kallianpur G, Farshidi J (2004) Correlation and spectral theory for periodically correlated random fields indexed on \({{\mathbb{Z}}}^{2}\). J Multivar Anal 90:359–383

  • Horváth L, Kokoszka P (2012) Inference for functional data with applications. Springer Series in Statistics, Springer, New York

  • Ramsay JO, Silverman BW (2005) Functional data analysis. Springer, New York

  • Ruiz-Medina MD (2011a) Spatial autoregressive and moving average Hilbertian processes. J Multivar Anal 102:292–305

  • Ruiz-Medina MD (2011b) Spatial functional prediction from spatial autoregressive Hilbertian processes. Environmetrics 23:119–128

  • Serpedin E, Panduru F, Sari I, Giannakis GB (2005) Bibliography on cyclostationarity. Signal Process 85:2233–2303

  • Shishebor Z, Soltani AR, Zamani A (2011) Asymptotic distribution for periodograms of infinite dimensional discrete time periodically correlated processes. J Multivar Anal 101:368–373

  • Soltani AR (1984) Extrapolation and moving average representation for stationary random fields and Beurling's theorem. Ann Probab 12(1):120–132

  • Soltani AR, Hashemi M (2011) Periodically correlated autoregressive Hilbertian processes. Stat Inference Stoch Process 14(2):177–188

  • Soltani AR, Shishebor Z (1998) A spectral representation for weakly periodic sequences of bounded linear transformations. Acta Math Hung 80:265–270

  • Soltani AR, Shishebor Z (1999) Weakly periodic sequences of bounded linear transformations: a spectral characterization. Georgian Math J 6:91–98

  • Soltani AR, Shishebor Z, Sajjadnia Z (2012) Hilbertian GARCH models. Ann ISUP 56:61–80

  • Soltani AR, Shishebor Z, Zamani A (2010) Inference on periodograms of infinite dimensional discrete time periodically correlated processes. J Multivar Anal 101:368–373

Acknowledgments

The authors express their sincere thanks to the editor and referees for providing valuable comments and suggestions.

Author information

Corresponding author

Correspondence to Z. Shishebor.

Appendix: Proofs

Proof of Theorem 3.1

First, we show that the random field \({{\mathbf {X}}}=\{{{\mathbf {X}}}_{\mathbf {t}},\ {\mathbf {t}}\in {{\mathbb {Z}}}^{2}\}\) defined by

$$\begin{aligned} {\mathbf {X}}_{i,j}\equiv {\left( X_{i,jT_{2}}, X_{i,jT_{2}+1}, \ldots , X_{i,jT_{2}+T_{2}-1}\right) }^{\prime } , \end{aligned}$$

is a \({\mathbf {T}}_1\)-HSPC process. Let \({\mathbf {t}},{{\mathbf {s}}}\) and \({\mathbf {n}}\in {{\mathbb {Z}}}^{2}\). Then the cross-covariance matrix of \(\mathbf X\) is \(C_{{{{ \mathbf {X}}}}_{{\mathbf {t}}},{{{\mathbf {X}}}}_{\mathbf {s}}}=\left[ a_{i,j}(\mathbf{t,s})\right] _{ i,j=0}^{ T_{2}-1} \) where \(a_{i,j}(\mathbf{t,s})=\mathbb {E}\left( X_{{\mathbf {t}}\odot {{\mathbf {T}}}_{2}+(0,i)}\otimes X_{{{\mathbf {s}}}\odot {{\mathbf {T}}}_{2}+(0,j)}\right) \). Since \({\mathbf {T}}={ \mathbf {T}}_{1}\odot {\mathbf {T}}_{2}\),

$$\begin{aligned} a_{i,j}(\mathbf{t,s})&= \mathbb {E}\left( X_{t_1,t_2T_2+i}\otimes X_{s_1,s_2T_2+j}\right) \\&= \mathbb {E}\left( X_{t_1+n_1T_1,t_2T_2+i+n_2T_2}\otimes X_{s_1+n_1T_1,s_2T_2+j+n_2T_2}\right) \\&= \mathbb {E}\left( X_{t_1+n_1T_1,(t_2+n_2)T_2+i}\otimes X_{s_1+n_1T_1,(s_2+n_2)T_2+j}\right) \\&= \mathbb {E}\left( X_{\left( {\mathbf {t}}+{\mathbf {n}}\odot {{\mathbf {T}}}_{1}\right) \odot {{\mathbf {T}}}_{2}+(0,i)}\otimes X_{\left( {{\mathbf {s}}}+{\mathbf {n}}\odot {{\mathbf {T}}}_{1}\right) \odot {{\mathbf {T}}}_{2}+(0,j)}\right) \\&= a_{i,j}(\mathbf{t+n\odot T_1,s+n\odot T_1}). \end{aligned}$$

So \(C_{{{{\mathbf {X}}}}_{{\mathbf {t}}},{{{\mathbf {X}}}}_{\mathbf {s}}}=C_{{{{\mathbf {X}}}}_{{\mathbf {t}}+{\mathbf {n}}\odot {{\mathbf {T}}}_{1}},{{{\mathbf {X}}}}_{{{\mathbf {s}}}+{\mathbf {n}}\odot {{\mathbf {T}}}_{1}}}\). This means that \({{\mathbf {X}}}\) is a \({{\mathbf {T}}}_{1}\)-HSPC process. By definition,

$$\begin{aligned} {\varvec{\epsilon }}_{i,j}\equiv {\left( {\epsilon }_{i,jT_{2}}, {\epsilon }_{i,jT_{2}+1}, \dots , {\epsilon } _{i,jT_{2}+T_{2}-1}\right) }^{\prime }. \end{aligned}$$

Since \({\varvec{\epsilon }}=\left\{ {\varvec{\epsilon }}_{\mathbf {t}},\ {\mathbf {t}}\in {{\mathbb {Z}}}^{2}\right\} \) is defined with the same structure as \({\mathbf {X}}\), one can conclude that \({\varvec{\epsilon }}\) is also a \({{\mathbf {T}}}_{1}\)-HSPC process. Moreover, the cross-covariance matrix of \({\varvec{\epsilon }}\) is \(C_{{\varvec{\epsilon }}_{{\mathbf {t}}},{\varvec{\epsilon }}_{\mathbf {s}}}=\left[ b_{i,j}(\mathbf{s,t})\right] _{i,j=0}^{T_2-1}\), where \(b_{i,j}(\mathbf{s,t})=\mathbb {E}\left( {\epsilon }_{{\mathbf {t}}\odot {{\mathbf {T}}}_{2}+(0,i)}\otimes {\epsilon }_{{\mathbf {s}}\odot {{\mathbf {T}}}_{2}+(0,j)}\right) \). For \({\mathbf {s}}\ne {\mathbf {t}}\) we have \({\mathbf {t}}\odot {{\mathbf {T}}}_{2}+(0,i)\ne {\mathbf {s}}\odot {{\mathbf {T}}}_{2}+(0,j)\) for all \(i,j=0,1,\dots ,T_2-1\), and hence \(b_{i,j}(\mathbf{s,t})=0\). Therefore, \({\varvec{\epsilon }}\) is a \({{\mathbf {T}}}_{1}\)-HSPC-WN process. It also follows that \({\varvec{\varepsilon }}_{i,j}={{\mathcal {D}}}_{i}{\varvec{\epsilon }}_{i,j}\) is a \({{\mathbf {T}}}_{1}\)-HSPC-WN process with covariance operator \(C_{{\varvec{\varepsilon }}_{i,j}}={{\mathcal {D}}}_{i}C_{{\varvec{\epsilon }}_{i,j}}{{\mathcal {D}}}_{i}^{*}\). To see that \({\mathbf {X}}\) satisfies (3.1), using (2.1) recursively we have

$$\begin{aligned} X_{i,jT_{2}}&= {\alpha }_{i,jT_{2}}X_{i-1,jT_{2}}+{\beta }_{i,jT_{2}}X_{i,jT_{2}-1}+{\gamma }_{i,jT_{2}}X_{i-1,jT_{2}-1}+{\epsilon }_{i,jT_{2}}\\&= {\alpha }_{i,0}X_{i-1,jT_{2}}+{\beta }_{i,0}X_{i,jT_{2}-1}+{\gamma }_{i,0}X_{i-1,jT_{2}-1}+{\epsilon }_{i,jT_{2}},\\ X_{i,jT_{2}+1}&= {\alpha }_{i,jT_{2}+1}X_{i-1,jT_{2}+1}+{\beta }_{i,jT_{2}+1}X_{i,jT_{2}}+{\gamma }_{i,jT_{2}+1}X_{i-1,jT_{2}}+{\epsilon }_{i,jT_{2}+1}\\&= {\alpha }_{i,1}X_{i-1,jT_{2}+1}+{\beta }_{i,1}X_{i,jT_{2}}+{\gamma }_{i,1}X_{i-1,jT_{2}}+{\epsilon }_{i,jT_{2}+1}\\&= {\alpha }_{i,1}X_{i-1,jT_{2}+1}+{\beta }_{i,1}\left( {\alpha }_{i,0}X_{i-1,jT_{2}}+{\beta }_{i,0}X_{i,jT_{2}-1}+{\gamma }_{i,0}X_{i-1,jT_{2}-1}+{\epsilon }_{i,jT_{2}}\right) \\&\quad +{\gamma }_{i,1}X_{i-1,jT_{2}}+{\epsilon }_{i,jT_{2}+1}\\&= {\alpha }_{i,1}X_{i-1,jT_{2}+1}+\left( {\beta }_{i,1}{\alpha }_{i,0}+{\gamma }_{i,1}\right) X_{i-1,jT_{2}}+{\beta }_{i,1}^{(2)}X_{i,jT_{2}-1}\\&\quad +{\beta }_{i,1}{\gamma }_{i,0}X_{i-1,jT_{2}-1}+{\beta }_{i,1}{\epsilon }_{i,jT_{2}}+{\epsilon }_{i,jT_{2}+1}, \end{aligned}$$

and for each \(k=0,1,\dots ,T_2-1\) we obtain

$$\begin{aligned} X_{i,jT_{2}+k}&= {\alpha }_{i,k}X_{i-1,jT_{2}+k}+\left( {\beta }_{i,k}{\alpha }_{i,k-1}+{\gamma }_{i,k}\right) X_{i-1,jT_{2}+k-1}\\&\quad +{\beta }_{i,k}\left( {\beta }_{i,k-1}{\alpha }_{i,k-2}+{\gamma }_{i,k-1}\right) X_{i-1,jT_{2}+k-2}\\&\quad +\dots +{\beta }_{i,k}^{\left( k-1\right) }\left( {\beta }_{i,1}{\alpha }_{i,0}+{\gamma }_{i,1}\right) X_{i-1,jT_{2}}+{\beta }_{i,k}^{\left( k+1\right) }X_{i,jT_{2}-1}\\&\quad +{\beta }_{i,k}^{\left( k\right) }{\gamma }_{i,0}X_{i-1,jT_{2}-1}+\sum _{s=0}^{k}{\beta }_{i,k}^{\left( k-s\right) }{\epsilon }_{i,jT_{2}+s}. \end{aligned}$$

Writing these equations in matrix form and using definitions (2.6) and (2.7) gives

$$\begin{aligned} {{{{\mathbf {X}}}}}_{i,j}={{{\mathcal {A}}}} _{i}{{{\mathbf {X}}}}_{i-1,j}+{{{\mathcal { B}}}}_{i}{{{\mathbf {X}}}}_{i,j-1}+{{{ \mathcal {C}}}}_{i}{{{\mathbf {X}}}}_{i-1,j-1}+{{{\mathcal {D}}}}_{i}{{\varvec{\epsilon }}}_{i,j}, \end{aligned}$$

where \({{\mathcal {A}}}_{i}=\left[ {{a}}_{m,n}^{i}\right] \), \({{{ \mathcal {B}}}}_{i}=\left[ {{b}}_{m,n}^{i}\right] \), \({{ \mathcal {C}}}_{i}=\left[ {{c}}_{m,n}^{i}\right] \) and \({{{ \mathcal {D}}}}_{i}=\left[ {{d}}_{m,n}^{i}\right] \). \(\square \)
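As a concrete illustration (ours, for orientation only; it is not part of the original proof), take \(T_2=2\), so that \({\mathbf {X}}_{i,j}=\left( X_{i,2j},X_{i,2j+1}\right) ^{\prime }\). Reading the two displayed recursions off componentwise, and noting that \(X_{i,2j-1}\) is the second entry of \({\mathbf {X}}_{i,j-1}\), gives

$$\begin{aligned} {\mathcal {A}}_{i}=\begin{pmatrix} {\alpha }_{i,0} & 0 \\ {\beta }_{i,1}{\alpha }_{i,0}+{\gamma }_{i,1} & {\alpha }_{i,1} \end{pmatrix},\quad {\mathcal {B}}_{i}=\begin{pmatrix} 0 & {\beta }_{i,0} \\ 0 & {\beta }_{i,1}{\beta }_{i,0} \end{pmatrix},\quad {\mathcal {C}}_{i}=\begin{pmatrix} 0 & {\gamma }_{i,0} \\ 0 & {\beta }_{i,1}{\gamma }_{i,0} \end{pmatrix},\quad {\mathcal {D}}_{i}=\begin{pmatrix} I & 0 \\ {\beta }_{i,1} & I \end{pmatrix}, \end{aligned}$$

and one checks directly that \({\mathbf {X}}_{i,j}={\mathcal {A}}_{i}{\mathbf {X}}_{i-1,j}+{\mathcal {B}}_{i}{\mathbf {X}}_{i,j-1}+{\mathcal {C}}_{i}{\mathbf {X}}_{i-1,j-1}+{\mathcal {D}}_{i}{\varvec{\epsilon }}_{i,j}\) reproduces the two scalar equations above.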

Proof of Theorem 3.2

By definition

$$\begin{aligned} \widetilde{\mathbf {X}}_\mathbf{t }&= {\left( {{\mathbf {X}}}_{t_1T_{1},t_2}, {{{\mathbf {X}}}}_{t_1T_{1}+1,t_2}, \dots , {{\mathbf {X}}}_{t_1T_{1}+T_{1}-1,t_2}\right) }^{\prime }\\&= {\left( {{\mathbf {X}}}_{{\mathbf {t}}\odot {{\mathbf {T}}} _{1}}, {{{\mathbf {X}}}}_{{\mathbf {t}}\odot {{\mathbf {T}}} _{1}+(1,0)}, \dots , {{\mathbf {X}}}_{{\mathbf {t}}\odot {{\mathbf {T}}} _{1}+(T_{1}-1,0)}\right) }^{\prime }. \end{aligned}$$

Therefore, for \({\mathbf {s}},{\mathbf {t}}\in {{\mathbb {Z}}}^{2}\), the cross-covariance matrix \(C_{{\widetilde{\mathbf {X}}}_{{\mathbf {t}}},{\widetilde{\mathbf {X}}}_{{\mathbf {s}}}}\) is a \(T_{1}\times T_{1}\) block matrix whose \((i,j)\)-th block is \(C_{{\mathbf {X}}_{{\mathbf {t}}\odot {{\mathbf {T}}}_{1}+(i,0)},{\mathbf {X}}_{{{\mathbf {s}}}\odot {{\mathbf {T}}}_{1}+(j,0)}}\) for \(i,j=0,1,\dots ,T_{1}-1\). Since, by Theorem 3.1, \({\mathbf {X}}\) is a \({\mathbf {T}}_{1}\)-HSPC process,

$$\begin{aligned} C_{{\mathbf {X}}_{{\mathbf {t}}\odot {{\mathbf {T}}}_{1}+(i,0)},{\mathbf {X}}_{{{\mathbf {s}}}\odot {{\mathbf {T}}}_{1}+(j,0)}}=C_{{\mathbf {X}}_{(i,0)},{\mathbf {X}}_{\left( {{\mathbf {s}}}-{\mathbf {t}}\right) \odot {{\mathbf {T}}}_{1}+(j,0)}}. \end{aligned}$$

So the entries of \(C_{{\widetilde{{\mathbf {X}}}}_{{\mathbf {t}}},{\widetilde{{\mathbf {X}}}}_{{\mathbf {s}}}}\) depend on \({\mathbf {t}}\) and \({{\mathbf {s}}}\) only through \({{\mathbf {s}}}-{\mathbf {t}}\), which means that \({\widetilde{\mathbf {X}}}=\{{\widetilde{\mathbf {X}}}_{\mathbf {t}},\ {\mathbf {t}}\in {{\mathbb {Z}}}^{2}\}\) is an HSS process. The same is true for \(\widetilde{\varvec{\epsilon }}=\{\widetilde{\varvec{\epsilon }}_{\mathbf {t}},\ {\mathbf {t}}\in {{\mathbb {Z}}}^{2}\}\), defined as

$$\begin{aligned} \widetilde{\varvec{\epsilon }}_{i,j}={\left( {\varvec{\epsilon }}_{iT_{1},j}, {\varvec{\epsilon }} _{iT_{1}+1,j}, \dots , {\varvec{\epsilon }}_{iT_{1}+T_{1}-1,j}\right) }^{\prime }. \end{aligned}$$

Moreover, by the same technique as in the proof of Theorem 3.1, it can be shown that \(\widetilde{\varvec{\epsilon }}\) is an HSS-WN process. Therefore, \(\widetilde{\varvec{\varepsilon }}_{{\mathbf {t}}}={\varvec{\widetilde{\mathcal {D}}}}\varvec{\mathcal {E}}\widetilde{\varvec{\epsilon }}_{{\mathbf {t}}}\) is an HSS-WN process with covariance operator \(C_{{\widetilde{\varvec{\varepsilon }}}}=\varvec{\widetilde{\mathcal {D}}}\varvec{{\mathcal {E}}}C_{\widetilde{\varvec{\epsilon }}}\varvec{{\mathcal {E}}}^{*}\varvec{{\widetilde{\mathcal {D}}}}^{*}\). On the other hand, by (2.6) and (2.7), \({{\mathcal {D}}}_{iT_{1}+k}={{\mathcal {D}}}_{k}\); therefore \({{\varvec{\varepsilon }}}_{iT_{1}+k,j}={{\mathcal {D}}}_{iT_{1}+k}{{\varvec{\epsilon }}}_{iT_{1}+k,j}={{\mathcal {D}}}_{k}{{\varvec{\epsilon }}}_{iT_{1}+k,j}\). Now, using Eq. (3.1) recursively gives

$$\begin{aligned} {\mathbf {X}}_{iT_{1},j}&= {\mathcal {A}}_{iT_{1}}{\mathbf {X}}_{iT_{1}-1,j}+{\mathcal {B}}_{iT_{1}}{\mathbf {X}}_{iT_{1},j-1}+{\mathcal {C}}_{iT_{1}}{\mathbf {X}}_{iT_{1}-1,j-1}+{\mathcal {D}}_{iT_{1}}{\varvec{\epsilon }}_{iT_{1},j}\\&= {\mathcal {A}}_{0}{\mathbf {X}}_{iT_{1}-1,j}+{\mathcal {B}}_{0}{\mathbf {X}}_{iT_{1},j-1}+{\mathcal {C}}_{0}{\mathbf {X}}_{iT_{1}-1,j-1}+{\mathcal {D}}_{0}{\varvec{\epsilon }}_{iT_{1},j},\\ {\mathbf {X}}_{iT_{1}+1,j}&= {\mathcal {A}}_{iT_{1}+1}{\mathbf {X}}_{iT_{1},j}+{\mathcal {B}}_{iT_{1}+1}{\mathbf {X}}_{iT_{1}+1,j-1}+{\mathcal {C}}_{iT_{1}+1}{\mathbf {X}}_{iT_{1},j-1}+{\mathcal {D}}_{iT_{1}+1}{\varvec{\epsilon }}_{iT_{1}+1,j}\\&= {\mathcal {A}}_{1}{\mathbf {X}}_{iT_{1},j}+{\mathcal {B}}_{1}{\mathbf {X}}_{iT_{1}+1,j-1}+{\mathcal {C}}_{1}{\mathbf {X}}_{iT_{1},j-1}+{\mathcal {D}}_{1}{\varvec{\epsilon }}_{iT_{1}+1,j}\\&= {\mathcal {A}}_{1}\left( {\mathcal {A}}_{0}{\mathbf {X}}_{iT_{1}-1,j}+{\mathcal {B}}_{0}{\mathbf {X}}_{iT_{1},j-1}+{\mathcal {C}}_{0}{\mathbf {X}}_{iT_{1}-1,j-1}+{\mathcal {D}}_{0}{\varvec{\epsilon }}_{iT_{1},j}\right) \\&\quad +{\mathcal {B}}_{1}{\mathbf {X}}_{iT_{1}+1,j-1}+{\mathcal {C}}_{1}{\mathbf {X}}_{iT_{1},j-1}+{\mathcal {D}}_{1}{\varvec{\epsilon }}_{iT_{1}+1,j}\\&= {\mathcal {A}}_{1}^{(2)}{\mathbf {X}}_{iT_{1}-1,j}+{\mathcal {A}}_{1}^{(1)}{\mathcal {C}}_{0}{\mathbf {X}}_{iT_{1}-1,j-1}+{\mathcal {A}}_{1}^{(0)}\left( {\mathcal {A}}_{1}{\mathcal {B}}_{0}+{\mathcal {C}}_{1}\right) {\mathbf {X}}_{iT_{1},j-1}\\&\quad +{\mathcal {B}}_{1}{\mathbf {X}}_{iT_{1}+1,j-1}+{\mathcal {D}}_{1}{\varvec{\epsilon }}_{iT_{1}+1,j}+{\mathcal {A}}_{1}{\mathcal {D}}_{0}{\varvec{\epsilon }}_{iT_{1},j}, \end{aligned}$$

and for each \(k=0,1,\dots ,T_1-1\) we obtain

$$\begin{aligned} {\mathbf {X}}_{iT_{1}+k,j}&= {\mathcal {A}}_{k}^{(k+1)}{\mathbf {X}}_{iT_{1}-1,j}+{\mathcal {A}}_{k}^{(k)}{\mathcal {C}}_{0}{\mathbf {X}}_{iT_{1}-1,j-1}\\&\quad +{\mathcal {A}}_{k}^{(k-1)}\left( {\mathcal {A}}_{1}{\mathcal {B}}_{0}+{\mathcal {C}}_{1}\right) {\mathbf {X}}_{iT_{1},j-1}+{\mathcal {A}}_{k}^{(k-2)}\left( {\mathcal {A}}_{2}{\mathcal {B}}_{1}+{\mathcal {C}}_{2}\right) {\mathbf {X}}_{iT_{1}+1,j-1}\\&\quad +\dots +{\mathcal {A}}_{k}^{\left( 0\right) }\left( {\mathcal {A}}_{k}{\mathcal {B}}_{k-1}+{\mathcal {C}}_{k}\right) {\mathbf {X}}_{iT_{1}+k-1,j-1}\\&\quad +{\mathcal {B}}_{k}{\mathbf {X}}_{iT_{1}+k,j-1}+\sum _{s=0}^{k}{\mathcal {A}}_{k}^{\left( k-s\right) }{\mathcal {D}}_{s}{\varvec{\epsilon }}_{iT_{1}+s,j}. \end{aligned}$$

Writing the above equations in matrix form and using definitions (2.8) and (2.9) gives

$$\begin{aligned} {\widetilde{{\mathbf {X}}}}_{i,j}=\varvec{\widetilde{\mathcal {A}}}{{\widetilde{\mathbf {X}}}}_{i-1,j}+ \varvec{\widetilde{\mathcal {B}}}{\widetilde{\mathbf {X}}}_{i,j-1}+\varvec{\widetilde{\mathcal {C}}}{ \widetilde{\mathbf {X}}}_{i-1,j-1}+{{\widetilde{\varvec{\varepsilon }}}}_{i,j}, \end{aligned}$$

where \(\varvec{\widetilde{\mathcal {A}}}=\left[ \widetilde{\mathbf {a}}_{m,n}\right] , \varvec{ \widetilde{\mathcal {B}}}=\left[ \widetilde{\mathbf {b}}_{m,n}\right] ,\ \varvec{\widetilde{\mathcal {C}}}=\left[ \widetilde{\mathbf {c }}_{m,n}\right] \) and \(\varvec{\widetilde{\mathcal {D}}}=\left[ \widetilde{\mathbf {d}}_{m,n}\right] \). \(\square \)
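For orientation (again our illustration, using the product convention \({\mathcal {A}}_{k}^{(m)}={\mathcal {A}}_{k}{\mathcal {A}}_{k-1}\cdots {\mathcal {A}}_{k-m+1}\) with \({\mathcal {A}}_{k}^{(0)}=I\), which is how the superscripts are used above), take \(T_1=2\), so \(\widetilde{\mathbf {X}}_{i,j}=\left( {\mathbf {X}}_{2i,j},{\mathbf {X}}_{2i+1,j}\right) ^{\prime }\). The displayed recursions then give the block operators

$$\begin{aligned} \varvec{\widetilde{\mathcal {A}}}=\begin{pmatrix} 0 & {\mathcal {A}}_{0} \\ 0 & {\mathcal {A}}_{1}{\mathcal {A}}_{0} \end{pmatrix},\quad \varvec{\widetilde{\mathcal {B}}}=\begin{pmatrix} {\mathcal {B}}_{0} & 0 \\ {\mathcal {A}}_{1}{\mathcal {B}}_{0}+{\mathcal {C}}_{1} & {\mathcal {B}}_{1} \end{pmatrix},\quad \varvec{\widetilde{\mathcal {C}}}=\begin{pmatrix} 0 & {\mathcal {C}}_{0} \\ 0 & {\mathcal {A}}_{1}{\mathcal {C}}_{0} \end{pmatrix}, \end{aligned}$$

with noise term \(\widetilde{\varvec{\varepsilon }}_{i,j}=\begin{pmatrix} {\mathcal {D}}_{0}{\varvec{\epsilon }}_{2i,j} \\ {\mathcal {A}}_{1}{\mathcal {D}}_{0}{\varvec{\epsilon }}_{2i,j}+{\mathcal {D}}_{1}{\varvec{\epsilon }}_{2i+1,j} \end{pmatrix}\).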

Proof of Theorem 3.4

By Corollary 3.5, \({\mathbf {Y}}\) is an HSS-AR(1) process on \(H^{T_{1}T_{2}}\) that satisfies

$$\begin{aligned} {{\mathbf Y}}_{i,j}={\mathbb A}{{\mathbf Y}}_{i-1,j}+{\mathbb B}{{ \mathbf Y}}_{i,j-1}+{\mathbb C}{{\mathbf Y}}_{i-1,j-1}+{\varvec{\widetilde{\epsilon }}}_{i,j}. \end{aligned}$$

Now, by the results of Ruiz-Medina (2011a), under Assumptions A, \(\mathbf Y\) has a unique stationary solution given by

$$\begin{aligned} {{\mathbf {Y}}}_{i,j}=\sum _{k=0}^{\infty }\sum _{l=0}^{\infty }\sum _{r=0}^{\infty }\frac{\left( k+l+r\right) !}{k!\,l!\,r!}\,{{\mathbb {A}}}^{k}{{\mathbb {B}}}^{l}{{\mathbb {C}}}^{r}\,\widetilde{\varvec{\epsilon }}_{i-k-r,j-l-r}. \end{aligned}$$

Since \({{\widetilde{\mathbf {X}}}}_{{i,j}}={\widetilde{\varvec{\mathcal {D}}}} {\varvec{\mathcal {E}}}{{\mathbf {Y}}}_{{i,j}}\),

$$\begin{aligned} {\widetilde{{\mathbf {X}}}}_{i,j}=\sum _{k=0}^{\infty }\sum _{l=0}^{\infty }\sum _{r=0}^{\infty }\frac{\left( k+l+r\right) !}{k!\,l!\,r!}\,{\widetilde{\varvec{\mathcal {D}}}}{\varvec{\mathcal {E}}}\,{{\mathbb {A}}}^{k}{{\mathbb {B}}}^{l}{{\mathbb {C}}}^{r}\,\widetilde{\varvec{\epsilon }}_{i-k-r,j-l-r}. \end{aligned}$$
(8.1)

By definition (2.4)

$$\begin{aligned} \widetilde{\mathbf {X}}_{i,j}&= {\left( {{\mathbf {X}}}_{iT_{1},j}, {{{\mathbf {X}}}}_{iT_{1}+1,j}, \dots , {{\mathbf {X}}}_{iT_{1}+T_{1}-1,j}\right) }^{\prime } ,\\ {\mathbf {X}}_{i,j}&= {\left( X_{i,jT_{2}}, X_{i,jT_{2}+1}, \ldots , X_{i,jT_{2}+T_{2}-1}\right) }^{\prime }. \end{aligned}$$

For \((i,j)\in {{\mathbb {Z}}}^{2}\), write \(i=i^{*}+\left[ \frac{i}{T_{1}}\right] T_{1}\) and \(j=j^{*}+\left[ \frac{j}{T_{2}}\right] T_{2}\). It follows that \(X_{i,j}\) is the \(\left( i^{*}T_{2}+j^{*}\right) \)-th member of the vector \({{\widetilde{\mathbf {X}}}}_{\left[ \frac{i}{T_{1}}\right] ,\left[ \frac{j}{T_{2}}\right] }\); that is, \({\pi }_{i^{*}T_{2}+j^{*}}{{\widetilde{\mathbf {X}}}}_{\left[ \frac{i}{T_{1}}\right] ,\left[ \frac{j}{T_{2}}\right] }=X_{i,j}\). Since \(i^{*}T_{2}+j^{*}=\left( i-T_{1}\left[ \frac{i}{T_{1}}\right] -\left[ \frac{j}{T_{2}}\right] \right) T_{2}+j\), we have

$$\begin{aligned} {\pi }_{\left( i-T_{1}\left[ \frac{i}{T_{1}}\right] -\left[ \frac{j}{T_{2}} \right] \right) T_{2}+j}{{\widetilde{\mathbf {X}}}}_{\left[ \frac{i}{T_{1}}\right] ,\left[ \frac{j}{T_{2}}\right] }=X_{i,j}. \end{aligned}$$
(8.2)

Using Eqs. (8.1) and (8.2) gives

$$\begin{aligned} X_{i,j}=\sum _{k=0}^{\infty }\sum _{l=0}^{\infty }\sum _{r=0}^{\infty }\frac{\left( k+l+r\right) !}{k!\,l!\,r!}\,{\pi }_{\left( i-T_{1}\left[ \frac{i}{T_{1}}\right] -\left[ \frac{j}{T_{2}}\right] \right) T_{2}+j}\,{\widetilde{\varvec{\mathcal {D}}}}{\varvec{\mathcal {E}}}\,{{\mathbb {A}}}^{k}{{\mathbb {B}}}^{l}{{\mathbb {C}}}^{r}\,\widetilde{{\varvec{\epsilon }}}_{\left[ \frac{i}{T_{1}}\right] -k-r,\left[ \frac{j}{T_{2}}\right] -l-r}. \end{aligned}$$

\(\square \)
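For numerical work the series above must be truncated. The following is a minimal sketch (ours, not from the paper) that evaluates the moving-average solution with the triple sum truncated to \(k+l+r\le M\); the operators \({\mathbb {A}},{\mathbb {B}},{\mathbb {C}}\) are stood in for by finite matrices, and `eps` is a hypothetical dictionary of innovation vectors indexed by site. Convergence of the truncation relies on the norm conditions in Assumptions A.

```python
import numpy as np
from math import factorial

def truncated_ma_solution(A, B, C, eps, i, j, M=15):
    """Evaluate the moving-average series of Theorem 3.4 at site (i, j),
    truncated to k + l + r <= M.

    A, B, C : (d, d) arrays standing in for the operators on H^(T1*T2).
    eps     : dict mapping a site (i, j) to a length-d innovation vector.
    """
    d = A.shape[0]
    Y = np.zeros(d)
    for k in range(M + 1):
        for l in range(M + 1 - k):
            for r in range(M + 1 - k - l):
                # multinomial coefficient (k+l+r)! / (k! l! r!)
                coef = factorial(k + l + r) // (factorial(k) * factorial(l) * factorial(r))
                op = (np.linalg.matrix_power(A, k)
                      @ np.linalg.matrix_power(B, l)
                      @ np.linalg.matrix_power(C, r))
                Y = Y + coef * (op @ eps[(i - k - r, j - l - r)])
    return Y
```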

Proof of Lemma 4.1

For \(h\in H\) define \({\left\langle \varvec{\theta },h\right\rangle }_{H}=\left[ \left\langle {\theta }_1,h\right\rangle _{H},\dots ,\left\langle {\theta }_m,h\right\rangle _{H}\right] ^{\prime }\), and let \(\left\{ e_1,e_2,\dots \right\} \) be an orthonormal basis for \({H}\). Using Parseval's identity in (4.1) gives

$$\begin{aligned} S(\varvec{\theta })=\sum ^{\infty }_{k=1}{\sum ^n_{i=1}{{ \left| \left\langle Y_i-{{\mathbf {x}}}_i\varvec{\theta },e_k \right\rangle _{H} \right| }^2}}. \end{aligned}$$

But, for each \(k\), \(\sum ^n_{i=1}{{\left| \left\langle Y_i-{{\mathbf {x}}}_i\varvec{\theta },e_k\right\rangle _{H} \right| }^2}\) is minimized by \(\widehat{\varvec{\theta }}={\left( {\mathbf {\mathcal {X}}}^{\prime }{\mathbf {\mathcal {X}}}\right) }^{-1}{\mathbf {\mathcal {X}}}^{\prime }\mathbf {Y}\). Indeed, the corresponding regression equation is

$$\begin{aligned} \left\langle Y_i,e_k\right\rangle _{H} =\left\langle {{\mathbf {x}}}_i\varvec{\theta } ,e_k\right\rangle _{H} +\left\langle {\varepsilon }_i,e_k\right\rangle _{H} ={{ \mathbf {x}}}_i\left\langle \varvec{\theta } ,e_k\right\rangle _{H} +\left\langle { \varepsilon }_i,e_k\right\rangle _{H} ,\ \ i=1,\dots ,n. \end{aligned}$$

The least squares solution of this system is

$$\begin{aligned} {\left\langle \widehat{\varvec{\theta }},e_k\right\rangle }_{H}={\left( {\mathbf {\mathcal {X}}}^{\prime }{\mathbf {\mathcal {X}}}\right) }^{{-1}} {\mathbf {\mathcal {X}}}^{\prime }\left\langle \mathbf {Y},e_k\right\rangle _{H} =\left\langle {{\left( {\mathbf {\mathcal {X}}}^{\prime }{\mathbf {\mathcal {X}}}\right) }^{{-1}}{\mathbf {\mathcal {X}}}^{\prime }\mathbf {Y},e_k} \right\rangle _{H}. \end{aligned}$$

\(\square \)
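In practice the estimator can be computed coordinatewise in a truncated basis. Below is a minimal numerical sketch (ours; the function name and the truncation level \(K\) are illustrative), assuming each response \(Y_i\) is represented by its first \(K\) basis coefficients.

```python
import numpy as np

def functional_ols(X, Y_coef):
    """Least squares with H-valued responses, in the spirit of Lemma 4.1.

    X      : (n, m) real design matrix whose rows are the x_i.
    Y_coef : (n, K) array; row i holds the first K basis coefficients
             <Y_i, e_k> of the response Y_i.

    Returns an (m, K) array whose j-th row holds the basis coefficients
    of the estimated parameter theta_hat_j.
    """
    # The normal equations (X'X) theta = X'Y hold coordinatewise in the
    # basis, so a single linear solve recovers all coefficients at once.
    return np.linalg.solve(X.T @ X, X.T @ Y_coef)
```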


Cite this article

Haghbin, H., Shishebor, Z. & Soltani, A.R. Hilbertian spatial periodically correlated first order autoregressive models. Adv Data Anal Classif 8, 303–319 (2014). https://doi.org/10.1007/s11634-014-0172-8

