Abstract
This paper addresses convergence issues of Lohmöller’s procedure for computing the components in the PLS-PM algorithm. Additional datasets and proofs are provided to highlight the convergence failure of this procedure. Consequently, a new procedure based on the signless Laplacian matrix of the undirected graph between constructs is introduced. In several cases specified in the paper, both monotone convergence and error convergence of the new procedure are established. Several comparisons are presented between the new procedure and the two conventionally used procedures (Lohmöller’s and Hanafi-Wold’s procedures).





References
Esposito VV, Chin W, Henseler J, Wang W (eds) (2010) Handbook of partial least squares: concepts, methods and applications. Springer, Heidelberg
Hair J, Hult G, Ringle C, Sarstedt M (2017) A primer on partial least squares structural equation modeling (PLS-SEM), 2nd edn. Sage, Thousand Oaks
Hanafi M (2007) PLS path modelling: computation of latent variables with the estimation mode B. Comput Stat 22(2):275–292. https://doi.org/10.1007/s00180-007-0042-3
Hanafi M, Dolce P, El Hadri Z (2021) Generalized properties for Hanafi-Wold’s procedure in partial least squares path modeling. Comput Stat 36:603–614. https://doi.org/10.1007/s00180-020-01015-w
Henseler J (2010) On the convergence of the partial least squares path modeling algorithm. Comput Stat 25(1):107–120
Jöreskog K (1970) A general method for the analysis of covariance structure. Biometrika 57:239–251
Li JS, Zhang XD (1998) On the Laplacian eigenvalues of a graph. Linear Algebra Appl 285(1–3):305–307
Li Y (2005) PLS-GUI: graphic user interface for partial least squares (PLS-PC 1.8), version 2.0.1 beta. University of South Carolina, Columbia
Monecke A, Leisch F (2012) semPLS: structural equation modeling using partial least squares. J Stat Softw 48(3):1–32
Sanchez G (2013) PLS path modeling with R. Trowchez Editions, Berkeley. http://www.gastonsanchez.com/PLS Path Modeling with R.pdf
Addinsoft SARL (2007–2008) XLSTAT-PLSPM. Paris, France. http://www.xlstat.com/en/products/xlstat-plspm
Schur J (1911) Bemerkungen zur Theorie der beschränkten Bilinearformen mit unendlich vielen Veränderlichen. J Reine Angew Math 140:1–28
Tenenhaus M, Vinzi VE, Chatelin YM, Lauro C (2005) PLS path modeling. Comput Stat Data Anal 48:159–205
Tenenhaus M, Tenenhaus A, Groenen PJ (2017) Regularized generalized canonical correlation analysis: a framework for sequential multiblock component methods. Psychometrika 82:737–777
Wold H (1982) Soft modelling: the basic design and some extensions. In: Jöreskog KG, Wold H (eds) Systems under indirect observation, vol 2. North Holland, Amsterdam, pp 1–54
Wold H (1985) Partial least squares. In: Kotz S, Johnson NL (eds) Encyclopedia of statistical sciences, vol 6. Wiley, New York, pp 581–591
Appendices
Appendix A
1.1 A1
Proof of Lemma 1.
The proof proceeds by induction on the iteration s. \({\mathbf{z }_k}^{(s)}\) denotes the sequence of components generated by Lohmöller’s procedure applied to \((\mathbf{X }_1,... ,\mathbf{X }_K)\) and initialized by the weights \(\left( \tilde{\mathbf{w }}_1^{(0)},..., \tilde{\mathbf{w }}_K^{(0)}\right) \). \({\mathbf{y }_k}^{(s)}\) denotes the sequence of components generated by Lohmöller’s procedure applied to \((\mathbf{Q }_1,... ,\mathbf{Q }_K)\) and initialized by the weights \(\left( \tilde{\mathbf{u }}_1^{(0)},..., \tilde{\mathbf{ u }}_K^{(0)}\right) \) such that \({\mathbf{z }_k}^{(0)}={\mathbf{y }_k}^{(0)}\) for each \(k=1,2,\cdots ,K\). Suppose that \({\mathbf{z }_k}^{(s)}={\mathbf{y }_k}^{(s)}\) for all \(s=0,1,2, \ldots , s_0\). Let us show that \({\mathbf{z }_k}^{(s_0+1)}={\mathbf{y }_k}^{(s_0 +1)}\).
For each block k at iteration \(s_0+1\) we have:
Substituting \( \mathbf{Q }_k= \mathbf{X }_k \left( \dfrac{{\mathbf{X }_k}^{'}\mathbf{X }_k}{n} \right) ^{-1/2}\), it follows
\(\square \)
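As a numerical illustration (a sketch with arbitrary simulated data, not the authors' code), the transformation \(\mathbf{Q }_k= \mathbf{X }_k \left( {\mathbf{X }_k}^{'}\mathbf{X }_k/n \right) ^{-1/2}\) used in this proof can be checked directly; the key property is that the transformed block satisfies \({\mathbf{Q }_k}^{'}\mathbf{Q }_k=n\mathbf{I }\):

```python
import numpy as np

# Illustrative sketch of the block transformation Q_k = X_k (X_k' X_k / n)^(-1/2)
# for a single block X_k with arbitrary dimensions.
rng = np.random.default_rng(0)
n, p = 50, 4
X = rng.standard_normal((n, p))

# Inverse square root of the scaled cross-product matrix via eigendecomposition
# (well defined whenever X has full column rank).
S = X.T @ X / n
vals, vecs = np.linalg.eigh(S)
S_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T

Q = X @ S_inv_sqrt

# The transformed block satisfies Q'Q = n * I (identity of size p).
print(np.allclose(Q.T @ Q, n * np.eye(p)))  # True
```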
1.2 A2
Proof of Lemma 2.
Substituting step 4 into step 5 of the equivalent form of Lohmöller’s procedure (see column 2 of Table 4) and remarking that \({\mathbf{Q}}_{k}^{'}{ {\mathbf{Q}}_k}=n\mathbf{I }\), it follows:
Substituting step 3 in (26), it follows:
with \(\lambda _k^{(s)}= \bigg \Vert {\sum _{l=1}^K} \mathbf{R }_{kl}( {\mathbf{u}}^{(s)}) {\mathbf{u}}_l^{(s)} \bigg \Vert \).
Equivalently,
\(\square \)
1.3 A3
Proof of Lemma 3.
Recall that \( {\mathbf{H}} \mathbf{a } = {\varvec{\Gamma }} \mathbf{b }\) and \(\Vert \mathbf{a }_k\Vert =\Vert \mathbf{b }_k\Vert =1\).
By expanding:
and
By remarking that \( {\mathbf{a }}^{'} {\varvec{\Gamma }} \mathbf{b } = {\mathbf{a }}^{'} {\mathbf{H}} \mathbf{a } \) and summing (28) and (29), it follows that \((\mathbf{b }-\mathbf{a })^{'} \left[ {\mathbf{H}} + {\varvec{\Gamma }} \right] (\mathbf{b }-\mathbf{a }) = \mathbf{b }' {\mathbf{H}} \mathbf{b } - \mathbf{a }' {\mathbf{H}} \mathbf{a }\). \(\square \)
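The identity of Lemma 3 can be verified numerically under its hypotheses (a hedged sketch; block sizes and data are arbitrary): given a symmetric \(\mathbf{H }\) and unit-norm blocks \(\mathbf{a }_k\), choosing \(\lambda _k = \Vert (\mathbf{H }\mathbf{a })_k\Vert \) and \(\mathbf{b }_k = (\mathbf{H }\mathbf{a })_k/\lambda _k\) enforces \( {\mathbf{H}} \mathbf{a } = {\varvec{\Gamma }} \mathbf{b }\) with \({\varvec{\Gamma }}\) block-diagonal:

```python
import numpy as np

# Numerical sanity check of the identity in Lemma 3 (a sketch, not the paper's code):
# if H a = Gamma b with Gamma block-diagonal (lambda_k * I) and ||a_k|| = ||b_k|| = 1,
# then (b - a)'[H + Gamma](b - a) = b'H b - a'H a.
rng = np.random.default_rng(1)
sizes = [3, 2, 4]                       # block sizes p_1, ..., p_K (arbitrary)
p = sum(sizes)
cuts = np.cumsum(sizes)[:-1]            # split points between blocks

M = rng.standard_normal((p, p))
H = M + M.T                             # symmetric H

# Unit-norm blocks a_k stacked into a.
a = np.concatenate([v / np.linalg.norm(v)
                    for v in np.split(rng.standard_normal(p), cuts)])

# Choose lambda_k = ||(Ha)_k|| and b_k = (Ha)_k / lambda_k, so that Gamma b = H a.
Ha_blocks = np.split(H @ a, cuts)
lams = [np.linalg.norm(h) for h in Ha_blocks]
b = np.concatenate([h / l for h, l in zip(Ha_blocks, lams)])
Gamma = np.diag(np.concatenate([l * np.ones(s) for l, s in zip(lams, sizes)]))

lhs = (b - a) @ (H + Gamma) @ (b - a)
rhs = b @ H @ b - a @ H @ a
print(np.isclose(lhs, rhs))  # True
```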
1.4 A4
Proof of Lemma 4. Using Lemma 3 with \(\mathbf{a } = {\mathbf{u}}^{(s)}, \mathbf{b } = {\mathbf{u}}^{(s+1)}, {\mathbf{H}} =\mathbf{R }^{(s)}\) and \( {\varvec{\Gamma }} = {\varvec{\Lambda }}^{(s)},\) it follows,
Expanding,
(1) If the centroid scheme is considered \( \left( \theta _{kl}^{(s)}= sign \left( r_{kl}^{(s)}\right) \right) \), then
$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \mathbf{R }^{(s)} + {\varvec{\Lambda }}^{(s)} \right] \mathbf{e }^{(s)} = n \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)} - n \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s)}. \end{aligned}$$On one hand,
$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s)} = \rho ^{(s)}. \end{aligned}$$On the other hand,
$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)}\le & {} \left| \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)}\right| \le \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} \left| r_{kl}^{(s+1)} \right| \\= & {} \rho ^{(s+1)}. \end{aligned}$$Thus,
$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \mathbf{R }^{(s)} + {\varvec{\Lambda }}^{(s)} \right] \mathbf{e }^{(s)} \le n \left( \rho ^{(s+1)}-\rho ^{(s)} \right) . \end{aligned}$$
(2) If the factorial scheme is considered, then
$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \mathbf{R }^{(s)} + {\varvec{\Lambda }}^{(s)} \right] \mathbf{e }^{(s)}= n \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} r_{kl}^{(s)} r_{kl}^{(s+1)} - n \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} \left( r_{kl}^{(s)} \right) ^2. \end{aligned}$$On one hand,
$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} \left( r_{kl}^{(s)} \right) ^2= \rho ^{(s)}. \end{aligned}$$On the other hand, since \( c_{kl}= c^2_{kl}\), it follows,
$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} r_{kl}^{(s)} r_{kl}^{(s+1)}&\le \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K { c_{kl}}^2 \left| r_{kl}^{(s)} \right| \left| r_{kl}^{(s+1)} \right| \\&\le \sqrt{ \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K { \left( c_{kl} r_{kl}^{(s)} \right) }^2 }\times \sqrt{ \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K \left( c_{kl} r_{kl}^{(s+1)} \right) ^2} \\&\le \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)} } \end{aligned}$$$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \mathbf{R }^{(s)} + {\varvec{\Lambda }}^{(s)} \right] \mathbf{e }^{(s)} \le n \left( \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)}}- \rho ^{(s)} \right) . \end{aligned}$$Since the left side of the previous inequality is non negative, it follows:
$$\begin{aligned} 0\le \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)}}- \rho ^{(s)}= \sqrt{\rho ^{(s)}} \left( \sqrt{\rho ^{(s+1)}}- \sqrt{ \rho ^{(s)}} \right) . \end{aligned}$$As a consequence,
$$\begin{aligned} \sqrt{\rho ^{(s+1)}} \ge \sqrt{ \rho ^{(s)}} \end{aligned}$$Thus,
$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \mathbf{R }^{(s)} + {\varvec{\Lambda }}^{(s)} \right] \mathbf{e }^{(s)} \le n \left( \rho ^{(s+1)}-\rho ^{(s)} \right) . \end{aligned}$$
\(\square \)
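The Cauchy-Schwarz step used for the factorial scheme above can be illustrated with simulated quantities (a sketch; the design matrix \(c_{kl}\in \{0,1\}\) and the correlation matrices are randomly generated, not taken from a real model):

```python
import numpy as np

# Sketch of the Cauchy-Schwarz bound in the factorial-scheme case:
# sum c_kl r_kl^(s) r_kl^(s+1) <= sqrt(rho^(s)) * sqrt(rho^(s+1)),
# using c_kl = c_kl^2 for a 0/1 design matrix C (hypothetical values).
rng = np.random.default_rng(2)
K = 5
C = (rng.random((K, K)) < 0.5).astype(float)
C = np.triu(C, 1); C = C + C.T                  # symmetric 0/1, zero diagonal

r_s = rng.uniform(-1, 1, (K, K)); r_s = (r_s + r_s.T) / 2     # "correlations" at step s
r_s1 = rng.uniform(-1, 1, (K, K)); r_s1 = (r_s1 + r_s1.T) / 2  # at step s+1

rho_s = np.sum(C * r_s ** 2)                    # rho^(s) under the factorial scheme
rho_s1 = np.sum(C * r_s1 ** 2)                  # rho^(s+1)

lhs = np.sum(C * r_s * r_s1)
print(lhs <= np.sqrt(rho_s) * np.sqrt(rho_s1))  # True
```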
1.5 A5
Proof of Lemma 5.
Since \({{\varvec{\Omega }}}\) is symmetric positive semidefinite, it can be decomposed as \( {\varvec{\Omega }}= {\sum }^K_{i=1} \gamma _i \mathbf{a }_i \mathbf{a }'_i \), where the \(\gamma _i\) are the eigenvalues of \( {\varvec{\Omega }}\) and the \(\mathbf{a }_i\) the associated eigenvectors.
Thus, each element \(\omega _{kl} \) can be written as
where \(a_{ki}\) is the kth component of the vector \(\mathbf{a }_i\).
Recall that \(\mathbf{A }\left( {{\varvec{\Omega }}} \right) \) is given as:
For any vector \(\mathbf{v } = (\mathbf{v }'_1,\cdots ,\mathbf{v }'_K)'\), clearly
Substituting (30) in the previous equation, it follows:
where \({\varvec{\alpha }}_i=(\sqrt{\gamma _i}a_{1i} \mathbf{{v}}'_1,\ldots ,\sqrt{\gamma _i}a_{Ki} \mathbf{{v}}'_K)'\) and \(\mathbf{Q }=[\mathbf{Q }_1|\mathbf{Q }_2 | \ldots | \mathbf{Q }_K ]\). \(\square \)
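The spectral decomposition underlying this proof can be sketched as follows (illustrative only; \({\varvec{\Omega }}\) is generated at random, and \(\omega _{kl}=\sum _i \gamma _i a_{ki} a_{li}\) is recovered entrywise):

```python
import numpy as np

# Sketch of the spectral decomposition used in the proof of Lemma 5:
# a symmetric PSD Omega factors as sum_i gamma_i a_i a_i'.
rng = np.random.default_rng(3)
K = 4
B = rng.standard_normal((K, K))
Omega = B @ B.T                     # symmetric positive semidefinite by construction

gammas, A = np.linalg.eigh(Omega)   # columns of A are the eigenvectors a_i
recon = sum(g * np.outer(A[:, i], A[:, i]) for i, g in enumerate(gammas))
print(np.allclose(recon, Omega))  # True
```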
1.6 A6
Proof of Lemma 6.
The proof begins by showing that the signless Laplacian matrix \(\mathbf {D} + \mathbf {C}\) is positive semidefinite.
Let \({\varvec{\alpha }}=({\varvec{\alpha }}_1, \ldots , {\varvec{\alpha }}_K)\) and \(\vert {\varvec{\alpha }}\vert =(\vert {\varvec{\alpha }}_1\vert , \ldots , \vert {\varvec{\alpha }}_K\vert )\). The Laplacian matrix \(\mathbf{D }-\mathbf{C }\) is symmetric positive semidefinite (Li and Zhang 1998). It follows that \(\vert {\varvec{\alpha }}\vert ^{'} (\mathbf{D }-\mathbf{C })\vert {\varvec{\alpha }}\vert \ge 0\), or equivalently
Suppose that \({\varvec{\Omega }} \) is not symmetric positive semidefinite. Then there is a vector \( \beta =(\beta _1,\ldots ,\beta _K)\ne \mathbf{0 }\) such that:
By remarking that the right-hand side of (32) is strictly positive, it follows:
Thus,
In view of (31), inequality (34) is impossible; this contradiction shows that \({\varvec{\Omega }} \) is positive semidefinite.
We now prove that the matrix \( (\mathbf{D }+\mathbf{C }) \circ {\varvec{\theta }}^{(s)} \) is symmetric positive semidefinite.
(a) Factorial scheme: \({\varvec{\theta }}^{(s)}= \left[ \theta ^{(s)}_{kl}\right] = \left[ cor\left( \mathbf{z }^{(s)}_k,\mathbf{z }^{(s)}_l\right) \right] \).
Recall that the Hadamard product of two symmetric positive semidefinite matrices is itself symmetric positive semidefinite (Schur 1911). Since the correlation matrix \({\varvec{\theta }}^{(s)}\) is positive semidefinite, \((\mathbf{D }+\mathbf{C }) \circ {\varvec{\theta }}^{(s)}\) is therefore positive semidefinite.
(b) Centroid scheme: \({\varvec{\theta }}^{(s)}= \left[ \theta ^{(s)}_{kl}\right] = \left[ sign\left( cor(\mathbf{z }^{(s)}_k,\mathbf{z }^{(s)}_l)\right) \right] \).
\({\varvec{\theta }}^{(s)}\) can be written as \({\varvec{\theta }}^{(s)}= {\varvec{\theta }}_{+}^{(s)} - {\varvec{\theta }}_{-}^{(s)}\), where \({\varvec{\theta }}_{+}^{(s)} \) is the positive part of \({\varvec{\theta }}^{(s)}\) and \({\varvec{\theta }}_{-}^{(s)}\) its negative part. Therefore \( \mathbf{C } \circ {\varvec{\theta }}^{(s)} = \mathbf{C }\circ {\varvec{\theta }}_{+}^{(s)} - \mathbf{C } \circ {\varvec{\theta }}_{-}^{(s)}. \)
\( \mathbf{C }\circ {\varvec{\theta }}_{+}^{(s)}\) and \( \mathbf{C }\circ {\varvec{\theta }}_{-}^{(s)}\) can be seen as the adjacency matrices of two undirected graphs denoted respectively \( {\mathbf{G }_{+}}^{(s)}\) and \({\mathbf{G }_{-}}^{(s)}.\)
Let \({\mathbf{D }_+}^{(s)}\) and \({\mathbf{D }_- }^{(s)}\) be the diagonal matrices of degrees associated respectively with \( {\mathbf{G }_{+}}^{(s)}\) and \({\mathbf{G }_{-}}^{(s)}\). The signless Laplacian matrix \( \left( {\mathbf{D }_+}^{(s)} + \mathbf{C }\circ {{\varvec{\theta }}_{+}}^{(s)} \right) \) of \({\mathbf{G }_{+}}^{(s)}\) is positive semidefinite, the Laplacian matrix \( \left( {\mathbf{D }_-}^{(s)} - \mathbf{C }\circ {{\varvec{\theta }}_{-}}^{(s)} \right) \) of \({\mathbf{G }_{-}}^{(s)}\) is also positive semidefinite, and clearly \(\mathbf{D } = {\mathbf{D }_+}^{(s)} + {\mathbf{D }_-}^{(s)}\). Moreover, the matrix \(\left( \mathbf{D } + \mathbf{C } \right) \circ {\varvec{\theta }}^{(s)}\) can be written as:
It results that the matrix \(\left( \mathbf{D } + \mathbf{C } \right) \circ {\varvec{\theta }}^{(s)} \) is symmetric positive semidefinite as the sum of two positive semidefinite matrices. \(\square \)
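The two graph-theoretic facts the proof relies on, positive semidefiniteness of the Laplacian \(\mathbf{D }-\mathbf{C }\) and of the signless Laplacian \(\mathbf{D }+\mathbf{C }\), can be checked numerically on a random undirected graph (a sketch, not the paper's code):

```python
import numpy as np

# Numerical illustration of the key facts in Lemma 6: for an undirected graph
# with 0/1 adjacency matrix C and degree matrix D, both the Laplacian D - C
# and the signless Laplacian D + C are positive semidefinite.
rng = np.random.default_rng(4)
K = 6
C = (rng.random((K, K)) < 0.4).astype(float)
C = np.triu(C, 1); C = C + C.T          # symmetric 0/1 adjacency, zero diagonal
D = np.diag(C.sum(axis=1))              # degrees on the diagonal

lap_eigs = np.linalg.eigvalsh(D - C)    # Laplacian spectrum
sig_eigs = np.linalg.eigvalsh(D + C)    # signless Laplacian spectrum
print(lap_eigs.min() >= -1e-10, sig_eigs.min() >= -1e-10)  # True True
```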
1.7 A7
Proof of Lemma 7.
Using Lemma 3 with \(\mathbf{a } = {\mathbf{u}}^{(s)}, \mathbf{b } = {\mathbf{u}}^{(s+1)}, {\mathbf{H}} = \tilde{\mathbf{R }}^{(s)}\) and \( {\varvec{\Gamma }} = \tilde{ {\varvec{\Lambda }}}^{(s)},\) it follows,
Expanding,
(1) If the centroid scheme is considered \( \left( \theta _{kl}^{(s)}= sign \left( r_{kl}^{(s)}\right) \right) \), it follows,
$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \tilde{\mathbf{R }}^{(s)}+ \tilde{ {\varvec{\Lambda }}}^{(s)} \right] \mathbf{e }^{(s)} = n \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)} - n \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s)}. \end{aligned}$$On one hand,
$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s)} = \rho \left( {\mathbf{{z}}_1^{(s)}, \mathbf{{z}}_2^{(s)}, \cdots , \mathbf{{z}}_K^{(s)}} \right) . \end{aligned}$$On the other hand,
$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)}\le & {} \left| \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)}\right| \le \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} \left| r_{kl}^{(s+1)} \right| \\= & {} \rho ^{(s+1)}. \end{aligned}$$Thus,
$$\begin{aligned} \mathbf{e }^{(s)'} \bigg [ \tilde{\mathbf{R }}^{(s)} + \tilde{ {\varvec{\Lambda }}}^{(s)} \bigg ]\mathbf{e }^{(s)} \le n \left( \rho ^{(s+1)}-\rho ^{(s)} \right) . \end{aligned}$$
(2) If the factorial scheme is considered, then
$$\begin{aligned} \mathbf{e }^{(s)'} \bigg [ \tilde{\mathbf{R }}^{(s)} + \tilde{ {\varvec{\Lambda }}}^{(s)} \bigg ]\mathbf{e }^{(s)}= n \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} r_{kl}^{(s)} r_{kl}^{(s+1)} - n \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} \left( r_{kl}^{(s)} \right) ^2. \end{aligned}$$On one hand,
$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} \left( r_{kl}^{(s)} \right) ^2= \rho \left( {\mathbf{{z}}_1^{(s)}, \mathbf{{z}}_2^{(s)}, \cdots , \mathbf{{z}}_K^{(s)}} \right) . \end{aligned}$$On the other hand, since \( c_{kl}= c^2_{kl}\), it follows,
$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} r_{kl}^{(s)} r_{kl}^{(s+1)}&\le \sum _{\begin{array}{c} k,l = 1 \end{array}}^K { c_{kl}}^2 \left| r_{kl}^{(s)} \right| \left| r_{kl}^{(s+1)} \right| \\&\le \sqrt{ \sum _{\begin{array}{c} k,l = 1 \end{array}}^K { \left( c_{kl} r_{kl}^{(s)} \right) }^2 }\times \sqrt{ \sum _{\begin{array}{c} k,l = 1 \end{array}}^K \left( c_{kl} r_{kl}^{(s+1)} \right) ^2} \\&\le \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)} } \end{aligned}$$It follows,
$$\begin{aligned} \mathbf{e }^{(s)'} \bigg [ \tilde{\mathbf{R }}^{(s)} + \tilde{ {\varvec{\Lambda }}}^{(s)} \bigg ]\mathbf{e }^{(s)} \le n \left( \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)}}- \rho ^{(s)} \right) . \end{aligned}$$Since the left side of the previous inequality is non negative, it follows:
$$\begin{aligned} 0\le \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)}}- \rho ^{(s)}= \sqrt{\rho ^{(s)}} \left( \sqrt{\rho ^{(s+1)}}- \sqrt{ \rho ^{(s)}} \right) . \end{aligned}$$As a consequence,
$$\begin{aligned} \sqrt{\rho ^{(s+1)}} \ge \sqrt{ \rho ^{(s)}} \end{aligned}$$Thus,
$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \tilde{\mathbf{R }}^{(s)} + \tilde{ {\varvec{\Lambda }}}^{(s)} \right] \mathbf{e }^{(s)} \le n \left( \rho ^{(s+1)}-\rho ^{(s)} \right) . \end{aligned}$$
\(\square \)
Appendix B
Hanafi, M., El Hadri, Z., Sahli, A. et al. Overcoming convergence problems in PLS path modelling. Comput Stat 37, 2437–2470 (2022). https://doi.org/10.1007/s00180-022-01204-9