Abstract
Partial correspondence analysis (Yanai, in: Diday, Escoufier, Lebart, Pagès, Schektman, Thomassone (eds) Data analysis and informatics IV, North-Holland, Amsterdam, pp 193–207, 1986; in: Hayashi, Jambu, Diday, Osumi (eds) Recent developments in clustering and data analysis, Academic Press, Boston, pp 259–266, 1988) was introduced in the statistical literature to eliminate the effects of an ancillary criterion variable on the relationship between two categorical characters. It is well known that partial and classical correspondence analyses do not perform well when one (or both) of the variables forming the contingency table has an ordinal structure. Cumulative correspondence analysis is a method that takes into account the information contained in the ordinal variable(s). Nevertheless, even in this case, a third (ancillary) categorical variable may influence the existing relationship. In this paper, we extend Yanai’s partial approach to cumulative correspondence analysis and, by using suitable orthogonal projectors, we derive some of its properties. Finally, we present two real case studies.
References
Agresti, A. (2007). An introduction to categorical data analysis. John Wiley & Sons.
Barlow, R., Bartholomew, D., Bremner, J., & Brunk, H. (1972). Statistical inference under order restrictions. John Wiley.
Beh, E. J., & Lombardo, R. (2014). Correspondence analysis: Theory, practice and new strategies. Wiley.
Beh, E. J., D’Ambra, L., & Simonetti, B. (2007). Ordinal correspondence analysis based on cumulative chi-squared test. In Correspondence analysis and related methods. Rotterdam: CARME 2007.
Beh, E. J. (1997). Simple correspondence analysis of ordinal cross-classifications using orthogonal polynomials. Biometrical Journal, 39, 589–613.
Beh, E. J. (2001). Confidence circles for correspondence analysis using orthogonal polynomials. Journal of Applied Mathematics and Decision Sciences, 5(1), 35–45.
Beh, E. (2004). Simple correspondence analysis: A bibliographic review. International Statistical Review, 72(2), 257–284.
Beh, E. J., D’Ambra, L., & Simonetti, B. (2011). Correspondence analysis of cumulative frequencies using a decomposition of Taguchi’s statistic. Communications in Statistics-Theory and Methods, 40, 1620–1632.
Beh, E. J., & Lombardo, R. (2012). A genealogy of correspondence analysis. Australian & New Zealand Journal of Statistics, 54(2), 137–168.
Benzécri, J. P. (1973). L’Analyse des données, Tome II: L’Analyse des correspondances. Dunod.
Böckenholt, U., & Böckenholt, I. (1990). Canonical analysis of contingency tables with linear constraints. Psychometrika, 55, 633–639.
Cailliez, F., & Pagès, J. P. (1976). Introduction à l’analyse des données. SMASH.
Cuadras, C., & Cuadras, D. (2008). A unified approach for representing rows and columns in contingency tables, http://dugi-doc.udg.edu/bitstream/handle/10256/720/cuadrasnew.pdf.
D’Ambra, L., & Lauro, N. (1989). Non symmetrical analysis of three-way contingency tables. In R. Coppi & S. Bolasco (Eds.), Multiway data analysis (pp. 301–315). Elsevier Science Publishers B. V.
D’Ambra, A., & Amenta, P. (2022). An extension of correspondence analysis based on the multiple Taguchi’s index to evaluate the relationships between three categorical variables graphically: An application to the Italian football championship. Annals of Operations Research. https://doi.org/10.1007/s10479-022-04803-3.
D’Ambra, A., Amenta, P., & Beh, E. J. (2021). Confidence regions and other tools for an extension of correspondence analysis based on cumulative frequencies. AStA Advances in Statistical Analysis, 105, 405–429.
D’Ambra, L., Amenta, P., & D’Ambra, A. (2018). Decomposition of cumulative chi-squared statistics, with some new tools for their interpretation. Statistical Methods & Applications, 27(2), 297–318.
D’Ambra, L., Beh, E. J., & Camminatiello, I. (2014). Cumulative correspondence analysis of two-way ordinal contingency tables. Communications in Statistics-Theory and Methods, 43(6), 1099–1113.
D’Ambra, L., Köksoy, O., & Simonetti, B. (2009). Cumulative correspondence analysis of ordered categorical data from industrial experiments. Journal of Applied Statistics, 36(12), 1315–1328.
D’Ambra, L., & Lauro, N. C. (1982). Analisi in componenti principali in rapporto ad un sottospazio di riferimento. Statistica Applicata, 15, 51–67.
Daudin, J. (1980). Partial association measure and an application to qualitative regression. Biometrika, 67(3), 581–590.
Efron, B., & Tibshirani, R. (1998). An introduction to the Bootstrap. CRC Press.
Escofier, B. (1984). Analyse factorielle en référence à un modèle. Application à l’analyse de tableaux d’échanges. Revue de Statistique Appliquée, 32(4), 25–36.
Escoufier, Y. (1987). The duality diagram: A means of better practical applications. In P. Legendre & L. Legendre (Eds.), Developments in numerical ecology. NATO Advanced Study Institute (pp. 139–156). Springer-Verlag.
Fisher, R. A. (1940). The precision of discriminant functions. Annals of Eugenics, 10, 422–429.
Gerami, J., Kiani Mavi, R., Farzipoor Saen, R., et al. (2020). A novel network DEA-R model for evaluating hospital services supply chain performance. Annals of Operations Research. https://doi.org/10.1007/s10479-020-03755-w.
Gilula, Z., & Haberman, S. (1988). The analysis of multivariate contingency tables by restricted canonical and restricted association models. Journal of the American Statistical Association, 83, 760–771.
Golub, G. H., & van Loan, C. F. (1996). Matrix computations (3rd ed.). The Johns Hopkins University Press.
Goodman, L. (1986). Some useful extensions of the usual correspondence analysis approach and the usual log-linear models approach in the analysis of contingency tables. International Statistical Review, 54, 243–309.
Goodman, L. (1996). A single general method for the analysis of cross-classified data: Reconciliation and synthesis of some methods of Pearson, Yule, and Fisher, and also some methods of correspondence analysis and association analysis. Journal of the American Statistical Association, 91, 408–428.
Goodman, L. A., & Kruskal, W. H. (1954). Measures of association for cross-classifications. Journal of the American Statistical Association, 49, 732–764.
Greenacre, M. J. (1984). Theory and applications of correspondence analysis. Academic Press.
Greenacre, M. (2007). Correspondence analysis in practice (2nd ed.). Chapman & Hall/CRC.
Hirotsu, C. (1986). Cumulative chi-squared statistic as a tool for testing goodness of fit. Biometrika, 73, 165–173.
Hirotsu, C. (1990). A critical look at accumulation analysis and related methods: Discussion. Technometrics, 32, 133–136.
Horst, P. (1935). Measuring complex attitudes. Journal of Social Psychology, 6, 369–374.
Hotelling, H. (1936). Relations between two sets of variates. Biometrika, 28, 321–377.
Lebart, L., Morineau, A., & Piron, M. (2004). Statistique exploratoire multidimensionnelle. Dunod.
Lebart, L., Warwick, K., & Morineau, A. (1984). Multivariate descriptive statistical analysis. John Wiley & Sons.
Mardia, K., Bibby, J., & Kent, J. (1982). Multivariate analysis. Academic Press.
Nair, V. N. (1986). Testing in industrial experiments with ordered categorical data. Technometrics, 28(4), 283–291.
Nair, V. N. (1987). Chi-squared type tests for ordered alternatives in contingency tables. Journal of the American Statistical Association, 82, 283–291.
Nishisato, S. (1980). Analysis of categorical data: Dual scaling and its applications. University of Toronto Press.
Ozcan, Y. A., Lins, M. E., Lobo, M. S. C., et al. (2010). Evaluating the performance of Brazilian university hospitals. Annals of Operations Research, 178, 247–261.
Parsa, A. R., & Smith, B. (1993). Scoring under ordered constraints in contingency tables. Communications in Statistics-Theory and Methods, 22, 3537–3551.
Ramsay, J. (1978). Confidence regions for multidimensional scaling analysis. Psychometrika, 43, 145–160.
Rao, C. R. (1964). The use and interpretation of principal component analysis in applied research. Sankhya A, 25, 329–358.
Rao, B. R. (1969). Partial canonical correlations. Trabajos de Estadística y de Investigación Operativa, 20(2–3), 211–219.
Rao, C. R., & Yanai, H. (1979). General definition and decomposition of projectors and some applications to statistical problems. Journal of Statistical Planning and Inference, 3, 1–17.
Ringrose, T. (1992). Bootstrapping and correspondence analysis in archaeology. Journal of Archaeological Science, 19(6), 615–629.
Ringrose, T. (1996). Alternative confidence regions for canonical variate analysis. Biometrika, 83(3), 575–587.
Ritov, Y., & Gilula, Z. (1993). Analysis of contingency tables by correspondence models subject to ordered constraints. Journal of the American Statistical Association, 88, 1380–1387.
Rouyendegh, B. D., Oztekin, A., Ekong, J., et al. (2019). Measuring the efficiency of hospitals: A fully-ranking DEA-FAHP approach. Annals of Operations Research, 278, 361–378.
Sarnacchiaro, P., & D’Ambra, A. (2011). Cumulative correspondence analysis to improve the public train transport. Electronic Journal of Applied Statistical Analysis: Decision Support System and Services, 2, 15–24.
Satterthwaite, F. (1946). An approximate distribution of estimates of variance components. Biometrics Bulletin, 2, 110–114.
Schriever, B. F. (1983). Scaling of order dependent categorical variables with correspondence analysis. International Statistical Review, 51, 225–238.
Srikantan, K. S. (1970). Canonical association between nominal measurements. Journal of the American Statistical Association, 65, 284–292.
Stewart, D., & Love, W. (1968). A general canonical correlation index. Psychological Bulletin, 70, 160–163.
Taguchi, G. (1966). Statistical analysis. Maruzen.
Taguchi, G. (1974). A new statistical analysis for clinical data, the accumulating analysis, in contrast with the chi-square test. Saishin Igaku, 29, 806–813.
Takane, Y., & Hwang, H. (2002). Generalized constrained canonical correlation analysis. Multivariate Behavioral Research, 37, 163–195.
Takane, Y., Hwang, H., & Abdi, H. (2008). Regularized multiple-set canonical correlation analysis. Psychometrika, 73, 753–775.
Takane, Y., & Jung, S. (2008). Regularized partial and/or constrained redundancy analysis. Psychometrika, 73, 671–690.
Takane, Y., & Shibayama, T. (1991). Principal component analysis with external information on both subjects and variables. Psychometrika, 56, 97–120.
Takane, Y., Yanai, H., & Hwang, H. (2006). An improved method for generalized constrained canonical correlation analysis. Computational Statistics & Data Analysis, 50(1), 221–241.
Takeuchi, K., Yanai, H., & Mukherjee, B. N. (1982). The foundations of multivariate analysis. John Wiley & Sons (Asia) Pte Ltd.
Takeuchi, K., & Hirotsu, C. (1982). The cumulative chi-squares method against ordered alternatives in two-way contingency tables. Reports of Statistical Application Research, Union of Japanese Scientists and Engineers, 29, 1–13.
ter Braak, C. J. F. (1988). Partial canonical correspondence analysis. In H. H. Bock (Ed.), Classification and related methods of data analysis (pp. 551–558). North Holland.
ter Braak, C. J. F. (1986). Canonical correspondence analysis: A new eigenvector technique for multivariate direct gradient analysis. Ecology, 67, 1167–1179.
Timm, N. H., & Carlson, J. E. (1976). Part and bipartial canonical correlation analysis. Psychometrika, 41, 159–176.
van den Wollenberg, A. L. (1977). Redundancy analysis: An alternative for canonical correlation analysis. Psychometrika, 42, 207–219.
Yanai, H. (1986). Some generalizations of correspondence analysis in terms of projectors. In E. Diday, Y. Escoufier, L. Lebart, J. P. Pagès, Y. Schektman, & R. Thomassone (Eds.), Data analysis and informatics IV (pp. 193–207). North-Holland.
Yanai, H. (1988). Partial correspondence analysis and its properties. In C. Hayashi, M. Jambu, E. Diday, & N. Osumi (Eds.), Recent developments in clustering and data analysis (pp. 259–266). Academic Press.
Yanai, H., & Puntanen, S. (1993). Partial canonical correlation associated with symmetric reflexive g-inverses of the dispersion matrix. In K. Matsushita et al. (Eds.), Proceedings of the third Pacific area conference (pp. 253–264).
Yanai, H. (1974). Unification of various techniques of multivariate analysis by means of generalized coefficient of determination (G.C.D.). Japanese Journal of Behaviormetrics, 1, 45–54.
Funding
This study was not funded.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent
Informed consent was obtained from all individual participants included in the study.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
(A.1):
Let \({\varvec{\Omega }}\) and \({\varvec{\Phi }}\) be given positive definite symmetric matrices of order \((n\times n)\) and \((p \times p)\), respectively. The GSVD of a matrix \(\textbf{A}\) is defined as \(\textbf{A}=\textbf{U}{\varvec{\Lambda }}\textbf{V}^T\), where the columns of \(\textbf{U}\) and \(\textbf{V}\) are orthonormalized with respect to \({\varvec{\Omega }}\) and \({\varvec{\Phi }}\) (that is, \(\textbf{U}^T\mathbf {\Omega U} = \textbf{I}\) and \(\textbf{V}^T\mathbf {\Phi V} = \textbf{I}\)), respectively, and \({\varvec{\Lambda }}\) is a diagonal positive definite matrix containing the generalized singular values, ordered from largest to smallest (Greenacre, 1984; Takane & Shibayama, 1991). It is denoted \(\text{ GSVD }(\textbf{A})_{{\varvec{\Omega }},{\varvec{\Phi }}}\) and can be obtained by means of the ordinary SVD as follows. Let \({\varvec{\Omega }}=\textbf{G}_{\varvec{\Omega }}\textbf{G}_{\varvec{\Omega }}^T\) and \({\varvec{\Phi }}=\textbf{G}_{\varvec{\Phi }}\textbf{G}_{\varvec{\Phi }}^T\) be arbitrary square root decompositions of \({\varvec{\Omega }}\) and \({\varvec{\Phi }}\), respectively, and consider the SVD of the matrix \(\textbf{G}_{\varvec{\Omega }}^T\textbf{A}\textbf{G}_{\varvec{\Phi }}\) (that is, \(\textbf{G}_{\varvec{\Omega }}^T\textbf{A}\textbf{G}_{\varvec{\Phi }}=\tilde{\textbf{U}}\tilde{{\varvec{\Lambda }}}\tilde{\textbf{V}}^T\)), where \(\tilde{\textbf{U}}^T\tilde{\textbf{U}}=\textbf{I}\), \(\tilde{\textbf{V}}^T\tilde{\textbf{V}}=\textbf{I}\) and \(\tilde{{\varvec{\Lambda }}}\) is a diagonal positive definite matrix. The generalized singular vectors \(\textbf{U}\) and \(\textbf{V}\) are then given by \(\textbf{U}=(\textbf{G}_{\varvec{\Omega }}^T)^{-1}\tilde{\textbf{U}}\) and \(\textbf{V}=(\textbf{G}_{\varvec{\Phi }}^T)^{-1}\tilde{\textbf{V}}\), respectively, with \({\varvec{\Lambda }}=\tilde{{\varvec{\Lambda }}}\). Note that if \({\varvec{\Omega }}\) and \({\varvec{\Phi }}\) are singular, then any g-inverse (denoted \({\varvec{\Omega }}^-\)) can be used, but the solution will not be unique; if uniqueness is required, the Moore-Penrose inverse (denoted \({\varvec{\Omega }}^+\)) may be used as the g-inverse of \({\varvec{\Omega }}\) (Takane & Hwang, 2002).
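For illustration, the computation just described can be sketched in a few lines of NumPy. This is only a minimal sketch under the assumption that \({\varvec{\Omega }}\) and \({\varvec{\Phi }}\) are positive definite, so that their Cholesky factors can play the role of \(\textbf{G}_{\varvec{\Omega }}\) and \(\textbf{G}_{\varvec{\Phi }}\); the function name gsvd is ours.

```python
import numpy as np

def gsvd(A, Omega, Phi):
    """Sketch of GSVD(A)_{Omega,Phi} via the ordinary SVD.

    Omega (n x n) and Phi (p x p) are assumed positive definite;
    their Cholesky factors serve as the square roots G_Omega and G_Phi.
    Returns U, lam, V with U.T @ Omega @ U = I and V.T @ Phi @ V = I.
    """
    G_omega = np.linalg.cholesky(Omega)          # Omega = G_omega @ G_omega.T
    G_phi = np.linalg.cholesky(Phi)              # Phi   = G_phi   @ G_phi.T
    Ut, lam, Vh = np.linalg.svd(G_omega.T @ A @ G_phi, full_matrices=False)
    U = np.linalg.solve(G_omega.T, Ut)           # U = (G_omega^T)^{-1} U~
    V = np.linalg.solve(G_phi.T, Vh.T)           # V = (G_phi^T)^{-1} V~
    return U, lam, V

# Quick check on random data
rng = np.random.default_rng(0)
n, p = 6, 4
A = rng.standard_normal((n, p))
Omega = np.eye(n) + 0.1 * np.ones((n, n))        # positive definite metrics
Phi = np.diag(rng.uniform(0.5, 2.0, p))
U, lam, V = gsvd(A, Omega, Phi)
assert np.allclose(U.T @ Omega @ U, np.eye(p))   # Omega-orthonormal left vectors
assert np.allclose(V.T @ Phi @ V, np.eye(p))     # Phi-orthonormal right vectors
assert np.allclose(U @ np.diag(lam) @ V.T, A)    # A = U Lambda V^T
```

When \({\varvec{\Omega }}\) or \({\varvec{\Phi }}\) is singular, the Cholesky step above would be replaced by a square root built from a g-inverse (for instance the Moore-Penrose inverse), as noted in the text.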
(A.2):
Note that \(\textbf{d}=\textbf{Mc}\) and \(\tilde{\textbf{D}}=\textbf{M}-\textbf{d}\textbf{1}_j^T\). Starting from \(\textbf{C}_1\) such that \(T=trace(\textbf{C}_1^T\textbf{C}_1)\), we have the following identities
$$\begin{aligned} \textbf{C}_1&= \textbf{W}^{\frac{1}{2}}\tilde{\textbf{D}}\textbf{N}^T\textbf{D}_I^{-\frac{1}{2}}\\ &= \textbf{W}^{\frac{1}{2}}(\textbf{M}-\textbf{d}\textbf{1}_j^T)(n\times \textbf{P})^T(n\times \textbf{P}_I)^{-\frac{1}{2}}\\ &= \sqrt{n}\times \textbf{W}^{\frac{1}{2}}(\textbf{M}-\textbf{d}\textbf{1}_j^T) \textbf{P}^T \textbf{P}_I^{-\frac{1}{2}}\\ \tilde{\textbf{C}}_2&= \sqrt{n}\times \textbf{C}_2\\ &= \sqrt{n}\times \textbf{W}^{\frac{1}{2}}(\textbf{M}\textbf{P}^T-\textbf{Mc}\textbf{1}_j^T\textbf{P}^T)\textbf{P}_I^{-\frac{1}{2}}\\ &= \sqrt{n}\times \textbf{W}^{\frac{1}{2}}\textbf{M}(\textbf{P}^T-\textbf{c}\textbf{r}^T)\textbf{P}_I^{-\frac{1}{2}}\\ &= \sqrt{n}\times \textbf{W}^{\frac{1}{2}}\textbf{M}(\textbf{P}-\textbf{r}\textbf{c}^T)^T\textbf{P}_I^{-\frac{1}{2}}\\ \tilde{\textbf{C}}_3&= \sqrt{n}\times \textbf{C}_3, \end{aligned}$$
so that \(T=trace(\tilde{\textbf{C}}_2^T\tilde{\textbf{C}}_2)=n\times trace(\textbf{C}_2^T\textbf{C}_2)\) and \(T=trace(\tilde{\textbf{C}}_3^T\tilde{\textbf{C}}_3)=n\times trace(\textbf{C}_3^T\textbf{C}_3)\), where \(\textbf{C}_2=\textbf{W}^{\frac{1}{2}}(\textbf{M}-\textbf{d}\textbf{1}_j^T) \textbf{P}^T \textbf{P}_I^{-\frac{1}{2}}\) and \(\textbf{C}_3=\textbf{W}^{\frac{1}{2}}\textbf{M}(\textbf{P}-\textbf{r}\textbf{c}^T)^T\textbf{P}_I^{-\frac{1}{2}}\). Finally, consider the transpose of \(\tilde{\textbf{C}}_3\):
$$\begin{aligned} \tilde{\textbf{C}}_3^T&= \sqrt{n}\times \textbf{C}_3^T\\ &= \sqrt{n}\times \textbf{P}_I^{-\frac{1}{2}}(\textbf{P}-\textbf{r}\textbf{c}^T)\textbf{M}^T\textbf{W}^{\frac{1}{2}}\\ &= \sqrt{n}\times (\textbf{P}_I^{-\frac{1}{2}}\textbf{P}-\textbf{r}^{\frac{1}{2}}\textbf{c}^T)\textbf{M}^T\textbf{W}^{\frac{1}{2}}\\ &= \sqrt{n}\times (\textbf{P}_I^{-\frac{1}{2}}\textbf{P}-\textbf{P}_I^{\frac{1}{2}}\textbf{1}_I\textbf{c}^T)\textbf{M}^T\textbf{W}^{\frac{1}{2}}\\ &= \sqrt{n}\times (\textbf{P}_I^{\frac{1}{2}}\textbf{P}_I^{-1}\textbf{P}-\textbf{P}_I^{\frac{1}{2}}\textbf{1}_I\textbf{c}^T)\textbf{M}^T\textbf{W}^{\frac{1}{2}}\\ &= \sqrt{n}\times \textbf{P}_I^{\frac{1}{2}}(\textbf{P}_I^{-1}\textbf{P}-\textbf{1}_I\textbf{c}^T)\textbf{M}^T\textbf{W}^{\frac{1}{2}}\\ \tilde{\textbf{C}}_4&= \sqrt{n}\times \textbf{C}_4, \end{aligned}$$
where \(T=trace(\tilde{\textbf{C}}_4\tilde{\textbf{C}}_4^T)=n\times trace(\textbf{C}_4\textbf{C}_4^T)\) and \(\textbf{C}_4=\textbf{P}_I^{\frac{1}{2}}(\textbf{P}_I^{-1}\textbf{P}-\textbf{1}_I\textbf{c}^T)\textbf{M}^T\textbf{W}^{\frac{1}{2}}\).
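These trace identities can be checked numerically. The NumPy sketch below uses our reading of the notation (an \(I\times J\) contingency table \(\textbf{N}\) with grand total \(n\), \(\textbf{P}=\textbf{N}/n\), row and column margins \(\textbf{r}\) and \(\textbf{c}\), \(\textbf{P}_I=diag(\textbf{r})\), \(\textbf{D}_I=n\textbf{P}_I\), \(\textbf{M}\) a \(J\times J\) lower-triangular cumulation matrix, and \(\textbf{W}\) an arbitrary positive diagonal weight matrix); any positive diagonal \(\textbf{W}\) suffices for the check.

```python
import numpy as np

rng = np.random.default_rng(1)
I, J = 5, 4
N = rng.integers(1, 20, size=(I, J)).astype(float)   # contingency table
n = N.sum()
P = N / n                                             # correspondence matrix
r = P.sum(axis=1)                                     # row margins
c = P.sum(axis=0)                                     # column margins
P_I = np.diag(r)
D_I = n * P_I
M = np.tril(np.ones((J, J)))                          # cumulation operator
W = np.diag(rng.uniform(0.5, 2.0, J))                 # positive diagonal weights
ones_J = np.ones(J)

d = M @ c                                             # cumulative column margins
D_tilde = M - np.outer(d, ones_J)

sqrtm = lambda D: np.diag(np.sqrt(np.diag(D)))        # square root of a diagonal matrix
inv_sqrtm = lambda D: np.diag(1.0 / np.sqrt(np.diag(D)))

C1 = sqrtm(W) @ D_tilde @ N.T @ inv_sqrtm(D_I)
C2 = sqrtm(W) @ (M - np.outer(d, ones_J)) @ P.T @ inv_sqrtm(P_I)
C3 = sqrtm(W) @ M @ (P - np.outer(r, c)).T @ inv_sqrtm(P_I)
C4 = sqrtm(P_I) @ (np.linalg.inv(P_I) @ P - np.outer(np.ones(I), c)) @ M.T @ sqrtm(W)

T = np.trace(C1.T @ C1)
assert np.isclose(T, n * np.trace(C2.T @ C2))
assert np.isclose(T, n * np.trace(C3.T @ C3))
assert np.isclose(T, n * np.trace(C4 @ C4.T))
```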
(A.3):
Let L be the Lagrangian function defined as
$$\begin{aligned} L = \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}^T\textbf{G}_{\textbf{X}\tilde{\textbf{K}}}^T \textbf{G}_{\hat{\textbf{Y}}\tilde{\textbf{K}}} \textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}} - \frac{\gamma }{2} \left( \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}^T \textbf{G}_{\textbf{X}\tilde{\textbf{K}}}^T\textbf{G}_{\textbf{X}\tilde{\textbf{K}}} \textbf{a}_{\textbf{X}\tilde{\textbf{K}}} - 1\right) - \frac{\mu }{2} \left( \textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}}^T (\hat{\textbf{M}}^T\hat{\textbf{W}}\hat{\textbf{M}})^- \textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}} - 1\right) . \end{aligned}$$
The normal equations are
$$\begin{aligned} \frac{\partial L}{\partial \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}}&= \textbf{G}_{\textbf{X}\tilde{\textbf{K}}}^T \textbf{G}_{\hat{\textbf{Y}}\tilde{\textbf{K}}} \textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}}-\gamma \textbf{G}_{\textbf{X}\tilde{\textbf{K}}}^T\textbf{G}_{\textbf{X}\tilde{\textbf{K}}} \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}= 0 \qquad (17)\\ \frac{\partial L}{\partial \textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}}}&= \textbf{G}_{\hat{\textbf{Y}}\tilde{\textbf{K}}}^T \textbf{G}_{\textbf{X}\tilde{\textbf{K}}} \textbf{a}_{\textbf{X}\tilde{\textbf{K}}} -\mu (\hat{\textbf{M}}^T\hat{\textbf{W}}\hat{\textbf{M}})^-\textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}} = 0 \qquad (18)\\ 2\frac{\partial L}{\partial \gamma }&= - (\textbf{a}_{\textbf{X}\tilde{\textbf{K}}}^T \textbf{G}_{\textbf{X}\tilde{\textbf{K}}}^T\textbf{G}_{\textbf{X}\tilde{\textbf{K}}} \textbf{a}_{\textbf{X}\tilde{\textbf{K}}} - 1) = 0 \qquad (19)\\ 2\frac{\partial L}{\partial \mu }&= - (\textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}}^T (\hat{\textbf{M}}^T\hat{\textbf{W}}\hat{\textbf{M}})^- \textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}} - 1) = 0 \qquad (20) \end{aligned}$$
The identities \(\lambda =\textbf{a}_{\textbf{X}\tilde{\textbf{K}}}^T\textbf{G}_{\textbf{X}\tilde{\textbf{K}}}^T \textbf{G}_{\hat{\textbf{Y}}\tilde{\textbf{K}}} \textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}} =cov(\textbf{t},\textbf{u})=\gamma =\mu \) are obtained by left-multiplying (17) by \(\textbf{a}_{\textbf{X}\tilde{\textbf{K}}}^T \) and using (19), or by left-multiplying (18) by \(\textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}}^T\) and using (20). In addition, we obtain the following transition formulas from Eqs. (18) and (17), respectively:
$$\begin{aligned} \textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}}&= \frac{1}{\lambda } (\hat{\textbf{M}}^T\hat{\textbf{W}}\hat{\textbf{M}})\textbf{G}_{\hat{\textbf{Y}}\tilde{\textbf{K}}}^T \textbf{G}_{\textbf{X}\tilde{\textbf{K}}} \textbf{a}_{\textbf{X}\tilde{\textbf{K}}} \qquad (21)\\ \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}&= \frac{1}{\lambda }(\textbf{G}_{\textbf{X}\tilde{\textbf{K}}}^T\textbf{G}_{\textbf{X}\tilde{\textbf{K}}})^- \textbf{G}_{\textbf{X}\tilde{\textbf{K}}}^T \textbf{G}_{\hat{\textbf{Y}}\tilde{\textbf{K}}} \textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}} \qquad (22) \end{aligned}$$
The general eigenvalue problem (6) is then obtained by using Eq. (21) in (17) and by left-multiplying by \(\textbf{G}_{\textbf{X}\tilde{\textbf{K}}}\). The eigen-system (7) is instead obtained by taking into account the relation \(\textbf{P}_{\textbf{X}\cup \tilde{\textbf{K}}}=\textbf{P}_{\tilde{\textbf{K}}}+\textbf{P}_{\textbf{X}/\tilde{\textbf{K}}}\) in (6) and by left-multiplying by \(\textbf{Q}_{\tilde{\textbf{K}}}\), where \(\textbf{P}_{\textbf{X}/\tilde{\textbf{K}}}=\textbf{Q}_{\tilde{\textbf{K}}}\textbf{X}(\textbf{X}^T \textbf{Q}_{\tilde{\textbf{K}}} \textbf{X})^-\textbf{X}^T\textbf{Q}_{\tilde{\textbf{K}}}\). In fact, after rewriting \(\textbf{P}_{\textbf{X}\cup \tilde{\textbf{K}}}\) and left-multiplying (6) by \(\textbf{Q}_{\tilde{\textbf{K}}}\), we obtain the following identities:
$$\begin{aligned} \textbf{P}_{\textbf{X}/\tilde{\textbf{K}}} \textbf{G}_{\hat{\textbf{Y}}\tilde{\textbf{K}}} \hat{\textbf{M}}^T\hat{\textbf{W}}\hat{\textbf{M}} \textbf{G}_{\hat{\textbf{Y}}\tilde{\textbf{K}}}^T\textbf{G}_{\textbf{X}\tilde{\textbf{K}}}\textbf{a}_{\textbf{X}\tilde{\textbf{K}}}&= \lambda ^2\textbf{Q}_{\tilde{\textbf{K}}}\textbf{G}_{\textbf{X}\tilde{\textbf{K}}}\textbf{a}_{\textbf{X}\tilde{\textbf{K}}}\\ \textbf{P}_{\textbf{X}/\tilde{\textbf{K}}} \left[ \hat{\textbf{Y}}|\tilde{\textbf{K}}\right] \hat{\textbf{M}}^T\hat{\textbf{W}}\hat{\textbf{M}} \left[ \hat{\textbf{Y}}|\tilde{\textbf{K}}\right] ^T\left[ \textbf{X}|\tilde{\textbf{K}}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}&= \lambda ^2\textbf{Q}_{\tilde{\textbf{K}}}\left[ \textbf{X}|\tilde{\textbf{K}}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}\\ \textbf{P}_{\textbf{X}/\tilde{\textbf{K}}} \left[ \hat{\textbf{Y}}|\tilde{\textbf{K}}\right] \hat{\textbf{M}}^T\hat{\textbf{W}}\hat{\textbf{M}} \left[ \begin{array}{l} \hat{\textbf{Y}}^T\\ \hline \tilde{\textbf{K}}^T \end{array}\right] \left[ \textbf{X}|\tilde{\textbf{K}}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}&= \lambda ^2\textbf{Q}_{\tilde{\textbf{K}}}\left[ \textbf{X}|\tilde{\textbf{K}}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}\\ \left[ \textbf{P}_{\textbf{X}/\tilde{\textbf{K}}} \hat{\textbf{Y}}|\textbf{0}\right] \hat{\textbf{M}}^T\hat{\textbf{W}}\hat{\textbf{M}} \left[ \begin{array}{l} \hat{\textbf{Y}}^T\\ \hline \tilde{\textbf{K}}^T \end{array}\right] \left[ \textbf{X}|\tilde{\textbf{K}}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}&= \lambda ^2\textbf{Q}_{\tilde{\textbf{K}}}\left[ \textbf{X}|\tilde{\textbf{K}}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}\\ \left[ \textbf{P}_{\textbf{X}/\tilde{\textbf{K}}} \hat{\textbf{Y}}\textbf{M}^T\textbf{W}\textbf{M}|\textbf{0}\right] \left[ \begin{array}{l} \hat{\textbf{Y}}^T\\ \hline \tilde{\textbf{K}}^T \end{array}\right] \left[ \textbf{X}|\tilde{\textbf{K}}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}&= \lambda ^2\textbf{Q}_{\tilde{\textbf{K}}}\left[ \textbf{X}|\tilde{\textbf{K}}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}\\ \left[ \textbf{P}_{\textbf{X}/\tilde{\textbf{K}}} \textbf{Y}\textbf{M}^T\textbf{W}\textbf{M}|\textbf{0}\right] \left[ \begin{array}{cc} \textbf{Y}^T\textbf{Q}_{\tilde{\textbf{K}}}\textbf{X}&\textbf{Y}^T\textbf{Q}_{\tilde{\textbf{K}}}\tilde{\textbf{K}}\\ \tilde{\textbf{K}}^T\textbf{X}&\tilde{\textbf{K}}^T\tilde{\textbf{K}} \end{array}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}&= \lambda ^2\left[ \textbf{Q}_{\tilde{\textbf{K}}} \textbf{X}|\textbf{0}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}\\ \left[ \textbf{P}_{\textbf{X}/\tilde{\textbf{K}}} \textbf{Y}\textbf{M}^T\textbf{W}\textbf{M}|\textbf{0}\right] \left[ \begin{array}{cc} \textbf{Y}^T\textbf{Q}_{\tilde{\textbf{K}}}\textbf{X}&\textbf{0}\\ \tilde{\textbf{K}}^T\textbf{X}&\tilde{\textbf{K}}^T\tilde{\textbf{K}} \end{array}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}&= \lambda ^2\left[ \textbf{Q}_{\tilde{\textbf{K}}} \textbf{X}|\textbf{0}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}\\ \left[ \textbf{P}_{\textbf{X}/\tilde{\textbf{K}}} \textbf{Y}\textbf{M}^T\textbf{W}\textbf{M} \textbf{Y}^T\textbf{Q}_{\tilde{\textbf{K}}}\textbf{X}|\textbf{0} \right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}&= \lambda ^2\left[ \textbf{Q}_{\tilde{\textbf{K}}} \textbf{X}|\textbf{0}\right] \textbf{a}_{\textbf{X}\tilde{\textbf{K}}}\\ \textbf{P}_{\textbf{X}/\tilde{\textbf{K}}} \textbf{Y}\textbf{M}^T\textbf{W}\textbf{M} \textbf{Y}^T\textbf{Q}_{\tilde{\textbf{K}}}\textbf{X} \textbf{a}_{\textbf{X}}&= \lambda ^2\textbf{Q}_{\tilde{\textbf{K}}} \textbf{X}\textbf{a}_{\textbf{X}}\\ \textbf{Q}_{\tilde{\textbf{K}}}\textbf{X}(\textbf{X}^T \textbf{Q}_{\tilde{\textbf{K}}} \textbf{X})^-\textbf{X}^T\textbf{Q}_{\tilde{\textbf{K}}}\textbf{Y}\textbf{M}^T\textbf{W}\textbf{M} \textbf{Y}^T\textbf{Q}_{\tilde{\textbf{K}}}\textbf{X} \textbf{a}_{\textbf{X}}&= \lambda ^2\textbf{Q}_{\tilde{\textbf{K}}} \textbf{X}\textbf{a}_{\textbf{X}}\\ (\textbf{X}^T \textbf{Q}_{\tilde{\textbf{K}}} \textbf{X})^-\textbf{X}^T\textbf{Q}_{\tilde{\textbf{K}}}\textbf{Y}\textbf{M}^T\textbf{W}\textbf{M} \textbf{Y}^T\textbf{Q}_{\tilde{\textbf{K}}}\textbf{X} \textbf{a}_{\textbf{X}}&= \lambda ^2\textbf{a}_{\textbf{X}} \end{aligned}$$
where the last identity is obtained by left-multiplying the previous one by \((\textbf{X}^T \textbf{Q}_{\tilde{\textbf{K}}} \textbf{X})^-\textbf{X}^T\).
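As a final illustration, the last identity can be solved directly once \(\textbf{Q}_{\tilde{\textbf{K}}}\) is available. The NumPy sketch below is ours, not the authors' implementation: it assumes that \(\textbf{X}\), \(\textbf{Y}\) and \(\tilde{\textbf{K}}\) are plain indicator (dummy) matrices of the two characters and of the ancillary variable (ignoring any row weights the full method would carry), takes \(\textbf{M}\) and \(\textbf{W}\) as in (A.2), and uses the Moore-Penrose inverse as g-inverse.

```python
import numpy as np

def orthogonal_complement_projector(K):
    """Q_K = I - K (K^T K)^- K^T, projector onto the orthogonal complement of col(K)."""
    n = K.shape[0]
    return np.eye(n) - K @ np.linalg.pinv(K.T @ K) @ K.T

# Hypothetical data: indicator matrices for the row, (ordered) column and
# ancillary categorical variables, plus the cumulation and weight matrices.
rng = np.random.default_rng(2)
n_obs, I_cat, J_cat, K_cat = 50, 3, 4, 2
X = np.eye(I_cat)[rng.integers(0, I_cat, n_obs)]       # rows
Y = np.eye(J_cat)[rng.integers(0, J_cat, n_obs)]       # ordered columns
K = np.eye(K_cat)[rng.integers(0, K_cat, n_obs)]       # ancillary variable
M = np.tril(np.ones((J_cat, J_cat)))                   # cumulation operator
W = np.diag(rng.uniform(0.5, 2.0, J_cat))              # positive diagonal weights

Q_K = orthogonal_complement_projector(K)
XQ = X.T @ Q_K                                         # X^T Q_K
S = XQ @ Y @ M.T @ W @ M @ Y.T @ Q_K @ X               # cross-product after partialling out K
B = np.linalg.pinv(XQ @ X) @ S                         # left-hand operator of the last identity
eigvals, eigvecs = np.linalg.eig(B)
order = np.argsort(eigvals.real)[::-1]
lambdas_sq = eigvals.real[order]                       # the lambda^2 of the eigen-system (7)
a_X = eigvecs.real[:, order]                           # scores a_X for the row categories
```

Under this reading, the companion scores \(\textbf{a}_{\hat{\textbf{Y}}\tilde{\textbf{K}}}\) would then follow from the transition formula (21).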
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Amenta, P., D’Ambra, A. & Lucadamo, A. Partial cumulative correspondence analysis. Ann Oper Res 342, 1495–1527 (2024). https://doi.org/10.1007/s10479-022-05141-0
DOI: https://doi.org/10.1007/s10479-022-05141-0