Finite Mixture of Censored Linear Mixed Models for Irregularly Observed Longitudinal Data


Abstract

Linear mixed-effects models are commonly used when multiple correlated measurements are made on each unit of interest. Some inherent features of these data can make the analysis challenging, such as when the series of responses is collected at irregular intervals over time for each subject, or when the data are subject to upper and/or lower detection limits of the experimental equipment. Moreover, if units are suspected of forming distinct clusters over time (i.e., heterogeneity), the class of finite mixtures of linear mixed-effects models is required. This paper considers the problem of clustering heterogeneous longitudinal data in a mixture framework and proposes a finite mixture of multivariate normal linear mixed-effects models. This model accommodates more complex features of longitudinal data, such as measurements taken at irregular intervals over time and censored observations. Furthermore, we consider a damped exponential correlation structure for the random error term to deal with serial correlation among the within-subject errors. An efficient expectation-maximization algorithm is employed to compute the maximum likelihood estimates of the parameters. The algorithm has closed-form expressions at the E-step that rely on formulas for the mean and variance of the truncated multivariate normal distribution. Furthermore, a general information-based method for approximating the asymptotic covariance matrix of the estimates is also presented. Results from the analysis of both simulated and real HIV/AIDS datasets are reported to demonstrate the effectiveness of the proposed method.
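In outline, and using the notation of the appendices below, the component model assumed for subject \(i\) can be sketched as follows (an editorial summary rather than a display reproduced from the paper's main text):

$$\mathbf{y}_i \mid (Z_{ij}=1) = \mathbf{X}_i\varvec{\beta}_j + \mathbf{U}_i\mathbf{b}_i + \varvec{\varepsilon}_i, \qquad \mathbf{b}_i \sim N_q(\mathbf{0},\mathbf{D}_j), \qquad \varvec{\varepsilon}_i \sim N_{n_i}(\mathbf{0},\sigma^2_j\mathbf{E}_{ij}),$$

with \(P(Z_{ij}=1)=\pi_j\) for components \(j=1,\ldots,g\), where \(\mathbf{E}_{ij}\) follows the damped exponential correlation (DEC) structure with entries \(\phi_{j1}^{|t_{ik}-t_{ih}|^{\phi_{j2}}}\), and the response \(\mathbf{y}_i\) is observed only through the censored data \((\mathbf{V}_i,\mathbf{C}_i)\), i.e., up to the detection limits.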


Data Availability

The data that support the findings of this study are available from the corresponding author upon request.

References

  • Acosta EP, Wu H, Hammer SM, Yu S, Kuritzkes DR, Walawander A, Eron JJ, Fichtenbaum CJ, Pettinelli C, Neath D, et al. (2004) Comparison of two indinavir/ritonavir regimens in the treatment of HIV-infected individuals. JAIDS Journal of Acquired Immune Deficiency Syndromes 37(3):1358–1366

  • Akaike H (1974) A new look at the statistical model identification. IEEE Transactions on Automatic Control 19:716–723

  • Bai X, Chen K, Yao W (2016) Mixture of linear mixed models using multivariate t distribution. Journal of Statistical Computation and Simulation 86(4):771–787

  • Basso RM, Lachos VH, Cabral CRB, Ghosh P (2010) Robust mixture modeling based on scale mixtures of skew-normal distributions. Computational Statistics & Data Analysis 54(12):2926–2941

  • Booth JG, Casella G, Hobert JP (2008) Clustering using objective functions and stochastic search. Journal of the Royal Statistical Society, Series B (Statistical Methodology) 70(1):119–139

  • Celeux G, Martin O, Lavergne C (2005) Mixture of linear mixed models for clustering gene expression profiles from repeated microarray experiments. Statistical Modelling 5(3):243–267

  • De la Cruz-Mesía R, Quintana FA, Marshall G (2008) Model-based clustering for longitudinal data. Computational Statistics & Data Analysis 52(3):1441–1457

  • Cuesta-Albertos JA, Gordaliza A, Matrán C (1997) Trimmed k-means: an attempt to robustify quantizers. The Annals of Statistics 25(2):553–576

  • Dempster A, Laird N, Rubin D (1977) Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B 39:1–38

  • Faria S, Soromenho G (2010) Fitting mixtures of linear regressions. Journal of Statistical Computation and Simulation 80(2):201–225

  • Fitzgerald AP, DeGruttola VG, Vaida F (2002) Modelling HIV viral rebound using non-linear mixed effects models. Statistics in Medicine 21(14):2093–2108

  • Gaffney S, Smyth P (2003) Curve clustering with random effects regression mixtures. In: AISTATS

  • Gałecki AT, Burzykowski T (2013) Linear mixed-effects models using R: a step-by-step approach. Springer, New York

  • Hathaway RJ (1985) A constrained formulation of maximum-likelihood estimation for normal mixture distributions. The Annals of Statistics 13(2):795–800

  • Hughes J (1999) Mixed effects models with censored data with application to HIV RNA levels. Biometrics 55:625–629

  • Karlsson M, Laitila T (2014) Finite mixture modeling of censored regression models. Statistical Papers 55(3):627–642

  • Kiefer NM (1978) Discrete parameter variation: efficient estimation of a switching regression model. Econometrica 46:427–434

  • Lachos VH, Ghosh P, Arellano-Valle RB (2010) Likelihood based inference for skew-normal independent linear mixed models. Statistica Sinica 20:303–322

  • Lachos VH, Matos LA, Castro LM, Chen MH (2019) Flexible longitudinal linear mixed models for multiple censored responses data. Statistics in Medicine 38(6):1074–1102

  • Laird NM, Ware JH (1982) Random-effects models for longitudinal data. Biometrics 38(4):963–974

  • Lin TI (2010) Robust mixture modeling using multivariate skew t distributions. Statistics and Computing 20(3):343–356

  • Lin TI (2014) Learning from incomplete data via parameterized t mixture models through eigenvalue decomposition. Computational Statistics & Data Analysis 71:183–195

  • Lin TI, Wang WL (2013) Multivariate skew-normal linear mixed models for multi-outcome longitudinal data. Statistical Modelling 13(3):199–221

  • Lin TI, Wang WL (2020) Multivariate-t linear mixed models with censored responses, intermittent missing values and heavy tails. Statistical Methods in Medical Research 29(5):1288–1304

  • Lin TI, McLachlan GJ, Lee SX (2016) Extending mixtures of factor models using the restricted multivariate skew-normal distribution. Journal of Multivariate Analysis 143:398–413

  • Lin TI, Lachos VH, Wang WL (2018) Multivariate longitudinal data analysis with censored and intermittent missing responses. Statistics in Medicine 37(19):2822–2835

  • Louis TA (1982) Finding the observed information matrix when using the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological) 44(2):226–233

  • Matos LA, Lachos VH, Balakrishnan N, Labra FV (2013a) Influence diagnostics in linear and nonlinear mixed-effects models with censored data. Computational Statistics & Data Analysis 57(1):450–464

  • Matos LA, Prates MO, Chen MH, Lachos VH (2013b) Likelihood-based inference for mixed-effects models with censored response using the multivariate-t distribution. Statistica Sinica 23(3):1323–1345

  • Matos LA, Bandyopadhyay D, Castro LM, Lachos VH (2015) Influence assessment in censored mixed-effects models using the multivariate Student-t distribution. Journal of Multivariate Analysis 141:104–117

  • Matos LA, Castro LM, Lachos VH (2016) Censored mixed-effects models for irregularly observed repeated measures with applications to HIV viral loads. TEST 25(4):627–653

  • McLachlan GJ, Peel D (2000) Finite mixture models. John Wiley & Sons

  • McNicholas PD (2016) Model-based clustering. Journal of Classification 33(3):331–373

  • Meng XL, Rubin DB (1993) Maximum likelihood estimation via the ECM algorithm: a general framework. Biometrika 80(2):267–278

  • Muñoz A, Carey V, Schouten JP, Segal M, Rosner B (1992) A parametric family of correlation structures for the analysis of longitudinal data. Biometrics 48(3):733–742

  • Ng SK, McLachlan GJ, Wang K, Ben-Tovim Jones L, Ng SW (2006) A mixture model with random-effects components for clustering correlated gene-expression profiles. Bioinformatics 22(14):1745–1752

  • Rousseeuw PJ, Kaufman L (1990) Finding groups in data. Wiley, Hoboken

  • Schwarz G (1978) Estimating the dimension of a model. The Annals of Statistics 6(2):461–464

  • Spiessens B, Verbeke G, Komárek A (2002) A SAS macro for the classification of longitudinal profiles using mixtures of normal distributions in nonlinear and generalised linear mixed models. Working paper, Biostatistical Centre, Katholieke Universiteit Leuven, Belgium

  • Tzortzis G, Likas A (2014) The MinMax k-means clustering algorithm. Pattern Recognition 47(7):2505–2516

  • Vaida F, Liu L (2009) Fast implementation for normal mixed effects models with censored response. Journal of Computational and Graphical Statistics 18(4):797–817

  • Vaida F, Fitzgerald A, DeGruttola V (2007) Efficient hybrid EM for linear and nonlinear mixed effects models with censored response. Computational Statistics & Data Analysis 51:5718–5730

  • Verbeke G, Lesaffre E (1996) A linear mixed-effects model with heterogeneity in the random-effects population. Journal of the American Statistical Association 91(433):217–221

  • Wang WL (2013) Multivariate t linear mixed models for irregularly observed multiple repeated measures with missing outcomes. Biometrical Journal 55(4):554–571

  • Wang WL (2017) Mixture of multivariate t linear mixed models for multi-outcome longitudinal data with heterogeneity. Statistica Sinica 27:733–760

  • Wang WL (2019) Mixture of multivariate t nonlinear mixed models for multiple longitudinal data with heterogeneity and missing values. TEST 28(1):196–222

  • Wang WL, Fan TH (2011) Estimation in multivariate t linear mixed models for multiple longitudinal data. Statistica Sinica 21:1857–1880

  • Wang WL, Lin TI (2015) Robust model-based clustering via mixtures of skew-t distributions with missing information. Advances in Data Analysis and Classification 9(4):423–445

  • Wu L (2009) Mixed effects models for complex data. Chapman and Hall/CRC

  • Yang YC, Lin TI, Castro LM, Wang WL (2020) Extending finite mixtures of t linear mixed-effects models with concomitant covariates. Computational Statistics & Data Analysis 148:106961

  • Zeller CB, Cabral CR, Lachos VH (2016) Robust mixture regression modeling based on scale mixtures of skew-normal distributions. TEST 25(2):375–396

  • Zhang B (2003) Regression clustering. In: Third IEEE International Conference on Data Mining (ICDM 2003), IEEE, pp 451–458


Acknowledgements

We thank the editor, the associate editor, and three anonymous referees for their valuable comments and suggestions, which led to an improved version of this paper.

Funding

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) - Finance Code 001. Larissa A. Matos received support from FAPESP-Brazil (Grant 2020/16713-0).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Larissa A. Matos.

Ethics declarations

Conflict of Interest

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (pdf 291 KB)

Appendices

Appendix A. Derivation of the expectations

Using the properties of conditional expectation, we obtain:

$$\begin{aligned}&\widehat{Z_{ij}\mathbf{y}}_i^{(k)} = \mathbb{E}(Z_{ij}\mathbf{y}_i|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)}) = \mathbb{E}_{Z_{ij}|\mathbf{V}_i,\mathbf{C}_i}\big(Z_{ij}\mathbb{E}_{\mathbf{y}_i|Z_{ij}}(\mathbb{E}_{\mathbf{b}_i|\mathbf{y}_i,Z_{ij}}(\mathbf{y}_i))\big) \\&= \mathbb{E}_{Z_{ij}|\mathbf{V}_i,\mathbf{C}_i}\big(Z_{ij}\mathbb{E}_{\mathbf{y}_i|Z_{ij}}(\mathbf{y}_i)\big) = \mathbb{E}(Z_{ij}|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)})\,\mathbb{E}(\mathbf{y}_i|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)}, Z_{ij}=1)\\&= \widehat{Z}_{ij}^{(k)}\widehat{\mathbf{y}}_i^{(k)},\\&\widehat{Z_{ij}\mathbf{y}_i^2}^{(k)} = \mathbb{E}(Z_{ij}\mathbf{y}_i\mathbf{y}_i^\top|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)}) = \mathbb{E}_{Z_{ij}|\mathbf{V}_i,\mathbf{C}_i}\big(Z_{ij}\mathbb{E}_{\mathbf{y}_i|Z_{ij}}(\mathbb{E}_{\mathbf{b}_i|\mathbf{y}_i,Z_{ij}}(\mathbf{y}_i\mathbf{y}_i^\top))\big) \\&= \mathbb{E}_{Z_{ij}|\mathbf{V}_i,\mathbf{C}_i}\big(Z_{ij}\mathbb{E}_{\mathbf{y}_i|Z_{ij}}(\mathbf{y}_i\mathbf{y}_i^\top)\big) = \mathbb{E}(Z_{ij}|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)})\,\mathbb{E}(\mathbf{y}_i\mathbf{y}_i^\top|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)}, Z_{ij}=1)\\&= \widehat{Z}_{ij}^{(k)}\widehat{\mathbf{y}}_i^{2(k)},\\&\widehat{Z_{ij}\mathbf{b}}_{i}^{(k)} = \mathbb{E}(Z_{ij}\mathbf{b}_{i}|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)}) = \mathbb{E}_{Z_{ij}|\mathbf{V}_i,\mathbf{C}_i}\big(Z_{ij}\mathbb{E}_{\mathbf{y}_i|Z_{ij}}(\mathbb{E}_{\mathbf{b}_i|\mathbf{y}_i,Z_{ij}}(\mathbf{b}_{i}))\big) \\&= \mathbb{E}(Z_{ij}|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)})\,\mathbb{E}_{\mathbf{y}_i|Z_{ij},\mathbf{V}_i,\mathbf{C}_i}(\mathbb{E}_{\mathbf{b}_i|\mathbf{y}_i,Z_{ij}}(\mathbf{b}_{i})) = \widehat{Z}_{ij}^{(k)}\widehat{\mathbf{b}}_{i}^{(k)},\\&\widehat{Z_{ij}\mathbf{b}_{i}^2}^{(k)} = \mathbb{E}(Z_{ij}\mathbf{b}_{i}\mathbf{b}_{i}^\top|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)}) = \mathbb{E}_{Z_{ij}|\mathbf{V}_i,\mathbf{C}_i}\big(Z_{ij}\mathbb{E}_{\mathbf{y}_i|Z_{ij}}(\mathbb{E}_{\mathbf{b}_i|\mathbf{y}_i,Z_{ij}}(\mathbf{b}_{i}\mathbf{b}_{i}^\top))\big) \\&= \mathbb{E}(Z_{ij}|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)})\,\mathbb{E}_{\mathbf{y}_i|Z_{ij},\mathbf{V}_i,\mathbf{C}_i}(\mathbb{E}_{\mathbf{b}_i|\mathbf{y}_i,Z_{ij}}(\mathbf{b}_{i}\mathbf{b}_{i}^\top)) = \widehat{Z}_{ij}^{(k)}\widehat{\mathbf{b}}_{i}^{2(k)},\\&\widehat{Z_{ij}\mathbf{y}_i\mathbf{b}_{i}^\top}^{(k)} = \mathbb{E}(Z_{ij}\mathbf{y}_i\mathbf{b}_{i}^\top|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)}) = \mathbb{E}_{Z_{ij}|\mathbf{V}_i,\mathbf{C}_i}\big(Z_{ij}\mathbb{E}_{\mathbf{y}_i|Z_{ij}}(\mathbf{y}_i\mathbb{E}_{\mathbf{b}_i|\mathbf{y}_i,Z_{ij}}(\mathbf{b}_{i}^\top))\big) \\&= \mathbb{E}(Z_{ij}|\mathbf{V}_i,\mathbf{C}_i,\widehat{\varvec{\theta}}_j^{(k)})\,\mathbb{E}_{\mathbf{y}_i|Z_{ij},\mathbf{V}_i,\mathbf{C}_i}(\mathbf{y}_i\mathbb{E}_{\mathbf{b}_i|\mathbf{y}_i,Z_{ij}}(\mathbf{b}_{i}^\top)) = \widehat{Z}_{ij}^{(k)}\widehat{\mathbf{y}_i\mathbf{b}}_{i}^{(k)}. \end{aligned}$$

To compute the expectation terms above, it is important to note that,

$$\mathbf{y}_i|Z_{ij},\mathbf{V}_i,\mathbf{C}_i {\mathop{\sim}\limits^{\text{ind.}}} TN_{n_i}(\mathbf{X}_i\varvec{\beta}_j,\, \mathbf{U}_i\mathbf{D}_j\mathbf{U}_i^\top + \sigma^2_j\mathbf{E}_{ij};\, \mathbb{A}), \quad\text{and}$$
$$\mathbf {b}_{i}|\mathbf {y}_i, Z_{ij} {\mathop {\sim }\limits ^{\text {ind.}}} N_q(\varvec{\varphi }_{ij}(\mathbf {y}_i - \mathbf {X}_i\varvec{\beta }_j), \varvec{\Lambda }_{ij}),$$

where \(TN_{n_i}(\cdot;\mathbb{A})\) denotes the multivariate normal distribution truncated to the interval \(\mathbb{A}\), \(\varvec{\varphi}_{ij} = \frac{1}{\sigma^2_j}\varvec{\Lambda}_{ij}\mathbf{U}_i^\top\mathbf{E}_{ij}^{-1}\), and \(\varvec{\Lambda}_{ij} = \left(\mathbf{D}_j^{-1} + \frac{1}{\sigma^2_j}\mathbf{U}_i^\top\mathbf{E}_{ij}^{-1}\mathbf{U}_i\right)^{-1}\). The expectation terms are then given by

$$\begin{aligned}&\widehat{\mathbf {b}}_{i}^{(k)} = \widehat{\varvec{\varphi }}_{ij}^{(k)}(\widehat{\mathbf {y}}_i^{(k)} - \mathbf {X}_i\widehat{\varvec{\beta }}^{(k)}_ j),\\&\widehat{\mathbf {b}}_{i}^{2(k)} = \widehat{\varvec{\Lambda }}_{ij}^{(k)} + \widehat{\varvec{\varphi }}_{ij}^{(k)}\left( \widehat{\mathbf {y}}_{i}^{2(k)} - \widehat{\mathbf {y}}_{i}^{(k)}\widehat{\varvec{\beta }}_j^{\top (k)}\mathbf {X}_i^\top - \mathbf {X}_i\widehat{\varvec{\beta }}_j^{(k)}\widehat{\mathbf {y}}_{i}^{\top (k)} + \mathbf {X}_i\widehat{\varvec{\beta }}_j^{(k)}\widehat{\varvec{\beta }}_j^{\top (k)}\mathbf {X}_i^\top \right) \widehat{\varvec{\varphi }}_{ij}^{\top (k)},\\&\widehat{\mathbf {y}_i\mathbf {b}}_{i}^{(k)} = \mathbb {E}_{\mathbf {y}_i|Z_{ij},\mathbf {V}_i,\mathbf {C}_i}[\mathbf {y}_i(\mathbf {y}_i - \mathbf {X}_i\varvec{\beta }_j)^\top \varvec{\varphi }_{ij}^\top ] = (\widehat{\mathbf {y}}_i^{2(k)}-\widehat{\mathbf {y}}_i^{(k)}\widehat{\varvec{\beta }}^{\top (k)}_j\mathbf {X}_i^\top )\widehat{\varvec{\varphi }}_{ij}^{\top (k)}. \end{aligned}$$

As mentioned earlier, the E-step reduces to the computation of \(\widehat{\mathbf{y}}_{i}^{(k)}\) and \(\widehat{\mathbf{y}}^{2(k)}_{i}\), that is, the mean and the second moment of a multivariate truncated normal distribution, which can be determined in closed form (Vaida and Liu, 2009; Matos et al., 2016).
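For concreteness, the following is a minimal NumPy sketch (an editorial illustration, not the authors' implementation) of how these closed-form E-step quantities can be assembled once \(\widehat{\mathbf{y}}_i\) and \(\widehat{\mathbf{y}}_i^2\) are available; the argument names (y_hat, y2_hat, X, U, beta_j, sigma2_j, D_j, E_ij) are assumptions, and the truncated normal moments themselves are assumed to be supplied by an external routine.

```python
import numpy as np

def e_step_moments(y_hat, y2_hat, X, U, beta_j, sigma2_j, D_j, E_ij):
    """Assemble the closed-form E-step quantities for one subject and one
    mixture component, given the first two moments (y_hat, y2_hat) of the
    truncated multivariate normal distribution of y_i."""
    E_inv = np.linalg.inv(E_ij)
    # Lambda_ij = (D_j^{-1} + U_i^T E_ij^{-1} U_i / sigma2_j)^{-1}
    Lam = np.linalg.inv(np.linalg.inv(D_j) + U.T @ E_inv @ U / sigma2_j)
    # varphi_ij = Lambda_ij U_i^T E_ij^{-1} / sigma2_j
    phi = Lam @ U.T @ E_inv / sigma2_j
    Xb = X @ beta_j                                   # X_i beta_j
    b_hat = phi @ (y_hat - Xb)                        # E(b_i | V_i, C_i, Z_ij = 1)
    # E[(y_i - X_i beta_j)(y_i - X_i beta_j)^T | V_i, C_i, Z_ij = 1]
    S = y2_hat - np.outer(y_hat, Xb) - np.outer(Xb, y_hat) + np.outer(Xb, Xb)
    b2_hat = Lam + phi @ S @ phi.T                    # E(b_i b_i^T | ...)
    yb_hat = (y2_hat - np.outer(y_hat, Xb)) @ phi.T   # E(y_i b_i^T | ...)
    return b_hat, b2_hat, yb_hat
```

Multiplying each output by \(\widehat{Z}_{ij}^{(k)}\) then yields the joint expectations listed at the beginning of this appendix.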

Appendix B. Empirical information matrix

The elements of \(\widehat{s}_i\) in Equation (13) are given by

$$\begin{aligned}&\widehat{s}_{i,\pi_j} = \frac{\widehat{Z}_{ij}}{\widehat{\pi}_j} - \frac{\widehat{Z}_{ig}}{\widehat{\pi}_g},\\&\widehat{s}_{i,\varvec{\beta}_{j}} = (\widehat{s}_{i,\beta_{j1}}, \ldots, \widehat{s}_{i,\beta_{jp}}) = \frac{1}{\widehat{\sigma^2_j}}\left[\mathbf{X}_i^\top\widehat{\mathbf{E}}_{ij}^{-1}(\widehat{Z}_{ij}\widehat{\mathbf{y}}_i - \widehat{Z}_{ij}\mathbf{U}_i\widehat{\mathbf{b}}_{i}) - \widehat{Z}_{ij}\mathbf{X}_i^\top\widehat{\mathbf{E}}_{ij}^{-1}\mathbf{X}_i\widehat{\varvec{\beta}}_j\right],\\&\widehat{s}_{i,\sigma^2_{j}} = -\frac{\widehat{Z}_{ij}n_i}{2\widehat{\sigma^2_j}} + \frac{1}{2\widehat{\sigma^4_j}}\left[\mathrm{tr}\left(\widehat{a}_{ij}\widehat{\mathbf{E}}_{ij}^{-1}\right) - 2\widehat{\varvec{\beta}}_j^\top\mathbf{X}_i^{\top}\widehat{\mathbf{E}}_{ij}^{-1}(\widehat{Z}_{ij}\widehat{\mathbf{y}}_{i} - \widehat{Z}_{ij}\mathbf{U}_i\widehat{\mathbf{b}}_{i}) + \widehat{Z}_{ij}\widehat{\varvec{\beta}}^\top_j\mathbf{X}_i^{\top}\widehat{\mathbf{E}}_{ij}^{-1}\mathbf{X}_i\widehat{\varvec{\beta}}_{j}\right],\\&\widehat{s}_{i,\varvec{\alpha}_{j}} = (\widehat{s}_{i,\alpha_{j1}}, \ldots, \widehat{s}_{i,\alpha_{jr}}) = -\frac{1}{2}\left\{\widehat{Z}_{ij}\,\mathrm{tr}\left[\widehat{\mathbf{D}}^{-1}_j\dot{\mathbf{D}}_j(r)\widehat{\mathbf{D}}^{-1}_j(\widehat{\mathbf{D}}_{j} - \widehat{\mathbf{b}}_{i}^2)\right]\right\},\\&\widehat{s}_{i,\varvec{\phi}_{j}} = (\widehat{s}_{i,\phi_{j1}}, \widehat{s}_{i,\phi_{j2}}) = -\frac{1}{2}\widehat{Z}_{ij}\,\mathrm{tr}(\widehat{\mathbf{E}}_{ij}^{-1}\dot{\mathbf{E}}_{ij}(s)) + \frac{1}{2\widehat{\sigma^2_j}}\left[\mathrm{tr}\left(\widehat{a}_{ij}\widehat{\mathbf{E}}_{ij}^{-1}\dot{\mathbf{E}}_{ij}(s)\widehat{\mathbf{E}}_{ij}^{-1}\right)\right.\\&+ \left.\widehat{Z}_{ij}\widehat{\varvec{\beta}}^\top_j\mathbf{X}_i^{\top}\widehat{\mathbf{E}}_{ij}^{-1}\dot{\mathbf{E}}_{ij}(s)\widehat{\mathbf{E}}_{ij}^{-1}\mathbf{X}_i\widehat{\varvec{\beta}}_{j} - 2\widehat{\varvec{\beta}}^\top_j\mathbf{X}_i^{\top}\widehat{\mathbf{E}}_{ij}^{-1}\dot{\mathbf{E}}_{ij}(s)\widehat{\mathbf{E}}_{ij}^{-1}(\widehat{Z}_{ij}\widehat{\mathbf{y}}_{i} - \widehat{Z}_{ij}\mathbf{U}_i\widehat{\mathbf{b}}_{i})\right], \end{aligned}$$

where \(\left. \dot{\mathbf {D}}_j(r) = \frac{\partial \mathbf {D}_j}{\partial \alpha _{jr}}\right| _{\alpha =\widehat{\alpha }}\), \(r = 1, 2, \ldots , \dim (\varvec{\alpha }_j)\); and \(\left. \dot{\mathbf {E}}_{ij}(s) = \frac{\partial \mathbf {E}_{ij}}{\partial \phi _{js}}\right| _{\phi =\widehat{\phi }}\), \(s = 1, 2\). For the DEC structure, the derivatives are given by

$$\begin{aligned}&\frac{\partial \mathbf {E}_{ij}}{\partial \phi _{j1}} = |t_{ik} - t_{ih}|^{\phi _{j2}}\phi _{j1}^{|t_{ik} - t_{ih}|^{\phi _{j2}}-1},\\&\frac{\partial \mathbf {E}_{ij}}{\partial \phi _{j2}} = |t_{ik} - t_{ih}|^{\phi _{j2}}\log (|t_{ik} - t_{ih}|)\log (\phi _{j1})\phi _{j1}^{|t_{ik} - t_{ih}|^{\phi _{j2}}}. \end{aligned}$$
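As an illustration only (again an editorial sketch rather than the authors' code), the DEC matrix and these elementwise derivatives can be computed as follows, assuming a vector of observation times and scalar parameters with \(\phi_{j1}\in(0,1)\) and \(\phi_{j2}\ge 0\); the function name and arguments are assumptions.

```python
import numpy as np

def dec_and_derivatives(times, phi1, phi2):
    """DEC matrix E_ij with (k, h) entry phi1 ** (|t_ik - t_ih| ** phi2),
    together with its elementwise derivatives with respect to phi1 and phi2."""
    times = np.asarray(times, dtype=float)
    lags = np.abs(np.subtract.outer(times, times))   # |t_ik - t_ih|
    power = lags ** phi2                             # |t_ik - t_ih| ** phi2
    E = phi1 ** power                                # diagonal entries equal 1
    dE_dphi1 = power * phi1 ** (power - 1.0)         # d E / d phi1 (elementwise)
    with np.errstate(divide="ignore", invalid="ignore"):
        dE_dphi2 = power * np.log(lags) * np.log(phi1) * E
    np.fill_diagonal(dE_dphi2, 0.0)                  # diagonal is constant in phi2
    return E, dE_dphi1, dE_dphi2
```

These derivatives enter \(\widehat{s}_{i,\varvec{\phi}_j}\) above; the empirical information matrix in Equation (13) is then typically approximated by accumulating the outer products \(\widehat{s}_i\widehat{s}_i^{\top}\) over subjects at the final EM iterate.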


About this article


Cite this article

de Alencar, F.H.C., Matos, L.A. & Lachos, V.H. Finite Mixture of Censored Linear Mixed Models for Irregularly Observed Longitudinal Data. J Classif 39, 463–486 (2022). https://doi.org/10.1007/s00357-022-09415-x



  • DOI: https://doi.org/10.1007/s00357-022-09415-x
