A Bayesian Multiplex Graph Classifier of Functional Brain Connectivity Across Diverse Tasks of Cognitive Control

  • Research
  • Published in Neuroinformatics

Abstract

This article investigates the impact of aging on functional connectivity across different cognitive control scenarios, with particular emphasis on identifying brain regions significantly associated with early aging. By conceptualizing functional connectivity within each cognitive control scenario as a graph, with brain regions as nodes, the statistical challenge is to devise a regression framework that predicts a binary scalar outcome (aging or normal) from multiple graph predictors. Popular regression methods with multiplex graph predictors often fail to effectively harness information within and across graph layers, which can compromise inference and predictive accuracy, especially for smaller sample sizes. To address this challenge, we propose the Bayesian Multiplex Graph Classifier (BMGC). Accounting for multiplex graph topology, our method models edge coefficients at each graph layer using bilinear interactions between the latent effects associated with the two nodes connected by the edge. The approach also employs a variable selection framework on node-specific latent effects from all graph layers to identify influential nodes linked to observed outcomes. Crucially, the proposed framework is computationally efficient and quantifies the uncertainty in node identification, coefficient estimation, and binary outcome prediction. BMGC outperforms alternative methods on these metrics in simulation studies. BMGC was further validated using an fMRI study of brain networks in adults. The proposed BMGC technique identified that the sensorimotor brain network obeys certain lateral symmetries, whereas the default mode network exhibits significant brain asymmetries associated with early aging.
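The bilinear edge-coefficient construction described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the dimensions (V nodes, H latent dimensions, L layers) and the random draws are made up, and theta is drawn from {-1, 0, 1} to mirror the trinary prior used in the appendix.

```python
import numpy as np

rng = np.random.default_rng(0)

V, H, L = 10, 3, 2  # nodes, latent dimension, graph layers (illustrative)

# Node-specific latent effects xi_v (one H-vector per node per layer) and a
# diagonal interaction vector theta per layer, as in the bilinear model.
xi = rng.normal(size=(L, V, H))
theta = rng.choice([-1.0, 0.0, 1.0], size=(L, H))

def edge_coefficients(xi_layer, theta_layer):
    """Coefficient for edge (u, v): xi_u^T diag(theta) xi_v."""
    G = xi_layer @ np.diag(theta_layer) @ xi_layer.T  # V x V bilinear form
    iu = np.triu_indices(V, k=1)                      # undirected edges only
    return G[iu]                                      # length V(V-1)/2

# One coefficient vector gamma^(alpha) per graph layer.
gamma = [edge_coefficients(xi[a], theta[a]) for a in range(L)]
```

Because theta enters as a diagonal matrix, the bilinear form is symmetric, so only the upper triangle is needed for an undirected graph.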


Data Availability

Data is provided within the supplementary information files.


Author information

Authors and Affiliations

Authors

Contributions

S.G. conceived the idea, formulated the model, and derived the priors for the parameters and the full conditionals for MCMC computation. S.G. took the lead in drafting the article. J.R.A. implemented the model, drafted parts of the simulation study and the real data analysis, and generated and added the simulation figures and tables. I.D.D. took the lead in drafting the real data analysis section, edited the draft, provided scientific data and valuable scientific insights, and generated the real-data result figures. All authors reviewed the manuscript.

Corresponding author

Correspondence to Sharmistha Guha.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


Below we present the full conditional posterior distributions for all parameters of the generalized linear model detailed in the “Model and Prior Formulation” section. These distributions are used to implement a Markov chain Monte Carlo (MCMC) algorithm via Gibbs sampling, whose draws constitute samples from the joint posterior distribution of the model parameters, facilitating the posterior inference elaborated in the “Posterior Inference” section. Let \(\kappa _i=(y_i-0.5)/\omega _i\) and \({\varvec{\Omega }} = diag\left( \frac{1}{\omega _1},...,\frac{1}{\omega _n}\right)\). Assume \({\varvec{X}}\) is an \(n\times p\) matrix with ith row \({\varvec{x}} _i\), and \({\varvec{A}} ^{(\alpha )}\) is an \(n\times V(V-1)/2\) matrix with ith row \({\varvec{w}} _i^{(\alpha )}\). The full conditional distributions for the model parameters are then:

  • \(\mu |-\sim N\left( \frac{{\textbf {1}}_{n}^{T} {\varvec{\Omega }} ^{-1}( {\varvec{\kappa }} - {\varvec{X}} {\varvec{\beta }} _x - \sum _{\alpha =1}^L {\varvec{A}} ^{(\alpha )} {\varvec{\gamma }} ^{(\alpha )})}{{\textbf {1}}_{n}^{T} {\varvec{\Omega }} ^{-1}{} {\textbf {1}}_{n}},\frac{1}{{\textbf {1}}_{n}^{T} {\varvec{\Omega }} ^{-1}{} {\textbf {1}}_{n}} \right)\), where \({\varvec{\kappa }} =(\kappa _1,...,\kappa _n)^T\) is an n-dimensional vector of continuous outcomes over all samples.

  • \({\varvec{\beta }} _x|- \sim N( {\varvec{\mu }} _{ {\varvec{\beta }} }, {\varvec{\Sigma }} _{ {\varvec{\beta }} })\), where

    $$\begin{aligned} {\varvec{\Sigma }} _{ {\varvec{\beta }} }=( {\varvec{X}} ^T {\varvec{\Omega }} ^{-1} {\varvec{X}} + {\varvec{I}} _{p})^{-1},\,\, {\varvec{\mu }} _{ {\varvec{\beta }} }= {\varvec{\Sigma }} _{ {\varvec{\beta }} } {\varvec{X}} ^T {\varvec{\Omega }} ^{-1}( {\varvec{\kappa }} - \mu {\textbf {1}}_{n} - \sum _{\alpha =1}^L {\varvec{A}} ^{(\alpha )} {\varvec{\gamma }} ^{(\alpha )}). \end{aligned}$$
  • \({\varvec{\xi }} _v|-\sim \eta _v N( {\varvec{\mu }} _v, {\varvec{\Sigma }} _v)+(1-\eta _v)\Delta ( {\varvec{0}} )\), where

    $$\begin{aligned} {\varvec{\Sigma }} _v=( {\varvec{Z}} _v^T {\varvec{\Omega }} ^{-1} {\varvec{Z}} _v+ {\varvec{K}} ^{-1})^{-1},\,\, {\varvec{\mu }} _v= {\varvec{\Sigma }} _v {\varvec{Z}} _v^T {\varvec{\Omega }} ^{-1}\tilde{ {\varvec{\kappa }} }_v. \end{aligned}$$

    Here \({\varvec{Z}} _v\) is an \(n\times LH\) matrix whose ith row is given by \((\sum _{k<v}w_{i,(k,v)}^{(1)} {\varvec{\xi }} _k^{(1)T} {\varvec{\Theta }} ^{(1)}+\sum _{v<k}w_{i,(v,k)}^{(1)} {\varvec{\xi }} _k^{(1)T} {\varvec{\Theta }} ^{(1)},..., \sum _{k<v}w_{i,(k,v)}^{(L)} {\varvec{\xi }} _k^{(L)T} {\varvec{\Theta }} ^{(L)}+\sum _{v<k}w_{i,(v,k)}^{(L)} {\varvec{\xi }} _k^{(L)T} {\varvec{\Theta }} ^{(L)})\) and \(\tilde{ {\varvec{\kappa }} }_v\) is an n dimensional vector with its ith entry

    $$\tilde{\kappa }_{i} = \kappa _i - \mu - {\varvec{x}} _i^T {\varvec{\beta }} _x - \sum _{k<k':k,k'\ne v} \sum _{\alpha =1}^L w_{i,(k,k')}^{(\alpha )} {\varvec{\xi }} _k^{(\alpha )T} {\varvec{\Theta }} ^{(\alpha )} {\varvec{\xi }} _{k'}^{(\alpha )}.$$
  • \(\eta _v|-\sim Ber(\tilde{\eta }_v)\), where

    $$\tilde{\eta }_v=\frac{\delta N(\tilde{ {\varvec{\kappa }} }_v| {\varvec{0}} , {\varvec{Z}} _v {\varvec{K}} {\varvec{Z}} _v^T+ {\varvec{\Omega }} )}{\delta N(\tilde{ {\varvec{\kappa }} }_v| {\varvec{0}} , {\varvec{Z}} _v {\varvec{K}} {\varvec{Z}} _v^{T}+ {\varvec{\Omega }} )+(1-\delta )N(\tilde{ {\varvec{\kappa }} }_v| {\varvec{0}} , {\varvec{\Omega }} )}.$$
  • \(\delta |-\sim Beta(a+\sum _{v=1}^V\eta _v,b+V-\sum _{v=1}^V\eta _v)\).

  • \({\varvec{K}} |-\sim IW\left( \nu +\#\{v:\eta _v=1\}, {\varvec{I}} _{LH}+\sum _{\{v:\eta _v =1\}} {\varvec{u}} _v {\varvec{u}} _v^T\right)\), where \({\varvec{u}} _v=( {\varvec{u}} _v^{(1)T},..., {\varvec{u}} _v^{(L)T})^T\).

  • $$\theta _{h}^{(\alpha )} \mid - \sim {\left\{ \begin{array}{ll} 1 &{} w.p. \; p_{h,1}^{(\alpha )} \\ 0 &{} w.p. \; p_{h,2}^{(\alpha )} \\ -1 &{} w.p. \; p_{h,3}^{(\alpha )} \end{array}\right. }$$

    where, for \((j,s) \in \{(1,1), (2,0), (3,-1)\}\), \(p_{h,j}^{(\alpha )} = \dfrac{\pi _{h, j}^{(\alpha )} N( {\varvec{\kappa }} \,|\, \mu {\textbf {1}}_{n} + {\varvec{X}} {\varvec{\beta }} _x + {\varvec{A}} ^{(\alpha )} ( {\varvec{\gamma }} ^{(\alpha )})^{(\theta _{h}^{(\alpha )} = s)}+\sum _{\alpha '\ne \alpha } {\varvec{A}} ^{(\alpha ')} {\varvec{\gamma }} ^{(\alpha ')}, {\varvec{\Omega }} )}{S}\), and \(S=\sum _{s\in \{1,0,-1\}}\pi _{h, s}^{(\alpha )} N( {\varvec{\kappa }} \,|\, \mu {\textbf {1}}_{n} + {\varvec{X}} {\varvec{\beta }} _x + {\varvec{A}} ^{(\alpha )} ( {\varvec{\gamma }} ^{(\alpha )})^{(\theta _{h}^{(\alpha )} = s)}+\sum _{\alpha '\ne \alpha } {\varvec{A}} ^{(\alpha ')} {\varvec{\gamma }} ^{(\alpha ')}, {\varvec{\Omega }} )\).

  • \(\omega _i|-\sim PG(1,\mu + {\varvec{x}} _{i}^{T} {\varvec{\beta }} _x+\sum _{\alpha =1}^L {\varvec{w}} _i^{(\alpha )T} {\varvec{\gamma }} ^{(\alpha )})\), for \(i=1,...,n.\)
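To make the Pólya-Gamma Gibbs step above concrete, here is a minimal self-contained sketch for a plain Bayesian logistic regression rather than the full BMGC model: the latent \(\omega_i\) are drawn, then the regression coefficients are updated from their Gaussian full conditional. The PG(1, z) draw uses a truncated version of the infinite-sum representation of the Pólya-Gamma distribution (an approximation), and the data, prior variance, and dimensions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_pg(z, K=200):
    """Approximate PG(1, z) draws via the truncated sum representation
    PG(1, z) = (1 / (2 pi^2)) * sum_k g_k / ((k - 1/2)^2 + z^2 / (4 pi^2)),
    with g_k ~ Exp(1) i.i.d. and K terms retained."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    k = np.arange(1, K + 1)
    g = rng.exponential(size=(z.size, K))
    denom = (k - 0.5) ** 2 + (z[:, None] ** 2) / (4.0 * np.pi ** 2)
    return (g / denom).sum(axis=1) / (2.0 * np.pi ** 2)

def gibbs_logistic(X, y, n_iter=400, prior_var=10.0):
    """Gibbs sampler for y_i ~ Bernoulli(logit^{-1}(x_i^T beta)),
    beta ~ N(0, prior_var * I), via Polya-Gamma augmentation."""
    n, p = X.shape
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        omega = sample_pg(X @ beta)            # omega_i | beta ~ PG(1, x_i^T beta)
        V = np.linalg.inv((X.T * omega) @ X + np.eye(p) / prior_var)
        m = V @ (X.T @ (y - 0.5))              # kappa_i = y_i - 1/2
        beta = rng.multivariate_normal(m, V)   # beta | omega, y is Gaussian
        draws[t] = beta
    return draws

# Simulated data: intercept plus one covariate.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

draws = gibbs_logistic(X, y)
post_mean = draws[100:].mean(axis=0)  # discard burn-in
```

The same conjugate structure is what makes the BMGC updates tractable: conditional on \(\omega\), every Gaussian-prior block (here a single \(\beta\); in the paper \(\mu\), \({\varvec{\beta }}_x\), and the latent effects) has a closed-form Gaussian full conditional.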

About this article


Cite this article

Guha, S., Rodriguez-Acosta, J. & Dinov, I.D. A Bayesian Multiplex Graph Classifier of Functional Brain Connectivity Across Diverse Tasks of Cognitive Control. Neuroinform 22, 457–472 (2024). https://doi.org/10.1007/s12021-024-09670-w
