Distributed simultaneous inference in generalized linear models via confidence distribution

https://doi.org/10.1016/j.jmva.2019.104567

Abstract

We propose a distributed method for simultaneous inference in the generalized linear models framework for datasets whose sample size is much larger than the number of covariates, i.e., N ≫ p. When such datasets are too big to be analyzed entirely by a single centralized computer, or when the data are already stored in distributed database systems, the divide-and-combine strategy has been the method of choice for scalability. Because of the partitioning, the sub-dataset sample sizes may be uneven, and some may be close to p, which calls for regularization techniques to improve numerical stability. However, there is a lack of clear theoretical justification and practical guidelines for combining results obtained from separate regularized estimators, especially when the final objective is simultaneous inference for a group of regression parameters. In this paper, we develop a strategy for combining bias-corrected lasso-type estimates by using confidence distributions. We show that the resulting combined estimator achieves the same estimation efficiency as the maximum likelihood estimator applied to the centralized data. As demonstrated by simulated and real data examples, our divide-and-combine method yields nearly identical inference to the centralized benchmark.
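To make the workflow concrete, the following is a minimal sketch, assuming a logistic regression model, a one-step (Newton-type) bias correction of the per-block lasso estimate, and an inverse-variance (precision-weighted) combination, which is the standard way normal confidence distributions are combined in fixed-effect meta-analysis. The paper's exact de-biasing construction and combination rule may differ, and the helper names `debiased_lasso_logistic` and `combine_confidence_distributions` are hypothetical, not code from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def debiased_lasso_logistic(X, y, C=1.0):
    """Fit an L1-penalized logistic regression on one data block and apply a
    one-step bias correction (hypothetical helper; illustrative only)."""
    lasso = LogisticRegression(penalty="l1", solver="liblinear",
                               C=C, fit_intercept=False)
    lasso.fit(X, y)
    beta = lasso.coef_.ravel()
    mu = 1.0 / (1.0 + np.exp(-X @ beta))           # fitted probabilities
    score = X.T @ (y - mu)                         # unpenalized score at the lasso solution
    hess = (X * (mu * (1.0 - mu))[:, None]).T @ X  # observed information (invertible since n_k >> p)
    cov = np.linalg.inv(hess)                      # asymptotic covariance of the corrected estimate
    return beta + cov @ score, cov                 # one-step correction removes the lasso shrinkage bias

def combine_confidence_distributions(estimates, covariances):
    """Combine block-level normal confidence distributions by
    inverse-variance (precision) weighting."""
    precisions = [np.linalg.inv(S) for S in covariances]
    combined_cov = np.linalg.inv(sum(precisions))
    combined_est = combined_cov @ sum(P @ b for P, b in zip(precisions, estimates))
    return combined_est, combined_cov

# Small synthetic illustration: N = 20000, p = 10, split into K = 4 blocks.
rng = np.random.default_rng(0)
N, p, K = 20000, 10, 4
beta_true = np.concatenate([np.ones(3), np.zeros(p - 3)])
X = rng.normal(size=(N, p))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))
results = [debiased_lasso_logistic(Xk, yk)
           for Xk, yk in zip(np.array_split(X, K), np.array_split(y, K))]
beta_hat, Sigma_hat = combine_confidence_distributions(*zip(*results))
se = np.sqrt(np.diag(Sigma_hat))  # standard errors for Wald-type simultaneous inference
print(np.round(beta_hat, 3), np.round(se, 3))
```

The combined estimate and covariance can then be used for Wald-type simultaneous tests or confidence regions for a group of coefficients, in place of the centralized fit.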

AMS 2010 subject classifications

primary 62H15; secondary 62F12

Keywords

Bias correction
Confidence distribution
Inference
Lasso
Meta-analysis
Parallel computing
