Taylor & Francis Group

Supplementary files (8):
- 0_functions4analysis.r (16.97 kB)
- 1_dataPreprocessing.R (7.83 kB)
- 2_dataAnalysis.R (5.46 kB)
- IRWLStargetTiming.r (2.61 kB)
- RRvsMM.r (7.98 kB)
- effectOfinitialTarget.r (5.03 kB)
- multipleTargetsLOOCV.r (1.74 kB)
- ucgs_a_2035231_sm7212.pdf (524.7 kB)

Sequential Learning of Regression Models by Penalized Estimation

Version 2 2022-03-31, 18:00
Version 1 2022-01-31, 21:40
Dataset posted on 2022-03-31, 18:00, authored by Wessel N. van Wieringen and Harald Binder

When data arrive in a sequence of two or more datasets, modeling on the most recent dataset should take previous datasets into account. We specifically investigate a strategy for regression modeling when parameter estimates from previous data can be used as anchoring points, yet may not be available for all parameters, so that covariance information cannot be reused. A procedure is presented that updates through targeted penalized estimation, which shrinks the estimator toward a nonzero value. The parameter estimate from the previous data serves as this nonzero value when an update is sought from novel data. This naturally extends to a sequence of datasets with the same response but potentially only partial overlap in covariates. The iteratively updated regression parameter estimator is shown to be asymptotically unbiased and consistent. The penalty parameter is chosen through constrained cross-validated log-likelihood optimization. The constraint bounds from below the amount of shrinkage of the updated estimator toward the current one; the bound aims to preserve the updated estimator's goodness of fit on all-but-the-novel data. The proposed approach is compared to other regression modeling procedures. Finally, it is illustrated on an epidemiological study in which the data arrive in batches with differing covariate availability and the model is refitted whenever a novel batch becomes available. Supplementary materials for this article are available online.
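The core idea of targeted penalized estimation can be sketched in a few lines of R. This is a minimal illustration, not the authors' implementation: the function name `targeted_ridge` and the simulated data are assumptions, and only the textbook closed-form targeted ridge solution is shown, in which the previous batch's estimate plays the role of the nonzero shrinkage target.

```r
# Targeted ridge update: shrink the estimate from the novel data toward
# beta_prev, the parameter estimate obtained from the previous batch.
# (Illustrative sketch; function and variable names are hypothetical.)
targeted_ridge <- function(X, y, beta_prev, lambda) {
  p <- ncol(X)
  # Closed-form targeted ridge solution:
  #   beta_hat = (X'X + lambda * I)^{-1} (X'y + lambda * beta_prev)
  solve(crossprod(X) + lambda * diag(p),
        crossprod(X, y) + lambda * beta_prev)
}

# Simulated example: a novel batch arrives, and the fit is anchored at
# the estimate from the earlier batch.
set.seed(1)
X         <- matrix(rnorm(100 * 3), 100, 3)
beta_true <- c(1, -0.5, 2)
y         <- X %*% beta_true + rnorm(100)
beta_prev <- c(0.8, -0.4, 1.9)   # estimate carried over from earlier data
beta_hat  <- targeted_ridge(X, y, beta_prev, lambda = 10)
```

As `lambda` grows, the update collapses onto the previous estimate; as `lambda` shrinks toward zero, it approaches the ordinary least-squares fit on the novel data alone. In the paper, `lambda` is instead chosen by constrained cross-validated log-likelihood optimization.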
