# When can Multi-Site Datasets be Pooled for Regression? Hypothesis Tests, l2-consistency and Neuroscience Applications: Summary

## Introduction

### Some Basic Concepts and Issues

While the challenges posed by large-scale datasets are compelling, one is often faced with a fairly distinct set of technical issues for studies in biological and health sciences. For instance, a sizable portion of scientific research is carried out by small or medium sized groups supported by modest budgets. Hence, there are financial constraints on the number of experiments and/or number of participants within a trial, leading to small datasets. Similar datasets from multiple sites can be pooled to potentially improve statistical power and address the above issue.

#### Regression Problems

Ridge and Lasso regression are powerful techniques generally used for creating parsimonious models in the presence of a ‘large’ number of features. Here ‘large’ can typically mean either of two things:

• Large enough to enhance the tendency of a model to overfit (as low as 10 variables might cause overfitting)
• Large enough to cause computational challenges. With modern systems, this situation might arise in case of millions or billions of features

#### Ridge Regression and Overfitting:

Ridge regression is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates are unbiased, but their variances are large, so they may be far from the true values. By adding a degree of bias to the regression estimates, ridge regression reduces the standard errors, in the hope that the net effect is more reliable estimates.

Ridge regression is commonly used in econometrics and in machine learning. When fitting a model, unnecessary inputs or collinear inputs can produce very large coefficients (with large variance). Ridge regression performs L2 regularization, i.e., it adds the sum of squared coefficients to the optimization objective. Thus, ridge regression optimizes the following:

• Objective = RSS + λ * (sum of square of coefficients)

Note that performing ridge regression is equivalent to minimizing the RSS (Residual Sum of Squares) under the constraint that the sum of squared coefficients is less than some function of λ, say s(λ). Ridge regression is usually tuned by cross-validation: the model is trained on the training set for different values of λ, optimizing the above objective, and each of the resulting models (one per λ) is then evaluated on the validation set.
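As a concrete illustration, cross-validated ridge regression can be sketched with scikit-learn; the data, parameter grid, and seed below are invented for illustration and are not from the paper:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Toy data with strongly correlated predictors (illustrative only).
rng = np.random.default_rng(0)
n, p = 100, 10
base = rng.normal(size=(n, 1))
X = base + 0.1 * rng.normal(size=(n, p))   # highly collinear columns
beta = np.zeros(p); beta[0] = 2.0
y = X @ beta + rng.normal(scale=0.5, size=n)

# Cross-validate over a grid of λ (called `alphas` in scikit-learn):
# each candidate λ is scored on held-out folds and the best one is kept.
model = RidgeCV(alphas=np.logspace(-3, 3, 25), cv=5).fit(X, y)
print("selected λ:", model.alpha_)
print("coefficients:", model.coef_.round(2))
```

Because the columns are nearly identical, the ridge solution spreads the signal across the correlated features instead of producing one huge, unstable coefficient.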

#### Lasso Regression and Model Selection:

LASSO stands for Least Absolute Shrinkage and Selection Operator. Lasso regression performs L1 regularization, i.e. it adds a factor of sum of absolute value of coefficients in the optimization objective. Thus, lasso regression optimizes the following.

• Objective = RSS + λ * (sum of absolute value of coefficients)
1. λ = 0: same coefficients as simple linear regression
2. λ = ∞: all coefficients zero (same logic as before)
3. 0 < λ < ∞: coefficients between 0 and those of simple linear regression

A distinctive feature of lasso regression is its role as a selection operator: it usually shrinks some of the coefficients exactly to zero while keeping the values of the others, so it can be used to drop unnecessary coefficients from the model.
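This selection behavior can be seen in a small sketch (data, sizes, and the λ value are invented for illustration); note that scikit-learn calls λ `alpha`:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative data: only the first 3 of 20 features are truly active.
rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(scale=0.5, size=n)

# L1 regularization shrinks the coefficients of irrelevant features
# exactly to zero, acting as a selection operator.
lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print("selected features:", selected)
```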

Another type of regression model worth mentioning here is the Elastic Net. This model uses both L1 and L2 regularization, combining the penalties of lasso regression and ridge regression in one objective function. This type of regression could also be of interest in the context of this paper. Its objective function is shown below, where both the sum of absolute values of coefficients and the sum of squared coefficients appear: $\hat{\beta} = \arg\min_{\beta} \Vert y - X \beta \Vert_2^2 + \lambda_2 \Vert\beta\Vert_2^2 + \lambda_1 \Vert\beta\Vert_1$
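A minimal Elastic Net sketch (toy data, invented parameter values); scikit-learn parametrizes the two penalties jointly through `alpha` and `l1_ratio` rather than separate λ1 and λ2:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Toy data: only the first 3 of 20 features are truly active.
rng = np.random.default_rng(2)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(scale=0.5, size=n)

# scikit-learn's penalty is
#   alpha * (l1_ratio * ||β||_1 + 0.5 * (1 - l1_ratio) * ||β||_2^2),
# i.e., one knob for overall strength and one for the L1/L2 mix.
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
active = np.flatnonzero(enet.coef_)
print("nonzero coefficients:", active)
```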

The bias is error from erroneous assumptions in the learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs (underfitting). The variance is error from sensitivity to small fluctuations in the training set. High variance can cause an algorithm to model the random noise in the training data, rather than the intended outputs (overfitting). The mean squared error (MSE) of an estimator decomposes as variance + squared bias. One noteworthy theorem is the following:

• For ridge regression, there exists a certain λ such that the MSE of coefficients calculated by ridge regression is smaller than that calculated by direct regression.
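The theorem can be illustrated numerically. The sketch below (setup entirely invented: a small collinear design, a fixed true coefficient vector, and a coarse λ grid) compares the coefficient MSE of ridge estimates against ordinary least squares (λ = 0) over repeated noise draws:

```python
import numpy as np

# Collinear design: OLS coefficient variance is huge, so some λ > 0
# should give ridge estimates with smaller MSE than OLS.
rng = np.random.default_rng(3)
n, p = 50, 10
base = rng.normal(size=(n, 1))
X = base + 0.05 * rng.normal(size=(n, p))   # strong multicollinearity
beta_true = rng.normal(size=p)

def coef_mse(lmbda, reps=200):
    """Average ||β̂ - β*||² of the ridge estimate over noise draws."""
    XtX = X.T @ X
    err = 0.0
    for _ in range(reps):
        y = X @ beta_true + rng.normal(size=n)
        b = np.linalg.solve(XtX + lmbda * np.eye(p), X.T @ y)
        err += np.sum((b - beta_true) ** 2)
    return err / reps

mse_ols = coef_mse(0.0)
mse_ridge = min(coef_mse(l) for l in [0.01, 0.1, 1.0, 10.0])
print(f"OLS MSE: {mse_ols:.2f}, best ridge MSE: {mse_ridge:.2f}")
```

The grid search over λ is only a crude stand-in for the existence claim in the theorem; it simply exhibits one λ that beats OLS.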

### Related Work

#### Meta-analysis approaches

Meta-analysis is a statistical analysis that combines the results of several studies. There are several methods for non-imaging meta-analysis: p-value combining, fixed effects models, random effects models, and meta-regression. When datasets at different sites cannot be shared or pooled, various strategies exist that accumulate the general findings from analyses on the separate datasets. However, minor violations of assumptions can lead to misleading scientific conclusions (Greco et al., 2013), and substantial personal judgment (and expertise) is needed to conduct them.

The idea of addressing “shift” across datasets has been rigorously studied in statistical machine learning. However, these methods focus on the algorithm itself and do not address whether pooling the datasets, after applying the calculated adaptation (i.e., transformation), is beneficial. The goal in this work is to assess whether multiple datasets can be pooled (either before or, usually, after applying the best domain adaptation methods) to improve the estimation of the relevant coefficients in linear regression. A hypothesis test is proposed to directly address this question.

#### Simultaneous High-Dimensional Inference

Simultaneous high-dimensional inference models are an active research topic in statistics. Multi sample-splitting uses half of the dataset for feature selection and the remaining half for calculating p-values. The authors use contributions from this area to extend their results to the higher-dimensional setting.

## The Hypothesis Test

The hypothesis test to evaluate statistical power improvements (e.g., in mean squared error) when running a regression model on a pooled dataset is discussed below. If β corresponds to the coefficient vector (i.e., predictor weights), then the regression model is

• $\min_{\beta} \frac{1}{n}\left\| y-X\beta \right\|_2^2$ ........ (1)

where $X \in \mathbb{R}^{n×p}$ and $y \in \mathbb{R}^{n×1}$ denote the feature matrix of predictors and the response vector respectively. If k denotes the number of sites, a domain adaptation scheme needs to be applied to account for the distributional shifts between the k different predictors $\lbrace X_i \rbrace_{i=1}^{k}$, and then a regression model is run. If the underlying “concept” (i.e., the relationship between predictors and responses) can be assumed to be the same across the different sites, then it is reasonable to impose the same β for all sites. For example, the influence of CSF protein measurements on the cognitive scores of an individual may be invariant to demographics. If the distributional mismatch correction is imperfect, we may define $\Delta\beta_i = \beta_i - \beta^*$ with $i \in \{1,\dots,k\}$ as the residual difference between the site-specific coefficients and the true shared coefficient vector (in the ideal case, $\Delta\beta_i = 0$). This yields the multi-site regression objective (Eq. 2), where $\tau_i$ is the weighting parameter for each site:

• $\min_{\beta} \displaystyle \sum_{i=1}^k {\tau_i^2\left\| y_i-X_i\beta \right\|_2^2}$ ......... (2)

where for each site i we have $y_i = X_iβ_i +\epsilon_i$ and $\epsilon_i ∼ N (0, σ^2_i)$

### Separate Regression or Shared Regression?

Since the underlying relationship between predictors and responses is the same across the different datasets (from which the pooled data are drawn), the estimates of $\beta_i$ across all k sites are restricted to be the same. Without this constraint, (3) is equivalent to fitting a regression separately on each site. To explore whether this constraint improves estimation, the mean squared error (MSE) needs to be examined. Hence, using site 1 as the reference, setting $\tau_1 = 1$ in (2), and taking $\beta^* = \beta_1$,

• $\min_{\beta} \frac{1}{n}\left\| y_1-X_1\beta \right\|_2^2 + \displaystyle \sum_{i=2}^k {\tau_i^2\left\| y_i-X_i\beta \right\|_2^2}$ .........(3)

To evaluate whether MSE is reduced, we first need to quantify the change in the bias and variance of (3) compared to (1).

#### Case 1: Sharing all $\beta$s

$n_i$: sample size of site i
$\hat{\beta}_i$: regression estimate from a specific site i
$\Delta\beta^T$: length-$kp$ vector (the site-wise differences $\Delta\beta_i$ stacked together)

Lemma 2.2 bounds the increase in bias and the reduction in variance. Theorem 2.3 is the authors' main test result. Although $\sigma_i$ is typically unknown, it can easily be replaced by its site-specific estimate. Theorem 2.3 implies that we can conduct a test based on a non-central $\chi^2$ distribution of the statistic.

Theorem 2.3 implies that the sites, in fact, do not even need to share the full dataset to assess whether pooling will be useful. Instead, the test only requires very high-level statistical information such as $\hat{\beta}_i,\hat{\Sigma}_i,\sigma_i$ and $n_i$ for all participating sites – which can be transferred without computational overhead.

One can find R code for the hypothesis test for Case 1 at https://github.com/hzhoustat/ICML2017, as provided by the authors. In particular, the Hypotest_allparam.R script implements the hypothesis test, while Simultest_allparam.R provides simulation examples that illustrate the application of the test under various settings.
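The summary-level exchange can be sketched as follows. All numbers are invented, and the Wald-style statistic below is NOT the statistic of Theorem 2.3 (which involves a non-central $\chi^2$ threshold); it only illustrates that a test of this kind needs nothing beyond $\hat{\beta}_i, \hat{\Sigma}_i, \sigma_i, n_i$ from each site:

```python
import numpy as np
from scipy import stats

# Hypothetical summaries shared by two sites (all values invented).
# Each site transmits only beta_hat, Sigma_hat, sigma and n.
p = 3
site1 = dict(beta=np.array([1.0, 0.5, -0.3]),
             Sigma=np.eye(p), sigma=1.0, n=80)
site2 = dict(beta=np.array([1.1, 0.4, -0.2]),
             Sigma=np.eye(p), sigma=1.2, n=60)

# Illustrative Wald-style statistic on the difference of estimates:
# under "no harmful shift" the scaled squared difference is roughly
# chi-square with p degrees of freedom.
d = site1["beta"] - site2["beta"]
V = (site1["sigma"]**2 / site1["n"]) * np.linalg.inv(site1["Sigma"]) \
  + (site2["sigma"]**2 / site2["n"]) * np.linalg.inv(site2["Sigma"])
T = d @ np.linalg.solve(V, d)
p_value = stats.chi2.sf(T, df=p)
print(f"T = {T:.2f}, p = {p_value:.3f}")
```

For the paper's actual decision rule, see the authors' Hypotest_allparam.R script.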

#### Case 2: Sharing a subset of $\beta$s

For example, socio-economic status may (or may not) have a significant association with a health outcome (the response) depending on the country of the study (e.g., because of insurance coverage policies). Unlike Case 1, $\beta$ cannot be considered the same across all sites. The model in (3) will now include another design matrix of predictors $Z_i \in \mathbb{R}^{n_i \times q}$ and corresponding coefficients $\gamma_i$ for each site i,

$\min_{\beta,\gamma} \displaystyle\sum_{i=1}^{k}\tau_i^2\left\| y_i-X_i\beta-Z_i\gamma_i \right\|_2^2$ ... (9)

where

$y_i=X_i \beta^* + X_i \Delta \beta_i + Z_i \gamma_i^* + \epsilon_i, \tau_1=1$ ... (10)

While evaluating whether the MSE of $\beta$ is reduced, the MSE change in $\gamma$ is ignored because those coefficients correspond to site-specific variables. If $\hat{\beta}$ is close to the “true” $\beta^*$, it will also enable better estimation of the site-specific variables.

One can find R code for the hypothesis test for Case 2 at https://github.com/hzhoustat/ICML2017, as provided by the authors. In particular, the Hypotest_subparam.R script implements the hypothesis test, while Simultest_subparam.R provides simulation examples that illustrate the application of the test under various settings.

## Sparse Multi-Site Lasso and High Dimensional Pooling

Pooling multi-site data in the high-dimensional setting, where the number of predictors p is much larger than the number of subjects n (p >> n), leads to a sparse regime in which many variables have coefficients that are (close to) zero. Lasso variable selection helps select the right coefficients for representing the relationship between the predictors and the responses.

### $\ell_2$-consistency

In the setting of asymptotic analysis, the Lasso estimator is not variable-selection consistent if the “irrepresentable condition” fails. Irrepresentable condition: Lasso selects the true model consistently if and (almost) only if the predictors that are not in the true model are “irrepresentable” (in a sense to be clarified) by the predictors that are in the true model. Even when the exact sparsity pattern is not recovered, the estimator can still be a good approximation to the truth; this suggests that, for Lasso, estimation consistency may be easier to achieve than variable-selection consistency.

In classical regression, $\ell_2$-consistency properties are well known. Imposing the same $\beta$ across sites works in (3) because we understand its consistency. In contrast, when p >> n, one cannot enforce a shared coefficient vector for all sites before the active set of predictors within each site is selected: directly imposing the same $\beta$ leads to a loss of $\ell_2$-consistency, making follow-up analysis problematic. Therefore, once a suitable model for high-dimensional multi-site regression is chosen, the first requirement is to characterize its consistency.

### Sparse Multi-Site Lasso Regression

The sparse multi-site Lasso variant is chosen because the multi-task Lasso underperforms when the sparsity pattern of predictors is not identical across sites. (The multi-task Lasso generalizes the Lasso to the multi-task setting by replacing the L1-norm regularization with a sum of sup-norm regularizers, which forces a shared sparsity pattern.) In the sparse multi-site Lasso, the hyperparameter $\alpha\in [0, 1]$ balances the two penalties: an L1 regularizer and a group-Lasso penalty on groups of features.

• Larger $\alpha$ weighs the L1 penalty more
• Smaller $\alpha$ puts more weight on the grouping.

Similar to a Lasso-based regularization parameter, $\lambda$ here will produce a solution path (to select coefficients) for a given $\alpha$.
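The role of $\alpha$ can be made concrete with a small sketch of the penalty (the exact formulation in the paper may differ in scaling; the matrix and parameter values are invented):

```python
import numpy as np

# B is a k x p coefficient matrix: row i holds site i's coefficients,
# and column j groups feature j across the k sites.
def sms_penalty(B, lam, alpha):
    l1 = np.abs(B).sum()                     # lasso part (elementwise)
    group = np.linalg.norm(B, axis=0).sum()  # group part (per feature)
    return lam * (alpha * l1 + (1 - alpha) * group)

B = np.array([[1.0, 0.0, 2.0],
              [1.0, 0.5, 0.0]])
# Larger alpha weighs the L1 penalty more; smaller alpha weighs grouping.
pen_hi = sms_penalty(B, lam=0.5, alpha=0.9)
pen_lo = sms_penalty(B, lam=0.5, alpha=0.1)
print(pen_hi, pen_lo)
```

With a small $\alpha$, a feature is penalized mostly through its whole cross-site column, which encourages sites to agree on the active set; with $\alpha$ near 1 each site's coefficients are penalized individually.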

### Setting the hyperparameter $\alpha$ using Simultaneous Inference

Step 1: They apply simultaneous inference (e.g., multi sample-splitting or the de-biased Lasso) using all features at each of the k sites, with FWER control. This step yields the “site-active” features for each site and therefore gives the set of always-active features and the sparsity patterns.

Step 2: Then, each site runs a Lasso and chooses a $\lambda_i$ based on cross-validation, and $\lambda_{\text{multi-site}}$ is set to the minimum among the best $\lambda$s from the sites. Using $\lambda_{\text{multi-site}}$, we can vary $\alpha$ to fit various sparse multi-site Lasso models; each run selects some number of always-active features. Then plot $\alpha$ versus the number of always-active features.

Step 3: Finally, based on the sparsity patterns from the site-active sets, they estimate whether the sparsity patterns across sites are similar or different (i.e., share few active features). Then, based on the plot from step (2), if the sparsity patterns from the site-active sets are different (similar) across sites, the smallest (largest) value of $\alpha$ that selects the minimum (maximum) number of always-active features is chosen.
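The $\lambda$ selection in step 2 can be sketched as follows (sites, sizes, and data are invented; scikit-learn calls $\lambda$ `alpha`):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Each site cross-validates its own λ; the multi-site run then uses
# the minimum of the per-site best values.
rng = np.random.default_rng(4)
k, n, p = 3, 100, 30
beta = np.zeros(p); beta[:5] = 1.0          # shared active features

best_lambdas = []
for i in range(k):
    X = rng.normal(size=(n, p))
    y = X @ beta + rng.normal(scale=0.5, size=n)
    fit = LassoCV(cv=5).fit(X, y)           # per-site cross-validation
    best_lambdas.append(fit.alpha_)

lambda_multisite = min(best_lambdas)
print("per-site λ:", np.round(best_lambdas, 4),
      "→ λ_multi-site:", lambda_multisite)
```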

## Experiments

There are 2 distinct experiments described:

1. Performing simulations to evaluate the hypothesis test and sparse multi-site Lasso;
2. Pooling 2 Alzheimer's Disease datasets and examining the improvements in statistical power. This experiment was also done with the view of evaluating whether pooling is beneficial for regression and whether it yields tangible benefits in investigating scientific hypotheses.

### Power and Type I Error

1. The first set of simulations evaluates Case 1 (Sharing all β): The simulations are repeated 100 times with 9 different sample sizes. As n increases, both MSEs decrease (two-site model and baseline single-site model), and the test tends to reject pooling the multi-site data.
2. The second set of simulations evaluates Case 2 (Sharing a subset of β): For small n, the MSE of the two-site model is much smaller than the baseline, and as the sample size increases this difference shrinks. The test accepts with high probability for small n, and as the sample size increases it rejects with high power.

### SMS Lasso L2 Consistency

In order to test the Sparse Multi-Site Model, the case where sparsity patterns are shared is considered separately from the case where they are not shared. Here, 4 sites with n = 150 samples each and p = 400 features were used.

1. Few Sparsity Patterns Shared: 6 shared features and 14 site-specific features (out of the 400) are set to be active in the 4 sites. The chosen $\alpha = 0.97$ has the smallest error across all $\lambda$s, implying better $\ell_2$-consistency. $\alpha = 0.97$ discovers more always-active features while preserving the ratio of correctly discovered active features to all discovered ones.
2. Most Sparsity Patterns Shared: 16 shared and 4 site-specific features are set to be active among all 400 features. The proposed choice of $\alpha = 0.25$ preserves the correctly discovered number of always-active features. The ratio of correctly discovered active features to all discovered features increases here.

### Combining AD Datasets from Multiple Sites

2. Show how pooling can be used (in certain regimes of high-dimensional and standard linear regression) even when the features differ across sites. For this, the authors derive the $\ell_2$-consistency rate, which supports the use of the sparse multi-site Lasso when sparsity patterns are not identical.