SVM Weights as a Predictor of Stability

Mehmet Ugurbil

University of Minnesota

02-Jul-2018

Aim

1. Investigate how informative the weights of an SVM model fit on the entire dataset are about feature strongness.

Null Hypothesis

SVM weights do not predict strongness.

Experiment Design

1. Fit an SVM on the entire dataset for different sample sizes.
2. Construct an AUC curve where the instances are the features, ranked by their SVM weights, and the target is whether each feature is strong or not (a minimal sketch follows this list).
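The following is a minimal sketch of step 2, assuming a scikit-learn style linear SVM; the function name, the use of LinearSVC and roc_auc_score, and the strong_mask labeling are illustrative assumptions rather than the original implementation.

    # Minimal sketch, assuming scikit-learn; names and API choices here
    # (svm_weight_auc, LinearSVC, roc_auc_score, strong_mask) are illustrative.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.metrics import roc_auc_score

    def svm_weight_auc(X, y, strong_mask, C=1.0):
        """Fit a linear SVM on the full dataset, then score how well the
        absolute feature weights separate strong from non-strong features.

        X           : (n_samples, n_features) data matrix
        y           : (n_samples,) class labels
        strong_mask : (n_features,) boolean array, True for strong features
        """
        model = LinearSVC(C=C, max_iter=10000).fit(X, y)
        weights = np.abs(model.coef_).ravel()       # rank features by |weight|
        return roc_auc_score(strong_mask, weights)  # AUC vs. strong / not strong

Repeating this for each sample size (and each repeat) would yield one AUC per setting, which can then be plotted against sample size.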

Observations

1. A monotonic increase in the power of SVM weights to detect strongness is observed as sample size increases.
2. Performance is higher in the absence of multiplicity.
3. Performance is higher when the weak variable count is increased, but it does not reach the predictive power achieved when multiplicity is removed.

Dataset Descriptions

TIE-Net = Original simulated data - TIE near-faithful causal network.
TIE-Net-Reduced1 = TIE-Net with multiplicity removed according to the original graph.
TIE-Net-Reduced2 = TIE-Net with multiplicity removed using the TIE* algorithm.
- - Note that this is not one dataset, but one for each repeat per sample size (550 total).
- - This also implies that feature stability is not meaningful for this dataset, but it is included for completeness.
TIE-Net-Weak1 = TIE-Net with the weak variables duplicated 50 times.
TIE-Net-Weak2 = TIE-Net-Weak1 with Gaussian noise added, using a uniformly random standard deviation (see the sketch after this list).
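Below is a minimal sketch of how the Weak variants might be generated, assuming NumPy; the weak_cols selection, the 50-fold duplication, and the noise-scale range are illustrative assumptions, not the original data-generation procedure.

    # Minimal sketch; weak_cols, copies=50, and max_sigma are assumed values.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_weak_variants(X, weak_cols, copies=50, max_sigma=1.0):
        """Weak1: append 'copies' exact duplicates of each weak column.
        Weak2: the same duplicates, each perturbed by Gaussian noise whose
        standard deviation is drawn uniformly at random per column."""
        duplicates = np.repeat(X[:, weak_cols], copies, axis=1)
        weak1 = np.hstack([X, duplicates])
        sigmas = rng.uniform(0.0, max_sigma, size=duplicates.shape[1])
        weak2 = np.hstack([X, duplicates + rng.normal(size=duplicates.shape) * sigmas])
        return weak1, weak2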

Experiment Results

Frame 1: [result figure not included]
Frame 2: [result figure not included]

The End