Time series foundation models rely on data augmentation to extend training coverage, yet augmentation strategies are chosen heuristically before training. SIAS provides a principled method to identify optimal augmentations through a decomposable quality score (affinity + diversity) and online contextual bandit selection.
- Augmentations evaluated: Jitter, Scaling, Time Warp, Magnitude Warp, Permutation, Spectral, Trend Injection (see the sketch after this list)
- Domains: Trend (154 trend series), Seasonal (123 seasonal series), Mixed (balanced), Noise (142 AR series)
- Setup: series length 256, forecast horizon 32, 160 train / 40 validation split, 15 epochs
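The transform implementations are not reproduced here; below is a minimal sketch of how three of the listed augmentations (jitter, scaling, permutation) are commonly implemented for univariate series. Function names, parameter names, and default strengths (`sigma`, `n_segments`) are illustrative assumptions, not the values used in these experiments.

```python
import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Add i.i.d. Gaussian noise to every time step."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scaling(x: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Multiply the whole series by a single random factor drawn near 1."""
    return x * np.random.normal(1.0, sigma)

def permutation(x: np.ndarray, n_segments: int = 4) -> np.ndarray:
    """Split the series into segments and shuffle their order; this breaks
    temporal structure, consistent with its low affinity score below."""
    segments = np.array_split(x, n_segments)
    order = np.random.permutation(len(segments))
    return np.concatenate([segments[i] for i in order])

# Example: augment one length-256 series, matching the setup above.
series = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.randn(256)
augmented = jitter(scaling(series))
```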
| Strategy | Trend MSE | Seasonal MSE | Mixed MSE |
|---|---|---|---|
The affinity-diversity score provides a fast, training-free proxy for augmentation effectiveness. Augmentations ranking high on this score also produce lower validation loss.
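The exact affinity and diversity formulas are not restated here, so the following is a minimal sketch of one plausible training-free instantiation, assuming affinity is measured as closeness of the augmented series to the original, diversity as dispersion across augmented views, and the two are combined additively as in the decomposable score described above. All function names and formulas below are assumptions for illustration.

```python
import numpy as np

def affinity(original: np.ndarray, augmented: np.ndarray) -> float:
    # Assumed proxy: closeness of the augmented series to the original,
    # mapped into (0, 1]; an identical series gives 1.0.
    rel_dist = np.linalg.norm(augmented - original) / (np.linalg.norm(original) + 1e-8)
    return float(1.0 / (1.0 + rel_dist))

def diversity(views: np.ndarray) -> float:
    # Assumed proxy: mean pairwise distance between augmented views of the
    # same series, squashed into (0, 1).
    n = len(views)
    dists = [np.linalg.norm(views[i] - views[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(1.0 - np.exp(-np.mean(dists)))

def quality_score(original: np.ndarray, augment_fn, n_views: int = 8) -> float:
    # Decomposable score: affinity + diversity, averaged over several
    # stochastic augmented views of the same series.
    views = np.stack([augment_fn(original) for _ in range(n_views)])
    aff = float(np.mean([affinity(original, v) for v in views]))
    return aff + diversity(views)
```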
The bandit selects different augmentations per domain: time warp dominates on trend data (78.7% of selections) and jitter on seasonal data (88.0%).
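The specific contextual bandit algorithm is not spelled out here, so the sketch below uses a simple per-domain epsilon-greedy selector as a stand-in, assuming the context is a domain label and the reward is derived from validation loss; the class name, hyperparameters, and reward definition are assumptions.

```python
import numpy as np
from collections import defaultdict

class EpsilonGreedySelector:
    """Illustrative per-domain bandit; the actual SIAS selector may use a
    different contextual bandit algorithm and reward definition."""

    def __init__(self, arms, epsilon: float = 0.1):
        self.arms = list(arms)  # augmentation names
        self.epsilon = epsilon
        self.counts = defaultdict(lambda: np.zeros(len(self.arms)))
        self.values = defaultdict(lambda: np.zeros(len(self.arms)))

    def select(self, context: str) -> str:
        # Explore with probability epsilon, otherwise exploit the best arm
        # observed so far for this domain context.
        if np.random.rand() < self.epsilon:
            idx = np.random.randint(len(self.arms))
        else:
            idx = int(np.argmax(self.values[context]))
        return self.arms[idx]

    def update(self, context: str, arm: str, reward: float) -> None:
        # Incremental mean update; the reward could be, e.g., the drop in
        # validation loss after training with this augmentation.
        i = self.arms.index(arm)
        self.counts[context][i] += 1
        n = self.counts[context][i]
        self.values[context][i] += (reward - self.values[context][i]) / n

selector = EpsilonGreedySelector(
    ["jitter", "scaling", "time_warp", "magnitude_warp",
     "permutation", "spectral", "trend_injection"])
choice = selector.select("trend")          # pick an augmentation for this batch
# ... train one step with `choice`, measure the reward, then:
selector.update("trend", choice, reward=0.01)
```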
SIAS achieves 0.8909 MSE in the trend domain, outperforming the best fixed baseline (0.8917 MSE) without prior domain knowledge.
The framework correctly identifies permutation as destructive (affinity 0.6749, lowest) despite its non-negligible diversity (0.8499).