InertialAI-Alpha
A 325M parameter foundation model that reads text and time-series as a single language. Designed for finetuning on your domain data.

Unified Intelligence
Your infrastructure generates text and time-series simultaneously. InertialAI Alpha is built to understand both natively.
Traditional approaches stitch together separate models, losing context at the seams. By projecting continuous sensor data into the same vector space as language tokens, InertialAI Alpha can reason about physical systems with the semantic depth of an LLM.
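To illustrate the idea of projecting sensor readings into the same vector space as language tokens, here is a minimal NumPy sketch (not InertialAI Alpha's actual architecture; the patch size, embedding width, and random projection weights are hypothetical stand-ins for learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 64   # hypothetical embedding width shared with text tokens
PATCH = 16     # hypothetical number of sensor readings per "token"

# Hypothetical learned projection: maps a patch of raw readings into
# the same vector space as language-token embeddings.
W_patch = rng.normal(scale=0.02, size=(PATCH, D_MODEL))

def embed_series(series: np.ndarray) -> np.ndarray:
    """Split a 1-D sensor stream into patches and project each patch
    into a D_MODEL-dim vector, yielding pseudo-token embeddings."""
    n = len(series) // PATCH * PATCH
    patches = series[:n].reshape(-1, PATCH)   # (num_tokens, PATCH)
    return patches @ W_patch                  # (num_tokens, D_MODEL)

# Stand-in for a real text embedding table.
vocab_emb = rng.normal(scale=0.02, size=(1000, D_MODEL))
text_tokens = vocab_emb[[12, 7, 405]]         # three "words"

sensor = np.sin(np.linspace(0, 8 * np.pi, 128))  # 128 readings -> 8 tokens
sensor_tokens = embed_series(sensor)

# Both modalities now live in one sequence a transformer can attend over.
sequence = np.concatenate([text_tokens, sensor_tokens], axis=0)
print(sequence.shape)  # (11, 64)
```

Once both modalities share one sequence, standard attention layers can model cross-modal correlations directly, rather than fusing the outputs of two separate models.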
Designed for Adaptation
Multi-modal Fusion
Finetune on your time-series/sensor data and text simultaneously; both are treated as first-class data types.
Sample Efficiency
Pretrained on 150B tokens of physical signals, InertialAI Alpha learns your specific patterns with 100x fewer examples than training from scratch.
Data Privacy
Finetuning happens within your VPC. Your proprietary manufacturing or patient data never leaves your secure environment.
Benchmark Results
Trained on a mix of text and time-series data, maintaining competitive performance in both modalities.
Language Understanding
Despite being trained on only 150B tokens, InertialAI Alpha achieves strong performance on reasoning benchmarks.
| Benchmark | Alpha (325M) | Gemma 3 (270M) | LFM-2 (350M) |
|---|---|---|---|
| Training Tokens | 138B | 6T | 10T |
| HellaSwag (0-shot) | 43.4% | 40.1% | 48.3% |
| PIQA (10-shot) | 69.5% | 67.6% | 69.8% |
| BoolQ (10-shot) | 56.5% | 51.7% | 57.2% |
| Winograd (0-shot) | 67.4% | 65.2% | 60.8% |
Matches or exceeds models trained on roughly 40x more text data.
Probabilistic Forecasting
Forecasts are evaluated with CRPS (Continuous Ranked Probability Score) across varying input context and output lengths. Lower is better.
| Domain | Alpha | TFM v1 | TFM v2 | Naive |
|---|---|---|---|---|
| Web/CloudOps | 0.63 | 0.94 | 0.66 | 1.00 |
| Energy (Grid Load) | 0.89 | 0.78 | 0.67 | 1.00 |
| Transport | 0.64 | 0.59 | 0.50 | 1.00 |
| Retail (Sales) | 0.48 | 0.42 | 0.42 | 1.00 |
| Nature (Weather) | 0.72 | 0.41 | 0.35 | 1.00 |
Designed to be finetuned for downstream multi-modal applications.
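For readers unfamiliar with CRPS: for a sample-based forecast it can be estimated as E|X − y| − ½·E|X − X′|, which rewards forecasts that are both calibrated and sharp. A minimal sketch (the data here is synthetic, not from the benchmark above):

```python
import numpy as np

def crps_samples(samples: np.ndarray, y: float) -> float:
    """Empirical CRPS of a sample-based forecast for observation y:
    E|X - y| - 0.5 * E|X - X'|. Lower is better."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
y_true = 2.0

# Two forecasts with the same mean: one sharp, one vague.
sharp = rng.normal(loc=2.0, scale=0.2, size=1000)
wide = rng.normal(loc=2.0, scale=2.0, size=1000)

# The sharp, well-calibrated forecast scores a lower (better) CRPS.
assert crps_samples(sharp, y_true) < crps_samples(wide, y_true)
```

A point forecast that is exactly right scores a CRPS of zero; in the table above, scores appear normalized so the naive baseline reads 1.00.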
Real-World Applications
See how InertialAI Alpha handles complex, multi-modal tasks.
Predicting Cardiac Events

Early detection of cardiac events in the ER requires analyzing two distinct data sources: continuous ECG waveforms from bedside monitors and unstructured clinical notes from triage. Standard approaches process these separately, missing critical correlations between physiological signals and patient history.
InertialAI Alpha jointly encodes the ECG time-series and clinical text into a shared representation. This allows the model to detect patterns such as arrhythmias that coincide with reported symptoms of faintness. By understanding both modalities together, it achieves higher accuracy than single-modality baselines such as Transformer, XGBoost, and LSTM models.
| Model | AUROC (Higher is Better) | Performance Gap |
|---|---|---|
| InertialAI Alpha | 0.934 | - |
| Transformer | 0.891 | -4.6% |
| XGBoost | 0.862 | -7.7% |
| LSTM | 0.814 | -12.8% |
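The "Performance Gap" column is the relative AUROC difference against Alpha, which can be reproduced from the table's own numbers:

```python
# AUROC scores from the table above.
auroc = {
    "InertialAI Alpha": 0.934,
    "Transformer": 0.891,
    "XGBoost": 0.862,
    "LSTM": 0.814,
}

alpha = auroc["InertialAI Alpha"]

# Relative gap vs. Alpha in percent: (model - alpha) / alpha * 100.
gaps = {m: round((s - alpha) / alpha * 100, 1) for m, s in auroc.items()}
print(gaps)
# {'InertialAI Alpha': 0.0, 'Transformer': -4.6, 'XGBoost': -7.7, 'LSTM': -12.8}
```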