by Liquid AI
LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.
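The 32 GB claim can be sanity-checked with simple byte accounting. This is a rough sketch, not an official sizing guide: in a Mixture-of-Experts model all 24B parameters must be resident in memory even though only 2B are active per token, and real usage also includes KV cache and runtime overhead, which this estimate ignores.

```python
# Rough weight-memory estimate for a 24B-parameter MoE model.
# All experts stay resident, so total (not active) parameters count.
TOTAL_PARAMS = 24e9  # 24B parameters

def weight_memory_gb(params: float, bits_per_param: float) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{weight_memory_gb(TOTAL_PARAMS, bits):.0f} GB")
# 16-bit weights need ~48 GB; an 8-bit quantization (~24 GB)
# is what fits the stated 32 GB RAM budget, with headroom for cache.
```

So the 32 GB figure implies a quantized deployment: full-precision weights alone would exceed it roughly twofold.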
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Recency | 100 | 15% | +15.0 |
| Context Window | 72 | 15% | +10.7 |
| Capabilities | 14 | 25% | +3.6 |
| Versatility | 33 | 10% | +3.3 |
| Output Capacity | 20 | 10% | +2.0 |
| Pricing Tier | 0 | 25% | +0.0 |
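The aggregation the table implies can be sketched as follows: each signal's impact is its strength scaled by its weight, and the overall score is the sum. Strengths and weights below are copied from the table; the displayed per-signal impacts appear to be rounded from unrounded strengths (e.g. 72 × 15% is 10.8, shown as +10.7), so treat this as an illustration of the formula rather than an exact reproduction.

```python
# Weighted-score aggregation implied by the signal table:
# impact = strength * weight, overall score = sum of impacts.
signals = {
    "Recency":         (100, 0.15),
    "Context Window":  (72,  0.15),
    "Capabilities":    (14,  0.25),
    "Versatility":     (33,  0.10),
    "Output Capacity": (20,  0.10),
    "Pricing Tier":    (0,   0.25),
}

impacts = {name: strength * weight for name, (strength, weight) in signals.items()}
total = sum(impacts.values())
```

Note that the weights sum to 100%, so a model scoring 100 on every signal would reach an overall score of 100.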