ERNIE 4.5 21B A3B is a text-oriented Mixture-of-Experts (MoE) model with 21B total parameters, of which about 3B are activated per token. It inherits the ERNIE 4.5 family's heterogeneous MoE structure and modality-isolated routing, developed for multimodal understanding and generation, and supports a 131K-token context window with efficient inference through multi-expert parallel collaboration and quantization. Post-training with SFT, DPO, and UPO, together with specialized routing and balancing losses, tunes the model for strong performance across diverse tasks.
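The modality-isolated routing mentioned above can be illustrated with a minimal sketch: each token is routed only within the expert pool assigned to its modality, so text and vision tokens never compete for the same experts. The class, layer sizes, and expert counts below are illustrative assumptions, not the actual ERNIE 4.5 implementation.

```python
# Illustrative sketch of modality-isolated MoE routing (hypothetical layer,
# not the ERNIE 4.5 code). Tokens are routed top-k, but only within the
# expert pool reserved for their modality.
import torch
import torch.nn as nn


class ModalityIsolatedMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=128, n_experts_per_modality=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Separate expert pools per modality (sizes chosen arbitrarily here).
        self.pools = nn.ModuleDict({
            modality: nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
                for _ in range(n_experts_per_modality)
            ])
            for modality in ("text", "vision")
        })
        # One router per modality, so routing decisions never cross modalities.
        self.routers = nn.ModuleDict({
            modality: nn.Linear(d_model, n_experts_per_modality)
            for modality in ("text", "vision")
        })

    def forward(self, x, modality):
        # x: (n_tokens, d_model); modality: "text" or "vision"
        probs = self.routers[modality](x).softmax(dim=-1)          # (n_tokens, n_experts)
        weights, idx = torch.topk(probs, self.top_k, dim=-1)       # top-k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.pools[modality]):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = ModalityIsolatedMoE()
    text_tokens = torch.randn(10, 64)
    print(layer(text_tokens, "text").shape)  # torch.Size([10, 64])
```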
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Recency | 92 | 15% | +13.9 |
| Context Window | 81 | 15% | +12.1 |
| Capabilities | 33 | 30% | +10.0 |
| Output Capacity | 65 | 15% | +9.8 |
| Pricing | 0 | 25% | +0.1 |
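A minimal sketch of how these per-signal numbers appear to combine, assuming each impact is simply strength × weight (the small differences from the table are likely rounding on the page); the signal names and values come from the table above, and the script itself is hypothetical:

```python
# Hedged sketch: recompute per-signal impact and the overall weighted score,
# assuming impact = strength * weight. Values are copied from the table above.
signals = {
    "Recency":         (92, 0.15),
    "Context Window":  (81, 0.15),
    "Capabilities":    (33, 0.30),
    "Output Capacity": (65, 0.15),
    "Pricing":         (0,  0.25),
}

total = 0.0
for name, (strength, weight) in signals.items():
    impact = strength * weight
    total += impact
    print(f"{name:16s} impact ~ {impact:+5.1f}")

print(f"Overall weighted score ~ {total:.1f}")
```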
Community and practitioner feedback adds real-world signal on top of benchmarks and pricing.
Cost Estimator: ERNIE 4.5 21B A3B saves an estimated $40.64/month versus the category average.