by Baidu
A text-based Mixture-of-Experts (MoE) model with 21B total parameters, of which 3B are activated per token. Its heterogeneous MoE structure and modality-isolated routing, designed for strong multimodal understanding and generation, use specialized routing and balancing losses so that experts neither collapse onto a single modality nor overload a few hot experts. The model supports a 131K-token context window and achieves efficient inference through multi-expert parallel collaboration and quantization, while post-training techniques including SFT (supervised fine-tuning), DPO (direct preference optimization), and UPO (unified preference optimization) tune performance across diverse applications.
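The routing mechanics described above follow the standard top-k MoE pattern. The sketch below is a minimal illustration, not this model's actual implementation: a learned gate scores experts per token, the top-k gate weights are renormalized, an auxiliary loss (here in the Switch Transformer style) penalizes uneven expert load, and modality-isolated routing is approximated by giving each modality its own router and expert pool. All names, dimensions, and hyperparameters are assumptions.

```python
# Minimal sketch: top-k gating with an auxiliary load-balancing loss.
# Illustrative only -- names, dimensions, and the loss form (Switch
# Transformer style) are assumptions, not this model's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    def __init__(self, d_model: int, n_experts: int, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.n_experts = n_experts
        self.k = k

    def forward(self, x: torch.Tensor):
        # x: (tokens, d_model) -> per-token scores over experts
        logits = self.gate(x)                        # (tokens, n_experts)
        probs = F.softmax(logits, dim=-1)
        topk_p, topk_i = probs.topk(self.k, dim=-1)  # weights + expert ids
        topk_p = topk_p / topk_p.sum(-1, keepdim=True)  # renormalize top-k

        # Balancing loss: fraction of tokens whose top-1 expert is i (f_i)
        # times the mean gate probability for expert i (P_i), summed and
        # scaled by n_experts; minimized when expert load is uniform.
        with torch.no_grad():
            f = F.one_hot(topk_i[:, 0], self.n_experts).float().mean(0)
        P = probs.mean(0)
        balance_loss = self.n_experts * (f * P).sum()
        return topk_p, topk_i, balance_loss

# Modality-isolated routing, approximated here as one router (and expert
# pool) per modality, so text and vision tokens never share a gate.
routers = {"text": TopKRouter(1024, n_experts=64),
           "vision": TopKRouter(1024, n_experts=64)}
weights, expert_ids, aux_loss = routers["text"](torch.randn(8, 1024))
```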
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Recency | 96 | 15% | +14.4 |
| Context Window | 81 | 15% | +12.1 |
| Capabilities | 29 | 25% | +7.1 |
| Output Capacity | 65 | 10% | +6.5 |
| Versatility | 33 | 10% | +3.3 |
| Pricing Tier | 0 | 25% | +0.1 |
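The Impact column looks like Strength × Weight; the short check below recomputes it under that assumption. Two rows (Capabilities and Pricing Tier) come out slightly off (+7.2 and +0.0 vs. the listed +7.1 and +0.1), so the scoring presumably applies rounding or a small per-signal adjustment not shown in the table.

```python
# Recompute Impact as Strength * Weight (an assumption inferred from the
# table; Capabilities and Pricing Tier differ slightly from listed values).
signals = {"Recency": (96, 0.15), "Context Window": (81, 0.15),
           "Capabilities": (29, 0.25), "Output Capacity": (65, 0.10),
           "Versatility": (33, 0.10), "Pricing Tier": (0, 0.25)}
for name, (strength, weight) in signals.items():
    print(f"{name}: {strength * weight:+.1f}")
```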