by Google
Gemma 3n E2B IT is a multimodal, instruction-tuned model developed by Google DeepMind, designed to run efficiently with an effective parameter footprint of 2B while drawing on a larger 6B parameter set. Built on the MatFormer architecture, it supports nested submodels and modular composition via the Mix-and-Match framework. Gemma 3n models are optimized for low-resource deployment, offering a 32K-token context window and strong multilingual and reasoning performance on common benchmarks. This variant was trained on a diverse corpus spanning code, math, web text, and multimodal data.
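The nested-submodel idea behind MatFormer can be sketched with a toy feed-forward layer: the smaller model reuses a prefix of the larger model's hidden units, so one set of weights serves multiple widths. The dimensions, ReLU activation, and slicing scheme below are illustrative assumptions, not Gemma's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff_full, d_ff_small = 8, 32, 16  # toy sizes, not Gemma's real dims

# One shared FFN; a nested sub-model uses only the FIRST `width` hidden units.
W_in = rng.standard_normal((d_model, d_ff_full))
W_out = rng.standard_normal((d_ff_full, d_model))

def ffn(x, width):
    """Run the FFN at a chosen nested width (Mix-and-Match picks widths per layer)."""
    h = np.maximum(x @ W_in[:, :width], 0.0)  # ReLU over the first `width` units
    return h @ W_out[:width, :]

x = rng.standard_normal((1, d_model))
y_full = ffn(x, d_ff_full)    # full-width pass (larger effective model)
y_small = ffn(x, d_ff_small)  # nested half-width pass, same weights, fewer FLOPs
print(y_full.shape, y_small.shape)  # both (1, 8): identical interface either way
```

Because both widths share weights and produce outputs of the same shape, a deployment can pick a width per layer to trade quality for compute, which is the gist of Mix-and-Match composition.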
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Recency | 90 | 15% | +13.4 |
| Context Window | 62 | 15% | +9.3 |
| Pricing Tier | 30 | 25% | +7.5 |
| Capabilities | 29 | 25% | +7.1 |
| Output Capacity | 55 | 10% | +5.5 |
| Versatility | 33 | 10% | +3.3 |
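The impact column appears to be roughly strength × weight per signal. The snippet below reconstructs that calculation as an assumption; note that a couple of listed values (e.g. Recency's +13.4) differ slightly from the plain product, so the site's exact rounding or scaling is unknown.

```python
# Hypothetical reconstruction: impact ≈ strength × weight.
# The exact rounding used by the listed table is an assumption.
signals = {
    "Recency": (90, 0.15),
    "Context Window": (62, 0.15),
    "Pricing Tier": (30, 0.25),
    "Capabilities": (29, 0.25),
    "Output Capacity": (55, 0.10),
    "Versatility": (33, 0.10),
}

impacts = {name: round(s * w, 1) for name, (s, w) in signals.items()}
composite = round(sum(impacts.values()), 1)
print(impacts)
print("composite:", composite)
```

Four of the six rows match this formula exactly; the remaining small deviations suggest an extra scaling or rounding step in the original scorer.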
Pricing: Free