Gemma 3n E2B IT is a multimodal, instruction-tuned model developed by Google DeepMind, designed to operate efficiently at an effective parameter size of 2B while leveraging a 6B architecture. Based on the MatFormer architecture, it supports nested submodels and modular composition via the Mix-and-Match framework. Gemma 3n models are optimized for low-resource deployment, offering 32K context length and strong multilingual and reasoning performance across common benchmarks. This variant is trained on a diverse corpus including code, math, web, and multimodal data.
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Recency | 86 | 15% | +12.9 |
| Capabilities | 33 | 30% | +10.0 |
| Context Window | 62 | 15% | +9.3 |
| Output Capacity | 55 | 15% | +8.3 |
| Pricing | 30 | 25% | +7.5 |
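The impact column appears to follow a simple weighted-scoring rule: each signal's impact is its strength multiplied by its weight, and the composite score is the sum of those impacts. The sketch below assumes that rule (it is not documented on the page); small mismatches such as the Capabilities row (+10.0 vs. 33 × 30% = 9.9) suggest the displayed strengths are themselves rounded.

```python
# Sketch of the weighted scoring the table above implies.
# Assumption (undocumented): Impact = Strength x Weight, and the
# composite score is the sum of the per-signal impacts.
signals = {
    "Recency":         (86, 0.15),
    "Capabilities":    (33, 0.30),
    "Context Window":  (62, 0.15),
    "Output Capacity": (55, 0.15),
    "Pricing":         (30, 0.25),
}

def impact(strength: float, weight: float) -> float:
    """One signal's contribution to the composite score."""
    return strength * weight

total = 0.0
for name, (s, w) in signals.items():
    total += impact(s, w)
    print(f"{name:<16} +{impact(s, w):.1f}")
print(f"Composite score: {total:.1f}")
```

Note that the weights sum to 100%, so a model scoring 100 on every signal would have a composite score of 100.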
Community and practitioner feedback adds real-world signal on top of benchmarks and pricing.
Share your experience with Gemma 3n 2B (free) and help the community make better decisions.