gemma-4-31b-it
Gemma 4 31B is the flagship dense multimodal model in the Gemma 4 family with 31 billion parameters. It supports text, image, and video inputs with a 256K token context window and over 140 languages.
As the largest and most capable Gemma 4 variant, it achieves an estimated LMArena score of 1452, comparable to much larger models. Gemma 4 introduces Per-Layer Embeddings (PLE) for richer token representations, shared KV cache for efficient long-context inference, and variable aspect ratio vision encoding.
| Metric | Value |
|---|---|
| Parameter Count | 31 billion |
| Mixture of Experts | No |
| Context Length | 256,000 tokens |
| Multilingual | Yes (140+ languages) |
| Quantized | No |
Example request
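A minimal request needs only the model ID and a single user message. The sketch below builds and sends such a request; the base URL, endpoint path, and bearer-token header are assumptions modeled on common OpenAI-compatible APIs, not values confirmed by this page, so check the provider's reference for the exact endpoint.

```python
import json
import urllib.request

# Hypothetical base URL -- replace with the provider's actual endpoint.
BASE_URL = "https://api.example.com/v1"


def build_minimal_request(prompt: str) -> dict:
    """Smallest valid request body: the model ID plus one user message."""
    return {
        "model": "gemma-4-31b-it",
        "messages": [{"role": "user", "content": prompt}],
    }


def send(payload: dict, api_key: str) -> dict:
    """POST the payload to the (assumed) chat-completions endpoint."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Optional parameters such as `temperature`, `top_p`, or `max_tokens` can be added as extra keys on the same payload; the model's `json_request_schema` (see below) lists which ones it accepts.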
Fetch model details
The models endpoint returns the full model object, including its `json_request_schema`.
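As a sketch, the model object can be retrieved with a single GET request. The base URL and `/models/{id}` path below are assumptions in the style of common REST model APIs, not values confirmed by this page.

```python
import json
import urllib.request

# Hypothetical base URL -- replace with the provider's actual endpoint.
BASE_URL = "https://api.example.com/v1"


def model_url(model_id: str) -> str:
    """Build the (assumed) per-model details URL."""
    return f"{BASE_URL}/models/{model_id}"


def fetch_model(model_id: str, api_key: str) -> dict:
    """Fetch the full model object, including its json_request_schema."""
    req = urllib.request.Request(
        model_url(model_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The returned object's `json_request_schema` field describes every parameter the model accepts, so a client can validate a request body before sending it.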