openai-gpt-oss-20b
OpenAI GPT OSS 20B is an open-weight LLM designed for efficient reasoning, agentic tasks, and flexible deployment, including on consumer hardware and local environments.
It delivers high-quality reasoning and chain-of-thought outputs within modest hardware constraints, thanks to a Mixture-of-Experts architecture that activates only 3.6 billion parameters per token and native MXFP4 (4-bit) quantization. The model supports a context length of up to 131,072 tokens, is fully fine-tunable, and natively supports agentic capabilities such as function calling, web browsing, Python execution, and structured output. Its multilingual capabilities have also been demonstrated in professional evaluations across 14 languages.
Some other noteworthy features of OpenAI GPT OSS 20B include configurable reasoning effort (allowing users to balance latency and output quality) and full chain-of-thought visibility for enhanced debugging and trust.
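On OpenAI-compatible servers, the reasoning-effort setting is typically exposed as a request parameter. The parameter name `reasoning_effort` and the accepted values below are assumptions based on common conventions, not confirmed by this page; a minimal sketch of building such a request payload:

```python
import json

def build_chat_request(prompt: str, effort: str = "medium") -> dict:
    """Build an OpenAI-compatible chat payload.

    `reasoning_effort` (low/medium/high) is an assumed parameter name;
    check your host's API reference for the exact spelling.
    """
    if effort not in ("low", "medium", "high"):
        raise ValueError("effort must be 'low', 'medium', or 'high'")
    return {
        "model": "openai-gpt-oss-20b",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
    }

# Lower effort trades output quality for latency; higher effort the reverse.
payload = build_chat_request("Summarize MoE routing in two sentences.", effort="low")
print(json.dumps(payload, indent=2))
```

Because the chain of thought is fully visible, the response can be inspected to verify how much reasoning a given effort level actually produced.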
| Metric | Value |
|---|---|
| Parameter Count | 21 billion |
| Mixture of Experts | Yes |
| Active Parameter Count | 3.6 billion |
| Context Length | 131,072 tokens |
| Multilingual | Yes |
| Quantized* | Yes |
| Precision* | MXFP4 |

*The native MXFP4 quantization applies to the Mixture-of-Experts weights.
Example request
- Minimal
- Basic parameters
- All parameters
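The tabs above correspond to request bodies of increasing detail. As a hedged sketch (the endpoint URL is a placeholder, and exact parameter support varies by host), a minimal request versus one with basic sampling parameters might look like:

```python
import json
import urllib.request

# Placeholder endpoint; substitute your provider's base URL.
API_URL = "https://api.example.com/v1/chat/completions"

# Minimal: only the required fields.
minimal = {
    "model": "openai-gpt-oss-20b",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Basic parameters: common sampling controls, following the
# OpenAI-compatible convention (availability may vary by host).
basic = dict(minimal, temperature=0.7, top_p=0.9, max_tokens=256)

def send(payload: dict, api_key: str) -> dict:
    """POST the payload to the chat completions endpoint."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(json.dumps(basic, indent=2))
```

The "All parameters" variant would extend `basic` with whatever additional fields the model's request schema declares.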
Fetch model details
The models endpoint returns the full model object, including its `json_request_schema`.
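A sketch of fetching the model object from a `/v1/models/{id}`-style endpoint (the base URL is a placeholder; the `json_request_schema` field name comes from the text above, and the stub response shape is illustrative only):

```python
import json
import urllib.request

# Placeholder base URL; substitute your provider's models endpoint.
MODELS_URL = "https://api.example.com/v1/models"

def fetch_model(model_id: str, api_key: str) -> dict:
    """GET the full model object, including its json_request_schema."""
    req = urllib.request.Request(
        f"{MODELS_URL}/{model_id}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Illustrative stub of the expected shape (not real API output): the
# json_request_schema describes which request parameters the model accepts.
stub = {
    "id": "openai-gpt-oss-20b",
    "json_request_schema": {"type": "object", "properties": {}},
}
print(json.dumps(stub["json_request_schema"], indent=2))
```

Clients can use the returned schema to validate request bodies before sending them.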