Try OpenAI/GPT-OSS-120B in the Workbench
Run this model interactively, tune parameters, and compare outputs.
gpt-oss-120b
OpenAI GPT OSS 120B is an LLM built with a Mixture-of-Experts (MoE) architecture and designed for efficient, large-scale reasoning, coding, and agentic tasks. It handles long contexts of up to 128,000 tokens while remaining resource-efficient: only a small subset of its total parameters is activated per token, making it practical for both research and production environments.
Other noteworthy features of OpenAI GPT OSS 120B include its suitability for local and private deployments and its strong performance on tool-use and code-generation tasks.
| Metric | Value |
|---|---|
| Parameter Count | 117 billion |
| Mixture of Experts | Yes |
| Active Parameter Count | 5.1 billion |
| Context Length | 128,000 tokens |
| Multilingual | Yes |
| Quantized* | Yes |
| Precision* | MXFP4 |
Example request
- Minimal (see the sketch after this list)
- Basic parameters
- All parameters
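A minimal sketch of the first variant, assuming an OpenAI-compatible chat-completions endpoint; the base URL, the API key placeholder, and the response shape are assumptions for illustration, not this provider's confirmed API. Consult the `json_request_schema` (see below) for the authoritative parameter list.

```python
import requests

# Assumed base URL and API key placeholder; substitute your provider's actual values.
BASE_URL = "https://api.example.com/v1"
API_KEY = "YOUR_API_KEY"

# Minimal request: only the model ID and a message list.
payload = {
    "model": "openai/gpt-oss-120b",
    "messages": [
        {"role": "user", "content": "Explain mixture-of-experts in one sentence."}
    ],
    # Basic parameters (optional): uncomment to tune sampling behavior.
    # "temperature": 0.7,
    # "max_tokens": 512,
}

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Assumes an OpenAI-style response body with a "choices" array.
print(response.json()["choices"][0]["message"]["content"])
```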
Fetch model details
The models endpoint returns the full model object, including its `json_request_schema`.
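A sketch of fetching the model object, assuming a REST-style models endpoint at `/models/{model_id}`; the exact path and response fields are assumptions based on the description above.

```python
import requests

# Assumed base URL and API key placeholder; substitute your provider's actual values.
BASE_URL = "https://api.example.com/v1"
API_KEY = "YOUR_API_KEY"

# Fetch the full model object for gpt-oss-120b.
response = requests.get(
    f"{BASE_URL}/models/openai/gpt-oss-120b",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
model = response.json()

# The json_request_schema describes the request parameters the model accepts.
print(model.get("json_request_schema"))
```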