
Try OpenAI/GPT-OSS-120B in the Workbench

Run this model interactively, tune parameters, and compare outputs.
Model ID: gpt-oss-120b

OpenAI GPT OSS 120B is an LLM built on a Mixture-of-Experts (MoE) architecture and designed for efficient, large-scale reasoning, coding, and agentic tasks. It handles long contexts of up to 128,000 tokens while staying resource-efficient by activating only a small subset of its total parameters per token, making it practical for both research and production environments. Other noteworthy features include its suitability for local and private deployments and its strong performance on tool-use and code-generation tasks.
Metric                  Value
Parameter Count         117 billion
Mixture of Experts      Yes
Active Parameter Count  5.1 billion
Context Length          128,000 tokens
Multilingual            Yes
Quantized*              Yes
Precision*              MXFP4
*Quantization is specific to the inference provider and the model may be offered with different quantization levels by other providers.
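As a back-of-the-envelope illustration of the sparsity described above, the table's figures imply that only a few percent of the model's parameters are active for any given token (a rough check from the numbers on this page, not provider documentation):

```python
# Rough check of the MoE sparsity implied by the table above.
total_params = 117e9    # Parameter Count: 117 billion
active_params = 5.1e9   # Active Parameter Count: 5.1 billion

fraction_active = active_params / total_params
print(f"{fraction_active:.1%} of parameters are active per token")  # ≈ 4.4%
```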

Example request

Use the Workbench as a request builder: configure parameters for this model in the UI, then open the API tab to copy the exact cURL or Python call.
curl -X POST https://hub.oxen.ai/api/ai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OXEN_API_KEY" \
  -d '{
  "model": "gpt-oss-120b",
  "messages": [
    {
      "role": "user",
      "content": "Hello, what can you do?"
    }
  ]
}'
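The same request can be made from Python. The sketch below uses only the standard library and mirrors the cURL call above; the helper names (`build_payload`, `chat`) are ours for illustration, not part of any official SDK:

```python
import json
import os
import urllib.request

API_URL = "https://hub.oxen.ai/api/ai/chat/completions"

def build_payload(prompt: str, model: str = "gpt-oss-120b") -> dict:
    """Assemble the chat completions request body shown in the cURL example."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OXEN_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `chat("Hello, what can you do?")` sends the same JSON body as the cURL example and extracts the reply from the standard OpenAI-style response shape.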

Fetch model details

The models endpoint returns the full model object, including its json_request_schema.
curl -H "Authorization: Bearer $OXEN_API_KEY" https://hub.oxen.ai/api/ai/models/gpt-oss-120b
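In Python, the same lookup might look like the sketch below; `build_model_request` and `fetch_model` are hypothetical helpers, and the `json_request_schema` field name comes from the description above:

```python
import json
import os
import urllib.request

def build_model_request(model_id: str) -> urllib.request.Request:
    """Build an authenticated GET request for the models endpoint."""
    return urllib.request.Request(
        f"https://hub.oxen.ai/api/ai/models/{model_id}",
        headers={"Authorization": f"Bearer {os.environ.get('OXEN_API_KEY', '')}"},
    )

def fetch_model(model_id: str = "gpt-oss-120b") -> dict:
    """Fetch the full model object, including its json_request_schema."""
    with urllib.request.urlopen(build_model_request(model_id)) as resp:
        return json.load(resp)

# Example (requires a valid OXEN_API_KEY):
# schema = fetch_model()["json_request_schema"]
# print(json.dumps(schema, indent=2))
```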

Request parameters

This model follows the standard OpenAI chat completions request body. See the chat completions reference for the full parameter list.
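For example, a request body using a few of the standard chat completions parameters might look like the sketch below; the parameter names follow the OpenAI chat completions schema, and the values are illustrative:

```python
# Illustrative request body; `temperature`, `max_tokens`, and `stream` are
# standard OpenAI chat completions fields, assumed to be supported here.
payload = {
    "model": "gpt-oss-120b",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain Mixture-of-Experts in one sentence."},
    ],
    "temperature": 0.7,  # sampling randomness; lower is more deterministic
    "max_tokens": 256,   # upper bound on generated tokens
    "stream": False,     # set True to receive tokens incrementally
}
```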