Try GPT 5 Nano in the Workbench

Run this model interactively, tune parameters, and compare outputs.
Model ID: gpt-5-nano

GPT 5 Nano is a multimodal LLM optimized for speed and efficiency, making it ideal for applications that require ultra-low latency or must operate under tight resource constraints. It excels at rapid summarization, classification, and other straightforward language tasks where cost and response time are critical, though its reasoning and coding abilities are more limited than those of the larger GPT-5 variants. Other noteworthy features include support for both text and image inputs, a minimal-reasoning mode for even faster responses, and real-time streaming output.
Parameter Count: Unknown
Mixture of Experts: Unknown
Context Length: 400,000 tokens
Multilingual: No
Quantized*: Yes
*Quantization is specific to the inference provider; the model may be offered at different quantization levels by other providers.

Example request

Use the Workbench as a request builder: configure parameters for this model in the UI, then open the API tab to copy the exact cURL or Python call.
curl -X POST https://hub.oxen.ai/api/ai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OXEN_API_KEY" \
  -d '{
  "model": "gpt-5-nano",
  "messages": [
    {
      "role": "user",
      "content": "Hello, what can you do?"
    }
  ]
}'
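
The same request can be made from Python. This is a minimal sketch of the cURL example above using the third-party `requests` package; the endpoint, model ID, and request body are taken from this page, while the helper names (`build_request`, `chat`) and the OpenAI-style response shape (`choices[0].message.content`) are assumptions.

```python
import os

# Endpoint and model ID from the cURL example on this page.
API_URL = "https://hub.oxen.ai/api/ai/chat/completions"


def build_request(prompt: str) -> dict:
    """Build the chat completions body shown in the cURL example."""
    return {
        "model": "gpt-5-nano",
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str) -> str:
    """POST the request and return the reply text.

    Assumes an OpenAI-style response shape; the import is deferred so the
    payload helper works even without `requests` installed.
    """
    import requests  # third-party: pip install requests

    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OXEN_API_KEY']}"},
        json=build_request(prompt),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Set `OXEN_API_KEY` in your environment, then call `chat("Hello, what can you do?")`.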

Fetch model details

The models endpoint returns the full model object, including its json_request_schema.
curl -H "Authorization: Bearer $OXEN_API_KEY" https://hub.oxen.ai/api/ai/models/gpt-5-nano
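
A Python equivalent of the lookup above; a sketch, not an official client. The URL and the `json_request_schema` field come from this page, while the function names and the assumption that the field sits at the top level of the returned object are mine.

```python
import os

# Model details endpoint from the cURL example on this page.
MODEL_URL = "https://hub.oxen.ai/api/ai/models/gpt-5-nano"


def fetch_model(url: str = MODEL_URL) -> dict:
    """GET the full model object (import deferred; pip install requests)."""
    import requests

    resp = requests.get(
        url,
        headers={"Authorization": f"Bearer {os.environ['OXEN_API_KEY']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def request_schema(model: dict) -> dict:
    """Pull out json_request_schema, assumed to be a top-level key."""
    return model["json_request_schema"]
```

Useful for discovering which optional parameters this deployment actually accepts before building requests.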

Request parameters

This model follows the standard OpenAI chat completions request body. See the chat completions reference for the full parameter list.
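
As a sketch of what a fuller body might look like: `stream` and `reasoning_effort` below correspond to the streaming and minimal-reasoning features described above, but whether this deployment accepts these exact parameter names is an assumption; confirm against the model's `json_request_schema`.

```python
# Hypothetical request body with optional parameters; field support is an
# assumption to verify against json_request_schema for this model.
payload = {
    "model": "gpt-5-nano",
    "messages": [
        {"role": "system", "content": "Reply in one sentence."},
        {"role": "user", "content": "Summarize what you can do."},
    ],
    "reasoning_effort": "minimal",  # assumed name for the minimal-reasoning mode
    "stream": True,                 # assumed flag for token-by-token streaming
}
```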