
Try GPT 4.1 Nano in the Workbench

Run this model interactively, tune parameters, and compare outputs.
Model ID: gpt-4-1-nano-2025-04-14

GPT 4.1 Nano is an LLM designed for low-latency tasks such as classification and autocompletion. It delivers fast responses at minimal cost while maintaining impressive capability, including the full 1 million token context window despite its lightweight design. Other noteworthy use cases include high-volume operations, content tagging, and powering real-time AI agents where speed and efficiency are critical.
Metric               Value
Parameter Count      Unknown
Mixture of Experts   Unknown
Context Length       1,047,576 tokens
Multilingual         Yes
Quantized*           Unknown
*Quantization is specific to the inference provider and the model may be offered with different quantization levels by other providers.

Example request

Use the Workbench as a request builder: configure parameters for this model in the UI, then open the API tab to copy the exact cURL or Python call.
curl -X POST https://hub.oxen.ai/api/ai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OXEN_API_KEY" \
  -d '{
  "model": "gpt-4-1-nano-2025-04-14",
  "messages": [
    {
      "role": "user",
      "content": "Hello, what can you do?"
    }
  ]
}'
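The same request can be made from Python. This is a minimal sketch using the `requests` library, assuming `OXEN_API_KEY` is set in your environment; the `build_request` helper is illustrative, not part of any SDK.

```python
import os
import requests

API_URL = "https://hub.oxen.ai/api/ai/chat/completions"

def build_request(prompt: str) -> tuple[dict, dict]:
    """Build the headers and JSON body for a chat completion request."""
    headers = {
        "Content-Type": "application/json",
        # OXEN_API_KEY is assumed to be set in the environment.
        "Authorization": f"Bearer {os.environ.get('OXEN_API_KEY', '')}",
    }
    payload = {
        "model": "gpt-4-1-nano-2025-04-14",
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

headers, payload = build_request("Hello, what can you do?")
# Uncomment to actually send the request:
# response = requests.post(API_URL, headers=headers, json=payload)
# print(response.json())
```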

Fetch model details

The models endpoint returns the full model object, including its json_request_schema.
curl -H "Authorization: Bearer $OXEN_API_KEY" https://hub.oxen.ai/api/ai/models/gpt-4-1-nano-2025-04-14
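In Python, the same lookup might look like the sketch below; aside from json_request_schema, which the endpoint documents above, the response handling here is an assumption.

```python
import os
import requests

MODEL_URL = "https://hub.oxen.ai/api/ai/models/gpt-4-1-nano-2025-04-14"

def model_details_request() -> tuple[str, dict]:
    """Build the URL and headers for fetching the model object."""
    headers = {
        # OXEN_API_KEY is assumed to be set in the environment.
        "Authorization": f"Bearer {os.environ.get('OXEN_API_KEY', '')}",
    }
    return MODEL_URL, headers

url, headers = model_details_request()
# Uncomment to actually fetch the model object:
# model = requests.get(url, headers=headers).json()
# print(model.get("json_request_schema"))
```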

Request parameters

This model follows the standard OpenAI chat completions request body. See the chat completions reference for the full parameter list.
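Because the model accepts the standard OpenAI chat completions body, common parameters like temperature, max_tokens, and stream can be added directly to the JSON payload. The sketch below shows one such body for a low-latency classification call; the specific values are illustrative, not recommendations.

```python
# Illustrative request body using standard OpenAI chat-completions parameters.
payload = {
    "model": "gpt-4-1-nano-2025-04-14",
    "messages": [
        {"role": "system", "content": "You classify support tickets."},
        {"role": "user", "content": "My invoice total is wrong."},
    ],
    "temperature": 0.0,  # deterministic output suits classification
    "max_tokens": 16,    # short label keeps latency and cost low
    "stream": False,     # set True to stream tokens as they arrive
}
```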