Try Nemotron 3 Super in the Workbench

Run this model interactively, tune parameters, and compare outputs.
Model ID: nvidia-nemotron-120b-a12b

nvidia/Nemotron-120B-A12B (Nemotron 3 Super) is a 120B-total-parameter model with 12B active parameters, built on a hybrid Mamba-Transformer mixture-of-experts (MoE) architecture. It delivers over 5x the throughput of the previous Nemotron Super and features a native 1M-token context window for long-term memory in multi-agent systems. The model excels at agentic reasoning, scoring 85.6% on PinchBench (best in its class), and is optimized for applications such as software development and cybersecurity triage.
Metric                  Value
Parameter Count         120 billion
Mixture of Experts      Yes
Active Parameter Count  12 billion
Context Length          1,000,000 tokens
Multilingual            Yes
Quantized*              Yes
Precision*              NVFP4
*Quantization is specific to the inference provider; other providers may offer this model at different quantization levels.

Example request

Use the Workbench as a request builder: configure parameters for this model in the UI, then open the API tab to copy the exact cURL or Python call.
curl -X POST https://hub.oxen.ai/api/ai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OXEN_API_KEY" \
  -d '{
  "model": "nvidia-nemotron-120b-a12b",
  "messages": [
    {
      "role": "user",
      "content": "Hello, what can you do?"
    }
  ]
}'
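The same request in Python, as a minimal sketch using only the standard library. This assumes the endpoint accepts an OpenAI-style chat completions body and returns the matching response shape, and that your key is in the OXEN_API_KEY environment variable:

```python
import json
import os
import urllib.request

API_URL = "https://hub.oxen.ai/api/ai/chat/completions"

# Same body as the cURL example above.
payload = {
    "model": "nvidia-nemotron-120b-a12b",
    "messages": [
        {"role": "user", "content": "Hello, what can you do?"},
    ],
}


def build_request() -> urllib.request.Request:
    """Assemble the POST request without sending it."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OXEN_API_KEY', '')}",
        },
        method="POST",
    )


if __name__ == "__main__":
    with urllib.request.urlopen(build_request(), timeout=60) as resp:
        body = json.load(resp)
    # Assuming an OpenAI-compatible response, the reply text lives here.
    print(body["choices"][0]["message"]["content"])
```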

Fetch model details

The models endpoint returns the full model object, including its json_request_schema.
curl -H "Authorization: Bearer $OXEN_API_KEY" https://hub.oxen.ai/api/ai/models/nvidia-nemotron-120b-a12b
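A Python sketch of the same lookup, under the same assumptions as above (standard library only; `model_details_request` is a hypothetical helper name, not part of any SDK):

```python
import json
import os
import urllib.request

MODEL_ID = "nvidia-nemotron-120b-a12b"


def model_details_request(model_id: str) -> urllib.request.Request:
    """Build a GET request for the models endpoint."""
    return urllib.request.Request(
        f"https://hub.oxen.ai/api/ai/models/{model_id}",
        headers={"Authorization": f"Bearer {os.environ.get('OXEN_API_KEY', '')}"},
    )


if __name__ == "__main__":
    with urllib.request.urlopen(model_details_request(MODEL_ID), timeout=30) as resp:
        model = json.load(resp)
    # Per the docs above, the model object includes its json_request_schema.
    print(json.dumps(model.get("json_request_schema"), indent=2))
```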

Request parameters

This model follows the standard OpenAI chat completions request body. See the chat completions reference for the full parameter list.
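As an illustration, a request body combining the fields above with a few standard OpenAI chat completions parameters (temperature, max_tokens, and stream are part of that standard body; the values here are arbitrary, and this model's accepted parameters are whatever its json_request_schema declares):

```json
{
  "model": "nvidia-nemotron-120b-a12b",
  "messages": [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize mixture-of-experts in one sentence."}
  ],
  "temperature": 0.7,
  "max_tokens": 512,
  "stream": false
}
```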