
Try Perplexity Sonar in the Workbench

Run this model interactively, tune parameters, and compare outputs.
Model ID: sonar

Perplexity Sonar is an LLM built on top of Llama 3.3 70B and further trained in-house by Perplexity to optimize answer quality, factuality, and readability for search-augmented tasks. It excels at delivering fast, accurate answers grounded in real-time web data with detailed citations, making it especially effective for research, fact-checking, and retrieving up-to-date information. Other noteworthy features include concise responses with source attribution at high speed (up to 1,200 tokens per second), seamless integration with Perplexity's search engine, and a 128,000-token context window suitable for complex queries.
Metric             | Value
Parameter Count    | 70 billion
Mixture of Experts | No
Context Length     | 128,000 tokens
Multilingual       | Yes
Quantized*         | No
*Quantization is specific to the inference provider; other providers may offer this model at different quantization levels.

Example request

Use the Workbench as a request builder: configure parameters for this model in the UI, then open the API tab to copy the exact cURL or Python call.
curl -X POST https://hub.oxen.ai/api/ai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OXEN_API_KEY" \
  -d '{
  "model": "sonar",
  "messages": [
    {
      "role": "user",
      "content": "Hello, what can you do?"
    }
  ]
}'
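The same request can be issued from Python. The sketch below uses only the standard library and mirrors the cURL call above; the endpoint URL, model ID, and `OXEN_API_KEY` variable come from this page, while the helper names are illustrative.

```python
import json
import os
import urllib.request

API_URL = "https://hub.oxen.ai/api/ai/chat/completions"

def build_request(prompt: str, model: str = "sonar") -> dict:
    """Build the chat completions request body shown in the cURL example."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str) -> dict:
    """POST the request; expects OXEN_API_KEY in the environment."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OXEN_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```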

Fetch model details

The models endpoint returns the full model object, including its json_request_schema.
curl -H "Authorization: Bearer $OXEN_API_KEY" https://hub.oxen.ai/api/ai/models/sonar
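In Python, the same lookup might look like the sketch below. The `supported_parameters` helper assumes `json_request_schema` is a JSON-Schema-style object with a top-level `properties` map; this page does not document the schema's exact shape, so treat that as an assumption.

```python
import json
import os
import urllib.request

MODELS_URL = "https://hub.oxen.ai/api/ai/models/{model_id}"

def fetch_model(model_id: str = "sonar") -> dict:
    """GET the full model object, including its json_request_schema."""
    req = urllib.request.Request(
        MODELS_URL.format(model_id=model_id),
        headers={"Authorization": f"Bearer {os.environ['OXEN_API_KEY']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def supported_parameters(model_obj: dict) -> list:
    """List top-level parameter names from the model's request schema.

    Assumes a JSON-Schema-like {'properties': {...}} layout.
    """
    schema = model_obj.get("json_request_schema", {})
    return sorted(schema.get("properties", {}).keys())
```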

Request parameters

This model follows the standard OpenAI chat completions request body. See the chat completions reference for the full parameter list.
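Since the body is OpenAI-style, the usual optional parameters (for example `temperature` and `max_tokens`, as defined in the OpenAI chat completions spec; consult the linked reference for what this endpoint actually accepts) can be merged into the required fields. A minimal sketch, with illustrative helper and parameter names:

```python
def build_body(model: str, messages: list, **options) -> dict:
    """Compose a chat completions body: required fields plus any standard
    optional parameters, dropping options left unset (None)."""
    body = {"model": model, "messages": messages}
    body.update({k: v for k, v in options.items() if v is not None})
    return body

body = build_body(
    "sonar",
    [{"role": "user", "content": "Hello, what can you do?"}],
    temperature=0.2,  # standard OpenAI sampling parameter (assumed supported)
    max_tokens=512,   # standard cap on completion length (assumed supported)
    stream=None,      # unset options are omitted from the body
)
```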