Try Ministral 3B in the Workbench

Run this model interactively, tune parameters, and compare outputs.
Model ID: ministral-3b-latest

Ministral 3B is a 3B-parameter LLM optimized for on-device and edge computing. It excels at knowledge, commonsense reasoning, and function calling, outperforming larger models such as Mistral 7B on most benchmarks. With support for up to a 128k-token context length, it is well suited to orchestrating agentic workflows and specialist tasks with efficient inference.
Metric                Value
Parameter Count       3 billion
Mixture of Experts    No
Context Length        128,000 tokens
Multilingual          Yes
Quantized*            Unknown
*Quantization is specific to the inference provider and the model may be offered with different quantization levels by other providers.

Example request

Use the Workbench as a request builder: configure parameters for this model in the UI, then open the API tab to copy the exact cURL or Python call.
curl -X POST https://hub.oxen.ai/api/ai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OXEN_API_KEY" \
  -d '{
  "model": "ministral-3b-latest",
  "messages": [
    {
      "role": "user",
      "content": "Hello, what can you do?"
    }
  ]
}'
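The same request can be made from Python. This is a minimal sketch using only the standard library (no client SDK is assumed); the endpoint, model ID, and body shape are taken from the cURL example above, and the helper name `build_chat_request` is illustrative:

```python
import json
import os
import urllib.request

API_URL = "https://hub.oxen.ai/api/ai/chat/completions"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build the same JSON body the cURL example sends."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("ministral-3b-latest", "Hello, what can you do?")

# Only send the request when an API key is configured in the environment.
api_key = os.environ.get("OXEN_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```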

Fetch model details

The models endpoint returns the full model object, including its json_request_schema.
curl -H "Authorization: Bearer $OXEN_API_KEY" https://hub.oxen.ai/api/ai/models/ministral-3b-latest
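A Python version of the same lookup, sketched with the standard library; the URL is the one shown above, and the `json_request_schema` field is the one the endpoint is documented to return:

```python
import json
import os
import urllib.request

MODELS_URL = "https://hub.oxen.ai/api/ai/models/ministral-3b-latest"

# Only call the endpoint when an API key is configured.
api_key = os.environ.get("OXEN_API_KEY")
if api_key:
    req = urllib.request.Request(
        MODELS_URL,
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        model = json.load(resp)
    # The model object includes its json_request_schema.
    print(model.get("json_request_schema"))
```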

Request parameters

This model follows the standard OpenAI chat completions request body. See the chat completions reference for the full parameter list.
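As a sketch of what an OpenAI-style request body can carry beyond the minimal example, the payload below adds a few common optional parameters (`temperature`, `max_tokens`, `top_p`). These are standard OpenAI chat-completions fields, not parameters confirmed by this page; check the chat completions reference for the supported list:

```python
# OpenAI-compatible chat completion body with common optional parameters.
# The specific parameter set supported here is an assumption; consult the
# chat completions reference for what this endpoint actually accepts.
payload = {
    "model": "ministral-3b-latest",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what edge LLMs are good for."},
    ],
    "temperature": 0.7,   # sampling randomness
    "max_tokens": 256,    # cap on generated tokens
    "top_p": 0.9,         # nucleus sampling threshold
}
```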