Try Gemini 3 Flash in the Workbench

Run this model interactively, tune parameters, and compare outputs.
Model ID: gemini-3-flash-preview

gemini-3-flash-preview is a multimodal LLM. It excels in agentic workflows, multi-turn chat, coding assistance, and interactive tasks thanks to its lower latency and near-Pro reasoning compared to larger Gemini variants. Other noteworthy features include configurable thinking levels (minimal, low, medium, high), structured output, tool use, automatic context caching, and support for multimodal inputs: text, images, audio, video, and PDFs.
Metric              Value
Parameter Count     Unknown
Mixture of Experts  Unknown
Context Length      1M tokens
Multilingual        Yes
Quantized*          Unknown
*Quantization is specific to the inference provider and the model may be offered with different quantization levels by other providers.

Example request

Use the Workbench as a request builder: configure parameters for this model in the UI, then open the API tab to copy the exact cURL or Python call.
curl -X POST https://hub.oxen.ai/api/ai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OXEN_API_KEY" \
  -d '{
  "model": "gemini-3-flash-preview",
  "messages": [
    {
      "role": "user",
      "content": "Hello, what can you do?"
    }
  ]
}'
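The same request in Python, using only the standard library. This is a sketch assuming the endpoint is OpenAI-compatible as the cURL example suggests; the response parsing (`choices[0].message.content`) follows the standard chat completions response shape, which is an assumption about this API.

```python
import json
import os
import urllib.request

# Build the same request body as the cURL example above.
payload = {
    "model": "gemini-3-flash-preview",
    "messages": [
        {"role": "user", "content": "Hello, what can you do?"}
    ],
}

api_key = os.environ.get("OXEN_API_KEY")
if api_key:  # only send the request when an API key is configured
    req = urllib.request.Request(
        "https://hub.oxen.ai/api/ai/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
        # OpenAI-style responses place the text at choices[0].message.content
        # (an assumption here, based on the standard chat completions schema).
        print(reply["choices"][0]["message"]["content"])
```

Set OXEN_API_KEY in your environment before running; without it, the script builds the payload but sends nothing.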

Fetch model details

The models endpoint returns the full model object, including its json_request_schema.
curl -H "Authorization: Bearer $OXEN_API_KEY" https://hub.oxen.ai/api/ai/models/gemini-3-flash-preview
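A Python version of the same lookup. The field name json_request_schema comes from the text above; treating the response as a JSON object with that key is the only assumption here.

```python
import json
import os
import urllib.request

model_id = "gemini-3-flash-preview"
url = f"https://hub.oxen.ai/api/ai/models/{model_id}"

api_key = os.environ.get("OXEN_API_KEY")
if api_key:  # only call the endpoint when an API key is configured
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_key}"}
    )
    with urllib.request.urlopen(req) as resp:
        model = json.load(resp)
        # Print the schema describing which request parameters the model accepts.
        print(json.dumps(model.get("json_request_schema"), indent=2))
```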

Request parameters

This model follows the standard OpenAI chat completions request body. See the chat completions reference for the full parameter list.
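As a sketch of what that standard body can look like, the snippet below adds a few common OpenAI-style fields (temperature, max_tokens, stream) to the basic request. These are typical chat completions parameters, not a guarantee for this model; check the json_request_schema returned by the models endpoint for the authoritative list.

```python
import json

# A chat completions body with a few standard OpenAI-style parameters.
# Whether each field is honored is defined by the model's json_request_schema;
# temperature, max_tokens, and stream are common examples, not a guarantee.
body = {
    "model": "gemini-3-flash-preview",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a context window is."},
    ],
    "temperature": 0.2,   # lower values make output more deterministic
    "max_tokens": 256,    # cap on the number of generated tokens
    "stream": False,      # set True to receive incremental chunks
}

print(json.dumps(body, indent=2))
```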