Try Gemini 2.5 Flash in the Workbench

Run this model interactively, tune parameters, and compare outputs.
Model ID: gemini-2-5-flash

Gemini 2.5 Flash is a multimodal LLM designed for fast, cost-effective reasoning across text, images, audio, and video. It excels at low-latency, high-volume tasks that still require strong reasoning, making it a good fit for general-purpose applications where speed and versatility matter. Its main strengths are an exceptionally long context window (up to 1 million tokens), native support for multiple modalities, and robust multilingual capabilities. Other noteworthy features include deep domain knowledge in science, mathematics, and code, support for agentic use cases, and efficient large-scale processing.
Metric                Value
Parameter Count       Unknown
Mixture of Experts    Unknown
Context Length        1,048,576 tokens
Multilingual          Yes
Quantized*            Unknown
*Quantization is specific to the inference provider and the model may be offered with different quantization levels by other providers.

Example request

Use the Workbench as a request builder: configure parameters for this model in the UI, then open the API tab to copy the exact cURL or Python call.
curl -X POST https://hub.oxen.ai/api/ai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OXEN_API_KEY" \
  -d '{
  "model": "gemini-2-5-flash",
  "messages": [
    {
      "role": "user",
      "content": "Hello, what can you do?"
    }
  ]
}'
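The same request can be issued from Python using only the standard library. This is a minimal sketch: the endpoint, model ID, and OpenAI-style body come from the example above, while the shape of the JSON response (an OpenAI-compatible chat completions object) is an assumption.

```python
import json
import os
import urllib.request

API_URL = "https://hub.oxen.ai/api/ai/chat/completions"


def build_payload(prompt: str, model: str = "gemini-2-5-flash") -> dict:
    """Assemble an OpenAI-style chat completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str) -> dict:
    """POST the payload; expects OXEN_API_KEY in the environment.

    The returned dict is assumed to follow the OpenAI chat
    completions response format.
    """
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OXEN_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```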

Fetch model details

The models endpoint returns the full model object, including its json_request_schema.
curl -H "Authorization: Bearer $OXEN_API_KEY" https://hub.oxen.ai/api/ai/models/gemini-2-5-flash
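The equivalent lookup in Python, again with only the standard library; the URL pattern mirrors the cURL call above, and the returned object is assumed to be plain JSON containing the json_request_schema field mentioned above.

```python
import json
import os
import urllib.request

MODELS_URL = "https://hub.oxen.ai/api/ai/models/{}"


def model_url(model_id: str) -> str:
    """Build the models-endpoint URL for a given model ID."""
    return MODELS_URL.format(model_id)


def fetch_model(model_id: str = "gemini-2-5-flash") -> dict:
    """Fetch the full model object; expects OXEN_API_KEY in the environment."""
    req = urllib.request.Request(
        model_url(model_id),
        headers={"Authorization": f"Bearer {os.environ['OXEN_API_KEY']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```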

Request parameters

This model follows the standard OpenAI chat completions request body. See the chat completions reference for the full parameter list.
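Because the request body is OpenAI-compatible, common optional parameters can be added alongside model and messages. The sketch below uses standard OpenAI chat completions fields (temperature, max_tokens, stream); which of them this particular model honors is an assumption — the model's json_request_schema is the authoritative list.

```python
import json

# An OpenAI-style chat completions body with common optional parameters.
# Support for each optional field is assumed, not confirmed; check the
# model's json_request_schema before relying on one.
body = {
    "model": "gemini-2-5-flash",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Hello, what can you do?"},
    ],
    "temperature": 0.7,  # sampling temperature (higher = more varied)
    "max_tokens": 256,   # cap on generated tokens
    "stream": False,     # set True to receive a streamed response
}

print(json.dumps(body, indent=2))
```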