
Try Qwen3-VL-4B-Instruct in the Workbench

Run this model interactively, tune parameters, and compare outputs.
Model ID: qwen3-vl-4b-instruct

Qwen/Qwen3-VL-4B-Instruct is a multimodal LLM that processes both text and images, offering a relatively lightweight option for vision-language tasks while maintaining strong general language capabilities. It excels at visual question answering, document and UI understanding, spatial reasoning over images, and general instruction-following dialogue, making it suitable when you need a compact model that can both see and read. Other noteworthy use cases include image captioning and explanation, multimodal coding assistance from designs or screenshots, and agentic visual assistants that can reason about interfaces and complex scenes.
Metric             | Value
Parameter Count    | 4 billion
Mixture of Experts | No
Context Length     | 256,000 tokens (up to 1M with extension)
Multilingual       | Yes
Quantized*         | No
*Quantization is specific to the inference provider and the model may be offered with different quantization levels by other providers.

Example request

Use the Workbench as a request builder: configure parameters for this model in the UI, then open the API tab to copy the exact cURL or Python call.
curl -X POST https://hub.oxen.ai/api/ai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OXEN_API_KEY" \
  -d '{
  "model": "qwen3-vl-4b-instruct",
  "messages": [
    {
      "role": "user",
      "content": "Hello, what can you do?"
    }
  ]
}'
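Because this is a vision-language model, a request can also include an image alongside text. The sketch below builds such a body in Python, assuming the endpoint accepts OpenAI-style content parts (`image_url` plus `text`); verify the exact shape against the model's json_request_schema before relying on it.

```python
import json

def build_vision_request(image_url: str, question: str) -> dict:
    """Build a chat completions body that pairs an image with a question.

    Assumes OpenAI-style multimodal content parts; check the model's
    json_request_schema for the authoritative format.
    """
    return {
        "model": "qwen3-vl-4b-instruct",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": question},
                ],
            }
        ],
    }

# Serialize for use as the -d body of the curl call above.
payload = build_vision_request(
    "https://example.com/chart.png",  # hypothetical image URL
    "What does this chart show?",
)
print(json.dumps(payload, indent=2))
```

POST this body to https://hub.oxen.ai/api/ai/chat/completions with the same headers as the curl example above.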

Fetch model details

The models endpoint returns the full model object, including its json_request_schema.
curl -H "Authorization: Bearer $OXEN_API_KEY" https://hub.oxen.ai/api/ai/models/qwen3-vl-4b-instruct
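The same lookup in Python, using only the standard library. This mirrors the curl call above; the commented lines show where the actual network call and the json_request_schema lookup would go.

```python
import json
import os
import urllib.request

API_BASE = "https://hub.oxen.ai/api/ai"  # base URL from the examples above

def model_details_request(model_id: str) -> urllib.request.Request:
    """Build the GET request for the models endpoint."""
    return urllib.request.Request(
        f"{API_BASE}/models/{model_id}",
        headers={"Authorization": f"Bearer {os.environ.get('OXEN_API_KEY', '')}"},
    )

req = model_details_request("qwen3-vl-4b-instruct")
# resp = urllib.request.urlopen(req)        # performs the actual HTTP call
# details = json.load(resp)                 # full model object
# schema = details["json_request_schema"]   # per the docs, included in the response
```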

Request parameters

This model follows the standard OpenAI chat completions request body. See the chat completions reference for the full parameter list.
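As a sketch of what that standard body looks like with common optional parameters: `temperature` and `max_tokens` are typical OpenAI chat completions fields, but confirm support against the chat completions reference and the model's json_request_schema.

```python
import json

# Standard OpenAI-style chat completions body. The sampling parameters shown
# here are common examples, not an exhaustive or guaranteed list.
payload = {
    "model": "qwen3-vl-4b-instruct",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Hello, what can you do?"},
    ],
    "temperature": 0.7,
    "max_tokens": 512,
}

body = json.dumps(payload)  # suitable as the -d argument to curl
```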