
Try WAN 2.6 - Video to Video in the Workbench

Run this model interactively, tune parameters, and compare outputs.
Model ID: wan-v2-6-reference-to-video

WAN 2.6 - Video to Video is a diffusion model designed for reference-based video generation with character and identity consistency. It excels at generating cinematic videos from reference footage while maintaining stable character appearance, voice characteristics, and motion style across multiple shots. Noteworthy use cases include character-driven storytelling, brand-consistent video creation, and multi-character dialogue scenes with synchronized audio.
Metric                Value
Parameter Count       Unknown
Mixture of Experts    Unknown
Context Length        Unknown
Multilingual          Unknown
Quantized*            Unknown
*Quantization is specific to the inference provider and the model may be offered with different quantization levels by other providers.
Notes on the description: WAN 2.6 offers text-to-video, image-to-video, and reference-to-video modes, with output up to 1080p and 15 seconds, native audio with lip-sync, a multimodal architecture, and temporal stability. Specific technical details such as parameter count, mixture-of-experts status, context length, multilingual support, and quantization are not publicly documented for the Fal-hosted wan/v2.6/reference-to-video endpoint. The description above focuses on the reference-to-video capability, which is a core feature with strong identity retention and character consistency.

Example request

Use the Workbench as a request builder: configure parameters for this model in the UI, then open the API tab to copy the exact cURL or Python call.
This endpoint blocks until the video is ready (typically 5-15 minutes). Prefer the Async or Async with SSE endpoints for anything beyond quick experimentation. See the video generation reference for more details.
curl -X POST https://hub.oxen.ai/api/ai/videos/generate \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OXEN_API_KEY" \
  -d '{
  "model": "wan-v2-6-reference-to-video",
  "prompt": "An ox slowly walking down the road. Cinematic."
}'
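The same blocking call can be sketched in Python using only the standard library. This is a minimal sketch assuming the endpoint and payload shown in the cURL example above, with OXEN_API_KEY read from the environment; the actual send is left commented out since it blocks for several minutes.

```python
import json
import os
import urllib.request

# Endpoint from the cURL example above.
API_URL = "https://hub.oxen.ai/api/ai/videos/generate"


def build_generate_request(prompt, model="wan-v2-6-reference-to-video", api_key=None):
    """Build the POST request for the blocking video-generation endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt}).encode()
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key or os.environ.get('OXEN_API_KEY', '')}",
    }
    return urllib.request.Request(API_URL, data=payload, headers=headers, method="POST")


req = build_generate_request("An ox slowly walking down the road. Cinematic.")
# urllib.request.urlopen(req, timeout=900)  # blocks until the video is ready
```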

Fetch model details

The models endpoint returns the full model object, including its json_request_schema.
curl -H "Authorization: Bearer $OXEN_API_KEY" https://hub.oxen.ai/api/ai/models/wan-v2-6-reference-to-video
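A Python version of the same lookup, again a sketch assuming the models endpoint shown above; the `json_request_schema` field name comes from the description of the response. The network call is commented out.

```python
import os
import urllib.request

MODEL_ID = "wan-v2-6-reference-to-video"
url = f"https://hub.oxen.ai/api/ai/models/{MODEL_ID}"

# GET request with the same bearer-token auth as the cURL example.
req = urllib.request.Request(
    url, headers={"Authorization": f"Bearer {os.environ.get('OXEN_API_KEY', '')}"}
)
# with urllib.request.urlopen(req) as resp:
#     model = json.load(resp)
#     print(model["json_request_schema"])  # full parameter schema for this model
```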

Request parameters

Required parameters

Field    Type     Default                                            Description
prompt   string   "An ox slowly walking down the road. Cinematic."   Text description of what you want to generate, or the instruction on how to edit the given image.

Optional parameters

Field          Type            Default                                                                                       Description
input_videos   array<string>   ["https://hub.oxen.ai/api/repos/ox/Oxen-AI-Assets/file/main/images/winter_summer_ox.mp4"]     Videos to use as reference.
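A request body that overrides the optional `input_videos` parameter might look like the sketch below; the reference URL is the default from the table above, and any other value should match the model's `json_request_schema`.

```python
import json

# Example body combining the required prompt with the optional
# input_videos override (default reference URL from the table above).
body = {
    "model": "wan-v2-6-reference-to-video",
    "prompt": "An ox slowly walking down the road. Cinematic.",
    "input_videos": [
        "https://hub.oxen.ai/api/repos/ox/Oxen-AI-Assets/file/main/images/winter_summer_ox.mp4"
    ],
}
payload = json.dumps(body)
```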