## Documentation Index
Fetch the complete documentation index at: https://docs.oxen.ai/llms.txt
Use this file to discover all available pages before exploring further.
## Overview
Generate text responses from language models using the OpenAI-compatible chat completions API. Supports streaming, vision, tool calling, and structured output.
## Minimal Example

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://hub.oxen.ai/api/ai",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "What is Oxen.ai?"}],
    max_tokens=200,
)

print(response.choices[0].message.content)
```
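The overview notes that vision is supported. A minimal sketch of building a multimodal message, assuming the endpoint follows the standard OpenAI image-content convention (`"image_url"` parts carrying a base64 data URL) — the helper name is ours, not part of the API:

```python
import base64

def image_message(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build a user message pairing text with an inline base64-encoded image."""
    data_url = f"data:{mime};base64,{base64.b64encode(image_bytes).decode()}"
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }

# Pass the result as one entry in `messages` to
# client.chat.completions.create, exactly as in the minimal example above.
msg = image_message("Describe this image.", b"\x89PNG...")  # placeholder bytes
```

Inlining the image as a data URL keeps the request self-contained; check the model you target actually accepts image input before relying on this.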
## With Streaming

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://hub.oxen.ai/api/ai",
    api_key="YOUR_API_KEY",
)

stream = client.chat.completions.create(
    model="gemini-3-1-flash-lite-preview",
    messages=[{"role": "user", "content": "Write a haiku about data"}],
    stream=True,
)

for chunk in stream:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="", flush=True)
print()
```
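The overview also mentions tool calling. A hedged sketch, assuming the endpoint accepts the standard OpenAI `tools` parameter; the `get_weather` schema and its stubbed result are illustrative, not part of the Oxen.ai API:

```python
import json

# An illustrative function schema in the standard OpenAI tools format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

def dispatch(tool_call) -> str:
    """Run the local function a tool call names and return its result as JSON."""
    args = json.loads(tool_call.function.arguments)
    if tool_call.function.name == "get_weather":
        # Stubbed result; a real implementation would call a weather service.
        return json.dumps({"city": args["city"], "forecast": "sunny"})
    raise ValueError(f"unknown tool: {tool_call.function.name}")

# Pass `tools=tools` to client.chat.completions.create; if the returned
# message carries `tool_calls`, run dispatch() on each and send the results
# back as {"role": "tool", "tool_call_id": ..., "content": ...} messages.
```

As with vision, confirm the model you select supports tool calling before wiring this into a loop.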
## What’s Next