Simply upload your data, and we will provision GPU infrastructure and run the fine-tune. When it's done, Oxen.ai will save the fine-tuned model weights directly to your repository. You can deploy your model to a dedicated endpoint and use the inference endpoints to test it out. Model weights and datasets are versioned so that you can always track the data that was used to train the model. Once the fine-tuning process is complete, you can deploy the model and start using it in your application.

Why Fine-Tune?

Fine-tuning is a great tool to reach for when basic prompting and context engineering fall short. You may need to fine-tune when:
  • Quality is critical and the model isn't consistently producing correct outputs.
  • Proprietary Data gives you a unique advantage that generic models can't capture.
  • Latency is a deal breaker and you need real-time responses.
  • Throughput limitations are bottlenecking your application's scalability.
  • Ownership of the model is important and you want to control your own destiny.
  • Cost is a concern because a foundation model is too expensive for your use case, or you want to deploy a smaller model to the edge.
Oxen.ai makes it easy to automate the process of fine-tuning LLMs on your own data.

Tasks

Oxen.ai supports fine-tuning across many data types and tasks, including text generation, image generation, image editing, and video generation.

Start by Uploading a Dataset

To get started, you'll need to create a new repository on Oxen.ai. Once you've created a repository, you can upload your data. The dataset can be in any tabular format, including CSV, JSONL, or Parquet. Once your dataset is uploaded, you can query and explore it to make sure the data is high quality before kicking off the fine-tuning process. Your model will only be as good as the data you train it on. When you feel confident that your dataset is ready, use the "Actions" button to select the model you want to fine-tune.
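If you prefer to upload data programmatically instead of through the web UI, you can push a file with the Oxen.ai Python Library. The snippet below is a minimal sketch: the repository name and train.jsonl file are placeholders, and it assumes you have already configured your API key for the library.

# Sketch: push a local dataset file to an Oxen.ai repository with the Python library.
# "my-username/my-repo" and "train.jsonl" are placeholder names; auth is assumed to
# already be configured for the oxen library.
from oxen import RemoteRepo

repo = RemoteRepo("my-username/my-repo")
repo.add("train.jsonl")                     # stage the local dataset file
repo.commit("Add fine-tuning dataset")      # commit it to the remote repository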

Selecting a Model

This will take you to a form where you can select the model you want to fine-tune and the columns you want to use for the fine-tuning process. We support fine-tuning for text generation, image generation, image editing, and video generation with a variety of models.
If you want support for specific models, data formats, or training methods, contact us. We are actively working on support for new models and distributed training.

Monitoring the Fine-Tune

Once you have started the fine-tuning process, you can monitor its progress. The dashboard will show you loss and token accuracy over time. If you are fine-tuning an image or video generation model, you can view the generated images or videos in the "Samples" tab to get a feel for the model's performance. Click on the "Info" tab to see the fine-tuning configuration and all the hyperparameters used. This includes a link to the dataset version you used and the raw model weights for downloading and running locally.

Deploying the Model

Once the model is fine-tuned, you can deploy it to a dedicated endpoint. This will give you a /chat/completions API that you can use to test out the model. Swap out the model name with the name of the model you want to use.
curl https://hub.oxen.ai/api/chat/completions -H "Content-Type: application/json" -d '{
    "model": "oxen:my-model-name",
    "messages": [{"role": "user", "content": "What is the best name for a friendly ox?"}]
}'
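You can make the same request from Python. The snippet below is a minimal sketch using the requests library; the Bearer-token Authorization header and the OXEN_API_KEY environment variable are assumptions, so check your account settings for the exact authentication scheme.

# Sketch: call the deployed model's /chat/completions endpoint from Python.
# The Bearer-token auth header and OXEN_API_KEY variable are assumed, not confirmed.
import os
import requests

response = requests.post(
    "https://hub.oxen.ai/api/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OXEN_API_KEY']}",  # assumed auth scheme
    },
    json={
        "model": "oxen:my-model-name",  # replace with your deployed model's name
        "messages": [
            {"role": "user", "content": "What is the best name for a friendly ox?"}
        ],
    },
)
print(response.json())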

Using the Model

Once the model is deployed, you can also chat with it using the Oxen.ai chat interface. Learn more about the chat interface here. For image and video generation, you can use the playground to generate images and videos.

Downloading the Model

If you want access to the raw model weights, you can download them from the repository using the Oxen.ai Python Library or the CLI. Follow the instructions for installing oxen if you haven't already.
oxen download my-username/my-repo models/ox-artistic-cyan-elephant/model.safetensors --revision models/ox-artistic-cyan-elephant
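If you prefer the Python library over the CLI, a download sketch is shown below. The RemoteRepo.download call and its revision argument are assumptions about the library's interface, so consult the Python library docs for the exact signature; the paths mirror the CLI example above.

# Sketch: fetch the fine-tuned weights with the Python library instead of the CLI.
# The download method signature and revision argument are assumptions.
from oxen import RemoteRepo

repo = RemoteRepo("my-username/my-repo")
repo.download(
    "models/ox-artistic-cyan-elephant/model.safetensors",
    revision="models/ox-artistic-cyan-elephant",  # branch holding the model weights
)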

Need Custom Infrastructure?

If you need custom or private deployments in your own VPC or want to train a larger model on distributed infrastructure, contact us and we can give you a custom deployment.