Oxen.ai lets you go from datasets to custom models with a few clicks.
Simply upload your data, and we will provision GPU infrastructure to execute the training process, then save the fine-tuned model weights directly to your repository. Model weights and datasets are versioned so that you can always track the data that was used to train the model. Once the fine-tuning process is complete, you can deploy the model and start using it in your application.
If you are looking for a more hands-on approach to fine-tuning, you can write your own code in Notebooks.
Here are specific examples of how fine-tuning can be used to solve real-world problems. From coding agents to optimizing tool calling for your agent, there are many use cases for fine-tuning.
To get started, you'll need to create a new repository on Oxen.ai. Once you've created a repository, you can upload your data. The dataset can be in any tabular format, including csv, jsonl, or parquet.

Once your dataset is uploaded, you can query and explore it to make sure the data is high quality before kicking off the fine-tuning process. Your model will only be as good as the data you train it on.

When you feel confident that your dataset is ready, use the "Actions" button to select the model you want to fine-tune.
This will take you to a form where you can select the model you want to fine-tune and the columns you want to use for the fine-tuning process. Right now we support fine-tuning for prompt/response single-turn chat pairs.
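The prompt/response pairs above can be prepared as a jsonl file before uploading. A minimal sketch, assuming your data fits a single-turn format with one prompt column and one response column (the file name and example rows are placeholders):

```python
import json

# Hypothetical prompt/response pairs -- replace with your own data.
rows = [
    {"prompt": "What is a good name for a friendly ox?", "response": "Bessie"},
    {"prompt": "What do oxen eat?", "response": "Mostly grasses, hay, and grains."},
]

# Write one JSON object per line (jsonl), the single-turn
# prompt/response chat format described above.
with open("train.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```

You can then upload `train.jsonl` to your repository and select the two columns in the fine-tuning form.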
If you want support for any larger models or modalities like text-to-image, contact us. We are actively working on support for different data formats and distributed training.
Once you have started the fine-tuning process, you can monitor its progress. The dashboard will show you loss over time, token accuracy, the learning rate, and the number of tokens processed.

Click on the "Configuration" tab to see the fine-tuning configuration. This will include a link to the dataset version you used and the raw model weights, as well as the pricing for the fine-tuning process.
Once the model is fine-tuned, you can deploy it to a hosted endpoint. This will give you a /chat/completions endpoint that you can use to test out the model. Swap out the model name with the name of your fine-tuned model.
```bash
curl https://hub.oxen.ai/api/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "oxenai:my-model-name",
    "messages": [
      {"role": "user", "content": "What is the best name for a friendly ox?"}
    ]
  }'
```
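The same request can be made from Python with only the standard library. This is a sketch rather than an official client: the `Authorization` bearer header and the `API_KEY` placeholder are assumptions, so check your account settings for the exact authentication scheme.

```python
import json
import urllib.request

ENDPOINT = "https://hub.oxen.ai/api/chat/completions"
API_KEY = "your-api-key"  # hypothetical placeholder -- use your own key

def build_payload(model: str, prompt: str) -> bytes:
    # Same JSON body as the curl example above.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")

def chat(model: str, prompt: str) -> dict:
    req = urllib.request.Request(
        ENDPOINT,
        data=build_payload(model, prompt),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",  # assumed: bearer-token auth
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(chat("oxenai:my-model-name", "What is the best name for a friendly ox?"))
```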
If you want access to the raw model weights, you can download them from the repository using the Oxen.ai Python Library or the CLI. Follow the instructions for installing oxen if you haven't already.
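A minimal sketch of the download using the Python library, assuming the oxen package is installed and that the repository name and weights path below are placeholders for your own:

```python
# Sketch only: "your-username/your-repo" and "models/my-model-name"
# are hypothetical names -- substitute your repository and the path
# where the fine-tuned weights were saved.
from oxen import RemoteRepo

repo = RemoteRepo("your-username/your-repo")
# Download the directory containing the fine-tuned model weights.
repo.download("models/my-model-name")
```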
If you need custom or private deployments in your own VPC or want to train a larger model on distributed infrastructure, contact us and we can give you a custom deployment.