
Features
Test out prompts on closed-source models or fine-tune your own models. Once you have a model you love, run it at scale with batch inference.
- Inference - Quickly iterate on prompts and models
- Fine-Tuning - Go from dataset to deployable model in a few clicks
- Datasets - Build datasets for training, fine-tuning, or evaluating models
- Batch Inference - Run your model at scale over large datasets to label data, generate synthetic data, or evaluate performance
- Version Control - Sync your datasets, model weights, and code with a collaborative hub
Quickly Iterate on Models
Whether you are making your first LLM call or deploying a fine-tuned model, Oxen.ai gives you the flexibility to swap models through a unified Model API. The interface is OpenAI compatible and supports foundation models from Anthropic, Google, Meta, and OpenAI. See the list of supported models to get started. Closed-source models not working for your use case? Fine-tune your own model, optimizing it for accuracy, speed, or cost, and deploy it to the same interface in minutes.
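Because the interface is OpenAI compatible, you can reuse the official `openai` client and swap models by changing a single string. Here is a minimal sketch; the base URL, environment variable, and model name are placeholders, not the exact values, so substitute the ones from your Oxen.ai account and the supported models list.

```python
# Minimal sketch of calling an OpenAI-compatible Model API with the official
# openai client. Base URL, API key variable, and model name are placeholders.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://hub.oxen.ai/api",      # placeholder: your Oxen.ai Model API endpoint
    api_key=os.environ["OXEN_API_KEY"],      # placeholder: your Oxen.ai API key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # swap in any supported foundation model or your fine-tune
    messages=[{"role": "user", "content": "Summarize this support ticket in one sentence: ..."}],
)
print(response.choices[0].message.content)
```

Swapping providers or dropping in your own fine-tuned model is then just a change to the `model` string, with no other code changes.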
Fine-Tune Models
The best models are the ones that understand your context and continue to learn from your data over time. Go from dataset to model in a few clicks with Oxen.ai's fine-tuning tooling. Select a dataset, define your inputs and outputs, and let Oxen.ai do the grunt work. Oxen saves model weights to its version store, tying them to the dataset and code used to train them.
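Defining inputs and outputs amounts to pointing the fine-tune at two columns of a dataset. The sketch below shows one way such a dataset might be prepared with pandas; the `prompt`/`response` column names and file name are illustrative assumptions, since you map whichever columns you like to inputs and outputs in the UI.

```python
# Illustrative only: build a small two-column fine-tuning dataset.
# Column names ("prompt", "response") and the file name are assumptions --
# map your own columns to inputs and outputs when configuring the fine-tune.
import pandas as pd

examples = pd.DataFrame(
    {
        "prompt": [
            "Classify the sentiment: 'The update broke my workflow.'",
            "Classify the sentiment: 'Setup took two minutes, love it.'",
        ],
        "response": ["negative", "positive"],
    }
)
examples.to_csv("finetune_train.csv", index=False)
```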
Build Datasets
Quality datasets are the difference between prototypes and production models. Collaborate on multi-modal datasets used for training, fine-tuning, or evaluating models. Backed by Oxen.ai's version control, you'll never have to wonder what data a model was trained or evaluated on. Learn how to interface with datasets using the Oxen Python Library, or read more about supported dataset types and formats here.
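As a rough sketch of what that looks like in code, the snippet below pulls a versioned dataset file and loads it into a DataFrame. The repository name and file path are placeholders, and the `RemoteRepo` usage shown is an assumption about the Oxen Python Library's interface, so check the library docs for the exact calls.

```python
# Sketch: download a versioned dataset file with the Oxen Python Library
# (pip install oxenai). Repo name and path are placeholders; the RemoteRepo
# API shown here is an assumption -- consult the Oxen Python Library docs.
import pandas as pd
from oxen import RemoteRepo

repo = RemoteRepo("my-namespace/my-repo")
repo.download("data/train.csv")  # fetch the file at the repo's current revision

df = pd.read_csv("data/train.csv")
print(df.head())
```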
Run Models at Scale
Find the best model and prompt for your use case. Leverage your own datasets to build custom evaluations. Evaluation results are versioned and saved as datasets in the repository for easy performance tracking over time.
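The underlying pattern is simple: run a model over every row of a labeled dataset, record the outputs next to the labels, and save the result as a new dataset you can version. The sketch below illustrates that pattern with the OpenAI-compatible client; on Oxen.ai the batch run itself happens through the batch inference tooling and the results are versioned for you, and the endpoint, model name, and column names here are placeholders.

```python
# Sketch of the batch-evaluation pattern: run a model over every row of a
# labeled dataset and save predictions alongside the labels. Endpoint, model
# name, and the "prompt"/"label" column names are illustrative placeholders.
import os

import pandas as pd
from openai import OpenAI

client = OpenAI(
    base_url="https://hub.oxen.ai/api",      # placeholder endpoint
    api_key=os.environ["OXEN_API_KEY"],      # placeholder API key variable
)

rows = pd.read_csv("eval_set.csv")  # expects "prompt" and "label" columns
predictions = []
for prompt in rows["prompt"]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    predictions.append(resp.choices[0].message.content.strip())

rows["prediction"] = predictions
rows["correct"] = rows["prediction"].str.lower() == rows["label"].str.lower()
print(f"Accuracy: {rows['correct'].mean():.2%}")

# Save the results as a dataset; committing it back to the repository is what
# makes performance trackable across models and prompts over time.
rows.to_csv("eval_results.csv", index=False)
```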