This tutorial will show you how to fine-tune an LLM for text generation. If your dataset has a single input column and a single output column you want the model to learn from, this is the right fine-tuning method for you.

Create Your Dataset

For this example, we are using a formatted version of the mlabonne/FineTome-100k dataset, filtered to only educational content and limited to the first 3,000 rows. The dataset has one column for the prompt and one for the response. Oxen supports datasets in a variety of formats, including jsonl, csv, and parquet.

[Screenshot: datasets page]
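As a sketch of the expected shape, a prompt/response dataset in jsonl format might be built like this (the rows and column names here are illustrative; match the column names to your actual dataset):

```python
import json

# Illustrative rows -- in practice these would come from your own data,
# e.g. a filtered slice of mlabonne/FineTome-100k.
rows = [
    {"prompt": "What is photosynthesis?",
     "response": "Photosynthesis is the process by which plants convert "
                 "light into chemical energy."},
    {"prompt": "State Newton's first law.",
     "response": "An object stays at rest or in uniform motion unless "
                 "acted on by a net external force."},
]

# jsonl is simply one JSON object per line.
with open("train.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```

The same two-column structure works equally well as csv or parquet; jsonl is just the easiest to write by hand.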

Fine-Tuning The Model

Once you have uploaded your dataset, click the “Actions” button and select “Fine-tune a model”.

[Screenshot: Fine-tune button]

Next, select your base model, the prompt source, the response source, whether you’d like to use LoRA, and whether you want advanced control over the fine-tune.

[Screenshot: fine-tune setup page]

Under Advanced Options, you can control hyperparameters and model specifications such as the learning rate, batch size, and number of epochs.

[Screenshot: advanced options]
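To make the hyperparameters concrete, here is a minimal sketch of what an advanced configuration covers. The field names and values are illustrative assumptions, not Oxen's exact API; set the equivalents in the Advanced Options UI.

```python
# Hypothetical fine-tune configuration -- field names are illustrative.
config = {
    "base_model": "Qwen/Qwen2.5-1.5B-Instruct",  # example base model
    "use_lora": True,       # LoRA trains small adapter matrices instead of all weights
    "learning_rate": 2e-4,  # LoRA runs usually tolerate a higher LR than full fine-tunes
    "batch_size": 4,
    "num_epochs": 3,
}

# Rough number of optimizer steps for a 3,000-row dataset:
steps = (3000 // config["batch_size"]) * config["num_epochs"]
print(steps)  # 2250
```

A quick step estimate like this is useful for sanity-checking epoch and batch-size choices before launching a run.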

Monitoring the Fine-Tune

While your model is fine-tuning, you’ll be able to see the configuration, logs, and metrics of the run.

[Screenshot: metrics example]

Deploying the Model

Once your fine-tune is complete, go to the configuration page and click “Deploy”. From there, you will not only have an API endpoint to use, but you will also be able to chat with your fine-tuned model to get a sense of how it’s doing.

[Screenshot: deploying and chatting with the fine-tuned model]
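As a sketch of calling the deployed model, the snippet below builds a chat-style request body. The endpoint URL and payload schema here are assumptions for illustration; copy the real URL, schema, and auth details from your deployment page.

```python
import json

# Placeholder endpoint -- replace with the URL shown on your deployment page.
ENDPOINT = "https://hub.oxen.ai/api/models/your-model/chat"

def build_request(prompt: str) -> dict:
    """Build a hypothetical chat-style request body for the deployed model."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

body = build_request("Explain photosynthesis in one sentence.")
print(json.dumps(body))

# To actually call the endpoint you would POST this body, e.g. with
# requests.post(ENDPOINT, json=body,
#               headers={"Authorization": "Bearer <your-token>"}).
```

Chatting with the model in the UI first is a good way to spot-check quality before wiring the endpoint into an application.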

Next Steps

Oxen.ai makes fine-tuning easy, but if you are struggling to fine-tune your model or want us to fine-tune for you, we’d be happy to set up a free consultation with our ML experts!