At the end of the day, AI is about automating workflows and making sure your model performs well on your data.

Oxen.ai makes it easy to run models on your data. With Oxen, you can either kick off compute jobs on the Oxen.ai Hub or you can write your own custom code to log model results to an Oxen repository.
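
If you take the custom-code route, a minimal sketch of logging a results file with the oxen Python package (installed via pip install oxenai) might look like the following. The repository name and file name are placeholders, and the RemoteRepo add/commit calls reflect the current Python library; double-check the SDK docs for exact signatures.

# Minimal sketch: upload a file of model outputs to an Oxen.ai repository.
# "my-namespace/my-results-repo" and "results.parquet" are hypothetical placeholders.
from oxen import RemoteRepo
from oxen.auth import config_auth

config_auth("YOUR_API_KEY")                        # authenticate with your Oxen.ai API key
repo = RemoteRepo("my-namespace/my-results-repo")  # namespace/repo on Oxen.ai
repo.add("results.parquet")                        # stage the local results file on the remote
repo.commit("Log model predictions")               # commit it to the repository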

Kicking off Compute Jobs

The easiest way to get started with model inference is to use the Oxen.ai Hub. The Hub allows you to select a model and then run the model on your data.

The Oxen.ai Hub currently supports LLMs for processing Text columns, and we are working on adding support for more modalities, including Vision and Audio.

Supported Models

Oxen.ai supports many of the flagship AI models such as OpenAI’s GPT-4, Meta’s Llama, and Google’s Gemini.

To see which model would best suit your task, visit our Models Page. If you don’t see the model you need, please let us know and we’ll add it.

Grab a Dataset

For a simple example, we’ll use an SMS Spam Collection Dataset. The goal is to classify whether a given text message is spam or ham (not spam).

To get started, you can download the 200-row dataset here.

By the end of this tutorial, we will have processed our dataset through an LLM to classify whether a given text is spam or ham.
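
Before uploading, you can optionally take a quick look at the file locally. The sketch below is purely illustrative: it assumes the download is a CSV with hypothetical text and label columns, so adjust the path and column names to match the actual file.

# Quick local sanity check of the downloaded dataset (hypothetical path and column names).
import pandas as pd

df = pd.read_csv("sms_spam_sample.csv")   # path to the downloaded 200-row file
print(df.head())                          # preview the first few rows
print(df["label"].value_counts())         # rough spam vs. ham breakdown, if the label column is named "label"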

Upload Dataset

Once you have the sample dataset downloaded, you can upload it to an Oxen repository by clicking the + button in the top-right of the UI, selecting Create Repository, and then choosing Add Files.

Once the dataset is uploaded, you should see it in the file list.

Clicking on the file will allow you to preview the data.

Select a Model

Start by clicking the 🚀 icon in the top-right above the data preview. This will open up a Prompt UI where you can select a task type, select a model, and set up your prompt.

Once you are happy with your prompt, click the Run Sample button. This will run the model on the first 5 rows of data and show you the results.

You can either Run Again with a different prompt until you are satisfied with the results, or click Next to configure where the results should be written.

Run a Job

Once you are satisfied with your prompt, you can pick a destination branch and write a commit message to be used once the job is finished.

Be sure to check “Automatically commit on completion” if you want your results to be committed to the repository. Otherwise, the results will sit in a workspace for you to review before committing.

Query Results

Once you have a job committed to a branch, you can query the results using the Oxen.ai text2sql engine.

For example, you can see the breakdown of the prediction column by running the following query:

What is the percentage of spam vs ham?

The engine will write the SQL query behind the scenes and return the results.

As we can see here, 82% of the results are ham, but there is one anomalous result that says “Please provide the text you’d like me to classify.”

Let’s run another query to find this outlier.

Show me the row where the prediction is not equal to spam or ham.

Looks like it was row 101 in the original dataset. Even gpt-4o made a mistake! To fix this, we might want to tweak our prompt to be more explicit about where the text is located and re-run the job.

Programmatically Run a Job

If you need to run a model as part of a larger workflow, you can use the Oxen.ai API to programmatically run a model on a dataset.

Currently, the API is only exposed over HTTP and requires a valid API key in the Authorization header. To kick off a model inference job, send a POST request to the /api/repos/:namespace/:repo_name/evaluations/:resource endpoint.

For example, if the file you want to process is at:

https://oxen.ai/ox/customer-intents/main/data.parquet

The parameters should be:

  • :namespace -> ox
  • :repo_name -> customer-intents
  • :resource -> main/data.parquet (combination of branch name and file_name)
curl -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
    https://hub.oxen.ai/api/repos/:namespace/:repo_name/evaluations/:resource \
    --data '{
    "name": "My Awesome Evaluation",
    "prompt": "Classify the following text into one of the following categories: [intent_1, intent_2, intent_3]\n{my_text_column_name}",
    "type": "text",
    "model": "gpt-4o-mini",
    "is_sample": false,
    "target_column": "prediction",
    "target_branch": "api-results-branch",
    "auto_commit": true,
    "commit_message": "test commit message"
}'

Make sure to grab the evaluation.id from the response as you will need it to check the status of the job or retrieve the results later.
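
Until the Python SDK lands, you can also make the same request from Python with the requests library. This sketch mirrors the curl command above; the environment variable name and the exact JSON shape of the response are assumptions (based on the evaluation.id field mentioned here), so inspect the response you actually get back.

# Kick off a model inference job over the HTTP API using requests.
import os
import requests

TOKEN = os.environ["OXEN_API_TOKEN"]   # hypothetical env var holding your API key

# Same repo and file as the curl example: ox/customer-intents, main/data.parquet
url = "https://hub.oxen.ai/api/repos/ox/customer-intents/evaluations/main/data.parquet"
payload = {
    "name": "My Awesome Evaluation",
    "prompt": "Classify the following text into one of the following categories: "
              "[intent_1, intent_2, intent_3]\n{my_text_column_name}",
    "type": "text",
    "model": "gpt-4o-mini",
    "is_sample": False,
    "target_column": "prediction",
    "target_branch": "api-results-branch",
    "auto_commit": True,
    "commit_message": "test commit message",
}

response = requests.post(url, json=payload, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()
evaluation_id = response.json()["evaluation"]["id"]   # assumed response shape; see evaluation.id above
print(f"Started evaluation {evaluation_id}")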

To check the status of the job, you can send a GET request to the /api/repos/:namespace/:repo_name/evaluations/:evaluation_id endpoint.

curl -X GET -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
    https://hub.oxen.ai/api/repos/:namespace/:repo_name/evaluations/:evaluation_id

You can poll this endpoint to see the progress of your job, or check it in the Oxen.ai UI under the “Evaluations” tab.
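
From code, a simple polling loop with requests could look like the sketch below. The status field name and values are assumptions; check the actual response body or the Evaluations tab for the real fields.

# Poll the evaluation status endpoint until the job finishes.
import os
import time
import requests

TOKEN = os.environ["OXEN_API_TOKEN"]      # hypothetical env var holding your API key
evaluation_id = "YOUR_EVALUATION_ID"      # the evaluation.id returned when the job was created

status_url = f"https://hub.oxen.ai/api/repos/ox/customer-intents/evaluations/{evaluation_id}"

while True:
    response = requests.get(status_url, headers={"Authorization": f"Bearer {TOKEN}"})
    response.raise_for_status()
    evaluation = response.json()["evaluation"]                    # assumed response shape
    print(evaluation.get("status"))                               # assumed field name
    if evaluation.get("status") not in ("pending", "running"):    # assumed status values
        break
    time.sleep(30)   # wait before checking again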

We will be adding Python SDK support for this in the near future.