Overview

This schema is used for fine-tuning models with text generation capabilities.

Schema Type

When creating a fine-tune with this schema, use:
{
  "resource": "main/your-dataset.parquet",
  "base_model": "<model-canonical-name>",
  "script_type": "text_generation",
  "training_params": {
    ...
  }
}
Key Parameters:
  • script_type: text_generation (the fine-tune type)
  • base_model: One of the supported model canonical names below
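
The snippet below is a minimal sketch of how a request using this schema could be submitted over HTTP with Python. The endpoint URL, authentication header, and environment variable are placeholders and are not defined by this schema; substitute your platform's actual values.

import os
import requests

# Hypothetical endpoint and credentials -- replace with your platform's real values.
API_URL = "https://api.example.com/v1/fine-tunes"
API_KEY = os.environ["EXAMPLE_API_KEY"]

payload = {
    "resource": "main/your-dataset.parquet",
    "base_model": "meta-llama/Llama-3.2-1B-Instruct",  # any supported canonical name
    "script_type": "text_generation",
    "training_params": {
        "question_column": "prompt",    # must match your DataFrame column names
        "answer_column": "response",
    },
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
response.raise_for_status()
print(response.json())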

Supported Models

  • Llama 3.2 1B Instruct (meta-llama/Llama-3.2-1B-Instruct)
  • OpenAI/GPT-OSS-20B (openai/gpt-oss-20b)
  • Llama 3.1 8B Instruct (meta-llama/Llama-3.1-8B-Instruct)
  • Llama 3.2 3B Instruct (meta-llama/Llama-3.2-3B-Instruct)
  • Qwen/Qwen3-1.7B (Qwen/Qwen3-1.7B)
  • Qwen/Qwen3-4B (Qwen/Qwen3-4B)
  • Qwen/Qwen3-0.6B (Qwen/Qwen3-0.6B)
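
For convenience, the canonical names above can be collected into a small lookup to validate a base_model value before submitting a job. This is a client-side sketch, not part of the API:

SUPPORTED_BASE_MODELS = {
    "meta-llama/Llama-3.2-1B-Instruct",
    "meta-llama/Llama-3.2-3B-Instruct",
    "meta-llama/Llama-3.1-8B-Instruct",
    "openai/gpt-oss-20b",
    "Qwen/Qwen3-0.6B",
    "Qwen/Qwen3-1.7B",
    "Qwen/Qwen3-4B",
}

def check_base_model(name: str) -> None:
    # Fail fast on an unsupported canonical name before creating the fine-tune.
    if name not in SUPPORTED_BASE_MODELS:
        raise ValueError(f"Unsupported base_model: {name!r}")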

Request Schema

Fields

Field               | Type    | Required | Description
answer_column       | string  | Yes      | Assistant (Response) Column (DataFrame column name)
batch_size          | integer | No       | (default: 1) (min: 1)
enable_thinking     | boolean | No       | (default: false)
epochs              | integer | No       | (default: 1) (min: 1)
grad_accum          | integer | No       | (default: 1) (min: 1)
learning_rate       | number  | No       | (default: 0.0001)
logging_steps       | integer | No       | (default: 10) (min: 1)
lora_alpha          | integer | No       | (default: 16) (min: 1)
lora_rank           | integer | No       | (default: 16) (min: 1)
neftune_noise_alpha | number  | No       | (default: 0)
question_column     | string  | Yes      | User (Prompt) Column (DataFrame column name)
save_steps_ratio    | number  | No       | (default: 0.25)
save_strategy       | string  | No       | (default: "epoch")
seq_length          | integer | No       | (default: 1024) (min: 1)
use_lora            | boolean | No       | Use LoRA (default: true)

Example Request

{
  "resource": "main/your-dataset.parquet",
  "base_model": "<model-canonical-name>",
  "script_type": "text_generation",
  "training_params": {
    "answer_column": "<answer_column>",
    "batch_size": 1,
    "enable_thinking": false,
    "epochs": 1,
    "grad_accum": 1,
    "learning_rate": 0.0001,
    "logging_steps": 10,
    "lora_alpha": 16,
    "lora_rank": 16,
    "neftune_noise_alpha": 0,
    "question_column": "<question_column>",
    "save_steps_ratio": 0.25,
    "save_strategy": "epoch",
    "seq_length": 1024,
    "use_lora": true
  }
}
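
Because question_column and answer_column refer to DataFrame column names, the parquet file referenced by resource must contain those columns. A minimal preparation sketch with pandas (the column names, example rows, and file path below are placeholders):

import pandas as pd

# Placeholder column names -- training_params would then set
# "question_column": "prompt" and "answer_column": "response".
df = pd.DataFrame(
    {
        "prompt": ["What is LoRA?", "Summarize the following paragraph."],
        "response": ["LoRA is a parameter-efficient fine-tuning method ...", "A short summary ..."],
    }
)
df.to_parquet("your-dataset.parquet", index=False)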

Field Details

answer_column

Assistant (Response) Column. Type: string. Name of the DataFrame column that contains the assistant responses.

batch_size

Training batch size. Type: integer. Default: 1. Minimum: 1.

enable_thinking

Type: boolean. Default: false. Enables the chat template's thinking mode on models that support it (e.g., the Qwen3 family).

epochs

Number of training epochs. Type: integer. Default: 1. Minimum: 1.

grad_accum

Gradient accumulation steps. Type: integer. Default: 1. Minimum: 1. The effective batch size is batch_size × grad_accum (e.g., batch_size 1 with grad_accum 8 gives an effective batch size of 8).

learning_rate

Optimizer learning rate. Type: number. Default: 0.0001. Minimum: 0.

logging_steps

Number of training steps between log entries. Type: integer. Default: 10. Minimum: 1.

lora_alpha

LoRA scaling factor (alpha). Type: integer. Default: 16. Minimum: 1. Applies when use_lora is true.

lora_rank

LoRA adapter rank. Type: integer. Default: 16. Minimum: 1. Applies when use_lora is true.

neftune_noise_alpha

NEFTune noise alpha. Type: number. Default: 0. Minimum: 0. A value of 0 disables NEFTune noise injection.

question_column

User (Prompt) Column. Type: string. Name of the DataFrame column that contains the user prompts.

save_steps_ratio

Type: number. Default: 0.25.

save_strategy

Checkpoint save strategy. Type: string. Default: "epoch".

seq_length

Maximum sequence length in tokens. Type: integer. Default: 1024. Minimum: 1.

use_lora

Use LoRA. Type: boolean. Default: true. Enables LoRA for faster fine-tuning and lower memory use.
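
The training backend is not specified in this schema. As a rough mental model only, lora_rank and lora_alpha correspond to the rank and scaling parameters of a LoRA adapter configuration; the sketch below assumes a Hugging Face peft-style setup, which this schema does not confirm:

from peft import LoraConfig

# Illustrative mapping of this schema's LoRA fields onto peft's LoraConfig.
# task_type is an assumption for causal language models, not a schema field.
lora_config = LoraConfig(
    r=16,           # lora_rank
    lora_alpha=16,  # lora_alpha
    task_type="CAUSAL_LM",
)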