Stop a fine-tune if it is currently running or queued.

curl --request POST \
  --url https://dev.hub.oxen.ai/api/repos/{namespace}/{repo_name}/fine_tunes/{id}/actions/stop \
  --header 'Authorization: Bearer <token>'

{
  "fine_tune": {
    "base_model": "<string>",
    "created_by": {
      "id": "<string>",
      "image": "<string>",
      "name": "<string>",
      "username": "<string>"
    },
    "credits_used": "<string>",
    "deployed_model": {},
    "description": "<string>",
    "display_name": "<string>",
    "error": "<string>",
    "fine_tune_script": {
      "description": "<string>",
      "display_name": "<string>",
      "docker_image_name": "<string>",
      "fine_tune_schema": {
        "description": "<string>",
        "id": "<string>",
        "name": "<string>",
        "schema": {
          "additionalProperties": true,
          "basic": [
            "<string>"
          ],
          "properties": {},
          "required": [
            "<string>"
          ],
          "type": "<string>"
        }
      },
      "id": "<string>",
      "name": "<string>",
      "script_type": "<string>"
    },
    "finished_at": "<string>",
    "gpu_count": 123,
    "gpu_model": "<string>",
    "id": "<string>",
    "inserted_at": "<string>",
    "last_credit_check": "<string>",
    "name": "<string>",
    "output_resource": {},
    "queue_position": 123,
    "rate_per_second": "<string>",
    "repository_id": "<string>",
    "resource": {
      "path": "<string>",
      "version": "<string>"
    },
    "source_model": {},
    "started_at": "<string>",
    "status": "<string>",
    "total_token_count": 0,
    "training_params": {
      "answer_column": "<string>",
      "batch_size": 123,
      "enable_thinking": true,
      "epochs": 123,
      "grad_accum": 123,
      "learning_rate": 123,
      "logging_steps": 123,
      "lora_alpha": 123,
      "lora_rank": 123,
      "neftune_noise_alpha": 123,
      "question_column": "<string>",
      "save_steps_ratio": 123,
      "save_strategy": "<string>",
      "seq_length": 123,
      "use_lora": true
    },
    "updated_at": "<string>",
    "use_lora": true
  },
  "status": "<string>",
  "status_message": "<string>"
}
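The same call can be made programmatically. Below is a minimal sketch using only the Python standard library; the namespace, repository name, fine-tune ID, and token are placeholders you would supply yourself:

```python
import json
import urllib.request

API_BASE = "https://dev.hub.oxen.ai/api"

def build_stop_url(namespace: str, repo_name: str, fine_tune_id: str) -> str:
    """Build the URL for the stop action on a fine-tune."""
    return f"{API_BASE}/repos/{namespace}/{repo_name}/fine_tunes/{fine_tune_id}/actions/stop"

def stop_fine_tune(namespace: str, repo_name: str, fine_tune_id: str, token: str) -> dict:
    """POST to the stop endpoint and return the decoded JSON response."""
    req = urllib.request.Request(
        build_stop_url(namespace, repo_name, fine_tune_id),
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A call such as `stop_fine_tune("my-namespace", "my-repo", "<id>", token)` would return the wrapper object shown above.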
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
id: Fine-tune ID

Stop fine-tune response

Standard wrapper for fine-tune /stop responses.

fine_tune: Fine-tune job resource as returned by /stop
fine_tune.base_model: Canonical name of the base model, e.g. 'Qwen/Qwen3-0.6B'
fine_tune.credits_used: Credits used so far for this fine-tune
fine_tune.deployed_model: Deployment information for the resulting model, if deployed
fine_tune.description: Optional description of the fine-tune
fine_tune.display_name: Optional display name for the fine-tune
fine_tune.error: Error message if the fine-tune failed
fine_tune.fine_tune_script: Fine-tune script configuration used for this run
fine_tune_script.description: Description of what the script does
fine_tune_script.display_name: Human-friendly name for the script
fine_tune_script.docker_image_name: Docker image used to run the script
fine_tune_script.fine_tune_schema: Fine-tune configuration schema used by the script
fine_tune_schema.description: Schema description
fine_tune_schema.id: Fine-tune schema ID
fine_tune_schema.name: Schema name
fine_tune_schema.schema: JSON schema describing training params UI
schema.properties: Per-parameter schema; structure is flexible
fine_tune_script.id: Fine-tune script ID
fine_tune_script.name: Fine-tune script name
fine_tune_script.script_type: Type of script (for example, 'text_generation')
fine_tune.finished_at: Time when training finished
fine_tune.gpu_count: Number of GPUs requested for the job
fine_tune.gpu_model: GPU model requested for the job (for example, 'A10G')
fine_tune.id: Fine-tune ID
fine_tune.inserted_at: Creation timestamp
fine_tune.last_credit_check: Last time credits were checked for this job
fine_tune.name: Fine-tune name
fine_tune.output_resource: Optional output resource produced by this fine-tune
fine_tune.queue_position: Queue position if the job is queued
fine_tune.rate_per_second: Billing rate per second for this fine-tune
fine_tune.repository_id: ID of the repository this fine-tune belongs to
fine_tune.source_model: Base model that is being fine-tuned
fine_tune.started_at: Time when training actually started
fine_tune.status: Current status of the fine-tune
fine_tune.total_token_count: Total number of tokens processed
fine_tune.training_params: Training parameters used for this fine-tune run
training_params.answer_column: Column containing assistant responses
training_params.batch_size: Per-device batch size used during training
training_params.enable_thinking: Whether to enable thinking tokens during training
training_params.epochs: Number of epochs to train for
training_params.grad_accum: Gradient accumulation steps
training_params.learning_rate: Base learning rate for the optimizer
training_params.logging_steps: Interval (in steps) at which logs are written
training_params.lora_alpha: LoRA alpha scaling factor
training_params.lora_rank: LoRA rank for low-rank adapters
training_params.neftune_noise_alpha: NEFTune noise alpha
training_params.question_column: Column containing user prompts
training_params.save_steps_ratio: Ratio of steps at which to save checkpoints
training_params.save_strategy: Save strategy, e.g. 'epoch'
training_params.seq_length: Sequence length used during training
training_params.use_lora: Whether LoRA is enabled for this run
fine_tune.updated_at: Last update timestamp
fine_tune.use_lora: Whether LoRA fine-tuning is enabled
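To make the training_params shape concrete, here is an illustrative object built from the fields above. The values are arbitrary examples chosen for this sketch, not recommended defaults:

```python
# Illustrative training_params object; values are examples, not defaults.
training_params = {
    "question_column": "prompt",    # column with user prompts
    "answer_column": "response",    # column with assistant responses
    "batch_size": 4,                # per-device batch size
    "grad_accum": 8,                # gradient accumulation steps
    "epochs": 3,
    "learning_rate": 2e-4,
    "logging_steps": 10,
    "seq_length": 2048,
    "save_strategy": "epoch",
    "save_steps_ratio": 0.25,
    "use_lora": True,
    "lora_rank": 16,
    "lora_alpha": 32,               # a common convention is alpha = 2 * rank
    "neftune_noise_alpha": 5,
    "enable_thinking": False,
}

# The effective batch size is batch_size * grad_accum.
effective_batch = training_params["batch_size"] * training_params["grad_accum"]
```

With these example values, `effective_batch` is 32: the optimizer steps once per 8 micro-batches of 4 examples each.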
status: High-level status string for the API call (for example, 'success').
status_message: Human-readable status message (for example, 'resource_found').
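Since the stop action may not take effect immediately, a client may want to poll the fine-tune until it leaves an active state. A minimal sketch follows; `fetch_status` is a hypothetical helper standing in for a GET on the fine-tune resource, and the state names in ACTIVE_STATES are assumed values for illustration:

```python
import time

# States in which a fine-tune is still stoppable; assumed values for illustration.
ACTIVE_STATES = {"queued", "running"}

def wait_until_stopped(fetch_status, poll_seconds: float = 5.0, max_polls: int = 60) -> str:
    """Poll fetch_status() until the fine-tune leaves an active state.

    fetch_status is any callable returning the current status string,
    e.g. a wrapper around fetching the fine-tune and reading fine_tune.status.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status not in ACTIVE_STATES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("fine-tune did not stop within the polling window")
```

For example, if successive fetches return "running", "running", then "stopped", the function returns "stopped" on the third poll.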