LLM Fine-Tuning Cost Calculator

Estimate the training cost of fine-tuning GPT, Claude, and other LLMs.

Fine-tuning a foundation model lets you specialise it on your own data — at the cost of training tokens billed per million. Use this calculator to estimate the upfront training cost of fine-tuning any supported LLM. Enter the model, the size of your training dataset in tokens, and the number of epochs to see your projected USD cost in seconds.

Estimated training cost

$75.00

1,000,000 tokens × 3 epochs × $25.00/M training rate (GPT-4o)

How fine-tuning costs are calculated

Fine-tuning cost is a function of three things: total training tokens, the number of epochs (passes over your dataset), and the model's training rate per million tokens. The formula is: training tokens × epochs × (training $/M ÷ 1,000,000). For example, fine-tuning GPT-4o mini at $3.00 per million training tokens with a 1M-token dataset over 3 epochs costs $9.00. Inference (using the fine-tuned model after training) is billed separately at the fine-tuned rate.
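The formula above can be sketched as a small function. This is a minimal illustration using the two worked examples from this page; the function name is ours, and the rates are the ones quoted here, not a live price feed:

```python
def fine_tune_cost(training_tokens: int, epochs: int, rate_per_million: float) -> float:
    """Estimated fine-tuning cost in USD: tokens × epochs × (training $/M ÷ 1,000,000)."""
    return training_tokens * epochs * rate_per_million / 1_000_000

# Worked examples from this page (rates as listed at time of writing):
print(fine_tune_cost(1_000_000, 3, 25.00))  # GPT-4o:       75.0
print(fine_tune_cost(1_000_000, 3, 3.00))   # GPT-4o mini:   9.0
```

Note that the multiplications happen before the division, which keeps the intermediate arithmetic exact for round-number inputs like these.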

When fine-tuning is worth it

Fine-tuning pays off when you have stable, repeatable tasks (classification, structured extraction, branded tone) and enough volume to amortise the training cost. For exploratory work, prompt engineering or few-shot prompting on a base model is usually cheaper. Once you commit, GPT-4o mini and GPT-3.5 Turbo are the most economical options for fine-tuning training tokens; GPT-4o costs more but produces stronger results on harder tasks.
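The amortisation argument can be made concrete: a one-off training cost shrinks per request as volume grows. The sketch below uses the $9 GPT-4o mini example from this page; the request counts are illustrative assumptions, not figures from the calculator:

```python
def amortised_training_cost(training_cost_usd: float, total_requests: int) -> float:
    """One-off training cost spread across the requests the fine-tuned model serves."""
    return training_cost_usd / total_requests

# A $9 GPT-4o mini fine-tune amortised over increasing volume
# (request counts are hypothetical):
for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9,} requests -> ${amortised_training_cost(9.00, n):.6f} per request")
```

Per-request inference cost (billed at the fine-tuned rate) stays constant regardless of volume, so it dominates total spend once the training cost has been diluted this way.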

Frequently Asked Questions

How much does it cost to fine-tune GPT-4o?
OpenAI charges $25.00 per million training tokens for GPT-4o fine-tuning. A 1M-token dataset run for 3 epochs costs $75. Inference on the fine-tuned model is billed at $3.75 per million input tokens and $15.00 per million output tokens — separate from training.
How much does it cost to fine-tune GPT-4o mini?
GPT-4o mini fine-tuning training is $3.00 per million tokens. A 1M-token dataset run for 3 epochs costs $9. Inference on the fine-tuned model is billed at $0.225/M input and $0.90/M output.
Do all LLMs support fine-tuning?
No. Among major providers, OpenAI offers fine-tuning for GPT-4o, GPT-4o mini, GPT-4.1, and GPT-3.5 Turbo. Anthropic's Claude models and Google's Gemini models do not currently expose public fine-tuning APIs. Mistral and others offer fine-tuning on selected models. This calculator only includes models with publicly listed fine-tuning training rates.
How many epochs should I use?
OpenAI's default is 3 epochs, which works well for most datasets. Increase to 4–5 for very small datasets (under 1,000 examples) and drop to 1–2 for very large ones to reduce the risk of overfitting. Each additional epoch adds the cost of one full pass over your dataset, so training cost scales linearly with epoch count.
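The epoch rule of thumb above can be written as a simple heuristic. Only the small-dataset threshold (1,000 examples) comes from this FAQ; the large-dataset cutoff of 100,000 examples is our assumption, not provider guidance:

```python
def suggested_epochs(num_examples: int) -> int:
    """Heuristic epoch count: more passes for tiny datasets, fewer for huge ones."""
    if num_examples < 1_000:    # very small dataset: extra passes help (per the FAQ)
        return 5
    if num_examples > 100_000:  # "very large" cutoff is an illustrative assumption
        return 2
    return 3                    # OpenAI's default

print(suggested_epochs(500))      # 5
print(suggested_epochs(10_000))   # 3
print(suggested_epochs(500_000))  # 2
```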

Training rates from official provider pricing pages. Inference (post-training) costs are billed separately and not included in this estimate.