LLM Fine-Tuning Cost Calculator
Estimate the training cost of fine-tuning GPT, Claude, and other LLMs.
Fine-tuning a foundation model lets you specialise it on your own data, with training billed per million tokens. Use this calculator to estimate the upfront training cost of fine-tuning any supported LLM. Enter the model, the size of your training dataset in tokens, and the number of epochs to see your projected USD cost in seconds.
Estimated training cost
$75.00
1,000,000 tokens × 3 epochs × $25.00/M training rate
How fine-tuning costs are calculated
Fine-tuning cost is a function of three things: total training tokens, the number of epochs (passes over your dataset), and the model's training rate per million tokens. The formula is: training tokens × epochs × (training $/M ÷ 1,000,000). For example, fine-tuning GPT-4o mini at $3.00 per million training tokens with a 1M-token dataset over 3 epochs costs $9.00. Inference (using the fine-tuned model after training) is billed separately at the fine-tuned rate.
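To make the arithmetic concrete, here is a minimal sketch of that formula in Python. The function name is illustrative; the $3.00/M (GPT-4o mini) and $25.00/M rates are the ones used in the examples on this page.

```python
def fine_tuning_cost(training_tokens: int, epochs: int, rate_per_million: float) -> float:
    """Estimated one-off training cost in USD."""
    # training tokens × epochs × (training $/M ÷ 1,000,000)
    return training_tokens * epochs * (rate_per_million / 1_000_000)

# Worked examples from this page:
print(fine_tuning_cost(1_000_000, 3, 3.00))   # GPT-4o mini at $3.00/M -> 9.0
print(fine_tuning_cost(1_000_000, 3, 25.00))  # the $25.00/M example above -> 75.0
```

Because the formula is linear in every input, doubling the dataset, the epochs, or the rate each doubles the estimate.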
When fine-tuning is worth it
Fine-tuning pays off when you have stable, repeatable tasks (classification, structured extraction, branded tone) and enough volume to amortise the training cost; a rough break-even sketch follows below. For exploratory work, prompt engineering or few-shot prompting on a base model is usually cheaper. Once you commit, GPT-4o mini and GPT-3.5 Turbo are the most economical options in training-token terms; GPT-4o costs more but produces stronger results on harder tasks.
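One hedged way to sanity-check the amortisation argument: divide the one-off training cost by the amount each request saves versus your prompted baseline (shorter prompts, a cheaper model, fewer retries). The per-request saving below is a pure assumption; substitute your own numbers.

```python
def break_even_requests(training_cost_usd: float, saving_per_request_usd: float) -> float:
    """Requests needed before the one-off training cost is recouped.

    saving_per_request_usd: prompted-baseline cost per request minus the
    fine-tuned model's cost per request -- an assumption you estimate
    from your own workload, not a figure from this page.
    """
    return training_cost_usd / saving_per_request_usd

# Hypothetical: a $9.00 fine-tune that saves $0.0005 per request
print(break_even_requests(9.00, 0.0005))  # 18000.0 requests to break even
```

If your expected request volume comfortably exceeds that figure, the training cost amortises; if not, stay with prompting.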
Frequently Asked Questions
How much does it cost to fine-tune GPT-4o?
At GPT-4o's $25.00 per million training tokens, a 1M-token dataset trained for 3 epochs costs $75.00, as in the example above. Cost scales linearly with dataset size and epochs.
How much does it cost to fine-tune GPT-4o mini?
GPT-4o mini trains at $3.00 per million tokens, so a 1M-token dataset over 3 epochs costs $9.00 (see the worked example above).
Do all LLMs support fine-tuning?
No. Each provider exposes fine-tuning for only a subset of its models, and availability changes over time, so check the provider's official documentation for the current list.
How many epochs should I use?
Cost scales linearly with epochs, so start small: 3 is a common default. More passes can help on small datasets, but they raise both the bill and the risk of overfitting.
Training rates are taken from official provider pricing pages. Inference (post-training) costs are billed separately and are not included in this estimate.