Gemini 2.5 Flash Lite vs Claude Haiku 3.5: Pricing Comparison

Compare pricing, capabilities, and costs for your LLM workloads.

Google

Gemini 2.5 Flash Lite

Pricing (per 1M tokens)

Input: $0.10
Output: $0.40
Cached Input: $0.01
Batch Input: $0.05
Batch Output: $0.20

Context & Output

Context Window: 1M tokens
Max Output: 65.5K tokens

Capabilities

Category: budget
Multimodal: text + image + audio
Fine-tuning: No
Streaming: Yes

Anthropic

Claude Haiku 3.5

Pricing (per 1M tokens)

Input: $0.80
Output: $4.00
Cached Input: $0.08
Batch Input: $0.40
Batch Output: $2.00

Context & Output

Context Window: 200K tokens
Max Output: 8.2K tokens

Capabilities

Category: budget
Multimodal: text + image
Fine-tuning: No
Streaming: Yes

Quick Verdict

Cheaper Input Price

Gemini 2.5 Flash Lite

87.5% cheaper

Cheaper Output Price

Gemini 2.5 Flash Lite

90.0% cheaper

Larger Context Window

Gemini 2.5 Flash Lite

+800K tokens

Cost Comparison

Sample workload: 1,000,000 input tokens + 1,000,000 output tokens

Gemini 2.5 Flash Lite

$0.50

$0.10/1M input + $0.40/1M output

Claude Haiku 3.5

$4.80

$0.80/1M input + $4.00/1M output

Gemini 2.5 Flash Lite is 89.6% cheaper for this workload.
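The workload math above can be reproduced with a short sketch. The rates and token counts are the figures quoted in this comparison; the function name is illustrative:

```python
def workload_cost(input_rate, output_rate, input_tokens, output_tokens):
    """Dollar cost of a workload; rates are $ per 1M tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Sample workload: 1,000,000 input + 1,000,000 output tokens.
gemini = workload_cost(0.10, 0.40, 1_000_000, 1_000_000)  # ~$0.50
claude = workload_cost(0.80, 4.00, 1_000_000, 1_000_000)  # ~$4.80
savings = 100 * (claude - gemini) / claude                # ~89.6% cheaper
```

Because the rates are quoted per 1M tokens, dividing by 1,000,000 converts token counts into billable units; swap in your own input/output mix to see how the ratio shifts the comparison.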

Frequently Asked Questions

Which is cheaper, Gemini 2.5 Flash Lite or Claude Haiku 3.5?
For input tokens, Gemini 2.5 Flash Lite is cheaper at $0.10 per 1M tokens. For output tokens, Gemini 2.5 Flash Lite is also cheaper at $0.40 per 1M tokens. The overall cost depends on your workload's input/output ratio.
What is the context window size of Gemini 2.5 Flash Lite vs Claude Haiku 3.5?
Gemini 2.5 Flash Lite has a context window of 1M tokens, while Claude Haiku 3.5 has 200K tokens. The larger window is beneficial for processing longer documents in a single request.
How do Gemini 2.5 Flash Lite and Claude Haiku 3.5 compare for batch processing?
Both models support batch processing at discounted rates. Gemini 2.5 Flash Lite offers the lower batch rate at $0.05 per 1M input tokens. Batch processing is ideal for non-time-sensitive workloads where you can wait for asynchronous results.
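The batch discount can be quantified with a small sketch. The rate dictionary and model keys below are illustrative, using the standard and batch prices listed in the tables above:

```python
# Per-1M-token rates from the comparison tables above.
RATES = {
    "gemini-2.5-flash-lite": {"input": 0.10, "output": 0.40,
                              "batch_input": 0.05, "batch_output": 0.20},
    "claude-haiku-3.5":      {"input": 0.80, "output": 4.00,
                              "batch_input": 0.40, "batch_output": 2.00},
}

def batch_vs_standard(model, input_tokens, output_tokens):
    """Return (standard_cost, batch_cost) in dollars for the given workload."""
    r = RATES[model]
    standard = (input_tokens * r["input"] + output_tokens * r["output"]) / 1e6
    batch = (input_tokens * r["batch_input"]
             + output_tokens * r["batch_output"]) / 1e6
    return standard, batch
```

At these rates the batch prices are exactly half the standard prices for both models, so batching halves the cost of any input/output mix, and the 89.6% relative saving between the two models is unchanged.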
