LLM Prices
Stop hardcoding prices. Use our API instead.
Real-time LLM API Pricing
Compare actual prices across OpenAI, Claude, Google, and other leading AI models. Get live pricing data to optimize your AI costs.
Real-time Data
Updated hourly
Easy Integration
Simple REST API
All Providers
OpenAI, Claude & more
Free to Use
No API key needed
api-example.py
import requests
# Fetch pricing for GPT-4o
response = requests.get('https://llmprices.ai/api/pricing?model=openai/gpt-4o')
data = response.json()
print(data['pricing'])

OpenAI & Claude Models
Live pricing for all OpenAI and Anthropic models
API Documentation
Integrate real-time pricing data into your applications
GET https://llmprices.ai/api/pricing?model=:model

Fast Response

Get pricing for any model by passing the model ID as a query parameter. Simple and convenient - just one GET request. Our API is optimized for speed with sub-100ms response times and hourly caching.
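For instance, the same endpoint can be queried in a loop to compare several models side by side. A minimal sketch, assuming the model IDs below exist in the catalog (check the live model list for exact identifiers):

import requests

# Compare per-token prices across a few models.
# These model IDs are illustrative; substitute IDs from the live model list.
models = ['openai/gpt-4o', 'openai/gpt-4o-mini', 'anthropic/claude-3.5-sonnet']

for model_id in models:
    response = requests.get(f'https://llmprices.ai/api/pricing?model={model_id}', timeout=10)
    response.raise_for_status()
    pricing = response.json()['pricing']
    # Prices are per token; multiply by 1,000,000 for the price per 1M tokens.
    prompt_per_m = float(pricing['prompt']) * 1_000_000
    completion_per_m = float(pricing['completion']) * 1_000_000
    print(f'{model_id}: input {prompt_per_m:.2f} / 1M tokens, output {completion_per_m:.2f} / 1M tokens')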
Example Request
GET https://llmprices.ai/api/pricing?model=openai/gpt-4o

Response
{
"id": "openai/gpt-4o",
"name": "OpenAI: GPT-4o",
"pricing": {
"prompt": "0.0000025",
"completion": "0.00001",
"input_cache_read": "0.00000125"
}
}

Response Fields
- id - Model identifier
- name - Human-readable model name
- pricing.prompt - Input token price (per token)
- pricing.completion - Output token price (per token)
- pricing.input_cache_read - Cached input token price (when available)

Note: All prices are per token. Multiply by 1,000,000 to get the price per 1M tokens.
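Putting the fields together, here is a small sketch that estimates the cost of a single request from the per-token prices (the token counts, and the assumption that prices are denominated in USD, are illustrative, not part of the response):

import requests

def estimate_cost(model_id, prompt_tokens, completion_tokens):
    """Estimate the cost of one request from per-token prices."""
    response = requests.get(f'https://llmprices.ai/api/pricing?model={model_id}', timeout=10)
    response.raise_for_status()
    pricing = response.json()['pricing']
    # Prices are returned as strings, priced per single token.
    return (prompt_tokens * float(pricing['prompt'])
            + completion_tokens * float(pricing['completion']))

# Hypothetical usage: 1,200 input tokens and 350 output tokens on GPT-4o.
print(f"Estimated cost: ${estimate_cost('openai/gpt-4o', 1200, 350):.6f}")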
Performance
- Response time: <100ms average
- Cache duration: 1 hour (3600 seconds)
- Rate limit: No limits for public use
- Uptime: 99.9% availability
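Because responses are cached for an hour server-side, a client can safely reuse results for the same window and avoid redundant calls. A minimal in-memory caching sketch, using a helper of our own (not part of the API):

import time
import requests

_cache = {}          # model_id -> (fetched_at, response_json)
_ttl_seconds = 3600  # mirrors the documented 1-hour cache duration

def get_pricing(model_id):
    """Return pricing for a model, reusing a local copy for up to one hour."""
    cached = _cache.get(model_id)
    if cached and time.time() - cached[0] < _ttl_seconds:
        return cached[1]
    response = requests.get(f'https://llmprices.ai/api/pricing?model={model_id}', timeout=10)
    response.raise_for_status()
    data = response.json()
    _cache[model_id] = (time.time(), data)
    return data

# Repeated calls within the hour are served from the local cache.
print(get_pricing('openai/gpt-4o')['pricing'])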