Model ID: meta-llama/llama-3.3-70b-instruct
Input: $0.10 per 1M tokens
Output: $0.32 per 1M tokens
import requests

# Fetch pricing data from llmprices.ai
response = requests.get(
    "https://llmprices.ai/api/pricing?model=meta-llama/llama-3.3-70b-instruct"
)
response.raise_for_status()
data = response.json()
print(f"Model: {data['name']}")
print(f"Input: ${float(data['pricing']['prompt']) * 1_000_000:.2f}/1M tokens")
print(f"Output: ${float(data['pricing']['completion']) * 1_000_000:.2f}/1M tokens")

Endpoint:
GET https://llmprices.ai/api/pricing?model=meta-llama/llama-3.3-70b-instruct

Example Response:
{
"id": "meta-llama/llama-3.3-70b-instruct",
"name": "Meta: Llama 3.3 70B Instruct",
"pricing": {
"prompt": "0.0000001",
"completion": "0.00000032"
}
}
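The `pricing` values in the response are per-token prices encoded as strings. A minimal sketch of estimating the cost of one request from them (the token counts below are hypothetical, not from the source):

```python
# Per-token prices, copied from the example response above
pricing = {"prompt": "0.0000001", "completion": "0.00000032"}

# Hypothetical request: 1,500 prompt tokens, 500 completion tokens
prompt_tokens = 1500
completion_tokens = 500

# Total cost = tokens * per-token price, summed over both sides
cost = (
    prompt_tokens * float(pricing["prompt"])
    + completion_tokens * float(pricing["completion"])
)
print(f"Estimated cost: ${cost:.6f}")  # → Estimated cost: $0.000310
```

Multiplying the per-token strings by 1,000,000, as the fetch script does, recovers the familiar per-1M-token figures ($0.10 input, $0.32 output).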