
Model List

Anyone currently supports the following AI models, all accessible through a unified OpenAI-compatible API.
Prices are for reference only. Check the Model Marketplace in your dashboard for real-time pricing. Unit: USD / million tokens.

OpenAI

| Model | Input | Output | Context | Features |
|---|---|---|---|---|
| gpt-5.4 | $0.10 | $0.60 | 1M | Flagship model, reasoning/code/creative |
| gpt-5.3-codex | $0.08 | $0.64 | 1M | Code specialist, programming/debug/refactor |

Anthropic Claude

| Model | Input | Output | Context | Features |
|---|---|---|---|---|
| claude-opus-4-6 | $1.00 | $5.00 | 1M | Most powerful reasoning, complex analysis |
| claude-opus-4-6-fast | $20.00 | $100.00 | 1M | Opus fast variant, lower latency |
| claude-sonnet-4-6 | $0.45 | $2.25 | 1M | Balanced performance and price, daily driver |

Google Gemini

| Model | Input | Output | Context | Features |
|---|---|---|---|---|
| gemini-3.1-pro-preview | $1.20 | $9.00 | 1M | Multimodal + long context, great value |

DeepSeek

| Model | Input | Output | Context | Features |
|---|---|---|---|---|
| DeepSeek-V3.2 | $0.10 | $0.15 | 128K | Best value, strong Chinese capability |

Zhipu GLM

| Model | Input | Output | Context | Features |
|---|---|---|---|---|
| GLM-5.1 | $1.00 | $3.00 | 128K | Chinese LLM, excellent comprehension |

xAI Grok

| Model | Input | Output | Context | Features |
|---|---|---|---|---|
| grok-4.20 | $1.20 | $4.00 | 256K | Real-time info + reasoning, X/Twitter integrated |

MiniMax

| Model | Input | Output | Context | Features |
|---|---|---|---|---|
| MiniMax-M2.5 | $0.14 | $0.75 | 256K | Long context + multimodal |

Moonshot Kimi

| Model | Input | Output | Context | Features |
|---|---|---|---|---|
| Kimi-K2.5 | $0.15 | $2.00 | 128K | Long-document processing, top-tier Chinese model |

Billing

| Category | Description | Cost |
|---|---|---|
| Input | Content you send to the model (prompt) | Base price |
| Output | Content the model generates | 3-5× input |
| Cache write | First-time prompt caching | 1.25× input |
| Cache read | Cache hit | 0.1× input |
  • Prices may change with upstream providers; check the Model Marketplace for live rates
  • Failed requests are not charged
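As a rough sketch of how these multipliers combine, using gpt-5.4's reference input price of $0.10 per million tokens (the `token_cost` helper below is illustrative, not part of any SDK):

```python
# Illustrative cost math for the billing table above.
# Multipliers: cache write = 1.25x input price, cache read = 0.1x input price.
INPUT_PRICE = 0.10  # USD per 1M input tokens (gpt-5.4, reference price)

def token_cost(tokens, multiplier=1.0):
    """Cost in USD of `tokens` input tokens at the given billing multiplier."""
    return tokens * INPUT_PRICE * multiplier / 1_000_000

fresh = token_cost(50_000)         # normal input, billed at base price
write = token_cost(50_000, 1.25)   # first request caches the prompt: 1.25x input
read  = token_cost(50_000, 0.10)   # later cache hits: 0.1x input
```

So a 50K-token prompt costs $0.005 uncached, $0.00625 on the caching request, and only $0.0005 on every cache hit after that.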

Save Money

  1. Choose the right model — use cheaper models (DeepSeek-V3.2, GLM-5.1) for simple tasks
  2. Keep prompts concise — shorter input = fewer tokens = lower cost
  3. Control output length — set max_tokens to limit output
  4. Leverage caching — fixed system prompts are cached automatically at 10% of input price
  5. Reduce context — don’t include unnecessary conversation history
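As a sketch, tips 1, 3, and 5 can be combined when building a request. The `trim_history` helper below is illustrative, not part of the API; it keeps the fixed system prompt (which stays cacheable) and drops old turns:

```python
# Sketch applying tips 1, 3, and 5: cheap model, capped output, trimmed history.

def trim_history(messages, max_messages=6):
    """Keep the system prompt (cache-friendly) plus only the most recent turns."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_messages:]

history = [{"role": "system", "content": "You are concise."}]
history += [{"role": "user", "content": f"msg {i}"} for i in range(10)]

request = {
    "model": "DeepSeek-V3.2",          # tip 1: cheap model for a simple task
    "messages": trim_history(history),  # tip 5: drop stale conversation history
    "max_tokens": 200,                  # tip 3: hard cap on billed output tokens
}
```

The `request` dict can be passed straight to `client.chat.completions.create(**request)`.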

Check Live Pricing

Log in to the Anyone Dashboard and open the Model Marketplace to see real-time input/output pricing on each model card.

Usage Example

All models use the same OpenAI-compatible API — just change the model parameter:
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.anyone.ai/v1",
    api_key="your-anyone-token",
)

# GPT-5.4
response = client.chat.completions.create(
    model="gpt-5.4",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Claude Opus 4.6
response = client.chat.completions.create(
    model="claude-opus-4-6",
    messages=[{"role": "user", "content": "Hello!"}]
)

# DeepSeek V3.2
response = client.chat.completions.create(
    model="DeepSeek-V3.2",
    messages=[{"role": "user", "content": "Hello!"}]  # Chinese greeting in original
)
```