The full model list and reference pricing have moved to the Supported Models page.
Anyone bills by token usage, with different prices for different models.

Billing categories

| Category | Description | Cost |
| --- | --- | --- |
| Input | Content you send to the model (the prompt) | Base price |
| Output | Content the model generates | 3–5× input |
| Cache write | First-time prompt caching | 1.25× input |
| Cache read | Cache hit | 0.1× input |
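The multipliers above can be combined into a simple cost estimate. This is an illustrative sketch only: `BASE_INPUT_PRICE` is a made-up rate (real prices vary by model; see the Supported Models page), and output is assumed at 4× input, within the 3–5× range.

```python
# Hypothetical example: estimate a request's cost from the billing categories.
# BASE_INPUT_PRICE is an assumed illustrative rate, NOT a real Anyone price.
BASE_INPUT_PRICE = 2.00  # USD per 1M input tokens (hypothetical)

MULTIPLIERS = {
    "input": 1.0,         # prompt tokens at base price
    "output": 4.0,        # output billed at roughly 3-5x input; 4x assumed here
    "cache_write": 1.25,  # first-time prompt caching
    "cache_read": 0.1,    # cache hits at 10% of the input price
}

def estimate_cost(tokens: dict) -> float:
    """Return the estimated USD cost for a request's token counts."""
    return sum(
        tokens.get(kind, 0) / 1_000_000 * BASE_INPUT_PRICE * mult
        for kind, mult in MULTIPLIERS.items()
    )

# 10k prompt tokens + 2k output tokens + 50k cached tokens read back
cost = estimate_cost({"input": 10_000, "output": 2_000, "cache_read": 50_000})
```

Note how the cache read dominates by volume (50k tokens) but not by cost: at 10% of the input price it adds less than the 2k output tokens do.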

How to save money

  1. Choose the right model — use cheaper models (DeepSeek-V3.2, GLM-5.1) for simple tasks
  2. Keep prompts concise — shorter input = fewer tokens = lower cost
  3. Control output length — set max_tokens to limit output
  4. Leverage caching — fixed system prompts are cached automatically at 10% of input price
  5. Reduce context — don’t include unnecessary conversation history
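Tip 5 can be sketched as a small helper that keeps only the newest messages fitting a rough token budget. This is a minimal illustration with hypothetical names: the word-based token estimate is crude, and a real implementation would use the model's actual tokenizer.

```python
# Minimal sketch of "reduce context": keep the newest messages whose
# estimated token total fits a budget. The ~1.3 tokens-per-word ratio
# is a rough heuristic, not an exact count.
def trim_history(messages: list[dict], max_tokens: int = 1000) -> list[dict]:
    """Keep the most recent messages whose estimated tokens fit the budget."""
    kept, total = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        est = int(len(msg["content"].split()) * 1.3)
        if total + est > max_tokens:
            break  # older messages no longer fit; drop them
        kept.append(msg)
        total += est
    return list(reversed(kept))  # restore chronological order

history = [
    {"role": "user", "content": "word " * 400},      # old, large message
    {"role": "assistant", "content": "reply " * 100},
    {"role": "user", "content": "latest question"},
]
trimmed = trim_history(history, max_tokens=200)  # drops the oldest message
```

Trimming from the newest end preserves the recent turns the model actually needs, while the oldest (and often least relevant) messages are dropped first.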

View real-time pricing

Log in to the Anyone Dashboard → Model Marketplace for live pricing. 👉 View the full model list and reference pricing