Different AI providers have different API formats. Anyone converts between them automatically, so you don’t need to worry about which provider is used on the backend.
## How it works
You only need to send requests in one format — Anyone automatically converts to whatever format the target model requires.
For example, calling a Claude model with the OpenAI SDK:
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.anyone.ai/v1",
    api_key="YOUR_TOKEN",
)

# Call Claude using OpenAI format — Anyone converts automatically
response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Hello"}],
)
```
You send an OpenAI-format request, and Anyone converts it to Claude Messages format behind the scenes, sends it to Anthropic, then converts Claude’s response back to OpenAI format before returning it to you. The whole process is completely transparent.
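To make the conversion concrete, here is a rough sketch (not Anyone's actual implementation) of the kind of mapping the gateway performs on the request body. The main differences it has to bridge: Claude Messages takes the system prompt as a top-level `system` field rather than a message, and requires `max_tokens` where OpenAI treats it as optional. The default value below is an assumption for illustration.

```python
def openai_to_claude(body: dict) -> dict:
    """Map an OpenAI Chat Completions request dict onto Claude Messages format."""
    # OpenAI keeps the system prompt inside the messages list;
    # Claude expects it as a separate top-level "system" field.
    system = [m["content"] for m in body["messages"] if m["role"] == "system"]
    messages = [m for m in body["messages"] if m["role"] != "system"]

    claude = {
        "model": body["model"],
        "messages": messages,
        # Claude Messages requires max_tokens; 1024 here is an arbitrary default.
        "max_tokens": body.get("max_tokens", 1024),
    }
    if system:
        claude["system"] = "\n".join(system)
    return claude

request = {
    "model": "claude-sonnet-4-6",
    "messages": [
        {"role": "system", "content": "Be concise."},
        {"role": "user", "content": "Hello"},
    ],
}
print(openai_to_claude(request))
```

The same translation happens in reverse on the response, so your code only ever sees OpenAI-shaped objects.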
## Supported conversions
| Your request format | Models you can call |
|---|---|
| OpenAI Chat Completions | All models (GPT, Claude, Gemini, DeepSeek, etc.) |
| Claude Messages | Claude series models |
| Gemini generateContent | Gemini series models |
We recommend the OpenAI format: it is the only one that works with every model, so you can switch providers without changing your code. Use a provider's native format only when you depend on one of that provider's unique features.
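That portability is the whole point: with the OpenAI format, swapping providers is just a different model string in an otherwise identical request body. The model names below are illustrative; check Anyone's model list for the identifiers available to your token.

```python
def chat_request(model: str, prompt: str) -> dict:
    """Build one OpenAI-format request body, valid for any model behind Anyone."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# GPT, Claude, and Gemini models all accept the exact same shape;
# only the model string changes.
for model in ("gpt-4o", "claude-sonnet-4-6", "gemini-2.0-flash"):
    print(chat_request(model, "Hello"))
```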
If you’re already using the Anthropic SDK or Google Gemini SDK, you can point them directly at Anyone:
```python
# Anthropic SDK
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.anyone.ai",
    api_key="YOUR_TOKEN",
)
```

```python
# Google Gemini SDK — the endpoint override goes through client_options
# and requires the REST transport
import google.generativeai as genai

genai.configure(
    api_key="YOUR_TOKEN",
    transport="rest",
    client_options={"api_endpoint": "https://api.anyone.ai"},
)
```
Both approaches work — use whichever SDK you prefer.