• Anthropic (Claude): Anthropic builds the Claude model family — the primary and most deeply integrated provider for OpenClaw. Claude models power the core agent loop, tool calling, thinking/reasoning, and vision capabilities that make OpenClaw tick.
  • OpenAI (GPT / Codex): OpenAI provides the GPT model family through their API, offering a strong alternative to Anthropic as an OpenClaw provider. The GPT-5.4 lineup covers everything from the budget Nano ($0.20/MTok input) to the flagship GPT-5.4 ($2.50/MTok input), all sharing an industry-leading 270K context window.
  • OpenAI Codex (Subscription): OpenAI Codex subscription access lets you use OpenAI's GPT models through your existing ChatGPT subscription instead of paying per-token API fees. This is the most cost-effective way to run OpenClaw with OpenAI models if you're already on a ChatGPT plan.
  • OpenRouter: A unified API that routes requests to many model providers behind a single endpoint and API key. OpenAI-compatible. Model refs take the form openrouter/<provider>/<model>.
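The openrouter/<provider>/<model> convention above can be sketched as a tiny helper (the function name and model slug here are illustrative, not part of any official SDK):

```python
def openrouter_ref(provider: str, model: str) -> str:
    """Build a model reference in the openrouter/<provider>/<model> form."""
    return f"openrouter/{provider}/{model}"

# Example: routing an Anthropic model through OpenRouter
# (the model slug is an example, not an exhaustive list).
print(openrouter_ref("anthropic", "claude-sonnet-4"))
# -> openrouter/anthropic/claude-sonnet-4
```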
  • Ollama (Local Models): Local LLM runtime for open-source models. Supports Ollama's native API with streaming and tool calling, and auto-discovers models from the local instance. Free to run; all model usage costs $0.
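Ollama's native chat endpoint (POST /api/chat on the default port 11434) can be called with nothing but the standard library. A minimal non-streaming sketch; the model name is an example, and any locally pulled model works:

```python
import json
from urllib import request

# Example request body for Ollama's native /api/chat endpoint.
payload = {
    "model": "llama3.2",  # example slug; use any model pulled locally
    "messages": [{"role": "user", "content": "Say hello in one word."}],
    "stream": False,  # set True for token-by-token streaming
}

def ollama_chat(body: dict, base_url: str = "http://localhost:11434") -> dict:
    """POST a chat request to a local Ollama instance and return the JSON reply."""
    req = request.Request(
        f"{base_url}/api/chat",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# With an Ollama instance running locally:
# reply = ollama_chat(payload)
# print(reply["message"]["content"])
```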
  • Amazon Bedrock: AWS managed model service using Bedrock Converse API. Auth via AWS SDK credential chain (IAM roles, env vars, profiles). Supports auto-discovery of Bedrock models.
  • Mistral AI: Mistral provides text/image models and Voxtral audio transcription. Also supports memory embeddings via mistral-embed. Open-weight models under Apache 2.0.
  • Together AI: Access leading open-source models including Llama, DeepSeek, Kimi, and GLM through a unified OpenAI-compatible API. Cost-effective inference for open models.
  • LiteLLM (Unified Gateway): Open-source LLM gateway providing unified API to 100+ model providers. Offers centralized cost tracking, logging, virtual keys with spend limits, and automatic failover.
  • Cloudflare AI Gateway: Cloudflare AI Gateway adds analytics, caching, and rate limiting in front of provider APIs. Uses the Anthropic Messages API through your Cloudflare Gateway endpoint.
  • Vercel AI Gateway: Vercel's unified API for accessing hundreds of models through a single endpoint. Auto-discovers models via the /v1/models catalog. Anthropic Messages API compatible.
  • Moonshot AI (Kimi): Moonshot provides the Kimi API with OpenAI-compatible endpoints. Supports Kimi K2.5 and thinking models. Separate provider from Kimi Coding (different keys/endpoints).
  • Kimi Coding: A separate provider from Moonshot AI with its own API keys and endpoints, optimized for coding tasks with Kimi K2.5.
  • MiniMax (M2.5 / M2.7): MiniMax builds the M2/M2.5/M2.7 model family — a full-stack AI company covering text, speech, video, image, and music. M2.5 excels at multi-language coding and agent compatibility. Supports OAuth via Coding Plan or API key. Anthropic Messages API compatible.
  • Hugging Face Inference Providers: Hugging Face Inference Providers route requests to 200+ models from leading providers through a single OpenAI-compatible API. Zero markup on pricing. Free monthly credits ($0.10 on the free tier, $2 on PRO). Supports routing policies (:fastest, :cheapest, :provider) and BYOK.
  • Kilo Gateway (Kilocode): Unified API with a smart routing model (kilo/auto) that selects the best model per task. Auto-discovers available models. OpenAI-compatible with Bearer token auth.
  • Venice AI (Privacy-Focused): Privacy-focused inference platform with two modes: Private (fully ephemeral, open-source models, E2EE available) and Anonymized (proxied access to Claude, GPT, Gemini, Grok without your identity). 40+ models including frontier models at competitive rates. Credit-based API pricing.
  • NVIDIA (NIM): NVIDIA NIM provides GPU-accelerated inference via OpenAI-compatible API at build.nvidia.com. Hosts Nemotron models and open-source LLMs with free API credits for experimentation. NGC API key auth.
  • vLLM (Local Server): High-throughput, memory-efficient LLM inference engine with OpenAI-compatible API. Self-host any Hugging Face model on your own GPUs. Free and open-source with auto-discovery.
  • Z.AI (GLM Models): Z.AI (Zhipu AI) is the official API platform for the GLM model family. Offers GLM-5, GLM-4.7, and GLM-4.6 with tool_stream for streaming tool calls, vision models, image/video generation, and free Flash tiers. Bearer auth with API key.
  • Qwen Portal (OAuth Free Tier): Alibaba's Qwen Portal provides free OAuth access to Qwen Coder and Vision models — 2,000 requests/day with no token limits. Device-code OAuth flow syncs with Qwen Code CLI login. The most generous free coding model provider.
  • Qianfan (Baidu MaaS): Baidu's Model-as-a-Service platform providing access to ERNIE models and third-party LLMs via OpenAI-compatible API. Enterprise-focused with Baidu Cloud billing. Features ERNIE 5.0 and ERNIE 4.5 series models.
  • Xiaomi MiMo: Xiaomi's MiMo model platform with Anthropic Messages API compatibility. Features MiMo-V2-Flash (262K context) and the newly announced MiMo-V2-Pro (flagship reasoning) and MiMo-V2-Omni (multimodal). Backed by Xiaomi's $8.7B AI investment.
  • Synthetic (Privacy-First Free API): Synthetic runs open-source AI models in private, secure datacenters with both Anthropic-compatible and OpenAI-compatible APIs. 19+ models including MiniMax M2.5, DeepSeek V3.2, Kimi K2, Qwen3 VL, and GLM 4.7 — all at zero cost with a flat prompt-based quota system.
  • OpenCode (Zen + Go): OpenCode provides two curated model catalogs: Zen (pay-as-you-go access to frontier models like Claude, GPT, and Gemini via $20 top-ups) and Go (a low-cost $10/mo subscription with open-source models like GLM-5, Kimi K2.5, and MiniMax M2.5). Single API key for coding agents.
  • GitHub Copilot: Use your GitHub Copilot subscription as a model provider via device-flow OAuth. Access GPT-4o, GPT-4.1, Claude, and Gemini models through your existing plan. Free tier includes 50 premium requests/month; Pro ($10/mo) offers 300; Pro+ ($39/mo) offers 1,500.
  • Deepgram (Audio Transcription): Deepgram is a speech-to-text API powering inbound audio and voice note transcription in OpenClaw. Uses the Nova-3 model for highly accurate pre-recorded transcription at $0.0077/min. Not an LLM provider — transcription only. $200 free credit on signup.
  • Claude Max API Proxy (Community): Community tool that exposes your Claude Max subscription ($200/month) as an OpenAI-compatible API endpoint. Wraps the Claude Code CLI to provide unlimited Claude Opus 4 and Sonnet 4 access at a flat monthly cost instead of per-token API pricing.