Anthropic (Claude)
Anthropic builds the Claude model family — the primary and most deeply integrated provider for OpenClaw. Claude models power the core agent loop, tool calling, thinking/reasoning, and vision capabilities that make OpenClaw tick.
Tags: frontier, claude, api-key, oauth, prompt-caching
Use Cases
- Primary OpenClaw agent model — powering daily personal assistant workflows with tool use, vision, and extended thinking
- Code generation and review via Claude Code integration (ACP harness)
- Document analysis using vision capabilities — screenshots, PDFs, diagrams
- Complex multi-step reasoning tasks with adaptive thinking mode
- Cost-optimized batch processing using Batch API for data enrichment pipelines
- Prompt-cached conversational workflows where system context is large but stable
Tips
- Use Sonnet 4.5 or 4.6 as your default model; it offers the best price/performance ratio for most OpenClaw workloads.
- Enable prompt caching for system prompts and SOUL.md/AGENTS.md content that stays constant across turns. This can save 50-90% on input costs.
- For cost-conscious setups, use Haiku 3.5 for heartbeats, cron jobs, and simple classification tasks. Reserve Sonnet/Opus for complex work.
- Use the Batch API (50% discount) for non-time-sensitive enrichment tasks, data processing, or bulk analysis.
- Set thinking mode to 'adaptive' to let the model decide when extended reasoning is needed — avoids wasting reasoning tokens on simple requests.
- Consider Claude Max subscription ($100/mo) + setup-token OAuth if you're consistently spending more than $100/mo on API calls.
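The caching and subscription tips above can be sanity-checked with a quick back-of-envelope calculation. This is a sketch under stated assumptions: Sonnet at $3/$15 per MTok and cache reads at ~10% of the base input rate reflect Anthropic's published pricing at the time of writing, but verify current rates; the request volumes are illustrative.

```python
# Back-of-envelope monthly cost estimate for a Sonnet-based setup.
# All prices are assumptions -- check Anthropic's current pricing page.

SONNET_IN, SONNET_OUT = 3.00, 15.00  # $/MTok, assumed
CACHE_READ_DISCOUNT = 0.90           # cached input reads cost ~10% of base (assumed)

def monthly_cost(requests_per_day, input_tok, output_tok, cached_fraction=0.0):
    """Estimate monthly spend in dollars for a single-model setup."""
    days = 30
    in_tok = requests_per_day * days * input_tok
    out_tok = requests_per_day * days * output_tok
    cached = in_tok * cached_fraction
    uncached = in_tok - cached
    return (uncached * SONNET_IN
            + cached * SONNET_IN * (1 - CACHE_READ_DISCOUNT)
            + out_tok * SONNET_OUT) / 1_000_000

no_cache = monthly_cost(200, 8_000, 500)
with_cache = monthly_cost(200, 8_000, 500, cached_fraction=0.8)
print(f"no cache:   ${no_cache:.2f}/mo")
print(f"80% cached: ${with_cache:.2f}/mo")
```

If either number lands well above $100/mo, the Claude Max subscription with setup-token OAuth mentioned above may be the cheaper route.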
Known Issues & Gotchas
- Rate limits can be aggressive on new API accounts — Tier 1 starts at just 50 RPM. You need to build usage history to unlock higher tiers.
- Prompt caching has a minimum of 1,024 tokens for the cached block (2,048 for Haiku). Smaller prompts won't benefit.
- The 5-minute cache TTL is short — if your requests are spaced further apart, cache misses will be common. Consider the 1-hour cache at 2x write cost for less frequent usage.
- Output tokens are 3-5x more expensive than input tokens across all Claude models. Long-form generation can get costly fast.
- Extended thinking (reasoning) tokens are billed as output tokens — budget accordingly for complex reasoning tasks.
- Fast Mode for Opus 4.6 is 6x standard pricing ($30/MTok input, $150/MTok output). Easy to accidentally burn budget.
- Data residency (US-only inference) adds a 1.1x multiplier on all token categories for Opus 4.6+.
- Anthropic does not offer embeddings — you'll need a separate provider (Mistral, OpenAI, or local) for memory/RAG.
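To see how the pricing gotchas above stack, here is a minimal cost sketch. The base Opus 4.6 rates are an inference from the Fast Mode figures in this section ($30/$150 at 6x standard implies $5/$25 standard); treat them as assumptions and verify against Anthropic's pricing page.

```python
# Sketch: how Fast Mode, data residency, and thinking tokens combine
# into the billed cost of a single request. Base rates are inferred
# assumptions, not authoritative pricing.

BASE_IN, BASE_OUT = 5.00, 25.00  # $/MTok for Opus 4.6, inferred

def request_cost(input_tok, output_tok, thinking_tok=0,
                 fast_mode=False, us_residency=False):
    mult = 6.0 if fast_mode else 1.0        # Fast Mode is 6x standard
    mult *= 1.1 if us_residency else 1.0    # US-only inference adds 1.1x
    # Extended-thinking tokens are billed at the *output* rate.
    billable_out = output_tok + thinking_tok
    return (input_tok * BASE_IN + billable_out * BASE_OUT) * mult / 1_000_000

standard = request_cost(10_000, 1_000, thinking_tok=4_000)
fast = request_cost(10_000, 1_000, thinking_tok=4_000, fast_mode=True)
print(f"standard: ${standard:.4f}")
print(f"fast:     ${fast:.4f}")
```

Note how the 4,000 thinking tokens cost as much as 20,000 input tokens here, which is why budgeting reasoning effort matters.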
Alternatives
- OpenAI (GPT)
- Amazon Bedrock
- OpenRouter
- Ollama (Local)
Community Feedback
Have you tested OpenClaw with Anthropic? I think it's incredible. This week I built an app with Claude to run it in the cloud.
— Reddit r/Anthropic
All the OpenClaw bros are having a meltdown after Anthropic clarified subscription usage rules — but as of February 2025, you can still use Claude accounts to run OpenClaw.
— Reddit r/ClaudeAI
My main tip is to do lots of small tasks instead of doing few large tasks. Small tasks take way less tokens and are usually less buggy.
— Reddit r/ClaudeAI
From December 25 to December 31, Anthropic temporarily doubled usage limits. Starting January 1, usage limits were sharply reduced.
— Reddit r/ClaudeAI
Configuration Examples
Basic Anthropic API key setup

providers:
  anthropic:
    apiKey: sk-ant-api03-xxxxx
    model: anthropic/claude-sonnet-4-6

Claude Max subscription (OAuth setup-token)

providers:
  anthropic:
    setupToken: true
    # Run: openclaw configure --provider anthropic
    # Follow the OAuth flow to link your Claude subscription
    model: anthropic/claude-opus-4-6

Cost-optimized multi-model setup

providers:
  anthropic:
    apiKey: sk-ant-api03-xxxxx
    model: anthropic/claude-sonnet-4-5
    # Use Haiku for lightweight tasks
    # Override per-session with /model haiku
    thinking: adaptive
    reasoningEffort: medium
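Beneath the OpenClaw config, prompt caching works by marking a stable system block as cacheable in the Messages API request. Here is a minimal sketch of that request shape: the field names follow Anthropic's Messages API (`system`, `cache_control` with type `ephemeral`), while the helper function, model string, and prompt text are illustrative assumptions.

```python
# Sketch of a Messages API request body that caches a large, stable
# system prompt (e.g. SOUL.md/AGENTS.md content). The helper and the
# prompt text are illustrative; field names follow the Messages API.

def build_request(system_text: str, user_text: str) -> dict:
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_text,
                # Marks this block as cacheable; reads on subsequent
                # requests are billed at a discount vs. fresh input.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_text}],
    }

req = build_request("You are OpenClaw, a personal assistant agent.",
                    "Summarize today's inbox.")
```

Because the cached block must stay byte-identical across requests, keep per-turn context out of the `system` block and in `messages` instead.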