OpenAI (GPT / Codex)
OpenAI provides the GPT model family through its API, offering a strong alternative to Anthropic as an OpenClaw provider. The GPT-5.4 lineup covers everything from the budget Nano ($0.20/MTok input) to the flagship GPT-5.4 ($2.50/MTok input), all sharing an industry-leading 270K context window.
Tags: frontier, gpt, codex, api-key, oauth, responses-api
Use Cases
- Alternative primary agent model for OpenClaw — especially cost-effective with GPT-5.4 Mini
- Unified provider for chat + embeddings + vision in a single API key
- Budget-friendly agent automation using GPT-5.4 Nano for high-volume, simple tasks
- Codex CLI integration for cloud-based software engineering workflows
- Batch processing pipelines using 50% discounted Batch API
- Realtime voice interactions via GPT-realtime models (separate from agent use)
Tips
- GPT-5.4 Mini ($0.75/$4.50 per MTok) is the sweet spot for most OpenClaw workloads — strong reasoning at roughly a quarter of Sonnet's price.
- GPT-5.4 Nano ($0.20/$1.25 per MTok) is excellent for heartbeats, cron jobs, and simple classification — one of the cheapest capable models available.
- Use OpenAI's native embeddings (text-embedding-3-small) for OpenClaw memory features if you don't have a separate embeddings provider.
- Batch API gives 50% off for non-urgent workloads. Great for data enrichment, bulk analysis, or nightly processing.
- If you're on ChatGPT Plus ($20/mo) or Pro ($200/mo), use Codex OAuth to leverage your subscription instead of paying per-token.
- Set Nano as your default model and upgrade to Mini/5.4 per-session for complex tasks to keep costs minimal.
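The embeddings and default-model tips above can be sketched as a single config. Note the exact schema is an assumption: the `memory.embeddings` block and its field names are illustrative, not confirmed against any OpenClaw config reference.

```yaml
providers:
  openai:
    apiKey: sk-proj-xxxxx
    model: openai/gpt-5.4-nano   # cheap default for heartbeats, cron, simple tasks
# Hypothetical memory/embeddings section — field names are illustrative;
# check your OpenClaw version's config reference before relying on them
memory:
  embeddings:
    provider: openai
    model: text-embedding-3-small
```

Using the same API key for both chat and embeddings keeps billing and key rotation in one place.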
Known Issues & Gotchas
- GPT-5.4 output tokens ($15/MTok) are expensive for long-form generation. Use Mini or Nano for high-volume tasks.
- Cached input pricing is 10x cheaper ($0.25 vs $2.50/MTok for GPT-5.4), but caching is automatic and you can't control what gets cached as precisely as Anthropic's explicit cache_control.
- The Responses API is the default for OpenClaw — don't confuse it with the older Chat Completions API when reading OpenAI documentation.
- Data residency endpoints (after March 5, 2026) add 10% to all model pricing.
- Flex processing offers lower costs but slower response times and occasional unavailability — not suitable for interactive agent use.
- Rate limits on new accounts can be restrictive. OpenAI uses a tier system based on spend history.
- GPT-4o is the legacy model — still works but GPT-5.4 Mini is generally better and comparable in price.
Alternatives
- Anthropic (Claude)
- OpenRouter
- Together AI
- Ollama (Local)
Community Feedback
For the money it will cost you to run OpenClaw, the benefits are significantly weak. First, it costs you like 50 cents to do one simple task.
— Reddit r/ArtificialIntelligence
OpenClaw on DigitalOcean with OpenAI Codex OAuth — always use the oc wrapper, never bare openclaw commands. Skip the DO setup wizard.
— Reddit r/codex
Claude Code is evolving so fast that I don't feel like I need anything else. OpenClaw is just an extra layer of complexity between me and AI.
— Reddit r/ClaudeCode
Configuration Examples
Basic OpenAI API key setup
providers:
  openai:
    apiKey: sk-proj-xxxxx
    model: openai/gpt-5.4-mini

Cost-optimized with Nano default
providers:
  openai:
    apiKey: sk-proj-xxxxx
    model: openai/gpt-5.4-nano
    # Override per-session: /model gpt-5.4
    # Nano handles heartbeats, cron, simple tasks
    # Upgrade to Mini/5.4 for complex work

Codex OAuth (subscription-based)
providers:
  openai-codex:
    # Uses ChatGPT subscription credits
    # Run: openclaw configure --provider openai-codex
    # Follow OAuth flow to link ChatGPT account
    model: openai/gpt-5.4