OpenAI (GPT / Codex)

OpenAI provides the GPT model family through its API, offering a strong alternative to Anthropic as an OpenClaw provider. The GPT-5.4 lineup covers everything from the budget Nano ($0.20/MTok input) to the flagship GPT-5.4 ($2.50/MTok input), all sharing an industry-leading 270K context window.

OpenClaw connects to OpenAI via the Responses API, a newer, more capable surface than the legacy Chat Completions endpoint. This enables WebSocket transport for lower-latency streaming, server-side compaction to manage long conversations efficiently, and priority processing tiers. OpenAI also natively supports embeddings, making it one of the few providers where you can run both your agent model and memory/RAG embeddings from a single API key.

Authentication works two ways: a standard API key (pay-per-token) or Codex OAuth (subscription-based, via ChatGPT login). The API key path gives you full control and access to all models, including the Batch API (50% off for async work). The Codex OAuth path lets you use your ChatGPT Plus/Pro subscription credits, which can be more economical for moderate usage.

OpenAI excels for users who want a single provider for everything: chat, vision, embeddings, image generation, audio transcription, and even realtime voice. The ecosystem is the most mature of any provider, with extensive documentation, client libraries in every language, and the largest third-party tooling community.

Tags: frontier, gpt, codex, api-key, oauth, responses-api

Use Cases

  • Alternative primary agent model for OpenClaw — especially cost-effective with GPT-5.4 Mini
  • Unified provider for chat + embeddings + vision in a single API key
  • Budget-friendly agent automation using GPT-5.4 Nano for high-volume, simple tasks
  • Codex CLI integration for cloud-based software engineering workflows
  • Batch processing pipelines using 50% discounted Batch API
  • Realtime voice interactions via GPT-realtime models (separate from agent use)

Tips

  • GPT-5.4 Mini ($0.75/$4.50 per MTok) is the sweet spot for most OpenClaw workloads: strong reasoning at roughly a quarter of Sonnet's price.
  • GPT-5.4 Nano ($0.20/$1.25 per MTok) is excellent for heartbeats, cron jobs, and simple classification — one of the cheapest capable models available.
  • Use OpenAI's native embeddings (text-embedding-3-small) for OpenClaw memory features if you don't have a separate embeddings provider.
  • Batch API gives 50% off for non-urgent workloads. Great for data enrichment, bulk analysis, or nightly processing.
  • If you're on ChatGPT Plus ($20/mo) or Pro ($200/mo), use Codex OAuth to leverage your subscription instead of paying per-token.
  • Set Nano as your default model and upgrade to Mini/5.4 per-session for complex tasks to keep costs minimal.
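To make the tiering advice above concrete, here is a back-of-the-envelope cost comparison using the per-MTok prices quoted in this page (the session volume is an illustrative assumption, not measured OpenClaw usage):

  Example session of 100K input + 20K output tokens:
    GPT-5.4 Nano: (0.1 × $0.20) + (0.02 × $1.25)  = $0.045
    GPT-5.4 Mini: (0.1 × $0.75) + (0.02 × $4.50)  = $0.165
    GPT-5.4:      (0.1 × $2.50) + (0.02 × $15.00) = $0.55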

Known Issues & Gotchas

  • GPT-5.4 output tokens ($15/MTok) are expensive for long-form generation. Use Mini or Nano for high-volume tasks.
  • Cached input pricing is 10x cheaper ($0.25 vs $2.50/MTok for GPT-5.4), but caching is automatic and you can't control what gets cached as precisely as Anthropic's explicit cache_control.
  • The Responses API is the default for OpenClaw — make sure you're not confusing it with the older Chat Completions API in documentation.
  • Data residency endpoints (after March 5, 2026) add 10% to all model pricing.
  • Flex processing offers lower costs but slower response times and occasional unavailability — not suitable for interactive agent use.
  • Rate limits on new accounts can be restrictive. OpenAI uses a tier system based on spend history.
  • GPT-4o is the legacy model — still works but GPT-5.4 Mini is generally better and comparable in price.

Alternatives

  • Anthropic (Claude)
  • OpenRouter
  • Together AI
  • Ollama (Local)

Community Feedback

For the money it will cost you to run OpenClaw, the benefits are significantly weak. First, it costs you like 50 cents to do one simple task.

— Reddit r/ArtificialIntelligence

OpenClaw on DigitalOcean with OpenAI Codex OAuth — always use the oc wrapper, never bare openclaw commands. Skip the DO setup wizard.

— Reddit r/codex

Claude Code is evolving so fast that I don't feel like I need anything else. OpenClaw is just an extra layer of complexity between me and AI.

— Reddit r/ClaudeCode

Configuration Examples

Basic OpenAI API key setup

providers:
  openai:
    apiKey: sk-proj-xxxxx
    model: openai/gpt-5.4-mini

Cost-optimized with Nano default

providers:
  openai:
    apiKey: sk-proj-xxxxx
    model: openai/gpt-5.4-nano
    # Override per-session: /model gpt-5.4
    # Nano handles heartbeats, cron, simple tasks
    # Upgrade to Mini/5.4 for complex work

Codex OAuth (subscription-based)

providers:
  openai-codex:
    # Uses ChatGPT subscription credits
    # Run: openclaw configure --provider openai-codex
    # Follow OAuth flow to link ChatGPT account
    model: openai/gpt-5.4
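
Single key for agent model and embeddings (illustrative)

A sketch of the embeddings tip from above. The `embeddings` block and its field names are assumptions for illustration, not a confirmed OpenClaw schema — check the configuration reference for the exact keys.

providers:
  openai:
    apiKey: sk-proj-xxxxx
    model: openai/gpt-5.4-mini
    # Illustrative: reuse the same key for memory/RAG embeddings
    embeddings:
      model: openai/text-embedding-3-small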