Mistral AI

Mistral provides text and image models plus Voxtral audio transcription, and supports memory embeddings via mistral-embed. Its open-weight models are released under Apache 2.0.

Mistral AI is a Paris-based company building both open-weight and proprietary language models. Its API platform (La Plateforme) offers models ranging from the tiny Mistral Small ($0.15/$0.60 per MTok) to the powerful Mistral Large 3 ($2/$6 per MTok), all through an OpenAI-compatible API. Mistral is notable for its European data residency options and strong multilingual performance.

For OpenClaw users, Mistral fills two key niches: cost-effective capable models and native embeddings support. Mistral Small 4 at $0.15/MTok input is one of the cheapest capable models available from any major provider, making it excellent for high-volume tasks. And mistral-embed provides a solid embeddings model for OpenClaw's memory and RAG features, which is valuable since Anthropic doesn't offer embeddings.

Mistral's models are known for strong code generation, multilingual capability (particularly in European languages), and efficient architecture. The open-weight models (Mistral 7B, Mixtral) can be run locally via Ollama, while the proprietary models (Large, Medium) are API-only. All API models support tool calling and streaming through the OpenAI-compatible endpoint.

Mistral also offers Voxtral for audio transcription and Pixtral for vision tasks, making it a surprisingly comprehensive provider for its size. The platform includes a free tier (Le Chat) for individual use, but Le Chat access can't be used with OpenClaw; you need a La Plateforme API key.

Tags: open-weight, european, embeddings, transcription

Use Cases

  • Budget-friendly OpenClaw agent using Mistral Small for routine tasks
  • Native embeddings provider for OpenClaw memory and RAG features
  • European-compliant AI deployment with GDPR data residency
  • Multilingual agent workflows — Mistral excels at European languages
  • Long-document processing leveraging the 256K context window on Large 3
  • Audio transcription via Voxtral integration

Tips

  • Use Mistral Small ($0.15/MTok) as a budget alternative for heartbeats, cron, and simple tasks — one of the cheapest capable models available.
  • Pair Mistral's mistral-embed with OpenClaw's memory features — it's a solid, affordable embeddings model.
  • Mistral Large 3's 256K context window is one of the largest available — great for processing long documents.
  • For European users, Mistral's data residency options help with GDPR compliance.
  • Pin model versions (e.g., mistral-large-2411) instead of using 'latest' aliases for consistent behavior.
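Following the version-pinning tip, a pinned setup might look like this. This is a sketch using the same provider schema as the Configuration Examples section; mistral-large-2411 is the dated alias cited in the tip above.

providers:
  mistral:
    apiKey: your-mistral-api-key
    # Dated release instead of mistral-large-latest, so behavior
    # doesn't shift when Mistral rotates the 'latest' alias
    model: mistral/mistral-large-2411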

Known Issues & Gotchas

  • Mistral models don't support extended thinking/reasoning mode — no equivalent of Claude's thinking or o1-style reasoning.
  • The 'latest' model aliases (mistral-large-latest, mistral-small-latest) can change underlying models without notice. Pin to specific versions for production stability.
  • Mistral Small doesn't support vision — for multimodal input you need Mistral Large or Pixtral.
  • API rate limits on free tier are restrictive. You need to add a payment method for production-level rate limits.
  • Mistral's tool calling can be less reliable than Claude's or GPT's for complex multi-step tool chains.
  • Le Chat (free chat interface) is NOT the same as API access. You need a La Plateforme API key for OpenClaw.

Alternatives

  • Anthropic (Claude)
  • OpenAI
  • Together AI
  • Ollama (Local)

Community Feedback

Mistral Small is criminally underrated. At $0.15/MTok input it's cheaper than most local inference when you factor in electricity, and it handles tool calling well.

— Reddit r/LocalLLaMA

Mistral Large 3 with 256K context at $2/MTok input is genuinely competitive. Not quite Claude Sonnet but close enough for many tasks, and significantly cheaper.

— Reddit r/MachineLearning

The European data residency option is a real differentiator for GDPR-conscious teams. No other major provider makes this as easy as Mistral.

— Hacker News

Configuration Examples

Basic Mistral setup

providers:
  mistral:
    apiKey: your-mistral-api-key
    # Note: the 'latest' alias can change underlying models without
    # notice; pin a dated version (e.g. mistral-large-2411) for production
    model: mistral/mistral-large-latest

Budget Mistral Small setup

providers:
  mistral:
    apiKey: your-mistral-api-key
    model: mistral/mistral-small-latest
    # $0.15/MTok input — great for routine tasks

Mistral as embeddings provider

providers:
  anthropic:
    apiKey: sk-ant-xxxxx
    model: anthropic/claude-sonnet-4-6
  mistral:
    apiKey: your-mistral-api-key
    # Use for embeddings alongside Anthropic for chat
    embeddings: mistral/mistral-embed
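
Voxtral transcription sketch

A hedged sketch for the Voxtral transcription mentioned above. It assumes OpenClaw exposes a transcription field analogous to the embeddings field, and that voxtral-mini-latest is the current Voxtral alias; both are assumptions to verify against the OpenClaw and Mistral docs.

providers:
  mistral:
    apiKey: your-mistral-api-key
    # Hypothetical field: assumes OpenClaw supports a transcription
    # slot like 'embeddings'; model alias is also unverified
    transcription: mistral/voxtral-mini-latest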