
Supported Models

Complete list of AI models supported by Weco CLI

Weco supports models from multiple AI providers. The model you use affects optimization quality, speed, and credit cost. Specify which model to use with the -M or --model flag when starting your optimization run with weco run. If you don't specify a model, Weco uses the default, o4-mini.
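As a sketch of the flag in use (the paths and metric name below are placeholders for your own project; the flags and model names come from this page):

```shell
# Run with the default model (o4-mini) — no flag needed
weco run --source optimize.py --eval-command "python evaluate.py" --metric score

# Pin a specific model with the short or long flag
weco run -M gpt-5 --source optimize.py --eval-command "python evaluate.py" --metric score
weco run --model claude-sonnet-4-0 --source optimize.py --eval-command "python evaluate.py" --metric score
```

See the CLI Reference for the full set of flags accepted by weco run.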

OpenAI

  • gpt-5 (recommended)
  • gpt-5-mini
  • gpt-5-nano
  • o3-pro (recommended)
  • o3 (recommended)
  • o4-mini (recommended)
  • o3-mini
  • o1-pro
  • o1
  • gpt-4.1
  • gpt-4.1-mini
  • gpt-4.1-nano
  • gpt-4o
  • gpt-4o-mini
  • codex-mini-latest

Anthropic

  • claude-opus-4-1
  • claude-opus-4-0
  • claude-sonnet-4-0
  • claude-3-7-sonnet-latest

Google

  • gemini-2.5-pro
  • gemini-2.5-flash
  • gemini-2.5-flash-lite

What's Next?

  • Start optimizing: Follow the Quickstart guide to set up your API key and run your first optimization
  • Choose the right model: Different models trade off speed against quality; experiment to find what works best for your use case
  • Learn more about CLI options: Check the CLI Reference for all available command flags and options
  • Write better evaluations: Read our guide on Writing Good Evaluation Scripts to get the most out of your optimizations
