Supported Models
Complete list of AI models supported by Weco CLI
Weco supports models from multiple AI providers. The model you use affects optimization quality and speed. You can specify which model to use with the `-M` or `--model` flag when running `weco run`.
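For example, the two flag forms are interchangeable (other `weco run` arguments, which depend on your project, are omitted here):

```shell
# Select a model with the short flag
weco run -M o4-mini ...

# Or with the long flag
weco run --model claude-sonnet-4-0 ...
```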
Default Models
If you don't specify a model, Weco automatically selects a default based on your available API keys:
- `o4-mini` when `OPENAI_API_KEY` is set
- `claude-sonnet-4-0` when `ANTHROPIC_API_KEY` is set
- `gemini-2.5-pro` when `GEMINI_API_KEY` is set
If more than one API key is available and you don't specify a model, Weco uses the following precedence:
1. `OPENAI_API_KEY`
2. `ANTHROPIC_API_KEY`
3. `GEMINI_API_KEY`
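In practice, this means you can control the default simply by which key you export. A minimal sketch (the key value shown is a placeholder, not a real credential):

```shell
# With only an Anthropic key exported, the default model is claude-sonnet-4-0
export ANTHROPIC_API_KEY="your-anthropic-key"
weco run ...

# If OPENAI_API_KEY is also set, it takes precedence and o4-mini becomes the default;
# pass --model explicitly to override the precedence order
```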
For information on setting up API keys, see our Getting Started guide.
OpenAI
o3
o3-mini
o4-mini
o1-pro
o1
gpt-4.1
gpt-4.1-mini
gpt-4.1-nano
gpt-4o
gpt-4o-mini
Anthropic
claude-opus-4-0
claude-sonnet-4-0
claude-3-7-sonnet-latest
Google
gemini-2.5-pro
gemini-2.5-flash
gemini-2.5-flash-lite
What's Next?
- Start optimizing: Follow the Getting Started guide to set up your API key and run your first optimization
- Choose the right model: Different models offer trade-offs between speed and quality, so experiment to find what works best for your use case
- Learn more about CLI options: Check the CLI Reference for all available command flags and options
- Write better evaluations: Read our guide on Writing Good Evaluation Scripts to get the most out of your optimizations