
CLI Reference

Reference for the Weco CLI commands and options

This reference provides detailed information about the Weco CLI commands and their options. If in doubt, run weco --help to list all available commands, and weco <command> --help for more information about a specific command (e.g., weco run --help).
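
For example:

weco --help          # list all available commands
weco run --help      # show options for the run command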

For answers to common questions about command usage and troubleshooting, see our FAQ.

Logging In

When you run any Weco command on an unregistered machine, you'll be given an option to log in using Google OAuth.

Interactive Copilot (Preview)

Preview Feature: The interactive copilot is currently in alpha testing. We recommend using the manual weco run command with your own parameters.

When you run weco without any subcommands, it launches an interactive copilot (preview) that guides you through finding optimization opportunities and setting them up:

# Launch copilot in current directory
weco
 
# Launch copilot for specific project
weco /path/to/your/project

Copilot Features

The interactive copilot (preview) offers the following capabilities:

  • Codebase Analysis: Analyzes your project structure and identifies optimization opportunities
  • AI-Powered Suggestions: Generates specific optimization recommendations with estimated performance gains
  • Evaluation Script Generation: Creates custom evaluation scripts or helps configure existing ones
  • Command Building: Constructs the complete weco run command with appropriate parameters

This feature is available for testing, but we recommend the manual approach with weco run.

Running Optimizations

This is the primary command for starting the optimization process. It takes several arguments to configure how Weco should optimize your code.

⚠️ Warning: Code Modification

weco directly modifies the file specified by --source during the optimization process. It is strongly recommended to use version control (like Git) to track changes and revert if needed. Alternatively, ensure you have a backup of your original file before running the command. Upon completion, the file will contain the best-performing version of the code found during the run.
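
For instance, you could snapshot the file before a run and restore it afterwards if needed (a sketch; model.py stands in for your --source file):

# Record the current state so the run can be reverted
git add model.py
git commit -m "snapshot before weco run"
 
# Discard Weco's changes later, if desired
git restore model.py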

Command Arguments

Required:

| Argument | Description | Example |
| --- | --- | --- |
| -s, --source | Path to the source code file that will be optimized. | -s model.py |
| -c, --eval-command | Command to run for evaluating the code in --source. This command should print the target --metric and its value to the terminal (stdout/stderr). See note below. | -c "python eval.py" |
| -m, --metric | The name of the metric you want to optimize (e.g., 'accuracy', 'speedup', 'loss'). The metric name does not need to match what's printed by your --eval-command exactly (e.g., it's okay to use "speedup" instead of "Speedup:"). | -m speedup |
| -g, --goal | maximize/max to maximize the --metric, or minimize/min to minimize it. | -g maximize |
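
Putting the required arguments together, a minimal invocation looks like this (model.py and eval.py are placeholder names):

weco run \
  -s model.py \
  -c "python eval.py" \
  -m speedup \
  -g maximize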

Optional:

| Argument | Description | Default | Example |
| --- | --- | --- | --- |
| -n, --steps | Number of optimization steps (LLM iterations) to run. | 100 | -n 50 |
| -M, --model | Model identifier for the LLM to use (e.g., o4-mini, claude-sonnet-4-0). See Supported Models for the complete list of available models. | o4-mini when OPENAI_API_KEY is set; claude-sonnet-4-0 when ANTHROPIC_API_KEY is set; gemini-2.5-pro when GEMINI_API_KEY is set. | -M o4-mini |
| -i, --additional-instructions | Natural language description of specific instructions, or a path to a file containing detailed instructions to guide the LLM. | None | -i instructions.md or -i "Optimize the model for faster inference" |
| -l, --log-dir | Path to the directory where intermediate steps and the final optimization result are logged. | .runs/ | -l ./logs/ |
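
A fuller invocation that also sets the optional flags might look like the following (file and directory names are placeholders):

weco run \
  -s model.py \
  -c "python eval.py" \
  -m speedup \
  -g maximize \
  -n 50 \
  -M o4-mini \
  -i "Optimize the model for faster inference" \
  -l ./logs/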

Evaluation Requirements

The command specified by --eval-command is crucial for the optimization process. It must:

  1. Execute the modified code from --source
  2. Assess its performance
  3. Print the metric you specified with --metric along with its numerical value to the terminal

For example, if you set --metric speedup, your evaluation script should output a line like:

speedup: 1.5

or

Final speedup value = 1.5

Weco will parse this output to extract the numerical value (1.5 in this case) associated with the metric name ('speedup').

For detailed guidance on creating effective evaluation scripts, see Writing Good Evaluation Scripts.
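
As a concrete illustration, a minimal eval.py for a speedup metric might look like the sketch below. This is an assumption-laden example, not part of the CLI: the model module, its run() function, and the fixed baseline time are hypothetical placeholders.

# eval.py -- minimal sketch (hypothetical; assumes --source is model.py
# exposing a run() function, and a known baseline runtime of 2.0 seconds)
import time
import model  # the file being optimized via --source
 
def average_runtime(fn, repeats=10):
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats
 
BASELINE_SECONDS = 2.0  # reference runtime of the original code (assumption)
optimized_seconds = average_runtime(model.run)
 
# Weco parses stdout/stderr for the metric name and its numerical value
print(f"speedup: {BASELINE_SECONDS / optimized_seconds:.2f}")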

Performance Expectations

Weco, powered by the AIDE algorithm, optimizes code iteratively based on your evaluation results. Achieving significant improvements, especially on complex research-level tasks, often requires substantial exploration time.

The following plot from the independent Research Engineering Benchmark (RE-Bench) report shows the performance of AIDE (the algorithm behind Weco) on challenging ML research engineering tasks over different time budgets:

[Figure: RE-Bench performance of AIDE across increasing time budgets]

As shown, AIDE demonstrates strong performance gains over time, surpassing lower human expert percentiles within hours and continuing to improve. This highlights the potential of evaluation-driven optimization but also indicates that reaching high levels of performance comparable to human experts on difficult benchmarks can take considerable time (tens of hours in this specific benchmark, corresponding to many --steps in the Weco CLI).

Factor this into your planning when setting the number of --steps for your optimization runs.

Logging Out

This command logs you out of your Weco account. It doesn't take any arguments.

weco logout
