Overview
The Weco Command-Line Interface
Weco: The AI Research Engineer
Weco systematically optimizes your code, guided directly by your evaluation metrics.
The Weco CLI, powered by our core engine AIDE, leverages a tree search approach guided by Large Language Models (LLMs) to iteratively explore and refine your code. It automatically applies changes, runs your evaluation script, parses the results, and proposes further improvements based on the specified goal.
(Figure: an example of the tree search process.)
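The loop described above hinges on your evaluation script printing the metric being optimized so it can be parsed after each proposed change. Below is a minimal, hypothetical sketch of such a script; the file name, the `accuracy` label, and the `train_and_score()` stand-in are assumptions for illustration, not part of the Weco CLI itself.

```python
# evaluate.py -- a minimal sketch of an evaluation script.
# The file name, the `accuracy` metric, and train_and_score() are illustrative
# placeholders, not part of the Weco CLI.


def train_and_score() -> float:
    """Stand-in for whatever your real evaluation does (training, benchmarking, ...)."""
    return 0.87  # pretend validation accuracy


def main() -> None:
    accuracy = train_and_score()
    # The key contract: print the metric to stdout so the optimizer can parse it
    # after each proposed change and use it to decide what to try next.
    print(f"accuracy: {accuracy:.4f}")


if __name__ == "__main__":
    main()
```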
Key Applications
Weco can be applied to a wide range of optimization tasks, including:
- GPU Kernel Optimization: Reimplement PyTorch functions using CUDA or Triton, optimizing for `latency`, `throughput`, or `memory_bandwidth` (see the sketch after this list).
- Model Development: Tune feature transformations or architectures, optimizing for `validation_accuracy`, `AUC`, or `Sharpe Ratio`.
- Prompt Engineering: Refine prompts for LLMs, optimizing for `win_rate`, `relevance`, or `format_adherence`.
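As a concrete illustration of the GPU kernel case above, the hedged sketch below checks a candidate PyTorch implementation against a reference and prints a `latency` metric to minimize. The `optimize.solution` import, the tensor shape, and the metric label are assumptions chosen for this example rather than requirements of the CLI.

```python
# evaluate_kernel.py -- a hedged sketch of a latency benchmark for the GPU kernel
# use case. The optimize.solution import, tensor size, and `latency` label are
# assumptions for illustration only.
import time

import torch

from optimize import solution  # hypothetical module holding the candidate kernel


def main() -> None:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(4096, 4096, device=device)

    # Reject incorrect candidates before measuring speed.
    expected = torch.relu(x)
    actual = solution(x)
    assert torch.allclose(actual, expected, atol=1e-5), "candidate output mismatch"

    # Warm up, then measure average latency over repeated runs.
    for _ in range(10):
        solution(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(100):
        solution(x)
    if device == "cuda":
        torch.cuda.synchronize()
    latency_ms = (time.perf_counter() - start) / 100 * 1000

    # Print the metric so it can be parsed from the script's output and minimized.
    print(f"latency: {latency_ms:.4f}")


if __name__ == "__main__":
    main()
```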