Getting Started
Set Your LLM API Key
Create your OpenAI API key here.
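Once you have a key, export it so Weco can pick it up. A minimal sketch, assuming the standard `OPENAI_API_KEY` environment variable used by OpenAI clients:

```shell
# Replace the placeholder with your actual key.
export OPENAI_API_KEY="sk-..."
```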
Choose Your Approach
Let the Weco copilot analyze your codebase and suggest and set up optimizations for you. Run Weco without any arguments to start the interactive copilot:
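As described above, the bare command with no arguments launches the interactive copilot:

```shell
# Starts the interactive copilot in the current project directory.
weco
```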
What to Expect
Both approaches will show you the same optimization interface. Here's what you can expect to see (keep an eye on that Best Solution panel):

Applying Weco to Your Own Project
Figure Out Your Evaluation Script
Your evaluation script needs to benchmark the code you'd like to optimize and print a metric you're optimizing for to the console.
For specific examples of evaluation scripts for kernel engineering (PyTorch, CUDA, Triton, etc.), ML research, and prompt engineering, check out the Examples section. If you'd like to know how to write a good evaluation script, we've got you covered with this guide.
Basic Example
Here's a simple example that optimizes a PyTorch function for speedup. You can also follow along here on Google Colab.
First install the dependencies:
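A minimal sketch of the install step, assuming the CLI is published on PyPI under the name `weco` and that the example needs PyTorch:

```shell
# Install the Weco CLI and PyTorch (package names assumed here).
pip install weco torch
```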
Here's a simple example of a PyTorch model that we'll optimize:
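A sketch of what such a starting point might look like. The file name `module.py` and the layer-norm-style forward pass are illustrative assumptions, not the exact code from the example:

```python
# module.py -- a deliberately naive PyTorch model that leaves room
# for optimization (e.g. operator fusion or a custom kernel).
import torch
import torch.nn as nn


class Model(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Elementwise normalization written out step by step;
        # an optimizer can fuse or rewrite these ops for speed.
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, unbiased=False, keepdim=True)
        return (x - mean) / torch.sqrt(var + 1e-5)
```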
Here's what your evaluation script needs to do:
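In short: run the code being optimized, measure it, and print the metric to stdout. A hedged sketch of such a script; the `module.Model` import, the baseline, and the tensor sizes are illustrative assumptions (the complete official version is linked below):

```python
# evaluate.py -- sketch: benchmark the candidate code and print the
# metric Weco is optimizing for ("speedup") to the console.
import argparse
import time

import torch


def bench(fn, x: torch.Tensor, iters: int = 50) -> float:
    """Average wall-clock seconds per call, after a short warm-up."""
    for _ in range(3):
        fn(x)
    if x.device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    if x.device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--device", default="cpu")
    args = parser.parse_args()

    # `module.Model` is the code Weco edits (hypothetical file name);
    # a fixed reference implementation serves as the baseline.
    from module import Model
    model = Model().to(args.device)
    x = torch.randn(1024, 1024, device=args.device)

    baseline = bench(
        lambda t: torch.nn.functional.layer_norm(t, t.shape[-1:]), x
    )
    candidate = bench(model, x)

    # Weco parses the metric from stdout, so print it as "name: value".
    print(f"speedup: {baseline / candidate:.3f}")


if __name__ == "__main__":
    main()
```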
You can find the complete `evaluate.py` file for this example here.
Now run Weco to optimize your code:
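A sketch of the invocation. `--eval-command` is the flag mentioned in the note below; the other flag names and values here are illustrative assumptions, so check the CLI Reference for the exact options:

```shell
# Flag names other than --eval-command are assumed, not authoritative.
weco run \
  --source module.py \
  --eval-command "python evaluate.py --device cpu" \
  --metric speedup \
  --goal maximize
```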
Note: If you have an NVIDIA GPU, change the device in the `--eval-command` to `cuda`. If you are running this on Apple Silicon, set it to `mps`.
What's Next?
- Try different optimizations: Explore Examples for kernel optimization, ML models, and prompt engineering
- Improve your results: Learn Writing Good Evaluation Scripts
- Advanced usage: Check the CLI Reference for all command options
- Need help?: Visit our FAQ for common questions