Quickstart
Welcome to Weco!
This quickstart guide will have you optimizing your first piece of code in just a few minutes. By the end, you'll understand how to use Weco's AI-powered optimization to improve your code's performance, quality, and cost metrics.
Before you begin
Make sure you have:
- Python 3.8 or newer installed
- A terminal or command prompt open
- Basic familiarity with Python
Your first optimization
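Step 1: Install Weco
If you haven't already, install the Weco CLI (this assumes the package is published on PyPI as weco):

```bash
pip install weco
```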
Step 2: Get an example project
Let's start with a ready-to-run example. Clone the example repository and navigate to the demo:
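For example (the repository URL and demo directory below are assumptions; use the ones shown on the examples page if they differ):

```bash
git clone https://github.com/WecoAI/weco-cli.git
cd weco-cli/examples/hello-kernel-world
```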
Step 3: Run your first optimization
Now let's optimize some code! You have two options:
Recommended for learning: Run the optimization with explicit parameters to understand what Weco is doing:
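A sketch of such a run (file names, metric, and step count are illustrative, and the flag spellings reflect a recent version of the CLI; `weco run --help` is authoritative):

```bash
weco run \
  --source optimize.py \
  --eval-command "python evaluate.py" \
  --metric speedup \
  --goal maximize \
  --steps 15
```

Here `--source` is the file Weco is allowed to edit, `--eval-command` is how each candidate gets scored, and `--metric` with `--goal` tell Weco which printed number to push in which direction.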
Step 4: Watch the optimization in action
Weco will now iterate through multiple optimization attempts. Here's what you'll see:
- Current solution: The code Weco is testing
- Evaluation results: Performance metrics for each attempt
- Best solution: The highest-performing code so far (this is what you want!)

Weco will:
- Analyze your code
- Generate optimized versions
- Run your evaluation script on each version
- Track the best performing solution
- Save the winning code when complete
Use Weco with your own code
Now that you've seen Weco in action, let's apply it to your own project.
Understanding evaluation scripts
The key to using Weco is having an evaluation script that:
- Benchmarks your code's performance
- Validates that optimizations don't break functionality
- Prints a metric (like `speedup: 2.5x`) that Weco can read (see the sketch below)
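A minimal sketch of that contract, with a placeholder workload standing in for your real code (the closed-form check and the metric name runtime_seconds are just illustrations):

```python
# A bare-bones evaluation script: run the code, verify it, print one metric.
import time

def workload():
    # Placeholder for a call into the code you are asking Weco to optimize.
    return sum(i * i for i in range(1_000_000))

start = time.perf_counter()
result = workload()
elapsed = time.perf_counter() - start

# Correctness gate so a faster-but-wrong candidate never wins.
assert result == (999_999 * 1_000_000 * 1_999_999) // 6

# Weco reads the metric from stdout as "<name>: <value>"; use the same
# metric name you ask Weco to optimize.
print(f"runtime_seconds: {elapsed:.4f}")
```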
Example use cases:
- Kernel optimization: Measure PyTorch, CUDA, or Triton kernel execution time → See examples
- ML research: Track model accuracy, training speed, or inference latency
- Prompt engineering: Evaluate LLM response quality or token efficiency
Pro tip: Check out our guide on Writing Good Evaluation Scripts to get better optimization results.
Step-by-step: Optimize your own code
Step 1: Create your evaluation script
Your evaluation script needs to benchmark the code you'd like to optimize and print the metric you want to optimize to the console.
Step 2: Run Weco on your code
Here's a complete example showing how to optimize a PyTorch model. You can also follow along in Google Colab.
Install dependencies:
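Assuming the PyPI package names torch and weco:

```bash
pip install torch weco
```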
Create your code to optimize (`optimize.py`):
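A representative stand-in for the model (the real example may differ; the point is a small PyTorch module with an obvious inefficiency, here a Python-level loop, for Weco to remove):

```python
# optimize.py
import torch
import torch.nn as nn


class Model(nn.Module):
    """A deliberately naive model: matmul, scaling, then a row-wise sum in a Python loop."""

    def __init__(self, hidden_size: int = 1024):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(hidden_size, hidden_size))
        self.scale = 0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = (x @ self.weight) * self.scale
        # Naive per-row reduction; an optimized version could vectorize this.
        rows = [out[i].sum() for i in range(out.shape[0])]
        return torch.stack(rows)
```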
Create your evaluation script (`evaluate.py`):
This script will test each optimization and report how much faster it is:
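A sketch that pairs with the stand-in model above; the `--device` argument and the choice to use an eager-mode reference both for correctness and as the speed baseline are assumptions, and the linked example is the authoritative version:

```python
# evaluate.py
import argparse
import time

import torch

from optimize import Model


def time_fn(fn, device: torch.device, iters: int = 20) -> float:
    """Average seconds per call, with warm-up and CUDA synchronization."""
    for _ in range(3):
        fn()
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    if device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--device", default="cpu", help="cpu, cuda, or mps")
    args = parser.parse_args()
    device = torch.device(args.device)

    torch.manual_seed(0)
    model = Model().to(device)
    x = torch.randn(256, 1024, device=device)

    # Plain eager-mode reference for both correctness and the timing baseline.
    def reference():
        return ((x @ model.weight) * model.scale).sum(dim=1)

    with torch.no_grad():
        # Correctness gate: a faster but wrong candidate gets no credit.
        assert torch.allclose(model(x), reference(), atol=1e-4), \
            "optimized output diverged from reference"

        baseline_s = time_fn(reference, device)
        candidate_s = time_fn(lambda: model(x), device)

    # Weco reads this metric from stdout.
    print(f"speedup: {baseline_s / candidate_s:.2f}x")


if __name__ == "__main__":
    main()
```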
See the complete example: The full `evaluate.py` code is available here.
Run the optimization:
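Putting it together for the files above (again, confirm the flag spellings with `weco run --help`):

```bash
weco run \
  --source optimize.py \
  --eval-command "python evaluate.py --device cpu" \
  --metric speedup \
  --goal maximize \
  --steps 15
```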
Note: If you have an NVIDIA GPU, change the device in the `--eval-command` to `cuda`. If you are running this on Apple Silicon, set it to `mps`.
What's next?
Now that you've run your first optimization, explore more ways to use Weco:
- Example Dashboard Runs - See real optimization runs in the dashboard with impressive results
- Local Examples - See Weco optimize CUDA kernels, ML models, and more
- Writing Good Evaluation Scripts - Get better results with well-designed benchmarks
- CLI Reference - Master all Weco commands and options
- FAQ - Find answers to common questions