PyTorch Optimization
Optimize the speed of a simple operation in PyTorch.
You can:
- Follow along here
- Check out the files from GitHub
- Run on Google Colab (recommended)
Setup
If you haven't already, follow the Installation guide to install the Weco CLI. Otherwise, install the CLI using pip:
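Assuming the CLI is published on PyPI under the package name `weco` (an assumption here; confirm against the Installation guide), installation is a single command:

```shell
# Install the Weco CLI from PyPI (package name assumed to be `weco`)
pip install weco
```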
Choose your LLM provider:
Create your OpenAI API key here.
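Once you have a key, the CLI reads it from an environment variable; for OpenAI the conventional variable name is `OPENAI_API_KEY` (assumed here to be what Weco expects):

```shell
# Make the key available to the CLI in the current shell session
export OPENAI_API_KEY="sk-..."   # replace with your actual key
```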
Install the dependencies of the scripts shown in subsequent sections.
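For this PyTorch example, the evaluation script needs at minimum PyTorch itself; a typical install looks like the following (the exact dependency list may differ per script, so check each script's imports):

```shell
# PyTorch is required by the optimization and evaluation scripts
pip install torch
```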
Run Weco
Now run Weco to optimize your code:
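An illustrative invocation is sketched below. The file names (`optimize.py`, `evaluate.py`), metric name, and step count are placeholders chosen for this example, not fixed requirements; consult the CLI Reference for the authoritative flag list.

```shell
# Illustrative only: optimize the code in optimize.py, scoring each
# candidate with evaluate.py and maximizing the reported "speedup" metric.
weco run \
  --source optimize.py \
  --eval-command "python evaluate.py --solution-path optimize.py --device cpu" \
  --metric speedup \
  --goal maximize \
  --steps 15
```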
Here's what you can expect to see (keep an eye on that Best Solution panel):

Note: If you have an NVIDIA GPU, change the device in the `--eval-command` to `cuda`. If you are running this on Apple Silicon, set it to `mps`.
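Rather than editing the device by hand, the evaluation script can select the backend itself. Below is a minimal sketch (the helper name `pick_device` is invented for illustration) that prefers `cuda`, then `mps`, and falls back to `cpu`:

```python
def pick_device():
    """Pick the fastest available PyTorch backend: cuda, mps, or cpu."""
    try:
        import torch
    except ImportError:
        return "cpu"  # PyTorch not installed; default to CPU
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"   # Apple Silicon GPU
    return "cpu"

print(pick_device())
```

The returned string can then be passed wherever the scripts expect a device argument, so the same `--eval-command` works across machines.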
Next Steps
Now that you've optimized PyTorch operations, you might want to explore more advanced GPU optimization techniques. Try Triton Optimization for writing custom GPU kernels with a Python-like syntax, or CUDA Optimization for the ultimate performance with low-level kernel programming. If you're interested in different types of optimization, check out Model Development for complete machine learning workflows.
For more advanced usage and configuration options, visit the CLI Reference or learn about Writing Good Evaluation Scripts to improve your optimization results.
For more examples, visit the Examples section.