PyTorch Optimization
Optimize the speed of a simple operation in PyTorch.
You can:
- Follow along here
- Check out the files from GitHub
- Run on Google Colab (recommended)
Setup
If you haven't already, follow the Installation guide to install the Weco CLI. Otherwise, install the CLI using pip:
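A minimal install command, assuming the CLI is published on PyPI under the package name `weco` (check the Installation guide if the name differs):

```shell
# Install the Weco CLI from PyPI (package name assumed to be "weco")
pip install weco
```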
Choose your LLM provider:
Create your OpenAI API key here.
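Once you have a key, export it so the CLI can pick it up. The environment variable name `OPENAI_API_KEY` is the standard one used by OpenAI's SDKs; replace the placeholder with your actual key:

```shell
# Make the key available to the CLI for this shell session
export OPENAI_API_KEY="<your-api-key>"
```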
Install the dependencies of the scripts shown in subsequent sections.
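The example scripts use PyTorch, so a typical setup looks like the following (assuming the scripts need only `torch` and `numpy`; check the scripts' imports for the full list):

```shell
# Install the libraries the example scripts import (assumed dependency set)
pip install torch numpy
```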
Run Weco
Now run Weco to optimize your code:
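A sketch of what the invocation might look like. Only `--eval-command` is confirmed by this page; the other flag names, the file names `optimize.py` and `evaluate.py`, and the metric name are assumptions for illustration; see the CLI Reference for the exact options:

```shell
# Sketch only: flags other than --eval-command are assumed, not confirmed
weco run \
  --source optimize.py \
  --eval-command "python evaluate.py --device cpu" \
  --metric speedup \
  --goal maximize
```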
Here's what you can expect to see (keep an eye on the Best Solution panel):

Note: If you have an NVIDIA GPU, change the device in the --eval-command to cuda. If you are running this on Apple Silicon, set it to mps.
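If you'd rather not hard-code the device, a small PyTorch snippet can pick the fastest available backend automatically (this is a common pattern, not something the evaluation script is confirmed to do):

```python
import torch

# Prefer CUDA (NVIDIA GPU), then MPS (Apple Silicon), then fall back to CPU
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(f"Using device: {device}")
```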
What's Next?
- Advanced GPU optimization: Try Triton or CUDA kernels
- Different optimization types: Explore Model Development or Prompt Engineering
- Better evaluation scripts: Learn Writing Good Evaluation Scripts
- All command options: Check the CLI Reference