Skills
Use Weco as a skill for AI coding assistants like Claude Code and Cursor. Describe what you want to optimize in plain language and your assistant handles the rest.
A skill is a set of instructions installed into your AI coding assistant that teaches it how to use Weco end-to-end - from setting up optimizations to interpreting results.
Why use a skill?
Instead of remembering CLI flags and writing evaluation scripts from scratch, a skill lets you work naturally:
- Natural language - just say "make this function faster" or "optimize accuracy"
- Automated setup - your assistant inspects your code, writes the evaluation, and configures the run
- Live monitoring - the assistant watches the optimization, installs missing dependencies, and reports progress
- Results explained - you get a clear summary of what changed, why it worked, and the real-world impact
Skills require the Weco CLI to be installed first. If you haven't installed it yet, see the Installation guide.
Claude Code
Install the skill
```
weco setup claude-code
```
This writes the Weco skill files to ~/.claude/skills/weco/ in your Claude Code configuration. Claude will discover the skill automatically in future conversations.
What gets installed:
```
~/.claude/skills/weco/
├── CLAUDE.md      # Trigger snippet (Claude reads this automatically)
├── SKILL.md       # Full optimization workflow and instructions
├── references/    # Advanced documentation (benchmarking, evaluation, etc.)
└── assets/        # Template evaluation scripts
```
Start optimizing
Open Claude Code in any project and describe what you want to improve:
Use Weco to make this function faster.

Optimize module.py for accuracy using evaluate.py as the benchmark.

Reduce the latency of the API handler.

Claude will analyze your code, set up the evaluation, run the optimization, and explain the results - no CLI flags needed.
Cursor
Install the skill
```
weco setup cursor
```
This installs the Weco skill plus an always-on Cursor rule that makes the skill available across all your projects.
What gets installed:
```
~/.cursor/
├── rules/
│   └── weco.mdc          # Always-on trigger rule
└── skills/
    └── weco/
        ├── SKILL.md      # Full optimization workflow and instructions
        ├── references/   # Advanced documentation
        └── assets/       # Template evaluation scripts
```
Start optimizing
Open Cursor in any project and describe what you want to improve:
Use Weco to make this function faster.

Optimize module.py for accuracy using evaluate.py as the benchmark.

Reduce the latency of the API handler.

Cursor will analyze your code, set up the evaluation, run the optimization, and explain the results - no CLI flags needed.
How it works
When you ask your assistant to optimize something, the skill guides it through a structured workflow:
- Analyze - The assistant inspects your code and infers what metric to optimize (speed, accuracy, memory, etc.)
- Baseline - It runs your code to measure current performance and makes the impact tangible
- Evaluate - It writes (or reuses) an evaluation script that measures the metric
- Optimize - It runs weco run with the right flags, monitors progress, and handles errors automatically
- Report - It presents results with a clear explanation of what changed, why it worked, and the real-world impact
- Apply - It asks your permission before modifying any project files
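The Evaluate step above boils down to a script that measures the metric and reports it. Here is a minimal sketch of the kind of script the assistant might write for a "make this faster" request - note that the function name, metric name, and output format are all illustrative assumptions, not the exact contract Weco expects (the skill's templates in assets/ define that):

```python
import statistics
import time

def forward(xs):
    # Stand-in for the function under optimization (hypothetical).
    return sum(x * x for x in xs)

def benchmark(fn, *args, repeats=5):
    """Median wall-clock time of fn over several repeated runs."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return statistics.median(times)

runtime = benchmark(forward, list(range(10_000)))
# Report the metric so the optimizer can read it (format assumed).
print(f"runtime_seconds: {runtime:.6f}")
```

Using the median over several repeats smooths out timing noise, which keeps the optimizer from chasing measurement jitter instead of real improvements.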
Two modes
When you start an optimization, the skill will ask you to choose a mode:
| Mode | Best for | Details |
|---|---|---|
| Vibe Mode | Quick optimizations, experienced users | Minimal questions, maximum action. The assistant infers everything and just goes. |
| Assistant Scientist Mode | Complex tasks, first-time users | Collaborative and educational. The assistant interviews you, validates alignment, and explains tradeoffs. |
Example: Hello, World!
Here's a complete walkthrough using the skill with the Quickstart example:
Clone the example project
```
git clone https://github.com/WecoAI/weco-cli.git
cd weco-cli/examples/hello-world/
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```
Open your assistant
```
claude
```
or, for Cursor:
```
cursor .
```
Prompt naturally

Use the Weco skill to make the forward pass faster.

Your assistant will:
- Inspect the example project and understand the code
- Configure and run the Weco optimization on module.py
- Monitor the iterations as they progress
- Explain what changed and how much faster the code got
For the full step-by-step walkthrough (including CLI usage), see the Quickstart guide.
What can skills optimize?
Skills work with anything the Weco CLI supports:
- GPU kernels - CUDA, Triton, PyTorch
- ML models - Model development, training pipelines, inference
- Prompts - LLM prompt engineering, agent behaviors
- General code - Any language (Python, Rust, C++, JavaScript, Go, etc.), any metric you can measure
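Whatever the target, the common thread is "any metric you can measure": an evaluation only needs to compute a number and report it. As a hedged sketch for an accuracy metric (the predict function, dataset, and output format are all hypothetical stand-ins):

```python
def predict(x):
    # Stand-in for the model or function being optimized (hypothetical).
    return 1 if x >= 0 else 0

def evaluate(dataset):
    """Fraction of labeled examples the prediction gets right."""
    correct = sum(1 for x, label in dataset if predict(x) == label)
    return correct / len(dataset)

# Tiny fixed test set; a real script would load a held-out dataset.
dataset = [(-2, 0), (-1, 1), (0, 1), (3, 1), (5, 1)]
accuracy = evaluate(dataset)
print(f"accuracy: {accuracy:.4f}")  # one misclassified example -> 0.8000
```

Keeping the evaluation deterministic (a fixed dataset, no random sampling per run) matters: if the score fluctuates between runs, the optimizer cannot tell a genuine improvement from noise.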
What's next?
- Quickstart - Run your first optimization step by step
- Set Up Your Own Optimization - Apply Weco to your own code
- CLI Reference - All CLI commands and options
- Writing Good Evaluation Scripts - Get better results with well-designed benchmarks
- Examples - See Weco optimize CUDA kernels, ML models, and more