Installation
System Requirements
Helion currently targets Linux systems and requires a recent Python and PyTorch environment:
Operating System
Linux-based OS
Other Unix-like systems may work but are not officially supported
Python Environment
Python 3.10, 3.11, or 3.12
We recommend using conda for environment management
Dependencies
PyTorch (a recent nightly build is recommended)
Triton (a development version, built from source)
A supported GPU (see GPU Requirements below)
Installation Methods
Method 1: Install via pip
The easiest way to install Helion is directly from GitHub:
pip install git+https://github.com/pytorch-labs/helion.git
We also publish PyPI releases, but the GitHub version is recommended for the latest features and fixes.
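If you need a reproducible install, pip can also target a specific branch, tag, or commit of the repository. The main ref below is only an example; substitute whatever revision you want to pin:
# Install from a specific ref (the ref shown is an example; replace with a branch, tag, or commit)
pip install 'git+https://github.com/pytorch-labs/helion.git@main'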
Method 2: Development Installation
For development purposes or if you want to modify Helion:
# Clone the repository
git clone https://github.com/pytorch-labs/helion.git
cd helion
# Install in editable mode with development dependencies
pip install -e '.[dev]'
This installs Helion in “editable” mode so that changes to the source code take effect without needing to reinstall.
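To confirm that the editable install is picking up your checkout, pip show reports the package metadata; assuming the distribution is named helion, recent pip versions list an editable project location pointing at your clone:
# Inspect the installed package; the reported location should point at your local checkout
pip show helion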
Step-by-Step Setup Guide
1. Set Up Conda Environment
We recommend using conda to manage dependencies:
# Create a new environment
conda create -n helion python=3.12
conda activate helion
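Before installing anything into it, it is worth confirming that the new environment is active and resolves to the expected interpreter:
# Confirm the active interpreter comes from the helion environment
which python
python --version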
2. Install PyTorch Nightly
Install the latest PyTorch nightly build:
# For CUDA 12.6 systems
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126
See the PyTorch installation instructions for other options.
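A quick sanity check confirms that the nightly build imports and that a CUDA device is visible:
# Verify the PyTorch install and CUDA availability
python -c 'import torch; print(torch.__version__); print(torch.cuda.is_available())'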
3. Install Triton from Source
Helion requires a development version of Triton:
# Clone and install Triton
git clone https://github.com/triton-lang/triton.git
cd triton
# Install Triton
pip install -e .
# Return to your working directory
cd ..
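You can confirm the build by importing Triton from outside the checkout:
# Verify that the Triton development build imports cleanly
python -c 'import triton; print(triton.__version__)'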
4. Install Helion
Choose one of the installation methods above:
# Option A: From GitHub
pip install git+https://github.com/pytorch-labs/helion.git
# Option B: Development installation
git clone https://github.com/pytorch-labs/helion.git
cd helion
pip install -e '.[dev]'
Verification
Verify your installation by running a simple test:
import torch
import helion
import helion.language as hl
@helion.kernel(use_default_config=True)
def test_kernel(x: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    for tile in hl.tile(x.shape[0]):
        out[tile] = x[tile] * 2
    return out
x = torch.randn(100, device='cuda')
result = test_kernel(x)
torch.testing.assert_close(result, x * 2)
print("Verification successful!")
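If you expect to repeat this check, one option is to save the snippet above to a file (the name below is arbitrary) and rerun it whenever you update PyTorch, Triton, or Helion:
# Rerun the saved verification snippet; verify_helion.py is just a suggested name
python verify_helion.py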
Development Dependencies
If you installed with [dev], you get additional development tools:
pytest - Test runner
pre-commit - Code formatting and linting hooks
Set up pre-commit hooks for development:
pre-commit install
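With the dev extras installed, you can run both tools from the repository root:
# Run the formatting and linting hooks over the whole tree
pre-commit run --all-files
# Run the test suite (pytest discovers tests from the repository root)
pytest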
Optional Dependencies
Documentation Building
To build documentation locally:
pip install -e '.[docs]'
cd docs
make html
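Assuming the standard Sphinx layout, the rendered pages land under _build/html inside the docs directory; Python's built-in HTTP server is a quick way to preview them:
# Serve the built docs locally (assumes Sphinx's default _build/html output directory)
python -m http.server --directory _build/html 8000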
GPU Requirements
Helion's GPU requirements match those of Triton. At the time of writing, Triton supports:
NVIDIA GPUs (Compute Capability 8.0+)
AMD GPUs (ROCm 6.2+)
Via third-party forks: Intel XPU, CPU, and many other architectures
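To see which category your hardware falls into, PyTorch can report the device name and compute capability of the visible GPU:
# Print the detected GPU and its compute capability, e.g. (8, 0) for an A100
python -c 'import torch; print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))'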
Next Steps
Once installation is complete:
Check out the API Reference for complete API documentation
Explore the examples/ folder for real-world patterns