This is the repository card for kernels-community/paged-attention, pushed to the Hub. It is built to be used with the kernels library. This card was automatically generated.
How to use
```python
# make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

# Download the kernel from the Hub and load it as a module
kernel_module = get_kernel("kernels-community/paged-attention")

# Kernel functions are exposed as attributes of the module
convert_fp8 = kernel_module.convert_fp8
convert_fp8(...)
```
Available functions
- convert_fp8
- copy_blocks
- ops
- paged_attention_v1
- paged_attention_v2
- reshape_and_cache
- reshape_and_cache_flash
- swap_blocks
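The kernel's `paged_attention_v1`/`paged_attention_v2` functions operate on a KV cache that is split into fixed-size physical blocks, addressed through a per-sequence block table. As a purely illustrative sketch of that addressing scheme (the helper names below are hypothetical, not part of this kernel's API; the real work happens on-GPU):

```python
# Illustrative sketch of the block addressing paged attention relies on.
# These helpers are NOT part of the kernels-community/paged-attention API.

def build_block_table(seq_len: int, block_size: int, free_blocks: list[int]) -> list[int]:
    """Allocate one physical cache block per block_size tokens of a sequence."""
    num_blocks = (seq_len + block_size - 1) // block_size  # ceil division
    return [free_blocks.pop() for _ in range(num_blocks)]

def token_slot(pos: int, block_size: int, block_table: list[int]) -> tuple[int, int]:
    """Map a logical token position to (physical_block, offset) in the cache."""
    return block_table[pos // block_size], pos % block_size

# Example: a 5-token sequence with block_size=4 needs 2 physical blocks.
free = list(range(10))           # pretend pool of free cache block ids
table = build_block_table(5, 4, free)
print(table)                     # → [9, 8]  (two ids drawn from the pool's end)
print(token_slot(4, 4, table))   # → (8, 0)  token 4 is offset 0 of the 2nd block
```

Because blocks need not be contiguous in memory, sequences of very different lengths can share one cache pool; the kernel follows each sequence's block table at attention time.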
Benchmarks
A benchmarking script is available for this kernel. Run `kernels benchmark kernels-community/paged-attention`.