# Kernels

This is the repository card for kernels-community/paged-attention, which has been pushed to the Hub. It was built for use with the kernels library. This card was automatically generated.

## How to use

```python
# make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/paged-attention")
convert_fp8 = kernel_module.convert_fp8

convert_fp8(...)
```

## Available functions

- `convert_fp8`
- `copy_blocks`
- `ops`
- `paged_attention_v1`
- `paged_attention_v2`
- `reshape_and_cache`
- `reshape_and_cache_flash`
- `swap_blocks`
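For intuition about what these kernels operate on: paged attention stores each sequence's KV cache in fixed-size physical blocks and resolves logical token positions through a per-sequence block table. The following is a minimal pure-Python sketch of that layout, not the kernel's actual API; the class, method names, and `BLOCK_SIZE` value are all illustrative assumptions.

```python
BLOCK_SIZE = 4  # tokens per cache block (hypothetical value for illustration)


class PagedKVCache:
    """Toy model of a paged KV cache: physical blocks plus a block table."""

    def __init__(self):
        self.blocks = []        # physical blocks, each a list of per-token KV entries
        self.block_table = {}   # sequence id -> list of physical block indices

    def append(self, seq_id, kv_entry):
        # Append one token's KV entry, allocating a new physical block when
        # the sequence's last block is full (roughly the bookkeeping that
        # `reshape_and_cache` performs on the GPU when writing new tokens).
        table = self.block_table.setdefault(seq_id, [])
        if not table or len(self.blocks[table[-1]]) == BLOCK_SIZE:
            self.blocks.append([])
            table.append(len(self.blocks) - 1)
        self.blocks[table[-1]].append(kv_entry)

    def gather(self, seq_id):
        # Read a sequence's KV entries back in logical order by walking the
        # block table (the indirection `paged_attention_v1`/`paged_attention_v2`
        # follow when computing attention over a paged cache).
        return [kv for b in self.block_table[seq_id] for kv in self.blocks[b]]


cache = PagedKVCache()
for t in range(6):
    cache.append("seq0", f"kv{t}")
# 6 tokens with BLOCK_SIZE=4 span two physical blocks for "seq0"
```

Because sequences share a pool of fixed-size blocks through this indirection, operations like `copy_blocks` and `swap_blocks` only need to move or remap physical blocks rather than whole contiguous caches.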

## Benchmarks

A benchmarking script is available for this kernel. Run `kernels benchmark kernels-community/paged-attention`.
