Instructions to use Qwen/Qwen3-Coder-Next-FP8 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Qwen/Qwen3-Coder-Next-FP8 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Qwen/Qwen3-Coder-Next-FP8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Coder-Next-FP8")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Coder-Next-FP8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Qwen/Qwen3-Coder-Next-FP8 with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Qwen/Qwen3-Coder-Next-FP8"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Qwen/Qwen3-Coder-Next-FP8",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker
docker model run hf.co/Qwen/Qwen3-Coder-Next-FP8
- SGLang
How to use Qwen/Qwen3-Coder-Next-FP8 with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Qwen/Qwen3-Coder-Next-FP8" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Qwen/Qwen3-Coder-Next-FP8",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "Qwen/Qwen3-Coder-Next-FP8" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Qwen/Qwen3-Coder-Next-FP8",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
- Docker Model Runner
How to use Qwen/Qwen3-Coder-Next-FP8 with Docker Model Runner:
docker model run hf.co/Qwen/Qwen3-Coder-Next-FP8
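The vLLM and SGLang servers above expose an OpenAI-compatible API, so any OpenAI-style client can call them instead of curl. Below is a minimal sketch using the `openai` Python package; it assumes the vLLM server from the section above is running on localhost:8000 (switch the port to 30000 for the SGLang example) and uses a placeholder API key, which these local servers typically accept when no --api-key is configured.

# Minimal client for the OpenAI-compatible endpoint served above.
# Assumes: `pip install openai` and a vLLM server on http://localhost:8000
# (use port 30000 for the SGLang example). The api_key is a placeholder;
# local vLLM/SGLang servers accept any string unless an API key was set.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-Coder-Next-FP8",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)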
Cannot deploy with vLLM on p4de.24xlarge with vLLM V1 using --tensor-parallel-size 8
Hello Team,
I am unable to deploy the FP8 model; it seems that the sharding does not work?
Is anyone else seeing this too?
Note that I am able to deploy the unquantized Qwen3-Coder-Next on the same instance without a problem.
Here is the config of the instance.
$nvidia-smi
NVIDIA-SMI 570.133.20 Driver Version: 570.133.20 CUDA Version: 12.8
...
| 0 NVIDIA A100-SXM4-80GB On | 00000000:xx:xx.x Off | 0 |
| N/A 52C P0 76W / 400W | 0MiB / 81920MiB | 0% Default |
...
ERROR I AM SEEING
Detected some but not all shards of model.layers.0.linear_attn.in_proj are quantized. All shards of fused layers to have the same precision.
I have a question which might be unrelated: why 8 x A100? What sort of token capacity / tokens per second are you planning to process?
Just trying to push the limits in terms of context size and generation speed; no specific goal in mind.
@HenryGuillaumet
It's not possible to run FP8 on A100; you have to use Blackwell, Hopper, or Ada GPUs.
Examples: H100, H200, L40S, B200
Do you mind if I ask a question: we are a new stack helping people run inference faster than a vanilla deployment. The one-sentence pitch would be:
open-weight inference: one-click deployment, automatic optimization, and reliable capacity so teams ship faster, pay less per outcome, and don't think about infrastructure.
Is this something you would care about, or would it solve a problem of yours?
Thanks for your answer; however, it is possible to run FP8 on A100. It falls back to Marlin, which is less optimized, but it definitely works, as I was able to run other FP8 models.
I am not interested, thanks.
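As an aside on the FP8-on-A100 exchange above: the difference comes down to native versus emulated FP8 support. A100 (compute capability 8.0) has no FP8 tensor cores, so vLLM falls back to the Marlin weight-only kernels as noted, while Ada (8.9), Hopper (9.0), and newer GPUs run FP8 natively. A minimal sketch to check what a given GPU reports, assuming PyTorch with CUDA is installed:

# Check whether the local GPU can run FP8 natively (requires PyTorch with CUDA).
# Compute capability 8.9 (Ada) or higher has FP8 tensor cores; 8.0 (A100) does
# not, which is why vLLM falls back to the Marlin weight-only kernels there.
import torch

name = torch.cuda.get_device_name(0)
major, minor = torch.cuda.get_device_capability(0)
native_fp8 = (major, minor) >= (8, 9)
print(f"{name}: compute capability {major}.{minor}, native FP8: {native_fp8}")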
Hi @HenryGuillaumet
I ran on 2x A100 and did not have any issue. You were right about the Marlin kernels: it did fall back, but with a warning that heavy tasks may be slower.
Speed is around 15 tokens per second; it will be lower at larger context lengths.
CUDA_VISIBLE_DEVICES=0,1 vllm serve Qwen/Qwen3-Coder-Next-FP8 \
--tensor-parallel-size 2 \
--max-num-seqs 400 \
--max-model-len 15000 \
--disable-custom-all-reduce
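To put a number like the ~15 tokens/second above on your own setup, you can time a streamed completion against the server started by the command above. A minimal sketch, assuming the `openai` package is installed and the server is on vLLM's default port 8000; counting streamed chunks only approximates the generated token count.

# Rough decode-throughput check against the vLLM server started above.
# Assumes `pip install openai` and the server on http://localhost:8000.
# Each streamed chunk usually carries one token, so this is an approximation.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

start = time.time()
chunks = 0
stream = client.chat.completions.create(
    model="Qwen/Qwen3-Coder-Next-FP8",
    messages=[{"role": "user", "content": "Write a short poem about GPUs."}],
    max_tokens=256,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        chunks += 1
elapsed = time.time() - start
print(f"~{chunks / elapsed:.1f} tokens/s over {elapsed:.1f}s")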

