# Model Card for Speculative Cascades
An implementation of Speculative Cascades, based on the paper [Faster Cascades via Speculative Decoding](https://arxiv.org/abs/2405.19261).
## Uses
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('Qwen/Qwen3-1.7B')
assistant_model = AutoModelForCausalLM.from_pretrained('Qwen/Qwen3-0.6B')
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-1.7B')

question = "If I bought a shirt at 20% discount for $4, what was the initial price before the discount?"
inputs = tokenizer(question, return_tensors='pt')

outputs = model.generate(
    **inputs,
    assistant_model=assistant_model,
    do_sample=True,
    alpha=0.25,  # Rate at which to defer to the target model
    deferral='v3',  # Deferral method. See paper for details.
    custom_generate='radia/speculative-cascades',
    trust_remote_code=True,
    max_new_tokens=320,
)

print(tokenizer.decode(outputs.tolist()[0]))
```
- `alpha`: The rate at which to defer to the target model logits. The meaning of `alpha` depends on the deferral method.
- `deferral`: The deferral method used for speculative cascades.
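For illustration only, here is a hypothetical sweep over both parameters, reusing the `model`, `assistant_model`, `tokenizer`, and `inputs` defined in the example above. The deferral method names are the ones documented in the next section; how `alpha` trades speed against quality depends on the method, so it is worth comparing a few settings on your own prompts.

```python
# Hypothetical sweep over the two speculative-cascade parameters.
for deferral in ('opt', 'v1', 'v2', 'v3'):
    for alpha in (0.1, 0.25, 0.5):
        out = model.generate(
            **inputs,
            assistant_model=assistant_model,
            do_sample=True,
            alpha=alpha,
            deferral=deferral,
            custom_generate='radia/speculative-cascades',
            trust_remote_code=True,
            max_new_tokens=64,
        )
        print(deferral, alpha, tokenizer.decode(out[0]))
```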
## Deferral Methods
There are four deferral methods based on the paper: `opt`, `v1`, `v2`, and `v3`.
Let `q`, `p` be the probability distributions over the tokens of the draft and target model respectively at time step `t`.

- `opt` is the analytically optimal method, but not the most performant empirically. It defers to the target model when `q_max < p_max - alpha * TV(p, q)`, where the `max` subscript denotes the probability of the most probable token and `TV(p, q)` is the total variation distance between the two distributions.
- `v1` defers to the target model when `q < p_max - alpha`, which ensures that all draft tokens are at least as confident as the target tokens.
- `v2` defers to the target model when `p < p_max - alpha`, which substitutes the less confident tokens with tokens from the draft model.
- `v3` is empirically the most performant. It defers to the target model when `p < p_max * (1 - alpha)`, which substitutes all tokens not in the top `alpha` fraction of the target model's distribution with tokens from the draft model. A minimal sketch of these rules follows below.
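As an illustration of how these rules could be applied at a single decoding step, here is a minimal sketch (not code from this repository). The helper name `should_defer` is hypothetical, and it assumes that in the `v1`, `v2`, and `v3` rules `q` and `p` refer to the probabilities the draft and target models assign to the currently drafted token:

```python
import torch

def should_defer(p: torch.Tensor, q: torch.Tensor, draft_token: int,
                 alpha: float, deferral: str) -> bool:
    """Return True if this decoding step should be deferred to the target model.

    p and q are the target and draft models' probability distributions over the
    vocabulary at the current position; draft_token is the token id proposed by
    the draft model.
    """
    p_max = p.max().item()
    q_max = q.max().item()
    if deferral == 'opt':
        # Analytically optimal rule: compare the draft's top probability against
        # the target's top probability minus an alpha-scaled total variation distance.
        tv_distance = 0.5 * (p - q).abs().sum().item()
        return q_max < p_max - alpha * tv_distance
    if deferral == 'v1':
        # Defer when the draft model's probability of its proposal falls more
        # than alpha below the target model's top probability.
        return q[draft_token].item() < p_max - alpha
    if deferral == 'v2':
        # Defer when the target model's probability of the drafted token falls
        # more than alpha below the target model's top probability.
        return p[draft_token].item() < p_max - alpha
    if deferral == 'v3':
        # Defer when the drafted token's target probability is below a
        # (1 - alpha) fraction of the target model's top probability.
        return p[draft_token].item() < p_max * (1 - alpha)
    raise ValueError(f"Unknown deferral method: {deferral!r}")

# Toy example: the draft proposes token 1, but the target assigns it 0.3 < 0.6 * 0.75,
# so the v3 rule defers to the target model.
p = torch.tensor([0.6, 0.3, 0.1])
q = torch.tensor([0.2, 0.7, 0.1])
print(should_defer(p, q, draft_token=1, alpha=0.25, deferral='v3'))  # True
```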
## Citation
```bibtex
@misc{narasimhan2024fastercascadesspeculativedecoding,
      title={Faster Cascades via Speculative Decoding},
      author={Harikrishna Narasimhan and Wittawat Jitkrittum and Ankit Singh Rawat and Seungyeon Kim and Neha Gupta and Aditya Krishna Menon and Sanjiv Kumar},
      year={2024},
      eprint={2405.19261},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2405.19261},
}
```