Exploring the Latent Capacity of LLMs for One-Step Text Generation Paper • 2505.21189 • Published May 27 • 61
Quartet: Native FP4 Training Can Be Optimal for Large Language Models Paper • 2505.14669 • Published May 20 • 78
Hogwild! Inference: Parallel LLM Generation via Concurrent Attention Paper • 2504.06261 • Published Apr 8 • 110
One-Step Residual Shifting Diffusion for Image Super-Resolution via Distillation Paper • 2503.13358 • Published Mar 17 • 95
When an LLM is apprehensive about its answers -- and when its uncertainty is justified Paper • 2503.01688 • Published Mar 3 • 21
How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? Paper • 2502.14502 • Published Feb 20 • 91