Ouro-2.6B-Thinking-nvfp4

Note: This model is experimental. While the quantization completed, the resulting model does not seem to behave properly. I'm leaving it up in case someone wants to examine it, but I suspect a different approach is needed for Ouro-architecture models.

Format: NVFP4 — weights & activations quantized to FP4 with dual scaling (per-block scales plus a per-tensor scale; see the sketch below).
Base model: ByteDance/Ouro-2.6B-Thinking
How it was made: One-shot calibration with LLM Compressor (NVFP4 recipe), using long-sequence calibration data from Rombo-Org/Optimized_Reasoning.
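For intuition, here is a minimal NumPy sketch of the dual-scaling idea. This is illustrative only, not the real NVFP4 kernel or storage format: each 16-value block gets its own scale, and one per-tensor scale keeps those block scales within the FP8 E4M3 range. The constants and the rounding rule here are simplified assumptions.

```python
import numpy as np

# Magnitudes representable in FP4 E2M1 (plus a sign bit).
FP4_LEVELS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def fake_quant_nvfp4(x, block=16):
    """Quantize-dequantize a 1-D array with NVFP4-style dual scaling."""
    amax = np.abs(x).max()
    # Per-tensor scale (kept in FP32 in the real format), chosen so that
    # per-block scales land within the FP8 E4M3 range (max 448).
    g = amax / (448.0 * 6.0) + 1e-12
    out = np.empty_like(x)
    for i in range(0, len(x), block):
        chunk = x[i:i + block]
        # Per-block scale (stored as FP8 E4M3 in the real format).
        s = np.abs(chunk).max() / (6.0 * g) + 1e-12
        scaled = chunk / (s * g)
        # Snap each magnitude to the nearest FP4 level, keeping the sign.
        idx = np.abs(np.abs(scaled)[:, None] - FP4_LEVELS).argmin(axis=1)
        out[i:i + block] = np.sign(scaled) * FP4_LEVELS[idx] * s * g
    return out

x = np.random.randn(64).astype(np.float32)
print("mean abs quantization error:", np.abs(x - fake_quant_nvfp4(x)).mean())
```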

Notes: Keep lm_head in high precision; calibrate on long, domain-relevant sequences.
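A minimal sketch of the kind of one-shot run described above, using LLM Compressor's NVFP4 scheme with lm_head excluded. The sequence length, sample count, dataset split, and preprocessing are assumptions; the actual run may have used different settings, and the calibration dataset may need its own text-column handling.

```python
from datasets import load_dataset
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ByteDance/Ouro-2.6B-Thinking"
NUM_SAMPLES = 512   # assumed
MAX_LEN = 8192      # long-sequence calibration (assumed value)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

# Calibration data; split name is an assumption.
ds = load_dataset("Rombo-Org/Optimized_Reasoning", split="train")
ds = ds.shuffle(seed=42).select(range(NUM_SAMPLES))

# NVFP4 on all Linear layers; keep lm_head in high precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_LEN,
    num_calibration_samples=NUM_SAMPLES,
)

model.save_pretrained("Ouro-2.6B-Thinking-nvfp4", save_compressed=True)
tokenizer.save_pretrained("Ouro-2.6B-Thinking-nvfp4")
```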

See the original model card for details about the base model.

Running the model with vLLM in Docker

Currently this model will not run in vLLM. It does load and run in Transformers, but the output is erratic.
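If you want to reproduce the Transformers behavior, a loading sketch along these lines should work. I'm assuming trust_remote_code is required for the Ouro architecture, and the prompt is just a placeholder; expect erratic output per the note above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Firworks/Ouro-2.6B-Thinking-nvfp4"

# Ouro is a custom architecture, so trust_remote_code is likely required.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

prompt = "Briefly explain dual-scaled FP4 quantization."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```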

If there are other models you're interested in seeing quantized to NVFP4 for use on the DGX Spark or other modern Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.
