Inference Providers
Active filters: ModelOpt

Each entry below lists: model ID • task • parameter count • downloads • likes (fields missing from the source are omitted).
nvidia/Gemma-4-31B-IT-NVFP4 • Text Generation • 21B • 1.4M • 417
mmangkad/Qwen3.6-35B-A3B-NVFP4 • Text Generation • 9.15k • 6
nvidia/MiniMax-M2.5-NVFP4 • Text Generation • 116B • 52k • 32
AxionML/Qwen3.5-27B-NVFP4 • Image-Text-to-Text • 17B • 12.4k • 10
(model name not captured) • Text Generation • 435B • 155k • 28
nvidia/Qwen3-235B-A22B-NVFP4 • Text Generation • 133B • 9.77k • 17
nvidia/Qwen3-30B-A3B-NVFP4 • Text Generation • 16B • 321k • 30
nvidia/Qwen3-30B-A3B-Thinking-2507-Eagle3 • Text Generation • 0.1B • 190 • 2
nvidia/Qwen3.5-397B-A17B-NVFP4 • Text Generation • 485k • 92
nvidia/DeepSeek-V3-0324-NVFP4 • Text Generation • 397B • 37.8k • 17
nvidia/DeepSeek-R1-0528-NVFP4 • Text Generation • 397B • 7.08k • 44
nvidia/Qwen3-235B-A22B-FP8 • Text Generation • 235B • 1.38k • 5
nvidia/DeepSeek-R1-NVFP4-v2 • Text Generation • 394B • 6.2k • 7
nvidia/Phi-4-multimodal-instruct-NVFP4 • 4B • 1.65k • 11
nvidia/Phi-4-multimodal-instruct-FP8 • 6B • 1.32k • 7
nvidia/Phi-4-reasoning-plus-FP8 • 15B • 535 • 6
nvidia/Phi-4-reasoning-plus-NVFP4 • 8B • 1.36k • 9
nvidia/Llama-3.1-8B-Instruct-NVFP4 • 5B • 123k • 9
(model name not captured) • Text Generation • 5B • 16.5k • 17
(model name not captured) • Text Generation • 8B • 27.9k • 5
(model name not captured) • Text Generation • 8B • 10.5k • 8
(model name not captured) • Text Generation • 15B • 5.67k • 5
(model name not captured) • Text Generation • 17B • 121k • 15
nvidia/Qwen2.5-VL-7B-Instruct-FP8 • Text Generation • 8B • 561 • 8
nvidia/Qwen2.5-VL-7B-Instruct-NVFP4 • Text Generation • 5B • 25.3k • 15
nvidia/DeepSeek-V3.1-NVFP4 • Text Generation • 394B • 12.9k • 16
nvidia/DeepSeek-V3.2-NVFP4 • Text Generation • 394B • 42.1k • 15
nvidia/Qwen3-235B-A22B-Instruct-2507-NVFP4 • Text Generation • 120B • 8.98k • 8
nvidia/Qwen3-Coder-480B-A35B-Instruct-NVFP4 • Text Generation • 241B • 1.41k • 11
mmangkad/Qwen3.5-27B-NVFP4 • Text Generation • 20B • 459 • 1
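The listing above is a Hugging Face Hub search result. As a minimal sketch, an equivalent listing can be retrieved programmatically with the `huggingface_hub` client; this assumes the "ModelOpt" page filter corresponds to the library tag `modelopt` and that the counts shown are downloads and likes:

```python
from huggingface_hub import HfApi


def list_modelopt_models(limit: int = 10) -> list[str]:
    """Return Hub model IDs matching the (assumed) `modelopt` library filter,
    sorted by downloads, most-downloaded first."""
    api = HfApi()
    models = api.list_models(
        filter="modelopt",   # assumed tag behind the "ModelOpt" page filter
        sort="downloads",
        direction=-1,        # descending
        limit=limit,
    )
    return [m.id for m in models]


# Requires network access to huggingface.co, so the call is left commented out:
# for model_id in list_modelopt_models(limit=5):
#     print(model_id)
```

Each returned `ModelInfo` object also carries `downloads`, `likes`, and `pipeline_tag` attributes, which correspond to the per-entry numbers and task labels shown in the listing.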