NVIDIA-Nemotron-3-Nano-30B-A3B Quantized Models

A collection by inference-optimization, updated Jan 14

FP8-dynamic, FP8-block, NVFP4, and INT4 versions of nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B


  • inference-optimization/NVIDIA-Nemotron-3-Nano-30B-A3B-FP8
    Text Generation • 32B • Updated Jan 9 • 494
  • inference-optimization/NVIDIA-Nemotron-3-Nano-30B-A3B-NVFP4
    18B • Updated Jan 15 • 40
  • inference-optimization/NVIDIA-Nemotron-3-Nano-30B-A3B-quantized.w4a16
    6B • Updated Jan 7 • 119
  • inference-optimization/NVIDIA-Nemotron-3-Nano-30B-A3B-FP8-dynamic
    32B • Updated Jan 6 • 8
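
A minimal sketch of loading one of these quantized checkpoints with vLLM, which can serve FP8/compressed-tensors weights. The model ID is taken from the list above; the prompt and sampling settings are illustrative assumptions, not part of the collection.

# Minimal sketch: serving the FP8-dynamic checkpoint with vLLM.
# Assumes a vLLM build with FP8/compressed-tensors support and sufficient GPU memory;
# prompt and sampling parameters below are illustrative only.
from vllm import LLM, SamplingParams

llm = LLM(
    model="inference-optimization/NVIDIA-Nemotron-3-Nano-30B-A3B-FP8-dynamic",
    trust_remote_code=True,  # Nemotron checkpoints may ship custom modeling code
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarize FP8 dynamic quantization in one sentence."], params)
print(outputs[0].outputs[0].text)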