Description
NVFP4 quantization of Qwen/Qwen3-Coder-30B-A3B-Instruct produced with NVIDIA's TensorRT Model Optimizer. The KV cache is quantized to FP8 for compatibility with inference backends.
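For reference, below is a minimal post-training quantization sketch of the kind of recipe described above, using TensorRT Model Optimizer (`nvidia-modelopt`). It assumes a recent modelopt release that exposes `mtq.NVFP4_DEFAULT_CFG` and an FP8 KV-cache config (`mtq.FP8_KV_CFG`); the calibration texts and export directory are placeholders, not the exact settings used for this checkpoint.

```python
# Hedged sketch: NVFP4 PTQ with an FP8 KV cache via nvidia-modelopt.
# Config names (NVFP4_DEFAULT_CFG, FP8_KV_CFG) are assumed from recent
# modelopt releases; the calibration data below is a toy placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

import modelopt.torch.quantization as mtq
from modelopt.torch.export import export_hf_checkpoint

MODEL_ID = "Qwen/Qwen3-Coder-30B-A3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# NVFP4 for weights/activations; merge in FP8 KV-cache quantizers so the
# cache stays in a format inference backends can consume.
quant_cfg = mtq.NVFP4_DEFAULT_CFG.copy()
quant_cfg["quant_cfg"].update(mtq.FP8_KV_CFG["quant_cfg"])

# Placeholder calibration set; a real run would use representative code data.
calib_texts = ["def quicksort(arr):", "SELECT * FROM users WHERE"]

def forward_loop(model):
    # Run calibration batches so modelopt can collect activation statistics.
    for text in calib_texts:
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        model(**inputs)

model = mtq.quantize(model, quant_cfg, forward_loop)

# Export a Hugging Face-style quantized checkpoint.
export_hf_checkpoint(model, export_dir="Qwen3-Coder-30B-A3B-Instruct-NVFP4")
```

For serving, backends such as vLLM typically pick up the quantization metadata from the exported checkpoint; the FP8 KV cache is usually enabled at load time, e.g. `LLM(model="rahtml/Qwen3-Coder-30B-A3B-Instruct-NVFP4", kv_cache_dtype="fp8")`.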
Model tree for rahtml/Qwen3-Coder-30B-A3B-Instruct-NVFP4
Base model: Qwen/Qwen3-Coder-30B-A3B-Instruct