# Qwen3.6-35B-A3B NVFP4 GGUF

NVFP4 GGUF quantizations of Qwen/Qwen3.6-35B-A3B, produced for use with llama.cpp.
This is a MoE model — 35B total parameters, 3B activated per token (8 of 256 experts). The expert FFN tensors are quantized to NVFP4 (NVIDIA's 4-bit float with E4M3 block scale), repacked from mmangkad/Qwen3.6-35B-A3B-NVFP4 (NVIDIA ModelOpt v0.43 calibration). Because the experts dominate the model's memory footprint, NVFP4-quantizing them gives most of the size reduction; the remaining tensors (attention, shared experts, SSM linear_attn blocks, embeddings) use a conventional GGUF quant.
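For intuition, the NVFP4 layout can be sketched in a few lines: each 16-element block of weights shares one scale, and each weight is stored as a 4-bit E2M1 float. The sketch below is a toy illustration only (plain-float scale instead of FP8 E4M3, no bit packing, function names are ours) — it is not llama.cpp's or ModelOpt's implementation.

```python
# Toy sketch of NVFP4-style block quantization: 16 weights share one scale,
# and each weight is rounded to the nearest 4-bit E2M1 float. Illustrative
# only: the real format bit-packs the codes and stores the scale as FP8
# E4M3 (approximated here by a plain Python float).

E2M1_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # representable magnitudes
BLOCK = 16

def quantize_block(values):
    """Return (scale, signed E2M1 codes) for one 16-value block."""
    assert len(values) == BLOCK
    amax = max(abs(v) for v in values)
    scale = amax / 6.0 if amax > 0 else 1.0  # 6.0 = largest E2M1 magnitude
    codes = []
    for v in values:
        x = abs(v) / scale
        q = min(E2M1_GRID, key=lambda g: abs(g - x))  # nearest representable
        codes.append(q if v >= 0 else -q)
    return scale, codes

def dequantize_block(scale, codes):
    return [scale * c for c in codes]

if __name__ == "__main__":
    import random
    random.seed(0)
    w = [random.uniform(-1.0, 1.0) for _ in range(BLOCK)]
    scale, codes = quantize_block(w)
    err = max(abs(a - b) for a, b in zip(w, dequantize_block(scale, codes)))
    print(f"scale={scale:.4f}  max abs error={err:.4f}")
```

Because the per-block scale tracks the local maximum, outlier weights in one block don't blow up the precision of the rest of the tensor — the usual motivation for small-block 4-bit formats.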
## About LibertAI
LibertAI is a decentralized AI platform — private inference, an OpenAI-compatible API, and a chat UI, all running on community GPUs over Aleph Cloud instead of a single company's servers. No accounts required to chat, no logs sent home, and the same models you'd self-host are available behind a sovereign endpoint.
If you want to put this model (or any other) to work as an autonomous agent without running your own infrastructure, check out LiberClaw — Hermes-style agents hosted on Aleph Cloud with LibertAI inference. Free tier: 2 agents, no credit card, 5 minutes to deploy. Open source.
## Why NVFP4?

On NVIDIA Blackwell GPUs (RTX 50-series, B100/B200), llama.cpp uses native NVFP4 tensor-core MMA kernels (added in llama.cpp #22196) for the expert matmuls — the dominant compute cost during MoE inference. On older GPUs the path falls back to dp4a/MMQ kernels, where these GGUFs still run but offer no performance advantage over standard K-quants.
## Files
| File | Size | Experts | Other tensors | When to pick |
|---|---|---|---|---|
| Qwen3.6-35B-A3B-NVFP4-Q4_K_M.gguf | 19 GB | NVFP4 | Q4_K_M | Recommended — fastest on Blackwell (smaller = less VRAM bandwidth) |
| Qwen3.6-35B-A3B-NVFP4-Q8_0.gguf | 20 GB | NVFP4 | Q8_0 | Higher quality non-expert tensors |
| Qwen3.6-35B-A3B-NVFP4-BF16.gguf | 22 GB | NVFP4 | BF16 | Max quality (preserves source precision for non-expert tensors) |
| mmproj-Qwen3.6-35B-A3B-F16.gguf | 861 MB | — | F16 vision tower | Required for image/video input — reusable with any Qwen3.6-35B-A3B GGUF |
## Performance
Measured on an NVIDIA RTX 5090 (32 GB, Blackwell, sm_120), llama.cpp build c84e6d6db.
### Variant comparison (single-stream, llama-bench 512 in / 64 out)
| Variant | Size | PP512 (tok/s) | TG64 (tok/s) |
|---|---|---|---|
| NVFP4-Q4_K_M | 18.41 GiB | 6698 | 223 |
| NVFP4-Q8_0 | 19.36 GiB | 4440 | 196 |
| NVFP4-BF16 | 21.48 GiB | 3736 | 171 |
Counterintuitively, the smallest variant is the fastest here: for an MoE model only 3B parameters are active per token, so memory bandwidth dominates and the tighter quant wins. Pick Q4_K_M unless you specifically need higher precision for the attention/embedding tensors.
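The bandwidth argument can be made concrete with a back-of-envelope decode roofline. The bandwidth figure and effective bits-per-weight below are our assumptions, not measurements from this card: at the memory-bandwidth limit, decode speed is roughly bandwidth divided by bytes read per token.

```python
# Back-of-envelope decode roofline for a bandwidth-bound MoE model.
# Assumptions (ours, not from the benchmarks above): ~1792 GB/s RTX 5090
# memory bandwidth and ~3e9 active parameters per token. Real decode speed
# lands well below these ceilings (KV-cache reads, mixed-precision
# non-expert tensors, kernel overhead), but the ratios show why the
# smaller file decodes faster.

BANDWIDTH_BYTES_S = 1792e9   # assumed RTX 5090 memory bandwidth
ACTIVE_PARAMS = 3e9          # ~3B parameters touched per token

def decode_ceiling_tok_s(bits_per_weight: float) -> float:
    bytes_per_token = ACTIVE_PARAMS * bits_per_weight / 8
    return BANDWIDTH_BYTES_S / bytes_per_token

for name, bits in [("~4.5 bpw weights", 4.5),
                   ("~8.5 bpw weights", 8.5),
                   ("16 bpw weights", 16.0)]:
    print(f"{name}: <= {decode_ceiling_tok_s(bits):,.0f} tok/s ceiling")
```

The absolute numbers are loose upper bounds; the takeaway is that halving the bytes read per token roughly doubles the decode ceiling, which matches the Q4_K_M variant leading the table above.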
### Note on MoE expert kernels (honest comparison vs stock Q4_K_M)
For our two dense NVFP4 GGUFs (Qwen3.6-27B and Gemma-4-31B-IT), our NVFP4-Q4_K_M variant beats stock Q4_K_M on serving throughput by ~5–14% on RTX 5090.
For this MoE model, however, llama.cpp's Q4_K_M MMQ kernel currently outperforms the NVFP4 expert path. At parallel=8, batched serving:
| | Stock Q4_K_M (19.9 GiB) | NVFP4-Q4_K_M (18.4 GiB) |
|---|---|---|
| Total throughput | 2988 tok/s | 2730 tok/s |
| TG throughput | 808 tok/s | 765 tok/s |
The NVFP4 MoE kernel still has optimization headroom upstream; we'll refresh these benchmarks once improvements land (the GGUFs themselves need no re-conversion). Until then this release is most useful for: (a) format parity with vLLM/SGLang/TensorRT-LLM checkpoints, (b) calibrated NVFP4 quality vs round-to-nearest (RTN), and (c) running the model bit-for-bit identically to the upstream NVIDIA-style quant.
## Usage

### Text-only (CLI)
```
llama-cli -m Qwen3.6-35B-A3B-NVFP4-Q4_K_M.gguf -ngl 999 -c 8192 -p "Your prompt here"
```
### Multimodal (server, vision + text)
```
llama-server \
  -m Qwen3.6-35B-A3B-NVFP4-Q4_K_M.gguf \
  --mmproj mmproj-Qwen3.6-35B-A3B-F16.gguf \
  -ngl 999 -c 32768 \
  --host 0.0.0.0 --port 8080
```
Then POST to `/v1/chat/completions` with image content blocks — see the llama.cpp multimodal docs.
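Once the server is up, requests follow the OpenAI chat-completions shape. A minimal stdlib-only sketch (the helper names are ours; host and port match the server command above):

```python
# Minimal sketch of a /v1/chat/completions request with an image content
# block, aimed at the llama-server instance above. Stdlib only; the image
# is embedded as a base64 data URL. Helper names are illustrative.
import base64
import json
import urllib.request

def build_payload(image_bytes: bytes, question: str) -> dict:
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
                {"type": "text", "text": question},
            ],
        }],
        "max_tokens": 256,
    }

def ask(image_bytes: bytes, question: str,
        endpoint: str = "http://localhost:8080/v1/chat/completions") -> str:
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(build_payload(image_bytes, question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

For example, `ask(open("photo.png", "rb").read(), "What is in this image?")` would return the model's text reply.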
### Thinking mode
Qwen3.6 is a thinking model: the default chat template enables `<think>` blocks. For non-thinking usage, pass `--reasoning off` (in llama-cli) or set `chat_template_kwargs.enable_thinking=false` in the API.
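The API-side toggle can be sketched as below, assuming an OpenAI-compatible llama-server endpoint (the helper name is ours; `chat_template_kwargs.enable_thinking` is the field named above):

```python
# Sketch: build a chat request body that toggles Qwen3.6's thinking mode
# via chat_template_kwargs, as described above. The field name follows
# this card; the helper itself is illustrative.
import json

def build_request(prompt: str, thinking: bool = True) -> str:
    body = {
        "messages": [{"role": "user", "content": prompt}],
        # Forwarded by the server to the Jinja chat template:
        "chat_template_kwargs": {"enable_thinking": thinking},
    }
    return json.dumps(body)
```

POST the returned JSON to `/v1/chat/completions` with `Content-Type: application/json`, as in any OpenAI-compatible client.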
## About the architecture
Qwen3.6-35B-A3B is a hybrid attention + SSM MoE model with 40 layers, 256 experts (8 active per token), and 35B total / 3B active parameters. The NVFP4 source from mmangkad keeps the standard attention projections, shared expert FFN (*_shexp), SSM linear_attn blocks, and embeddings at higher precision — only the routed expert FFN matmul (120 tensors: 40 layers × 3 projections) is NVFP4. The variants above differ only in how those non-expert tensors are stored.
## Sources & credits

- Base model: Qwen/Qwen3.6-35B-A3B by the Alibaba Qwen team — Apache 2.0
- NVFP4 calibration source: mmangkad/Qwen3.6-35B-A3B-NVFP4 (NVIDIA ModelOpt v0.43)
- mmproj source: official BF16 weights from Qwen/Qwen3.6-35B-A3B
- Tooling: llama.cpp `convert_hf_to_gguf.py` and `llama-quantize`
## License
Apache 2.0, inherited from the upstream model.