MagicQuant Hybrids (v2.0) - Qwen3.6-35B-A3B Uncensored (By llmfan46)

MagicQuant is a benchmark-driven GGUF hybrid discovery and validation system focused on finding real, practical GGUF quants specific to each architecture.

Whether it's a pure baseline model built by llama.cpp, learned tensor configurations from Unsloth, or a custom built MagicQuant hybrid, the model table below shows quants that have won dominance checks, survived collapse spaces, and/or were found to be nonlinearly better. Instead of dumping every quant type possible, MagicQuant tests, validates, and brutally murders anything deemed unworthy.

Support MagicQuant

I’m a solo developer working full-time for myself to achieve my dream, and I build open-source code on the side. If you like any of my work, buying me a coffee is always appreciated. Otherwise, I hope you enjoy it; maybe give me a star or something, or just send me good vibes. Either way, thank you!

Click here to see ways to support: BTC, PayPal, GitHub Sponsors.

Clone Notice

This repository did not run through the full MagicQuant evolution/search pipeline. It is a clone of the final survivor tensor configurations from magiccodingman/Qwen3.6-35B-A3B-MagicQuant-GGUF, rebuilt and benchmarked locally for this model.

The archived MagicQuant JSON files in magicquant-manifest/ are copied from the source release for durability. The clone benchmark JSON and the table below are from this clone run, so those metrics reflect the rebuilt outputs in this repository.


Final survivors

Name          Provider    KLD       Size (GB)  Download
LM-Q8_0       llama.cpp   0.004771  36.91      Link
MQ-Q6_K_1     MagicQuant  0.005383  31.59      Link
MQ-Q5_K_1     MagicQuant  0.006012  29.19      Link
MQ-Q5_K_S_1   MagicQuant  0.007155  26.33      Link
MQ-Q4_K_M_1   MagicQuant  0.007832  24.82      Link
MQ-Q4_K_M_2   MagicQuant  0.010894  22.32      Link
MQ-IQ4_NL_1   MagicQuant  0.013040  20.89      Link
MQ-IQ3_M_1    MagicQuant  0.026825  17.60      Link
UD-IQ3_S      Unsloth     0.068513  13.68      Link
MQ-IQ2_XXS_1  MagicQuant  0.275805   9.59      Link
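The KLD column above is the mean KL divergence of each quant's per-token output distribution from the full-precision reference (lower is closer to the reference). As a rough illustration of the metric only, not the actual llama.cpp implementation, a minimal sketch:

```python
import math

def mean_kld(ref_probs, quant_probs):
    """Mean per-token KL divergence D(ref || quant).

    ref_probs / quant_probs: lists of per-token probability
    distributions (each a list of floats summing to 1).
    Illustrative only; llama.cpp computes this from model
    logits over an evaluation corpus.
    """
    total = 0.0
    for p, q in zip(ref_probs, quant_probs):
        # Sum p_i * log(p_i / q_i), skipping zero-probability terms.
        total += sum(pi * math.log(pi / qi)
                     for pi, qi in zip(p, q) if pi > 0)
    return total / len(ref_probs)
```

Identical distributions give a KLD of 0; the more the quant's distribution drifts from the reference, the larger the value, which is why 0.004771 (Q8_0) sits near lossless while 0.275805 (IQ2_XXS) does not.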

Provider credits

  • llama.cpp — Baseline quantization formats and llama.cpp tooling.

Warning - Is MagicQuant Better? (hint: how you frame the question matters)

External/custom baselines are normalized into MagicQuant's controlled comparison flow. MagicQuant rebuilds a learned baseline under native-source / MagicQuant-controlled conditions, including its own imatrix handling, so hybrids and external baselines (like Unsloth) can be judged on a more equal footing. That does not mean MagicQuant proved the original upstream artifact or upstream imatrix was worse. These comparisons exist for internal hybrid-search consistency and level-playing-field comparisons, not as a universal judgment of the original creator's exact release artifact.

Easier-to-digest explanation:

MagicQuant compares and benchmarks the model's quant-to-tensor configurations, not the original artifact. There are different reasons MagicQuant chooses to lift up a winning quant; not all winners are purely "better". It depends heavily on a variety of factors, but choices are always documented in the repo under the manifest folder, so you can always see what decisions the automated system made, and why.

So, MagicQuant can confidently tell you: "under the same quantization-to-tensor configuration and identical imatrix, with this benchmark, I deemed this a winner."

Re-Uploading External Provider Baselines

By default, if an external provider like Unsloth is deemed the winner, the repo should generally link directly to the original provider instead of re-hosting the quant. External GGUFs are normally only re-uploaded when a specific winning variant does not already exist (e.g. Heretic models or similar).


Release metadata

  • Final survivor metrics — full file names, KLD, PPL, PPL delta %, byte sizes, download targets, and replacement lineage. PPL delta % is measured against the native/reference PPL when available; negative is better and larger positive values are worse.
  • Hybrid tensor map — tensor-group assignments and effective-state details for MagicQuant hybrid GGUFs.
  • Clone tensor configs — exact per-GGUF tensor quantization maps for reproducing this final output list in repository clone mode.
  • Isolation samples — isolated base/group probe samples with KLD, PPL, PPL delta %, and size truth.
  • Bad trade details — structured bad-trade pruning decisions from the isolation optimizer.
  • Clone benchmark summary — fresh benchmark results from this clone run.
  • Replacement details — structured details for baselines or anchors removed from the final download table, including reason codes, KLD deltas, PPL delta %, and size deltas.
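The PPL delta % convention described above is a plain relative change against the native/reference perplexity. A minimal sketch (the helper name is hypothetical, not MagicQuant's actual code):

```python
def ppl_delta_pct(ppl_quant: float, ppl_ref: float) -> float:
    """Relative perplexity change vs the reference, in percent.

    Negative means the quant scored lower PPL than the reference;
    larger positive values are worse, matching the convention used
    in the final survivor metrics.
    """
    return (ppl_quant - ppl_ref) / ppl_ref * 100.0
```

For example, a quant at PPL 10.5 against a reference of 10.0 reports +5.0%, while one at 9.5 reports a negative (better) delta.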
Replacement reason codes
  • STRICT_DOMINANCE — the winner was no larger and had lower real KLD than the removed anchor.
  • NEAR_BASELINE_PREMIUM — the winner used only the configured near-baseline size premium and beat the real linear KLD trade line.
  • INTERIOR_DISCOVERY — the winner was selected as a useful interior point inside a size/KLD gap between anchors.
  • SPACING_COLLAPSE — two candidates were too close in practical output space, so the stronger one was kept.
  • FINAL_DOMINANCE — a later validated survivor dominated this artifact in final real benchmark comparison.
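The first two reason codes above can be sketched as simple predicates. These function names, the premium parameter, and the interpolation details are illustrative assumptions, not MagicQuant's actual implementation:

```python
def strict_dominance(winner_size, winner_kld, anchor_size, anchor_kld):
    """STRICT_DOMINANCE: winner is no larger and has lower real KLD."""
    return winner_size <= anchor_size and winner_kld < anchor_kld

def near_baseline_premium(winner_size, winner_kld,
                          anchor_size, anchor_kld,
                          baseline_size, baseline_kld,
                          max_premium=0.02):
    """NEAR_BASELINE_PREMIUM (sketch): winner stays within a
    configured size premium over the anchor AND beats the linear
    size/KLD trade line drawn between the anchor and the baseline.
    max_premium=0.02 (2%) is an assumed example value."""
    if winner_size > anchor_size * (1 + max_premium):
        return False
    # Linearly interpolate the expected KLD at the winner's size.
    t = (winner_size - anchor_size) / (baseline_size - anchor_size)
    expected_kld = anchor_kld + t * (baseline_kld - anchor_kld)
    return winner_kld < expected_kld
```

A winner that is both smaller and lower-KLD than an anchor passes strict dominance outright; one that spends a small size premium must still land below the straight line between the anchor and the baseline to earn its slot.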

Underlined names in the table replaced or ultimately inherited the replacement of another artifact. Hover the name for the short replacement summary, or inspect magicquant-manifest/magicquant.replacements.json for exact KLD/PPL/size deltas.


Model metadata

  • Format — GGUF
  • Model size — 35B params
  • Architecture — qwen35moe