Llama-3.2-3B-RYS-21-24

Llama-3.2-3B-Instruct with decoder layers 21-23 duplicated (the half-open block (21, 24)). A late-stack math circuit runs twice on every forward pass.

28 base layers β†’ 31 after duplication. No training, no merging, no weight changes.
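As a minimal sketch of the mechanism, the duplication is pure index arithmetic over the layer stack. This uses a plain list in place of real decoder layers, and the function name is illustrative, not from the release pipeline:

```python
# Sketch of train-free block duplication, assuming block (21, 24) means
# the half-open range: decoder layers 21-23 are repeated in place.
def duplicate_block(layers, start, end):
    """Return a new layer list in which layers[start:end] runs twice.

    The copy is inserted immediately after the original block, so the
    forward pass visits ... 20, 21, 22, 23, 21, 22, 23, 24 ...
    """
    return layers[:end] + layers[start:end] + layers[end:]

base = list(range(28))               # Llama-3.2-3B-Instruct: 28 decoder layers
rys = duplicate_block(base, 21, 24)

assert len(rys) == 31                # 28 base layers -> 31 after duplication
assert rys[21:27] == [21, 22, 23, 21, 22, 23]
```

No weights change; only the execution order of existing layers does, which is why the intervention is train-free.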

Math 47.0 → 61.32 (+14.32, the biggest math lift in the v2 corpus). Reasoning 88.24 → 82.36 (−5.88). EQ 84.38 → 84.11 (−0.27).

Results

| Metric    | Baseline | RYS (21,24) | Delta  |
|-----------|----------|-------------|--------|
| Math      | 47.0     | 61.32       | +14.32 |
| EQ        | 84.38    | 84.11       | −0.27  |
| Reasoning | 88.24    | 82.36       | −5.88  |

The math amplifier. Llama-3.2-3B-Instruct has the second-highest baseline reasoning in the v2 corpus (88.24%), already near ceiling, so RYS has little room to lift reasoning. But the same train-free intervention applied to the late-stack block (21,24) produces the biggest math lift anywhere in the corpus (+14.32 absolute, ~30% relative). Math and reasoning circuits sit at different depths in this model; the math one still has headroom.

Pick this when math throughput matters and reasoning is already strong enough. The within-family contrast is the starkest in the corpus: the sibling Llama-3.2-1B-RYS-10-13-GGUF lifts reasoning from 0% to 64.71% with the same block-duplication mechanism applied at a different depth.

Usage

```shell
llama-server -m Llama-3.2-3B-RYS-21-24-Q4_K_M.gguf -ngl 99
```
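A minimal usage sketch: once the server above is running, llama.cpp exposes an OpenAI-compatible chat endpoint. The port (8080), endpoint path, and prompt here are llama.cpp defaults and illustration, not specific to this model:

```shell
# Assumes llama-server from the line above is running on its default port (8080).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Compute 61.32 - 47.0."}]}'
```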

Full sweep data

54 configurations tested. The 3-layer block (21,24) is the best combined pick and the math-optimal one. Full per-config sweep + cross-architecture analysis: v2 dataset.

Part of the RYS Sovereign Collection v2.


Where this sits in the Sovereign Collection

v1 β€” Qwen2.5 cross-scale + Qwen3-32B headline crossover. 5 model repos.

v2 β€” cross-architecture corpus. 21 model variants across 10 architecture families. Inverse correlation (r = βˆ’0.726): weak baselines lift more, in their weakest dimension. The Llama-3.2 family alone (1B + 3B) spans the entire baseline-vs-magnitude curve in the v2 corpus. 13 deployable RYS-applied weight repos covering every non-zero-lift variant.

Within-family sibling: john-broadway/Llama-3.2-1B-RYS-10-13-GGUF β€” the 0%β†’64.71% reasoning unlock at the weak-baseline end.

Credit

John Broadway, with collaboration from Claude (Opus 4.6 in April 2026 sweep generation and build pipeline; Opus 4.7 in May 2026 cross-architecture analysis and publication). Original RYS method by David Ng on Qwen2-72B; sweep + probe toolkit by alainnothere.
