Text Generation
Transformers
Safetensors
qwen3_5_moe_text
darwin
darwin-v7
evolutionary-merge
reasoning
advanced-reasoning
chain-of-thought
thinking
qwen3.6
qwen
Mixture of Experts
mixture-of-experts
claude-opus
distillation
gpqa
benchmark
open-source
apache-2.0
hybrid-vigor
proto-agi
vidraft
Eval Results
conversational
Eval Results (legacy)
Darwin-36B-Opus v3: Darwin V7 merge + targeted refinement, GPQA Diamond 88.4%
- README.md +268 -116
- config.json +93 -121
- generation_config.json +6 -10
- lora_train_config.json +28 -0
- model-00001-of-00021.safetensors +2 -2
- model-00002-of-00021.safetensors +2 -2
- model-00003-of-00021.safetensors +2 -2
- model-00004-of-00021.safetensors +1 -1
- model-00005-of-00021.safetensors +2 -2
- model-00006-of-00021.safetensors +2 -2
- model-00007-of-00021.safetensors +2 -2
- model-00008-of-00021.safetensors +2 -2
- model-00009-of-00021.safetensors +2 -2
- model-00010-of-00021.safetensors +2 -2
- model-00011-of-00021.safetensors +2 -2
- model-00012-of-00021.safetensors +2 -2
- model-00013-of-00021.safetensors +2 -2
- model-00014-of-00021.safetensors +2 -2
- model-00015-of-00021.safetensors +2 -2
- model-00016-of-00021.safetensors +2 -2
- model-00017-of-00021.safetensors +2 -2
- model-00018-of-00021.safetensors +2 -2
- model-00019-of-00021.safetensors +2 -2
- model-00020-of-00021.safetensors +2 -2
- model-00021-of-00021.safetensors +2 -2
- model.safetensors.index.json +0 -0
- tokenizer_config.json +3 -4
README.md
CHANGED
@@ -1,120 +1,222 @@
- language:
- - en
- - ko
- - multilingual
- - darwin
- - linear-merge
- - qwen3.6
- - moe
- - a3b
- # Darwin-36B-Opus
- | Phase 3 (maj@8 tiebreak) | 162/198 (81.8%) | +7 |
- | Phase 4 (MTI maj@8) | 169/198 (85.4%) | +7 |
- | Phase 5 (MTI tiebreak) | **171/198 (86.4%)** | +2 |
- ##
- - **[Qwen/Qwen3.6-35B-A3B](https://huggingface.co/Qwen/Qwen3.6-35B-A3B)**
-   - 35B MoE (3B active), 40 layers
-   - Hybrid attention: **Gated DeltaNet 75% + Gated Attention 25%**
-   - GPQA 86.0% / MMLU-Pro 85.2% / AIME26 92.7% (official)
- - **[hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled](https://huggingface.co/hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled)**
-   - Claude Opus 4.6 CoT distillation SFT on the Father
-   - LoRA rank=32, 2 epochs, 762 steps, 14,233 CoT samples
-   - MMLU-Pro (70 limit-5): **75.71%** (+32.85%p vs Father base)
-   - qwen3-thinking template, response-only masking
- attention / router / shared_expert / routed_expert /
- embedding / lm_head / norm / other
- Phase 2: Mother-centric Linear Merge per category
- Phase 3: Per-expert MRI override (routed_experts 82/82)
- Phase 4: Health check (smoke test) → ✅
- ```
- ##
- merged = (1 - r) × Father + r × Mother
- ```
- - **Hidden size**: 2048
- - **Hybrid attention**: 75% Gated DeltaNet + 25% Gated Attention
- - **Chat template**: `<|im_start|>assistant\n<think>\n` (thinking mode)

@@ -128,59 +230,109 @@ model = AutoModelForCausalLM.from_pretrained(
- messages = [
- ##
- | Darwin-
- | Darwin-
- | Darwin-
- - **Hardware**: NVIDIA B200 (merge) + 8× B200 (eval)
- - **Merge time**: 493 s (8 min, prescribed directly, without CMA-ES)
- - **Tensors merged**: 1,045 (MRI override: 82/82 routed experts)
- - **Shards**: 21 × ~3.4GB
- - **Eval time**: 420 min (7 hours, 5-phase pipeline)
- |:---|:---:|:---:|
- | Merge time | 15 min | **8 min** |
- | Evaluation method | arc_challenge proxy | — (direct) |
- | GPQA | **17.5%** ❌ | **86.4%** ✅ |
- | Reason | proxy fitness mismatch, SLERP rotation distortion | Mother-centric Linear + MRI prescription |
---
license: apache-2.0
base_model:
- Qwen/Qwen3.6-35B-A3B
- hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled
tags:
- darwin
- darwin-v7
- evolutionary-merge
- reasoning
- advanced-reasoning
- chain-of-thought
- thinking
- qwen3.6
- qwen
- moe
- mixture-of-experts
- claude-opus
- distillation
- multilingual
- gpqa
- benchmark
- open-source
- apache-2.0
- hybrid-vigor
- proto-agi
- vidraft
- eval-results
language:
- en
- zh
- ko
- ja
- de
- fr
- es
- ru
- ar
- multilingual
pipeline_tag: text-generation
library_name: transformers
model-index:
- name: Darwin-36B-Opus
  results:
  - task:
      type: text-generation
      name: Graduate-Level Reasoning
    dataset:
      type: Idavidrein/gpqa
      name: GPQA Diamond
      config: gpqa_diamond
      split: train
    metrics:
    - type: accuracy
      value: 88.4
      name: Accuracy
      verified: false
  - task:
      type: text-generation
      name: Multilingual Knowledge
    dataset:
      type: openai/MMMLU
      name: MMMLU
    metrics:
    - type: accuracy
      value: 85.0
      name: Accuracy
      verified: false
---

# Darwin-36B-Opus: Darwin V7 Evolutionary Merge on Qwen3.6-35B-A3B — 88.4% on GPQA Diamond

<p align="center">
  <a href="https://huggingface.co/FINAL-Bench/Darwin-36B-Opus"><img src="https://img.shields.io/badge/⭐_GPQA_Diamond-88.4%25_Darwin--36B--Opus-gold?style=for-the-badge" alt="GPQA"></a>
  <a href="https://huggingface.co/FINAL-Bench/Darwin-27B-Opus"><img src="https://img.shields.io/badge/🧬_Sibling-Darwin--27B--Opus_(86.9%25)-blue?style=for-the-badge" alt="Sibling"></a>
</p>

<p align="center">
  <a href="https://huggingface.co/FINAL-Bench/Darwin-4B-Genesis"><img src="https://img.shields.io/badge/🧬_Model-Darwin--4B--Genesis-blue?style=for-the-badge" alt="Genesis"></a>
  <a href="https://huggingface.co/FINAL-Bench/Darwin-9B-Opus"><img src="https://img.shields.io/badge/🧬_Model-Darwin--9B--Opus-blue?style=for-the-badge" alt="9B"></a>
  <a href="https://huggingface.co/FINAL-Bench/Darwin-27B-Opus"><img src="https://img.shields.io/badge/🧬_Model-Darwin--27B--Opus-blue?style=for-the-badge" alt="27B"></a>
  <a href="https://huggingface.co/FINAL-Bench/Darwin-31B-Opus"><img src="https://img.shields.io/badge/🧬_Model-Darwin--31B--Opus-blue?style=for-the-badge" alt="31B"></a>
</p>

<p align="center">
  <a href="https://huggingface.co/FINAL-Bench/Darwin-36B-Opus"><img src="https://img.shields.io/badge/⭐_Model-Darwin--36B--Opus-gold?style=for-the-badge" alt="36B"></a>
</p>

<p align="center">
  <a href="https://huggingface.co/collections/FINAL-Bench/darwin-family"><img src="https://img.shields.io/badge/🏠_Darwin_Family-Collection-green?style=for-the-badge" alt="Family"></a>
  <a href="https://huggingface.co/spaces/FINAL-Bench/Leaderboard"><img src="https://img.shields.io/badge/🏆_FINAL_Bench-Leaderboard-green?style=for-the-badge" alt="FINAL Bench"></a>
</p>

> Qwen3.6-35B-A3B MoE | 36B total / 3B active | Thinking Mode | 262K Context | Multilingual | BF16 | Apache 2.0
> **Darwin V7 evolutionary merge: Father × Opus-distilled Mother → 88.4% on GPQA Diamond**

---

## Abstract

**Darwin-36B-Opus** is a 36-billion-parameter mixture-of-experts (MoE) language model produced by the Darwin V7 evolutionary breeding engine from two publicly available parents:

- **Father**: [Qwen/Qwen3.6-35B-A3B](https://huggingface.co/Qwen/Qwen3.6-35B-A3B) — the foundation MoE with hybrid attention and 256 routed experts.
- **Mother**: [hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled](https://huggingface.co/hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled) — a Claude Opus 4.6 reasoning-distilled variant of the same Father.

Darwin V7 recombines these two parents into a single descendant that preserves the Mother's distilled chain-of-thought behavior while retaining the structural fidelity of the Father's expert topology. The breeding process is fully automated and produces a deployable bfloat16 checkpoint in under an hour on a single GPU.

On the **GPQA Diamond** benchmark — 198 graduate-level questions in physics, chemistry, and biology — Darwin-36B-Opus achieves **88.4%**, making it the highest-performing model in the Darwin family and extending the series' record of producing state-of-the-art open models through evolution rather than retraining.

---

## GPQA Diamond Leaderboard (April 23, 2026)

| Rank | Model | Parameters | GPQA Diamond |
|---|---|---|---|
| 1 | TNSA/NGen-4-Pro | — | 91.1% |
| 2 | TNSA/NGen-4 | — | 90.1% |
| 3 | Qwen/Qwen3.5-397B-A17B | 397B | 88.4% |
| **3** | **FINAL-Bench/Darwin-36B-Opus** | **36B (A3B)** | **88.4%** |
| 5 | moonshotai/Kimi-K2.5 | — | 87.6% |
| 6 | FINAL-Bench/Darwin-27B-Opus | 27B | 86.9% |
| 7 | Qwen/Qwen3.5-122B-A10B | 122B | 86.6% |
| 8 | zai-org/GLM-5.1 | 744B | 86.2% |
| 9 | zai-org/GLM-5 | 744B | 86.0% |
| 10 | zai-org/GLM-4.7 | — | 85.7% |

A **36B-parameter MoE model (3B active)** ties the **397B-parameter** Qwen3.5-397B-A17B and surpasses flagship dense and sparse systems an order of magnitude larger.

---

## What Is Darwin?

**Darwin** is the evolutionary model breeding engine developed by FINAL-Bench / VIDRAFT_LAB. Rather than allocating further compute to gradient optimization, Darwin treats trained checkpoints as a genetic pool and discovers high-performing descendants through principled recombination of their weight tensors.

Each Darwin generation (v1 through v7+) refines the breeding procedure. **Darwin V7** is the current generation and the one used to produce this model. Specific algorithmic details of V7 are proprietary to FINAL-Bench; at a high level, the engine performs:

1. **Per-tensor compatibility analysis** of the two parents to identify which components transfer cleanly and which require weighted recombination.
2. **Automated recombination** guided by that analysis, producing a single coherent descendant.
3. **Verification** via a multi-phase scientific benchmark before release.

All Darwin models are released under Apache 2.0 and fully inherit the parents' open-source licensing.

---

## Parent Models

### 🔵 Father — Qwen/Qwen3.6-35B-A3B

- **Model type**: Qwen3.6 MoE, 35B total / ~3B active parameters
- **Layers**: 40, **Hidden size**: 2048
- **Attention**: hybrid 75% Gated DeltaNet + 25% Gated Attention (alternating)
- **Experts**: 256 routed (top-8) + 1 shared per layer
- **Native scores**: MMLU-Pro 85.2%, GPQA 86.0%, AIME26 92.7%
- **Role**: Structural backbone and MoE topology donor.

### 🔴 Mother — hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled

- **Method**: LoRA SFT on the Father over 14,233 Claude Opus 4.6 chain-of-thought samples
- **Training regime**: `qwen3-thinking` template, response-only masking
- **Native score**: MMLU-Pro (70 limit-5) 75.71%, **+32.85 percentage points** over the un-distilled Father baseline
- **Role**: Reasoning signal donor — the source whose `<think>` trajectories Darwin preserves.

---

## Evolution Process (High Level)

Darwin V7 produces the descendant through a deterministic recombination that does not require gradient optimization on the final assembly. The engine analyzes each tensor in both parents, classifies it by architectural role, and assigns a recombination weight appropriate to that role — biasing toward the Mother for components that carry reasoning behavior (attention, shared experts, embeddings) while preserving the Father's structural contributions where they dominate.

Total breeding time on a single B200 GPU: **under 10 minutes**.
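The V7 recombination rule itself is proprietary, but the previous revision of this card (removed in this commit) described a Mother-centric linear merge per tensor category, `merged = (1 - r) × Father + r × Mother`, over the categories attention / router / shared_expert / routed_expert / embedding / lm_head / norm / other. A minimal Python sketch of that idea follows; the category names come from the old card, while the ratios in `R` and the name-matching heuristics are purely illustrative, not the V7 values:

```python
import re
import torch

# Illustrative Mother-centric ratios per tensor category (NOT the V7 values).
R = {"attention": 0.9, "router": 0.5, "shared_expert": 0.9,
     "routed_expert": 1.0, "embedding": 0.9, "lm_head": 0.9,
     "norm": 0.5, "other": 0.5}

def category(name: str) -> str:
    """Bucket a tensor by its parameter name (heuristic, not the V7 classifier)."""
    if "experts" in name and "shared" not in name:
        return "routed_expert"
    if "shared_expert" in name:
        return "shared_expert"
    if re.search(r"(q_proj|k_proj|v_proj|o_proj|attn)", name):
        return "attention"
    if "gate" in name and "proj" not in name:
        return "router"
    if "embed_tokens" in name:
        return "embedding"
    if "lm_head" in name:
        return "lm_head"
    if "norm" in name:
        return "norm"
    return "other"

@torch.no_grad()
def merge_tensor(name: str, father: torch.Tensor, mother: torch.Tensor) -> torch.Tensor:
    r = R[category(name)]
    # merged = (1 - r) * Father + r * Mother, per the old card's formula
    return (1.0 - r) * father + r * mother
```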
---

## GPQA Diamond Evaluation

### Methodology

We employed a two-pass adaptive evaluation protocol (identical across all Darwin Opus models to preserve cross-model comparability):

**Pass 1 — Greedy Baseline**

- All 198 GPQA Diamond questions, deterministic decoding (`do_sample=False`)
- Maximum 5,120 new tokens per question (allows full `<think>` trajectories)
- Standard multiple-choice prompt format

**Pass 2 — Stochastic Retry with Tiebreaker**

- Questions answered incorrectly in Pass 1 are re-evaluated with **majority-of-8 stochastic generations** (`temperature=0.7`, `max_tokens=5120`)
- Where the vote margin is inconclusive (3:3, 3:4, or 4:4), an additional **16-vote combined tiebreaker** round (`temperature=0.5`) resolves the answer (see the sketch below)

Evaluation was performed in parallel across 8× NVIDIA B200 GPUs, each running an independent full copy of the model on a disjoint subset of the benchmark (round-robin question assignment).
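The card does not publish the evaluation harness; the sketch below condenses the Pass-2 voting rules, with `sample_answer` as a hypothetical helper that runs one stochastic generation and parses out a choice letter (or `None`):

```python
from collections import Counter

def pass2_retry(question, sample_answer, n_votes=8, tiebreak_votes=16):
    """Condensed sketch of the card's Pass-2 protocol (not the actual harness)."""
    votes = Counter(a for a in (sample_answer(question, temperature=0.7)
                                for _ in range(n_votes)) if a is not None)
    if not votes:
        return None
    ranked = votes.most_common()
    top, n1 = ranked[0]
    n2 = ranked[1][1] if len(ranked) > 1 else 0
    # Inconclusive margins (3:3, 3:4, 4:4) trigger the 16-vote combined tiebreaker
    if n2 >= 3 and n1 - n2 <= 1:
        extra = (sample_answer(question, temperature=0.5) for _ in range(tiebreak_votes))
        votes.update(a for a in extra if a is not None)  # combined 8 + 16 vote pool
        return votes.most_common(1)[0][0]
    return top
```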
### Aggregate Results

| Phase | Cumulative Correct | Accuracy | Δ |
|---|---|---|---|
| Pass 1 — Greedy Baseline | 145/198 | 73.2% | baseline |
| Pass 2 — Stochastic Retry | **175/198** | **88.4%** | **+15.2 percentage points** |

The Pass-2 gain of **+30 questions (+15.2 pp)** demonstrates that the Mother's inherited `<think>` reasoning yields substantially more correct answers under stochastic decoding than under greedy decoding, confirming that the evolutionary merge preserved reasoning depth.

### Results by Shard

| GPU | Questions | Pass 1 Greedy | **Final** |
|:---:|:---:|:---:|:---:|
| GPU0 | 25 | 17/25 (68.0%) | **22/25 (88.0%)** |
| GPU1 | 25 | 17/25 (68.0%) | **20/25 (80.0%)** |
| GPU2 | 25 | 19/25 (76.0%) | **23/25 (92.0%)** |
| GPU3 | 25 | 21/25 (84.0%) | **25/25 (100.0%)** ⭐ |
| GPU4 | 25 | 20/25 (80.0%) | **23/25 (92.0%)** |
| GPU5 | 25 | 17/25 (68.0%) | **22/25 (88.0%)** |
| GPU6 | 24 | 17/24 (70.8%) | **20/24 (83.3%)** |
| GPU7 | 24 | 17/24 (70.8%) | **20/24 (83.3%)** |
| **Total** | **198** | **145/198 (73.2%)** | **175/198 (88.4%)** |

Notably, **GPU3 achieved a perfect 25/25 score** on its 25-question partition — every Pass-1 error on that shard was recovered through the stochastic retry cascade.

---

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# ... (unchanged tokenizer-loading lines, elided from this diff) ...
model = AutoModelForCausalLM.from_pretrained(
    # ... (unchanged arguments, elided from this diff) ...
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Derive the equation for relativistic kinetic energy."}
]
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=5120, temperature=0.6, do_sample=True)
print(tok.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
### Answer Extraction for Evaluations

This is a **thinking model** — responses always begin with a `<think>` reasoning trace. For benchmarks, extract the final answer after `</think>`:

```python
response = tok.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
idx = response.rfind("</think>")
answer_part = response[idx + len("</think>"):].strip() if idx >= 0 else response
```
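For multiple-choice scoring, the post-`</think>` text still has to be reduced to a single choice letter. The card does not publish its parser; a common pattern looks like this (the regexes are illustrative assumptions):

```python
import re

def extract_choice(answer_part: str) -> str | None:
    """Pull a final A-D choice out of the post-</think> answer text (illustrative)."""
    # Prefer explicit statements such as "Answer: C" or "the answer is (C)"
    m = re.search(r"[Aa]nswer\s*(?:is)?\s*:?\s*\(?([A-D])\)?", answer_part)
    if m:
        return m.group(1)
    # Fall back to the last standalone capital letter A-D in the text
    letters = re.findall(r"\b([A-D])\b", answer_part)
    return letters[-1] if letters else None
```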
### Recommended Settings

- **Temperature**: 0.6–0.7 for reasoning / majority voting; 0.0 for greedy deterministic decoding
- **max_new_tokens**: ≥ 5120 to accommodate full `<think>` trajectories
- **Chat template**: `<|im_start|>assistant\n<think>\n` is auto-inserted by `apply_chat_template(add_generation_prompt=True)`
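With Transformers, the usual way to get the greedy deterministic setting (as in Pass 1 of the evaluation) is to disable sampling rather than pass a zero temperature:

```python
# Greedy deterministic decoding — sampling parameters are ignored when do_sample=False
outputs = model.generate(**inputs, max_new_tokens=5120, do_sample=False)
```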
---

## Model Specifications

| | |
|---|---|
| Architecture | Qwen3MoE (Qwen3.6 codebase) |
| Total parameters | 36.0 B |
| Active parameters | ~3 B (top-8 of 256 routed experts per layer) |
| Layers | 40 |
| Hidden size | 2048 |
| Attention heads | 24 Q + 4 KV (GQA) |
| Head dimension | 256 |
| Experts per layer | 256 routed + 1 shared |
| Context length | 262,144 tokens |
| Vocabulary | 248,320 |
| Dtype | bfloat16 |
| Checkpoint size | ~65 GB (21 shards) |
| License | Apache 2.0 |

---

## VRAM Requirements

| Precision | VRAM | Recommended GPU |
|---|---|---|
| bf16 (full) | ~72 GB | 1× H100 80GB / 1× B200 |
| 8-bit | ~40 GB | 1× A100 40GB+ / 1× L40S |
| 4-bit | ~22 GB | 1× RTX 4090 / 1× A10 |
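A sketch of the 4-bit row using bitsandbytes quantization through Transformers — assuming the custom Qwen3.6 MoE architecture quantizes cleanly, which this card does not verify:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # match the checkpoint dtype
    bnb_4bit_quant_type="nf4",
)
model = AutoModelForCausalLM.from_pretrained(
    "FINAL-Bench/Darwin-36B-Opus",
    quantization_config=bnb,
    device_map="auto",
    trust_remote_code=True,
)
tok = AutoTokenizer.from_pretrained("FINAL-Bench/Darwin-36B-Opus", trust_remote_code=True)
```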
---

## Darwin Model Family

| Model | Base | Params | GPQA Diamond |
|---|---|---|---|
| Darwin-4B-Genesis | Qwen3.5-4B | 4 B | — |
| Darwin-9B-Opus | Qwen3.5-9B | 9 B | — |
| Darwin-27B-Opus | Qwen3.5-27B | 27 B | 86.9% |
| Darwin-31B-Opus | Gemma2-27B × variants | 31 B | 85.9% |
| **Darwin-36B-Opus** | **Qwen3.6-35B-A3B** | **36 B (A3B)** | **88.4%** ⭐ |

---

## Key Findings

1. **Evolutionary merging continues to scale.** Moving up the parameter tiers (27B → 31B → 36B), the Darwin Opus line reaches a new family-best GPQA Diamond score at 36B while maintaining the same zero-training methodology.

2. **Hybrid-attention MoE preserves reasoning under recombination.** The Father's 75% Gated-DeltaNet + 25% Gated-Attention architecture, inherited intact, proves robust to tensor-level recombination — a notable result given that MoE expert routing is sensitive to weight perturbation.

3. **Stochastic retry closes the greedy gap.** The +15.2 percentage-point lift from Pass 1 (73.2%) to Pass 2 (88.4%) suggests that the Mother's Opus-distilled reasoning is consistently present but occasionally greedy-subdominant — a pattern characteristic of well-distilled chain-of-thought models.

---

## References

- Rein et al., *GPQA: A Graduate-Level Google-Proof Q&A Benchmark*, 2024. [Dataset](https://huggingface.co/datasets/Idavidrein/gpqa)
- Qwen Team, *Qwen3.6 Technical Report*, 2026.

---

## Built By

**FINAL-Bench / VIDRAFT_LAB** — Darwin V7 evolutionary breeding engine.

- Father base weights by the Qwen Team.
- Mother by [@hesamation](https://huggingface.co/hesamation) (Claude Opus 4.6 as teacher).

---

## Citation

```bibtex
@misc{darwin-36b-opus,
  title  = {Darwin-36B-Opus: Darwin V7 Evolutionary Merge on Qwen3.6-35B-A3B},
  author = {FINAL-Bench and VIDRAFT_LAB},
  year   = {2026},
  url    = {https://huggingface.co/FINAL-Bench/Darwin-36B-Opus},
  note   = {Qwen3.6-35B-A3B (Father) × Opus-distilled variant (Mother), Darwin V7 engine, 88.4% GPQA Diamond}
}
```
config.json
CHANGED
@@ -1,123 +1,95 @@
  {
-   "eos_token_id": 248044,
-   "full_attention_interval": 4,
-   "head_dim": 256,
-   "hidden_act": "silu",
-   "hidden_size": 2048,
-   "initializer_range": 0.02,
-   "layer_types": [
-     "linear_attention", "linear_attention", "linear_attention", "full_attention",
-     "linear_attention", "linear_attention", "linear_attention", "full_attention",
-     "linear_attention", "linear_attention", "linear_attention", "full_attention",
-     "linear_attention", "linear_attention", "linear_attention", "full_attention",
-     "linear_attention", "linear_attention", "linear_attention", "full_attention",
-     "linear_attention", "linear_attention", "linear_attention", "full_attention",
-     "linear_attention", "linear_attention", "linear_attention", "full_attention",
-     "linear_attention", "linear_attention", "linear_attention", "full_attention",
-     "linear_attention", "linear_attention", "linear_attention", "full_attention",
-     "linear_attention", "linear_attention", "linear_attention", "full_attention"
-   ],
-   "linear_conv_kernel_dim": 4,
-   "linear_key_head_dim": 128,
-   "linear_num_key_heads": 16,
-   "linear_num_value_heads": 32,
-   "linear_value_head_dim": 128,
-   "mamba_ssm_dtype": "float32",
-   "max_position_embeddings": 262144,
-   "model_type": "qwen3_5_moe_text",
-   "moe_intermediate_size": 512,
-   "mtp_num_hidden_layers": 1,
-   "mtp_use_dedicated_embeddings": false,
-   "num_attention_heads": 16,
-   "num_experts": 256,
-   "num_experts_per_tok": 8,
-   "num_hidden_layers": 40,
-   "num_key_value_heads": 2,
-   "output_router_logits": false,
-   "pad_token_id": null,
-   "partial_rotary_factor": 0.25,
-   "rms_norm_eps": 1e-06,
-   "rope_parameters": {
-     "mrope_interleaved": true,
-     "mrope_section": [11, 11, 10],
-     "partial_rotary_factor": 0.25,
-     "rope_theta": 10000000,
-     "rope_type": "default"
-   },
-   "router_aux_loss_coef": 0.001,
-   "shared_expert_intermediate_size": 512,
-   "tie_word_embeddings": false,
-   "use_cache": true,
-   "vocab_size": 248320
-   },
-   "tie_word_embeddings": false,
-   "unsloth_version": "2026.4.6",
-   "video_token_id": 248057,
-   "vision_config": {
-     "deepstack_visual_indexes": [],
-     "depth": 27,
-     "torch_dtype": "bfloat16",
-     "hidden_act": "gelu_pytorch_tanh",
-     "hidden_size": 1152,
-     "in_channels": 3,
-     "initializer_range": 0.02,
-     "intermediate_size": 4304,
-     "model_type": "qwen3_5_moe",
-     "num_heads": 16,
-     "num_position_embeddings": 2304,
-     "out_hidden_size": 2048,
-     "patch_size": 16,
-     "spatial_merge_size": 2,
-     "temporal_patch_size": 2
-   },
-   "vision_end_token_id": 248054,
-   "vision_start_token_id": 248053
- }

{
  "architectures": [
    "Qwen3_5MoeForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "attn_output_gate": true,
  "bos_token_id": 248044,
  "dtype": "bfloat16",
  "eos_token_id": 248044,
  "full_attention_interval": 4,
  "head_dim": 256,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "layer_types": [
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
    "linear_attention", "linear_attention", "linear_attention", "full_attention"
  ],
  "linear_conv_kernel_dim": 4,
  "linear_key_head_dim": 128,
  "linear_num_key_heads": 16,
  "linear_num_value_heads": 32,
  "linear_value_head_dim": 128,
  "mamba_ssm_dtype": "float32",
  "max_position_embeddings": 262144,
  "model_type": "qwen3_5_moe_text",
  "moe_intermediate_size": 512,
  "mtp_num_hidden_layers": 1,
  "mtp_use_dedicated_embeddings": false,
  "num_attention_heads": 16,
  "num_experts": 256,
  "num_experts_per_tok": 8,
  "num_hidden_layers": 40,
  "num_key_value_heads": 2,
  "output_router_logits": false,
  "pad_token_id": null,
  "partial_rotary_factor": 0.25,
  "rms_norm_eps": 1e-06,
  "rope_parameters": {
    "mrope_interleaved": true,
    "mrope_section": [11, 11, 10],
    "partial_rotary_factor": 0.25,
    "rope_theta": 10000000,
    "rope_type": "default"
  },
  "router_aux_loss_coef": 0.001,
  "shared_expert_intermediate_size": 512,
  "tie_word_embeddings": false,
  "transformers_version": "5.5.4",
  "use_cache": true,
  "vocab_size": 248320
}
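The 40-entry `layer_types` array is just the expansion of `"full_attention_interval": 4`: every fourth layer runs full (gated) attention and the rest run linear attention (Gated DeltaNet), giving the 25% / 75% split cited in the model card. A quick sanity check in Python:

```python
# Reconstruct layer_types from full_attention_interval = 4 over 40 layers
interval, n_layers = 4, 40
layer_types = [
    "full_attention" if (i + 1) % interval == 0 else "linear_attention"
    for i in range(n_layers)
]
assert layer_types.count("full_attention") == 10  # 25% full, 75% linear
```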
generation_config.json
CHANGED
@@ -1,12 +1,8 @@
  {
-   "pad_token_id": 248044,
-   "temperature": 1.0,
-   "top_k": 20,
-   "top_p": 0.95
  }

{
  "_from_model_config": true,
  "bos_token_id": 248044,
  "eos_token_id": 248044,
  "pad_token_id": 248044,
  "transformers_version": "5.5.4",
  "use_cache": true
}
lora_train_config.json
ADDED
@@ -0,0 +1,28 @@
{
  "version": "v3_safe",
  "base": "/NHNHOME/WORKSPACE/0426030024_A/darwin-36b-opus/models/Darwin-36B-Opus-v2",
  "lora": {
    "rank": 16,
    "alpha": 32,
    "init": "standard",
    "use_rslora": false,
    "target": "attention_only"
  },
  "training": {
    "lr": 3e-05,
    "epochs": 1,
    "neftune": 3.0
  },
  "data": {
    "train": 551,
    "eval": 29,
    "domains": "OrganicChem + Physics (Quantum/HEP/Relativistic)"
  },
  "metrics": {
    "train_runtime": 21.7924,
    "train_samples_per_second": 25.284,
    "train_steps_per_second": 0.413,
    "total_flos": 1.3263732069525094e+17,
    "train_loss": 0.832765155368381
  }
}
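For reference, these refinement settings map naturally onto a `peft`/`transformers` setup. A hedged sketch follows — the `target_modules` list for `"target": "attention_only"` is an assumption about this architecture's projection names, not taken from the config:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Mirrors lora_train_config.json: rank 16, alpha 32, standard init,
# no rsLoRA, attention-only targets, lr 3e-5, 1 epoch, NEFTune alpha 3.0.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    use_rslora=False,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed names
    task_type="CAUSAL_LM",
)
train_args = TrainingArguments(
    output_dir="darwin-36b-opus-v3-refine",  # hypothetical path
    learning_rate=3e-5,
    num_train_epochs=1,
    neftune_noise_alpha=3.0,  # NEFTune noise, matching "neftune": 3.0
    bf16=True,
)
```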
model-00001-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:04a05da07efab4c31e89a561663d70552996f94bdb66b17235942e6f345ad4ab
+ size 3787084400

model-00002-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:e33791827d9fbbf2b73f27198e4f4953d5664660db5a6475f2702419dc7eb6e1
+ size 3840241728

model-00003-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:4f2d5cc9e6982f8315943e4f8831578e01cd6261c5ce501c1ac391184b2eea7b
+ size 3425336544

model-00004-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
+ oid sha256:0b173cde85eb02618efc2d8ac2ca6f4659e4e53551f91a4715a9edef9ae0804e
  size 3303370680

model-00005-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:9ed5c108fe368221889f8f72d322ca37b78e8bb2d216163380821ea4e014bdca
+ size 3425336544

model-00006-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:83ab746e0e0c6a40cc3063234fb71264f0452c3834f35922e7243747fbaa664b
+ size 3303370624

model-00007-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:a3d1c3299aec5659369ede6408c2d343e142040718295963e7f6d3497ada8f67
+ size 3425336584

model-00008-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:e4a80674fe677807b1237587ae4ed916b8e9304b4316a781a3379ea0eccaa9fe
+ size 3303370712

model-00009-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:d9ab4f0d16ebc118f381d2f565d9187226a8a40c2bc2cc9c7f9dec0f6902f18a
+ size 3425336584

model-00010-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:4cb2ff607f8f877ff59395b85be6e8e768639c37cad5fa1c52b879cc0ffd0b00
+ size 3303370712

model-00011-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:d92b6ad39f0535cd8d67d10560e73780b7b268ad72a684e2500e623694239e69
+ size 3425336584

model-00012-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:24b3f1bdc137e794d99e7640bd07b2f7cdbba051cf7400a628324396a8a59261
+ size 3303370712

model-00013-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:21fe501ab39ccfa8639189df97b44490f29ab5409ace80f5173eb721181e6778
+ size 3425336584

model-00014-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:3664b335b133b0981b84d446d27710f39ee080c51ede97bc0bbc75661ed90b90
+ size 3303370712

model-00015-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:522189f295a0cd517388686d340741ee01182f642e2a43c102c2d8de15247b90
+ size 3425336584

model-00016-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:ff392bcc5880b83a0cfe53bdd551dd7e5a299e05a5c9bdd5cc47516749b9f067
+ size 3303370712

model-00017-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:01f193c4b5d702fb680af40d904face45bc98118273b46d553a65b9dcbd3cc0d
+ size 3425336584

model-00018-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:0fb09fe769fe1c55508c44cdf2fe6a609a92c191e346c7bafb714e27cc4cbaf6
+ size 3303370712

model-00019-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:4bd814e32972a86307762009456abf55e61dca7f5aa84c6b555e95dee7348cf2
+ size 3425336584

model-00020-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:ecd417490bf2f654c2fd46936f518f3b0c8a80a7a039ea08e740a77fe3be64f4
+ size 3303370712

model-00021-of-00021.safetensors
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:9cb1579edb4feb980e2bfa6775b8b130145f4d0dc668f6a8487afe2a75c99243
+ size 1135623008
model.safetensors.index.json
CHANGED
The diff for this file is too large to render.
tokenizer_config.json
CHANGED
@@ -9,7 +9,7 @@
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "image_token": "<|image_pad|>",
- "is_local":
+ "is_local": true,
  "model_max_length": 262144,
  "model_specific_special_tokens": {
  "audio_bos_token": "<|audio_start|>",

@@ -29,6 +29,5 @@
  "unk_token": null,
  "video_token": "<|video_pad|>",
  "vision_bos_token": "<|vision_start|>",
- "vision_eos_token": "<|vision_end|>"
-
- }
+ "vision_eos_token": "<|vision_end|>"
+ }