SeaWolf-AI committed on
Commit c2c3f15 · verified · 1 Parent(s): f119914

Darwin-36B-Opus v2: MRI-only Mother-centric merge, GPQA 86.4%

README.md CHANGED
@@ -8,10 +8,10 @@ base_model:
8
  - Qwen/Qwen3.6-35B-A3B
9
  - hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled
10
  tags:
11
- - darwin-v7
12
- - evolutionary-merge
13
- - mri-guided
14
- - slerp
15
  - qwen3.6
16
  - moe
17
  - a3b
@@ -27,7 +27,20 @@ pipeline_tag: text-generation
27
 
28
 **Darwin Opus Series — Qwen3.6 Generation (A3B MoE)**
29
 
30
- An evolutionary merge model based on Qwen3.6-35B-A3B: Father (vanilla base) × Mother (Claude Opus 4.6 Reasoning Distilled), merged automatically with the Darwin V7 engine's **MRI prescription + CMA-ES evolution + SLERP** pipeline.
31
 
32
 ## 🧬 Lineage (Parents)
33
 
@@ -44,54 +57,62 @@ An evolutionary merge model based on Qwen3.6-35B-A3B: Father (vanilla base) × Mother (Cl
44
  - MMLU-Pro (70 limit-5): **75.71%** (+32.85%p vs Father base)
45
 - qwen3-thinking template, response-only masking
46
 
47
- ## 🔬 Darwin V7 Breeding Method
 
 
48
 
49
  ```
50
- Phase 0: Auto-Profile (architecture compatibility check) → COMPATIBLE
- Phase 1: MRI Scan (per-tensor norm/entropy/std + probe)
52
- Phase 2a: CMA-ES Evolution (500 steps, 8-block genome)
53
- proxy score 0.8403
54
- Phase 2b: Real SLERP Merge (top-5 candidate evaluation)
55
- method=SLERP ratio=0.416 mri_trust=0.783
56
- Phase 3: Health check (perplexity + smoke gen) healthy ✓
57
- Phase 4: Upload
58
  ```
59
 
60
- ### Merge Formula
61
 
62
- ```
63
- Final ratio for each tensor:
64
- final_ratio = mri_ratio × 0.783 + genome_ratio × 0.217
65
 
66
- - 0.416 = global blend ratio (Mother 41.6% + Father 58.4%)
67
- - 0.783 = MRI prescription trust (weight given to the norm/entropy-based prescription)
68
- - genome evolution optimized over 8 blocks × 40 layers
69
  ```
70
 
71
- ### Why SLERP?
72
- The two models' weights are vectors on a high-dimensional curved surface. Linear interpolation (a plain average) drifts off the manifold into meaningless positions, whereas **spherical linear interpolation (SLERP)** moves smoothly along the surface and preserves the character of both sides.
73
 
74
- ## 🏷️ Series Positioning
75
 
76
- | Darwin Opus Model | Father | Mother | GPQA |
- |-----------------|--------|--------|:----:|
- | Darwin-27B-Opus | Qwen3.5-27B | Jackrong Claude-4.6-Opus distilled | 86.9 |
- | Darwin-31B-Opus | Gemma2-27B × various | Opus variants | 85.9 |
- | Darwin-35B-A3B-Opus | Qwen3.5-35B-A3B | Jackrong Opus distilled | (pending) |
- | **Darwin-36B-Opus** | **Qwen3.6-35B-A3B** | **hesamation Qwen3.6 Opus distilled** | **(pending)** |
 
82
 
83
- The `36B` in the name marks the **Qwen3.6 generation** (the actual parameter count is 36.0B).
84
 
85
 ## 🧠 Architecture
86
 
87
- - **Architecture**: Qwen3.5MoE (Qwen3.6 reuses the Qwen3.5 codebase)
88
  - **Total params**: 36.0B
89
- - **Active params**: ~3B (MoE sparse)
90
  - **Layers**: 40
91
  - **Hidden size**: 2048
92
- - **Experts**: 256 routed, top-8 activation
93
  - **Hybrid attention**: 75% Gated DeltaNet + 25% Gated Attention
94
- - **Chat template**: `<|im_start|>assistant\n<think>\n` (thinking mode default)
95
 
96
 ## 💡 Usage
97
 
@@ -110,7 +131,7 @@ model = AutoModelForCausalLM.from_pretrained(
110
  messages = [{"role": "user", "content": "What is the derivative of sin(x²)?"}]
111
  text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
112
  inputs = tok(text, return_tensors="pt").to(model.device)
113
- outputs = model.generate(**inputs, max_new_tokens=2048, temperature=0.6, do_sample=True)
114
  print(tok.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
115
  ```
116
 
@@ -124,14 +145,34 @@ idx = response.rfind("</think>")
124
  answer_part = response[idx + len("</think>"):].strip() if idx >= 0 else response
125
  ```
126
127
 ## 🏗️ Build
128
 
129
- - **Engine**: Darwin V7 (FINAL-Bench proprietary)
130
- - **Hardware**: NVIDIA B200 (merge GPUs)
131
- - **Evolution**: 500 steps in ~15 minutes
132
- - **Cache ID**: `merged_6edaacaf`
133
- - **Proxy fitness (arc_challenge)**: 0.8403
134
- - **Commit**: `e56adcfb` (2026-04-22)
135
 
136
 ## 📜 License
137
 
@@ -142,4 +183,4 @@ Apache 2.0 (inherits the Qwen3.6 license)
142
  - Qwen Team (Father base)
143
  - @hesamation (Mother: Opus distillation)
144
  - Anthropic Claude Opus 4.6 (Teacher)
145
- - FINAL-Bench / VIDRAFT_LAB (Darwin V7 engine + breeding)
 
8
  - Qwen/Qwen3.6-35B-A3B
9
  - hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled
10
  tags:
11
+ - darwin-v7-plus
12
+ - mri-only-merge
13
+ - mother-centric
14
+ - linear-merge
15
  - qwen3.6
16
  - moe
17
  - a3b
 
27
 
28
 **Darwin Opus Series — Qwen3.6 Generation (A3B MoE)**
29
 
30
+ A **Darwin V7+ MRI-only Mother-centric** merge model based on Qwen3.6-35B-A3B.
+ Father (vanilla Qwen3.6 base) × Mother (Claude Opus 4.6 Reasoning Distilled), merged automatically via **Mother-centric linear merging + per-tensor MRI prescription**.
32
+
33
+ ## 🏆 GPQA Diamond Result: **86.4%** (171/198)
34
+
35
+ | Phase | Score | Δ |
36
+ |:---|:---:|:---:|
37
+ | Phase 1 (Greedy) | 145/198 (73.2%) | baseline |
38
+ | Phase 2 (maj@8 5120t) | 155/198 (78.3%) | +10 |
39
+ | Phase 3 (maj@8 tiebreak) | 162/198 (81.8%) | +7 |
40
+ | Phase 4 (MTI maj@8) | 169/198 (85.4%) | +7 |
41
+ | Phase 5 (MTI tiebreak) | **171/198 (86.4%)** | +2 |
42
+
43
+ Evaluation: 8-GPU data parallel, 5120-token budget, 198 questions (full GPQA Diamond set)
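+
+ The maj@8 phases reduce to a majority vote over eight sampled answers, with a tiebreak pass when the vote splits. A minimal sketch of that logic, in which the `(choice, reasoning_length)` tuples and the tiebreak-by-longest-reasoning rule are assumptions rather than the exact FINAL-Bench harness:
+
+ ```python
+ from collections import Counter
+
+ def majority_vote(samples):
+     """samples: list of (choice, reasoning_length) tuples from 8 generations."""
+     counts = Counter(choice for choice, _ in samples if choice is not None)
+     if not counts:
+         return None
+     ranked = counts.most_common()
+     if len(ranked) == 1 or ranked[0][1] > ranked[1][1]:
+         return ranked[0][0]  # clear majority
+     # tiebreak (assumed rule): keep the tied answer backed by the longest reasoning
+     tied = {choice for choice, n in ranked if n == ranked[0][1]}
+     return max((s for s in samples if s[0] in tied), key=lambda s: s[1])[0]
+ ```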
44
 
45
 ## 🧬 Lineage (Parents)
46
 
 
57
  - MMLU-Pro (70 limit-5): **75.71%** (+32.85%p vs Father base)
58
 - qwen3-thinking template, response-only masking
59
 
60
+ ## 🔬 Darwin V7+ MRI-only Breeding Method
61
+
62
+ After the initial V7 pipeline (CMA-ES + SLERP) failed, the **Darwin-specific techniques were simplified** and the pipeline redesigned:
63
 
64
  ```
65
+ Phase 0: Per-tensor MRI Scan (norm / entropy / std)
66
+ Phase 1: Semantic Tensor Classification
67
+ attention / router / shared_expert / routed_expert /
68
+ embedding / lm_head / norm / other
69
+ Phase 2: Mother-centric Linear Merge per category
70
+ Phase 3: Per-expert MRI override (routed_experts 82/82)
71
+ Phase 4: Health check (smoke test)
 
72
  ```
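+
+ Phase 1 can be reproduced with a purely name-based classifier over the state dict. A minimal sketch, assuming Qwen3MoE-style parameter names (the match patterns below are illustrative assumptions, not the Darwin engine's actual rules):
+
+ ```python
+ def classify_tensor(name: str) -> str:
+     """Map a state-dict key to one of the eight merge categories."""
+     if "shared_expert" in name:        # includes shared_expert_gate
+         return "shared_expert"
+     if "mlp.experts." in name:         # per-expert projections
+         return "routed_expert"
+     if "mlp.gate" in name:             # MoE router
+         return "router"
+     if "attn" in name:                 # self_attn / linear_attn blocks
+         return "attention"
+     if "embed_tokens" in name:
+         return "embedding"
+     if "lm_head" in name:
+         return "lm_head"
+     if "norm" in name:                 # RMSNorm weights
+         return "norm"
+     return "other"
+ ```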
73
 
74
+ ### Mother-centric Prescription Ratios
75
 
76
+ | Tensor Category | Count | Mother Ratio | Rationale |
+ |:---|:---:|:---:|:---|
+ | Attention | 485 | 0.90 | preserves the Opus reasoning engine |
+ | Router | 41 | 1.00 | keeps the router trained for Mother's experts |
+ | Shared Expert | 164 | 0.90 | Mother's strength |
+ | Routed Expert | 82 | 0.80 | plus MRI override on all |
+ | Embedding | 5 | 1.00 | token coherence |
+ | LM Head | 1 | 1.00 | output alignment |
+ | LayerNorm | 154 | 0.95 | stability |
+ | Other | 113 | 0.85 | default |
86
+
87
+ ### Linear Merge (not SLERP)
88
 
89
+ ```
90
+ merged = (1 - r) × Father + r × Mother
 
91
  ```
92
 
93
+ A pure linear merge with **SLERP's rotational arc removed**. When the MoE expert geometry lies on a manifold that is not a sphere, SLERP can actively break reasoning (the earlier 17.5% failure); linear merging avoids this.
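+
+ In code, each category merge is a single tensor-wise lerp at the prescribed Mother ratio `r`. A minimal sketch; the fp32 accumulation is an assumption for numerical safety, not a documented detail:
+
+ ```python
+ import torch
+
+ def linear_merge(father: torch.Tensor, mother: torch.Tensor, r: float) -> torch.Tensor:
+     """merged = (1 - r) * Father + r * Mother."""
+     return torch.lerp(father.float(), mother.float(), r).to(father.dtype)
+
+ # e.g. an attention tensor is merged at r = 0.90 (see the table above)
+ ```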
 
94
 
95
+ ### MRI Override (routed experts)
96
 
97
+ ```python
98
+ # father_sd / mother_sd: parent state dicts (assumed already loaded)
+ for name in routed_expert_names:               # all 82 routed-expert tensors
+     base_ratio = 0.80                          # Mother default for this category
+     nf = father_sd[name].norm().item()         # Father tensor norm
+     nm = mother_sd[name].norm().item()         # Mother tensor norm
+     mri_ratio = nm / (nf + nm)                 # norm-based prescription
+     final = base_ratio * 0.2 + mri_ratio * 0.8 # mri_trust = 0.8
103
+ ```
104
 
105
+ MRI fine-grained adjustment applied to all 82 routed-expert tensors (100%).
106
 
107
 ## 🧠 Architecture
108
 
109
+ - **Architecture**: Qwen3MoE (Qwen3.6 codebase)
110
  - **Total params**: 36.0B
111
+ - **Active params**: ~3B (MoE sparse, top-8 of 256 experts)
112
  - **Layers**: 40
113
  - **Hidden size**: 2048
 
114
  - **Hybrid attention**: 75% Gated DeltaNet + 25% Gated Attention
115
+ - **Chat template**: `<|im_start|>assistant\n<think>\n` (thinking mode)
116
 
117
 ## 💡 Usage
118
 
 
131
  messages = [{"role": "user", "content": "What is the derivative of sin(x²)?"}]
132
  text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
133
  inputs = tok(text, return_tensors="pt").to(model.device)
134
+ outputs = model.generate(**inputs, max_new_tokens=5120, temperature=0.6, do_sample=True)
135
  print(tok.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
136
  ```
137
 
 
145
  answer_part = response[idx + len("</think>"):].strip() if idx >= 0 else response
146
  ```
147
 
148
+ **Recommended settings**: `max_new_tokens=5120` (thinking budget), `temperature=0.6~0.7` (keeps majority voting effective)
149
+
150
+ ## 🏷️ Darwin Opus Series
151
+
152
+ | Model | GPQA Diamond |
153
+ |:---|:---:|
154
+ | Darwin-27B-Opus | 86.9% |
155
+ | **Darwin-36B-Opus (this, v2)** | **86.4%** |
156
+ | Darwin-31B-Opus | 85.9% |
157
+ | Darwin-35B-A3B-Opus | (pending) |
158
+
159
 ## 🏗️ Build
160
 
161
+ - **Engine**: Darwin V7+ MRI-only (FINAL-Bench proprietary)
162
+ - **Hardware**: NVIDIA B200 (merge) + 8× B200 (eval)
163
+ - **Merge time**: 493 s (~8 min; direct prescription, no CMA-ES)
164
+ - **Tensors merged**: 1,045 (MRI override: 82/82 routed experts)
165
+ - **Shards**: 21 × ~3.4GB
166
+ - **Eval time**: 420 min (7 h, 5-phase pipeline)
167
+
168
+ ## 🔁 v1 vs v2
169
+
170
+ | Aspect | v1 (SLERP + CMA-ES) | **v2 (MRI-only)** |
+ |:---|:---:|:---:|
+ | Merge time | 15 min | **8 min** |
+ | Eval method | arc_challenge proxy | — (direct) |
+ | GPQA | **17.5%** ❌ | **86.4%** ✅ |
+ | Why | proxy fitness mismatch; SLERP rotational distortion | Mother-centric linear + MRI prescription |
176
 
177
 ## 📜 License
178
 
 
183
  - Qwen Team (Father base)
184
  - @hesamation (Mother: Opus distillation)
185
  - Anthropic Claude Opus 4.6 (Teacher)
186
+ - FINAL-Bench / VIDRAFT_LAB (Darwin V7+ engine + MRI-only breeding)
config.json CHANGED
@@ -1,96 +1,123 @@
1
  {
2
- "attention_bias": false,
3
- "attention_dropout": 0.0,
4
- "attn_output_gate": true,
5
- "bos_token_id": 248044,
6
- "dtype": "bfloat16",
7
- "eos_token_id": 248044,
8
- "full_attention_interval": 4,
9
- "head_dim": 256,
10
- "hidden_act": "silu",
11
- "hidden_size": 2048,
12
- "initializer_range": 0.02,
13
- "layer_types": [
14
- "linear_attention",
15
- "linear_attention",
16
- "linear_attention",
17
- "full_attention",
18
- "linear_attention",
19
- "linear_attention",
20
- "linear_attention",
21
- "full_attention",
22
- "linear_attention",
23
- "linear_attention",
24
- "linear_attention",
25
- "full_attention",
26
- "linear_attention",
27
- "linear_attention",
28
- "linear_attention",
29
- "full_attention",
30
- "linear_attention",
31
- "linear_attention",
32
- "linear_attention",
33
- "full_attention",
34
- "linear_attention",
35
- "linear_attention",
36
- "linear_attention",
37
- "full_attention",
38
- "linear_attention",
39
- "linear_attention",
40
- "linear_attention",
41
- "full_attention",
42
- "linear_attention",
43
- "linear_attention",
44
- "linear_attention",
45
- "full_attention",
46
- "linear_attention",
47
- "linear_attention",
48
- "linear_attention",
49
- "full_attention",
50
- "linear_attention",
51
- "linear_attention",
52
- "linear_attention",
53
- "full_attention"
54
- ],
55
- "linear_conv_kernel_dim": 4,
56
- "linear_key_head_dim": 128,
57
- "linear_num_key_heads": 16,
58
- "linear_num_value_heads": 32,
59
- "linear_value_head_dim": 128,
60
- "mamba_ssm_dtype": "float32",
61
- "max_position_embeddings": 262144,
62
- "model_type": "qwen3_5_moe",
63
- "moe_intermediate_size": 512,
64
- "mtp_num_hidden_layers": 1,
65
- "mtp_use_dedicated_embeddings": false,
66
- "num_attention_heads": 16,
67
- "num_experts": 256,
68
- "num_experts_per_tok": 8,
69
- "num_hidden_layers": 40,
70
- "num_key_value_heads": 2,
71
- "output_router_logits": false,
72
- "pad_token_id": null,
73
- "partial_rotary_factor": 0.25,
74
- "rms_norm_eps": 1e-06,
75
- "rope_parameters": {
76
- "mrope_interleaved": true,
77
- "mrope_section": [
78
- 11,
79
- 11,
80
- 10
81
  ],
82
- "partial_rotary_factor": 0.25,
83
- "rope_theta": 10000000,
84
- "rope_type": "default"
85
- },
86
- "router_aux_loss_coef": 0.001,
87
- "shared_expert_intermediate_size": 512,
88
- "tie_word_embeddings": false,
89
- "use_cache": true,
90
- "vocab_size": 248320,
91
- "architectures": [
92
- "Qwen3_5MoeForCausalLM"
93
- ],
94
- "torch_dtype": "bfloat16",
95
- "transformers_version": "4.57.1"
96
  }
 
1
  {
2
+ "architectures": [
3
+ "Qwen3_5MoeForConditionalGeneration"
4
  ],
5
+ "torch_dtype": "bfloat16",
6
+ "image_token_id": 248056,
7
+ "model_name": "Qwen/Qwen3.6-35B-A3B",
8
+ "model_type": "qwen3_5_moe",
9
+ "pad_token_id": 248044,
10
+ "text_config": {
11
+ "attention_bias": false,
12
+ "attention_dropout": 0.0,
13
+ "attn_output_gate": true,
14
+ "bos_token_id": 248044,
15
+ "torch_dtype": "bfloat16",
16
+ "eos_token_id": 248044,
17
+ "full_attention_interval": 4,
18
+ "head_dim": 256,
19
+ "hidden_act": "silu",
20
+ "hidden_size": 2048,
21
+ "initializer_range": 0.02,
22
+ "layer_types": [
23
+ "linear_attention",
24
+ "linear_attention",
25
+ "linear_attention",
26
+ "full_attention",
27
+ "linear_attention",
28
+ "linear_attention",
29
+ "linear_attention",
30
+ "full_attention",
31
+ "linear_attention",
32
+ "linear_attention",
33
+ "linear_attention",
34
+ "full_attention",
35
+ "linear_attention",
36
+ "linear_attention",
37
+ "linear_attention",
38
+ "full_attention",
39
+ "linear_attention",
40
+ "linear_attention",
41
+ "linear_attention",
42
+ "full_attention",
43
+ "linear_attention",
44
+ "linear_attention",
45
+ "linear_attention",
46
+ "full_attention",
47
+ "linear_attention",
48
+ "linear_attention",
49
+ "linear_attention",
50
+ "full_attention",
51
+ "linear_attention",
52
+ "linear_attention",
53
+ "linear_attention",
54
+ "full_attention",
55
+ "linear_attention",
56
+ "linear_attention",
57
+ "linear_attention",
58
+ "full_attention",
59
+ "linear_attention",
60
+ "linear_attention",
61
+ "linear_attention",
62
+ "full_attention"
63
+ ],
64
+ "linear_conv_kernel_dim": 4,
65
+ "linear_key_head_dim": 128,
66
+ "linear_num_key_heads": 16,
67
+ "linear_num_value_heads": 32,
68
+ "linear_value_head_dim": 128,
69
+ "mamba_ssm_dtype": "float32",
70
+ "max_position_embeddings": 262144,
71
+ "model_type": "qwen3_5_moe_text",
72
+ "moe_intermediate_size": 512,
73
+ "mtp_num_hidden_layers": 1,
74
+ "mtp_use_dedicated_embeddings": false,
75
+ "num_attention_heads": 16,
76
+ "num_experts": 256,
77
+ "num_experts_per_tok": 8,
78
+ "num_hidden_layers": 40,
79
+ "num_key_value_heads": 2,
80
+ "output_router_logits": false,
81
+ "pad_token_id": null,
82
+ "partial_rotary_factor": 0.25,
83
+ "rms_norm_eps": 1e-06,
84
+ "rope_parameters": {
85
+ "mrope_interleaved": true,
86
+ "mrope_section": [
87
+ 11,
88
+ 11,
89
+ 10
90
+ ],
91
+ "partial_rotary_factor": 0.25,
92
+ "rope_theta": 10000000,
93
+ "rope_type": "default"
94
+ },
95
+ "router_aux_loss_coef": 0.001,
96
+ "shared_expert_intermediate_size": 512,
97
+ "tie_word_embeddings": false,
98
+ "use_cache": true,
99
+ "vocab_size": 248320
100
+ },
101
+ "tie_word_embeddings": false,
102
+ "unsloth_version": "2026.4.6",
103
+ "video_token_id": 248057,
104
+ "vision_config": {
105
+ "deepstack_visual_indexes": [],
106
+ "depth": 27,
107
+ "torch_dtype": "bfloat16",
108
+ "hidden_act": "gelu_pytorch_tanh",
109
+ "hidden_size": 1152,
110
+ "in_channels": 3,
111
+ "initializer_range": 0.02,
112
+ "intermediate_size": 4304,
113
+ "model_type": "qwen3_5_moe",
114
+ "num_heads": 16,
115
+ "num_position_embeddings": 2304,
116
+ "out_hidden_size": 2048,
117
+ "patch_size": 16,
118
+ "spatial_merge_size": 2,
119
+ "temporal_patch_size": 2
120
+ },
121
+ "vision_end_token_id": 248054,
122
+ "vision_start_token_id": 248053
123
  }
darwin_mri_report.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "method": "Darwin V7+ MRI-only (Mother-centric Linear)",
3
+ "father": "/NHNHOME/WORKSPACE/0426030024_A/darwin-36b-opus/models/Father-Qwen3.6-35B-A3B",
4
+ "mother": "/NHNHOME/WORKSPACE/0426030024_A/darwin-36b-opus/models/Mother-hesamation-Opus46",
5
+ "mri_trust": 0.8,
6
+ "mother_bias": 0.85,
7
+ "category_ratios": {
8
+ "attention": 0.9,
9
+ "router": 1.0,
10
+ "shared_expert": 0.9,
11
+ "routed_expert": 0.8,
12
+ "embedding": 1.0,
13
+ "lm_head": 1.0,
14
+ "norm": 0.95,
15
+ "other": 0.85
16
+ },
17
+ "mri_overrides": 82,
18
+ "tensor_categories": {
19
+ "attention": 485,
20
+ "router": 41,
21
+ "shared_expert": 164,
22
+ "routed_expert": 82,
23
+ "embedding": 5,
24
+ "lm_head": 1,
25
+ "norm": 154,
26
+ "other": 113
27
+ },
28
+ "total_tensors": 1045,
29
+ "total_shards": 21,
30
+ "elapsed_sec": 493
31
+ }
model-00001-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:186e4d10f94145b953fbe7396313ca5056dfc03a1a3a1eb60bb5c5c54eeb9bcd
3
+ size 3787084368
model-00002-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0eb87dda850f5552d4c748718695171d09755271941719792f09a2f4f315f968
3
+ size 3840241712
model-00003-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:104f943b8ab637f1ef8c5674905f5ecce74fdc216638222216b1ef02534c8fb9
3
+ size 3425336552
model-00004-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4e74bbf5d7243cc669d15ab412bdfed25542adcb779d00a5b7cf005715b0c992
3
+ size 3303370680
model-00005-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a638fa2c4e3e5703f51a08cf060838d746cb80a876860fa7756f82aaa24146e4
3
+ size 3425336552
model-00006-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:798f2e0e7a77d6561cf7a0ff2222699e8cae09b627905ab81e8688b87bf9934d
3
+ size 3303370680
model-00007-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bdd6981e8f75abe560563ed98359cc0e7fdf97a79468c6cfd9cf4574c164bcbe
3
+ size 3425336536
model-00008-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:63ee27cf19fe1e536890bd64e557eb58b864d5bc91e34c646f187ad25d601db9
3
+ size 3370808792
model-00009-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:12e123031e28f9f40cb3aa75f53149429437bca1cd6c6321c88500e8794df939
3
+ size 3357898440
model-00010-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:200d27af33c55b9de07a720d2118ffb3680609222926ba5eeef5047c3c64222c
3
+ size 3370808792
model-00011-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cda718dc4829963a954963f3335586aad62be6b71979aa5e65b766ec796a60aa
3
+ size 3357898440
model-00012-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bf449e0d44b1388ca43bf15012b20c0f7b4661286a82226e0396c27ac028f03b
3
+ size 3303370672
model-00013-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e333fad5524cf3eb706b480d823eb1f770d654a8119bf1b1befb2453585cc9c9
3
+ size 3357898424
model-00014-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:465b7a144770d51fe545fff8b6a9bd088e7400e9ce5dc960fe4ce537ecd85455
3
+ size 3425336552
model-00015-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5e2587b3cf9ce2f5e3843b5030e9f577573edaaf92d68575cb3e42279577b836
3
+ size 3303370680
model-00016-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:76a10cc5a97ddf8f5489e935edbf9fe93d2a7ae19a9d29284a87a5a2798c5772
3
+ size 3425336552
model-00017-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a934473a2117e7d2296da17b8bcd777a17e4df5861a222edcb0e0e78ae5d7e47
3
+ size 3303370680
model-00018-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:afd836378b4913d3eecb9d48f4a63a8c228334084341d5b84dfaf43062f5f6a4
3
+ size 3425336528
model-00019-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:63bc517590ca03b748f36946bd7ca0e98ef717466723a3636cc8014968bd07f4
3
+ size 3303370648
model-00020-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1b8256f14ba9d2e862b2fb3155fd379a51211b21c4e26279b20029accac7a118
3
+ size 3425336512
model-00021-of-00021.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8cc4cbf5a1386345e7d3d95cb527e335f267b2e63586d2dc1d6834c924c7eeda
3
+ size 3663559120
model.safetensors.index.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -9,7 +9,7 @@
9
  "eos_token": "<|im_end|>",
10
  "errors": "replace",
11
  "image_token": "<|image_pad|>",
12
- "is_local": true,
13
  "model_max_length": 262144,
14
  "model_specific_special_tokens": {
15
  "audio_bos_token": "<|audio_start|>",
@@ -21,11 +21,14 @@
21
  "vision_eos_token": "<|vision_end|>"
22
  },
23
  "pad_token": "<|endoftext|>",
 
24
  "pretokenize_regex": "(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?[\\p{L}\\p{M}]+|\\p{N}| ?[^\\s\\p{L}\\p{M}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+",
 
25
  "split_special_tokens": false,
26
  "tokenizer_class": "TokenizersBackend",
27
  "unk_token": null,
28
  "video_token": "<|video_pad|>",
29
  "vision_bos_token": "<|vision_start|>",
30
- "vision_eos_token": "<|vision_end|>"
31
- }
 
 
9
  "eos_token": "<|im_end|>",
10
  "errors": "replace",
11
  "image_token": "<|image_pad|>",
12
+ "is_local": false,
13
  "model_max_length": 262144,
14
  "model_specific_special_tokens": {
15
  "audio_bos_token": "<|audio_start|>",
 
21
  "vision_eos_token": "<|vision_end|>"
22
  },
23
  "pad_token": "<|endoftext|>",
24
+ "padding_side": "left",
25
  "pretokenize_regex": "(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?[\\p{L}\\p{M}]+|\\p{N}| ?[^\\s\\p{L}\\p{M}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+",
26
+ "processor_class": "Qwen3VLProcessor",
27
  "split_special_tokens": false,
28
  "tokenizer_class": "TokenizersBackend",
29
  "unk_token": null,
30
  "video_token": "<|video_pad|>",
31
  "vision_bos_token": "<|vision_start|>",
32
+ "vision_eos_token": "<|vision_end|>",
33
+ "chat_template": "{%- set image_count = namespace(value=0) %}\n{%- set video_count = namespace(value=0) %}\n{%- macro render_content(content, do_vision_count, is_system_content=false) %}\n {%- if content is string %}\n {{- content }}\n {%- elif content is iterable and content is not mapping %}\n {%- for item in content %}\n {%- if 'image' in item or 'image_url' in item or item.type == 'image' %}\n {%- if is_system_content %}\n {{- raise_exception('System message cannot contain images.') }}\n {%- endif %}\n {%- if do_vision_count %}\n {%- set image_count.value = image_count.value + 1 %}\n {%- endif %}\n {%- if add_vision_id %}\n {{- 'Picture ' ~ image_count.value ~ ': ' }}\n {%- endif %}\n {{- '<|vision_start|><|image_pad|><|vision_end|>' }}\n {%- elif 'video' in item or item.type == 'video' %}\n {%- if is_system_content %}\n {{- raise_exception('System message cannot contain videos.') }}\n {%- endif %}\n {%- if do_vision_count %}\n {%- set video_count.value = video_count.value + 1 %}\n {%- endif %}\n {%- if add_vision_id %}\n {{- 'Video ' ~ video_count.value ~ ': ' }}\n {%- endif %}\n {{- '<|vision_start|><|video_pad|><|vision_end|>' }}\n {%- elif 'text' in item %}\n {{- item.text }}\n {%- else %}\n {{- raise_exception('Unexpected item type in content.') }}\n {%- endif %}\n {%- endfor %}\n {%- elif content is none or content is undefined %}\n {{- '' }}\n {%- else %}\n {{- raise_exception('Unexpected content type.') }}\n {%- endif %}\n{%- endmacro %}\n{%- if not messages %}\n {{- raise_exception('No messages provided.') }}\n{%- endif %}\n{%- if tools and tools is iterable and tools is not mapping %}\n {{- '<|im_start|>system\\n' }}\n {{- \"# Tools\\n\\nYou have access to the following functions:\\n\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\" }}\n {{- '\\n\\nIf you choose to call a function ONLY reply in the following format with NO suffix:\\n\\n<tool_call>\\n<function=example_function_name>\\n<parameter=example_parameter_1>\\nvalue_1\\n</parameter>\\n<parameter=example_parameter_2>\\nThis is the value for the second parameter\\nthat can span\\nmultiple lines\\n</parameter>\\n</function>\\n</tool_call>\\n\\n<IMPORTANT>\\nReminder:\\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\\n- Required parameters MUST be specified\\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\\n</IMPORTANT>' }}\n {%- if messages[0].role == 'system' %}\n {%- set content = render_content(messages[0].content, false, true)|trim %}\n {%- if content %}\n {{- '\\n\\n' + content }}\n {%- endif %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {%- set content = render_content(messages[0].content, false, true)|trim %}\n {{- '<|im_start|>system\\n' + content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" %}\n {%- set content = render_content(message.content, false)|trim %}\n {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if ns.multi_step_tool %}\n {{- raise_exception('No user query found in messages.') }}\n{%- endif %}\n{%- for message in messages %}\n {%- set content = render_content(message.content, true)|trim %}\n {%- if message.role == \"system\" %}\n {%- if not loop.first %}\n {{- raise_exception('System message must be at the beginning.') }}\n {%- endif %}\n {%- elif message.role == \"user\" %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is string %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in content %}\n {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- set reasoning_content = reasoning_content|trim %}\n {%- if (preserve_thinking is defined and preserve_thinking is true) or (loop.index0 > ns.last_query_index) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content + '\\n</think>\\n\\n' + content }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls and message.tool_calls is iterable and message.tool_calls is not mapping %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {%- if loop.first %}\n {%- if content|trim %}\n {{- '\\n\\n<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n {%- else %}\n {{- '<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n {%- endif %}\n {%- else %}\n {{- '\\n<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n {%- endif %}\n {%- if tool_call.arguments is defined %}\n {%- for args_name, args_value in tool_call.arguments|items %}\n {{- '<parameter=' + args_name + '>\\n' }}\n {%- set args_value = args_value | string if args_value is string else args_value | tojson | safe %}\n {{- args_value }}\n {{- '\\n</parameter>\\n' }}\n {%- endfor %}\n {%- endif %}\n {{- '</function>\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.previtem and loop.previtem.role != \"tool\" %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- content }}\n {{- '\\n</tool_response>' }}\n {%- if not loop.last and loop.nextitem.role != \"tool\" %}\n {{- '<|im_end|>\\n' }}\n {%- elif loop.last %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- else %}\n {{- raise_exception('Unexpected message role.') }}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n {%- if enable_thinking is defined and enable_thinking is false %}\n {{- '<think>\\n\\n</think>\\n\\n' }}\n {%- else %}\n {{- '<think>\\n' }}\n {%- endif %}\n{%- endif %}"
34
+ }