sswoo123 committed Β· verified
Commit 853d70c Β· 1 Parent(s): 19be70a

Update README.md

Files changed (1): README.md (+20, βˆ’28)
README.md CHANGED

@@ -1,55 +1,45 @@
- ---
- library_name: transformers
- tags: []
- ---
-
-
  <p align="center">
    <img src="https://github.com/MLP-Lab/KORMo-tutorial/blob/main/tutorial/attachment/kormo_logo.png?raw=true" style="width: 100%; max-width: 1100px;">
  </p>
 
-
- ---
  # 🦾 KORMo-10B
 
- **KORMo-10B** is a **10.8B parameter fully open LLM** that handles both Korean and English!
- The model, training code, and training data are **all fully open (full-stack open)**, so anyone can reproduce and extend it!
+ **KORMo-10B** is a **10.8B parameter fully open LLM** capable of handling both **Korean and English**.
+ The model, training code, and training data are all **fully open**, allowing anyone to reproduce and extend it.
 
- - 🧠 **Model Size**: 10.8B parameters
- - πŸ—£οΈ **Languages**: Korean / English
- - πŸͺ„ **Training Data**: Combination of synthetic data + public data
- - πŸ§ͺ **License**: Apache 2.0 (commercial use permitted)
+ - 🧠 **Model Size**: 10.8B parameters
+ - πŸ—£οΈ **Languages**: Korean / English
+ - πŸͺ„ **Training Data**: Synthetic data + public datasets
+ - πŸ§ͺ **License**: Apache 2.0 (commercial use permitted)
 
  ---
 
  ## πŸ”— Links
 
- - πŸ€— **Hugging Face**: [πŸ‘‰ Model Download]([https://](https://huggingface.co/KORMo-Team))
- - πŸ’» **GitHub Repository**: [πŸ‘‰ Training and Inference Code]([https:/](https://github.com/MLP-Lab/KORMo-tutorial))
+ - πŸ€— **Hugging Face**: [πŸ‘‰ Model Download](https://huggingface.co/KORMo-Team)
+ - πŸ’» **GitHub Repository**: [πŸ‘‰ Training and Inference Code](https://github.com/MLP-Lab/KORMo-tutorial)
 
  ---
 
- ## πŸ†• Update News
- - πŸš€ **2025.10**: Official release of KORMo v1.0!
+ ## πŸ†• Update News
+ - πŸš€ **Oct 2025**: Official release of KORMo v1.0!
 
  ---
 
- ## Model Architecture
- | Item | Description |
- |:----|:----|
+ ## Model Architecture
+ | Item | Description |
+ |:----|:------------|
  | Architecture | Transformer Decoder |
  | Parameters | 10.8B |
  | Context Length | 128K |
  | Languages | Korean, English |
  | License | Apache 2.0 |
 
-
  ---
 
- ## πŸ“ˆ Benchmark Performance
-
- ### πŸ“Š Quantitative Evaluation
+ ## πŸ“ˆ Benchmark Performance
+
+ ### πŸ“Š Quantitative Evaluation
 
  | Benchmark | **KORMo-10B** | smolLM3-3B | olmo2-7B | olmo2-13B | kanana1.5-8B | qwen3-8B | llama3.1-8B | gemma3-4B | gemma3-12B |
  |:-----------|---------------:|-----------:|---------:|---------:|------------:|--------:|-----------:|---------:|----------:|
@@ -81,7 +71,9 @@ tags: []
  | kr_clinical_qa | 77.32 | 53.97 | 48.33 | 46.22 | 65.84 | 80.00 | 63.54 | 60.00 | 77.22 |
  | **Korean Avg.** | **58.15** | 47.37 | 35.82 | 39.34 | 60.94 | 63.35 | 49.60 | 49.60 | 60.37 |
 
- ## πŸ“ Qualitative Evaluation (LLM-as-a-Judge)
+ ---
+
+ ## πŸ“ Qualitative Evaluation (LLM-as-a-Judge)
 
  | Benchmark | KORMo-10B | smolLM3-3B | olmo2-7B | olmo2-13B | kanana1.5-8B | qwen3-8B | llama3.1-8B | exaone3.5-8B* | gemma3-12B |
  |:----------|---------:|----------:|---------:|---------:|------------:|--------:|------------:|-------------:|-----------:|
@@ -93,6 +85,6 @@ tags: []
  ---
 
  ## Contact
- - μž„κ²½νƒœ(KyungTae Lim), Professor at Seoultech. `[email protected]`
+ - KyungTae Lim, Professor at Seoultech. `[email protected]`
 
- ## Contributor
+ ## Contributor
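Since the removed frontmatter declared `library_name: transformers`, the checkpoint should load with the standard transformers API. A minimal usage sketch follows; the repo id `KORMo-Team/KORMo-10B` is a guess based on the org link above, so check https://huggingface.co/KORMo-Team for the actual model name.

```python
# Minimal loading sketch for a transformers-compatible checkpoint.
# The repo id "KORMo-Team/KORMo-10B" is hypothetical; check
# https://huggingface.co/KORMo-Team for the actual model name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KORMo-Team/KORMo-10B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 10.8B params: bf16 fits on a single large GPU
    device_map="auto",
)

# The model is bilingual, so a Korean prompt ("What is the capital of Korea?").
prompt = "ν•œκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μΈκ°€μš”?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```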