STiFLeR7 committed on
Commit 0056b5d · verified · 1 Parent(s): a15c650

Update README.md

Files changed (1): README.md (+18 -2)
README.md CHANGED
@@ -1,3 +1,19 @@
+---
+license: apache-2.0
+tags:
+- gptq
+- quantized
+- causal-lm
+- transformers
+- pytorch
+- phi-2
+- text-generation
+library_name: transformers
+pipeline_tag: text-generation
+base_model: microsoft/phi-2
+inference: true
+---
+
 # 🧠 Phi-2 GPTQ (Quantized)
 
 This repository provides a 4-bit GPTQ quantized version of the **Phi-2** model by Microsoft, optimized for efficient inference using `gptqmodel`.
@@ -30,8 +46,8 @@ This model is ready-to-use with the Hugging Face `transformers` library.
 
 ## 📖 References
 
-- Microsoft Phi-2: https://huggingface.co/microsoft/phi-2
-- GPTQModel: https://github.com/ModelCoud/GPTQModel
+- Microsoft Phi-2: https://huggingface.co/microsoft/phi-2
+- GPTQModel: https://github.com/ModelCoud/GPTQModel
 - Transformers: https://github.com/huggingface/transformers
 
 ## ⚖️ License
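The substance of this commit is the YAML front matter block prepended to README.md (the lines between the two `---` delimiters), which the Hugging Face Hub reads to populate the model card's license, tags, pipeline tag, and base model. As a minimal sketch of how such a block can be split out of a README, here is a hand-rolled parser for the simple `key: value` / `- item` subset used in this commit; the `README` string below is an abridged stand-in for the actual file, and real tooling would use a YAML library instead:

```python
# Sketch: extracting the YAML front matter that this commit adds to README.md.
# Hand-rolled parser for the simple "key: value" / "- item" subset used here;
# real tooling would use a YAML library instead.

README = """---
license: apache-2.0
tags:
- gptq
- quantized
- causal-lm
base_model: microsoft/phi-2
inference: true
---

# 🧠 Phi-2 GPTQ (Quantized)
"""

def parse_front_matter(text: str) -> dict:
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}                         # no front matter block at the top
    end = lines.index("---", 1)           # closing delimiter
    meta, current_list_key = {}, None
    for line in lines[1:end]:
        if line.startswith("- ") and current_list_key:
            meta[current_list_key].append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if value:                     # scalar value on the same line
                meta[key] = value
                current_list_key = None
            else:                         # bare "key:" starts a block list
                meta[key] = []
                current_list_key = key
    return meta

meta = parse_front_matter(README)
print(meta["license"])   # apache-2.0
print(meta["tags"])      # ['gptq', 'quantized', 'causal-lm']
```

Everything after the closing `---` (here, the `# 🧠 Phi-2 GPTQ (Quantized)` heading onward) is the Markdown body that renders as the model card.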