lingyezhixing
AI & ML interests
None yet
Recent Activity
new activity 20 days ago: ubergarm/Qwen3.5-122B-A10B-GGUF: Missing about 50~55GB of Q3?
new activity 20 days ago: unsloth/Qwen3.5-122B-A10B-GGUF: Q3 quantization performance issues
Organizations
None yet
Missing about 50~55GB of Q3?
5
#7 opened 20 days ago
by
lingyezhixing
Q3 quantization performance issues
1
#7 opened 20 days ago
by
lingyezhixing
AWQ format for VLLM?
2
#2 opened about 2 months ago
by
Laoxu
Please regenerate to adapt to the latest improvements in llama.cpp
🔥 1
1
#4 opened 2 months ago
by
lingyezhixing
Where are the IQ quants?
5
#1 opened 4 months ago
by
lingyezhixing
IQ4_XS Please
3
#6 opened 4 months ago
by
lingyezhixing
Will there be a 14B AutoAWQ or GPTQ this time?
1
#2 opened 4 months ago
by
lingyezhixing
Will there still be 32B dense models?
➕👀 8
2
#18 opened 7 months ago
by
lingyezhixing
Hello, I want to know if the draft model will reduce the model generation quality?
1
#2 opened 7 months ago
by
lingyezhixing
Smashed 💪 Scored to 82.86 🔥2bit IQ2_M on MMLU Pro single shot benchmark
❤️🔥 2
5
#7 opened 7 months ago
by
xbruce22
There must be something wrong with the size
👀 2
2
#8 opened 11 months ago
by
lingyezhixing
Native FP4 seems to make quantization meaningless
3
#7 opened 7 months ago
by
lingyezhixing
Can you provide some low-precision quantization options?
➕👍 3
11
#3 opened 8 months ago
by
lingyezhixing
Is the GGUF file still being uploaded?
👍 2
3
#2 opened 8 months ago
by
lingyezhixing
FastLLM support?
#17 opened 9 months ago
by
lingyezhixing
There must be something wrong with the size
#7 opened 11 months ago
by
lingyezhixing
Looking forward to larger-scale RP models
#10 opened 11 months ago
by
lingyezhixing
Could you provide a Q6_K quantization file?
🔥 2
3
#10 opened about 1 year ago
by
lingyezhixing