Downloads last month: 3,743
Format: GGUF
Model size: 30B params
Architecture: deepseek2
Available quantizations: 4-bit, 8-bit, 16-bit (see the loading sketch at the end of this section)
Model tree for ngxson/GLM-4.7-Flash-GGUF
Quantized (68): this model
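
Since the repo ships 4-, 8-, and 16-bit GGUF quantizations, one way to try a variant locally is through llama-cpp-python, which can pull a file straight from the Hugging Face repo. The sketch below is a minimal example under assumptions: the `*Q4_K_M.gguf` filename glob for the 4-bit variant and the generation parameters are guesses, so check the repo's file listing for the actual quant names.

```python
# Minimal sketch: download one quantized GGUF from the repo and run a chat turn.
# Requires: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# The filename glob is an assumed name for the 4-bit variant; adjust it to
# whichever quantization actually exists in ngxson/GLM-4.7-Flash-GGUF.
llm = Llama.from_pretrained(
    repo_id="ngxson/GLM-4.7-Flash-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,        # context window; raise if you have the memory for it
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-sentence summary of GGUF."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

Picking a smaller quant (4-bit) trades some accuracy for a much lower memory footprint; the 8-bit and 16-bit files follow the same loading pattern, only the `filename` pattern changes.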