Commit History
finetune: SGD optimizer, more CLI args (llama/13873)
f585fe7
ggml : update `ggml_rope_multi` (llama/12665)
b4896dc
ggml : remove invalid portPos specifiers from dot files (llama/14838)
a91e2f3
sync : resolve conflicts (ggml/0)
497add0
ggml : add ggml_scale_bias (llama/14417)
573d50a
ggml : implement GEGLU_ERF and GEGLU_QUICK ops (llama/14445)
f798922
committed by Sigbjørn Skjæret
ggml: backward pass for split swiglu (llama/14483)
45c8df6
ggml : fix FA mask dim 2 and 3 (llama/14505)
a89dc81
llama : initial Mamba-2 support (llama/9126)
1b4087e
ggml : support bcast ggml_soft_max_ext, ggml_flash_attn_ext (llama/14435)
ebacb3e
ggml : Callback before abort (llama/14481)
ccee17d
ggml : remove trailing whitespace (llama/0)
e37767f
Add Conv2d for CPU (llama/14388)
68eb27a
ggml : implement REGLU/GEGLU/SWIGLU ops (llama/14158)
add5c0f
vulkan: Add fusion support for RMS_NORM+MUL (llama/14366)
737f12d
ggml-cpu: enable IBM NNPA Vector Intrinsics (llama/14317)
fea8f94
ggml-cpu : "align corners" for bilinear upscale/downscale (ggml/1285)
88e7829
Add `ggml_roll` (ggml/1274)
71923e5
ggml : remove unused ggml_context_container (ggml/1272)
e6d6988
ggml : add ggml_repeat_4d (llama/13824)
3fe8af8
ggml : Fix backtrace breaking Windows build (#3203)
3f352bd
committed by Daniel Tang (unverified)
ggml : Print backtrace on uncaught C++ exceptions (ggml/1232)
1459465
committed by Daniel Tang
ggml : add ggml_gelu_erf() (llama/13667)
6c9cd9a
ggml : fix apple OS check in ggml_print_backtrace (ggml/1229)
5c0b540
committed by Diego Devesa
ggml : Fix missing backtrace on Linux (ggml/1228)
82ee857
committed by Daniel Tang
metal : optimize MoE for large batches (llama/13388)
d51c0d3
llama/ggml: add LLM training support (llama/10544)
8d3b3c1
CUDA: fix bad asserts for partial offload (llama/13337)
23e676b
ggml: move fp16/bf16 conversion optimizations to CPU backend + export conversion APIs (llama/13107)
c47823e
ggml : fix trailing whitespaces (llama/0)
5d27bbf
ggml : Depthwise 2D convolution (ggml/1152)
0c950d5
ggml : add bilinear upscale support (ggml/1185)
4c5e449
committed by Diego Devesa
ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)
ba7a5f8
committed by Diego Devesa
llama : add option to override model tensor buffers (llama/11397)
3d000b6
committed by Diego Devesa
metal : improve FA + improve MoE (llama/12612)
04a3389
llama: Add support for RWKV v7 architecture (llama/12412)
727de7e
ggml : ggml_compute_forward_concat() for arbitrary tensor type (ggml/1118)
c9a49f9
committed by vmobilis
ggml : portability fixes for VS 2017 (llama/12150)
49e3343
committed by Marcus Groeber (mgroeber9110)
ggml-cpu: Support s390x SIMD Instruction Set (llama/12019)
4aa54ec
committed by Aaron Teo, Jinyang He, junchao-zhao
fix: typos in documentation files (llama/11791)
5c6d350
committed by Maxim Evtush
CPU/CUDA: fix (GQA) mul mat back, add CUDA support (llama/11380)
855a9fe
CUDA: backwards pass for misc. ops, add tests (llama/11257)
2fbcec1
RoPE: fix back, CUDA support for back + noncont. (llama/11240)
131a21e
ggml : add option to not print stack on abort (ggml/1081)
9b2706e
committed by William Tambellini, Diego Devesa