Latest commit: gitignore : update (d55a6cc, unverified)
Directories:
  .github     ci : upgrade gradle to 2.4.2 (#1263)
  bindings    sign jar for Maven Central repo
  cmake       cmake : update to 3.19 (#351)
  coreml      coreml : wrap inference call in @autoreleasepool to fix memory leak (#1218)
  examples    build : do not use _GNU_SOURCE gratuitously (#1129)
  extra       extra : update 'quantize-all.sh' to quantize all downloaded models (#1054)
  models      models : add quantum models to download-ggml-model.sh (#1235)
  openvino    whisper : add OpenVINO support (#1037)
  samples     Create README.md
  tests       tests : add "threads" to run-tests.sh
Files:
  804 Bytes   Initial release
  673 Bytes   gitignore : update
  96 Bytes    cmake : add submodule whisper.spm
  16.4 kB     ci : upgrade gradle to 2.4.2 (#1263)
  1.07 kB     license : update year (#739)
  13.9 kB     build : do not use _GNU_SOURCE gratuitously (#1129)
  34.8 kB     readme : update CMake build commands (#1231)
  22.1 kB     ggml : sync (ggml-alloc, GPU, eps, etc.) (#1220)
  905 Bytes   ggml : sync (ggml-alloc, GPU, eps, etc.) (#1220)
  257 kB      sync : ggml (CUDA faster rope)
  1.72 kB     ggml : sync (ggml-alloc, GPU, eps, etc.) (#1220)
  3.43 kB     ggml : sync (ggml-alloc, GPU, eps, etc.) (#1220)
  62.1 kB     sync : ggml (HBM + Metal + style) (#1264)
  78.3 kB     sync : ggml (HBM + Metal + style) (#1264)
  68.9 kB     ggml : sync latest llama.cpp (view_src + alloc improvements) (#1247)
  845 Bytes   ggml : sync latest ggml lib
  685 kB      sync : ggml (HBM + Metal + style) (#1264)
  72.3 kB     ggml : sync latest llama.cpp (view_src + alloc improvements) (#1247)
  191 kB      ggml : sync (ggml-alloc, GPU, eps, etc.) (#1220)
  26 kB       whisper : significantly improve the inference quality (#1148)