0xsourcecode committed on
Commit ee7503d · unverified · 1 parent: ad4065a

readme : highlight OpenBLAS support (#956)


* highlight openblas support

* Update README.md

Files changed (1)
  1. README.md +13 -0

README.md
@@ -21,6 +21,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisp
  - Runs on the CPU
  - [Partial GPU support for NVIDIA via cuBLAS](https://github.com/ggerganov/whisper.cpp#nvidia-gpu-support-via-cublas)
  - [Partial OpenCL GPU support via CLBlast](https://github.com/ggerganov/whisper.cpp#opencl-gpu-support-via-clblast)
+ - [BLAS CPU support via OpenBLAS](https://github.com/ggerganov/whisper.cpp#blas-cpu-support-via-openblas)
  - [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/whisper.h)

  Supported platforms:
@@ -346,6 +347,18 @@ cp bin/* ../

  Run all the examples as usual.

+ ## BLAS CPU support via OpenBLAS
+
+ Encoder processing can be accelerated on the CPU via OpenBLAS.
+ First, make sure you have installed `openblas`: https://www.openblas.net/
+
+ Now build `whisper.cpp` with OpenBLAS support:
+
+ ```
+ make clean
+ WHISPER_OPENBLAS=1 make -j
+ ```
+
  ## Limitations

  - Inference only
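
The added section tells the reader to install `openblas` before building. A pre-build sanity check like the following sketch can catch a missing installation early; it is an assumed workflow using `pkg-config`, not something this commit adds, and the `openblas` pkg-config name may differ per distribution:

```shell
# Hypothetical pre-build check (not part of the commit): verify OpenBLAS is
# visible to pkg-config before running `WHISPER_OPENBLAS=1 make -j`.
if command -v pkg-config >/dev/null && pkg-config --exists openblas; then
    echo "OpenBLAS found: $(pkg-config --modversion openblas)"
else
    echo "OpenBLAS not found; install it from https://www.openblas.net/ first"
fi
```

Either branch prints a status line, so the check never aborts the build script itself; the reader decides whether to proceed.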