examples : update whisper.objc README.md (#2916)

This commit updates the whisper.objc README.md to reflect the switch to
the xcframework and the new build process. Since whisper.cpp is no longer
compiled by the example project, and the library from the xcframework is
used instead, the in-project build instructions have been removed.
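The workflow introduced by this diff can be sketched as a short shell session. The script names (`build_xcframework.sh`, `models/download-ggml-model.sh`) are taken from the diff itself; only the dummy-model step is actually executed below (in a scratch directory), since the other steps require a full whisper.cpp checkout and the Xcode toolchain.

```shell
# Steps from the updated README (run from the whisper.cpp repo root):
#   ./build_xcframework.sh                   # build whisper.xcframework once
#   ./models/download-ggml-model.sh base.en  # fetch the base.en ggml model
#
# The README's workaround for skipping Core ML conversion is to create a
# dummy *.mlmodelc directory. Demonstrated here in a scratch directory:
scratch=$(mktemp -d)
mkdir -p "$scratch/models/ggml-base.en-encoder.mlmodelc"
ls "$scratch/models"
```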
examples/whisper.objc/README.md (CHANGED, +13 -29)
````diff
@@ -11,39 +11,23 @@ https://user-images.githubusercontent.com/1991296/204126266-ce4177c6-6eca-4bd9-b
 
 ## Usage
 
+This example uses the whisper.xcframework which needs to be built first using the following command:
 ```bash
-
-open whisper.cpp/examples/whisper.objc/whisper.objc.xcodeproj/
-
-# if you don't want to convert a Core ML model, you can skip this step by create dummy model
-mkdir models/ggml-base.en-encoder.mlmodelc
+./build_xcframework.sh
 ```
 
-
-
-
-
-Also, don't forget to add the `-DGGML_USE_ACCELERATE` compiler flag for `ggml.c` in Build Phases.
-This can significantly improve the performance of the transcription:
+A model is also required to be downloaded and can be done using the following command:
+```bash
+./models/download-ggml-model.sh base.en
+```
 
-
+If you don't want to convert a Core ML model, you can skip this step by creating a dummy model:
+```bash
+mkdir models/ggml-base.en-encoder.mlmodelc
+```
 
 ## Core ML
 
-
-
-
-
-Then follow the [`Core ML support` section of readme](../../README.md#core-ml-support) for convert the model.
-
-In this project, it also added `-O3 -DNDEBUG` to `Other C Flags`, but adding flags to app proj is not ideal in real world (applies to all C/C++ files), consider splitting xcodeproj in workspace in your own project.
-
-## Metal
-
-You can also enable Metal to make the inference run on the GPU of your device. This might or might not be more efficient
-compared to Core ML depending on the model and device that you use.
-
-To enable Metal, just add `-DGGML_USE_METAL` instead off the `-DWHISPER_USE_COREML` flag and you are ready.
-This will make both the Encoder and the Decoder run on the GPU.
-
-If you want to run the Encoder with Core ML and the Decoder with Metal then simply add both `-DWHISPER_USE_COREML -DGGML_USE_METAL` flags. That's all!
+Follow the [`Core ML support` section of readme](../../README.md#core-ml-support) to convert the model.
+That is all that needs to be done to use the Core ML model in the app. The converted model is a
+resource in the project and will be used if it is available.
````