You can directly pass tools as JSON schema or Python functions with `.apply_chat_template`.
### 1. Transformers

To run LFM2, you need to install Hugging Face [`transformers`](https://github.com/huggingface/transformers):

```bash
pip install transformers
```

Here is an example of how to generate an answer with transformers in Python:
```python
# ... (earlier lines of the example are elided in this excerpt)
for i, output in enumerate(outputs):
    # ...
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
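The README also notes that tools can be passed to `.apply_chat_template` either as Python functions or as JSON schema. As a rough illustration of the JSON-schema form — the `get_weather` tool and all of its fields below are invented for this sketch, not taken from the model card:

```python
import json

# Hypothetical tool definition in the OpenAI-style JSON-schema format that
# chat templates commonly accept. Every name here is illustrative only.
get_weather = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name."},
            },
            "required": ["city"],
        },
    },
}

# A list of such schemas is what the chat template would receive, e.g.:
#   tokenizer.apply_chat_template(messages, tools=[get_weather], ...)
print(json.dumps(get_weather, indent=2))
```

Passing a plain Python function instead lets the template derive an equivalent schema from the signature and docstring, so the two forms are interchangeable.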
### 3. llama.cpp

You can run LFM2 with llama.cpp using its [GGUF checkpoint](https://huggingface.co/LiquidAI/LFM2-8B-A1B-GGUF). Find more information in the model card.
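One common way to try a GGUF checkpoint locally is llama.cpp's `llama-cli`, which can pull weights directly from the Hugging Face Hub with `-hf`. The invocation below is only a sketch — flag support varies across llama.cpp versions, so prefer the exact command given in the model card:

```shell
# Sketch: fetch the GGUF weights from the Hugging Face Hub and open an
# interactive chat session. Requires a recent llama.cpp build.
llama-cli -hf LiquidAI/LFM2-8B-A1B-GGUF
```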