mlabonne committed on
Commit 458d938 · verified · 1 Parent(s): bce14d3

Update README.md

Files changed (1): README.md (+2, -3)
README.md CHANGED

@@ -119,9 +119,9 @@ You can directly pass tools as JSON schema or Python functions with `.apply_chat
 
 ### 1. Transformers
 
-To run LFM2, you need to install Hugging Face [`transformers`](https://github.com/huggingface/transformers) from source as follows:
+To run LFM2, you need to install Hugging Face [`transformers`](https://github.com/huggingface/transformers):
 ```bash
-pip install git+https://github.com/huggingface/transformers.git@0c9a72e4576fe4c84077f066e585129c97bfd4e6
+pip install transformers
 ```
 
 Here is an example of how to generate an answer with transformers in Python:
@@ -221,7 +221,6 @@ for i, output in enumerate(outputs):
 print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
 ```
 
-
 ### 3. llama.cpp
 
 You can run LFM2 with llama.cpp using its [GGUF checkpoint](https://huggingface.co/LiquidAI/LFM2-8B-A1B-GGUF). Find more information in the model card.
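The first hunk's context line mentions passing tools as JSON schema or Python functions via `.apply_chat_template`. As a minimal sketch of what such a JSON-schema tool definition looks like (the `get_weather` function, its description, and its parameters are purely illustrative, not from the README):

```python
import json

# Illustrative tool definition in the JSON-schema style that recent
# transformers versions accept via tokenizer.apply_chat_template(..., tools=[...]).
# The function name and parameters here are hypothetical examples.
get_weather = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# With a downloaded LFM2 tokenizer, the call would look roughly like:
# prompt = tokenizer.apply_chat_template(
#     messages, tools=[get_weather], add_generation_prompt=True, tokenize=False
# )

print(json.dumps(get_weather, indent=2))
```

Passing Python functions instead of a schema is also supported by `apply_chat_template`; in that case transformers derives the schema from the function's signature and docstring.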