---
base_model:
- OpenPipe/mistral-ft-optimized-1227
- mlabonne/NeuralHermes-2.5-Mistral-7B
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1227
- mlabonne/NeuralHermes-2.5-Mistral-7B
license: apache-2.0
pipeline_tag: text-generation
---

# llambses-1

llambses-1 is a TIES merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing), with [Kukedlc/NeuralSynthesis-7b-v0.4-slerp](https://huggingface.co/Kukedlc/NeuralSynthesis-7b-v0.4-slerp) as the base model:
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)

## 🧩 Configuration

```yaml
models:
  - model: Kukedlc/NeuralSynthesis-7b-v0.4-slerp
  - model: OpenPipe/mistral-ft-optimized-1227
    parameters:
      density: 0.5
      weight: 0.6
  - model: mlabonne/NeuralHermes-2.5-Mistral-7B
    parameters:
      density: 0.5
      weight: 0.4
merge_method: ties
base_model: Kukedlc/NeuralSynthesis-7b-v0.4-slerp
parameters:
  normalize: true
dtype: float16
```
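
To reproduce the merge locally, this configuration can be fed to mergekit's command-line tool. The snippet below is a minimal sketch, assuming the YAML above is saved as `config.yaml`; LazyMergekit runs the equivalent steps in Colab, and flag availability may vary across mergekit versions:

```python
!pip install -qU mergekit

# Hypothetical local run: merge into ./llambses-1, copying the tokenizer
# and lazily unpickling weight shards to reduce peak RAM usage.
!mergekit-yaml config.yaml ./llambses-1 --copy-tokenizer --lazy-unpickle
```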

## LightEval

Evaluation results obtained with [LightEval](https://github.com/huggingface/lighteval):

| Task                               | Version | Metric   |  Value |    | Stderr |
|------------------------------------|--------:|----------|-------:|----|-------:|
| leaderboard:arc:challenge:0        |       0 | acc      | 0.5870 | ±  | 0.0144 |
|                                    |         | acc_norm | 0.6058 | ±  | 0.0143 |
| harness:bigbench:causal_judgment:0 |       0 | acc      | 0.6000 | ±  | 0.0356 |
|                                    |         | acc_norm | 0.5895 | ±  | 0.0358 |
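
These scores can in principle be reproduced with lighteval. The snippet below is a hedged template rather than exact syntax: the CLI flags have changed across lighteval releases, and the task strings follow the `suite|task|num_fewshot|truncate` format used in the table above.

```python
!pip install -qU lighteval accelerate

# Hedged sketch; check your installed lighteval release for the exact flags.
!lighteval accelerate \
    --model_args "pretrained=bfuzzy1/llambses-1" \
    --tasks "leaderboard|arc:challenge|0|0,harness|bigbench:causal_judgment|0|0" \
    --output_dir ./evals
```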

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "bfuzzy1/llambses-1"

# Llama-2-style chat template: user turns are wrapped in [INST] ... [/INST]
# and the system prompt is injected between <<SYS>> tags.
chat_template = """{% for message in messages %}
{% if message['role'] == 'user' %}
{{ bos_token + '[INST] ' + message['content'] + ' [/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% elif message['role'] == 'system' %}
{{ '<<SYS>>\\n' + message['content'] + '\\n<</SYS>>\\n\\n' }}
{% endif %}
{% endfor %}
"""

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "What is a large language model?"},
]

tokenizer = AutoTokenizer.from_pretrained(model)
tokenizer.chat_template = chat_template  # override the default template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=100, do_sample=True, temperature=0.7, top_k=3, top_p=0.95)
print(outputs[0]["generated_text"])
```
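
The pipeline call above is the simplest route. As an alternative, here is a minimal sketch using `generate()` directly, reusing the `model` string, `tokenizer`, template, and `messages` defined above:

```python
from transformers import AutoModelForCausalLM

# Load the weights once and drive generation manually.
model_hf = AutoModelForCausalLM.from_pretrained(
    model, torch_dtype=torch.float16, device_map="auto"
)

# apply_chat_template with return_tensors="pt" yields token IDs directly.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model_hf.device)

output_ids = model_hf.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```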