yam-peleg/Hebrew-Mistral-7B
Text Generation • 8B
Open-source models pretrained in Hebrew.
Note Current state-of-the-art base model trained in Hebrew. Continuously pre-trained from Mistral-7B, with the vocabulary extended by 32,000 additional Hebrew tokens.
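For reference, a minimal sketch of loading and sampling from Hebrew-Mistral-7B with the standard Hugging Face transformers API; the model id comes from the listing above, while the prompt and generation settings are illustrative:

```python
# Minimal sketch: load Hebrew-Mistral-7B and generate a short completion.
# Assumes the standard transformers API; settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yam-peleg/Hebrew-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# The extended tokenizer should report a larger vocabulary than the base
# Mistral-7B tokenizer (32,000 tokens), reflecting the added Hebrew tokens.
print(len(tokenizer))

prompt = "שלום, קוראים לי"  # "Hello, my name is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```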
Note Hebrew-Mistral-7B continuously pre-trained with a 200K context window.
Note The largest Hebrew base model at the moment. Important note: under-trained compared to the others.
Note Previous state-of-the-art base model trained in Hebrew. Continuously pre-trained from Gemma-7B and extended to 11B parameters.
Note Updated: V2! Previous state-of-the-art base model trained in Hebrew. Continuously pre-trained from Gemma-7B and extended to 11B parameters.
Note Instruct fine-tune of Hebrew-Gemma-11B, quickly trained for demonstration purposes.
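A minimal sketch of querying the instruct fine-tune through the standard transformers chat-template API. The repository id below is an assumption (the note names the model only as the instruct fine-tune of Hebrew-Gemma-11B), and the message content is illustrative:

```python
# Minimal sketch: query the instruct fine-tune via its chat template.
# The repo id is an assumption, not stated in this listing.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yam-peleg/Hebrew-Gemma-11B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [
    {"role": "user", "content": "מה בירת ישראל?"}  # "What is the capital of Israel?"
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```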