Instructions for using SanctumAI/mathstral-7B-v0.1-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use SanctumAI/mathstral-7B-v0.1-GGUF with Transformers:
# Load model directly; a GGUF-only repo needs the quantization file named explicitly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "SanctumAI/mathstral-7B-v0.1-GGUF", gguf_file="mathstral-7B-v0.1.Q2_K.gguf", dtype="auto"
)
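For generation, the tokenizer can be loaded from the same repo. A minimal sketch, assuming the gguf_file loading path above works for this quant (the prompt text is illustrative):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "SanctumAI/mathstral-7B-v0.1-GGUF", gguf_file="mathstral-7B-v0.1.Q2_K.gguf"
)
# Wrap the question in the Mistral [INST] template this model expects
inputs = tokenizer("[INST]What is 2 + 2? [/INST]", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

- llama-cpp-python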
How to use SanctumAI/mathstral-7B-v0.1-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SanctumAI/mathstral-7B-v0.1-GGUF",
    filename="mathstral-7B-v0.1.Q2_K.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
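For instruction-style prompting, llama-cpp-python also exposes a chat API that applies the chat template stored in the GGUF metadata; a minimal sketch (the question text is illustrative):

# Chat-style completion using the template from the GGUF metadata
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the derivative of x^2?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])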
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use SanctumAI/mathstral-7B-v0.1-GGUF with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M
Use Docker
docker model run hf.co/SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M
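Whichever install route you choose, llama-server exposes an OpenAI-compatible HTTP API. A minimal Python sketch, assuming the server is running locally on its default port 8080 (the model name and question text are illustrative):

import requests

# Query the OpenAI-compatible chat endpoint served by llama-server
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "mathstral-7B-v0.1",  # informational for a single-model server
        "messages": [{"role": "user", "content": "Solve 3x + 5 = 20 for x."}],
        "max_tokens": 256,
    },
)
print(resp.json()["choices"][0]["message"]["content"])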
- LM Studio
- Jan
- Ollama
How to use SanctumAI/mathstral-7B-v0.1-GGUF with Ollama:
ollama run hf.co/SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M
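Once pulled, the model can also be queried programmatically through Ollama's local REST API. A minimal Python sketch, assuming Ollama is running on its default port 11434 (the question text is illustrative):

import requests

# Non-streaming chat request against Ollama's local API
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "Integrate sin(x) with respect to x."}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])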
- Unsloth Studio
How to use SanctumAI/mathstral-7B-v0.1-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for SanctumAI/mathstral-7B-v0.1-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for SanctumAI/mathstral-7B-v0.1-GGUF to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for SanctumAI/mathstral-7B-v0.1-GGUF to start chatting
- Docker Model Runner
How to use SanctumAI/mathstral-7B-v0.1-GGUF with Docker Model Runner:
docker model run hf.co/SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M
- Lemonade
How to use SanctumAI/mathstral-7B-v0.1-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull SanctumAI/mathstral-7B-v0.1-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.mathstral-7B-v0.1-GGUF-Q4_K_M
List all available models
lemonade list
This model was quantized by SanctumAI. To leave feedback, join our community on Discord.
Mathstral 7B v0.1 GGUF
Model creator: mistralai
Original model: mathstral-7B-v0.1
Model Summary:
Mathstral 7B is a model specializing in mathematical and scientific tasks, based on Mistral 7B. You can read more in the official blog post.
Prompt Template:
If you're using the Sanctum app, simply use the Mistral model preset.
[INST]{prompt} [/INST]
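Outside the Sanctum app, the template can be applied by hand before sending raw text to a completion endpoint. A minimal Python sketch (the helper name and example question are illustrative):

# Wrap a user question in the Mistral instruction template shown above
def format_prompt(prompt: str) -> str:
    return f"[INST]{prompt} [/INST]"

print(format_prompt("What is 17 * 24?"))
# -> [INST]What is 17 * 24? [/INST]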
Hardware Requirements Estimate
| Name | Quant method | Size | Memory (RAM/VRAM) required |
|---|---|---|---|
| mathstral-7B-v0.1.Q2_K.gguf | Q2_K | 2.72 GB | ? GB |
| mathstral-7B-v0.1.Q3_K_S.gguf | Q3_K_S | 3.17 GB | ? GB |
| mathstral-7B-v0.1.Q3_K_M.gguf | Q3_K_M | 3.52 GB | ? GB |
| mathstral-7B-v0.1.Q3_K_L.gguf | Q3_K_L | 3.83 GB | ? GB |
| mathstral-7B-v0.1.Q4_0.gguf | Q4_0 | 4.11 GB | ? GB |
| mathstral-7B-v0.1.Q4_K_S.gguf | Q4_K_S | 4.14 GB | ? GB |
| mathstral-7B-v0.1.Q4_K_M.gguf | Q4_K_M | 4.37 GB | ? GB |
| mathstral-7B-v0.1.Q4_K.gguf | Q4_K | 4.37 GB | ? GB |
| mathstral-7B-v0.1.Q4_1.gguf | Q4_1 | 4.56 GB | ? GB |
| mathstral-7B-v0.1.Q5_0.gguf | Q5_0 | 5.00 GB | ? GB |
| mathstral-7B-v0.1.Q5_K_S.gguf | Q5_K_S | 5.00 GB | ? GB |
| mathstral-7B-v0.1.Q5_K_M.gguf | Q5_K_M | 5.14 GB | ? GB |
| mathstral-7B-v0.1.Q5_K.gguf | Q5_K | 5.14 GB | ? GB |
| mathstral-7B-v0.1.Q5_1.gguf | Q5_1 | 5.45 GB | ? GB |
| mathstral-7B-v0.1.Q6_K.gguf | Q6_K | 5.95 GB | ? GB |
| mathstral-7B-v0.1.Q8_0.gguf | Q8_0 | 7.70 GB | ? GB |
| mathstral-7B-v0.1.f16.gguf | f16 | 14.50 GB | ? GB |
Disclaimer
Sanctum is not the creator, originator, or owner of any Model featured in the Models section of the Sanctum application. Each Model is created and provided by third parties. Sanctum does not endorse, support, represent, or guarantee the completeness, truthfulness, accuracy, or reliability of any Model listed there. You understand that supported Models can produce content that is offensive, harmful, inaccurate, deceptive, or otherwise inappropriate. Each Model is the sole responsibility of the person or entity who originated it. Sanctum may not monitor or control the Models supported and cannot, and does not, take responsibility for any such Model. Sanctum disclaims all warranties or guarantees about the accuracy, reliability, or benefits of the Models. Sanctum further disclaims any warranty that a Model will meet your requirements, be secure, uninterrupted, or available at any time or location, or be error-free or virus-free, or that any errors will be corrected. You are solely responsible for any damage resulting from your use of or access to the Models, your downloading of any Model, or your use of any other Model provided by or through Sanctum.