Instructions for using JetBrains/Mellum-4b-sft-python-gguf with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use JetBrains/Mellum-4b-sft-python-gguf with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "JetBrains/Mellum-4b-sft-python-gguf",
    gguf_file="mellum-4b-sft-python.Q8_0.gguf",  # required when loading from a GGUF repo
    dtype="auto",
)
```
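To actually generate text with the dequantized weights, a minimal sketch (assuming a recent transformers version with GGUF support via `gguf_file`; the Q8_0 weights are dequantized to a torch dtype on load, so this is heavier than llama.cpp inference):

```python
# A minimal generation sketch: load tokenizer and model from the GGUF repo
# and complete a short Python prompt. The fibonacci prompt is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "JetBrains/Mellum-4b-sft-python-gguf"
gguf_file = "mellum-4b-sft-python.Q8_0.gguf"

tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```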
- llama-cpp-python
How to use JetBrains/Mellum-4b-sft-python-gguf with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="JetBrains/Mellum-4b-sft-python-gguf",
    filename="mellum-4b-sft-python.Q8_0.gguf",
)

output = llm(
    "def fibonacci(n):",  # plain prefix completion; see Sample Usage for the FIM format
    max_tokens=512,
    echo=True,
)
print(output)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use JetBrains/Mellum-4b-sft-python-gguf with llama.cpp:
Install with Homebrew (macOS/Linux)
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf JetBrains/Mellum-4b-sft-python-gguf:Q8_0

# Run inference directly in the terminal:
llama-cli -hf JetBrains/Mellum-4b-sft-python-gguf:Q8_0
```
Install with WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf JetBrains/Mellum-4b-sft-python-gguf:Q8_0

# Run inference directly in the terminal:
llama-cli -hf JetBrains/Mellum-4b-sft-python-gguf:Q8_0
```
Use a pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf JetBrains/Mellum-4b-sft-python-gguf:Q8_0

# Run inference directly in the terminal:
./llama-cli -hf JetBrains/Mellum-4b-sft-python-gguf:Q8_0
```
Build from source
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf JetBrains/Mellum-4b-sft-python-gguf:Q8_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf JetBrains/Mellum-4b-sft-python-gguf:Q8_0
```
Use Docker
```bash
docker model run hf.co/JetBrains/Mellum-4b-sft-python-gguf:Q8_0
```
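Whichever install route you choose, llama-server exposes an OpenAI-compatible API (default address http://localhost:8080). A minimal client sketch using the openai Python package; the api_key value is a placeholder, since the local server does not check it:

```python
# A minimal sketch: query a running llama-server via its OpenAI-compatible API.
# Assumes the default listen address http://localhost:8080 (adjust for --port).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

completion = client.completions.create(
    model="JetBrains/Mellum-4b-sft-python-gguf",  # name is ignored by a single-model server
    prompt="def fibonacci(n):",
    max_tokens=64,
    temperature=0,
)
print(completion.choices[0].text)
```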
- LM Studio
- Jan
- Ollama
How to use JetBrains/Mellum-4b-sft-python-gguf with Ollama:
```bash
ollama run hf.co/JetBrains/Mellum-4b-sft-python-gguf:Q8_0
```
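Once pulled, the model can also be queried programmatically through Ollama's local REST API (default port 11434). A minimal sketch using the requests package; the fibonacci prompt is illustrative:

```python
# A minimal sketch: one-shot completion via Ollama's local REST API.
# Assumes Ollama is running on its default port 11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/JetBrains/Mellum-4b-sft-python-gguf:Q8_0",
        "prompt": "def fibonacci(n):",
        "stream": False,
    },
)
print(resp.json()["response"])
```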
- Unsloth Studio
How to use JetBrains/Mellum-4b-sft-python-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for JetBrains/Mellum-4b-sft-python-gguf to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for JetBrains/Mellum-4b-sft-python-gguf to start chatting
```
Using HuggingFace Spaces for Unsloth
```text
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for JetBrains/Mellum-4b-sft-python-gguf to start chatting
```
- Docker Model Runner
How to use JetBrains/Mellum-4b-sft-python-gguf with Docker Model Runner:
```bash
docker model run hf.co/JetBrains/Mellum-4b-sft-python-gguf:Q8_0
```
- Lemonade
How to use JetBrains/Mellum-4b-sft-python-gguf with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull JetBrains/Mellum-4b-sft-python-gguf:Q8_0
```
Run and chat with the model
```bash
lemonade run user.Mellum-4b-sft-python-gguf-Q8_0
```
List all available models
```bash
lemonade list
```
Model Description
Mellum-4b-sft-python is a fine-tuned version of JetBrains' first open-source large language model (LLM) optimized for code-related tasks.
Pre-trained on over 4 trillion tokens with a context window of 8192 tokens across multiple programming languages, and then fine-tuned, Mellum-4b-sft-python is tailored specifically for code completion in Python. The model follows a LLaMA-style architecture with 4 billion parameters, making it efficient for both cloud inference (e.g., via vLLM) and local deployment (e.g., using llama.cpp or Ollama).
Mellum was trained using Automatic Mixed Precision (AMP) with bf16 precision. The original version uploaded to Hugging Face retains the bf16 format; this repository additionally provides a GGUF conversion (8-bit Q8_0 quantization) for local inference.
Designed for integration into professional developer tooling (e.g., intelligent code suggestions in IDEs), AI-powered coding assistants, and research on code understanding and generation, Mellum is also well-suited for educational applications and fine-tuning experiments.
Limitations
- Biases: May reflect biases present in public codebases; for example, generated code is likely to resemble the style of open-source repositories.
- Security: Code suggestions should not be assumed to be secure or free of vulnerabilities.
- Format: The model is best suited to the fill-in-the-middle (FIM) completion objective, with additional context files included in the prompt; see the example under Sample Usage.
Sample Usage
Here are examples of how to run and sample from the model.
Fill-in-the-middle example
```bash
llama-cli -m mellum-4b-sft-python.Q8_0.gguf --temp 0 -p $'<filename>main.py\n<fim_suffix><fim_prefix>def fibonacci(n):\n    <fim_middle>'
```
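The same FIM prompt can be assembled programmatically. A minimal sketch with llama-cpp-python, mirroring the token layout above (note the suffix-before-prefix order in the model card's example); the fibonacci prefix is just an illustration:

```python
# A minimal sketch: fill-in-the-middle completion with llama-cpp-python,
# using the same special-token layout as the llama-cli example above.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="JetBrains/Mellum-4b-sft-python-gguf",
    filename="mellum-4b-sft-python.Q8_0.gguf",
)

prefix = "def fibonacci(n):\n    "
suffix = ""  # code that follows the cursor; empty here
prompt = f"<filename>main.py\n<fim_suffix>{suffix}<fim_prefix>{prefix}<fim_middle>"

output = llm(prompt, max_tokens=128, temperature=0)
print(output["choices"][0]["text"])
```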
Citation
If you use this model, please cite:
```bibtex
@misc{Mellum-4b-base,
  title  = {Mellum-4b-base},
  author = {Pavlichenko, Nikita and Nazarov, Iurii and Dolgov, Ivan and Garanina, Ekaterina and Lasocki, Karol and Reshetnikova, Julia and Boitsov, Sergei and Bondyrev, Ivan and Karaeva, Dariia and Sheptyakov, Maksim and Ustalov, Dmitry and Mukhin, Artem and Proshev, Semyon and Abramov, Nikita and Kolomyttseva, Olga and Lysaniuk, Kseniia and Zavidnyi, Ilia and Semenkin, Anton and Tankov, Vladislav and Sazanovich, Uladzislau},
  year   = {2025},
}
```
Contact
For questions, collaborations, and requests, reach out to us at mellum@jetbrains.com.
Model tree for JetBrains/Mellum-4b-sft-python-gguf
Base model: JetBrains/Mellum-4b-base