Tags: Text Generation · Safetensors · vllm · minimax · gptq · 4-bit precision · quantization · Mixture of Experts · w4a16 · conversational · custom_code
Instructions for using avtc/MiniMax-M2-GPTQMODEL-W4A16 with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Local Apps
- vLLM
How to use avtc/MiniMax-M2-GPTQMODEL-W4A16 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "avtc/MiniMax-M2-GPTQMODEL-W4A16"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "avtc/MiniMax-M2-GPTQMODEL-W4A16",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/avtc/MiniMax-M2-GPTQMODEL-W4A16
```
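The same OpenAI-compatible endpoint can also be called from Python. A minimal sketch using only the standard library (the URL, port, and model name follow the vLLM command above; the commented-out part requires the server to actually be running):

```python
import json
import urllib.request

# Build an OpenAI-compatible chat-completions request for the local vLLM server.
payload = {
    "model": "avtc/MiniMax-M2-GPTQMODEL-W4A16",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the server running, uncomment to send the request and print the reply:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client (such as the `openai` Python package pointed at `http://localhost:8000/v1`) works the same way, since vLLM serves the standard chat-completions schema.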
- SGLang
How to use avtc/MiniMax-M2-GPTQMODEL-W4A16 with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "avtc/MiniMax-M2-GPTQMODEL-W4A16" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "avtc/MiniMax-M2-GPTQMODEL-W4A16",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "avtc/MiniMax-M2-GPTQMODEL-W4A16" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "avtc/MiniMax-M2-GPTQMODEL-W4A16",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use avtc/MiniMax-M2-GPTQMODEL-W4A16 with Docker Model Runner:
```shell
docker model run hf.co/avtc/MiniMax-M2-GPTQMODEL-W4A16
```
README.md:

This repository contains a **4-bit quantized version** of the MiniMax-M2 model.

The quantization was performed using **[GPTQModel](https://github.com/ModelCloud/GPTQModel)** with an experimental modification that **feeds the whole dataset to each expert** to achieve improved quality.

**Calibration Dataset:**

The dataset used during quantization consists of 1536 samples:

* c4/en (1024)
* arc (164)
* gsm8k (164)
* humaneval (164)
* alpaca (20)

**Hardware & Performance:**

This model is verified to run with Tensor Parallel (TP) on **8x NVIDIA RTX 3090** GPUs with a context window of **192,500 tokens**.
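As a quick sanity check on the calibration mix, the per-source sample counts sum to the stated total:

```python
# Per-source sample counts from the calibration dataset described above.
counts = {"c4/en": 1024, "arc": 164, "gsm8k": 164, "humaneval": 164, "alpaca": 20}
total = sum(counts.values())
print(total)  # 1536
```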