Tags: Text-to-Speech · Safetensors · Qwen3-TTS · MLX · mlx-audio · speech · speech generation · voice cloning · tts · 8-bit precision
How to use mlx-community/Qwen3-TTS-12Hz-1.7B-Base-8bit with MLX:
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir Qwen3-TTS-12Hz-1.7B-Base-8bit mlx-community/Qwen3-TTS-12Hz-1.7B-Base-8bit
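The same download can be done from Python via huggingface_hub's `snapshot_download`. A minimal sketch; the repo id and local directory mirror the CLI command above, and the helper name `download_model` is illustrative:

```python
# Download the model snapshot from the Hub with the Python API
# (equivalent to the huggingface-cli command; requires huggingface_hub).
REPO_ID = "mlx-community/Qwen3-TTS-12Hz-1.7B-Base-8bit"
LOCAL_DIR = "Qwen3-TTS-12Hz-1.7B-Base-8bit"

def download_model() -> str:
    """Fetch all model files (weights, config, tokenizer) into LOCAL_DIR."""
    from huggingface_hub import snapshot_download  # deferred until actually needed
    return snapshot_download(repo_id=REPO_ID, local_dir=LOCAL_DIR)
```

Calling `download_model()` returns the local path that now contains the model files.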
mlx-community/Qwen3-TTS-12Hz-1.7B-Base-8bit
This model was converted to MLX format from Qwen/Qwen3-TTS-12Hz-1.7B-Base using mlx-audio version 0.3.0.
Refer to the original model card for more details on the model.
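The -8bit suffix means the weights were quantized to 8-bit precision during conversion. A back-of-the-envelope sketch of the resulting memory saving, using the 1.7B parameter count from the model name (pure arithmetic, not an mlx-audio API, and ignoring the small per-group scale overhead that quantized formats add):

```python
# Approximate weight-memory footprint of a 1.7B-parameter model
# at bf16 (2 bytes/param) vs. 8-bit (1 byte/param, overhead ignored).
PARAMS = 1.7e9

bf16_gb = PARAMS * 2 / 1024**3   # 16-bit weights
int8_gb = PARAMS * 1 / 1024**3   # 8-bit quantized weights

print(f"bf16: {bf16_gb:.2f} GB, 8-bit: {int8_gb:.2f} GB")
# → bf16: 3.17 GB, 8-bit: 1.58 GB
```

Halving the weight footprint is what makes a model of this size comfortable to run locally under MLX on Apple-silicon machines with limited unified memory.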
Use with mlx-audio
pip install -U mlx-audio
CLI Example:
python -m mlx_audio.tts.generate --model mlx-community/Qwen3-TTS-12Hz-1.7B-Base-8bit --text "Hello, this is a test."
Python Example:
from mlx_audio.tts.utils import load_model
from mlx_audio.tts.generate import generate_audio
model = load_model("mlx-community/Qwen3-TTS-12Hz-1.7B-Base-8bit")
generate_audio(
    model=model,
    text="Hello, this is a test.",
    ref_audio="path_to_audio.wav",  # reference clip whose voice is cloned
    file_prefix="test_audio",  # prefix for the generated audio file
)
Downloads last month: 2,501
Model size: 0.8B params
Tensor types: BF16, U32
Quantization: 8-bit