How to locally host Qwen3-Coder

#11
by judycc - opened

Are there compatible inference tools for deploying the new model, Qwen3-Coder, besides Transformers? SGLang? vLLM? Or TensorRT?

I presume the Qwen Coder (CLI application) + Ollama would do the trick, no?
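For reference, here is a rough sketch of two common local-serving routes. The exact model tag (`Qwen/Qwen3-Coder-30B-A3B-Instruct` and the `qwen3-coder` Ollama name) is an assumption on my part; check the model card for the published identifiers and the minimum vLLM/Ollama versions that support the architecture.

```shell
# Option 1: vLLM, exposing an OpenAI-compatible API on port 8000.
# The model ID is assumed; verify it on the Hugging Face model card.
vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct --port 8000

# Option 2: Ollama, which pulls a quantized build if one is published
# under this tag (tag name assumed).
ollama run qwen3-coder
```

Once either server is up, the Qwen Coder CLI (or any OpenAI-compatible client) can be pointed at the local endpoint.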
