How to run with vLLM

#9
by d8rt8v - opened

vLLM 0.17.1 returns an error:

ValueError: Tokenizer class TokenizersBackend does not exist or is not currently imported.

Patching tokenizer_config.json to replace "tokenizer_class": "TokenizersBackend" with "tokenizer_class": "Qwen2Tokenizer" does not solve the issue; another error emerges:

ValueError: The checkpoint you are trying to load has model type qwen3_5 but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date. You can update Transformers with the command pip install --upgrade transformers. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command pip install git+https://github.com/huggingface/transformers.git
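For reference, the tokenizer_class patch described above can be scripted rather than edited by hand. This is a minimal sketch: the helper name and the config path are illustrative, not part of any official API.

```python
# Sketch: rewrite the tokenizer_class field in a model's tokenizer_config.json.
# The function name and path argument are illustrative assumptions.
import json
from pathlib import Path

def patch_tokenizer_class(config_path, new_class="Qwen2Tokenizer"):
    """Replace the tokenizer_class value in tokenizer_config.json and
    write the file back. Returns the value now stored in the file."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["tokenizer_class"] = new_class
    path.write_text(json.dumps(config, indent=2))
    return config["tokenizer_class"]
```

As the thread shows, this workaround only trades one error for another; the real fix is upgrading Transformers (see the reply below in the thread).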

Same error. Tried the AWQ version from cyan.

Tesslate org

You need to update Transformers: transformers>=5.2.0 is required.
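In practice that upgrade looks something like the following; the model ID is a placeholder for whichever checkpoint you are serving, and you should restart the vLLM server after upgrading.

```shell
# Upgrade Transformers to a release that recognizes the qwen3_5 architecture
pip install --upgrade "transformers>=5.2.0"

# Restart the vLLM OpenAI-compatible server (replace <model-id> with your checkpoint)
vllm serve <model-id>
```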
