Instructions to use lmsys/vicuna-13b-delta-v1.1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use lmsys/vicuna-13b-delta-v1.1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="lmsys/vicuna-13b-delta-v1.1")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-delta-v1.1")
model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-delta-v1.1")
```
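Once the pipeline is loaded, generation is a single call. A minimal sketch, assuming the delta has already been merged into full Vicuna weights as described in the note further down (the prompt and sampling parameters are illustrative):

```python
# Generate a completion with the pipeline created above.
# NOTE: the raw delta weights cannot be used directly; merge them into
# full Vicuna weights first (see the apply_delta note below).
output = pipe(
    "USER: What is the capital of France?\nASSISTANT:",
    max_new_tokens=128,   # illustrative value
    do_sample=True,
    temperature=0.7,      # illustrative value
)
print(output[0]["generated_text"])
```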
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use lmsys/vicuna-13b-delta-v1.1 with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "lmsys/vicuna-13b-delta-v1.1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lmsys/vicuna-13b-delta-v1.1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
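Because the server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch, assuming the official openai client is installed (pip install openai); the api_key value is a dummy the client requires:

```python
# Query the vLLM server through its OpenAI-compatible API.
from openai import OpenAI

# The local server does not check credentials, but the client needs a key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="lmsys/vicuna-13b-delta-v1.1",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```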
- SGLang
How to use lmsys/vicuna-13b-delta-v1.1 with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "lmsys/vicuna-13b-delta-v1.1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lmsys/vicuna-13b-delta-v1.1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
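The same completions endpoint can be called from Python without any SGLang-specific client. A minimal sketch using the requests library (prompt and sampling parameters are illustrative):

```python
# Call the SGLang server's OpenAI-compatible completions endpoint.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "lmsys/vicuna-13b-delta-v1.1",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```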
Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "lmsys/vicuna-13b-delta-v1.1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lmsys/vicuna-13b-delta-v1.1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use lmsys/vicuna-13b-delta-v1.1 with Docker Model Runner:
```shell
docker model run hf.co/lmsys/vicuna-13b-delta-v1.1
```
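By default, docker model run starts an interactive chat session with the model. A one-shot prompt can also be passed as an argument (a sketch assuming the current Docker Model Runner CLI syntax; the prompt is illustrative):

```shell
# One-shot prompt instead of an interactive chat session
# (prompt is illustrative; syntax per Docker Model Runner's CLI docs):
docker model run hf.co/lmsys/vicuna-13b-delta-v1.1 "Once upon a time,"
```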
NOTE: New version available
Please check out a newer version of the weights here.
NOTE: This "delta model" cannot be used directly.
Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights. See instructions.
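A minimal sketch of that merge step, assuming FastChat is installed and using placeholder paths (flag names follow FastChat's apply_delta tool; check the repository for the current syntax):

```shell
# Install FastChat, which ships the delta-merge tool:
pip install fschat

# Merge the delta into the original LLaMA-13B weights
# (both paths are placeholders for local checkpoints):
python3 -m fastchat.model.apply_delta \
  --base-model-path /path/to/llama-13b \
  --target-model-path /path/to/output/vicuna-13b \
  --delta-path lmsys/vicuna-13b-delta-v1.1
```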
Vicuna Model Card
Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- Developed by: LMSYS
- Model type: An auto-regressive language model based on the transformer architecture.
- License: Non-commercial license
- Finetuned from model: LLaMA.
Model Sources
- Repository: https://github.com/lm-sys/FastChat
- Blog: https://lmsys.org/blog/2023-03-30-vicuna/
- Paper: https://arxiv.org/abs/2306.05685
- Demo: https://chat.lmsys.org/
Uses
The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Hugging Face API): https://github.com/lm-sys/FastChat/tree/main#api
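For a quick start, the FastChat command line interface linked above can serve the merged weights in the terminal. A minimal sketch (the path is a placeholder for the merged checkpoint):

```shell
# Chat with the merged Vicuna weights from the terminal
# (path is a placeholder; raw delta weights will not work here):
python3 -m fastchat.serve.cli --model-path /path/to/output/vicuna-13b
```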
Training Details
Vicuna v1.1 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 70K conversations collected from ShareGPT.com. See the "Training Details of Vicuna Models" section in the appendix of the paper linked above (arXiv:2306.05685) for more details.
Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See the paper linked above and the Chatbot Arena leaderboard for more details.
Difference between different versions of Vicuna
See vicuna_weights_version.md in the FastChat repository.