Instructions for using RedHatTraining/AI296-m3diterraneo-hotels with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use RedHatTraining/AI296-m3diterraneo-hotels with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RedHatTraining/AI296-m3diterraneo-hotels",
    filename="samples_89973_Q4_K_M.gguf",
)

# Pass a list of chat messages; the prompt below is a placeholder.
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Replace this with your prompt."}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use RedHatTraining/AI296-m3diterraneo-hotels with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M
```
Use Docker
```shell
docker model run hf.co/RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M
```
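Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API (by default on port 8080). The sketch below, which uses only the Python standard library, shows one way to query that endpoint; the `ask` helper and the example prompt are illustrative assumptions, not part of this model card.

```python
import json
import urllib.request


def build_payload(prompt: str) -> dict:
    """Build an OpenAI-style chat request body for the local server."""
    return {
        "model": "RedHatTraining/AI296-m3diterraneo-hotels",
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt: str, url: str = "http://localhost:8080/v1/chat/completions") -> str:
    """Send the prompt to a running llama-server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]


# Requires a server started with one of the commands above, e.g.:
#   llama-server -hf RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M
# print(ask("Which services does the hotel group offer?"))
```

The same endpoint also works with any OpenAI-compatible client library pointed at the local base URL.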
- LM Studio
- Jan
- Ollama
How to use RedHatTraining/AI296-m3diterraneo-hotels with Ollama:
```shell
ollama run hf.co/RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M
```
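Besides the interactive terminal, Ollama serves a local REST API (by default on port 11434) once the model is available. This is a minimal sketch using only the Python standard library; the helper names and the example prompt are assumptions made for illustration.

```python
import json
import urllib.request


def build_request(prompt: str) -> dict:
    """Non-streaming chat request body for Ollama's /api/chat endpoint."""
    return {
        "model": "hf.co/RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def chat(prompt: str, url: str = "http://localhost:11434/api/chat") -> str:
    """Send the prompt to a local Ollama server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]


# Requires Ollama running with the model pulled, e.g.:
#   ollama run hf.co/RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M
# print(chat("Describe the hotel group's room categories."))
```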
- Unsloth Studio
How to use RedHatTraining/AI296-m3diterraneo-hotels with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser.
# Search for RedHatTraining/AI296-m3diterraneo-hotels to start chatting.
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser.
# Search for RedHatTraining/AI296-m3diterraneo-hotels to start chatting.
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required.
# Open https://huggingface.co/spaces/unsloth/studio in your browser.
# Search for RedHatTraining/AI296-m3diterraneo-hotels to start chatting.
```
- Docker Model Runner
How to use RedHatTraining/AI296-m3diterraneo-hotels with Docker Model Runner:
```shell
docker model run hf.co/RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M
```
- Lemonade
How to use RedHatTraining/AI296-m3diterraneo-hotels with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull RedHatTraining/AI296-m3diterraneo-hotels:Q4_K_M
```
Run and chat with the model
```shell
lemonade run user.AI296-m3diterraneo-hotels-Q4_K_M
```
List all available models
```shell
lemonade list
```
RHEL AI Model Training Scenario: A Fictional Hotel Group
A fictional example for the Red Hat Training lessons *Training Large Language Models with Red Hat Enterprise Linux AI* (AI0005L) and *Deploying Models with Red Hat Enterprise Linux AI* (AI0006L). These lessons present students with a scenario where a hotel group must train its own LLM, aligned with its business needs, by using RHEL AI.
The taxonomy with skills and knowledge is at https://github.com/RedHatTraining/AI296-taxonomy-hotels.
The generated synthetic dataset is available in the `results` directory at https://github.com/RedHatTraining/AI296-apps/tree/main/scenarios/hotels. This directory contains the intermediate outputs of the SDG phase, provided to save the student time: with the provided taxonomy, the SDG phase takes approximately 2 hours on a `g6e.12xlarge` AWS instance. The trained model is stored in this Hugging Face repository. Additionally, a quantized version is also provided: `samples_89973_Q4_K_M.gguf`.
NOTE: This model has been trained with a reduced version of the RHEL AI default training process. In this reduced version, the model was trained for only four hours, instead of four to five days, and the number of training samples was reduced from ~330,000 to 10,000.
As a result, the model, although useful for learning purposes, is far from optimally tuned.