# Core dependencies for Fire Evacuation RAG System
numpy
torch
transformers
sentence-transformers
gradio

# FAISS for vector similarity search
# Use faiss-cpu for CPU-only systems, or faiss-gpu for GPU systems
faiss-cpu
# faiss-gpu>=1.7.4  # Uncomment if you have a CUDA-capable GPU

# Optional: For faster model loading and inference
unsloth  # Faster model loading with Unsloth

# Optional: For model quantization (4-bit/8-bit)
bitsandbytes  # Required for 4-bit/8-bit quantization

# Optional: For optimized attention (FlashAttention2)
# flash-attn>=2.0.0  # Uncomment if you want FlashAttention2 support
# Note: flash-attn requires CUDA and may need to be installed separately
# Install with: pip install flash-attn --no-build-isolation