runtime error
Exit code: 1. Reason:

model-00002-of-00002.safetensors: 100%|██████████| 3.96G/3.96G [00:09<00:00, 405MB/s]

Traceback (most recent call last):
  File "/app/demo/gradio_demo_with_sam3.py", line 326, in <module>
    tokenizer, model, image_processors = load_pretrained_model(
  File "/app/vlm_fo1/model/builder.py", line 40, in load_pretrained_model
    model, loading_info = OmChatQwen25VLForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 272, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4395, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2112, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2262, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
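The checkpoint download itself completes; the crash happens afterward, inside transformers' _check_and_enable_flash_attn_2. FlashAttention 2 was requested (via the attn_implementation argument or a flag baked into the checkpoint's config), but the container has no CUDA device, so the backend cannot be enabled. A common workaround is to pick the attention backend at load time based on the hardware that is actually available. A minimal sketch, assuming the model class forwards the standard attn_implementation keyword to from_pretrained; the import path and checkpoint id below are placeholders, not confirmed by this log:

import torch
from vlm_fo1.model import OmChatQwen25VLForCausalLM  # import path is an assumption

# Use FlashAttention 2 only when a CUDA device is present; otherwise fall
# back to PyTorch's SDPA backend, which also runs on CPU.
attn_impl = "flash_attention_2" if torch.cuda.is_available() else "sdpa"

model = OmChatQwen25VLForCausalLM.from_pretrained(
    "path/to/checkpoint",           # placeholder for the Space's actual model id
    attn_implementation=attn_impl,  # prevents _check_and_enable_flash_attn_2 from raising on CPU
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
)

Alternatively, running the Space on GPU hardware makes the original flash_attention_2 setting valid, provided the flash-attn package is installed in the image.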