Very slow inference on AMD Strix Halo
#12
by mikkoph - opened
Using the default workflow, imported by dragging the image from the model card into my ComfyUI installation, I see two problems:
- Inference is incredibly slow: ~80 seconds for the test image. I regularly run ZiT, Qwen, Flux 2 Klein 9b, and Illustrious on this machine, and at this image size generation takes 8-25 seconds depending on the model.
- The generated image differs from the one in the model card, even though I double-checked that the seed and all other settings match. The style is different, and the text reads "ANIIMA".
I am on an AMD Strix Halo with 128 GB RAM, running Arch Linux and a standard ComfyUI installation, using PyTorch 2.10 + ROCm 7.11 nightly.
I tried the fp16 patch just in case, but as expected it didn't improve the situation.
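In case it helps anyone debugging similar slowdowns: a quick sanity check is to confirm that your PyTorch build actually sees the GPU (on ROCm builds, `torch.cuda.is_available()` reports the HIP device) and to time a raw matmul outside ComfyUI. This is just a rough probe I'd run, not part of the workflow; the matrix size and loop count are arbitrary:

```python
import time
import torch

# On ROCm/HIP builds of PyTorch the GPU still shows up under the "cuda" API.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("device:", device, "| torch:", torch.__version__)
if device == "cuda":
    print("gpu:", torch.cuda.get_device_name(0))

# Rough throughput probe: a few large matmuls (fp16 on GPU, fp32 on CPU).
dtype = torch.float16 if device == "cuda" else torch.float32
x = torch.randn(4096, 4096, dtype=dtype, device=device)
t0 = time.perf_counter()
for _ in range(10):
    x = (x @ x).clamp(-1, 1)  # clamp keeps values from overflowing in fp16
if device == "cuda":
    torch.cuda.synchronize()  # wait for queued GPU work before reading the clock
print(f"10 matmuls: {time.perf_counter() - t0:.3f}s")
```

If this is orders of magnitude slower than expected for the hardware, the problem is likely the PyTorch/ROCm build rather than the model or workflow.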
I'm on ROCm 6.4.2, well, the Ubuntu 24.04 rollout. It works with my 6950 XT - I use a venv and ComfyUI, you know the drill.