ComfyUI crashes on generate

I installed ComfyUI. I’m pretty new to this and not sure where to look. I’ve tried both the portable and desktop versions with AMD support. Whenever I click generate with any model, the backend just halts. I’ve included the console log and system specs below. From looking around, it seems to point to not enough VRAM, but I’ve also seen people with similar or worse graphics cards manage to get ComfyUI running. I’m not sure what I need to do to make that happen.

System Specs:
AMD Ryzen 5 7600X

32 GB RAM

Radeon RX 6700 XT (12 GB)

900 GB SSD

[WARNING] failed to run amdgpu-arch: binary not found.
Checkpoint files will always be loaded safely.
Total VRAM 12272 MB, total RAM 31821 MB
pytorch version: 2.9.0+rocmsdk20251116
AMD arch: gfx1031
ROCm version: (7, 1)
Set vram state to: LOW_VRAM
Device: cuda:0 AMD Radeon RX 6700 XT : native
Using async weight offloading with 2 streams
Enabled pinned memory 14319.0
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend cuda: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Found comfy_kitchen backend triton: {'available': False, 'disabled': True, 'unavailable_reason': "ImportError: No module named 'triton'", 'capabilities': []}
Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.9.2
ComfyUI frontend version: 1.36.14
[Prompt Server] web root: C:\Users\macke\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfyui_frontend_package\static

Import times for custom nodes:
0.0 seconds: C:\Users\macke\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py

Context impl SQLiteImpl.
Will assume non-transactional DDL.
Assets scan(roots=['models']) completed in 0.012s (created=0, skipped_existing=21, total_seen=21)
Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using split attention in VAE
Using split attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
Requested to load SD1ClipModel
loaded completely; 235.84 MB loaded, full load: True
CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load BaseModel
loaded completely; 10700.86 MB usable, 1639.41 MB loaded, full load: True
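
As a sanity check on the VRAM theory: the "1639.41 MB loaded" line is consistent with the raw fp16 weight footprint of an SD1.x UNet. A quick sketch (the parameter count here is an assumed figure for the SD1.5 UNet, not something printed in the log):

```python
# Back-of-the-envelope VRAM math for the log above: "1639.41 MB loaded"
# matches ~859.5M parameters (assumed SD1.5 UNet count) at fp16, 2 bytes each.

def weight_footprint_mib(num_params: int, bytes_per_param: int = 2) -> float:
    """Raw weight size in MiB; excludes activations, VAE, and text encoder."""
    return num_params * bytes_per_param / (1024 ** 2)

SD15_UNET_PARAMS = 859_520_964  # assumed parameter count, for illustration

print(f"{weight_footprint_mib(SD15_UNET_PARAMS):.2f} MiB")  # ≈ the log's 1639.41
```

The weights alone are well under the card's 12 GB and the loader reports 10700.86 MB usable, so raw model size by itself doesn't explain the halt; the ROCm stack on gfx1031 is a likelier suspect.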

AMD Ryzen AI MAX+ 395 w/ Radeon 8060S, RAM 48 GB, VRAM 48 GB:
So far, only 2 text-to-image workflows are working perfectly:
SD3.5 simple and Z-Image Turbo (New)
Many others (Flux, HiDream, Qwen) don’t try to use the GPU at all and get stuck forever trying to run on the CPU (according to Adrenalin monitoring). Setting the VRAM management mode from “auto” to “GPU-only” doesn’t help.
P.S. : Chroma text-to-image workflow just worked!
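
For reference, the “auto” / “GPU-only” modes map onto ComfyUI’s launch flags (listed by `main.py --help`). A sketch of the relevant ones, assuming the portable Windows build from the log above:

```shell
# ComfyUI memory-management flags; paths assume the portable Windows build.
cd ComfyUI_windows_portable

# Keep the model on the GPU at all times (the "GPU-only" mode):
.\python_embeded\python.exe -s ComfyUI\main.py --gpu-only

# Or go the other way if VRAM is the limit: aggressively offload weights
# and use the attention variant the log itself suggests:
.\python_embeded\python.exe -s ComfyUI\main.py --lowvram --use-split-cross-attention
```

Passing the flag on the command line rules out the UI setting not being picked up.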

AMD card? You need the special ROCm driver; the normal Adrenalin driver will not work. That ROCm driver is running ComfyUI for me, minus a few nodes that strictly require CUDA. But my other software has trouble with it: Blender does not start, and a few games don’t start either. So I need the normal driver for those, which means switching drivers back and forth …

The current ROCm driver is located here: AMD Software: PyTorch on Windows Edition 7.1.1 Release Notes

I would wait a few more days. A special ROCm-enabled Adrenalin driver has been announced for next week, and I hope it fixes my issues.

It is already available and loaded automatically in the latest version of ComfyUI for Windows.
But it looks like it isn’t addressing all available VRAM (my guess: only half of my 48 GB).

That’s another issue for the original author, then. I’m talking about an official Adrenalin driver from AMD, which, as mentioned, is promised for next week 🙂