I installed ComfyUI, but I’m pretty new to this and not sure where to look. I’ve tried both the portable and desktop versions with AMD support. Whenever I click Generate on any model, the backend just halts. I’ve included the console log and system specs below. From what I’ve read, it seems to point to not enough VRAM, but I’ve also seen people with similar or worse graphics cards get ComfyUI running. I’m not sure what I need to do to make that happen.
System Specs:
AMD Ryzen 5 7600X
32 GB RAM
Radeon RX 6700 XT (12 GB)
900 GB SSD
[WARNING] failed to run amdgpu-arch: binary not found.
Checkpoint files will always be loaded safely.
Total VRAM 12272 MB, total RAM 31821 MB
pytorch version: 2.9.0+rocmsdk20251116
AMD arch: gfx1031
ROCm version: (7, 1)
Set vram state to: LOW_VRAM
Device: cuda:0 AMD Radeon RX 6700 XT : native
Using async weight offloading with 2 streams
Enabled pinned memory 14319.0
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend cuda: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Found comfy_kitchen backend triton: {'available': False, 'disabled': True, 'unavailable_reason': "ImportError: No module named 'triton'", 'capabilities': []}
Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.9.2
ComfyUI frontend version: 1.36.14
[Prompt Server] web root: C:\Users\macke\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfyui_frontend_package\static
Import times for custom nodes:
0.0 seconds: C:\Users\macke\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
Context impl SQLiteImpl.
Will assume non-transactional DDL.
Assets scan(roots=['models']) completed in 0.012s (created=0, skipped_existing=21, total_seen=21)
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using split attention in VAE
Using split attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
Requested to load SD1ClipModel
loaded completely; 235.84 MB loaded, full load: True
CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load BaseModel
loaded completely; 10700.86 MB usable, 1639.41 MB loaded, full load: True
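In case it helps: the log above suggests trying --use-split-cross-attention for memory issues, so this is how I was planning to launch the portable build with that flag. The path is just where I extracted it on my machine, and I haven't confirmed this is the right invocation for the portable package:

```shell
cd C:\Users\macke\Desktop\ComfyUI_windows_portable
.\python_embeded\python.exe -s ComfyUI\main.py --use-split-cross-attention
```

Is that the correct way to pass it, or does it go somewhere in the .bat file instead?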