Hi,
Since updating my desktop version from 0.3.67 to 0.3.71, my workflow crashes when I run batches. I used to set the batch count to 16 and come back later, but now it crashes with the error below.
It can crash at any point in the workflow; this never happened before the update.
Can you tell anything from my error? Or is it possible to revert to 0.3.67?
Requested to load WAN21
loaded partially; 8793.59 MB usable, 8788.37 MB loaded, 548.82 MB offloaded, lowvram patches: 0
Attempting to release mmap (39)
100%|██████████| 3/3 [01:04<00:00, 21.41s/it]
Requested to load WanVAE
loaded completely; 2538.90 MB usable, 242.03 MB loaded, full load: True
!!! Exception during processing !!! CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Traceback (most recent call last):
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy_api\latest\_io.py", line 1275, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy_extras\nodes_upscale_model.py", line 92, in execute
upscale_model.to("cpu")
File "C:\Users\xxx\Documents\ComfyUI\.venv\Lib\site-packages\spandrel\__helpers\model_descriptor.py", line 331, in to
self.model.to(device=device, dtype=dtype)
File "C:\Users\xxx\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxx\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 928, in _apply
module._apply(fn)
File "C:\Users\xxx\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 928, in _apply
module._apply(fn)
File "C:\Users\xxx\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 928, in _apply
module._apply(fn)
[Previous line repeated 4 more times]
File "C:\Users\xxx\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\Users\xxx\Documents\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
torch.AcceleratorError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
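The error text itself suggests rerunning with CUDA_LAUNCH_BLOCKING=1 so the traceback points at the CUDA call that actually failed. If it helps anyone reproduce this, here is a sketch of how I'd set it (POSIX shell shown; on Windows cmd it would be `set CUDA_LAUNCH_BLOCKING=1` in the same terminal, assuming the desktop app can be started from a terminal so it inherits the variable):

```shell
# CUDA_LAUNCH_BLOCKING=1 makes CUDA kernel launches synchronous, so
# errors surface at the call that caused them instead of at some
# later API call. It must be in the environment before the app starts.
export CUDA_LAUNCH_BLOCKING=1
echo "CUDA_LAUNCH_BLOCKING=$CUDA_LAUNCH_BLOCKING"
```

The launch command itself would be whatever normally starts ComfyUI from that terminal; the variable just needs to be inherited by the process.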
