The latest ComfyUI cannot load SD1.5. How do I solve this?

My computer runs Windows 11 with an RTX 4090 graphics card, and I installed the ComfyUI desktop version.

ComfyUI
├── models
│   └── Stable-diffusion                  ← the directory where I store the Stable Diffusion model files
│       ├── v1-5-pruned.safetensors       ← my SD1.5 main model file
│       ├── v1-inference.yaml             ← my model configuration file
│       └── other model files (.safetensors)

That is my SD1.5 storage path, but the SD1.5 main model does not appear in the Load Checkpoint node.

My SD3.5 main model, configuration files, and the other required files are stored in the following paths:
models/
├── checkpoints/
│   └── sd3.5_large.safetensors (main model file)
├── scheduler/
│   └── scheduler_scheduler_config.json (scheduler configuration file)
├── text_encoder/
│   ├── text_encoder_config.json (text_encoder configuration file)
│   ├── model.fp16.safetensors
│   └── model.safetensors
├── text_encoder_2/
│   ├── text_encoder_2_config.json (text_encoder_2 configuration file)
│   ├── model.fp16.safetensors
│   └── model.safetensors
├── text_encoder_3/
│   ├── text_encoder_3_config.json
│   ├── text_encoder_3_model.safetensors.index.fp16
│   ├── text_encoder_3_model.safetensors.index.json
│   ├── model.fp16-00001-of-00002.safetensors
│   ├── model.fp16-00002-of-00002.safetensors
│   ├── model-00001-of-00002.safetensors
│   └── model-00002-of-00002.safetensors
├── text_encoders/
│   ├── clip_g.safetensors
│   ├── clip_l.safetensors
│   ├── t5xxl_fp8_e4m3fn.safetensors
│   ├── t5xxl_fp16.safetensors
│   └── text_encoders_README.md
├── tokenizer/
│   ├── tokenizer_special_tokens_map.json
│   ├── tokenizer_tokenizer_config.json
│   ├── tokenizer_vocab.json
│   └── tokenizer_merges.txt
├── tokenizer_2/
│   ├── tokenizer_2_special_tokens_map.json
│   ├── tokenizer_2_tokenizer_config.json
│   ├── tokenizer_2_vocab.json
│   └── tokenizer_2_merges.txt
├── tokenizer_3/
│   ├── tokenizer_3_special_tokens_map.json
│   ├── tokenizer_3_tokenizer.json
│   ├── tokenizer_3_tokenizer_config.json
│   └── spiece.model
├── transformer/
│   ├── transformer_config.json
│   ├── transformer_diffusion_pytorch_model.safetensors.index.json
│   ├── diffusion_pytorch_model-00001-of-00002.safetensors
│   └── diffusion_pytorch_model-00002-of-00002.safetensors
├── vae/
│   ├── vae_config.json (vae configuration file)
│   └── diffusion_pytorch_model.safetensors
└── model_index.json (main configuration file)
Should I put the SD1.5 main model in D:\ComfyUI\models\checkpoints? But isn't that path generally used to store checkpoint files (.ckpt) rather than .safetensors files? Besides, isn't it best to use the .safetensors format for models under ComfyUI, since .ckpt is outdated and unsafe?

In addition, when I installed the ComfyUI desktop version, there was no Stable-diffusion folder among the built-in directories, so I created one manually and put v1-5-pruned.safetensors and v1-inference.yaml in it. But when I load a checkpoint, only 3.5 shows up, not 1.5, which confuses me.

Isn't the first step of a ComfyUI workflow to load a checkpoint? How can I work if I can only see 3.5 but not 1.5? Is the first step of my workflow wrong? Is my method of selecting the SD1.5 model wrong? Isn't the first step to drag in the Load Checkpoint node?
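Your workflow step is fine; the issue is where ComfyUI looks for models. The Load Checkpoint dropdown is populated only from folders registered for the checkpoints category, so a manually created models/Stable-diffusion folder is simply never scanned. Here is a minimal Python sketch of that scanning behavior (a simplification for illustration, not ComfyUI's actual code; the extension list is an assumed subset):

```python
from pathlib import Path

# Illustrative subset of model extensions ComfyUI accepts.
SUPPORTED_EXTS = {".safetensors", ".ckpt", ".pt"}

def list_checkpoints(registered_dirs):
    """Return model filenames found in the registered checkpoint folders.

    Folders that are not registered (e.g. a hand-made
    models/Stable-diffusion) are never scanned, so their files
    never show up in the Load Checkpoint dropdown. Non-model
    files such as .yaml configs are filtered out as well.
    """
    names = []
    for d in registered_dirs:
        root = Path(d)
        if not root.is_dir():
            continue  # missing folders are silently skipped
        for f in sorted(root.iterdir()):
            if f.suffix.lower() in SUPPORTED_EXTS:
                names.append(f.name)
    return names
```

This is why moving v1-5-pruned.safetensors into models/checkpoints makes it appear immediately: the file only becomes visible once it sits inside a registered folder.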

Just now, I tried putting the SD1.5 model into the checkpoints directory, which originally held only the SD3.5 main model.
Now there are three files in this directory (the other SD3.5 files live in other directories, because SD3.5 uses a distributed layout):
D:\ComfyUI\models\checkpoints
├── sd3.5_large.safetensors
├── v1-5-pruned.safetensors
└── v1-inference.yaml
Although this approach does not follow the conventional organizational structure, after placing the files this way both 1.5 and 3.5 appear in the Load Checkpoint node. But doesn't this create hidden problems down the road? I hope this gives you some basis for analysis. Can you tell me what I should do next?
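If you would rather keep a separate Stable-diffusion folder instead of mixing everything into checkpoints, ComfyUI can register extra search paths via an extra_model_paths.yaml file (the repository ships an extra_model_paths.yaml.example you can copy; the file's location may differ for the desktop build). A sketch, assuming your files stay under D:\ComfyUI\models\Stable-diffusion and using a hypothetical entry name my_sd15:

```
# extra_model_paths.yaml
my_sd15:
    base_path: D:\ComfyUI\models
    checkpoints: Stable-diffusion
```

With this entry, the Load Checkpoint node would also scan D:\ComfyUI\models\Stable-diffusion, so you would not need to move the SD1.5 files at all.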

Just put the v1.5 model into models/checkpoints. It should load.
