ComfyUI now has optimized support for Genmo’s latest video generation model, Mochi! It now runs natively on a consumer GPU! To run the Mochi model right away with a standard workflow, try the following steps.
- Update to the latest version of ComfyUI
- Download the Mochi weights (the diffusion models) into the `models/diffusion_model` folder (a download sketch follows this list)
- Make sure a text encoder is in your `models/clip` folder
- Download the VAE to `ComfyUI/models/vae`
- Grab the workflow and dive into creation!
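If you’d rather script the downloads than place files by hand, here’s a minimal Python sketch using `huggingface_hub`. The repo IDs and filenames below are placeholders to fill in from the links in the blog post; only the destination folders come from the steps above, and the `ComfyUI` path assumes the script runs next to your ComfyUI install.

```python
# Minimal sketch: place Mochi files into the ComfyUI folders listed above.
# The repo_id / filename values are placeholders -- substitute the actual
# HuggingFace repos and filenames linked from the blog post.
from pathlib import Path

from huggingface_hub import hf_hub_download

COMFYUI_DIR = Path("ComfyUI")  # adjust to wherever your ComfyUI install lives

# Destination folders from the steps above.
TARGETS = {
    "diffusion_model": COMFYUI_DIR / "models" / "diffusion_model",
    "clip": COMFYUI_DIR / "models" / "clip",
    "vae": COMFYUI_DIR / "models" / "vae",
}

# Placeholder (repo_id, filename) pairs -- fill these in from the blog post.
DOWNLOADS = {
    "diffusion_model": ("<mochi-weights-repo>", "<mochi_diffusion_model>.safetensors"),
    "clip": ("<text-encoder-repo>", "<text_encoder>.safetensors"),
    "vae": ("<mochi-vae-repo>", "<mochi_vae>.safetensors"),
}

for kind, (repo_id, filename) in DOWNLOADS.items():
    dest = TARGETS[kind]
    dest.mkdir(parents=True, exist_ok=True)
    # Fetch the file from the Hub and save it under the target folder.
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=dest)
    print(f"{kind}: saved to {path}")
```

Once the files are in place, you may need to restart ComfyUI (or refresh the page) so the loader nodes pick up the new models.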
We’ve also updated our Example Workflows page with text encoder and VAE setups for Mochi.

Full blog post: https://blog.comfy.org/mochi/

Huge shoutout to @Kijai, who did the initial implementation in his node. What a legend!!