
LCM: Lightning-Fast AI Generations with Latent Consistency Models and ComfyUI

What are Latent Consistency Models?

Latent Consistency Models (LCMs) represent a significant advancement in the field of generative models, building directly on Latent Diffusion Models (LDMs). Developed by researchers from the Institute for Interdisciplinary Information Sciences at Tsinghua University, LCMs are designed to address the slow iterative sampling process of LDMs, enabling rapid inference with minimal steps on any pre-trained LDM, such as Stable Diffusion.

UPDATE: Stability AI recently released SDXL Turbo, which allows you to generate images almost in real time, generally with higher quality than LCMs. It is definitely worth a try!

How Do LCMs Work?

LCMs function by viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE) and predicting its solution directly in the latent space. This approach allows for super-fast inference with as few as 2 to 4 steps. A high-quality 768×768 LCM, distilled from Stable Diffusion, requires just 32 A100 GPU hours of training (e.g., 8 GPUs for only 4 hours) to reach this few-step inference.
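For intuition, the key object is a consistency function that maps any point on the PF-ODE trajectory straight back to the trajectory's origin. A minimal sketch, using notation from the consistency-model literature rather than anything defined in this article:

```latex
% Sketch: f maps any point z_t on a PF-ODE trajectory back to the
% trajectory's origin, the clean latent z_0. Training enforces
% self-consistency: any two points on the same trajectory must map
% to the same output, which is what makes 1-4 step sampling possible.
f_\theta(z_t, t) = f_\theta(z_{t'}, t') \approx z_0
\qquad \text{for all } t,\, t' \in [\epsilon, T]
```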

Generating Images with LCMs

The strength of LCMs, compared to Stable Diffusion, is the speed of image generation. These models can produce high-quality 768×768 images in 2 to 4 steps, or even a single step, significantly speeding up the text-to-image process. Quality, unfortunately, is also lower, at least at the current stage. But the ability to generate images this fast opens the door to many other applications, including real-time solutions and text-to-video.
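If you want to try this outside ComfyUI, a minimal sketch with the Hugging Face diffusers library might look like the following (assuming a recent diffusers version with built-in LCM support and a CUDA GPU; the prompt is just an example):

```python
import torch
from diffusers import DiffusionPipeline

# Load a distilled LCM checkpoint (the Dreamshaper-based model
# discussed below, published on the Hugging Face Hub).
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe.to("cuda")

# 4 steps instead of the usual 20-50. Guidance was folded into the
# distillation, so this value conditions the model rather than doing
# classic classifier-free guidance.
image = pipe(
    prompt="a cinematic photo of a lighthouse at sunset",
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]
image.save("lcm_4step.png")
```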

Latent Consistency Fine-tuning (LCF)

One more remark: fine-tuning is also covered in this context. Latent Consistency Fine-tuning (LCF) is a method designed specifically for pre-trained LCMs, allowing them to be adapted to customized datasets while keeping efficient few-step inference, and without requiring a teacher diffusion model. This makes it practical to customize an LCM directly, rather than fine-tuning a full diffusion model first.

Run LCMs with ComfyUI

LORA or Custom Nodes

In order to run an LCM model in ComfyUI, a custom node named LCM Sampler comes in very handy. You can find it on comfyui-flowty-lcm. This node has actually been deprecated (in the ComfyUI-Manager it is listed as Latent Consistency Model for ComfyUI), because ComfyUI has updated its samplers to support lcm, but I think this dedicated custom node is still a very clean way to experiment with it. LCMs are a different class of models compared to Stable Diffusion, and the first checkpoint of this type is LCM_Dreamshaper_v7.

If you prefer to avoid custom nodes, there are also LoRAs that transform any SD model into an LCM model (in practice, at least), so that all the existing models can be adapted for faster generations. Very handy!
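Outside ComfyUI, the same trick can be sketched with diffusers (a sketch, assuming a diffusers version with LCMScheduler and LoRA loading; the model and LoRA repo names are the public Hugging Face ones):

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

# Any SDXL checkpoint works; here the base model is used as an example.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Swap in the LCM scheduler and load the LCM LoRA weights on top.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# Few steps and a very low guidance scale, as the LCM LoRA expects.
image = pipe(
    "close-up photography of an old man standing in the rain",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("sdxl_lcm_lora.png")
```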

To install ComfyUI, you can follow the steps in this article. This UI has become very popular, especially since Stable Diffusion XL came out, even if it might look a bit scary at first with all its nodes and lines.

How To Use LCM Models

Using LCM models is really easy, and there are several options to take advantage of this technique. The first is to download an LCM model, like the LCM_Dreamshaper_v7 mentioned before, and then use a workflow with just two nodes, one of which is the custom LCM Sampler mentioned above.

[Image: LCM Sampler workflow in ComfyUI with the dreamshaper_v7 checkpoint]

The other possibility, if you prefer to avoid custom nodes, is to use a workflow with a LoRA: download an LCM LoRA that gives any model the speed of an LCM. This means that you can use any SD or SDXL model with just a few steps to generate images faster.

[Image: LCM LoRA ComfyUI workflow for SDXL]

You can find an example on comfyanonymous, which explains how to download the LoRA model, rename it to lcm_lora_sdxl.safetensors, and put it in your ComfyUI/models/loras folder. Then load the workflow and everything should be ready: be sure to use a low cfg and, as a scheduler, either sgm_uniform or simple. As you can see, I’m using the base SDXL model with only 5 steps and a low cfg.
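If you prefer to script that download-and-rename step, here is a small sketch using huggingface_hub (assuming the LoRA lives in the public latent-consistency/lcm-lora-sdxl repo under pytorch_lora_weights.safetensors, and that your ComfyUI folder sits in the current directory):

```python
import shutil
from huggingface_hub import hf_hub_download

# Fetch the SDXL LCM LoRA from the Hugging Face Hub (cached locally).
lora_path = hf_hub_download(
    repo_id="latent-consistency/lcm-lora-sdxl",
    filename="pytorch_lora_weights.safetensors",
)

# Rename it as suggested and drop it into ComfyUI's LoRA folder.
shutil.copy(lora_path, "ComfyUI/models/loras/lcm_lora_sdxl.safetensors")
```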

Let’s try one last workflow, combining the LCM LoRA with custom nodes using ComfyUI-sampler-lcm-alternative, which includes three nodes that give additional control and features for LCM samplers.

[Image: LCM LoRA workflow with the alternative LCM sampler nodes]

You can find the node and the example workflow on the corresponding GitHub repository. Just use the Manager to install it, and make sure to select an LCM LoRA. In the LCMScheduler node, you can control the number of steps, going even as low as 2-3 steps, albeit with lower quality.
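As a side note, once a workflow like this is saved in API format, you can queue it programmatically. A minimal sketch against ComfyUI's local HTTP endpoint (the JSON filename is hypothetical, and ComfyUI must already be running on its default port 8188):

```python
import json
import urllib.request

# Load a workflow previously exported via "Save (API Format)" in ComfyUI.
with open("lcm_lora_workflow_api.json") as f:
    workflow = json.load(f)

# Queue it on the local ComfyUI server; the response contains a prompt_id.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```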

Fooocus

LCM models are also available in Fooocus! After installing it, you do not need to set anything up: just check the Advanced box and select Extreme Speed. The first time, it will download an LCM model, which lets you generate good images at amazing speed.

Conclusion

The integration of Latent Consistency Models (LCMs) with ComfyUI marks a groundbreaking advancement in generative AI, offering incredibly fast image synthesis capabilities. While these models may not yet match the highest-quality outputs of traditional Stable Diffusion or the more powerful Stable Diffusion XL models (which demand a robust GPU and more iterations), the speed and efficiency of LCMs cannot be overstated. They are particularly promising for AI-powered video generation, especially when combined with tools like AnimateDiff, and for real-time applications. As such, LCMs represent a pivotal development for users seeking rapid image and video generation with reduced computational requirements. This blend of speed and efficiency opens up exciting new avenues for personalized content creation in the realm of generative AI.
