Best GPU For Stable Diffusion to Generate AI Images on Your PC

If you’re eager to run Stable Diffusion models on your local PC to generate AI images and are wondering what the best GPU for Stable Diffusion is, you’ve landed on the right page.

I’m here to simplify the process for you by focusing on one key component: the graphics card, specifically Nvidia GPUs. In this article, we’ll break down the minimum and optimal GPU specifications you should consider, with a special mention of the powerful RTX 4090.

Minimum Specs: A Solid Starting Point

Let’s kick things off with the fundamental specifications that will ensure Stable Diffusion runs smoothly on your PC:

  • GPU: Look for an Nvidia 20-, 30-, or 40-series GPU. Even an older 2060 Super with 8GB of VRAM will suffice, but if your budget allows, aim for 12GB or more. VRAM is your GPU’s memory, and it’s essential for generating images efficiently; note that Stable Diffusion XL is even more demanding. To generate 1920×1080px images natively, without upscaling, you’ll want about 12GB of VRAM. For images up to 700×700px, you can go lower with 8GB. Below that, it may be difficult to generate anything decent.
  • RAM: You’ll want a minimum of 8GB of RAM; 16GB is better. System RAM can be used as overflow when your VRAM is full, but that slows everything down.
  • Storage: Opt for an SSD with at least 512GB of storage. Custom models are huge files, and you might run out of space quickly as you explore different techniques and styles.
  • CPU: An Intel Core i5 or AMD Ryzen 5 series CPU will suffice for getting started. The CPU has little impact on image generation, though a good one is still worth having if you want to use Latent Consistency Models (LCMs).
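As a rough rule of thumb, the VRAM tiers above can be captured in a small helper. This is only an illustrative sketch based on the thresholds mentioned in this article, not a precise requirement calculator:

```python
def resolution_tier(vram_gb: float) -> str:
    """Map a GPU's VRAM to the rough native-resolution tier discussed above.

    These cutoffs (8GB -> ~700x700, 12GB -> 1920x1080) are this article's
    guidelines, not hard limits; optimizations such as half precision or
    attention slicing can stretch them.
    """
    if vram_gb >= 12:
        return "1920x1080 native, no upscaling needed"
    if vram_gb >= 8:
        return "up to ~700x700"
    return "below ~700x700; decent results may be difficult"


print(resolution_tier(8))   # e.g. a 2060 Super
print(resolution_tier(24))  # e.g. an RTX 4090
```

Keep in mind that the web UI and sampler you use also affect memory consumption, so treat these tiers as starting points.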

The RTX 4090: A True Powerhouse

For those ready to make their image generation even faster, there’s the Nvidia RTX 4090. Probably the best GPU for Stable Diffusion, it is based on the Ada Lovelace architecture and is Nvidia’s latest flagship. It boasts an astounding 16,384 CUDA cores and 24GB of GDDR6X VRAM, a substantial improvement over its predecessor, the RTX 3090. With it, you can comfortably run Stable Diffusion and Stable Diffusion XL, including high-res upscaling and model training.

Some Considerations

  • Resolution Matters: Keep in mind the resolution of the content you intend to work with. For native 1920×1080 without upscaling, 12GB of VRAM is recommended. For smaller renders up to 700×700 with minor upscaling and denoising, 8GB is usually sufficient.
  • Storage Speed: In addition to a powerful GPU, ensure your computer has a fast SSD that can save and retrieve images quickly. Models take a lot of space, so be prepared.
  • Budget Constraints: While the RTX 4090 is an incredible GPU, it may not be within everyone’s budget. Consider your budget and aim for the best GPU you can afford depending on your objectives.

Can I Run Stable Diffusion on Macs?

Yes! For those Mac enthusiasts eager to explore Stable Diffusion on their M1 or M2 chip-powered Macs, there are specific hardware and software considerations to keep in mind.

Hardware Requirements:

  • Chip: You’ll need a Mac equipped with an M1 or M2 chip; these benefit from unified memory. You can also run Stable Diffusion on an Intel-based Mac, but performance will be noticeably slower.
  • RAM: It’s definitely recommended to have at least 16GB of RAM for an optimal experience. While it’s technically possible to run Stable Diffusion with 8GB of RAM, you may experience significantly slower performance. Extra RAM ensures smoother operation.
  • Unified Memory Advantage: Macs with the M1 chip offer a unique advantage called unified memory. This means that, even though Stable Diffusion typically demands a GPU with 10GB+ of VRAM on PCs, an M1 or M2 Mac with 16GB of RAM can run it efficiently thanks to this unified memory architecture. This feature makes Macs with M1 and M2 chips particularly well-suited for local Large Language Models (LLMs) and AI tasks.

Keep in mind that Macs are currently less powerful than PCs with an Nvidia GPU for this workload. This means longer generation times and limits on the resolution of your generations. However, they should be able to comfortably generate images at 512×512 resolution.
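Most PyTorch-based Stable Diffusion tools pick the right compute backend automatically, but you can check it yourself. A minimal sketch, assuming PyTorch is installed (on Apple Silicon, the Metal backend appears as `mps`):

```python
import torch


def pick_device() -> str:
    """Choose the best available compute backend for image generation."""
    if torch.cuda.is_available():          # Nvidia GPU
        return "cuda"
    if torch.backends.mps.is_available():  # Apple Silicon (M1/M2) via Metal
        return "mps"
    return "cpu"                           # fallback: works everywhere, slowly


print(pick_device())
```

Tools like ComfyUI and InvokeAI perform an equivalent check at startup, which is why the same install works across Nvidia PCs and Apple Silicon Macs.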

Several UIs support macOS, such as InvokeAI, ComfyUI, and Fooocus (less tested).

Tom’s Hardware Benchmark

[Image: GPU benchmark for Stable Diffusion (Tom’s Hardware)]

The image above shows a graph comparing the performance of different graphics cards at generating images with Stable Diffusion. The x-axis shows the graphics card, and the y-axis shows how many images per minute the card can generate.

According to this benchmark, the RTX 4090 is the best-performing graphics card for Stable Diffusion, with a maximum of 29.96 images per minute. The RTX 3090 Ti and RTX 4070 Ti are also very fast, with maximum speeds of 17.23 and 12.86 images per minute, respectively.

The graphics cards from AMD are also well-represented in the graph, with the RX 7900 XTX, RX 7800 XT, and RX 6950 XT all capable of generating more than 10 images per minute. However, the RTX 2080 Ti, RTX 3080 Ti, RTX 3080, and RTX 3070 Ti are all faster than these AMD cards.
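To put those images-per-minute figures in more intuitive terms, you can convert them to seconds per image. A quick sketch using the numbers quoted above:

```python
# Throughput figures quoted above from the Tom's Hardware chart
# (images per minute).
benchmark_ipm = {
    "RTX 4090": 29.96,
    "RTX 3090 Ti": 17.23,
    "RTX 4070 Ti": 12.86,
}


def seconds_per_image(images_per_minute: float) -> float:
    """Convert an images-per-minute rate into seconds per single image."""
    return 60.0 / images_per_minute


for gpu, ipm in benchmark_ipm.items():
    print(f"{gpu}: {seconds_per_image(ipm):.1f} s per image")
# RTX 4090: 2.0 s per image
# RTX 3090 Ti: 3.5 s per image
# RTX 4070 Ti: 4.7 s per image
```

So the RTX 4090 needs roughly two seconds per image under this benchmark, while the RTX 4070 Ti takes more than twice as long.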

Overall, the RTX 4090 is the best graphics card for Stable Diffusion, but the RTX 3090 Ti, RTX 4070 Ti, and RTX 3080 Ti are also very good options. The AMD cards are also capable of generating images quickly, but they are not as fast as their Nvidia counterparts.

All of this to say: the Nvidia GPU is the core component of your computer when it comes to running Stable Diffusion models for generating AI images. Whether you’re aiming for minimum or optimal specifications, the key is to prioritize VRAM and select a GPU that suits your specific requirements. More VRAM means generating images faster and being able to create and upscale higher resolutions, up to 4K or even 8K.

In other words, if you are torn between a 3060 with 12GB of VRAM and a 3080 with 10GB, the former is generally better for AI image generation. Even though the 3080 has much faster memory bandwidth, which is better for gaming, its lower VRAM can actually make Stable Diffusion run slower, because fewer models can be loaded in memory at a given time. VRAM is also what enables higher-resolution generations, including upscaling.
