
Face Detailer to Fix Faces in Stable Diffusion for ComfyUI and SD WebUI

Generating AI images sometimes requires many attempts until an almost perfect image is achieved. Distorted faces, missing details, and unnatural expressions are common issues that can ruin an otherwise great picture.

face detailer realistic comparison
Left: original. Right: restored with Face Detailer.

In this article, I will introduce you to Face Detailer, a collection of tools and techniques designed to fix faces and facial features. These methods involve removing distortions, adjusting the position of the eyes and mouth, and even adding finer details. Face Detailer proves particularly useful when generating images with small faces or when tweaking elements such as colors and facial expressions.

There are several models available to perform face restoration, as well as many interfaces; here I will focus on two solutions using ComfyUI and Stable-Diffusion-WebUI. Follow the table of contents to skip directly to the UI that interests you.

ComfyUI

This guide assumes that you have a functioning setup for ComfyUI. If you’d like to learn more about it, please check this ComfyUI introductory guide. All the examples here use models based on Stable Diffusion 1.5.

Fix images

In order to run Face Detailer to fix a face in an image, you can download this basic workflow on OpenArt, then load it in ComfyUI and install any missing custom nodes.

simple workflow comfyui face restoration

The workflow is the same as the one described in the README of the custom node repository. You will need the ComfyUI-Impact-Pack custom node installed in ComfyUI to follow along; you can install it with ComfyUI-Manager directly when loading the workflow.

In addition, you will need some models to perform the face detection and restoration. First, I downloaded the YOLO model (YOLOv8s), but you can find several variants in the Ultralytics repository. Note that YOLOv8s is larger than YOLOv8n, so it should detect faces more reliably but will take longer to run.

Download the file and put it in ComfyUI\models\ultralytics\bbox\.
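If you prefer to script this step, here is a minimal, cross-platform Python sketch that creates the expected folder. The ComfyUI root path is an assumption; adjust it to your actual install location:

```python
from pathlib import Path

# Hypothetical ComfyUI install location -- adjust to your setup.
comfyui_root = Path("ComfyUI")

# Folder where the Impact Pack looks for YOLO bounding-box detector models.
bbox_dir = comfyui_root / "models" / "ultralytics" / "bbox"
bbox_dir.mkdir(parents=True, exist_ok=True)

print(f"Place face_yolov8s.pt (or another YOLOv8 variant) in: {bbox_dir}")
```

Using `pathlib` keeps the path separators correct on both Windows and Linux.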

The workflow is grouped in two main parts. The group named Face Detailer below has some nodes worth mentioning:

Use these two nodes to select the required models, including the YOLO model downloaded earlier. Refresh the UI if you cannot find them in the dropdowns.

FaceDetailer ComfyUI custom node

The Face Detailer node contains many parameters that you can tweak to experiment and improve your results. Note that you can add a specific prompt for the face in this node, in case you want to add particular details during the restoration.

Finally, simply add a prompt to generate a face and see the results.

It works well with different kinds of models. If the original face is too distorted, you can add a second Face Detailer step to pass the image through twice.
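Conceptually, what the Face Detailer node does is a detect-crop-refine-paste loop: find the face, crop it with some margin, upscale the crop so the sampler has enough resolution to work with, redraw it, then scale it back down and composite it into the original image. The NumPy sketch below illustrates that loop; it is a simplified illustration under stated assumptions, not the Impact Pack's actual code, and the refinement pass (where the real node runs a denoising/inpaint step) is stubbed out:

```python
import numpy as np

def detail_face(image, bbox, margin=32, scale=4):
    """Illustrative detect-crop-refine-paste loop for a single face.

    image: HxWx3 uint8 array; bbox: (x1, y1, x2, y2) from a face detector.
    """
    h, w = image.shape[:2]
    x1, y1, x2, y2 = bbox
    # 1. Expand the detected box by a margin, clamped to the image bounds
    #    (analogous to the node's crop factor).
    x1, y1 = max(0, x1 - margin), max(0, y1 - margin)
    x2, y2 = min(w, x2 + margin), min(h, y2 + margin)
    crop = image[y1:y2, x1:x2]
    # 2. Upscale the crop so the sampler works at a usable resolution
    #    (nearest-neighbour here; a real pipeline uses proper resampling).
    big = crop.repeat(scale, axis=0).repeat(scale, axis=1)
    # 3. Placeholder for the img2img/inpaint pass that redraws the face.
    refined = big  # identity stub; the real node runs a denoising pass here
    # 4. Downscale back and paste the result into the original image.
    small = refined[::scale, ::scale]
    out = image.copy()
    out[y1:y1 + small.shape[0], x1:x1 + small.shape[1]] = small
    return out
```

Because the refinement step is an identity stub, the function returns the input unchanged; in the real node, step 3 is where the face actually gets redrawn with your checkpoint and face prompt.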

AnimateDiff

AnimateDiff is a model designed for generating animations and can be paired with Face Detailer to restore faces. You can download this AnimateDiff+FaceDetailer workflow to get started.

AnimateDiff face Detailer workflow

I took a simple AnimateDiff workflow and added a Face Detailer group of nodes in the bottom part. These nodes are also part of the Impact-Pack and are specifically designed to work with AnimateDiff. The results are good, though perhaps not as effective as restoring faces in still images. It is also true that AnimateDiff without ControlNet easily produces distorted or flickering animations, which makes the task harder.


Curious about making animations? Discover AnimateDiff in ComfyUI in combination with ControlNet and Prompt Travelling.

Stable Diffusion Web UI

Setting up face restoration in the AUTOMATIC1111 Web UI is as simple as installing an extension, specifically After Detailer (ADetailer). You can easily find it in the Extensions tab; just search for After Detailer or Detailer. For more information, refer to this tutorial on stable-diffusion-webui.

Refresh the UI, and you will discover a new section called ADetailer in the txt2img and img2img tabs. Let’s explore how it works for text-to-image.

stable diffusion webui adetailer face restoration
The ADetailer extension panel in the Web UI

Select the ‘Enable ADetailer’ checkbox and choose a model in the panel below, such as mediapipe_face_full. Now you are ready to proceed. Optionally, you can add a prompt to customize the face restoration process; for example, if you want to change the color of the eyes or the facial expression, type it in there.
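If you drive the Web UI programmatically, ADetailer can also be enabled per request through the `alwayson_scripts` field of the API payload. The sketch below shows the general shape; the argument names (`ad_model`, `ad_prompt`) follow the ADetailer extension's API schema, but treat them as assumptions to verify against your installed version:

```python
# Sketch of a txt2img API payload that enables ADetailer.
# The ad_model / ad_prompt field names are based on the ADetailer
# extension's API -- verify against the version you have installed.
payload = {
    "prompt": "portrait of a woman, detailed skin",
    "steps": 25,
    "alwayson_scripts": {
        "ADetailer": {
            "args": [
                {
                    "ad_model": "mediapipe_face_full",
                    # Optional face-specific prompt, e.g. to change eye color:
                    "ad_prompt": "green eyes, gentle smile",
                }
            ]
        }
    },
}
```

You would POST this to the /sdapi/v1/txt2img endpoint of a Web UI instance launched with the --api flag.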

You might spot the model detecting the face in the original image while the restoration runs. After a few seconds, you should see the fixed image.

You can also use the YOLO model mentioned previously, if you prefer to experiment more.


Conclusion

Fixing faces is an essential technique for AI-generated images, which don’t always get everything right (hands are another notorious trouble spot). With these tools we can at least attempt to enhance facial features, and they can also save time: you can generate images at a lower resolution first and then improve only the face separately.

Something is not clear? Just leave a comment!


YOLOv8 models · ComfyUI Impact-Pack · Image Face Detailer workflow
