ComfyUI Guide: Utilizing ControlNet and T2I-Adapter

Overview:
In ComfyUI, ControlNet and T2I-Adapter are essential tools for guiding image generation. This guide provides a brief overview of how to use them effectively, with a focus on the required image formats and available resources.

1. Image Formatting for ControlNet/T2I-Adapter:

  • Direct Image Passage: In the provided examples, the raw image is sent straight to the ControlNet/T2I adapter.
  • Specific Format Requirements: Each ControlNet/T2I adapter mandates a certain image format for optimal results, such as depth maps, canny maps, etc., based on the model in use.
  • Manual Conversion Needed: The ControlNetApply node will not convert regular images into the required formats (depth maps, canny maps, etc.) automatically. Users must perform this conversion separately.
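As an illustration of what such a conversion produces, here is a crude gradient-threshold edge map in plain NumPy. This is only a sketch, not ComfyUI's actual Canny preprocessor; a real workflow should use a dedicated preprocessing node instead.

```python
import numpy as np

def edge_hint(gray: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Very rough edge-map stand-in for a Canny preprocessor.

    `gray` is a 2-D float array in [0, 1]; returns a binary edge map
    of the same shape.
    """
    # Central finite differences along each axis (Sobel-style gradients)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    magnitude = np.hypot(gx, gy)
    # Threshold the gradient magnitude into a binary edge map
    return (magnitude > threshold).astype(np.float32)

# A synthetic image with a single vertical step edge
img = np.zeros((8, 8), dtype=np.float32)
img[:, 4:] = 1.0
edges = edge_hint(img)  # edges light up around column 4
```

The output array can then be saved as a grayscale image and fed to a canny-style ControlNet.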

2. Preprocessing and ControlNet Model Resources:

  • Image Preprocessing Nodes: Dedicated preprocessing custom nodes are available to convert your images into the necessary formats.
  • ControlNet Model Files: The most recent ControlNet model files are available in two versions.
  • Control LoRAs for SDXL: For SDXL users, Stability AI offers Control-LoRAs, which extend ControlNet-style control to SDXL and come in two ranks (rank 128 and rank 256).

3. Directory Placement:

  • Location for ControlNet Model Files: Ensure that all ControlNet model files are placed in the ‘ComfyUI/models/controlnet’ directory for smooth integration with ComfyUI.

Scribble ControlNet

Here’s a simple example of how to use ControlNets. This example uses the scribble ControlNet and the AnythingV3 model. You can load this image in ComfyUI to get the full workflow.

Example

Here is the input image I used for this workflow:

T2I-Adapter vs ControlNets

T2I-Adapters are much more efficient than ControlNets, so I highly recommend them. ControlNets slow down generation speed by a significant amount, while T2I-Adapters have almost zero negative impact on generation speed.

With a ControlNet, the ControlNet model runs once every sampling iteration. With a T2I-Adapter, the model runs only once in total.
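This cost difference can be sketched conceptually (this is not ComfyUI's internal code; the function names and the simplified sampling loop are illustrative only):

```python
# Conceptual sketch of why T2I-Adapters are cheaper: the ControlNet
# runs inside the sampling loop, the adapter runs once up front.
controlnet_calls = 0
adapter_calls = 0

def controlnet_model(latent, hint):
    global controlnet_calls
    controlnet_calls += 1
    return 0.0  # a real ControlNet would compute per-step residuals here

def adapter_model(hint):
    global adapter_calls
    adapter_calls += 1
    return 0.0  # a real adapter would compute reusable features here

def sample(steps, hint, use_adapter):
    latent = 0.0
    # Adapter features are computed once, before the loop
    features = adapter_model(hint) if use_adapter else None
    for _ in range(steps):
        if use_adapter:
            residual = features                    # precomputed, no model call
        else:
            residual = controlnet_model(latent, hint)  # one call per step
        latent += residual                         # stand-in for a denoising step
    return latent

sample(steps=20, hint=None, use_adapter=False)  # ControlNet: 20 model calls
sample(steps=20, hint=None, use_adapter=True)   # Adapter: 1 model call
```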

T2I-Adapters are used the same way as ControlNets in ComfyUI: using the ControlNetLoader node.

This is the input image that will be used in this example:

Here is how you use the depth T2I-Adapter:

Example

Here is how you use the depth ControlNet. Note that this example uses the DiffControlNetLoader node because the ControlNet used is a diff ControlNet. Diff ControlNets require the base model's weights in order to load correctly. The DiffControlNetLoader node can also be used to load regular ControlNet models, in which case it behaves the same as the ControlNetLoader node.

Example

You can load these images in ComfyUI to get the full workflow.

Pose ControlNet

This is the input image that will be used in this example:

Example

Here is an example using a first pass with AnythingV3 together with the ControlNet, followed by a second pass without the ControlNet using AOM3A3 (AbyssOrangeMix3) and its VAE.

Example

You can load this image in ComfyUI to get the full workflow.

Mixing ControlNets

Multiple ControlNets and T2I-Adapters can be applied like this with interesting results:
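Conceptually, each apply step wraps the incoming conditioning with one more (hint, strength) pair, so controls stack. A minimal sketch (the function and data shapes are illustrative, not ComfyUI's actual node code):

```python
# Conceptual sketch of chaining two control hints: each
# ControlNetApply-style step adds an extra (hint, strength) pair
# to the conditioning, and all pairs influence sampling together.
def apply_control(conditioning, hint, strength):
    # Returns new conditioning; the original list is not mutated.
    return conditioning + [(hint, strength)]

cond = []  # stand-in for the text conditioning from the prompt
cond = apply_control(cond, "scribble_map", 1.0)
cond = apply_control(cond, "depth_map", 0.6)
```

Lowering the strength of one control lets the other dominate, which is how interesting blends are tuned.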

Example

You can load this image in ComfyUI to get the full workflow.

Input images:
