Harnessing the “Hires Fix” Feature with Txt2Img in ComfyUI

The “Hires Fix” approach gives you an alternative way to produce high-resolution images without generating them at full resolution in a single pass. Instead, you generate the image at a lower, more manageable resolution, upscale it, and then run it through img2img to refine the result at the target resolution.

Steps to Leverage the Hires Fix in ComfyUI:

  1. Loading Images: Start by loading one of the example images into ComfyUI to access the complete workflow (ComfyUI stores the workflow in the image’s metadata, so dragging the image into the interface loads the full graph).
  2. Understanding the Underlying Concept:
    • The core principle of Hires Fix lies in upscaling a lower-resolution image before its conversion via img2img.
    • In ComfyUI, txt2img and img2img use the same sampler node; the distinction between them is purely a matter of input. Txt2img is achieved by feeding the sampler an empty latent image and setting the denoise parameter to its maximum value of 1.0.
  3. Implementing a Basic Workflow:
    • Step 1: Begin with the standard sampler (KSampler) node in ComfyUI; there is no separate txt2img node.
    • Step 2: Connect an Empty Latent Image node to the sampler’s latent input.
    • Step 3: Set the denoise parameter to its maximum value (1.0). With an empty latent and full denoise, the sampler behaves as txt2img.
    • Step 4: Generate the image at a lower resolution.
    • Step 5: Use a Latent Upscale node, or any other upscale method available in ComfyUI, to enlarge the result.
    • Step 6: Finally, route the upscaled latent through a second sampler pass with denoise set below 1.0 (that is, img2img) to refine it and obtain the final high-resolution output; a sketch of the full graph follows this list.
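
The same graph can be written out in ComfyUI’s API prompt format (the JSON representation ComfyUI’s API accepts). The following is a minimal sketch, not a drop-in workflow: the checkpoint filename, prompt text, resolutions (768 upscaled to 1536) and the second-pass denoise of 0.5 are illustrative placeholders, and node class names or input keys may differ slightly between ComfyUI versions.

# Minimal sketch of a two-pass "Hires Fix" graph in ComfyUI's API prompt format.
# "model.safetensors" is a placeholder for whatever checkpoint you actually use.
hires_fix_prompt = {
    # Load the checkpoint; outputs: 0 = MODEL, 1 = CLIP, 2 = VAE.
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},

    # Positive and negative prompts.
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a scenic mountain landscape, highly detailed",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality",
                     "clip": ["1", 1]}},

    # Step 2: an empty latent image at the lower working resolution.
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 768, "height": 768, "batch_size": 1}},

    # Steps 3-4: first sampler pass with denoise = 1.0, i.e. txt2img.
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},

    # Step 5: upscale the latent to the target resolution.
    "6": {"class_type": "LatentUpscale",
          "inputs": {"samples": ["5", 0], "upscale_method": "nearest-exact",
                     "width": 1536, "height": 1536, "crop": "disabled"}},

    # Step 6: second sampler pass with denoise < 1.0, i.e. img2img refinement.
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["6", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 0.5}},

    # Decode the final latent and save the image.
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "hires_fix"}},
}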

By following this workflow, you use the Hires Fix method to produce high-resolution images in ComfyUI while keeping generation efficient and preserving image quality.

[Example workflow image]
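
If you prefer driving ComfyUI from a script instead of the graph editor, a prompt in the API format above can be queued over HTTP. This is a minimal sketch that assumes a locally running ComfyUI server at its default address (127.0.0.1:8188); adjust the host and port if yours differs.

import json
from urllib import request

# Queue the hires_fix_prompt dict from the sketch above on a running ComfyUI
# server via its /prompt endpoint (default local address assumed).
def queue_prompt(prompt, server="http://127.0.0.1:8188"):
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = request.Request(f"{server}/prompt", data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example usage:
# queue_prompt(hires_fix_prompt)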

Non-latent Upscaling

Here is an example of how an ESRGAN upscaler can be used for the upscaling step. Since ESRGAN operates in pixel space, the image must first be decoded from latent space to pixel space, upscaled there, and then encoded back to latent space before the second sampling pass. A sketch of the corresponding nodes follows the example image.

[Example workflow image]
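
In API-format terms, the pixel-space round trip replaces the LatentUpscale node (node "6" in the earlier sketch) with a decode/upscale/encode chain. The snippet below is a sketch under the same assumptions as before; the upscale model filename is a placeholder for whatever ESRGAN-style model sits in your upscale_models folder, and the intermediate ImageScale step is optional if the upscaler already produces the size you want.

# Sketch: swap the LatentUpscale node for a pixel-space ESRGAN upscale.
# The second KSampler ("7") would then take its latent_image from "6e" below.
esrgan_upscale_nodes = {
    # Decode the first-pass latent to pixels.
    "6a": {"class_type": "VAEDecode",
           "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},

    # Load an ESRGAN-style upscale model (placeholder filename).
    "6b": {"class_type": "UpscaleModelLoader",
           "inputs": {"model_name": "RealESRGAN_x4plus.pth"}},

    # Upscale the decoded image in pixel space.
    "6c": {"class_type": "ImageUpscaleWithModel",
           "inputs": {"upscale_model": ["6b", 0], "image": ["6a", 0]}},

    # Optionally resize to the exact target resolution.
    "6d": {"class_type": "ImageScale",
           "inputs": {"image": ["6c", 0], "upscale_method": "bilinear",
                      "width": 1536, "height": 1536, "crop": "disabled"}},

    # Encode back into latent space for the second sampling pass.
    "6e": {"class_type": "VAEEncode",
           "inputs": {"pixels": ["6d", 0], "vae": ["1", 2]}},
}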

More Examples

Here is an example of a more complex two-pass workflow. The image is first generated with the WD1.5 beta 3 illusion model, latent upscaled, and then refined in a second pass with cardosAnime_v10 (a sketch of the model-swapping nodes follows the example image):

[Example workflow image]
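
The structural difference from the basic hires-fix sketch is that each sampling pass uses its own checkpoint, and the second pass needs prompts encoded with the second checkpoint’s CLIP. The sketch below shows only the nodes that change relative to the earlier graph; the checkpoint filenames are placeholders for whatever files you actually have installed.

# Sketch of the two-checkpoint variant: first pass with one model, latent
# upscale, second pass with a different model. Filenames are placeholders.
two_pass_nodes = {
    "1":  {"class_type": "CheckpointLoaderSimple",   # first-pass model
           "inputs": {"ckpt_name": "wd15_beta3_illusion.safetensors"}},
    "10": {"class_type": "CheckpointLoaderSimple",   # second-pass model
           "inputs": {"ckpt_name": "cardosAnime_v10.safetensors"}},

    # Second-pass prompts, encoded with the second model's CLIP.
    "11": {"class_type": "CLIPTextEncode",
           "inputs": {"text": "a scenic mountain landscape, highly detailed",
                      "clip": ["10", 1]}},
    "12": {"class_type": "CLIPTextEncode",
           "inputs": {"text": "blurry, low quality",
                      "clip": ["10", 1]}},

    # Second sampler pass: same upscaled latent as before (node "6"), but the
    # model and conditioning now come from checkpoint "10".
    "7":  {"class_type": "KSampler",
           "inputs": {"model": ["10", 0], "positive": ["11", 0], "negative": ["12", 0],
                      "latent_image": ["6", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal", "denoise": 0.5}},

    # Decode with the second model's VAE.
    "8":  {"class_type": "VAEDecode",
           "inputs": {"samples": ["7", 0], "vae": ["10", 2]}},
}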
