
Day 1 Support for Flux Tools in ComfyUI

We’re thrilled to share that ComfyUI now supports three new series of models from Black Forest Labs designed for FLUX.1: the Redux adapter, the Fill model, and the ControlNet models & LoRAs (Depth and Canny).

These additions provide users with easy and precise control of details and styles in image generation.

  1. FLUX.1 Fill [dev]: takes an input image, an input mask (a black-and-white image the same size as the input image), and a prompt.
  2. FLUX.1 Redux [dev]: a small adapter that works with both [dev] and [schnell] to generate image variations. The input is a single image (no prompt), and the model generates images similar to it.
  3. ControlNet models: take an input image and a prompt.
    • FLUX.1 Depth [dev]: uses a depth map as the conditioning
    • FLUX.1 Depth [dev] LoRA: a LoRA to be used with FLUX.1 [dev]
    • FLUX.1 Canny [dev]: uses a Canny edge map as the conditioning
    • FLUX.1 Canny [dev] LoRA: a LoRA to be used with FLUX.1 [dev]

Read on for a detailed look at each model, its features, and how to start using it.

1. Fill

The Fill Model is designed for inpainting and outpainting through masks and prompts.

  • Functions:
    • Inpainting: Fill in missing or removed areas in an image.
    • Outpainting: Extend an image seamlessly beyond its original borders.
  • Input: An input image, an input mask (a black-and-white image the same size as the input image), and a prompt. A mask-preparation sketch follows this list.
  • Enhanced CLI Features: For inpainting tasks, the CLI automatically infers the size of the generated image based on the dimensions of the input image and mask.
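If you prefer to prepare the mask programmatically rather than painting it in ComfyUI’s mask editor, here is a minimal sketch using Pillow. The file names and the rectangle coordinates are placeholders; the convention assumed here is that white pixels mark the area to repaint.

```python
# A minimal sketch of preparing an inpainting mask for the Fill model.
# "input.png" and the rectangle below are placeholders.
from PIL import Image, ImageDraw

image = Image.open("input.png").convert("RGB")

# The mask must be a black-and-white image with the same size as the input.
mask = Image.new("L", image.size, 0)            # start fully black (keep everything)
draw = ImageDraw.Draw(mask)
draw.rectangle((256, 256, 512, 512), fill=255)  # white = region to fill

mask.save("input_mask.png")
```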

Get Started:

  1. Update ComfyUI to the latest version
  2. Download the clip_l and t5xxl_fp16 models into the ComfyUI/models/clip folder
  3. Download flux1-fill-dev.safetensors into the ComfyUI/models/unet folder (a download sketch follows this list)
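If you want to script the downloads, here is a minimal sketch using huggingface_hub. The repo IDs are assumptions about where these files are published, so verify them against the model cards first; note that the FLUX.1 [dev] weights are gated and may require running `huggingface-cli login` beforehand.

```python
# A sketch of fetching the required files with huggingface_hub.
# The repo IDs are assumptions -- check the Hugging Face model cards
# for the exact locations before running this.
from huggingface_hub import hf_hub_download

hf_hub_download("comfyanonymous/flux_text_encoders", "clip_l.safetensors",
                local_dir="ComfyUI/models/clip")
hf_hub_download("comfyanonymous/flux_text_encoders", "t5xxl_fp16.safetensors",
                local_dir="ComfyUI/models/clip")
hf_hub_download("black-forest-labs/FLUX.1-Fill-dev", "flux1-fill-dev.safetensors",
                local_dir="ComfyUI/models/unet")
```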

Use the flux_inpainting_example or flux_outpainting_example workflows on our example page.

2. Redux

The Redux model is a lightweight adapter that works with both FLUX.1 [dev] and FLUX.1 [schnell] to generate image variations from a single input image, no prompt required. It’s perfect for quickly producing images in a specific style.

  • Input: Provide an existing image to the Redux adapter.
  • Output: A set of variations true to the input’s style, color palette, and composition.

Get Started:

  1. Update ComfyUI to the latest version
  2. Download sigclip_patch14-384.safetensors into the ComfyUI/models/clip_vision folder
  3. Make sure the flux1-dev model is in the ComfyUI/models/unet folder
  4. Download the Redux model into the ComfyUI/models/style_models folder
  5. Download the Redux workflows shown in the pictures below from our example page (a sketch for queueing them via the API follows this list)
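You can also drive these workflows without the browser UI. Below is a minimal sketch that queues a workflow through ComfyUI’s local HTTP API, assuming ComfyUI is running on the default port. The JSON file name is hypothetical, and the workflow must be exported in API format (you may need to enable dev mode options in the settings to see “Save (API Format)”).

```python
# A minimal sketch of queueing a workflow through ComfyUI's HTTP API.
# Assumes ComfyUI is running locally on the default port and that the
# workflow was exported in API format.
import json
import urllib.request

with open("flux_redux_example_api.json") as f:  # hypothetical file name
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # response includes the queued prompt_id
```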

Remix a single image

[Image: Remixing the JAGUAR logo :)]

Remix multiple images

[Image: Remixing two images together]

3. ControlNets

ControlNet models enhance generation workflows by adding specific conditioning via maps, offering unparalleled control over structure and design.

  • Types:
    • Depth Model: Leverages a depth map to guide perspective and structure.
    • Canny Model: Uses a Canny edge map for outline-based conditioning.
  • Input required: An image, a text prompt, and a depth or Canny edge map (a Canny sketch follows this list).
  • Output: A generated image that closely follows the provided structure and adheres to the prompt.
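If you want to preprocess the edge map yourself, here is a minimal sketch using OpenCV’s Canny detector, assuming OpenCV is installed; the file names and thresholds are placeholders to tune per image. ComfyUI also ships a built-in Canny node, so this is only needed for offline preprocessing.

```python
# A minimal sketch of producing a Canny edge map for the Canny ControlNet.
# File names and thresholds are placeholders.
import cv2

image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, threshold1=100, threshold2=200)  # tune thresholds per image
cv2.imwrite("canny_map.png", edges)
```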

For more details, please check out the blog post from Black Forest Labs.

We’ll continue to refine and improve these example workflows based on community feedback.

Enjoy your creations!