Inpainting with LoRA

[Workflow diagram: Inpainting with LoRA]

Description

Create powerful inpaint transformations using Flux Dev Q8 in combination with LoRA models. This workflow excels at reconstructing masked areas while applying detailed stylistic changes: perfect for enhancing portraits, editing outfits, or transforming facial features with ease and precision.


🎯 Features

  • Flux Dev Q8 Model – Optimized UNet for enhanced inpainting results.
  • LoRA Integration – Uses Power Lora Loader to inject multiple LoRA styles (like moonal1saa).
  • Mask-Based Editing – Inpaint specific regions using precise mask control.
  • Custom Prompts Support – Combine trigger words and stylistic prompts dynamically.
  • Stitching & Decoding – Maintains image resolution and coherence through InpaintCrop, InpaintStitch, and VAEDecode.
  • FluxGuidance Node – Fine control over the guidance strength for improved prompt responsiveness.

💡 Use Cases

  • Portrait Reworks – Change facial features, hairstyles, or accessories while keeping base structure.
  • Fantasy/Character Edits – Inject stylistic LoRA characters into selected image regions.
  • Fixing Images – Repair damaged or unwanted parts using guided inpainting.
  • Fashion Retouches – Swap clothing or accessories on models selectively.
  • Creative Concept Art – Use LoRA-enhanced prompts to generate imaginative alterations on existing images.

βš™οΈ How It Works

  1. Load an Image & Mask – Use the LoadImage node to input both the source image and its mask.
  2. Set Up LoRA – In Power Lora Loader, load your desired LoRAs (e.g. Mona Lisa Flux, Turbo Alpha) and set their strengths.
  3. Define Prompts – Use trigger words and stylistic descriptions (easy positive, CLIPTextEncode, Text Concatenate) to influence generation.
  4. Enable FluxGuidance – Connect encoded text through FluxGuidance to refine prompt control.
  5. Condition the Inpainting – Use InpaintModelConditioning with your image, mask, and VAE.
  6. Sample & Decode – Process through KSampler and VAEDecode to reconstruct the image.
  7. Stitch & Preview – Combine inpainted parts with InpaintStitch, preview the results, and save via SaveImage.
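The steps above can also be run programmatically: a graph exported from the ComfyUI editor in API format can be queued by POSTing it to the server's `/prompt` endpoint. A stdlib-only sketch; the default server address and the `client_id` value are assumptions:

```python
import json
import urllib.request

def build_prompt_request(graph, server="http://127.0.0.1:8188",
                         client_id="docs-example"):
    """Wrap an API-format graph in the JSON body ComfyUI's POST /prompt expects."""
    body = json.dumps({"prompt": graph, "client_id": client_id}).encode("utf-8")
    return urllib.request.Request(
        f"{server}/prompt", data=body,
        headers={"Content-Type": "application/json"})

# To actually queue the workflow (requires a running ComfyUI server):
# with urllib.request.urlopen(build_prompt_request(graph)) as resp:
#     print(json.load(resp))
```

The server responds with a prompt id that can be used to poll history or fetch the saved images once sampling finishes.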

Credits: pixaroma

Models

| File | Destination | Source |
| --- | --- | --- |
| flux1-dev-Q8_0.gguf | /ComfyUI/models/unet | Download |
| clip_l.safetensors | /ComfyUI/models/clip | Download |
| t5-v1_1-xxl-encoder-Q8_0.gguf | /ComfyUI/models/clip | Download |
| ae.safetensors | /ComfyUI/models/vae | Download |
| sigclip_vision_patch14_384.safetensors | /ComfyUI/models/clip_vision | Download |
| flux1-redux-dev.safetensors | /ComfyUI/models/style_models | Download |
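Before running the workflow it can help to verify that every file above is in place. A small sketch; the file names and folders are taken from the table, and the ComfyUI root path is whatever your install uses:

```python
from pathlib import Path

# Model files and their expected folders, taken from the table above.
REQUIRED_MODELS = {
    "unet": ["flux1-dev-Q8_0.gguf"],
    "clip": ["clip_l.safetensors", "t5-v1_1-xxl-encoder-Q8_0.gguf"],
    "vae": ["ae.safetensors"],
    "clip_vision": ["sigclip_vision_patch14_384.safetensors"],
    "style_models": ["flux1-redux-dev.safetensors"],
}

def missing_models(comfy_root):
    """Return the model files not found under <comfy_root>/models/<folder>."""
    root = Path(comfy_root)
    return [f"models/{folder}/{name}"
            for folder, names in REQUIRED_MODELS.items()
            for name in names
            if not (root / "models" / folder / name).exists()]
```

Running `missing_models("/path/to/ComfyUI")` returns an empty list when all six files are installed in the right subfolders.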

Nodes

CLIPTextEncode, UnetLoaderGGUF, DualCLIPLoaderGGUF, EmptySD3LatentImage, FluxGuidance, easy showAnything, KSampler, VAEDecode, SaveImage, FluxResolutionNode, VAELoader, CLIPVisionEncode, CLIPVisionLoader, StyleModelLoader, StyleModelApplySimple, LoadImage