Basic Image Generation

Getting Started with Basic Image Generation

Learn how to create your first image using ComfyUI's node-based interface. This tutorial will walk you through the essential steps of setting up a basic generation workflow.

Step 1: Loading a Model

First, load the base model checkpoint; it also supplies the CLIP and VAE components that the later nodes will use. A minimal sketch of this step in API form follows the list.

  • Click the "Add Node" button or right-click in the workspace
  • Search for "Load Checkpoint" and add it to your workspace
  • Select your desired model from the dropdown (e.g., "sd_xl_base_1.0.safetensors")
  • Add a "VAE Decode" node - this will be connected later
Step 2: Setting Up Text Prompts

Create positive and negative prompt nodes to guide the image generation; a sketch of these nodes follows the list.

  • Add a "CLIPTextEncode" node for your positive prompt
  • Add another "CLIPTextEncode" node for your negative prompt
  • Connect both to the Load Checkpoint node's CLIP output
  • Enter your desired prompts in each node
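
Continuing the sketch from step 1, the two prompt nodes might look like this (the prompt text is only an example, and ["1", 1] refers to the checkpoint node's CLIP output):

    # Positive and negative CLIPTextEncode nodes, both fed by the checkpoint's CLIP.
    workflow.update({
        "2": {
            "class_type": "CLIPTextEncode",
            "inputs": {"text": "a scenic mountain lake at sunrise", "clip": ["1", 1]},
        },
        "3": {
            "class_type": "CLIPTextEncode",
            "inputs": {"text": "blurry, low quality, watermark", "clip": ["1", 1]},
        },
    })
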
Step 3: Configuring Generation Parameters

Set up the parameters that control how your image will be generated; the sketch after this list shows the same nodes in API form.

  • Add a "KSampler" node to control the generation process
  • Set your desired steps (20-30 is a good starting point)
  • Choose your preferred sampler (Euler a is commonly used)
  • Set an appropriate CFG Scale (7-8 works well for most cases)
  • Connect your checkpoint, positive prompt, and negative prompt nodes
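
Continuing the sketch, the blank latent canvas and the sampler could be wired up roughly like this (the seed, resolution, and other numbers are just the starting points suggested above):

    # Empty latent image plus the KSampler that denoises it.
    workflow.update({
        "4": {
            "class_type": "EmptyLatentImage",
            "inputs": {"width": 1024, "height": 1024, "batch_size": 1},
        },
        "5": {
            "class_type": "KSampler",
            "inputs": {
                "model": ["1", 0],         # MODEL from Load Checkpoint
                "positive": ["2", 0],      # positive conditioning
                "negative": ["3", 0],      # negative conditioning
                "latent_image": ["4", 0],  # the empty latent above
                "seed": 42,
                "steps": 25,
                "cfg": 7.5,
                "sampler_name": "euler_ancestral",
                "scheduler": "normal",
                "denoise": 1.0,
            },
        },
    })
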
Step 4: Image Output Setup

Configure the final steps to decode and save your image; the sketch after this list completes the graph and shows one way to queue it.

  • Connect the KSampler's LATENT output to the VAE Decode node, and the checkpoint's VAE output to the VAE Decode's vae input
  • Add a "Save Image" node and connect it to the VAE Decode output
  • Set a filename prefix; images are saved to ComfyUI's output folder
  • Your workflow is now ready - click "Queue Prompt" to generate your first image!
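
To finish the sketch, the decode and save nodes look like this, followed by an optional snippet that queues the whole graph against a locally running ComfyUI instance (the default address http://127.0.0.1:8188 is assumed):

    # Decode the sampled latent with the checkpoint's VAE and save the result.
    workflow.update({
        "6": {
            "class_type": "VAEDecode",
            "inputs": {"samples": ["5", 0], "vae": ["1", 2]},
        },
        "7": {
            "class_type": "SaveImage",
            "inputs": {"images": ["6", 0], "filename_prefix": "basic_generation"},
        },
    })

    # Optional: queue the graph via ComfyUI's HTTP API instead of the UI button.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)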
