I’m Sold On StableSwarmUI

Good Morning from my Robotics Lab! This is Shadow_8472, and I’ve made up my mind on StableSwarmUI as a replacement for A1111. Let’s get started!

Generative AI (Artificial Intelligence) is the technology buzzword of the decade so far, thanks to open-source models. Automatic1111 has an extensive community library, but ComfyUI’s flexibility may yet challenge it as the next favorite. While StableSwarmUI isn’t yet polished to A1111’s visual aesthetic, a total AI noob should find it navigable while still being able to peek at the Comfy layer beneath.

Learning ComfyUI Basics

I’m taking that peek… ComfyUI looks like boxes and spaghetti. The correct term is “workflow.” Each node represents a unit of work, much like a panel of options in any other UI. The power of Comfy is the ability to arbitrarily link and re-arrange nodes. Once my first impression (intimidation) wore off, I found that grouping familiar options by node and color coding their connections made the basic workflow more intuitive while highlighting gaps in my understanding of the Stable Diffusion process.

Let’s define some terms before continuing. Be warned: I’m still working on my intuition, so don’t quote me on this.

  • Latent Space: a data structure for concepts, trained by examples and counterexamples. Related concepts are stored close to each other, allowing interpolation between them.
  • Latent Image: an image encoded as a point in a latent space.
  • Model/Checkpoint: save files for a latent space. From what I can tell, checkpoints can be trained further, while finished models are more flexible.
  • CLIP: (Contrastive Language-Image Pretraining) the part of the model that turns prompt text into concepts.
  • Sampler: explores the model’s latent space for a given number of “steps,” guided by the concepts specified in the CLIP conditioning as well as additional sliders.
  • VAE: (Variational AutoEncoder) a model that translates images to and from latent space.

The basic Stable Diffusion workflow starts with an Empty Latent Image node defining width, height, and batch size. Alongside this, a model or checkpoint is loaded. CLIP Text Encode nodes are used to enter prompts (typically one positive and one negative). A KSampler node does the heavy lifting, denoising the latent image under the guidance of the model and both prompts (and showing a low-resolution preview as it goes, if enabled). Finally, a VAE Decode node turns the latent image into a normal picture.
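
To make that concrete, here’s a rough sketch of that same workflow in ComfyUI’s API (JSON) format, queued over HTTP the way a script (or, as I understand it, StableSwarmUI itself) would drive a local Comfy backend. Treat the checkpoint filename, prompts, and settings as placeholders for your own; you can export any graph in this same shape via “Save (API Format)” once dev mode options are enabled.

```python
import json
import urllib.request

# The basic text-to-image workflow from above in ComfyUI's "API format":
# each node has an ID, a class_type, and inputs; links are [source_node_id, output_index].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",            # outputs: MODEL, CLIP, VAE
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},   # placeholder filename
    "2": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "3": {"class_type": "CLIPTextEncode",                    # positive prompt
          "inputs": {"text": "a buff angel with a glowing sword", "clip": ["1", 1]}},
    "4": {"class_type": "CLIPTextEncode",                    # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "5": {"class_type": "KSampler",                          # the heavy lifting
          "inputs": {"model": ["1", 0], "positive": ["3", 0], "negative": ["4", 0],
                     "latent_image": ["2", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",                         # latent -> normal picture
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "swarm_test"}},
}

# Queue it on a locally running ComfyUI backend (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```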

While I’m still developing an intuition for how a latent space works, I’m imagining a tent held up by a number of poles defining its shape. You are free to interpolate between these points, but quirks can arise when concepts bleed into each other: like how you’d tend to imagine bald people as male.

ControlNet

The next process I want to demystify for myself is ControlNet. A second model is loaded to extract information from an existing image. This information is then applied to your positive prompt. (Let me know if you get any interesting results conditioning negative prompts.) Add a second ControlNet or more, and combining them presents its own artistic opportunities.
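
In node terms, that looks roughly like the sketch below, extending the workflow dictionary from earlier with core Comfy nodes: a Canny preprocessor pulls edges from a reference image, and a ControlNet folds them into the positive conditioning. The reference image name, the ControlNet model filename, and the threshold/strength numbers are all placeholders I made up.

```python
# Sketch: Canny edges from a reference image steer the positive conditioning.
# Extends the "workflow" dict above; filenames and numbers are placeholders.
workflow.update({
    "10": {"class_type": "LoadImage",
           "inputs": {"image": "angel_reference.png"}},       # hypothetical reference image
    "11": {"class_type": "Canny",                             # edge-detection preprocessor
           "inputs": {"image": ["10", 0],
                      "low_threshold": 0.3, "high_threshold": 0.7}},
    "12": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "controlnet-canny-sdxl.safetensors"}},  # placeholder
    "13": {"class_type": "ControlNetApply",                   # strength scales its influence
           "inputs": {"conditioning": ["3", 0], "control_net": ["12", 0],
                      "image": ["11", 0], "strength": 0.8}},
})
# Point the KSampler's positive input at the ControlNet-conditioned output instead.
workflow["5"]["inputs"]["positive"] = ["13", 0]
```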

For this exercise, I used a picture I made during my first attempt at Stable Diffusion: a buff angel with a glowing sword. As a challenge to myself, I redid it with SDXL (Stable Diffusion XL). I used matching ControlNet models for Canny and OpenPose. Some attempts came up with details I liked and tried to keep. I added the SDXL refiner model to try to fix his sword hand. It didn’t work, but in the end, I had made a generation I liked with a few golden armor pieces and a red, white, and blue “(kilt:1.4).” Happy 4th of July!
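
In case the refiner step is unfamiliar: the usual trick is to split the sampling steps between two advanced samplers, with the base model leaving leftover noise for the refiner to finish. Here’s a rough sketch in the same style as before, with the refiner filename and the 20/25 step split as placeholder values rather than my exact settings.

```python
# Sketch of a base+refiner split: the base handles the first ~80% of steps and
# hands a still-noisy latent to the refiner. Filenames and steps are placeholders.
workflow.update({
    "20": {"class_type": "CheckpointLoaderSimple",
           "inputs": {"ckpt_name": "sd_xl_refiner_1.0.safetensors"}},   # placeholder filename
    "21": {"class_type": "CLIPTextEncode",                    # refiner has its own CLIP
           "inputs": {"text": "a buff angel with a glowing sword", "clip": ["20", 1]}},
    "22": {"class_type": "CLIPTextEncode",
           "inputs": {"text": "blurry, low quality", "clip": ["20", 1]}},
    "23": {"class_type": "KSamplerAdvanced",                  # base model: steps 0-20 of 25
           "inputs": {"model": ["1", 0], "positive": ["3", 0], "negative": ["4", 0],
                      "latent_image": ["2", 0], "add_noise": "enable", "noise_seed": 42,
                      "steps": 25, "cfg": 7.0, "sampler_name": "euler", "scheduler": "normal",
                      "start_at_step": 0, "end_at_step": 20,
                      "return_with_leftover_noise": "enable"}},
    "24": {"class_type": "KSamplerAdvanced",                  # refiner: finishes steps 20-25
           "inputs": {"model": ["20", 0], "positive": ["21", 0], "negative": ["22", 0],
                      "latent_image": ["23", 0], "add_noise": "disable", "noise_seed": 42,
                      "steps": 25, "cfg": 7.0, "sampler_name": "euler", "scheduler": "normal",
                      "start_at_step": 20, "end_at_step": 25,
                      "return_with_leftover_noise": "disable"}},
})
workflow["6"]["inputs"]["samples"] = ["24", 0]  # decode the refined latent instead
```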

Practical Application

A recent event has inspired me to try making a landscape picture with a pair of mason jars (one full of gold coins, the other empty), both on a wooden table in front of a recognizable background. It’s a bit complex to generate straight out of text, but it shouldn’t be too hard with regional conditioning, right?

Impossible. Even if my background came out true, I’d still want the mason jars to match, which didn’t happen. This would have been the end of the line if I were limiting myself to A1111 without researching additional plugins for my already confusing-to-manage cocktail. With Comfy, my basic idea is to generate a jar, generate another, filled jar based off it, then generate them together in front of my background.

Again: easier said than done. Generating the initial mason jar was simple. I even arranged it into a tidy group. From there, I made a node group for ControlNet Canny and learned about Latent Composite, both of which let me consistently put the same jar into a scene twice (once I figured out my dimensions and offsets). Filling or emptying a jar’s gold, however, proved tricky. “Filling” it only ever gave me a quarter jar of coins (limited by the table visible through the glass), and emptying it left the glass surface horribly deformed. What’s more, the coins I did get would often morph into something else (such as maple syrup) with too high a denoise in the KSampler; too low a value, and the halves of the image wouldn’t fuse. I even had coins wind up in the wrong jar with an otherwise clean workflow.
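
For the curious, here’s a rough sketch of that compositing idea in the same API-format style as the earlier snippets, reusing the checkpoint node (“1”). The prompts, canvas size, offsets, feather, and denoise are placeholder values, not my actual workflow; note that Latent Composite works in latent space, so x/y offsets effectively snap to multiples of 8 pixels.

```python
# Sketch: generate one jar, paste it into a wider canvas twice at different
# offsets, then re-sample the whole canvas at a moderate denoise to fuse it.
workflow.update({
    "30": {"class_type": "CLIPTextEncode",            # jar prompt
           "inputs": {"text": "a mason jar full of gold coins on a wooden table",
                      "clip": ["1", 1]}},
    "31": {"class_type": "CLIPTextEncode",            # negative prompt
           "inputs": {"text": "blurry, deformed glass", "clip": ["1", 1]}},
    "32": {"class_type": "EmptyLatentImage",          # single jar, portrait
           "inputs": {"width": 512, "height": 768, "batch_size": 1}},
    "33": {"class_type": "KSampler",                  # generate the jar on its own
           "inputs": {"model": ["1", 0], "positive": ["30", 0], "negative": ["31", 0],
                      "latent_image": ["32", 0], "seed": 7, "steps": 25, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "34": {"class_type": "EmptyLatentImage",          # wide canvas for the full scene
           "inputs": {"width": 1344, "height": 768, "batch_size": 1}},
    "35": {"class_type": "LatentComposite",           # jar pasted on the left
           "inputs": {"samples_to": ["34", 0], "samples_from": ["33", 0],
                      "x": 64, "y": 0, "feather": 8}},
    "36": {"class_type": "LatentComposite",           # same jar again, on the right
           "inputs": {"samples_to": ["35", 0], "samples_from": ["33", 0],
                      "x": 768, "y": 0, "feather": 8}},
    "37": {"class_type": "CLIPTextEncode",            # scene prompt for the fusing pass
           "inputs": {"text": "two mason jars on a wooden table, scenic background",
                      "clip": ["1", 1]}},
    "38": {"class_type": "KSampler",                  # fusing pass: denoise is the tricky dial
           "inputs": {"model": ["1", 0], "positive": ["37", 0], "negative": ["31", 0],
                      "latent_image": ["36", 0], "seed": 7, "steps": 25, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 0.5}},               # too high morphs the coins, too low won't fuse
})
workflow["6"]["inputs"]["samples"] = ["38", 0]        # decode the composited result instead
```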

Even though I got a head start on this project, I must lay it down here, incomplete. I have seen backgrounds removed properly with masking, so I’ll be exploring that when I come back.

Takeaway

ComfyUI looks scary, but a clean workflow is its own work of art. Comfy’s path to mastery is clearer than A1111’s. Even if you stick to the basics, StableSwarmUI has simpler interfaces: a simple prompt tab and an unpolished, A1111-esque front panel for the pre-made workflows you load.

Final Question

I’m probably being too hard on myself for compositing glass in-workflow. Let me know what you think. What tips and tricks might you know for advanced AI composition? I look forward to hearing from you in the comments below or on my Socials!