Self-Hosted AI Consistent Characters: Part 2

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am continuing work towards a consistent character using Stable Diffusion AI image generation software. Let’s get started!

Previously

Last time I talked about making a consistent character on local hardware, I went over using the Automatic1111 (A1111) web interface (running on my father’s computer), installing the ControlNet extension for Stable Diffusion and equipping it with models for OpenPose, and then using OpenPose to generate eight skeletons based on screenshots from Sonic Forces. All common enough stuff, but for context, I am following a YouTube tutorial by Not4Talent, “Create consistent characters with Stable diffusion!!” [1].

Character Switch

While I had previously been working on a Sonic fan character for my sister, whom I am calling Ms. T, I switched over to working on my own character in the same setting, whom I’m calling Smokey Fox. He’s just spent several years studying in a foreign culture with a human sense of modesty, so I generated an orange Mobian fox with blue eyes, wearing red sneakers, blue jeans, and a red trench coat, while applying a bunch of little tricks I’ve picked up along the way, such as quality prompts and negative prompts.

Along the way, the AI came up with details I liked, such as a white shirt, black gloves, and black-tipped ears; some of the time, he even generated with a red thing on one glove that I decided was some kind of accessory crystal. Quality was spotty. It took me a few attempts before I cleaned up the hair in his profile shots by prompting for a bald head. Only four poses consistently gave him a tail, and one of those was almost never usable. It also tried giving him a black tail tip a few times, but I didn’t like that.

Along the way, I grabbed pictures with poses I liked and stacked them in GIMP. Because I was using a fixed seed, I was able to keep assembling poses until I had eight portraits I’d touched up. Notably: I had to extend his coat in his behind shot, his shoes needed a lot of help, and I had to draw his tail from scratch in one pose. The crystal thing on his glove also got interesting to transfer around, and I did have to draw it myself a few times. During this process, I took screenshots of my work in progress and shared them on Discord.

No Auto Save

Disaster!! At some point, my computer randomly crashed. I don’t remember the details, but it was several days later, when I returned to work on Smokey, that I learned GIMP doesn’t auto-save the way LibreOffice does. Thankfully, I had the screenshots to work with. I also lost my original prompts to a side project where I helped my mother troll a friend from elementary school about a Noah’s Ark baby quilt my mother made for her. In total, I made an island chain, a forest scene, and, just as the quilt was about to arrive, a beach scene with the Taco Bell logo embedded using a ControlNet model meant for making fancy QR codes.

Back to Smokey Fox: the next step in the tutorial was upscaling. Pain followed. The Not4Talent tutorial [1] didn’t make sense to me, so I spent a day or two unenthusiastically bumbling around, trying to learn enough to feel ready to post. I played around with several ControlNet models; most are variations on making white-on-black detail maps. One late-night session landed me on an upscale tutorial by Olivio Sarikas [2] that clicked with me. As with other tutorials, A1111 has moved (or removed) things between updates in the six months to a year since introducing ControlNet was the popular thing to cover – not to mention the various plugins that may differ between our setups. Olivio’s tutorial rescued my project, and I got back to having fun cleaning up details with GIMP.

Takeaway

I may need to take a closer look at Forge instead of A1111. A1111 has a known bug where it has trouble unloading models, and while I was playing with various ControlNet models, I managed to blow past the VRAM capacity on my GPU.
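If I revisit that crash, I at least want to watch VRAM usage while swapping models next time. On an NVIDIA card, my understanding is that something as simple as the following in a terminal will do (a generic sketch, not part of my current workflow):

# refresh the GPU status readout every second to watch VRAM fill up
watch -n 1 nvidia-smi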

Final Question

Forge will require a virtual environment, which I don’t yet know how to set up properly. What tutorial would you recommend? I look forward to hearing your answers in the comments below or on my Socials!
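For what it’s worth, my rough mental model so far is that a virtual environment boils down to something like the sketch below (directory and file names are placeholders, and I’m assuming Forge ships a pip requirements file the way A1111 does):

python3 -m venv venv               # create an isolated Python environment in ./venv
source venv/bin/activate           # activate it for the current shell session
pip install -r requirements.txt    # install project dependencies into it, not system-wide
deactivate                         # drop back to the system Python when done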

Works Cited

[1] Not4Talent, “Create consistent characters with Stable diffusion!!,” youtube.com, Jun. 2, 2023. [Online]. Available: https://youtu.be/aBiGYIwoN_k [Accessed Jun. 7, 2024].

[2] O. Sarikas, “ULTIMATE Upscale for SLOW GPUs – Fast Workflow, High Quality, A1111,” youtube.com, May 6, 2023. [Online]. Available: https://youtu.be/3z4MKUqFEUk [Accessed Jun. 7, 2024].

Self-Hosted AI Consistent Characters: Part 1

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am on a quest to generate consistent characters using AI. Let’s get started!

It all started with wanting to learn how to make my own “consistent characters” I can summon with a keyword in my prompt. Before I can train the AI to make one, my subject needs consistent source pictures. One way to do that is to chop up a character sheet with multiple angles of the same character, all generated at once. Expect all that in a future post. It sounded like a reasonable goal until I discovered just how many moving parts I needed just to approach it.

In particular, my first goal is Ms. T, a Sonic OC (Original Character) by my sister, but once I figure out a successful workflow, it shouldn’t be too hard to make more.

A1111

Automatic1111 (A1111) is the go-to Stable Diffusion (SD) image generation web interface for the vast majority of tutorials out there. While it’s not the easiest SD WebUI, A1111 is approachable by patient AI noobs and EasyDiffusion graduates alike. It exposes a few too many controls by default, but it packs enough power to keep an SD adept busy for a while. I also found Forge, an A1111 fork that reportedly has extra features, bug fixes, and grudging Linux support, but needs a virtual environment. At the top end, I found ComfyUI, which lets you design and share custom workflows.

As a warm-up exercise, I found SonicDiffusion, an SD model geared toward Sonic characters, generated a bunch of Ms. T portraits, and saved my favorites. Talking with my sister, I began cherry-picking for details the model doesn’t control for, such as “cyclops” designs where the eyes join at the whites vs. separate eyes (hedgehogs are usually cyclopses, but not in the live-action movies). SonicDiffusion – to my knowledge – lacks a keyword to force this distinction. Eventually, my expectations outpaced my ability to prompt, and I had to move on.

ControlNet

A major contributor to A1111’s versatility is its ecosystem of extensions. Of interest this week is ControlNet, a tool for including visual data in a Stable Diffusion prompt for more precise results. As of writing, I’m looking at 21 controller types – each needing a model to work. I downloaded the ones for Canny, Depth, and OpenPose to get started.
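For anyone following along, the models are separate downloads from the extension itself; they just get dropped into the extension’s models folder. On my setup, that looks roughly like this (paths and filenames are from memory and may differ between ControlNet versions, so double-check against the extension’s documentation):

# move the downloaded ControlNet models into the extension's models directory
# (some versions also read them from models/ControlNet under the WebUI root)
mv ~/Downloads/control_*canny*.pth ~/Downloads/control_*depth*.pth ~/Downloads/control_*openpose*.pth \
   stable-diffusion-webui/extensions/sd-webui-controlnet/models/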

My first thought was to use an Xbox One Kinect (AKA Kinect v2) I bought from someone in my area a few Thanksgivings ago. If it worked, I could easily pose for ControlNet myself. Long story short: I spent a couple of days either last week or the week before tossing code back and forth with a self-hosted AI chatbot in SillyTavern, with no dice. The open-source Linux driver for the Kinect v2 just isn’t maintained for Ubuntu 22.04 and the distros built on it. I couldn’t even get it to turn on its infrared LEDs (visible to my PinePhone’s camera) because of broken linkages in the header files or something. Pro tip: don’t argue with a delusional LLM unless you can straighten it out in a reply or two. On the plus side, the AI did help me get further on a job I’d expect to have taken weeks to months without it. If/when I return, I expect to bodge it with Podman, but I may need to update the driver anyway if the kernel matters.
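The Podman bodge I have in mind is basically to run an older Ubuntu userspace in a container and hand it the Kinect’s USB bus, something like the sketch below (image tag and flags are my guess at the approach, not a tested recipe):

# hypothetical: start an older Ubuntu userspace with the host's USB devices visible,
# then install build dependencies and compile the open-source Kinect v2 driver inside it
podman run -it --rm --privileged \
    -v /dev/bus/usb:/dev/bus/usb \
    ubuntu:20.04 bash

Of course, if the incompatibility turns out to live in the kernel rather than userspace, no amount of containerizing will fix it, hence my note above about the kernel mattering.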

Even if I had gotten the Kinect to work, I doubt it would have been the miracle I was hoping for. Sonic-style characters (Mobians) have different proportions than humans – most notably everything from the shoulders up. I ended up finding an embedding for making turnaround/character sheets, but it was again trained on humans, and I got inconsistent results compared to before. I did find a turnaround template for chibi characters that gave me OK-ish results when run through Canny, but Ms. T kept generating facing the wrong way.

In another session, I decided to try making Ms. T up in Sonic Forces. I installed it (ProtonDB: Platinum) and loaded my 100% save. I put a screenshot of Ms. T on a white background in GIMP and gave it to ControlNet. Unsurprisingly, OpenPose is not a Sonic fan. It’s trained on human data (now with animals!), but a cartoon kept returning blank outputs until I used a preprocessor called dw_openpose_full, which – while it still doesn’t like cartoon animal people – did cooperate on Ms. T’s right hand. Most every other node, I dragged into place manually. I then demonstrated that I could pose her left leg.

Character Sheet

From there, I opened OBS to record an .MP4 file. I used FFmpeg to convert it to a .gif and loaded that in GIMP, and… my computer slowed to a crawl, but it did comply without a major crash. I tried to crop and delete the removed pixels… another slowdown, and GIMP crashed. I adjusted OBS to record just my region of interest. 500+ frames was still a no-go, even when each layer only holds the changes from the last. I found options to record as a .gif and to record as slowly as I wanted. I then separated out the frames with FFmpeg, making sure to put them in a new directory:

ffmpeg -i fileName.gif -vf fps=1 frame_%04d.jpg
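For reference, the earlier MP4-to-GIF conversion was along these lines (filenames are placeholders rather than my exact command):

# convert the OBS recording straight to a GIF, keeping the full frame rate (which is what buried GIMP)
ffmpeg -i recording.mp4 recording.gif

In hindsight, the fps filter from the frame-extraction command above could probably have pulled JPEG frames straight out of the .mp4 and skipped the GIF middleman entirely.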

I chose ten frames and arranged them in a 5×2 grid in GIMP. I then manually aligned an OpenPose skeleton for each and sent that off to ControlNet. Immediately, my results improved. I got another big boost by using my grid of .gif frames as well, but in both cases Ms. T kept her eyes and feet facing toward the viewer – even when her skeleton was pointed the other way. My next thought was to clean up the background on the grid, but compression artifacts got in the way.

Start over. I made a new character with joint visibility and background removal in mind. She looked ridiculous running through a level, but I got the knee placement I wanted by moving diagonally toward the camera and jumping. I then put eight new screenshots in a grid. Select-by-color had the background cleared in a minute. I then used Canny for silhouettes, intending to reinforce OpenPose. I still got characters generating the wrong way.

Takeaway

This week has had a lot of interplay between study and play. While it’s fun to run the AI and cherry-pick what comes out, the prospect of better consistency keeps me coming back to improve my “jungle gym” as I prepare to generate LoRA training images.

Final Question

The challenge that broke this topic into multiple parts is getting characters to face away from the viewer. Have you ever gotten this effect while making character sheets?

I look forward to hearing from you in the comments below or on my Socials!