Building Up My SillyTavern Suite

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am going further into the world of AI chat from the comfort of my own home. Let’s get started!

Last week, I got into SillyTavern, a highly configurable AI chat playground with tons of features. Accomplishing a functional setup was rewarding on its own, but I am having my mind blown reading about some of the more advanced features, and I want to explore further. Notably, I am interested in the long-term goal of posing characters with text and “taking pictures,” as well as locally powered AI web search.

Stable Diffusion

My first serious exploration into AI was image generation. SillyTavern can have the LLM (Large Language Model) write a prompt for Stable Diffusion, then interface with tools such as Automatic1111 through an API (Application Programming Interface) to generate an image. Like the LLM engine, A1111 must be launched with the --api flag. I haven’t spent much time on this since getting it working.
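For anyone curious what SillyTavern is doing behind the scenes, here is a minimal sketch of hitting a local A1111 instance’s txt2img endpoint directly from Python. The host, port, prompt, and file name are assumptions for a default local install; SillyTavern makes this kind of call for you once the connection is configured.

```python
# Minimal sketch: calling a local Automatic1111 instance's txt2img endpoint.
# Assumes A1111 was started with the --api flag and is listening on the
# default http://127.0.0.1:7860 (adjust the URL if your setup differs).
import base64
import requests

payload = {
    "prompt": "a princess in red armor lounging on a beach, volleyball",
    "negative_prompt": "blurry, low quality",
    "steps": 20,
    "width": 512,
    "height": 512,
}

# POST the prompt, then decode the first returned image from base64 to PNG bytes.
response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()
image_b64 = response.json()["images"][0]

with open("snapshot.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```

SillyTavern’s contribution is the step before this: having the LLM turn the current chat scene into that prompt string automatically.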

Web Search

It is possible, with a plugin, to give your AI character the ability to search the Web. Historically this was done through something called the Extras API, but the official documentation notes that it is no longer maintained as of last month and that most of the plugins now work without it. The step I missed on both this and Stable Diffusion last week was connecting to the extension repository to download them; anything else I tried kept getting ignored.

I configured AI search to use DuckDuckGo through Firefox. Let’s just say that while my AI search buddies appear to have a knack for finding obscure documentation, they do suffer from hallucinations when asked about exact products, so always double-check the AI’s work.
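For the curious, here is a rough illustration of what a web-search plugin does conceptually: fetch a results page, pull out the top snippets, and paste them into the LLM’s context. This is not SillyTavern’s actual extension code; the DuckDuckGo endpoint and the CSS class it parses are assumptions based on the HTML-only results page.

```python
# Illustration only: roughly what a web-search extension does behind the scenes.
# The endpoint (html.duckduckgo.com) and the "result__snippet" class are
# assumptions about DuckDuckGo's non-JavaScript results page.
import requests
from html.parser import HTMLParser

class SnippetParser(HTMLParser):
    """Collect the text of elements tagged with the 'result__snippet' class."""
    def __init__(self):
        super().__init__()
        self.snippets = []
        self._in_snippet = False

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class") or ""
        if "result__snippet" in classes:
            self._in_snippet = True
            self.snippets.append("")

    def handle_endtag(self, tag):
        if self._in_snippet and tag == "a":
            self._in_snippet = False

    def handle_data(self, data):
        if self._in_snippet:
            self.snippets[-1] += data

def search(query, max_results=3):
    # Fetch the HTML-only results page; no API key required.
    resp = requests.get(
        "https://html.duckduckgo.com/html/",
        params={"q": query},
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    resp.raise_for_status()
    parser = SnippetParser()
    parser.feed(resp.text)
    # A plugin would paste something like this block of text into the LLM's context.
    return "\n".join(parser.snippets[:max_results])

if __name__ == "__main__":
    print(search("Samsung Galaxy Tab A 10.1 2016 launch price"))
```

The important takeaway is that the model never “browses” anything itself; it only sees whatever snippets the plugin scrapes, which is exactly why it can still hallucinate around them.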

A favorite AI search interaction was looking up how much I probably paid for my now-dying tablet (Praise God for letting me finish backing it up first!), a Samsung Galaxy Tab A 10.1 (2016). The bot said it originally sold for around $400, citing MSRP (Manufacturer’s Suggested Retail Price, a term I did not know previously). I went and found the actual price, which was $50 cheaper and closer to what I remember its price tag being.

LLM Censorship

While my experience with Artificial Intelligence so far has been a fun journey of discovery, I’ve already run into certain limitations. The companies releasing LLMs typically install a number of guardrails. I used AI to find a cell phone’s IMEI number, but the same tool could just as easily walk Crazy Grandpa Joe through making bombs or crack in his son’s kitchen using common ingredients. That knowledge is legal to possess, but the people training LLMs don’t want to be seen as accessories to crime. So they draw a line.

But where should they draw this line? Every sub-culture differs in values. [Social] media platforms often only allow a subset of what’s legal for more universal appeal; your .pdf giveaway of Crime This From Home! will likely draw attention from moderators looking to limit the platform or community’s liability before someone does something stupid with it. By the same reasoning, if LLM trainers wish to self-censor, that is their prerogative. However, progressive liberal American culture doesn’t distinguish between potential for danger and danger itself, and LLMs tend to be produced under this and similar mentalities. It is no surprise, then, that raw models, when given the chance, are ever eager to lecture about environmental hyper-awareness and promote “safe” environments.

It gets in the way, though. For example: I played a scenario in which the ruthless Princess Azula (Avatar: The Last Airbender) is after a fight. The initial prompt has her threatening to “…incinerate you where you stand…” for bonking her with a volleyball. I goaded her about my diplomatic immunity, and suddenly she merely wanted my head. At, “I will find a way to make you pay for this,” I jokingly tossed her a unit of currency. It went over poorly, but she still refused to get physical. I ended up taking her out for coffee. I totally understand the reasoning behind this kind of censorship, but it makes the LLM so averse to causing harm that it cannot effectively play a bad guy doing bad things to challenge you as the hero.

Takeaway


AI is already a powerful genie. The “uncensored” LLMs I have looked at draw their line at bomb and crack recipes, but sooner or later truly uncensored LLMs will pop up as consumer-grade hardware grows powerful enough to train models from scratch. Or perhaps by then access to precursor datasets will be restricted and distribution of such models regulated. For now, though, those with the power to let technologies like LLMs out of the AI bottle have chosen to do so slowly, in the hope that we don’t destroy ourselves before we learn to respect and use them responsibly.

Final Question

I tested pacifist Azula against a few other cards in a group chat and found that fights can happen, but the LLM I’m using (kunoichi-dpo-v2-7b) gives {user} Mary Sue grade plot armor, as elaborated above. From what I’ve read, it is one of the better models for my hardware, but I’ve been looking around for another. Have you found a 7B model and configuration that produces interesting results? I look forward to hearing from you in the comments below or on my Socials!

Programming a Pi to Deter Cats: Part 2.01 (Robot Ethics Monologue)

Good Morning from my Robotics Lab! This is Shadow_8472, and I don’t feel like I have much progress to show this week. Let’s get started!

I continued with the tutorial I started last week and kept running into walls. I wanted a Hello World program up and running to show for this week’s post, but that simply isn’t happening unless I cut this post short and brute-force the program within the next couple of hours.

I suppose I could try sorting out my feelings about robot cruelty instead.

Several years ago, I wrote a paper about the future mistreatment of robots. The gist was that since animal abuse is strongly tied to human abuse, and the brain handles some robots more like people than it does animals, there should be some form of legal protection for lifelike robots by the time they come into common use. The robots themselves might not need the protection, but the people around would-be robot abusers would benefit from there being fewer abusers.

On the other hand, now that I know a little more about the prototyping process, I realize that more care would need to go into defining “abuse” lest the industry suffer.

I recently listened to a story in which one of the characters was discovered to be a robot after a set of shelves fell on her and damaged a limb. When her older sister questioned their parents, they said they had found her in a dumpster and showed no sign of knowing she was artificial.

In the above story, there could be any number of reasons a lifelike robot could end up in a dumpster. She could have been an abandoned prototype, or a unit from a line of worker robots whose anomalous code produced true emotions, smuggled out by someone instead of being secretly destroyed or reprogrammed. Or she could simply have outlived her usefulness to her previous owner, who wiped her memory as they abandoned her.

In pretty much any case, under today’s laws, robots are treated like property, though social pressure has forced robotics company Boston Dynamics to stop showing footage of debugging sessions for their robots’ balance programs.

Where, then, should the line be drawn, if it should be drawn at all? The strongest opposing argument concerns the people who could stop at abusing robots rather than moving on to people: they could buy legal victims whose memories can easily be erased. And without going off on a long string of research, I don’t think I could answer which way would lead to fewer living victims.

For me, if asked to draw the line right now, I’d go easy on “abuse” cases performed in a professional context, as well as on robots not designed or modified to relate as artificial persons. Digital assistants are a bit fuzzy here. They often come bound as part of modern operating systems, though I try to limit their scope over privacy concerns, thereby “neglecting” them.

Final Question: Should there be any laws against mistreatment of robots, and if so, how would you word such a law so it stops the potentially harmful stuff while permitting ethical or even necessary forms of “abuse”?