A Game for Geeks (Silly Tavern)

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am getting into the world of self-hosted AI chat. Let’s get started!

Welcome to the Jungles

The Linux ecosystem is a jungle compared to Windows or Mac. Granted, it’s had decades to mature atop GNU roots that go back before the first Linux kernel. Emergent names such as Debian, Ubuntu, Arch, and Red Hat stand tall and visible above a canopy of other distros based on them, with smaller names searchable on rosters like DistroWatch forming the understory, and a jungle floor of personal projects below. Rinse and repeat for every kind of software from window managers to office tools. Every category has its tour attractions, and an army of guides is often more than happy to advise newcomers on how to assemble a working system. The Linux ecosystem is a jungle I have learned to navigate, but I would be remiss if I said it were not curated!

This isn’t my first week on AI. Nevertheless, by comparison the AI landscape feels like the playground/park my parents used to take me to, scaled up so that I am only a couple inches tall. ChatGPT, Gemini, Stable Diffusion, and other big names are the first ones anyone learns when researching AI – establishing them as the de facto benchmark standards everything else is judged against in their respective sub-fields. Growing in among these giants is a comparatively short range of searchable shrubs, but if you wish to fully self-host, two-inch-tall you practically has to venture into a grass field of projects too short-lived to stand out before being passed up. The AI ecosystem is a jungle where future canopy and emergent layers are indistinguishable from the shrubs and moss on the forest floor. The art of tour guiding is guesswork at best because the ecosystem isn’t mature enough to be properly pruned. I could be wrong, of course, but this is my first impression of the larger landscape.

AI Driven Character Chat

My goal this week was to work towards an AI chat bot and see where things went from there. I expect most everyone reading this has either used or heard of ChatGPT and/or similar tools. The user says something, and the computer responds based on the conversational context using a Large Language Model (LLM – a neural network trained on large amounts of data). While I have a medium-term goal of using AI to solve my NFS+rootless Podman issues, I found a much more fun twist: AI character chat.

LLMs can be “programmed” by the user to respond in certain ways, strikingly similar to how Star Trek’s holodeck programs and characters are depicted as working. One system I came across to facilitate this style of interaction is called Silly Tavern. Silly Tavern alone doesn’t do much – if a complete AI chatbot setup were a car, I’d compare Silly Tavern to the car’s interior. To extend the analogy, the LLM is the engine powering things, but it needs an LLM engine to interface between the two, like a car’s frame.

Following the relevant Silly Tavern documentation for self-hosted environments, I located and deployed Oobabooga as an LLM engine along with an LLM called Kunoichi-DPO-v2. Without going into the theory this week, I went with a larger, smarter version than is recommended for a Hello World setup because I had the VRAM available to run it. Each of these three parts has alternatives, of course. But for now, I’m sticking with Silly Tavern.

I doubt I will forget the first at-length conversation I had with my setup. It was directly on top of Oobabooga running the LLM, and we eventually got to talking about a baseball team themed after the “Who’s on First?” skit, but with positions taken up by fictional time travelers from different franchises. I had it populate the stadium with popcorn and chili dog vendors, butlers, and other characters – all through natural language. It wasn’t perfect, but it was certainly worth a laugh when, say, I had the pitcher, Starlight Glimmer (My Little Pony), trot over to Sonic’s chili dog stand for some food and conversation (I’m just going to pretend he had a vegetarian option, even though neither the bot nor I thought of it at the time).

Just as importantly, I asked it a few all-but-mandatory questions about itself, which I plan on covering next week along with the theory. The day after the baseball team conversation, I went to re-create the error I’d previously gotten out of Silly Tavern, and instead I got a response. Normally, I’d call it magic, but in this conversation with the AI, I casually asked something like,

You know how when something on a computer doesn’t work, it gets left alone for a while, and then it works without anything changing?

I was just making conversation as I might with a human, but it got back with a very reasonable sounding essay to the tune of:

Sometimes memory caches or temporary files are refreshed or cleaned up, letting things work when before they didn’t. [Rough summary without seeing it for days.]

Moving on, I had a stretch goal for the week of working towards one of Silly Tavern’s features: character group chat. For that purpose, I found a popular character designed to build characters. We tried to build a card for Sonic the Hedgehog. The process was mildly frustrating at times, but we eventually ended up talking about how to optimize the card for a smaller VRAM footprint, which changed wildly when I brought up my intention to group chat.

Takeaway

I learned more about this topic than I often do in a given week, so I am splitting the theory out to save for next week. Hopefully, I will have group chat working by then, as well as another feature I thought looked interesting.

Final Question

Love it or hate it, what are your thoughts about the growing role AI is taking on in society? I look forward to hearing from you in the comments below or on my Socials!

Learning OPNsense: DNS Adblocking

Good Morning from my Robotics Lab! This is Shadow_8472 with another short update on my progress with my OPNsense firewall. Let’s get started.

I have done a substantial amount of work with DNS on my home network, but as noted in a previous post, it’s sub-optimal to exclusively manipulate your Domain Name System access from a power-hungry desktop when you sometimes have to ration electricity in your Uninterruptible Power Supply (UPS). I like PiHole’s web interface with all its fancy, moving graphs and charts, but our new firewall, Cerberus, can replicate the functionality I need.

I primarily use PiHole for DNS ad blocking, but I also explicitly blacklist a few URLs while hosting local DNS records for servers on my Local Area Network (LAN), though the latter is a work in progress.

OPNsense→Services→Unbound DNS→Blocklist→Type of DNSBL offers a drop-down checkbox menu of block lists. This is in contrast to PiHole→Adlists, which lets you import lists from arbitrary sources (edit the day after posting: OPNsense Unbound does have a URLs of Blocklists field). Either way, it should go without saying that sites and ads only need to be blocked once; feeding your DNS service a bunch of redundant lists will only slow it down. From what I remember of installing OISD Big on PiHole, it aggregates several of these lists and removes the duplicates. PiHole also picked up a list named StevenBlack with the comment, “Migrated from /etc/pihole/adlists.list.” It sounds like a system default, but in any case, I found it had entries not on OISD Big. OPNsense Unbound has an option for it, so it got migrated.

Migrating singled-out blacklist items was as simple as adding each entry to a comma-separated list (where PiHole wants separate entries). I’m going to wait on migrating my LAN domain names, though. I believe I found the place to do it, but ButtonMash isn’t running Caddy to recognize subdomain requests right now.
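That comma-joining is trivial to script if you have more than a handful of entries. Here is a hypothetical helper – the function name and the assumption that the export is one domain per line are mine, not from either project:

```python
def pihole_to_opnsense(blacklist_text: str) -> str:
    """Convert a one-domain-per-line blacklist (PiHole style) into the
    single comma-separated string OPNsense's Unbound field expects."""
    domains = []
    for line in blacklist_text.splitlines():
        entry = line.strip()
        if entry and not entry.startswith("#"):  # skip blanks and comments
            domains.append(entry)
    return ",".join(domains)

print(pihole_to_opnsense("ads.example.com\n# note\ntracker.example.net"))
# → ads.example.com,tracker.example.net
```

Paste the result straight into the OPNsense field and you are done.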

One last step was to get into the red gaming router we’ve been using and point its DNS at Cerberus the firewall. I then pointed its secondary DNS at ButtonMash.

To summarize, we should have the exact same protection as before on a smaller battery footprint and within the firewall’s default attack surface to boot!

Encrypted DNS

One of my eventual goals is to have my own recursive DNS server, which seeks out a URL’s authoritative DNS record if it doesn’t have it cached. This will increase privacy, but I haven’t figured it out at a production grade yet. Instead, I looked up the best free, privacy-respecting DNS, and as far as I can tell, that’s Cloudflare at 1.1.1.1.

From OPNsense, it wasn’t much more trouble to encrypt using DNS over TLS. I would prefer DNS over HTTPS, which does the same thing but camouflages DNS requests as normal web traffic. For now, I’m assuming Unbound can’t do that and that what I have is working properly. Please tell me if I’m wrong.
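For reference, here is roughly what the underlying Unbound configuration looks like when forwarding over TLS. This is a hand-written sketch going off Unbound’s documented options, not OPNsense’s generated config, and the certificate bundle path varies by system:

```
server:
  # CA bundle used to validate the upstream's TLS certificate
  tls-cert-bundle: /etc/ssl/cert.pem

forward-zone:
  name: "."
  forward-tls-upstream: yes
  # Cloudflare over TLS on port 853; the name after # is what gets verified
  forward-addr: 1.1.1.1@853#cloudflare-dns.com
  forward-addr: 1.0.0.1@853#cloudflare-dns.com
```

In OPNsense itself, this all hides behind the Services→Unbound DNS screens rather than a raw unbound.conf.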

Takeaway

It’s slow going, but I am moving into Cerberus. While looking around, I found a module for NUT (Network UPS Tools), a utility for gracefully shutting down computers as their UPS runs down. I wanted to get it working, and for a moment after a reboot I did, but now I’m at a loss for reasons beyond me, aside from the BSD driver not agreeing well with CyberPower UPS systems. At this point, I am thinking of installing a small Linux box to do the job at a future date, even though that will be yet another thing on the UPS.

Final Question

From above: Do you know of a way for OPNsense’s Unbound module to run DNS over HTTPS? I look forward to hearing from you in the comments below or on my Socials!

Hardware Firewall Up!

Good Morning from my Robotics Lab! This is Shadow_8472 with a side project of the week. Let’s get started!

I left off last week having spent four separate nights trying to get the hardware firewall online in a production context. When I tested it between my upstairs workstation and its OpenWRT+Raspberry Pi router/Wi-Fi adapter, it worked fine. Put it back in production between our ISP’s gateway and our existing gaming router, and no one got Internet.

The solution: pull the gateway’s plug for 30 seconds and let it reboot. Internet solved.

Longer explanation: my ISP box is in some sort of bridge mode, where it’s supposed to pass the external IP address to a single device (usually a router, but it can be a normal computer). In this mode, it didn’t like that device getting swapped out – possibly as a security measure. It still reserves the address 10.0.0.1 for itself throughout the network, a behavior I took to be half-bridge mode, but my surprise this week while fiddling with settings was that it did in fact pass on the external address.

Takeaway

I expected the struggle to continue a lot longer, but I actually figured it out pretty quickly once I started researching the symptoms online. I explored the settings a bit more. I’d like to move the functions of PiHole over, but the web interface has a drop-down menu for block lists instead of a text box. I’ll look into it another time. Instead, I spent a good chunk of the week weeding grass and getting a sunburn.

Final Question

Have you ever found you were rebooting the wrong thing? I look forward to hearing from you in the comments below or on my Socials!

Unboxing: Hardware Firewall (Protectli Vault)

Good Morning from my Robotics Lab! This is Shadow_8472 and today I have on my desk between my keyboard and monitors a new Protectli Vault running OPNsense. Let’s get started.

After at least a couple years of tentatively researching hardware firewalls, it’s here. Let me tell you: it’s both a relief and a bit of pressure. I’m glad I’m no longer starting from scratch over and over again, but now I feel time pressure to deploy it despite my parents’ assurance that it’s much better to go at a responsible pace. And unless you’re a full-time network specialist, that pace is longer than a week.

My Current Network and Its Weaknesses

At present, my home network starts with a box owned and controlled by my service provider. This gateway feeds into a gaming router before going out to a couple of switches and Wi-Fi. One of my desktops connects through a Raspberry Pi 4 running OpenWRT. ButtonMash, my home server, runs Podman containers for Vaultwarden (password vault storage) and PiHole (DNS ad blocking). We have a Network Attached Storage unit by the hostname of GoldenOakLibry. Everything minus a couple of workstations has battery backup in case the lights go out.

And when the lights do go out, the first big flaw shows itself. While the network closet may last several hours, power-hungry ButtonMash and GoldenOakLibry chew through their shared battery in around half an hour – and that was before I added ButtonMash’s twin, Joystick, as a development platform. When ButtonMash goes down, the network loses DNS, so we can’t resolve URLs.

Additionally, I’d like to move to a non-default set of internal IP addresses, like 10.59.102.X instead of 10.0.0.X or 192.168.0.X. While computers getting automatic IPs over DHCP will essentially take care of themselves, I have invested quite a bit of time into static IPs for NFS (Network File System). When I move GoldenOakLibry’s IP, I’ll need to adjust the automounts for all systems accessing it, and that’s just a pain. I want to learn how a home domain works.
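To illustrate the pain point, each client carries a mount entry along these lines. The addresses, export, and mount point here are made up, but every machine would need its line touched when the NAS renumbers:

```
# /etc/fstab sketch with hypothetical addresses/paths.
# Each NFS client pins the server by IP, so changing
# GoldenOakLibry's address means editing this on every machine.
10.0.0.20:/export/library  /mnt/library  nfs  defaults,_netdev  0 0
```

Hostname-based mounts backed by local DNS would dodge this, which is part of why the home domain idea appeals to me.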

I also have a number of network-related projects I’ve done research for, but burned out on before solving. From memory, here’s a checklist of partial/incomplete/need-to-redo projects:

  • Feline Observation Pi (First prototype tested, needs overhaul)
  • Website for family photo archive (Needs hardware firewall, rootless Podman/NFS, booru/wiki)
  • Nextcloud (Early prototype successful, needs rootless Podman/NFS before production)
  • Beowulf cluster (Early research)
  • Rootless Podman/NFS (Heard from a developer and solution may not exist [yet])
  • UPS battery monitoring/shutdown before power failure (Research phase)
  • Caddy (First prototype in production, needs overhaul)
  • Unbound (Incomplete prototype)
  • Reverse VPN [mobile traffic] (Need Hardware Firewall)
  • Podman systemctl --user (In production, but I cannot reproduce at will)
  • Domain/Domain Controller (Background research incomplete)

Keep in mind that the notes on each item are just the direction I’m leaning at the moment, without reflecting the new hardware. Replacing GoldenOakLibry with a server beefy enough to run Podman would solve my current need for rootless Podman/NFS. I may find a replacement for Caddy that also works as a Domain Controller. Does Caddy even do that? Let me check… Inconclusive; probably not. I don’t know enough about what to look for in a Domain Controller besides the name. Most of my time went into researching Demilitarized Zones.

Demilitarized Zone and Roadmapping

Originally, I had a goal of deploying this new firewall/router configured with a demilitarized zone network structure. With hardware in hand, I learned a lot! But as I learned, I realized I needed to learn that much more to do the job right. A DMZ is basically a low-security area of your network for serving stuff over an untrusted network (usually the wide-open Internet) while protecting your Local Area Network. Ideally, your LAN would have a separate physical router in case the one servicing the DMZ is ever compromised, but a homelab environment should be a small enough target that branching both off a single hardened router should be fine. My trouble is that I can’t fully tell where to put what.

I already know I want to move PiHole, Unbound, and similar projects related to internet traffic onto the new router, along with anything else I want to last a bit longer into power outages. OPNsense is a distribution of BSD, not Linux, so I expect I will need to look into a Linux virtual machine if BSD-based containers aren’t available. The gaming router I’m using now will still be our Wi-Fi access point, but I’d prefer to retire it from DHCP duty.

ButtonMash and Joystick are my enigma. I had plans of clustering them, but I may need one in the DMZ and one on the LAN. GoldenOakLibry belongs on the LAN so far as I can tell – as do all workstations.

Takeaway

There will be more thought to give it over another week. I went ahead and hooked it up in place, but it didn’t work, despite having previously worked between my upstairs workstation and its rPi router. I’ve reverted the setup to how it was before, and I’ll need to take a closer look and do some further testing.

Final Question

What was the last piece of tech you unboxed?

Roadmapping Switching to Linux Phones

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am dusting off my studies of Android in response to happenings around the family. Let’s get started.

Phones and Privacy

This round started with my father’s phone screen burning out. He got it repaired, but my sister’s phone screen was cracked, and I ended up giving her my otherwise unused one of the same model.

Consumer privacy has become an increasingly important question as computers integrate more tightly with our everyday lives. We now live in a time and place where most everyone old enough to read is coerced by convenience into loading themselves down with every tracker known to Man unless restrained by applicable law. And this issue will only continue to grow as technologies such as consumer VR and clinically installable neural interfaces mature and facilitate the collection of ever more intimate personal data. Legislation isn’t keeping up, but a minority of people unsatisfied with this status quo are developing open source alternatives that let end users either mitigate abuse of technology by obfuscating their data or prevent its collection entirely.

My goal is to one day have the family on Linux phones. OK, what are the main obstacles? From previous research, cell network compatibility has been a big one; support from our current carrier is nil the moment I read off my prototype PinePhone’s IMEI number (an individual cell modem identifier). Additional research turned up that banking apps tend to refuse to work on Android devices without Google’s seal of approval.

Phone Carriers

Most major cell companies would prefer you buy a phone on credit that’s been locked to their network for a couple of years. Per contract, you don’t get admin privileges until it’s paid off and unlocked – assuming you still have a working unit at the end. Even if a carrier has a bring-your-own-phone program, it may be limited to a short list of models, regardless of whether your unapproved unit is compatible with the wireless technology[ies] their network is built on (which I believe was the case last time we switched carriers). Even then, a phone may only be partially compatible across calls, texting, data, and other wireless functionalities.

Complicating the matter for me specifically, I cannot even comfortably look at pictures where part of the screen has been sacrificed for a camera “island” per the modern trend. I need a phone with such goodies in the bezel where they belong. After doing some research, I narrowed my choices down to the Librem 5 and the PinePhone Pro. General research on each this week turned up year-old criticism about refunds for the American-made Librem 5, to weigh against the PinePhone being assembled in China like most other personal devices. I found a carrier compatibility chart for each and made a better-informed recommendation to my parents for when we switch carriers.

Android On Linux

One high-priority feature my parents are after in their phones is mobile deposit. That’s not happening on a week’s notice, even if I had a phone ready to try it on. From a software point of view, though, a Linux desktop is essentially identical to a Linux phone minus a cellular modem, SIM card, and miscellaneous other peripherals.

Many tools exist specifically to run Android apps on the desktop. This week, I explored running the BlissOS custom ROM on QEMU/KVM. QEMU and KVM took a while to straighten out, so I might be mistaken here, but QEMU is an emulator/hypervisor, and KVM is a Linux kernel module for virtualization that QEMU can optionally use for direct access to hardware and improved performance. Along the way, I was pointed toward using an AMD graphics card instead of Nvidia. That meant using Derpy Chips, a computer from before Intel came up with the special circuitry needed for this kind of virtualization…

…Well, it looks like I’ve been carrying around some bad info, because this virtualization circuitry has been around for a lot longer than I thought! I navigated Derpy’s BIOS (yeah, I know I’ve made a stink about them being UEFI and not BIOS, but they’re labeled BIOS despite being UEFI) to turn on virtualization, and I got a proof of concept going. I tried using its command line installer, but couldn’t figure out how to work anything related to the hard drive. I could consistently get to – I assume – a live session run directly off the disk image. I successfully accessed the Internet from within the VM, but installing apps or even trying to make an entry on the file system remains an unsolved issue. Nevertheless: this is major progress.
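For anyone wanting to replicate the proof of concept, the invocation looks something like the following. This is a sketch with placeholder file names and sizes, not my exact command line:

```
# Create a virtual disk for the installation (name/size are placeholders)
qemu-img create -f qcow2 blissos.qcow2 32G

# Boot the BlissOS ISO with KVM acceleration
qemu-system-x86_64 \
    -enable-kvm \
    -m 4096 \
    -smp 4 \
    -cdrom BlissOS.iso \
    -drive file=blissos.qcow2,if=virtio \
    -vga virtio
```

Without -enable-kvm (or with virtualization disabled in the BIOS), QEMU falls back to pure emulation, which is painfully slow for something as heavy as Android.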

Takeaway

Android at its core was made open source by Google to catch up to the iPhone. Now that it’s ahead, Google has spent seasons moving as much of the definitive experience as it can out of the public eye, but that hasn’t kept people from making custom ROMs. While my next major goal in this project is to install an app, another blockade on my developing road map is SafetyNet. When I get there, I’ll want to look up Magisk and Shamiko, two names I came across pertaining to the Android custom ROM community.

I’ll also note that I still have additional options to try, like something based on containerization. While writing this up, I took a second look at WayDroid, which I had dismissed on the assumption it wouldn’t work in an X11 environment, and it just might.

Final Question

Do you have any experience with Linux phones? I would be most interested in hearing from you in the comments below or on my Socials!

A Let’s Player I am Not

Good Morning and Happy Easter/April Fool’s from my Robotics Lab! This is Shadow_8472 and today I am salvaging an attempted Let’s Play I tried making for Minetest Exile. Let’s get started!

Minetest is an open source block game engine, and Exile is among the top games on that engine. In this game, created by Dokimi and continued by Mantar, you play as an exile banished from a post-postapocalyptic Iron Age civilization to the ruined land of the ancients for crimes you likely didn’t commit. When you lose a character to the wasteland, you respawn as a fresh exile with a distinct backstory.

Suffice it to say, Exile has a difficult learning curve. A .pdf tutorial by the original creator, Dokimi, exists, but I found it very outdated two (I think) years ago. I tried making my own tutorial in the same style, but I found it too exhausting to switch between gameplay and organizing my thoughts, so I put the project down and all but forgot about it without publishing anything.

This week, I was inspired to try recording my gameplay with live commentary. I installed OBS from the repository and set it up for recording. It took me a while to get a good mix between my voice and game sounds, but I eventually got something halfway decent.

Gameplay-wise, I got a decently lucky start by finding a good food to farm. Thanks to new clothing items added since I last played, I wasn’t too bad off, despite sub-optimal fiber sources previously being a practical requirement for keeping a critter alive while starting from scratch. I found a spot for a year-one farm, but had to make a seepage pit for drinking water, which eventually gave me a stomach bug that crashed my game. I gave up on the Let’s Play and reported the bug. As of writing this on Friday afternoon, the bug has already been addressed. My save was still corrupted, though, so I doubt I’ll be committing to a Let’s Play anytime soon.

Takeaway

Taking cumulative lessons from this project’s two attempts so far, I believe the wisest course of action is to record footage playing a season at a time while taking verbal notes, then grab screenshots from the video and write commentary on the highlights.

Final Question

Have you ever tried making a Let’s Play? How did it go?

How I Would Relearn Linux #7: Wandering

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am back with another Linux-learning tip. Let’s get started.

This edition of How I Would Relearn Linux will be a bit heavier on the story side, but I’ll try to make it as interesting as possible. If I totally lose you, feel free to skip to the Takeaway before making your way back to where you left off.

DerpyChips, one of my two daily drivers, has had a longstanding problem running Discord, where the proper installation crashes after updating (if applicable). I have it installed as a Snap instead, which got it working, but costs me my KDE-themed mouse pointer, which I’m quite fond of. I thought to fix it as a side project. For the record, none of these hammers work: restarting the program, rebooting, reinstalling.

When in doubt, launch a failing program from the command line. I caught an error and looked it up, which landed me on Reddit, where one user had fixed the exact same problem by finding a better driver. Derpy has an AMD card, so I looked through AMD’s site for the proper proprietary driver. It took an hour or so, but I found a list of 32- and 64-bit drivers for my exact card – they were just seven years old! The card is about twice that.

Now, I faced a dilemma. AMD’s driver might be the most compatible, but will it work with KDE 6, which I expect Derpy may be running within a year, barring any major hardware upgrades? I have doubts. Besides, I didn’t feel like cleaning up after a driver bricking my installation anymore. I’ll probably seek out an open source driver at a future date once I’ve gathered a bit more background knowledge.

Along the way, I noticed a couple of abbreviations tied to the graphics system: EGL and GLX. A search brought me to a Hackaday post about Firefox 94 and above switching from GLX to EGL as window management transitions away from the legacy X11 system in favor of Wayland for lower overhead, better security, and closer access to the hardware [1]. The top comment lamented this transition, finishing with, “Sadly the kids don’t want a fun OS that does cool things like let you run your program on one machine but display it on a second. I guess they just want an OSX that they didn’t have to pay for.” My surprise died down when I read the screen name: XForever. The comments in reply were a frenzy of reports claiming X11 remote display either “never did [work]” [X], “worked perfectly fine 20 years ago and still does” [Traumflug], “only ‘worked’ because network security was a joke 20 years ago” [Pat], and that a slow network could make or break the experience [Guest]. One user [raster] went back to refute XForever’s complaint with a link to the Git repository for Waypipe.

While the conversation brought up other methods of graphical computing from another device, like VNC (Virtual Network Computing) and RDP (Remote Desktop Protocol), SSH caught my attention [Redhatter (VK4MSL)].

ssh -CY user@remotehost x11client

The unfamiliar flags turn out to mean compressing the data stream (-C) and enabling “trusted” X11 forwarding (-Y). I had to try it for myself.

Logging in to my father’s computer, I brought up FreeTube over SSH no problem. I then tried MultiMC’s start script; something errored out, and the window came up on his screen. I thought that was it without making a custom script or at least an export command, but then I tried running bash as my x11client. It came up without prompts or access to command history, but when I called the script from there, I got a window. The game itself was a bit slow to play given how choppy the video was, but it was good enough to navigate to an AFK point – until I turned up the graphics with extra shaders, at which point it dropped to 1 fps.

Takeaway

Allow yourself to wander. Poke at annoying, non-critical problems every so often. Even if you don’t get to where you intended, you can still end up learning something objectively cool. I can see this tool will be very useful.

Final Question

What cool stuff have you found by allowing yourself to wander?

Work cited

[1] M. Carlson, “Firefox Brings the Fire: Shifting from GLX to EGL,” hackaday.com, Nov. 23, 2021. Available: https://hackaday.com/2021/11/23/firefox-brings-the-fire-shifting-from-glx-to-egl/ [Accessed Mar. 25, 2024].

How I Would Relearn Linux #6: Bleeding Edge vs. Stable vs. Enterprise

Good Morning from my Robotics Lab! This is Shadow_8472 and today I have another installment in how I would relearn Linux. Let’s get started!

The Linux ecosystem has a lot of moving parts within easy view of the end user. Various distributions curate these often redundant parts in line with their respective goals. For example: how new do you like/want/need your software?

If “Cutting Edge” is the latest and greatest, then developers and enthusiasts willing to volunteer time and effort reporting bugs and providing feedback on new features belong on the “Bleeding Edge” – typically using some sort of rolling release or a compile-it-yourself package manager.

On the other hand, if the system’s goal is to not need sudo every few days, a stable system like Ubuntu or one of its many downstreams may be a better choice. But know that distros aimed at an Enterprise audience may leave you spending excessive amounts of time dumpster diving through backports for missing dependencies, trying to compile source code when a two-year-old feature comes up missing (ask me how I know).

My pick is to maintain a bit of everything. I have two computers I use as daily drivers: one runs EndeavourOS and the other PopOS. EndeavourOS is Arch-based and on a rolling release, while PopOS is Ubuntu-based with releases every few months. This way, major changes that break software I use across both devices don’t hit me all at once. Just this past week, I rebooted into KDE 6 on EndeavourOS, and Discord immediately started having problems displaying messages as I draft them; occasionally the whole window would black out until interacted with. It turns out KDE 6 defaults to Wayland instead of the old graphics display system, X (also called Xorg, X11, X.Org, etc.). These are bugs that need attention before X is retired from mainstream use. I now know to look out for them when KDE 6 comes to PopOS – probably sometime later this year. For now, X is still around in case a fallback is needed.

Takeaway

While my alert system for changing conditions is adequate for me at this time, another approach to consider is staying current on news regarding the projects you use. I didn’t note his name, but I did come across someone saying he had read about the KDE 6 Wayland switch ahead of time. Time will prove the choice pioneering or foolish.

Final Question

Where do you balance on the innovation vs. stability scale?

Pi Spycam: Prototype Deployment

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am trying to solve an animal misbehavior problem around the house, and we don’t know who’s doing it. I’m literally dusting off an old project to help. Let’s get started!

Blinkie Pie is a Raspberry Pi 3B+ with a case themed after a Pac-Man ghost. I modified the case files to include a Pi camera module looking out its left eye, but got bogged down with computer vision while trying to automate critter punishment. Well, in the present day, one or both of our cats keeps doing something in the same area of the house, and a video feed served over HTTP (to access via browser) could let us monitor things on a second screen. After a brief survey of open source projects, Motion looks to be exactly what I need. [1]

DietPi looks like a good base OS. The article I found says it’s based on Debian Buster [2], but I’m only after a short-term project. I checked on Pkgs.org, and Motion is listed in the Debian Buster ARM repositories. [3]

Modern DietPi is based on Debian Bookworm, as I found out when I downloaded it. Motion was present, but setup was annoying. I'll spare the blow-by-blow, but the terminal program serving as its installer doesn't think like me. Sometimes it ran quite slowly (the Pi 3B+ being old?). Credit to the install script where credit is due, but it got hung up trying to update, so I had to drop out and unsnag it manually.

Motion was obtuse to get working. While pounding around for a solution, I found a curated software list pointing to a related project called motionEye. Motion didn't find the camera module until after a reboot or two, and my only confirmation was terminal activity in response to motion in front of the camera. Working from the command line, I couldn't check on Motion's web interface. A few hours of diagnostic research later, I used curl over SSH and found data on Blinkie Pie's localhost:8081, but not over network_ip:8081 – a wise default configuration given that the base OS lacks a firewall, but annoying for my low-security use case of monitoring cats. I overrode this setting with a config option at ~/.motion/motion.conf.

# Restrict stream connections to the localhost.
stream_localhost off
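
For anyone retracing this, the binding can be spot-checked before editing anything. This is a hypothetical diagnostic assuming Motion's default stream port of 8081; network_ip stands in for the Pi's actual LAN address:

# Run on the Pi over SSH: a response here means the stream is up.
curl http://localhost:8081/
# Run from another machine: connection refused until
# stream_localhost is turned off.
curl http://network_ip:8081/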

I now had images around once per second, but they were coming through upside-down because of how I mounted my camera module in its case. I modified a similar setting to allow the web UI, but it was very limited. I'd spent a while Thursday night trying to solve this flip issue from the config file, but I only ended up jamming terms from the documentation together in ways that didn't work. I circled back around on Friday afternoon and used a search function to find settings for both rotation and flip, but later realized the system only flipped vertically (probably because I used two lines).
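
For the record, the handful of options involved look something like this in ~/.motion/motion.conf. The option names are from my reading of Motion 4.x documentation, and the values are guesses for a camera mounted upside-down – a sketch, not my final working config:

# Allow the web control interface from other machines on the LAN.
webcontrol_localhost off
# Rotate the image to compensate for the camera's mounting.
# Valid values are 0, 90, 180, and 270 degrees.
rotate 180
# Mirror along an axis (none, v, or h); combining this with
# rotate is where I tripped myself up.
flip_axis none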

With high to mediocre hopes, I deployed Blinkie Pie to watch over Sabbath. The image came out dark, so I found an old desk lamp I had stashed in a closet and left a few other lights on. The good news is that it was more stable than during tentative testing. The only difference I can think of is that I was SSH'ing over Wi-Fi with one link instead of two.

To access the recordings, my first instinct was to use SCP, a program for transferring files over the SSH protocol, but the DietPi operating system does not include it by default. Instead, I logged in with the FISH protocol, which doesn't require anything special on the other end besides SSH. Bonus: I could use it with Dolphin (KDE's file browser). Unfortunately, the default motion detection settings mostly caught human family. Our little white dog starred in a couple of automated recordings, but my black Labrador was seen in one clip being ordered to lie down in the observation zone, without any footage from when he got up and left. At no point did I see a cat who wasn't being carried.
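
Looking ahead to tuning for smaller animals, the relevant knobs appear to be these. Again, these are Motion 4.x option names as I understand them, and the values are illustrative guesses rather than tested settings:

# Number of changed pixels needed to trigger motion;
# lower it so a cat-sized blob counts.
threshold 900
# Start an event after a single frame of detected motion.
minimum_motion_frames 1
# Seconds of stillness before an event is considered over.
event_gap 30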

Takeaway

This project didn't catch any creature in the act, but I'm still satisfied with my progress this week, and I intend to follow up later this month by tweaking the configuration to be more sensitive to smaller creatures.

Final Question

Have you any suggestions on tracking cats with Motion or a similar technology?

Works Cited

[1] C. Schroder, “How to Operate Linux Spycams With Motion,” linux.com, July 10, 2014. [Online]. Available: https://www.linux.com/training-tutorials/how-operate-linux-spycams-motion/. [Accessed Mar. 11, 2024].

[2] C. Cawley, “The 8 Best Lightweight Operating Systems for Raspberry Pi,” makeuseof.com, Nov. 7, 2021. [Online]. Available: https://www.makeuseof.com/tag/lightweight-operating-systems-raspberry-pi/. [Accessed Mar. 11, 2024].

[3] pkgs.org, [Online]. Available: https://pkgs.org/search/?q=motion. [Accessed Mar. 11, 2024].

My First Astrophoto

Good Morning from my Robotics Lab! This is Shadow_8472, and today I have a follow up with my telescope. Let’s get started!

In contrast to recent years of drought and water rationing, this winter has seen the blessing of much-needed rain. Overcast often blocks out the night sky, and mist can ruin a clear night anyway. But Tuesday night this last week was clear, dry, and still. I took my entry-level telescope to the front yard.

One of my goals with the telescope is photography. I wanted a modestly decent picture of Saturn before investing heavily in equipment, but Saturn is setting too soon after sunset to catch above my suburban skyline – Jupiter will have to do. Before I try that again, I want to get some good detail on the Moon, because my first attempt at astrophotography with a modified chair bracket failed when I couldn't line up my small, pre-smartphone-era camera well enough to locate the planetary system.

This time, I aligned everything before going outside, happening to aim the parts I was holding at a floor lamp. The camera's screen showed an off-center circle, and I adjusted the bracket's wing nuts and bolts until it was centered, but even that wasn't enough. Unless the camera looks straight down the eyepiece, it gets these funny “shadows” along one side or the other. It's like looking down a paper towel tube – done right, you see a circle of light in the middle of your vision; rotate the tube any way other than around its own axis, and you get a sharpened oval shape before you get no light at all. The bracket as-is takes care of tilting up and down, but I still have to worry about twisting left and right. My current setup also lets me vary how deep to drive the bolt, but that gets ugly with wing nuts and rotation and everything. This is all in addition to another joint with two degrees of freedom originally included with the cell phone mount.

Outside, the moon was not up, but Jupiter was sitting high in the western sky. I aligned my star scope on the planet, and when I looked through the eyepiece, three of the four Galilean moons were on display. (From an online tracker, I later learned Io was transiting in front of the planetary disk at the time.) Better yet, I found them in my camera. I set a ten-second delay and captured this picture at around 9 PM on February 27, 2024 (Pacific Standard Time).

I followed up with seven additional images at a higher zoom; they fell victim to distortions I believe were related to rotational misalignment between the camera and eyepiece which weren’t as noticeable at the camera’s lower internal magnification.

  • The ten second delay timer proved important as it took as long as eight seconds for the telescope to stop shaking after touching the camera.
  • Focus on the telescope was also big on my mind, with colors turning green or blue (asymmetrically from the misalignment, of course); tiny adjustments were next to impossible with the knob jumping.
  • Zoom in on the camera too far, and I get a lens error when it gets stuck (I used the 2x eyepiece for a chance at locating Jupiter with the 4x).
  • And even then I couldn’t get a still image where Jupiter resolved as a circle – looking directly: I saw a disk; camera preview: disk; take picture: flare.
  • I was also having to re-aim the telescope every two exposures to track Jupiter as it moved across the sky. After a couple pictures, I rotated the camera so manual tracking wasn’t on a diagonal relative to the screen.

Improvements

From here, I have a few ideas on how to improve my setup. I could further modify the bracket to make it easier to use by reducing the number of parts I need to keep track of simultaneously – possibly by designing and 3D printing a custom bracket. Alternatively, since I have completed my goal of planetary photography, I might consider indulging in a sensor that connects directly to the telescope instead of an eyepiece. It would be a tough call between that and a tracking mount, but taking the setup to an area with less light pollution can only improve my results.

Takeaway

I am enjoying learning more about God’s Creation. Looking closely at my picture, I can see more points of light than the ones I know about. Are any of these smaller moons? I don’t know. Is that red star to the right actually a red dwarf, or is it a camera error? I don’t know. What are the limits of my optics? I still have much to learn about them, but the saying is that the more you look, the more you will see.

Final Question

Are you into astronomy? What advice would you give someone who’s just starting out?

I look forward to your answers in the comments below or on my Socials!