Roadmapping Switching to Linux Phones

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am dusting off my studies of Android in response to happenings around the family. Let’s get started.

Phones and Privacy

This round started with my father’s phone screen burning out. He got it repaired, but my sister’s phone screen was cracked, and I ended up giving her my otherwise unused one of the same model.

Consumer privacy has been an increasingly important question as computers integrate more tightly with our everyday lives. We now live in a time and place where most everyone old enough to read is coerced by convenience into loading themselves down with every tracker known to Man unless restrained by applicable law. And this issue will only continue to grow as technologies such as consumer VR and clinically installable neural interfaces mature and facilitate the collection of ever more intimate personal data. Legislation isn’t keeping up, but a minority of people unsatisfied with this status quo are developing open source alternatives that let end-users mitigate abuse of technology either by obfuscating their data or by preventing its collection entirely.

My goal is to one day have the family on Linux phones. OK, what are the main obstacles? From previous research, cell network compatibility has been the big one: support from our current carrier drops to nil the moment I read off my prototype PinePhone’s IMEI number (an identifier unique to each cell modem). Additional background research turned up that banking apps have a tendency to refuse to work on Android devices without Google’s seal of approval.

Phone Carriers

Most major cell companies would prefer you buy a phone on credit that’s been locked to their network for a couple of years. Per contract, you don’t get admin privileges until it’s paid off and unlocked – assuming you still have a unit at the end. Even if a service has a bring-your-own-phone program, it may be limited to a short list of models, regardless of whether your unapproved unit is compatible with the wireless technology[ies] their network is built on (which I believe was the case last time we switched carriers). Even then, a phone may only be partially compatible across calls, texting, data, or other wireless functionalities.

Complicating the matter for me specifically, I cannot even comfortably look at pictures where part of the screen has been sacrificed for a camera “island” per the modern trend. I need a phone with such goodies in the bezel where they belong. After doing some research, I narrowed my choices down to the Librem 5 and the PinePhone Pro. General research on each this week turned up year-old criticism about refunds for the American-made Librem 5 to weigh against the PinePhone being assembled in China like most other personal devices. I found a carrier compatibility chart for each and made a better informed recommendation to my parents for when we switch carriers.

Android On Linux

One high priority feature my parents are after in their phones is mobile deposit. That’s not happening on a week’s notice, even if I already had a phone to try it on. From a software point of view, though, a Linux desktop is essentially identical to a Linux phone minus the cellular modem, SIM card, and miscellaneous other peripherals.

Many tools exist specifically to run Android apps on the desktop. This week, I explored running the BlissOS custom ROM on QEMU/KVM. The two took a while to straighten out, so I might be mistaken here, but QEMU is an emulator/hypervisor and KVM is a Linux kernel module for virtualization that QEMU can optionally use for direct access to hardware and improved performance. Along the way, I was pointed in the direction of using an AMD graphics card instead of Nvidia. That meant using Derpy Chips, a computer from before Intel came up with the special circuitry needed for this kind of virtualization…

…Well, it looks like I’ve been carrying some bad info around, because this virtualization circuitry has been around for a lot longer than I thought! I navigated Derpy’s BIOS (yeah, I know I’ve made a stink about them being UEFI and not BIOS, but they’re labeled BIOS despite being UEFI) to turn on virtualization, and I got a proof of concept going. I tried using its command line installer, but couldn’t figure out how to do anything related to the hard drive. I could consistently get to –I assume– a live session run directly off the disk image. I successfully accessed the Internet from within the VM, but installing apps or even making an entry on the file system remains an unsolved issue. Nevertheless: this is major progress.
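For anyone retracing my steps, here’s roughly what the check and the launch looked like – the ISO file name is a placeholder, and your memory/core counts will differ:

#check whether the CPU advertises hardware virtualization (vmx = Intel, svm = AMD)
grep -Ec '(vmx|svm)' /proc/cpuinfo
#boot the BlissOS image with KVM acceleration, 4 GB of RAM, and 2 cores
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 -cdrom Bliss-OS.iso -vga virtio

If the grep count comes back as zero, virtualization is either unsupported or still switched off in firmware.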

Takeaway

Android at its core was made open source by Google to catch up to the iPhone. Now that it’s ahead, Google has had seasons of moving as much of the definitive experience as it can out of the public eye, but that hasn’t kept people from making custom ROMs. While my next major goal in this project is to install an app, another blockade on my developing road map is SafetyNet. When I get there, I’ll want to look up Magisk and Shamiko, two names I came across in the Android custom ROM community.

I’ll also note that I still have additional options to try, like something based on containerization. While writing this up, I took a second look at WayDroid, which I dismissed assuming it wouldn’t work in an X11 environment, and it just might.

Final Question

Do you have any experience with Linux phones? I would be most interested in hearing from you in the comments below or on my Socials!

How I Would Relearn Linux #7: Wandering

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am back with another Linux-learning tip. Let’s get started.

This edition of How I Would Relearn Linux will be a bit heavier on the story side, but I’ll try to make it as interesting as possible. If I totally lose you, feel free to skip to the Takeaway before making your way back to where you left off.

DerpyChips, one of my two daily drivers, has had a longstanding problem running Discord: the proper installation crashes after updating (when applicable). Instead, I have it installed as a Snap, which got it working but costs me my KDE-themed mouse pointer, which I’m quite fond of. I thought to fix it as a side project. For the record, none of the usual hammers work: restarting the program, rebooting, reinstalling.

When in doubt, launch a failing program from the command line. I caught an error and looked it up, which landed me on Reddit where one user had fixed the exact same problem by finding a better driver. Derpy has an AMD card, so I looked through AMD’s site for the proper proprietary driver. It took an hour or so, but I found a list of 32 and 64 bit drivers for my exact card – they were just seven years old! The card is about twice that.

Now, I faced a dilemma. AMD’s driver might be the most compatible, but will it work with KDE 6, which I expect Derpy will be running within a year barring any major hardware upgrades? I have doubts. Besides, I didn’t feel like cleaning up after a driver bricking my installation anymore. I’ll probably seek out an open source driver at a future date once I’ve gathered a bit more background knowledge.

Along the way, I noticed a couple of abbreviations tied to the graphics system: EGL and GLX. A search brought me to a Hackaday post about Firefox 94 and above switching from GLX to EGL as Linux desktops transition away from the legacy X11 window system in favor of Wayland for lower overhead, better security, and closer access to the hardware [1]. The top comment lamented this transition, finishing with, “Sadly the kids don’t want a fun OS that does cool things like let you run your program on one machine but display it on a second. I guess they just want an OSX that they didn’t have to pay for.” My surprise died down when I read the screen name: XForever. The replies were a frenzy of reports claiming X11 remote display either “never did [work]” [X], “worked perfectly fine 20 years ago and still does” [Traumflug], or “only ‘worked’ because network security was a joke 20 years ago” [Pat], and that a slow network could make or break the experience [Guest]. One user [raster] went back to refute XForever’s complaint with a link to the Git repository for Waypipe.

While the conversation brought up other methods of graphical computing from another device, like VNC (Virtual Network Computing) and RDP (Remote Desktop Protocol), SSH caught my attention [Redhatter (VK4MSL)].

ssh -CY user@remotehost x11client

The unfamiliar flags turn out to mean compressing the data stream (-C) and enabling trusted X11 forwarding (-Y). I had to try for myself.

Logging in to my father’s computer, I brought up FreeTube over SSH no problem. I then tried MultiMC’s start script, and something errored out and the window came up on his screen instead. I thought that was it short of making a custom script or at least an export command, but then I tried running bash as my x11client. It came up without prompts or access to command history, but when I called up the script from there, I got a window on my end. The game itself was a bit slow to play given how choppy the video was, but it was good enough to navigate to an AFK point – until I turned up the graphics with extra shaders, at which point it dropped to 1 fps.
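Roughly what that looked like – the hostname and script path here are placeholders, not my actual setup:

ssh -CY user@remotehost bash
#the remote bash shows no prompt, but commands still run; any X windows are forwarded back here
./MultiMC/MultiMC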

Takeaway

Allow yourself to wander. Poke at annoying, non-critical problems every so often. Even if you don’t get to where you intended, you can still end up learning something objectively cool. I can see this tool will be very useful.

Final Question

What cool stuff have you found by allowing yourself to wander?

Works Cited

[1] M. Carlson, “Firefox Brings the Fire: Shifting from GLX to EGL,” hackaday.com, Nov. 23, 2021. [Online]. Available: https://hackaday.com/2021/11/23/firefox-brings-the-fire-shifting-from-glx-to-egl/. [Accessed Mar. 25, 2024].

How I Would Relearn Linux #6: Bleeding Edge vs. Stable vs. Enterprise

Good Morning from my Robotics Lab! This is Shadow_8472 and today I have another installment in how I would relearn Linux. Let’s get started!

The Linux ecosystem has a lot of moving parts within easy view of the end user. Various distributions curate these often redundant parts in line with their respective goals. For example: how new do you like/want/need your software?

If “Cutting Edge” is the latest and greatest, then developers and enthusiasts willing to volunteer time and effort reporting bugs and providing feedback for new features belong on the “Bleeding Edge” – typically using some sort of rolling release or compile-it-yourself package manager.

On the other hand, if the system’s goal is to not need sudo every few days, a stable system like Ubuntu or one of its many downstreams may be a better choice – but know that distros aimed at an Enterprise audience may leave you spending excessive amounts of time dumpster diving through backports for missing dependencies and trying to compile source code when a two-year-old feature comes up missing (ask me how I know).

My approach is to maintain a bit of everything. I have two computers I use as daily drivers: one runs EndeavourOS and the other PopOS. EndeavourOS is Arch-based and on a rolling release, while PopOS is Ubuntu-based with releases every few months. This way, major changes that break software I use across both devices don’t hit me all at once. Just this past week, I rebooted into KDE 6 on EndeavourOS and Discord immediately started having problems displaying messages as I draft them, and occasionally the whole window would black out until interacted with. Turns out KDE 6 defaults to using Wayland instead of the old graphics display system, X (also called xorg, X11, x.org, etc). These are/this is [a] bug[s] that need[s] attention before X is retired from mainstream use. I now know to look out for it/them when KDE 6 comes to PopOS – probably sometime later this year. For now, X is still around in case a fallback is needed.
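If you’re ever unsure which display system a session is actually using, a quick sanity check from a terminal reads the standard environment variable – no extra tools needed:

echo $XDG_SESSION_TYPE    #prints "wayland" or "x11"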

Takeaway

While my alert system for changing conditions is adequate for me at this time, another approach to consider is staying current on news regarding the projects you use. I didn’t note his name, but I did come across someone saying he had read about the KDE 6 switch to Wayland ahead of time. Time will prove the choice pioneering or foolish.

Final Question

Where do you balance on the innovation vs. stability scale?

Pi Spycam: Prototype Deployment

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am trying to solve an animal misbehavior problem around the house, and we don’t know who’s doing it. I’m literally dusting off an old project to help. Let’s get started!

Blinkie Pie is a Raspberry Pi 3B+ with a case themed after a PacMan ghost. I modified the files to include a Pi camera module looking out its left eye, but got bogged down with computer vision trying to automate critter punishment. Well, in the present day, one or both of our cats keeps doing something in the same area of the house, and a video feed served over HTTP (to access via browser) could let us monitor things on a second screen. After a brief survey of open source projects, Motion looks to be exactly what I think I need. [1]

DietPi looks like a good base OS. The article I found says it’s based on Debian Buster [2], but I’m only after a short-term project. I checked on Pkgs.org, and Motion is listed in the Debian Buster ARM repositories. [3]

Modern DietPi is based on Debian Bookworm, as I found out when I downloaded it. Motion was present, but setup was annoying. I’ll spare the blow-by-blow, but the terminal program serving as its installer doesn’t think like me, and sometimes it ran quite slowly (the Pi 3B+ being old?). Credit to the install script where credit is due, but it got hung up trying to update, so I had to drop out and unsnag it manually.

Motion was obtuse to get working. While pounding around for a solution, I found a curated list of software that pointed me to a related project called motionEye. Motion didn’t find the camera module until after a reboot or two, and my only confirmation was terminal activity in response to motion in front of the camera. Working from the command line, I couldn’t check on Motion’s web interface. A few hours of diagnostic research later, I used curl over SSH and found data on Blinkie Pie’s localhost:8081, but not over network_ip:8081 – a wise default configuration given that the base OS lacks a firewall, but annoying for my low-security use case of monitoring cats. I overrode this setting with a config option at ~/.motion/motion.conf.

# Restrict stream connections to the localhost.
stream_localhost off

I now had images around once per second, but they were coming through upside-down because of how I mounted the camera module in its case. I modified a similar setting to allow the webUI, but it was very limited. I’d spent a while Thursday night trying to solve this flip issue from the config file, but I only ended up jamming terms from the documentation in ways that didn’t work. I circled back around on Friday afternoon and used a search function to find settings for both rotation and flip, but later realized the system only flipped vertically (probably because I used two lines).
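For reference, the single setting I probably should have used looks roughly like this in motion.conf – I’m going from the documentation here rather than a config I’ve verified end to end:

# Rotate the image 180 degrees before detection and streaming.
rotate 180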

With high-to-mediocre hopes, I deployed Blinkie Pie to watch over Sabbath. The image came out dark, so I found an old desk lamp I had stashed in a closet and left a few other lights on. The good news is that it was more stable than during tentative testing. The only difference I can think of is that I was only SSH’ing over Wi-Fi across one link instead of two.

To access the recordings, my first instinct was to use SCP, a program that transfers files over the SSH protocol, but the DietPi operating system does not include it by default. Instead, I logged in with the FISH protocol, which doesn’t require anything special on the other end besides SSH. Bonus: I could use it with Dolphin (KDE’s file browser). Unfortunately, the default motion detection settings mostly caught human family members. Our little white dog starred in a couple automated recordings, but my black Labrador was seen in one clip being ordered to lie down in the observation zone without any footage from when he got up and left. At no point did I see a cat who wasn’t being carried.
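In practice that just meant typing an address like this into Dolphin’s location bar – the hostname and path here are placeholders for wherever your recordings actually land:

fish://dietpi@blinkiepie.lan/home/dietpi/.motion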

Takeaway

This project didn’t catch any creature in the act, but I’m still satisfied with my progress this week, and I intend to follow it up later this month by tweaking the configuration to be more sensitive to smaller creatures.

Final Question

Have you any suggestions on tracking cats with Motion or a similar technology?

Works Cited

[1] C. Schroder, “How to Operate Linux Spycams With Motion,” linux.com, July 10, 2014. [Online]. Available: https://www.linux.com/training-tutorials/how-operate-linux-spycams-motion/. [Accessed Mar. 11, 2024].

[2] C. Cawley, “The 8 Best Lightweight Operating Systems for Raspberry Pi,” makeuseof.com, Nov. 7, 2021. [Online]. Available: https://www.makeuseof.com/tag/lightweight-operating-systems-raspberry-pi/. [Accessed Mar. 11, 2024].

[3] pkgs.org, [Online]. Available: https://pkgs.org/search/?q=motion. [Accessed Mar. 11, 2024].

Study Month: Third Rocky Linux Server

Good Morning from my Robotics Lab! This is Shadow_8472, and this February is a study month where I am not sticking myself to hard weekly goals – though I am aiming for a cluster by the end of the month. Let’s get started!

I downloaded Rocky Linux 9.3 minimal .ISO and flashed it to a USB stick – overwriting an old Rocky Linux 8 installation media. Normally I would use my Ventoy multiboot USB, but Ventoy froze while loading last week before it got to the .ISO selection screen.

The Rocky Linux installer is probably my favorite. It’s non-linear, so I can go back and change things before finalization without fear of wiping my other settings. My target drive had Debian already installed, and another drive has Linux Mint, which I do not want to overwrite, so I confirmed my target drive at least four ways before proceeding. The networking tab let me assign the static IP I wanted with no undue hassle. Here, I also assigned it the name Joystick.

I’m keeping this installation as slim as possible to focus on learning clustering. Cockpit (web admin page) is a must-have, but I’m not even putting Podman on this new installation until I believe it is needed. The eventual plan is to keep ButtonMash on as the home server until I have RedLaptop and Joystick cooperating in a cluster. Then I’ll move ButtonMash over to that cluster, possibly needing to migrate it to Rocky 9.x in the process. For this week though, the cluster software I’m researching, Red Hat Cluster Suite, is proving difficult to find the package name[s] for.
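My best guess so far – and it is only a guess – is that the modern equivalent ships as the high-availability stack, so something like this should turn up the right package names on Rocky 9:

#the HighAvailability repo is disabled by default on Rocky Linux
sudo dnf repolist all | grep -i highavailability
sudo dnf --enablerepo=highavailability search pacemaker corosync pcs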

I also need to keep another major goal of this project on my radar: an open source image board. I need a tags system, a way to pair two scans so I can display the backs of some photos, and an album system wouldn’t be bad either.

Unboxing: System76 Thelio Mira

Good Morning from my Robotics Lab! This is Shadow_8472 with a side project of the week. Let’s get started!

Some weeks ago, I helped my father, Leo_8472, spec up a Thelio Mira from System76, and it arrived this weekend. The first thing we did after unboxing yesterday (as of posting) was open it up and look inside the case. While everything appeared to be there, the system is very self-aware when it comes to airflow – having a dedicated duct from the side to the back for the CPU and an all around crowded feel inside the case. If you’re considering one of their systems, I’d recommend not opting to assemble your first one yourself.

We became concerned when the graphics card appeared to be the later-released budget variation of the NVIDIA RTX 4070 Ti we thought we ordered. Leo found his receipt listing the parts we remembered, and we set the machine up by my server stack for initial configuration and taking inventory.

It shipped with PopOS installed – on a recovery partition with self-contained installation media. The installer appeared normal, but it skipped over (or I didn’t notice it asking for) the installation drive, time zone, or host name – the latter two of which we provided later.

When we ordered, Leo was very interested in Bluetooth, but I couldn’t find it listed. One of the first things he did after logging in and running initial updates was to find and test it. I installed SuperTuxKart to test it with his hands-free headset. He even beat a few races.

Other stuff we loaded up: Firefox data from Mint (4 tries to get right), FreeTube, and Discord. I installed KDE as a desktop environment for when I need to use the computer, chose SDDM as the login manager, and we had fun picking out themes. We found a black hole login splash screen that I hacked to display mm/dd/yyyy instead of its default dd/mm/yyyy.

Over this process, we verified hardware with a few commands: lsblk (hard drive size), lspci (GPU, failed), free (RAM size), and neofetch (installed specially, wasn’t insightful towards the GPU). Eventually, we confirmed the correct graphics card from within KDE’s System Settings>About this System.
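For anyone doing a similar inventory, this is roughly the checklist we worked from – nothing here is System76-specific:

lsblk                 #drive and partition sizes
lspci | grep -i vga   #should name the GPU (this is the step that failed for us)
free -h               #installed RAM in human-readable units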

Unfortunately, the system destabilized before we finished moving in. Leo documented the failure and we contacted support. I further noted that it still failed colorfully under the default “Pop” theme.

To do: copy over MultiMC, enable SSH, NFS mounts/automounts.

Takeaway

Even though it wasn’t immediately plug and play, I’m thankful for the time I’m spending with my father working on this system.

Final Question

Have you ever bought a system designed for Linux?

Rocky Server Stack Deep Dive: 2023 Part 4

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am exploring fuse-overlayfs as a possible patch between Podman and NFS. Last week’s post was practically a freebie, but I expect this one to be a doozy if it’s even possible. Let’s get started!

Context

For my homelab, I want to run Nextcloud in a rootless Podman 3.0.1 container with storage volumes on our NFS. For logistical reasons, Nextcloud needs to be on RedLaptop running Debian 11 (Linux kernel 5.10.0-26-amd64 x86_64). The NFS share I wish to use is mounted via systemd.

My most promising lead is from Podman GitHub maintainer rhatdan on October 28, 2023, where he made a comment about a “fuse file system,” asking his colleague, @giuseppe, for thoughts – to which there has been no reply as of the afternoon of November 10 [1]. I documented a number of major milestones there, which I’ll be covering here.

File System Overlays

Fuse file system turned out to be fuse-overlayfs, one of a few systems for fusing file systems. Basically: there are times when it’s useful to view two or more file systems at once. File system overlays can designate a lower file system and an upper file system. Any changes (file creation, deletion, movement, etc.) in this combined file system manifest in the upper file system, leaving the lower file system[s] alone.
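As a concrete sketch (directory names are placeholders; this is the general shape of an overlay mount, not my exact invocation):

mkdir -p lower upper work merged
fuse-overlayfs -o lowerdir=lower,upperdir=upper,workdir=work merged
#anything written under merged/ actually lands in upper/; lower/ is never touched
fusermount -u merged    #unmount when done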

Through a lot of trial and error, I set up a lower directory, an upper directory, a work directory, and a mountpoint. My upper directory and work directory had to be on the NFS, but I ran into an error about setting times. I double checked that there were no major problems related to Daylight Savings Time ending, but wasn’t able to clear the error. I sent out some extra help requests, but got no replies (Sunday, Nov. 12). A third of my search results are in Chinese, and the others are either not applicable or locked behind a paywall. Unless something changes, I’m stuck.

Quadlets

GitHub user eriksjolund got back to me with another idea: Quadlets [1]. Using this feature, merged into Podman 4.4 and above, he demonstrated a Nextcloud/MariaDB/Redis/Nginx setup that saves all files as the underprivileged user running the containers. In theory, this sidesteps the NFS incompatibilities I’ve been experiencing altogether.
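To give a flavor of what that looks like, here is a minimal, hypothetical unit – the image tag, volume path, and port are my own assumptions, and again this needs Podman 4.4+:

# ~/.config/containers/systemd/nextcloud.container
[Container]
Image=docker.io/library/nextcloud:latest
Volume=%h/nextcloud-data:/var/www/html
PublishPort=8080:80

[Install]
WantedBy=default.target

After a systemctl --user daemon-reload, the generated nextcloud.service can be started and enabled like any other user unit.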

The first drawback from my perspective is that I need to re-define all my containers as systemd services, which is something I’ve admittedly been meaning to do anyway. A second is, again, that this is a feature merged into Podman much later than the version I’m working with. Unless I care to go digging through the Podman GitHub myself, I’m stuck with old code people will be reluctant to support.

Distro Hunt

Why am I even using Debian still? What is its core purpose? Stability. Debian’s philosophy is to provide proven software with few or no surprises, leaving the user to polish it to taste. As my own sysadmin, I can afford a little downtime; I don’t need the stability of a distro supporting the most diverse Linux family tree. Besides, this isn’t the first time community support has pointed me at features newer than my installation’s code base. Promising solutions end in broken links. RAM is becoming a concern. The Apt package manager has proven needier than I care to babysit. If I am to be honest with myself, it’s time to start sunsetting Debian on this system and find something more up-to-date for RedLaptop. I’ll keep it around for now just in case.

My first choice was Fedora to get to know the RedHat family better. Fedora 39 CoreOS looked perfect for its focus on containers, but it looks like it will require a week or two to configure and might not agree with installing non-containerized software. Fedora 39 Server was more feature complete, but didn’t load up for my BIOS (as opposed to the new standard of UEFI); I later learned that new BIOS-based installations were dropped on or around Fedora 37.

I carefully considered other distributions with the aid of pkgs.org. Debian/Ubuntu family repositories only go up to Podman 4.3. Alpine Linux lacks systemd. Solus Linux is for desktops. OpenSuse Tumbleweed comes with warnings about being prepared to compile kernel modules. Arch is… Arch.

Fresh Linux Installation

With time running out in the week, I decided to forgo sampling new distros and went with a minimal Rocky 9. Installation went as well as can be expected. I added and configured cockpit, podman, cockpit-podman, nfs-utils, and nano. I added a podmanuser account, enabled lingering for it, and downloaded the container images I plan on working with on this machine: PiHole, Unbound; Caddy; Nextcloud, Redis, MariaDB; busybox.
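Condensed into commands, that setup looks something like this – the user name is my own convention; adjust to taste:

sudo dnf install -y cockpit cockpit-podman podman nfs-utils nano
sudo systemctl enable --now cockpit.socket
sudo useradd podmanuser
sudo loginctl enable-linger podmanuser   #lets rootless containers keep running after logout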

Takeaway

I’m writing this section on Friday afternoon, and I doubt I have enough time remaining to properly learn Quadlets and rebuild my stack, so I’m going to cut it off here. From what I’ve gathered already, Quadlets mostly use systemd unit files, a format I’ve apparently worked with before, but also need Kubernetes syntax to define pods. I don’t know a thing about using Kubernetes. If nothing else, perhaps this endeavor will prepare me for a project where larger scale container orchestration is needed.

Final Question

Do you know of a way I might have interfaced Podman 3 with NFS? Did I look in the wrong places for help (Debian forums, perhaps)?

I look forward to hearing from you on my Socials!

Works Cited

[1] Shadow_8472, D. “rhatdan” Walsh, E. Sjölund, “Rootless NFS Volume Permissions: What am I doing wrong with my Nextcloud/MaraiDB/Redis pod? #20519,” github.com, Oct. 27, 2023-Nov. 10, 2023. [Online]. Available: https://github.com/containers/podman/discussions/20519#discussioncomment-7410665. [Accessed Nov. 12, 2023].

How I Would Relearn Linux #5: Basic Scripting

Good Morning from my Robotics Lab! This is Shadow_8472 with another tip for how I would relearn Linux. Let’s get started.

Scripts

No serious system administrator has time to memorize long chains of commands that take minutes to type and might need fixing after the first try. When faced with a periodic task involving a long, involved [set of] command[s], it’s preferable to write them into a script to be run sequentially from a file. While I did get by for a few years using Bash’s (terminal shell, see the section on shells) history functionality, a script is a much more sustainable way to “remember” commands.

To write a script, put your command[s] into a text document – one per line. The pound sign # at the start of a line turns it into a comment to clue future you or others into your thought process. Using a backslash \ character at the end of a line (plus additional whitespace for alignment), commands may be broken across lines to increase readability. This becomes especially useful when dealing with half a dozen flags.

#this line is a comment
podman run \
    --name testpilot \
    --network podman \
    --ip 10.88.2.88 \
    -v ~/sandbox/testVolume:/root/vol1 \
    --rm \
    busybox:latest
#No interrupting commands – even with commented flags.
#    -v ~/sandbox/wrongTestVolume:/root/vol2 \

A script will then need permission to execute.

chmod +x <filename>

I won’t give a proper explanation of the Linux permissions system here; just know this is a fast way to ensure YOU have permission to EXECUTE your script.

Even understanding up to this point unlocks a powerhouse of possibility. These tools –sequential commands, comments, line breaks, and white space– are all I need to make scripts to stitch together more complicated programs I work with to build my home server. Most importantly: scripting commands allows me to clearly visualize the command I’m working on.

Shells

One slightly mind-bending concept is the shell. In simplified terms, a computer shell is an operating environment. Open a terminal on a Linux workstation, and you probably have a Bash shell. Some part of the desktop environment you opened that shell from is a graphical shell. From Bash, you can open another instance of Bash inside the last one like a Russian matryoshka (nesting) doll. Connect to another computer with SSH, and there’s another shell (one on each machine working together, I think?). Other programs –such as a Python interpreter– may have a shell of their own.

Not all shells are intended for human interaction. Somewhere below the graphical environment is, logically, a shell running directly atop the kernel (I don’t actually know if the kernel itself counts as a shell) and optimized for running quickly.

Most important to this conversation: running a script will open a shell for its own use and close it afterwards. Fancier shell scripts can leverage many powers characteristic of typical programming languages. Flow control allows logic to branch and loop depending on the state of a whole assortment of variable types. And thanks to shell manipulation, variables can be isolated from one another in ways I’m totally unfamiliar with. And all this exposed power comes packaged directly into the operating system itself.
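A tiny taste of what that looks like – this isn’t from any of my real scripts, just an illustration of a variable and a loop:

#!/bin/bash
#count the non-empty .log files in the current directory
count=0
for file in *.log; do
    if [[ -s "$file" ]]; then    # -s is true when the file exists and is not empty
        count=$((count + 1))
    fi
done
echo "non-empty log files: $count"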

Takeaway

Shell scripting is a powerful tool. This single lesson is enough to unlock a wealth of potential, yet only a fraction of its total capabilities. Variables –for example– turned out more involved than I thought, so I scrapped a section on them after writing about shells to support it. While it’s OK to run with only partial knowledge, it’s also good to be aware of additional capabilities when reading scripts written, polished, and published for use by other people.

Final Question

What projects have you designed with scripts?

Rocky Server Stack Deep Dive: 2023 Part 2

MAJOR progress! This week, I’ve finally cracked a snag that’s been holding me back for two years. Read on as I start a deep dive into my Linux server stack.

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am continuing renovations on my home server, ButtonMash. Let’s get started!

The daily progress reports system worked out decently well for me last week, so I’ll keep with it for this series.

Caddy is an all-in-one web server and reverse proxy. My primary goal this week is to get it listening on port 44300 (the HTTPS port multiplied by 100 to get it out of the privileged range) and forwarding vaultwarden.buttonmash.lan and bitwarden.buttonmash.lan to the port Vaultwarden, the Bitwarden-compatible server I use, is listening on over ButtonMash’s loopback (internal) network.

Tuesday Afternoon

From my Upstairs Workstation running EndeavourOS, I started off with a system upgrade and reboot while my workspace was clean. From last week, Vaultwarden was already rigged for port 44300, but straight away I remembered its preferred configuration is HTTP coming into the container, so I’ll be sending it to 8000 instead.

My first step was to stop the systemd service I’d set up for it and start a container without the extra Podman volume and ROCKET arguments needed to manage its own HTTPS encryption. Getting my test install of Caddy going was trickier. I tried to explicitly disable its web server, but figured it was too much trouble for a mere test, so I moved on to working with containers.

While trying to spin up a Caddy container alongside Pi-Hole, I ran into something called rootlessport hogging port 8000. I ran updates and rebooted the server. And then I realized I was trying to put both Caddy and Vaultwarden on the same port! I got the two running at the same time and arrived on Caddy’s slanted welcome page both with IP and via Pi-Hole-served domain_name:port_number.

Subdomains are my next target. I mounted a simple Caddyfile pointing to Vaultwarden and got stuck for a while researching how I was going to forward ports 80 and 443 to 8000 and 44300, respectively. Long story short, I examined an old command I used to forward DNS traffic to Pi-Hole and, after much background research about other communication protocols, decided to forward just TCP and UDP. I left myself a note in my administration home directory.

DNS: Domain Name System – Finds IP addresses for URLs.

sudo firewall-cmd --zone=public --add-forward-port=port=8000:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=8000:proto=udp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=44300:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=44300:proto=udp:toport=44300 --permanent

I still didn’t get a reply from vaultwarden.buttonmash.lan. I tried nslookup, my new favorite tool for diagnosing DNS, but it was from observing Caddy’s cluttered logs that I spotted it rejecting my domain name because it couldn’t authenticate it publicly. I found a “directive” to add to my reverse proxy declaration to use internal encryption.
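The relevant Caddyfile block ends up looking roughly like this – a minimal sketch assuming Vaultwarden is listening on loopback port 8000:

vaultwarden.buttonmash.lan {
    tls internal
    reverse_proxy 127.0.0.1:8000
}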

But I still couldn’t reach anything of interest – because reverse-proxied traffic was just bouncing around inside the Caddy container! The easy solution –I think– would be to stack everything into the same pod. I still want to try keeping everything in separate containers though. Another easy solution would be to set the network mode to “host,” which comes with security concerns, but would work in-line with what I expected starting out. However, Podman comes with its own virtual network I can hook into instead of lobbing everything onto the host’s localhost as I have been doing. Learning this network will be my goal for tonight’s session.

Tuesday Night

The basic idea behind using a Podman network is to let your containers and pods communicate. While containers in a pod communicate as if over localhost, containers and pods using a Podman network communicate as if on a Local Area Network, down to IP address ranges.

My big question was whether this works across users, but I couldn’t find anyone saying one way or the other. Eventually, I worked out a control test. Adding the default Podman network, “podman,” to the relevant start scripts, I used ip a where available to find the containers’ IP addresses. Pi-Hole then used curl to grab a “Hello World!” hosted by Caddy on the same user. I then curled the same ip:port from Vaultwarden’s container under a different user and failed to connect. This locked-down behavior is expected from a security point of view.
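Stripped down, the test was essentially this – the container names and the 10.88.x.x address are placeholders from my notes:

#on the Pi-Hole/Caddy user (the address is whatever ip a reported inside the Caddy container)
podman exec pihole curl -s http://10.88.0.5:80        #works: same user, same Podman network
#on the Vaultwarden user
podman exec vaultwarden curl -s http://10.88.0.5:80   #fails to connect: different user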

On this slight downer, I’m going to call it a night. My goal for tomorrow is to explore additional options and settle on one even if I don’t start until the day after. In the rough order of easy to difficult (and loosely the inverse of my favorites), I have:

  1. Run Caddy without a container.
  2. Run Caddy’s container rootfully.
  3. Run Caddy’s container in network mode host.
  4. Move all containers into a single user.
  5. Perform more firewalld magic. (Possibly a flawed concept)
  6. (Daydreaming!!) Root creates a network all users can communicate across.

Whatever I do, I’ll have to weigh factors like security and the difficulty of maintenance. I want to minimize the need for using root, but I also want to keep the separate accounts for separate services in case someone breaks out of a container. At the same time, I need to ask if making these connections will negate any benefit for separating them across accounts to begin with. I don’t know.

Wednesday Afternoon

I spent the whole afternoon composing a help request.

Wednesday Night

The names I am after for higher-power networking of Podman containers are Netavark and Aardvark. Between 2018 and around February 2022, it would have been Slirp4netns and its plethora of plugins. Here approaching the end of 2023, that leaves around four years’ worth of obsolete tutorials against not quite two years of current information – and that’s assuming everyone switched the moment the new standard was released, which is an optimistic assumption to say the least. In either case, I should be zeroing in on my search.

Most discouraging is how most of my search results involving Netavark and Aardvark end up pointing back to the Red Hat article announcing their introduction for fresh installs in Podman 4.0.

My goal for tomorrow is to make contact with someone who can point me in the right direction. Other than that, I’m considering moving all my containers to Nextcloud’s account or creating a new one for everything to share. It’s been a while since I’ve been this desperate for an answer. I’d even settle for a “Sorry, but it doesn’t work that way!”

Thursday Morning

Overnight I got a “This is not possible, podman is designed to fully isolate users from each that includes networking,” on Podman’s GitHub from Luap99, one of the project maintainers [1].

Thursday Afternoon

Per Tuesday Night’s entry, I have multiple known solutions to my problem. While I’d love an extended discourse about which option would be optimal from a security standpoint in a production environment, I need to remember I am running a homelab. No one will be losing millions of dollars over a few days of downtime. It is time to stop the intensive researching and start doing.

I settled on consolidating my containers under one user. The logical choice was Pi-Hole’s account: its home directory was relatively clean, and I’d only need to migrate Vaultwarden. I created base directories for each service, noting how I will need to make my own containers some day for things like game servers. For now, Pi-Hole, Caddy, and Vaultwarden are my goals.

Just before supper, I migrated my existing Pi-Hole from hard-mounted directories to Podman volumes using Pi-Hole’s Settings>Teleporter>Backup feature.
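In other words, the new container mounts named volumes instead of host directories – roughly like this, with names and the image tag as placeholders, and the usual ports and environment variables omitted:

podman volume create pihole-etc
podman volume create pihole-dnsmasq
podman run -d --name pihole \
    -v pihole-etc:/etc/pihole \
    -v pihole-dnsmasq:/etc/dnsmasq.d \
    docker.io/pihole/pihole:latest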

Thursday Night

My tinkerings with Pi-Hole did not go unnoticed. At family worship, I had a couple family members reporting some ads slipping through. At the moment, I’m stumped. If need be, I can re-migrate by copying the old instance with a temporary container and both places mounted. My working assumption though is that it’s normal cat-and-mouse shenanigans with blocklists just needing to update.

It’s been about an hour, and I just learned that any-subdomain.buttonmash.lan and buttonmash.lan are two very different things. Every subdomain I plan to use on ButtonMash needs to be specified on PiHole as well as Caddy. With subtest.buttonmash.lan pointed at Caddy and the same subdomain pointed at my port 2019 Hello World!, I get a new error message. It looks like port 80 might be having some trouble getting to Caddy…

$ sudo firewall-cmd --list-all

forward-ports:
port=53:proto=udp:toport=5300:toaddr=

That would be only Pi-Hole’s port forward. Looking at that note I left myself Tuesday, I can see I forwarded ports 8000 and 44300 into themselves! The error even ended up in the section above. Here’s the revised version:

sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=udp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=udp:toport=44300 --permanent

I also removed Tuesday’s flubs, but none of these changes showed up until I used

sudo firewall-cmd --reload

And so, with Pi-Hole forwarding subdomains individually and the firewall actually forwarding the HTTP and HTTPS ports (never mind that incoming UDP is still blocked for now), I went to https://vaultwarden.buttonmash.lan and was greeted with Firefox screaming at me, “Warning: Potential Security Risk Ahead” as expected. I’ll call that a good stopping point for the day.

My goal for tomorrow is to finish configuring my subdomains and extract the keys my devices need to trust Caddy’s root authority. It would also be good to either diagnose my Pi-Hole migration or re-migrate it a bit more aggressively.

Friday Afternoon

To go any farther, I need to extract Caddy’s root Certificate Authority (CA) certificate and install it into the trust store of each device I expect to access the services I’m setting up. I’m shaky on my confidence here, but there are two layers of certificates: root and intermediate. The root key is kept secret and is used to generate intermediate certificates. Intermediate keys are issued to websites to be used for encryption when communicating with clients. Clients can then use the root certificate to verify that a site’s certificate was made with an intermediate key generated from the CA’s root key. Please, no one quote me on this – it’s only a good-faith effort to understand a very convoluted ritual our computers play to know whom to trust.

For containerized Caddy installations, this file can be found at:

/data/caddy/pki/authorities/local/root.crt
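Getting it out of a rootless container and onto the host looks something like this – the container name here is just what I call mine:

podman cp caddy:/data/caddy/pki/authorities/local/root.crt ./caddy-root.crt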

This leads me to the trust command. Out of curiosity, I ran trust list on my workstation and lost count around 95, but I estimate between 120 and 150. To tell Linux to trust my CA, I entered:

trust anchor <path-to-.crt-file>

And then Firefox gave me a new warning: “The page isn’t redirecting properly,” suggesting an issue with cookies. I just had to correct some mismatched IP addresses. Now, after a couple years of working toward this goal, I finally have that HTTPS padlock. I’m going to call it a day for Sabbath.

My goal for Saturday night and/or Sunday is to clean things up a bit:

  • Establish trust on the rest of the home’s devices.
  • Finish Vaultwarden migration
  • Reverse Proxy my webUI’s to go through Caddy: GoldenOakLibry, PiHole, Cockpit (both ButtonMash and RedLaptop)
  • Configure Caddy so I can access its admin page as needed.
  • Remove -p ####:## bound ports from containers and make them go through Caddy. (NOT COCKPIT UNTIL AVAILABLE FROM REDUNDANT SERVER!!!)
  • Close up unneeded holes in the firewall.
  • Remove unneeded files I generated along the way.
  • Configure GoldenOakLibry to only accept connections through Caddy. Ideally, it would only accept proxied connections from ButtonMash or RedLaptop.
  • Turn my containers into systemd services and leave notes on how to update those services
  • Set up a mirrored Pi-Hole and Caddy on RedLaptop

Saturday Night

Wow. What was I thinking? I could spend a whole month chewing on that list by itself, and I don’t see myself as having the focus to follow through with everything. As it was, it took me a good half hour just to come up with the list.

Sunday

I didn’t get nearly as much done as I envisioned over the weekend because of a mental crash.

Nevertheless, I did do a little additional research. Where EndeavourOS immediately accepted the root certificate such that Firefox displayed an HTTPS padlock, the process remains incomplete where I tried it on PopOS today. I followed the straightforward instructions for Debian-family systems found on the Arch Wiki [2], but when I tell it to update-ca-certificates, it claims to have added something no matter how many times I repeat the command, without any of the numbers actually changing. I’ve reached out for help.

Monday Morning

I’ve verified that my certificate shows up in /etc/ssl/certs/ca-certificates.crt. This appears to be an issue with Firefox and KDE’s default browser on Debian-based systems. I’ll decide another week if I want to install the certificate directly to Firefox or if I want to explore the Firefox-Debian thing further.

Takeaway

Thinking back on this week, I am again reminded of the importance of leaving notes about how to maintain your system. Even a foggy-headed AM brain can jot down a relevant URL that made everything clear, where the same page may be difficult to re-locate in half a year.

My goal for next week is to develop Nextcloud further, though I’ll keep in mind the other list items from Friday.

Final Question

What do you think of my order of my list from Friday? Did I miss something obvious? Am I making it needlessly overcomplicated?

Let me know in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, Luap99, “How Do I Network Rootless Containers Between Users? #20408,” github.com, Oct. 19, 2023. [Online]. Available: https://github.com/containers/podman/discussions/20408. [Accessed Oct. 23, 2023].

[2] Arch Wiki, “User:Grawity/Adding a trusted CA certificate,” archlinux.org, Oct. 6, 2022 (last edited). [Online]. Available: https://wiki.archlinux.org/title/User:Grawity/Adding_a_trusted_CA_certificate#System-wide_–_Debian,_Ubuntu_(update-ca-certificates). [Accessed Oct. 23, 2023].

I Have a Cloud!!!

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am celebrating my Project of the Year. I have Nextcloud installed! Let’s get started!

My NextCloud Experience in Brief

This week, I achieved a major milestone for my homelab – one I’ve been poking at since at least early March of this year, when I noted, “Nextcloud has been a wish list item since I gave up using Google’s ecosystem” [1].

As I learned more about the tech I’m working with, I added specifications for storing scanned pictures on high capacity, but slow hard disks while making smaller files available with the speed of solid state – only to learn later how rootless Podman is incompatible with NFS. I studied “Docker Secrets” to learn best practices for password management – only to move the project to an older version of Podman without that feature. One innovation to survive this torrent of course corrections is running the containers in a pod, but even that was shifted around in the switch from Rocky 8 to Debian.

Two major trials kept me occupied for a lot of this time. The first involved getting the containers to talk to each other, which they weren’t willing to do over the name localhost, but were when given the mostly equivalent IPv4 loopback address.
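In practice that meant pointing Nextcloud at 127.0.0.1 in its container environment – MYSQL_HOST is the variable the official image reads, though the pod name and surrounding flags here are placeholders rather than my exact startup script:

#fragment of the startup script; MariaDB listens on the pod’s shared loopback
podman run -d --pod nextcloud-pod \
    -e MYSQL_HOST=127.0.0.1 \
    -e MYSQL_DATABASE=nextcloud \
    docker.io/library/nextcloud:latest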

The second was the apparent absence of a database despite multiple attempts to debug my startup script. Nextcloud was talking to MariaDB, but it was like MariaDB didn’t have the database specified in the command to create its container. For this, I created an auxiliary MariaDB container in client mode (Thank-you-dev-team-for-this-feature!) and saw enough to make me think there was none. I wasn’t sure though.

One Final Piece

#remove MariaDB’s “dirty” volume
podman volume rm MariaDBVolume
#reset everything
./resetVolumes
./servicestart

There was no huge push this week. I came across an issue on GitHub [2] wondering why there was no database being created. By starting a bash shell within the MariaDB container, I found there were some files from some long-forgotten attempt at starting a database. All I had to do was completely reset the Podman volume instead of pruning empty volumes as I had been doing.

Future Nextcloud Work

Now that I have the scripts to produce fresh instances, I still have a few work items I want to keep in mind.

I expect to wipe this one clean and create a proper admin account separate from my main username, a practice I want to better get into when setting up services.

Adjacent, but I’ll want access on my tablet, which will entail getting the Bitwarden client to cooperate with my home server.

I still want my data on GoldenOakLibry. I’m hoping I can create a virtual disk or two to satisfy Podman, which RedLaptop can in turn relay over NFS to network storage.

Final Question

Have you taken the time to set up your own personal cloud? I look forward to hearing about your setup in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, “I Studied Podman Volumes,” letsbuildroboticswithshadow8472.com, March 6, 2023. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2023/03/06/i-studied-podman-volumes/. [Accessed Oct. 2, 2023].

[2] rosshettel, thaJeztah, et al., “MYSQL_DATABASE does not create database,” github.com, July 9, 2016. [Online]. Available: https://github.com/MariaDB/mariadb-docker/issues/68. [Accessed Oct. 2, 2023].