Server Rebuild With Quadlet

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am continuing my work on my home’s network. Let’s get started!

State of the Homelab

Bitwarden. I self-host using Vaultwarden, a third-party server implementation. Done properly, it fits nicely into a larger homelab stack, but its OCI container can stand alone in a development configuration. Due to skill limitations, that’s how I’ve been running it. My recent network work has invalidated my manually self-signed certificates, and I’d rather focus my efforts on upgrades instead of re-learning the old system just to maintain it.

Today, I am working on my newest server, Joystick (Rocky Linux 9). I compiled some command-by-command notes on using `podman generate systemd` to make self-starting, rootless containers, but just as I was getting the hang of it again, a warning message encouraged me to use a newer technique: Quadlet.

Quadlet

Quadlets? I’ve studied them before, but failed to grasp key concepts. It finally clicked this time: Quadlet replaces not just running `podman generate systemd` once I have a working prototype, but everything I might do leading up to that point, including defining Podman networks and volumes. Just write your Quadlet definitions once, and the system handles the rest.

The tutorial I found that best matches my use case is at mo8it.com [1]. Follow the link under Works Cited for the full text. It’s very detailed; I couldn’t have done a better job myself. But it doesn’t cover everything, like how `sudo su user` isn’t a true login as far as `systemctl --user …` is concerned. I had to use a twitchy Cockpit terminal for that (a Wayland-Firefox bug).
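
For anyone hitting the same wall, here is a sketch of two workarounds I believe avoid the Cockpit terminal entirely (the user name is a placeholder, and machinectl needs the systemd-container package):

sudo machinectl shell podmanuser@.host
# ...or, from inside an existing `sudo su - podmanuser` shell, point
# systemctl at the user's own session (lingering must already be enabled):
export XDG_RUNTIME_DIR=/run/user/$(id -u)
systemctl --user daemon-reload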

Caddy

Caddy is the base of my dream homelab tech tree, so I’ll start there. My existing prototype calls for a Podman network, two Podman volumes, and a Caddyfile mounted in from the file system. I threw together caddy.container based on my script, but only the supporting network and volumes showed up. Systemd only picked up on “mysleep.container,” an example from Red Hat.

As it turned out, caddy.container had a capitalization error. I found the problem by commenting out lines, reloading, and running `systemctl --user list-unit-files` to see when the unit failed to load. My Caddyfile volume likewise had a file path bug to squash.
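
For reference, here is roughly what the repaired caddy.container looks like – a sketch with illustrative names and paths, not my exact file. Quadlet’s section and key names are case sensitive ([Container], Image=, PublishPort=), which is exactly the kind of place my typo hid:

# ~/.config/containers/systemd/caddy.container
[Unit]
Description=Caddy reverse proxy

[Container]
Image=docker.io/library/caddy:latest
Network=caddy.network
Volume=caddy-data.volume:/data
Volume=caddy-config.volume:/config
Volume=%h/caddy/Caddyfile:/etc/caddy/Caddyfile:z
PublishPort=8000:80
PublishPort=44300:443

[Install]
WantedBy=default.target

After a `systemctl --user daemon-reload`, the generated caddy.service should appear in `systemctl --user list-unit-files`.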

Vaultwarden

Good, that’s started and should be stable. On to Vaultwarden. I updated both ButtonMash’s and Joystick’s NFS unit files to copy over the relevant files, but Joystick’s SELinux didn’t like my user’s fingerprints (owner/group/SELinux data) on the NFS definitions. I cleaned those up with a series of cp and mv commands under sudo, and then I could enable the automounts.
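
The cleanup amounted to re-copying the files as root so they carry root’s ownership and the policy-default SELinux context – something along these lines, with placeholder unit names:

sudo cp mnt-nfs.mount mnt-nfs.automount /etc/systemd/system/
sudo restorecon -v /etc/systemd/system/mnt-nfs.*
sudo systemctl daemon-reload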

Vaultwarden went up with simple enough debugging; the real challenge was accessing it. I toyed with DNS overrides on Cerberus/OPNsense (my hardware firewall) until Caddy returned a test message from <domain.lan>:<port#>.

Everything

My next battle was with Joystick’s firewall: I had forgotten to forward TCP traffic from ports 80 and 443 to 8000 and 44300, respectively. Back on Cerberus, I had to actually figure out the alias system and use it. Firefox needed Caddy’s root certificate. Bouncing back to the network Quadlet, I configured it according to another tutorial doing something very similar to what I want [2], though without an external DNS. A final adjustment to my Caddyfile corrected Vaultwarden’s fully qualified domain name, and I was in – padlock and everything.
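
The port forwarding followed the same firewall-cmd pattern I used for PiHole’s DNS back in the day – roughly:

sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --reload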

Takeaways

I come out of this project with an intuition for managing Systemd files – especially Quadlet. The Quadlet workflow turns Podman container recipes into Systemd units, and a working container will keep working indefinitely – barring bad updates. I would still recommend prototyping with scripts when stuck, though. When a Quadlet fails, there is no obvious error message to look up – the unit simply never shows up.
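
One debugging trick worth writing down: the Quadlet generator can be run by hand to surface the error a silently missing unit hides. Assuming the binary lives where Rocky’s podman package puts it (the path may vary by distribution):

/usr/libexec/podman/quadlet -dryrun -user

It prints the units it would generate – or the parse error keeping yours from showing up.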

Even though Joystick is still new, a lot of my time on it this week went to diagnosing my own sloppiness. Reboots helped when I got stuck, and thanks to Quadlet, I won’t have to worry about spaghetti scripts like the ones I originally organized ButtonMash with – where I never managed to stabilize the kind of victory I re-achieved today.

Final Question

Nextcloud is going to be very similar; I’ll make it a pod along with MariaDB and Redis containers. But I am still missing one piece: NFS. How do I do that?

I look forward to your answers below or on my Socials.

Works Cited

[1] mo8it, “Quadlet: Running Podman containers under systemd,” mo8it.com, Jan. 2–Feb. 19, 2024. [Online]. Available: https://mo8it.com/blog/quadlet/. [Accessed: Sept. 13, 2024].

[2] G. Coletto, “How to install multi-container applications with Podman quadlets,” giacomo.coletto.io, May 25, 2024. [Online]. Available: https://giacomo.coletto.io/blog/podman-quadlets/. [Accessed: Sept. 13, 2024].

Joystick Server Reinstall

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am rebuilding Joystick, ButtonMash’s twin Rocky Linux server. Let’s get started!

Project Goals

This is a revival of my photo scanning project, first and foremost. I need to get it working, and while I will afford myself some time to experiment with doing it right, it’s more important that I get it done. Once I have it working better than ButtonMash, I can move my production flag over to Joystick and remodel ButtonMash “properly.”

And by “properly,” I mean rootless Podman access to network attached storage. I have explored NFS (Network File System) extensively, but Podman does things with user namespaces that NFS doesn’t support. I may need to be open to an alternative, or to using root. SSHFS, which lays a file system over SSH, is one candidate I have yet to evaluate.

Installation

I did my usual routine for installing Linux. Remembering ahead of time that Joystick doesn’t work with Ventoy, I flashed a USB stick (Balena Etcher) large enough for the full 10 GB image I downloaded, then did a clean install over the previous Rocky Linux installation. I also adjusted Joystick’s boot order so Rocky 9.4 loads by default. The system dual boots with Linux Mint, where I did an apt update/upgrade.

Booted into Rocky, I enabled the Cockpit web interface for remote management, a feature I had selected during installation. I created a limited user for running Podman, enabled lingering on it, got Cockpit logged into it as a “remote host,” and disabled normal password interactions. I pulled Podman images for Caddy, Nextcloud, MariaDB, Redis, BusyBox, and Vaultwarden, and gave each a directory with a start.sh and a mountpoint. I excluded Pi-Hole and Unbound from my planned lineup because those functions are now handled by our router, and they would have required messing with special ports.
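
For the record, the limited user setup boils down to two commands – a sketch with a placeholder user name:

sudo useradd podmanuser
sudo loginctl enable-linger podmanuser

Lingering is what lets the user’s services and containers keep running without an active login session.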

Troubleshooting

NFS fought me hard, though. It didn’t help that Cockpit’s terminal kept glitching out on me. I had noticed this behavior on ButtonMash’s Cockpit as well. Rebooting and refreshing did nothing, but I found I could regain control by navigating my browser away and back. I eventually got to thinking about what other component might be at fault, and my suspects came down to a bad version of Firefox, or Wayland. I ran Firefox over SSH from a machine still running Xorg; it was Wayland.

And at this point, my workstation had a record-bad meltdown, which ate my remaining blog time. Be sure to read about it next week!

Takeaway

Projects get interrupted. Something comes up, and grand plans wait in the background. I’m tired of my server work doing just that, but it’s happened again.

Final Question

My longest enemy remains: Rootless Podman vs. Network Storage. How do I do it? I look forward to hearing from you in the comments below or on my Socials!

Study Month: Week Four

Good Morning from my Robotics Lab! This is Shadow_8472 with a small update on what I learned this week. Let’s get started!

Not much progress happened this week. I won’t be clustering this month, but I did study the High Availability Add-On. It includes a package called Pacemaker, but I don’t know if that’s an independent project, a common library, or what. Furthermore, I don’t know if a high availability cluster is even right for me. I circled back around to Beowulf clusters after over a year of struggle, but couldn’t conclusively tell whether Red Hat’s implementation qualifies as one. Best I can figure, you “Beowulf” once you have your task working on the first node and need to expand.

With that said, I did have some trouble along the way. Joystick, my new Rocky Linux 9 installation, won’t boot – the computer keeps going to Mint no matter what I say in the BIOS. I tried running update-grub on Mint, but that just tidied away the Debian entry with no new Rocky option. I vaguely remember something like this happening before, but I could be misapplying a memory. I want to try a GRUB disk; if that doesn’t solve it, I may need to reinstall. I still have the USB stick.
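
One guess worth trying first, based on the LVM2 wrinkle I hit while repairing ButtonMash from Debian: Mint may simply be unable to read Rocky’s LVM2 partitions, leaving os-prober nothing to find. A sketch, assuming that diagnosis:

sudo apt install lvm2
sudo vgchange -ay    # activate any newly readable volume groups
sudo update-grub     # re-run os-prober now that LVM is visible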

On a tangentially related note, I checked in on ButtonMash when Joystick didn’t boot and found it hadn’t been updated in a while. It also thought it was July 2018, and it considered everyone’s SSL certificates invalid until I had completely fixed the time. I suspect the motherboard’s clock battery may be failing.
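
Fixing the clock should be as simple as re-enabling NTP sync and letting chronyd catch up – roughly:

sudo timedatectl set-ntp true
timedatectl status    # look for "System clock synchronized: yes"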

In conclusion for this month, I’ve been burned out. It’s been nice not having the near-constant stress of finishing a project or taking a walk of shame to split it off into [yet] another week, but at the same time, I didn’t get much done at all.

I still have a goal of combining multiple computers into a single, more powerful one behind a unified Cockpit experience. If I think about it though, I might be after some sort of Podman clustering technique, and I’ll still be no better off at getting Podman working over NFS than when I started!

Rocky Server Stack Deep Dive: 2023 Part 5

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am learning more about Podman Quadlets for my homelab. Let’s get started!

Systemd and Quadlets

From my incomplete research going into this topic, I already know Quadlet is a system for efficiently integrating Podman containers with Systemd. It was merged into Podman v4.4, and finding a distribution with both that and legacy BIOS support – along with a list of other requirements – was a small pain.

But what is Systemd? In short: Systemd is the init process – a process that manages other processes – used by most Linux distributions that aren’t optimizing for a low RAM or storage footprint. As it turns out, I’ve already had minimal exposure to it while writing unit files for NFS [auto]mounts and a static IP address on Debian. Systemd in turn bases units off these unit files to manage the operating system.
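
For a taste of what those hand-written unit files look like, here is a sketch of an NFS mount unit with placeholder host and paths (the file name must mirror the mount path):

# /etc/systemd/system/mnt-photos.mount
[Unit]
Description=Photo share from the NAS

[Mount]
What=nas.lan:/photos
Where=/mnt/photos
Type=nfs
Options=_netdev

[Install]
WantedBy=multi-user.target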


While Systemd unit files defining Podman containers can be written by hand, Quadlet automates their creation based off simpler unit files of its own: .container, .network, .volume, and .kube. The first three look similar enough to concepts I already know that I figure I could hack an example into doing what I need.

But I’m interested in pods. With .pod unit files only a controversial feature request at best, that leaves me to explore .kube files, which run Kubernetes YAML files. I know nothing about writing Kubernetes YAML files from scratch, and I refuse to cram for them Thanksgiving week.

My project died here for a few hours. One Systemd tutorial brought up Syncthing in an example, and I spent a while on a tangent looking at that, but it too is too large to cram for this week. I unenthusiastically browsed back to Kubernetes, and found:

podman generate kube

Looks like I just might get away with adapting my scripts after all. With this in mind, I copied my files from my laptop’s Debian drive over to its new-last-week Rocky 9 installation. Focusing on Nextcloud, I cleared out my dead-end work with FUSE, abstracted volumes, and other junk before realizing BusyBox was likely a more suitable testing ground.

My First Kubernetes File

I came up with the following bash script for such a pod:

#!/bin/bash
# Tear down any previous instance, then rebuild the pod from scratch.
podman pod stop busyBoxPod
podman pod rm busyBoxPod
podman pod create busyBoxPod
# Create an interactive, self-cleaning BusyBox container inside the pod.
podman create \
--pod busyBoxPod \
--name BusyBox \
--volume fastvolume:/root/disk \
-it \
--rm \
busybox

And here is the Kubernetes YAML that `podman generate kube` produced from the running pod:

# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-4.6.1
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2023-11-23T01:29:45Z"
  labels:
    app: busyBoxPod
  name: busyBoxPod
spec:
  containers:
  - image: docker.io/library/busybox:latest
    name: BusyBox
    stdin: true
    tty: true
    volumeMounts:
    - mountPath: /root/disk
      name: fastvolume-pvc
  volumes:
  - name: fastvolume-pvc
    persistentVolumeClaim:
      claimName: fastvolume

I saved this output as busyBoxPod.yml and returned to Nextcloud.
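
For the record, the whole round trip, as I understand it, compresses to this (the script file name is a placeholder):

./busyBoxPod.sh                                    # prototype script above
podman generate kube busyBoxPod > busyBoxPod.yml   # export the pod as YAML
podman pod stop busyBoxPod && podman pod rm busyBoxPod
podman play kube busyBoxPod.yml                    # rebuild it from YAML alone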

Nextcloud put up a small tantrum getting re-updated for Podman 4.6.1. I had to look up how to use Podman secrets and apply :z to volumes to satisfy SELinux. Redis, however, refused to accept a password from Podman secrets, so I rolled back that change; the pod should insulate it anyway. I got it to the point where it needed a domain name.
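
Podman secrets, for anyone else looking them up, work roughly like this – names are illustrative, and I’m relying on the mariadb image reading MARIADB_ROOT_PASSWORD:

printf '%s' 'changeMe' | podman secret create mariadb_root_password -
podman create \
--pod nextcloudPod \
--name MariaDB \
--secret mariadb_root_password,type=env,target=MARIADB_ROOT_PASSWORD \
--volume mariadbvolume:/var/lib/mysql:z \
docker.io/library/mariadb:latest

The :z suffix asks Podman to relabel the volume so SELinux lets the container touch it.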

Branching out to bring up Pi-Hole and Caddy, I learned that the default Unbound configuration in the container I used only forwards DNS requests to Cloudflare. I’ll want to fix this later. I used firewall-cmd to forward the HTTP, HTTPS, and DNS ports to unprivileged ports for the rootless containers.

Takeaway

UNCLE! I find more and more of my time supposedly spent working on the server goes to procrastinating, stressing over minutiae, or blankly staring at my screens when I muster enough focus to ignore distractions. There’s no way around it; I’m officially burned out on this project. I’ll maybe come back to it after the new year. I really wanted to get my .kube files working for at least Pi-Hole and Caddy, but it’s going to be a hard pass at the moment.

Final Question

I’m considering covering a free/open source game or few over December. What are your recommendations?

I look forward to hearing from you on my Socials!

I Have a Cloud!!!

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am celebrating my Project of the Year. I have Nextcloud installed! Let’s get started!

My Nextcloud Experience in Brief

This week, I achieved a major milestone for my homelab – one I’ve been poking since at least early March of this year when I noted, “Nextcloud has been a wish list item since I gave up using Google’s ecosystem,” [1].

As I learned more about the tech I’m working with, I added specifications for storing scanned pictures on high-capacity but slow hard disks while keeping smaller files available at solid-state speed – only to learn later that rootless Podman is incompatible with NFS. I studied “Docker Secrets” to learn best practices for password management – only to move the project to an older version of Podman without that feature. One innovation to survive this torrent of course corrections is running the containers in a pod, but even that got shifted around in the switch from Rocky 8 to Debian.

Two major trials kept me occupied for a lot of this time. The first involved getting the containers to talk to each other, which they refused to do over “localhost” but accepted over a mostly equivalent IPv4 loopback address.

The second was the apparent absence of a database despite multiple attempts to debug my startup script. Nextcloud was talking to MariaDB, but it was as if MariaDB didn’t have the database specified in the command that created its container. To investigate, I created an auxiliary MariaDB container in client mode (thank-you-dev-team-for-this-feature!) and saw enough to make me think there was none. I wasn’t sure, though.

One Final Piece

# Remove MariaDB's "dirty" volume
podman volume rm MariaDBVolume
# Reset everything
./resetVolumes
./servicestart

There was no huge push this week. I came across an issue on GitHub [2] wondering why there was no database being created. By starting a bash shell within the MariaDB container, I found there were some files from some long-forgotten attempt at starting a database. All I had to do was completely reset the Podman volume instead of pruning empty volumes as I had been doing.

Future Nextcloud Work

Now that I have the scripts to produce fresh instances, I still have a few work items I want to keep in mind.

I expect to wipe this instance clean and create a proper admin account separate from my main username, a practice I want to get better at when setting up services.

Adjacent to this: I’ll want access on my tablet, which will entail getting the Bitwarden client to cooperate with my home server.

I still want my data on GoldenOakLibry. I’m hoping I can create a virtual disk or two to satisfy Podman, which RedLaptop can in turn relay over NFS to network storage.

Final Question

Have you taken the time to set up your own personal cloud? I look forward to hearing about your setup in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, “I Studied Podman Volumes,” letsbuildroboticswithshadow8472.com, March 6, 2023. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2023/03/06/i-studied-podman-volumes/. [Accessed: Oct. 2, 2023].

[2] rosshettel, thaJeztah, et al., “MYSQL_DATABASE does not create database,” github.com, July 9, 2016. [Online]. Available: https://github.com/MariaDB/mariadb-docker/issues/68. [Accessed: Oct. 2, 2023].

My Nextcloud Sees its Database, but…

Good Morning from my Robotics Lab! This is Shadow_8472, and today is another shot at Nextcloud. Let’s get started!

It almost needs no introduction at this point. I’ve lost track of how long I’ve been after a Nextcloud instance – and even the number of times I’ve complained about the same. I’ve demoed it using an integrated database, and I’ve since gotten its container into a Podman pod. My task today is getting the paired MariaDB container to behave.

Let’s start with my first error. Nextcloud is starting for the first time and it’s time to make an admin account.

Failed to connect to the database: An exception occurred in the driver: SQLSTATE[HY000] [2002] No such file or directory

The container was launched with flags providing environment variables that point Nextcloud at the database and specify the username/password to get in. I found someone with the exact same problem four years ago on Stack Overflow, and if I understand user2880156’s explanation, “localhost” is not the same as spelling out a loopback IP address like 127.0.0.1 [1]. While some networked applications don’t care how they interact as server and client when on the same computer [namespace?], Nextcloud and MariaDB are not such a pair. As best I can tell, the client treats the name “localhost” as a request for a Unix domain socket, while an explicit IP forces a TCP connection. What matters to me now is that replacing “localhost” with “127.0.0.1” got me to my next error.
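
In container terms, that fix is a single environment variable – a hypothetical excerpt of the Nextcloud creation flags, using the variable names the official image documents and placeholder values:

podman create \
--pod nextcloudPod \
--name Nextcloud \
-e MYSQL_HOST=127.0.0.1 \
-e MYSQL_DATABASE=nextcloud \
-e MYSQL_USER=databaseUser \
-e MYSQL_PASSWORD=changeMe \
docker.io/library/nextcloud:latest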

MySQL username and/or password not valid You need to enter details of an existing account.

I’ve checked my environment variables more times than I can count. The required usernames and passwords were copy-pasted between the respective environment variables. The best lead I have is that the container may be bugged. People online have reported success manually creating the full-permissions non-root account the database container is supposed to create on first run.

This insight led me to seek a manual fix from within the pod – specifically from a MariaDB command line, as mentioned in the container’s documentation. After trying several ways to modify a terminal command to run under Cockpit, I gave up, re-added a couple of flags I had been avoiding, and got the interface I was after rendered to both bash and Cockpit. I additionally split the command back out into a separate script.

The MariaDB command line is unlike any I’ve used before. My first impressions are that commands are in all caps and variables are lower case. My final achievement before running out of time was extracting a list of user names (root, healthCheck, and databaseUser), where they’ve logged in from (variations on locations within the pod), and a partial hexadecimal readout of their passwords.
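
For reference, my queries looked roughly like the first line below; the manual account fix people reported would look something like the rest (names from my pod, placeholder password):

-- List accounts and where they may connect from
SELECT User, Host FROM mysql.user;
-- Manually create the application account the container should have made
CREATE USER 'databaseUser'@'%' IDENTIFIED BY 'changeMe';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'databaseUser'@'%';
FLUSH PRIVILEGES;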

Takeaway

Solving localhost vs. 127.0.0.1 has given me insight into Cockpit’s curious behavior when remoting into itself with localhost. Command line access to the database from within the pod is also huge.

Final Question

I’m not taking bets, and I’m afraid to ask, but what additional roadblocks should I expect?

Work Cited

[1] akrea, et al., “nextcloud and mariadb (both) on docker: SQLSTATE[HY000] [2002] No such file or directory,” stackoverflow.com, Mar. 10, 2019–Mar. 22, 2023. [Online]. Available: https://stackoverflow.com/questions/55088828/nextcloud-and-mariadb-both-on-docker-sqlstatehy000-2002-no-such-file-or-d. [Accessed: Sept. 4, 2023].

Podman-Nextcloud: Climb Shorter Mountains

Good Morning from my Robotics Lab! This is Shadow_8472 and today it’s bully or be bullied as I take another swing at getting my Nextcloud instance even partially usable. Let’s get started!

If there’s one long-term project of mine that just loves humiliating me, it’s getting Nextcloud operational. My eventual goal is to have it running in a rootless Podman container with a way to quickly move it to an auxiliary server. My strategy thus far has been to prepare three Podman volumes hosted on GoldenOakLibry (NAS) over NFS, serving the speed needs of the MariaDB and Nextcloud volumes with an SSD and the capacity needs of the PhotoTrunk volume with a RAID 5 array of HDDs.

NAS: Network Attached Storage

NFS: Network File System

SSD: Solid State Drive

HDD: Hard Disk Drive

Lowering Expectations

I’ve lost count of how many times NFS has given me grief on this project, so I eliminated it. I moved the SSD from GoldenOakLibry into ButtonMash, my main server computer. I added it to /etc/fstab – bricking Rocky Linux. ButtonMash is dual booted, so I booted into Debian for repairs.

Rocky’s installer uses an LVM2 format, which Debian can’t read by default. An lvm2 package exists, and I installed it. LVM2 logical volumes show up in lsblk as sub-partitions of an actual partition, and it is these sub-partitions that get mounted. For example:

sudo mount /dev/rl_buttonmash/root /mnt/temp

mounts the sub-partition that shows up as rl_buttonmash-root. While I was only after a quick fix, it’s a very good thing when each side of a dual-booted machine can repair the other, and mounting a file system is an important tool in that kit.

Upon closer inspection, a contributing factor to bricking Rocky was the root account being locked. The computer booted into an emergency mode and got stuck in a loop ending with “Press ENTER to continue…” Unlocking root didn’t get me anywhere when I looked at the logs per the loop’s recommendation, but the command lsblk -f clued me in that I was mounting the drive with the wrong file system type, an error soon remedied once discovered.
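
Two details would have spared me the whole loop – a sketch with placeholder UUID and mountpoint:

lsblk -f    # shows each partition's actual file system type
# /etc/fstab: 'nofail' keeps a bad mount from dropping boot into emergency mode
UUID=xxxx-xxxx  /mnt/data  ext4  defaults,nofail  0 2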

Project Impossible

The move hardly fixed anything. I kept getting NFS-related errors when trying to run the pod, even after moving to a new mountpoint I’d never touched with NFS automounts. I even tried mounting the volumes using hard links pointed at the mounted “data drive,” and I still couldn’t get a working Nextcloud instance. Somewhere in my shambling among the apparently limited content available on this topic online, I found the following warning in Oracle’s Podman documentation:

Caution: When containers are run by users without root permissions, Podman lacks the necessary permissions to access network shares and mounted volumes. If you intend to run containers as a standard user, only configure directory locations on local file systems [1].

Rootless Podman lacks network share permissions. OK, so NFS is out unless I can selectively give Podman network-share permission without going full root. Until then, Podman is limited to the local disk, and if I’m understanding this warning correctly, mounted drives are also off the table. My plans for a Photo Trunk upgrade may be grounded indefinitely, and with ButtonMash’s Rocky drive being only 60 GB, I’m not looking to burden it with anything resembling bulk storage.

Takeaway

The next logical move is to rebuild the project on a computer with more storage. Barring a full makeover of ButtonMash, I do have my Red Laptop as an auxiliary server. I made a new account on it, but in all reality, this inspiration came after my research cutoff. It’s a project for another week once again.

Final Question

My project directory is messy with scripts to the point where I started a README file. Have you ever had a project so involved as to need one?

I look forward to hearing about it in the comments below or on my Socials.

Work Cited

[1] Oracle, “Configuring Storage for Podman,” oracle.com. [Online]. Available: https://docs.oracle.com/en/operating-systems/oracle-linux/podman/podman-ConfiguringStorageforPodman.html. [Accessed: July 27, 2023].

My PiHole is “Half Baked”

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am installing PiHole. With luck, I’ll also be configuring some of its other functions to augment my home’s network. Let’s get started!

PiHole, Take II

I can rant about the evils of Google ‘til boredom do its part. However, this search engine is somewhere between inconvenient and impossible to ignore, given its impressive list of “hobbies” from STEM projects to smartphones. It’s an open secret few care to think about that their empire is built on user exploitation. I installed ad-blocker browser plugins over their aggression last presidential election cycle.

Earlier this month, I read about Manifest V3, Google’s new browser-extension API. Its precautions against spyware just so happen to cripple ad blockers, among other legitimate plugins. This walking conflict of interest is set to roll out January 2023, and Firefox is going along with it.

When a browser loads a web page, it asks a DNS service to translate the page’s URL into an IP address. It then finds, loads, and renders the page at that IP. This may involve loading other pages – such as ads – as elements of the original page. Network ad blockers protect you by fudging the addresses of known-bad domains.
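
You can watch the fudging happen with a pair of lookups – a sketch with placeholder addresses:

dig +short ads.example.com @192.168.0.1    # upstream router: the real address
dig +short ads.example.com @192.168.0.2    # PiHole: 0.0.0.0, a dead end

Instead of the ad server’s address, PiHole hands back an unroutable answer, and the ad never loads.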

Objectives

My main goal this week is to kill ads across my home network. Follow-up objectives include advanced PiHole features and a private DNS for even better protection.

Night 1

My first attempt at PiHole was messy. I set up PiHole OCI/“Docker” containers across my two servers – ButtonMash and my old laptop. Like before, the main router skipped IPs on me. I had it repaired within an hour, thanks to that same laptop doubling as a workstation with a static IP. With the router upgrade at my upstairs workstation, I easily archived its settings and outfitted it with its own wider-network static IP – complete with a netmask wide enough to chase down its rogue counterpart should it shift again (did I have the laptop’s static IP netmask configured incorrectly this whole time?!).

Surprise! The expanded subnet didn’t work because the rogue router had its own subnet mask that left me outside it. The dance was too involved for a play-by-play, but I only really felt helpless while trying to avoid hiking around to different workstations to clean up after this failed networking spell. As I reassembled the router for normal operation, I reasoned that my router’s firmware is hardwired not to accept a DNS server on a LAN connection, which is exactly what I’m trying to give it.

Flashing open source firmware is out of the question. For one, I wouldn’t know how to fix it if it failed, and I don’t have a replacement. Two: apparently its chipset manufacturer isn’t a fan of open source – the help thread I spotted recommended contacting OP’s government representative if he wanted to do anything about it.

Night 2

I did a bit of research before dismantling the network again. DHCP settings include an optional field for DNS. This should let me direct computers straight to PiHole instead of relaying requests through a convoluted workaround involving a NAT table and possibly causing a network loop.
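
On OpenWRT, that optional field is DHCP option 6 – a sketch with PiHole’s address as a placeholder:

uci add_list dhcp.lan.dhcp_option='6,192.168.0.2'
uci commit dhcp
/etc/init.d/dnsmasq restart

Clients pick up the new DNS server the next time they renew their lease.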

This means each router is now a separate task. The responsible thing to do now is ensure my subnet router can behave before working on the main one. It’s not long before I fry my DNS settings. Navigation around my local network remains unaffected, but I eventually resort to restoring yesterday’s backup, re-applying the static IP, and updating the backup.

My best bet from here is to finalize my PiHole install. My initial container creation was the absolute minimum: port 80 web interface, port 53/TCP+UDP. There’s a lengthy list of environment variables to browse.

A Few Days Later

Jackpot! My mind cleared enough before bed to skim PiHole Docker’s documentation on GitHub. It has a list of example deployments – including a shell script. I converted it for Podman, entered my environment variables, and – during debugging – axed the logic for relaying logs, as it was causing problems and I can view logs directly with Cockpit-Podman.

PiHole User

But where to land it? I’ll eventually integrate it as I master Caddy. Leaving the container running as root would let it use the proper ports, but I know better. Thanks to discoveries I spun off into last week’s project, I can now make more underprivileged, Cockpit-enabled users than I will ever need by using the loopback address range (127.0.0.1/8).

The run script was easy to copy over to my new PiHole user. I gave it the directories it wanted as mountable volumes and shifted ports around until I was happy. I took the time to tidy up my firewall, combining a couple of related entries and re-closing the normal DNS port.

I remember having issues with Vaultwarden’s stability over the course of days to weeks. That problem was only occasionally annoying, since Bitwarden needs its home server only when modifying the password vault, but PiHole will be sorely missed the moment it goes down. The one place I found the solution was the official Podman troubleshooting guide on GitHub [1]:

loginctl enable-linger userName

I sadly could not verify this was the same fix I used for my Vaultwarden long-term issues before, but it’s not entirely unfamiliar, and it’s my best-informed guess.

DNS Port Forwarding

With PiHole secured in its own easily accessible account, I soon experienced how picky DNS clients are about using the privileged port 53. All my attempts at manually telling OpenWRT to use port 5300 failed. I expect the story will be the same if I try on my main router.

I found the solution where Woody from b-woody.com blogged about almost the exact same project last May [2]: forward port 53 to port 5300. Paranoid about goofing my firewall over the command line, I ran my version of Woody’s commands past r/TechSupport’s Discord channel. Moderator Donjuanal confirmed my omission of a trailing “:toaddr=” was fine, but questioned my blind use of TCP, explaining that DNS clients default to UDP for speed.

sudo firewall-cmd --zone=public --add-forward-port=port=53:proto=udp:toport=5300 --permanent

Even with this measure in place, I had to open the web console and tick Settings > DNS > Interface settings > Potentially dangerous options > Permit all origins before my local requests made it through. This may need to be addressed later.

Takeaway

I am so glad to have PiHole installed, even if it doesn’t appear to be doing much more than the uBlock Origin Firefox plugin. I’m researching the next segment, though, and I estimate another week or more of work before it is configured alongside a private DNS server. Worth noting: Firefox is keeping the features ad blocking requires, despite potential security concerns. This is a good enough stopping point.

Final Question

Do you use PiHole? I’d be happy to hear about your experience.

I look forward to hearing your answers in the comments below or on my Socials.

Works Cited

[1] eriksjolund, “Podman Troubleshooting: A list of common issues and solutions for Podman,” github.com, Nov. 19, 2022. [Online]. Available: https://github.com/containers/podman/blob/main/troubleshooting.md. [Accessed: Jan. 30, 2023].

[2] Woody, “Run PiHole in a rootless Podman container,” b-woody.com, May 12, 2022. [Online]. Available: https://b-woody.com/posts/2022-05-12-pihole-on-a-rootless-podman-container/. [Accessed: Jan. 30, 2023].

[3] Can You Block It, “Can You Block It? A Simple Ad Block Tester,” canyoublockit.com, 2021. [Online]. Available: https://canyoublockit.com/. [Accessed: Jan. 30, 2023].

I Glitched Cockpit and Discovered Multi-user Login

Good Morning from my Robotics Lab! This is Shadow_8472 with a side project for the week. Let’s get started!

My mother needed an extra browser, so I installed Firefox and hardened it a little. I took the liberty of adding the Bitwarden plugin and encouraging her to make an account on my self-hosted instance. Remembering my failure so far to diagnose the “Network Error” blocking login, I spared the time to learn that new Bitwarden clients are slightly incompatible with old Vaultwarden servers.

I could easily have just updated Vaultwarden and left a note on the blog’s Discord. Instead, I felt like adding VaultwardenUsr@localhost to Cockpit with “Add new host.” This stunt worked at the cost of forwarding shadow8472@ButtonMash to VaultwardenUsr@ButtonMash when logging in. Relogging didn’t help, and the hosts list saw VaultwardenUsr as the primary login – disallowing me from removing it – and as a remote login – blocking my attempts to add my real primary account back in with the same stunt.

While exploring this bug, I logged into my old laptop server and linked its Cockpit back to ButtonMash without getting forwarded to VaultwardenUsr. At this point, I submitted a bug report to Cockpit’s GitHub. I soon found the malformed host list at /etc/cockpit/machines.d/99-webui.json. I backed it up, purged the malformed entry, and updated the GitHub issue with my workaround.

Out of curiosity, I added VaultwardenUsr@192.168.0.— as an alternate host. This sends packets on an extra detour, but it works as required. Only after all this did I update my Vaultwarden image from Docker Hub and deploy a new container from it using the same command as the last two successful times.

Note: While working on next week’s project, I logged into VaultwardenUsr@127.0.0.1 and other loopback IPs with no problems. It’s just name@localhost that causes problems.

Takeaway

One day for the win! My push for PiHole and supporting network projects has been intense lately, so it’s great to have a smaller project where I still learn while doing something important.

Final Question

Have you ever misused a software feature successfully? What challenges did you face before getting it to work how you had in mind?

I look forward to hearing your answers in the comments below or on my Socials.