Server Rebuild With Quadlet

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am continuing my work on my home’s network. Let’s get started!

State of the Homelab

Bitwarden. I self-host it using Vaultwarden, a third-party server implementation. Done properly, it fits nicely into a larger homelab stack, but its OCI container can also stand alone in a development configuration. Due to skill limitations, that standalone configuration is what I’ve been running. My recent network work has invalidated my manually self-signed certificates, and I’d rather focus my efforts on upgrades instead of re-learning the old system to maintain it.

Today, I am working on my newest server, Joystick (Rocky Linux 9). I compiled some command-by-command notes on using `podman generate systemd` to make self-starting, rootless containers, but just as I was getting the hang of it again, a warning message encouraged me to use a newer technique: Quadlet.

Quadlet

Quadlets? I’ve studied them before but failed to grasp the key concepts. This time it finally clicked: they replace not just running `podman generate systemd` once I have a working prototype, but also everything I might want to do leading up to that point, including defining Podman networks and volumes. Write your Quadlet definitions once, and the system handles the rest.
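As a minimal sketch of what I mean (every name and image here is a placeholder of my own, not from any tutorial), a rootless setup is just a few small files under ~/.config/containers/systemd/:

# ~/.config/containers/systemd/demo.network
[Network]

# ~/.config/containers/systemd/demo-data.volume
[Volume]

# ~/.config/containers/systemd/demo.container
[Unit]
Description=Demo container

[Container]
Image=docker.io/library/alpine:latest
Exec=sleep infinity
Network=demo.network
Volume=demo-data.volume:/data

[Service]
Restart=always

[Install]
WantedBy=default.target

After a `systemctl --user daemon-reload`, the generated demo.service starts and enables like any other unit, and the network and volume get created on demand.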

The tutorial I found that best matches my use case is at mo8it.com [1]. Follow the link under Works Cited for the full text. It’s very detailed; I couldn’t have done a better job myself. But it doesn’t cover everything, like how `sudo su user` isn’t a true login as far as `systemctl --user …` is concerned. I had to use a twitchy Cockpit terminal for that (Wayland-Firefox bug).
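For anyone hitting the same wall: my understanding is that `systemctl --user` wants a real login session for the target user, so alternatives to a Cockpit terminal might look like the following (the username is a placeholder):

# open a proper login shell as the service account
sudo machinectl shell caddyuser@
# let that user's units run without anyone logged in
sudo loginctl enable-linger caddyuser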

Caddy

Caddy is the base of my dream homelab tech tree, so I’ll start there. My existing prototype calls for a Podman network, two Podman volumes, and a Caddyfile I’m mounting in from the file system. I threw together caddy.container based on my script, but only the supporting network and volumes showed up. Systemd only picked up “mysleep.container,” an example from Red Hat.

As it turned out, caddy.container was missing a capital letter. I found the problem by commenting out lines, reloading, and running `systemctl --user list-unit-files` to see when the unit stopped loading. Likewise, my Caddyfile volume had a file path bug to squash.
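For reference, a caddy.container along these lines might look something like the sketch below; the ports, paths, and file names are illustrative guesses rather than my exact unit, and the key names are case-sensitive, which is exactly where I tripped:

# ~/.config/containers/systemd/caddy.container
[Container]
Image=docker.io/library/caddy:latest
Network=caddy.network
Volume=caddy-data.volume:/data
Volume=caddy-config.volume:/config
Volume=/home/caddyuser/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
PublishPort=8000:80
PublishPort=44300:443

[Install]
WantedBy=default.target

When a unit silently fails to appear, the Quadlet generator can also be run by hand to see what it complains about:

/usr/lib/systemd/system-generators/podman-system-generator --user --dryrun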

Vaultwarden

Good, that’s started and should be stable. On to Vaultwarden. I updated both ButtonMash’s and Joystick’s NFS unit files, copying over the relevant files, but Joystick’s SELinux didn’t like my user’s fingerprints (owner/group/SELinux context) left on the NFS definitions. I cleaned those up with a series of cp and mv commands under sudo, and then I could enable the automounts.
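If I had to reconstruct the cleanup, it looked roughly like this (the share name is a placeholder); restorecon asks SELinux to reset the file contexts to their defaults:

sudo cp ~/mnt-library.automount /etc/systemd/system/
sudo chown root:root /etc/systemd/system/mnt-library.automount
sudo restorecon -v /etc/systemd/system/mnt-library.automount
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-library.automount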

Vaultwarden went up with simple enough debugging, but the challenge was in accessing it. I toyed with Cerberus/OPNsense (hardware firewall) DNS overrides until Caddy returned a test message from <domain.lan>:<port#>.

Everything

My next battle was with Joystick’s firewall: I had forgotten to forward TCP traffic from ports 80 and 443 to 8000 and 44300, respectively. Back on Cerberus, I had to actually figure out the alias system and use it. Firefox needed Caddy’s root certificate. Bouncing back to the network Quadlet, I configured it according to another tutorial doing something very similar to what I want [2], though I configured mine without an external DNS. A final adjustment to my Caddyfile to correct Vaultwarden’s fully qualified domain name, and I was in – padlock and everything.
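The firewall fix was the same firewalld incantation I used on ButtonMash last year, just TCP this time (sketching from memory):

sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --reload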

Takeaways

I come out of this project with an intuition for managing Systemd files – especially Quadlets. The Quadlet workflow turns Podman container recipes into Systemd units, and a working container will keep working indefinitely – barring bad updates. I would still recommend prototyping with scripts when stuck, though. When a Quadlet fails, there is no obvious error message to look up – it just fails to show up.

Even though Joystick is still new, a lot of my time on it this week went to diagnosing my own sloppiness. Reboots helped when I got stuck, and thanks to Quadlet, I don’t have to worry about the kind of spaghetti scripts I originally organized ButtonMash with – the ones that kept me from ever stabilizing the victory I re-achieved today.

Final Question

Nextcloud is going to be very similar; I will make it a pod along with MariaDB and Redis containers. But I am still missing one piece: NFS. How do I do that?

I look forward to your answers below or on my Socials.

Works Cited

[1] mo8it, “Quadlet: Running Podman containers under systemd,” mo8it.com, Jan. 2-Feb. 19, 2024. [Online]. Available: https://mo8it.com/blog/quadlet/. [Accessed: Sept. 13, 2024].

[2] G. Coletto, “How to install multi-container applications with Podman quadlets,” giacomo.coletto.io, May 25, 2024. [Online]. Available: https://giacomo.coletto.io/blog/podman-quadlets/. [Accessed: Sept. 13, 2024].

Rocky Server Stack Deep Dive: 2023 Part 2

MAJOR progress! This week, I’ve finally cracked a snag that’s been holding me up for two years. Read on as I start a deep dive into my Linux server stack.

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am continuing renovations on my home server, ButtonMash. Let’s get started!

The daily progress reports system worked out decently well for me last week, so I’ll keep with it for this series.

Caddy is an all-in-one web server. My primary goal this week is to get it listening on port 44300 (the HTTPS port multiplied by 100 to get it out of the privileged range) and forwarding vaultwarden.buttonmash.lan and bitwarden.buttonmash.lan to the port Vaultwarden – the third-party Bitwarden server I use – is listening on over ButtonMash’s loopback (internal) network.
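To make the goal concrete, the Caddyfile I’m aiming for would look something like this sketch (the upstream port is a placeholder for wherever Vaultwarden actually listens):

{
	http_port 8000
	https_port 44300
}

vaultwarden.buttonmash.lan, bitwarden.buttonmash.lan {
	reverse_proxy 127.0.0.1:8080
}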

Tuesday Afternoon

From my Upstairs Workstation running EndeavourOS, I started off with a system upgrade and reboot while my workspace was clean. From last week, I remembered Vaultwarden was already rigged for port 44300, but straight away I also remembered that its preferred configuration is HTTP coming into the container, so I’ll be sending it to 8000 instead.

My first step was to stop the systemd service I’d set up for it and start a container without the extra Podman volume and ROCKET arguments needed for it to manage its own HTTPS encryption. Getting my test install of Caddy going was trickier. I tried to explicitly disable its web server, but figured it was too much trouble for a mere test, so I moved on to working with containers.

While trying to spin up a Caddy container alongside Pi-Hole, I ran into something called rootlessport hogging port 8000. I ran updates and rebooted the server – and then I realized I was trying to put both Caddy and Vaultwarden on the same port! I got the two running at the same time and arrived at Caddy’s slanted welcome page both by IP and via a Pi-Hole-served domain_name:port_number.

Subdomains are my next target. I mounted a simple Caddyfile pointing to Vaultwarden and got stuck for a while researching how I was going to forward ports 80 and 443 to 8000 and 44300, respectively. Long story short, I examined an old command I used to forward DNS traffic to Pi-Hole, and after much background research about other communication protocols, I decided to forward just TCP and UDP. I left myself a note in my administration home directory.

DNS: Domain Name System – finds IP addresses for URLs.

sudo firewall-cmd --zone=public --add-forward-port=port=8000:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=8000:proto=udp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=44300:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=44300:proto=udp:toport=44300 --permanent

I still don’t get a reply from vaultwarden.buttonmash.lan. I tried nslookup, my new favorite tool for diagnosing DNS, but it was by watching Caddy’s cluttered logs that I spotted it rejecting my domain name because it couldn’t authenticate it publicly. I found a “directive” to add to my reverse proxy declaration to use internal encryption.
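That directive is tls internal, which tells Caddy to sign the site with its own local CA instead of asking a public one. In a Caddyfile it sits like this (the upstream port is a placeholder):

vaultwarden.buttonmash.lan {
	tls internal
	reverse_proxy 127.0.0.1:8080
}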

But I still couldn’t reach anything of interest – because reverse-proxied traffic was just bouncing around inside the Caddy container! The easy solution – I think – would be to stack everything into the same pod, but I still want to try keeping everything in separate containers. Another easy solution would be to set the network mode to “host,” which comes with security concerns but would work in line with what I expected starting out. However, Podman comes with its own virtual network I can hook into instead of lobbing everything onto the host’s localhost as I have been doing. Learning this network will be my goal for tonight’s session.

Tuesday Night

The basic idea behind a Podman network is to let your containers and pods communicate. While containers in a pod communicate as if over localhost, containers and pods on a Podman network communicate as if on a Local Area Network, down to having their own IP address ranges.

My big question was whether this worked across users, but I couldn’t find anyone saying one way or the other. Eventually, I worked out a control test. After adding the default Podman network, “podman,” to the relevant start scripts, I used ip a where available to find the containers’ IP addresses. Pi-Hole then used curl to grab a “Hello World!” hosted by Caddy on the same user. I then curled the same ip:port from Vaultwarden’s container on another user and failed to connect. This locked-down behavior is expected from a security point of view.
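Reconstructed from memory, the control test looked roughly like this (container names, the address, and the port are placeholders):

# same user: find the Caddy container's address on the "podman" network
podman inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' caddy
# same user: Pi-Hole can fetch the test page
podman exec pihole curl http://10.88.0.5:80
# different user: the same curl from Vaultwarden's container fails to connect
podman exec vaultwarden curl http://10.88.0.5:80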

On this slight downer, I’m going to call it a night. My goal for tomorrow is to explore additional options and settle on one even if I don’t start until the day after. In the rough order of easy to difficult (and loosely the inverse of my favorites), I have:

  1. Run Caddy without a container.
  2. Run Caddy’s container rootfully.
  3. Run Caddy’s container in network mode host.
  4. Move all containers into a single user.
  5. Perform more firewalld magic. (Possibly a flawed concept)
  6. (Daydreaming!!) Root creates a network all users can communicate across.

Whatever I do, I’ll have to weigh factors like security and the difficulty of maintenance. I want to minimize the need for using root, but I also want to keep the separate accounts for separate services in case someone breaks out of a container. At the same time, I need to ask if making these connections will negate any benefit for separating them across accounts to begin with. I don’t know.

Wednesday Afternoon

I spent the whole session composing a help request.

Wednesday Night

The names I am after for higher-powered networking of Podman containers are Netavark and Aardvark. Between 2018 and around February 2022, it would have been Slirp4netns and its plethora of plugins. Approaching the end of 2023, that leaves around four years’ worth of obsolete tutorials against not quite two years of current information – and that’s assuming everyone switched the moment the new standard was released, which is an optimistic assumption to say the least. In either case, I should be zeroing in on my search.

Most discouraging is how most of my search results involving Netavark and Aardvark end up pointing back to the Red Hat article announcing their introduction for fresh installs in Podman 4.0.

My goal for tomorrow is to make contact with someone who can point me in the right direction. Other than that, I’m considering moving all my containers to Nextcloud’s account or creating a new one for everything to share. It’s been a while since I’ve been this desperate for an answer. I’d even settle for a “Sorry, but it doesn’t work that way!”

Thursday Morning

Overnight I got a “This is not possible, podman is designed to fully isolate users from each other, that includes networking,” on Podman’s GitHub from Luap99, one of the project maintainers [1].

Thursday Afternoon

Per Tuesday Night’s entry, I have multiple known solutions to my problem. While I’d love an extended discourse about which option would be optimal from a security standpoint in a production environment, I need to remember I am running a homelab. No one will be losing millions of dollars over a few days of downtime. It is time to stop the intensive researching and start doing.

I settled on consolidating my containers into one user. The logical choice was Pi-Hole’s: the home directory was relatively clean, and I’d only need to migrate Vaultwarden. I created base directories for each service, noting how I will need to make my own containers some day for things like game servers. For now, Pi-Hole, Caddy, and Vaultwarden are my goals.

Just before supper, I migrated my existing Pi-Hole from hard-mounted directories to Podman volumes using Pi-Hole’s Settings > Teleporter > Backup feature.
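In case I ever need to repeat this, the shape of the migration was roughly as follows (volume names and the web port are placeholders; /etc/pihole and /etc/dnsmasq.d are the two directories the Pi-hole image expects, and 5300 matches my existing DNS port forward):

podman volume create pihole-etc
podman volume create pihole-dnsmasq
podman run -d --name pihole \
  -v pihole-etc:/etc/pihole \
  -v pihole-dnsmasq:/etc/dnsmasq.d \
  -p 5300:53/tcp -p 5300:53/udp -p 8080:80/tcp \
  docker.io/pihole/pihole:latest
# then restore the Teleporter backup through the web interface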

Thursday Night

My tinkerings with Pi-Hole did not go unnoticed. At family worship, a couple family members reported some ads slipping through. At the moment, I’m stumped. If need be, I can re-migrate by copying the old instance with a temporary container that has both places mounted. My working assumption, though, is that it’s the normal cat-and-mouse shenanigans of blocklists needing to update.

It’s been about an hour, and I just learned that any-subdomain.buttonmash.lan and buttonmash.lan are two very different things. Every subdomain I plan to use on ButtonMash needs to be specified in Pi-Hole as well as in Caddy. With subtest.buttonmash.lan pointed at Caddy in Pi-Hole and the same subdomain pointed at my port 2019 Hello World! in the Caddyfile, I get a new error message. It looks like port 80 might be having some trouble getting to Caddy…

$ sudo firewall-cmd --list-all

forward-ports:
port=53:proto=udp:toport=5300:toaddr=

That would be only Pi-Hole’s port forward. Looking at the note I left myself Tuesday, I can see I forwarded ports 8000 and 44300 into themselves! The error even ended up in the section above. Here’s the revised version:

sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=udp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=udp:toport=44300 --permanent

I also removed Tuesday’s flubs, but none of these changes showed up until I used

sudo firewall-cmd --reload

And so, with Pi-Hole forwarding subdomains individually and the firewall actually forwarding the HTTP and HTTPS ports (never mind that incoming UDP is still blocked for now), I went to https://vaultwarden.buttonmash.lan and was greeted with Firefox screaming at me, “Warning: Potential Security Risk Ahead” as expected. I’ll call that a good stopping point for the day.
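For future reference, those per-subdomain Pi-Hole entries boil down to plain “IP name” lines; in Pi-hole 5 they end up in /etc/pihole/custom.list whether added by hand or through the Local DNS page. A placeholder sketch (the address stands in for ButtonMash’s actual IP):

192.168.0.10 buttonmash.lan
192.168.0.10 vaultwarden.buttonmash.lan
192.168.0.10 subtest.buttonmash.lan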

My goal for tomorrow is to finish configuring my subdomains and extract the keys my devices need to trust Caddy’s root authority. It would also be good to either diagnose my Pi-Hole migration or re-migrate it a bit more aggressively.

Friday Afternoon

To go any farther, I need to extract Caddy’s root Certificate Authority (CA) certificate and install it into the trust store of each device I expect to access the services I’m setting up. I’m shaky in my confidence here, but there are two layers of certificates: root and intermediate. The root key is kept secret and is used to generate intermediate certificates. Intermediate keys are issued to websites to be used for encryption when communicating with clients. Clients can then use the root certificate to verify that a site’s intermediate certificate is made from an intermediate key generated from the CA’s root key. Please, no one quote me on this – it’s only a good-faith effort to understand a very convoluted ritual our computers perform to decide whom to trust.

For containerized Caddy installations, this file can be found at:

/data/caddy/pki/authorities/local/root.crt
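Since my Caddy runs in a container with /data on a named volume, I most likely pulled the file out with podman cp, something along these lines (the container name is a placeholder):

podman cp caddy:/data/caddy/pki/authorities/local/root.crt ./caddy-root.crt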

This leads me to the trust command. Out of curiosity, I ran trust list on my workstation and lost count around 95, but I estimate between 120 and 150. To tell Linux to trust my CA, I entered:

trust anchor <path-to-.crt-file>
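Two small caveats from memory: depending on the distribution, writing to the system trust store may need root, and trust list doubles as a sanity check afterward – something like:

sudo trust anchor ./caddy-root.crt
trust list | grep -i caddy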

And then Firefox gave me a new warning: “The page isn’t redirecting properly,” suggesting an issue with cookies. I just had to correct some mismatched IP addresses. Now, after a couple years of working toward this goal, I finally have that HTTPS padlock. I’m going to call it a day for Sabbath.

My goal for Saturday night and/or Sunday is to clean things up a bit:

  • Establish trust on the rest of the home’s devices.
  • Finish Vaultwarden migration
  • Reverse proxy my web UIs to go through Caddy: GoldenOakLibry, Pi-Hole, Cockpit (both ButtonMash and RedLaptop)
  • Configure Caddy so I can access its admin page as needed.
  • Remove -p ####:## bound ports from containers and make them go through Caddy. (NOT COCKPIT UNTIL AVAILABLE FROM REDUNDANT SERVER!!!)
  • Close up unneeded holes in the firewall.
  • Remove unneeded files I generated along the way.
  • Configure GoldenOakLibry to only accept connections through Caddy. Ideally, it would only accept proxied connections from ButtonMash or RedLaptop.
  • Turn my containers into systemd services and leave notes on how to update those services
  • Set up a mirrored Pi-Hole and Caddy on RedLaptop

Saturday Night

Wow. What was I thinking? I could spend a month chewing on that list in and of itself, and I don’t see myself having the focus to follow through with everything. As it was, it took me a good half hour just to come up with the list.

Sunday

I didn’t get nearly as much done as I envisioned over the weekend because of a mental crash.

Nevertheless, I did do a little additional research. Where EndeavourOS accepted the root certificate immediately, such that Firefox displayed an HTTPS padlock, the process remains incomplete where I tried it on PopOS today. I followed the straightforward instructions for Debian-family systems on the Arch Wiki [2], but when I run update-ca-certificates, it claims to have added something no matter how many times I repeat the command, without any of the numbers actually changing. I’ve reached out for help.
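For reference, the Debian-family procedure amounts to copying the certificate into the local CA directory and regenerating the bundle – roughly this, with the file name as a placeholder:

sudo cp caddy-root.crt /usr/local/share/ca-certificates/caddy-root.crt
sudo update-ca-certificates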

Monday Morning

I’ve verified that my certificate shows up in /etc/ssl/certs/ca-certificates.crt. This appears to be an issue with Firefox and KDE’s default browser on Debian-based systems. I’ll decide another week whether I want to install the certificate directly into Firefox or explore the Firefox-Debian thing further.

Takeaway

Thinking back on this week, I am again reminded of the importance of leaving notes about how to maintain your system. Even a foggy AM brain can manage to jot down the relevant URL that made everything clear, where the same page may be difficult to re-locate in half a year.

My goal for next week is to develop Nextcloud further, though I’ll keep in mind the other list items from Friday.

Final Question

What do you think of my order of my list from Friday? Did I miss something obvious? Am I making it needlessly overcomplicated?

Let me know in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, Luap99, “How Do I Network Rootless Containers Between Users? #20408,” github.com, Oct. 19, 2023. [Online]. Available: https://github.com/containers/podman/discussions/20408. [Accessed: Oct. 23, 2023].

[2] Arch Wiki, “User:Grawity/Adding a trusted CA certificate,” archlinux.org, Oct. 6, 2022 (last edited). [Online]. Available: https://wiki.archlinux.org/title/User:Grawity/Adding_a_trusted_CA_certificate#System-wide_–_Debian,_Ubuntu_(update-ca-certificates). [Accessed: Oct. 23, 2023].

I Switched My Operations to Caddy Web Server

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am rebuilding my home server, Button Mash, from the operating system up. Let’s get started!

Caddy Over Nginx

I spent well over a month obsessing over Nginx, so why would I start over now? As far as I am concerned, Caddy is the piece of software for my use case. While I am sure Nginx is great at what it does, I keep slamming into its learning curve – especially when integrating Let’s Encrypt, a service for automating SSL/TLS certificates (HTTPS/the green padlock). Caddy builds that functionality in while still doing everything I wanted Nginx for.

The official Caddy install instructions [1] for Fedora, Red Hat, and CentOS systems are as follows:

$ dnf install 'dnf-command(copr)'
$ dnf copr enable @caddy/caddy
$ dnf install caddy

First of all, a new command: copr. Background research time! COPR (Cool Other Package Repositories) is a Fedora project I feel comfortable comparing to the Arch User Repository (AUR) or a Personal Package Archive (PPA): it lets users publish their own software repositories.

Installation went smoothly. When I enabled the repository, I had to accept a GPG key that wasn’t mentioned in the instructions at all. From a user’s point of view, GPG keys appear to fill a similar purpose to SSH keys: special numbers that use math to prove the packages really come from the people who signed them.

Caddy exposes an HTTP interface (a REST API – don’t ask, I don’t fully understand it myself) on the computer’s internal network, known as loopback or localhost, on port 2019. Caddy additionally serves everything over HTTPS by default. If it cannot convince Let’s Encrypt to give it a security certificate, it will sign one itself and tell the operating system to trust it. In other words, if I were not running ButtonMash headless (without a graphical interface), I’d be able to try connecting to localhost:2019 with a favorite browser, like at least one of the limited supply of Caddy tutorials does.
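Even without a browser, my understanding is the admin endpoint can be poked from the server itself with curl; the /config/ path should dump the current configuration as JSON:

curl http://localhost:2019/config/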

IP Range Transplant

I should have just done my experimentation on DerpyChips or something. Instead, I pressed on with trying to point a family-owned domain name at ButtonMash. This side adventure sprouted into last week’s post. In short: ButtonMash’s static IP was in conflict with what my ISP-provided equipment kept trying to assign it, resulting in an estimated 50% downtime from a confused router. Upgrading to the next gateway might have let us free up the IP range for the gaming router’s use, but it’s not out for our area yet. My father and I switched our network connections over to a “gaming router” we had lying about and enabled bridge mode on the gateway to supposedly disable its router functions. I have my doubts about how that’s actually implemented.

Most of our computers gladly accepted the new IP range, but GoldenOakLibry and ButtonMash – having static IPs – were holdouts. I temporarily reactivated a few lines of configuration on my laptop to set a static IP so I could talk with them directly and manually move them over to the new IP range, breaking GoldenOakLibry’s NFS shares and ButtonMash’s Vaultwarden respectively.

In the confusion, ButtonMash lost its DNS settings; those were easy enough to fix by copying a config line to point those requests at the router. GoldenOakLibry took a bit longer to figure out because the NFS shares themselves had to accept traffic from the new IP range, with the settings buried deep within its web interface. Once that was sorted, I had to adjust the .mount files in or around /etc/systemd/system on several computers. Editing note: while trying to upload, I found I could not access GoldenOakLibry on at least a couple of my machines. Note 2: I had to change the DHCP settings to the new IP range on my Raspberry Pi reverse Wi-Fi router. Systemd on both goofed systems needed a “swift kick” to fix them:

sudo systemctl start <mount-path-with-hyphens>.mount
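If it helps future me: the unit name in that command is just the mount point path with slashes swapped for hyphens, and systemd-escape will generate it when the path contains anything tricky (the path here is a placeholder):

systemd-escape -p --suffix=mount /mnt/golden-oak/library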

Repairs Incomplete

That left Vaultwarden. I was already in an it’s-broken-so-fix-it-properly mentality from the modem/router spinoff project. I got as far as briefly forwarding the needed ports for an incompletely configured Caddy to respond with an error message before deciding I wanted to ensure Bitwarden was locked down tightly before exposing it to the Internet. That wasn’t happening without learning Caddy’s reverse proxy, as I had put Vaultwarden exclusively onto a loopback port.

Speaking of loopback, I found the official Caddy tutorials lacking. They – like many others after them – never consider a pupil with a headless server. I have not yet figured out how to properly convince my other computers to trust Caddy’s self-signed certificates or how to open up the administration endpoint. That will come in another post. I did get it to serve content over HTTP by listing sites as http://<LAN address>, but Bitwarden/Vaultwarden won’t let me log in over plain HTTP, even on a trusted network. I also managed to confine the annoying log to a file.
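The log part, if memory serves, is the Caddyfile log directive; a sketch of a plain-HTTP site block with its access log sent to a file might look like this (address and paths are placeholders):

http://192.168.0.10 {
	root * /srv/www
	file_server
	log {
		output file /var/log/caddy/access.log
	}
}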

As far as I can tell, the administration API on port 2019 does not serve a normal web page. Despite my efforts, the most I have gotten out of it was an error reading “host not allowed.” I haven’t made total sense of it yet. I recognize some of the jargon being used, but its exact mechanics are beyond me for the time being.

Takeaway

Caddy is a powerful tool. The documentation is aesthetically presented and easy enough to understand if you aren’t skimming. But you will have a much better time teaching yourself when you aren’t trying to learn it over the network like I did.

Final Question

Do you know Caddy? I can tell I’m close, but I can’t know for sure if I’m there in terms of the HTTP API and just don’t recognize it yet. I look forward to hearing from you in the comments below or on my Discord server.

Works Cited

[1] “Install,” Caddy Documentation. [Online]. Available: https://caddyserver.com/docs/install. [Accessed: June 6, 2022].