Server Rebuild With Quadlet

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am continuing my work on my home’s network. Let’s get started!

State of the Homelab

Bitwarden. I self-host using Vaultwarden, a third-party server implementation. Done properly, it fits nicely into a larger homelab stack, but its OCI container can stand alone in a development environment. Due to skill limitations, I’ve been using it in this configuration. My recent network work has invalidated my manually self-signed certificates, and I’d rather focus my efforts on upgrades instead of re-learning the old system to maintain it.

Today, I am working on my newest server, Joystick (Rocky Linux 9). I compiled some command-by-command notes on using `podman generate systemd` to make self-starting, rootless containers, but just as I was getting the hang of it again, a warning message encouraged me to use a newer technique: Quadlet.

Quadlet

Quadlets? I’ve studied them before, but failed to grasp key concepts. It finally clicked, though: they replace not just running `podman generate systemd` once I have a working prototype, but also everything I might want to do leading up to that point, including defining Podman networks and volumes. Just write your Quadlet definitions once, and the system handles the rest.
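To make that concrete, the supporting definitions are just small INI-style files dropped in ~/.config/containers/systemd/. The file names and label below are my own illustration rather than anything from my actual setup, and the exact keys available depend on your Podman version:

```ini
# ~/.config/containers/systemd/caddy.network
# At next daemon-reload, systemd gains a generated unit that
# creates this Podman network on demand.
[Network]
Label=app=caddy

# ~/.config/containers/systemd/caddy-data.volume
# Same idea for a named volume.
[Volume]
Label=app=caddy
```

Unless overridden, the generated resources get prefixed names like systemd-caddy, which is worth knowing before you go hunting for them with `podman network ls`.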

The tutorial I found that best matches my use case can be found at mo8it.com [1]. Follow the link under Works Cited for the full text. It’s very detailed; I couldn’t have done a better job myself. But it doesn’t cover everything, like how `sudo su user` isn’t a true login as far as `systemctl --user …` is concerned. I had to use a twitchy Cockpit terminal for that (Wayland-Firefox bug).

Caddy

Caddy is the base of my dream homelab tech tree, so I’ll start there. My existing prototype calls for a Podman network, two Podman volumes, and a Caddyfile I’m mounting as a volume from the file system. I threw together caddy.container based on my script, but only the supporting network and volumes showed up. Systemd did pick up on “mysleep.container,” an example from Red Hat, so I knew the mechanism worked.
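For the record, something along these lines is what I was aiming for. This is a hedged reconstruction, not my exact file: the image tag, ports, and paths are assumptions, and referencing sibling .network/.volume Quadlet files from Network= and Volume= requires a reasonably recent Podman:

```ini
# ~/.config/containers/systemd/caddy.container (hypothetical reconstruction)
[Unit]
Description=Caddy reverse proxy

[Container]
Image=docker.io/library/caddy:latest
Network=caddy.network
Volume=caddy-data.volume:/data
Volume=caddy-config.volume:/config
# Caddyfile mounted from the file system; :Z relabels it for SELinux.
Volume=%h/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
# Unprivileged host ports; the firewall later forwards 80/443 to these.
PublishPort=8000:80
PublishPort=44300:443

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, the unit shows up as caddy.service, same as `podman generate systemd` would have produced, minus the hand-holding.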

As it turned out, caddy.container had a capitalization error. I found the problem by commenting out lines, reloading, and running `systemctl --user list-unit-files` to see when it stopped loading. Likewise, my Caddyfile volume had a file path bug to squash.

Vaultwarden

Good, that’s started and should be stable. On to Vaultwarden. I updated both ButtonMash’s and Joystick’s NFS unit files to copy over relevant files, but Joystick’s SELinux didn’t like my user’s fingerprints (owner/group/SELinux data) on the NFS definitions. I cleaned those up with a series of `cp` and `mv` commands under sudo, and then I could enable the automounts.

Vaultwarden itself came up with simple enough debugging, but the challenge was in accessing it. I toyed with DNS overrides on Cerberus/OPNsense (my hardware firewall) until Caddy returned a test message from <domain.lan>:<port#>.
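The test stage looked something like this Caddyfile; the placeholders mirror the ones above, and `tls internal` is the directive that tells Caddy to sign with its own local CA instead of asking a public one:

```
# Temporary test site: prove DNS and Caddy before wiring in Vaultwarden.
<domain.lan>:<port#> {
	tls internal
	respond "Caddy is alive"
}
```

Once the override resolved, swapping `respond` for a `reverse_proxy` pointed at the Vaultwarden container (by its name on the shared Podman network) is the natural next step.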

Everything

My next battle was with Joystick’s firewall: I had forgotten to forward TCP traffic from ports 80 and 443 to 8000 and 44300, respectively. Back on Cerberus, I had to actually figure out the alias system and use it. Firefox needed Caddy’s root certificate. Bouncing back to the network Quadlet, I configured it according to another tutorial doing something very similar to what I want [2], though I configured mine without an external DNS. A final adjustment to my Caddyfile corrected Vaultwarden’s fully qualified domain name, and I was in – padlock and everything.

Takeaways

I come out of this project with an intuition for managing systemd files – especially Quadlets. The Quadlet workflow turns Podman container recipes into systemd units, and a working container will keep working forever – barring bad updates. I would still recommend prototyping with scripts when stuck, though. When a Quadlet fails to parse, there is no obvious error message to look up – the unit simply never shows up.

Even though Joystick is still new, a lot of my time on it this week went to diagnosing my own sloppiness. Reboots helped when I got stuck, and thanks to Quadlet, I won’t have to worry about spaghetti scripts like the ones I originally organized ButtonMash around – which is why that earlier victory, re-achieved today, never stabilized.

Final Question

Nextcloud is going to be very similar, except I will make it a pod along with MariaDB and Redis containers. But I am still missing one piece: NFS. How do I do that?
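One lead I’ve bookmarked for myself: a Quadlet .volume can describe an NFS mount directly, since its keys map onto `podman volume create --opt` flags. This is an untested sketch – the server address and export path are placeholders, not my real shares:

```ini
# ~/.config/containers/systemd/nextcloud-data.volume (hypothetical)
[Volume]
Type=nfs
Options=addr=192.168.0.10,rw
Device=:/export/nextcloud
```

Whether that beats mounting the NFS share on the host and bind-mounting it into the container is exactly the kind of question I’d love input on.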

I look forward to your answers below or on my Socials.

Works Cited

[1] mo8it, “Quadlet: Running Podman containers under systemd,” mo8it.com, Jan. 2-Feb. 19, 2024. [Online]. Available: https://mo8it.com/blog/quadlet/. [Accessed: Sept. 13, 2024].

[2] G. Coletto, “How to install multi-container applications with Podman quadlets,” giacomo.coletto.io, May 25, 2024. [Online]. Available: https://giacomo.coletto.io/blog/podman-quadlets/. [Accessed: Sept. 13, 2024].

Rocky Server Stack Deep Dive: 2023 Part 2

MAJOR progress! This week, I’ve finally cracked a snag that’s been holding me back for two years. Read on as I start a deep dive into my Linux server stack.

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am continuing renovations on my home server, ButtonMash. Let’s get started!

The daily progress reports system worked out decently well for me last week, so I’ll keep with it for this series.

Caddy is an all-in-one piece of server software. My primary goal this week is to get it listening on port 44300 (the HTTPS port multiplied by 100 to get it out of privileged range) and forwarding vaultwarden.buttonmash.lan and bitwarden.buttonmash.lan to a port that Vaultwarden, the Bitwarden-compatible server I use, is listening on over ButtonMash’s loopback (internal) network.

Tuesday Afternoon

From my Upstairs Workstation running EndeavourOS, I started off with a system upgrade and reboot while my workspace was clean. From last week, I remember Vaultwarden was already rigged to have port 44300, but straight away, I remembered its preferred configuration is HTTP coming into the container, so I’ll be sending it to 8000 instead.

My first step was to stop the systemd service I’d set up for it and start a container without the extra Podman volume and ROCKET arguments needed to manage its own HTTPS encryption. Getting my test install of Caddy going was more tricky. I tried to explicitly disable its web server, but figured it was too much trouble for a mere test, so I moved on to working with containers.

While trying to spin up a Caddy container alongside Pi-Hole, I ran into something called rootlessport hogging port 8000. I ran updates and rebooted the server. And then I realized I was trying to put both Caddy and Vaultwarden on the same port! I got the two running at the same time and arrived on Caddy’s slanted welcome page both with IP and via Pi-Hole-served domain_name:port_number.

Subdomains are my next target. I mounted a simple Caddyfile pointing to Vaultwarden and got stuck for a while researching how I was going to forward ports 80 and 443 to 8000 and 44300, respectively. Long story short, I examined an old command I used to forward DNS traffic to Pi-Hole, and after much background research about other communication protocols, I decided to forward just TCP and UDP. I left myself a note in my administration home directory.

DNS: Domain Name System – finds IP addresses for URLs.

sudo firewall-cmd --zone=public --add-forward-port=port=8000:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=8000:proto=udp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=44300:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=44300:proto=udp:toport=44300 --permanent

I still don’t get a reply from vaultwarden.buttonmash.lan. I tried nslookup, my new favorite tool for diagnosing DNS, but it was by observing Caddy’s cluttered logs that I spotted it rejecting my domain name because it couldn’t authenticate it publicly. I found a “directive” to add to my reverse proxy declaration to use internal encryption.

But I still couldn’t reach anything of interest – because reverse-proxied traffic was just bouncing around inside the Caddy container! The easy solution (I think) would be to stack everything into the same pod. I still want to try keeping everything in separate containers, though. Another easy solution would be to set the network mode to “host,” which comes with security concerns but would work in line with what I expected starting out. However, Podman comes with its own virtual network I can hook into instead of lobbing everything onto the host’s localhost as I have been doing. Learning this network will be my goal for tonight’s session.

Tuesday Night

The basic idea behind using a Podman network is to let your containers and pods communicate. While containers in a pod communicate as if over localhost, containers and pods on a Podman network communicate as if on a Local Area Network, down to IP address ranges.

My big question was whether this worked across users, but I couldn’t find anyone saying one way or the other. Eventually, I worked out a control test. Adding the default Podman network, “podman,” to the relevant start scripts, I used `ip a` where available to find the containers’ IP addresses. Pi-Hole then used `curl` to grab a “Hello World!” hosted by Caddy on the same user. I then curled the same ip:port from Vaultwarden’s container and failed to connect. This locked-down behavior is expected from a security point of view.

On this slight downer, I’m going to call it a night. My goal for tomorrow is to explore additional options and settle on one even if I don’t start until the day after. In the rough order of easy to difficult (and loosely the inverse of my favorites), I have:

  1. Run Caddy without a container.
  2. Run Caddy’s container rootfully.
  3. Run Caddy’s container in network mode host.
  4. Move all containers into a single user.
  5. Perform more firewalld magic. (Possibly a flawed concept)
  6. (Daydreaming!!) Root creates a network all users can communicate across.

Whatever I do, I’ll have to weigh factors like security and the difficulty of maintenance. I want to minimize the need for using root, but I also want to keep the separate accounts for separate services in case someone breaks out of a container. At the same time, I need to ask if making these connections will negate any benefit for separating them across accounts to begin with. I don’t know.

Wednesday Afternoon

I spent the whole afternoon composing a help request.

Wednesday Night

The names I am after for higher-power networking of Podman containers are Netavark and Aardvark. Between 2018 and around February 2022, it would have been Slirp4netns and its plethora of plugins. Here approaching the end of 2023, that leaves around four years’ worth of obsolete tutorials against not quite two years of current information – and that’s assuming everyone switched the moment the new standard was released, which is an optimistic assumption to say the least. In either case, I should be zeroing in on my search.

Most discouraging is how most of my search results involving Netavark and Aardvark end up pointing back to the Red Hat article announcing their introduction for fresh installs in Podman 4.0.

My goal for tomorrow is to make contact with someone who can point me in the right direction. Other than that, I’m considering moving all my containers to Nextcloud’s account or creating a new one for everything to share. It’s been a while since I’ve been this desperate for an answer. I’d even settle for a “Sorry, but it doesn’t work that way!”

Thursday Morning

Overnight I got a “This is not possible, podman is designed to fully isolate users from each other, that includes networking,” on Podman’s GitHub from Luap99, one of the project maintainers [1].

Thursday Afternoon

Per Tuesday Night’s entry, I have multiple known solutions to my problem. While I’d love an extended discourse about which option would be optimal from a security standpoint in a production environment, I need to remember I am running a homelab. No one will be losing millions of dollars over a few days of downtime. It is time to stop the intensive researching and start doing.

I settled on consolidating my containers into one user. The logical choice was Pi-Hole’s account: its home directory was relatively clean, and I’d only need to migrate Vaultwarden. I created base directories for each service, noting how I will need to make my own containers some day for things like game servers. For now, Pi-Hole, Caddy, and Vaultwarden are my goals.

Just before supper, I migrated my existing Pi-Hole from hard-mounted directories to Podman volumes using Pi-Hole’s Settings>Teleporter>Backup feature.

Thursday Night

My tinkerings with Pi-Hole did not go unnoticed. At family worship, a couple of family members reported some ads slipping through. At the moment, I’m stumped. If need be, I can re-migrate by copying the old instance with a temporary container and both places mounted. My working assumption, though, is that it’s normal cat-and-mouse shenanigans with blocklists just needing to update.

It’s been about an hour, and I just learned that any-subdomain.buttonmash.lan and buttonmash.lan are two very different things. Every subdomain I plan to use on ButtonMash needs to be specified in Pi-Hole as well as Caddy. With subtest.buttonmash.lan pointed at Caddy and the same subdomain pointed at my port 2019 Hello World!, I get a new error message. It looks like port 80 might be having some trouble getting to Caddy…

$ sudo firewall-cmd --list-all

forward-ports:
port=53:proto=udp:toport=5300:toaddr=

That would be only Pi-Hole’s port forward. Looking at the note I left myself Tuesday, I can see I forwarded ports 8000 and 44300 into themselves! The error even ended up in the section above. Here’s the revised version:

sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=udp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=udp:toport=44300 --permanent

I also removed Tuesday’s flubs, but none of these changes showed up until I used

sudo firewall-cmd --reload

And so, with Pi-Hole forwarding subdomains individually and the firewall actually forwarding the HTTP and HTTPS ports (never mind that incoming UDP is still blocked for now), I went to https://vaultwarden.buttonmash.lan and was greeted with Firefox screaming at me, “Warning: Potential Security Risk Ahead” as expected. I’ll call that a good stopping point for the day.
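For reference, “forwarding subdomains individually” just means Pi-Hole is holding plain host entries. Hypothetically, the records look something like this (Pi-Hole v5 keeps its Local DNS Records in /etc/pihole/custom.list, and the address here is a stand-in for ButtonMash’s real IP):

```
# /etc/pihole/custom.list (IP is a placeholder)
192.168.0.50 buttonmash.lan
192.168.0.50 subtest.buttonmash.lan
192.168.0.50 vaultwarden.buttonmash.lan
```

If maintaining one line per subdomain gets old, dnsmasq (which Pi-Hole is built on) accepts a wildcard like `address=/buttonmash.lan/192.168.0.50` in an /etc/dnsmasq.d/ snippet – something I have not yet tried here.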

My goal for tomorrow is to finish configuring my subdomains and extract the keys my devices need to trust Caddy’s root authority. It would also be good to either diagnose my Pi-Hole migration or re-migrate it a bit more aggressively.

Friday Afternoon

To go any farther, I need to extract Caddy’s root Certificate Authority (CA) certificate and install it into the trust store of each device I expect to access the services I’m setting up. My confidence is shaky here, but there are two layers of CA certificates: root and intermediate. The root key is kept secret and is used to sign intermediate certificates. Intermediate keys, in turn, sign the certificates issued to websites for encrypting communication with clients. Clients can then use the root certificate to verify that a site’s certificate traces back, through an intermediate, to the CA’s root key. Please no one quote me on this – it’s only a good-faith effort to understand a very convoluted ritual our computers play to know who to trust.
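My hedged understanding above can be poked at with a throwaway experiment. The sketch below (every name is invented, and it bears no relation to how Caddy actually manages its PKI) uses the openssl CLI to build a root-intermediate-leaf chain and then checks that trusting only the root is enough to verify the leaf:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# 1. Self-signed root certificate; its key stays secret with the CA.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Demo Root CA" \
    -keyout root.key -out root.crt -days 2

# 2. Intermediate: a signing request, signed by the root key.
openssl req -newkey rsa:2048 -nodes -subj "/CN=Demo Intermediate CA" \
    -keyout int.key -out int.csr
echo "basicConstraints=CA:TRUE" > int.ext
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key \
    -CAcreateserial -extfile int.ext -out int.crt -days 1

# 3. Leaf: issued to the website, signed by the intermediate key.
openssl req -newkey rsa:2048 -nodes -subj "/CN=vaultwarden.example.lan" \
    -keyout leaf.key -out leaf.csr
openssl x509 -req -in leaf.csr -CA int.crt -CAkey int.key \
    -CAcreateserial -out leaf.crt -days 1

# 4. A client trusting only root.crt can verify the whole chain.
openssl verify -CAfile root.crt -untrusted int.crt leaf.crt
# should print: leaf.crt: OK
```

The `-untrusted` flag is how the client is handed the intermediate without trusting it outright; all trust flows from root.crt alone, which is why that one file is what gets installed on each device.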

For containerized Caddy installations, this file can be found at:

/data/caddy/pki/authorities/local/root.crt

This leads me to the `trust` command. Out of curiosity, I ran `trust list` on my workstation and lost count around 95 entries, but I estimate between 120 and 150. To tell Linux to trust my CA, I entered:

trust anchor <path-to-.crt-file>

And then Firefox gave me a new warning: “The page isn’t redirecting properly,” suggesting an issue with cookies. I just had to correct some mismatched IP addresses. Now, after a couple years of working toward this goal, I finally have that HTTPS padlock. I’m going to call it a day for Sabbath.

My goal for Saturday night and/or Sunday is to clean things up a bit:

  • Establish trust on the rest of the home’s devices.
  • Finish Vaultwarden migration
  • Reverse-proxy my web UIs through Caddy: GoldenOakLibry, Pi-Hole, Cockpit (both ButtonMash and RedLaptop)
  • Configure Caddy so I can access its admin page as needed.
  • Remove -p ####:## bound ports from containers and make them go through Caddy. (NOT COCKPIT UNTIL AVAILABLE FROM REDUNDANT SERVER!!!)
  • Close up unneeded holes in the firewall.
  • Remove unneeded files I generated along the way.
  • Configure GoldenOakLibry to only accept connections through Caddy. Ideally, it would only accept proxied connections from ButtonMash or RedLaptop.
  • Turn my containers into systemd services and leave notes on how to update those services
  • Set up a mirrored Pi-Hole and Caddy on RedLaptop

Saturday Night

Wow. What was I thinking? I could spend a month chewing on that list by itself, and I don’t see myself as having the focus to follow through with everything. As it was, it took me a good half hour just to come up with the list.

Sunday

I didn’t get nearly as much done as I envisioned over the weekend because of a mental crash.

Nevertheless, I did do a little additional research. Where EndeavourOS accepted the root certificate immediately, such that Firefox displayed an HTTPS padlock, the process remains incomplete on Pop!_OS, where I tried it today. I followed the straightforward instructions for Debian-family systems on the Arch Wiki [2], but when I run `update-ca-certificates`, it claims to have added something no matter how many times I repeat the command, without any of the numbers actually changing. I’ve reached out for help.

Monday Morning

I’ve verified that my certificate shows up in /etc/ssl/certs/ca-certificates.crt. This appears to be an issue with Firefox and KDE’s default browser on Debian-based systems. I’ll decide another week if I want to install the certificate directly to Firefox or if I want to explore the Firefox-Debian thing further.

Takeaway

Thinking back on this week, I am again reminded of the importance of leaving notes about how to maintain your system. Even a foggy-headed AM brain can jot down the relevant URL that made everything clear, where the same page may be difficult to re-locate in half a year.

My goal for next week is to develop Nextcloud further, though I’ll keep in mind the other list items from Friday.

Final Question

What do you think of my order of my list from Friday? Did I miss something obvious? Am I making it needlessly overcomplicated?

Let me know in the comments below or on my Socials!

Works Cited

[1] Shadow_8472 and Luap99, “How Do I Network Rootless Containers Between Users? #20408,” github.com, Oct. 19, 2023. [Online]. Available: https://github.com/containers/podman/discussions/20408. [Accessed: Oct. 23, 2023].

[2] Arch Wiki, “User:Grawity/Adding a trusted CA certificate,” archlinux.org, Oct. 6, 2022 (last edited). [Online]. Available: https://wiki.archlinux.org/title/User:Grawity/Adding_a_trusted_CA_certificate#System-wide_–_Debian,_Ubuntu_(update-ca-certificates). [Accessed: Oct. 23, 2023].

Rocky Server Stack Deep Dive: 2023 Part 1

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am revisiting my Vaultwarden server to better understand the process. Let’s get started!

Monday Evening (October 9, 2023)

OK, so here’s the deal from memory: I want to put Bitwarden on my tablet. My Bitwarden server is a third-party implementation called Vaultwarden. Official Bitwarden clients, such as the one I want to put on my Android tablet, use the same encryption scheme your average browser relies on: a system of certificates going back to a root authority. I made my own authority to get my project working, so my computers will complain by default about the connection being insecure. My desktops have been content warning me every so often, but Android is fresh ground to cover. This past week, I noticed an expired certificate, so it’s time for maintenance.

It’s early in the week, but this has been a historically difficult topic. I’ll be using my December 13, 2021 post as a guide [1].

Monday Night

I’ve assessed this project’s state: “dusty.” Vaultwarden is containerized on ButtonMash in a locked down account configured to allow-lingering (non-root users otherwise get kicked eventually). The last time I touched Vaultwarden was before I learned the virtue of scripting as opposed to copy-pasting complex commands I will need again in months to years. I foresee the next topic in my Relearning Linux series.

My goal for tomorrow is to brush up on my terminology and generate a new certificate authority.

Tuesday Afternoon

While reviewing my above-mentioned post, I remembered Caddy Web Server and looked it up. Its self-signed security certificate functionality was a bit too much hassle for me to work out around June 6, 2022 [2]. I shelved it pending a better router before further experimenting with Let’s Encrypt for real certificates, which still hasn’t happened.

Also in my June 6, 2022 post, I attributed my disappointment to the official Caddy tutorials assuming loopback access; running on a headless server, I was left without clear instructions for finding the certificate I needed to install into my browser on another computer. One idea coming to mind would be to find a browser container to VNC into. But then I’d still be left without the automatic certificate unless I made my own container with Caddy+Firefox+VNC server. An excellent plan Z if I ever heard one, because I am not looking to explore that right now, but it isn’t not an option.

VNC: Virtual Network Computing – remote desktop. A desktop environment hosted in one place, but rendered and operated from across a network.

For a plan A, I’d like to try the documentation again. Perhaps the documentation will be more clear to my slightly more experienced eyes. For what it’s worth, Caddy is still installed from before.

Tuesday Night

I’ve explored my old Caddy installation, and my limited understanding is starting to return. My big snag was thinking I needed to use the default HTTP ports while running Caddy rootless. Subdomains may have been involved. I’m interested in trying Caddy containerized, where I have a better grasp on managing ports. Furthermore, if domain names are of concern, I believe involving Pi-Hole may be my best bet, and with that I fear this project just became a month long or longer.

Anyway, I’m going to look into Pi-Hole again. I have an instance up and passing domain names, but I’ve not gotten it to filter anything. Hopefully interpreting a host name won’t be too difficult a goal for tomorrow.

In the meantime, I noticed Pi-Hole was complaining about a newer version being available. I went to work on it, but the container was configured to be extremely stubborn about staying up. I force-deleted the image, and Pi-Hole redownloaded using the updated container. I’ve left a note with instructions where I expect to find it next time I need to update.

Wednesday Afternoon

To refresh myself, my current goal is to get Pi-Hole to direct buttonmash.lan to Caddy on an unprivileged port for reverse proxy access within ButtonMash.

I found an unambiguous tutorial affirming my general intuitions about Pi-Hole itself, meaning my problems are with directing requests to Pi-Hole. For this I went to my router, which wanted an update from July but had had trouble downloading it. I hit refresh, and it found a September update. It installed, and I had to manually reconnect a Wi-Fi connection (OpenWRT on a Raspberry Pi).

I’ve blown a good chunk of my afternoon with zero obvious forward progress since the last paragraph. Where to start? I’ve learned nslookup, a tool for testing DNS. With it, I validated that my issue with Pi-Hole is getting traffic to Pi-Hole. The real temptation to unleash a horde of profanities was my home router from TP-Link. The moment I pointed DNS at ButtonMash on 192.168.0.X: “To avoid IP conflict with the front-end-device, your router’s IP address has been changed to 192.168.1.1. Do you want to continue to visit 192.168.0.1?” That was a chore to undo. I needed a browser on a static IP – meaning my RedLaptop, sitting in retirement as a server.

DNS: Domain Name System. This is the part of the Internet that translates the domain names within URLs into IP addresses so your browser can find a requested web page.

I’ve appealed to r/pihole for help [3].

Wednesday Night

I’ve gotten a few bites on Reddit, but my problem isn’t solved yet. I performed an update on my laptop and installed a tool similar to nslookup (Debian didn’t come with it) to verify that OpenWRT wasn’t the culprit. I’ve found forum posts, but nothing conclusive yet.

No forward progress today, but I did make some “sideways progress.” At least I knew how to fix the netquake I caused.

My goal for tomorrow: I don’t know. IPv6 has been suggested; I’ll explore that. Maybe I’ll point my upstairs workstation’s OpenWRT router at ButtonMash’s Pi-Hole for DNS.

Thursday Afternoon

Good day! Late last night I cross-posted my appeal from r/pihole to r/techsupport’s Discord. While answering the responses I received last night, I ran nslookup on a 192.168.0.X workstation receiving an IP over DHCP – and got an unexpected success. Silly me had run my earlier test on my statically addressed laptop. Also, leaving things overnight gave DHCP a chance to refresh. The test still failed on my upstairs workstation, so I pointed OpenWRT at ButtonMash instead of the router for DNS. I’ll still need to clean up my static IPs, but I should be good to play with pointing Pi-Hole at Caddy.

My next big task is configuring Caddy.

Thursday Night

My next big task is preparing Vaultwarden for use with Caddy.

Vaultwarden is one of the longest-running and most important services on ButtonMash. It was for Vaultwarden I started learning Podman. It was for Vaultwarden I learned how to allow lingering on a server account. It was for Vaultwarden I researched NGINX and Let’s Encrypt to serve as a reverse proxy with HTTPS support, before later finding Caddy to do the same things with hopefully less hassle. Vaultwarden has been with me so long, it was Bitwarden-rs (or some variant thereof) when I started using it, and I didn’t follow the name when it changed. With Vaultwarden, I’ve grown from dumping a raw command into a terminal, to using a script, to turning that script into a rootless systemd service that will refuse to stay down until stopped properly.

But problematically, Vaultwarden holds port 44300, which I want for Caddy. It’s also been running with an integrated HTTPS support module called ROCKET which the developers strongly discourage for regular deployments. This security certificate’s expiration is what is driving me to make progress like mad this week. I’ve spent the past couple (few?) years learning the building blocks to support Vaultwarden properly along with other services to run along side it. It’s time to grow once again.

I spent an hour or so re-researching how to set Podman up as a systemd service. And then I found a note pointing me towards the exact webpage that helped me conquer it last time. With that, I feel the need for a break and take Friday off. I’ll be back after Sabbath (Saturday night) or on Sunday to write a Takeaway and Final Question. I’ve got to give all this information some time to settle.

Of note: I did come across a Red Hat blog post about an auto scale-down feature not present on RHEL 8, but it looks interesting for when I migrate to a newer version of Rocky Linux in seven months, when active support ends: https://www.redhat.com/en/blog/painless-services-implementing-serverless-rootless-podman-and-systemd

Sunday Morning

This has been a very successful week, but I still have a way to go before I finish my server[’s software] stack: Nextcloud on Android. I researched a couple of possible next steps as an encore. Caddy would be the next logical step, but Vaultwarden has the port I want it listening on. Alternatively, I dug up the name “Unbound,” a self-hosted DNS server for increasing privacy from Internet Service Providers, but I’d want more research time.

Takeaway

Again: what a week! I think I have my work lined up for the month. As of Sunday night, my goal for next week is Caddy. With Caddy in place, I’ll have an easier time adding new services while punching fewer holes in ButtonMash server’s firewall.

Final Question

Even now, I’m questioning if I want to migrate to Rocky 9 or hold out and hope Rocky 10 shows up before/around Rocky 8’s end of life. Rocky’s upstream, Red Hat Enterprise Linux (RHEL), doesn’t have a clockwork release cycle like Ubuntu, and RHEL 8 appears to be planned for support much longer than Rocky 8. That feature I learned about does sound tempting. Upgrade or hold out?

I look forward to hearing from you in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, “Self-Signed Vaultwarden Breakdown,” Dec. 13, 2021. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2021/12/13/self-signed-vaultwarden-breakdown/. [Accessed Oct. 16, 2023].

[2] Shadow_8472, “I Switched My Operations to Caddy Web Server,” June 6, 2022. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2022/06/06/i-switched-my-operations-to-caddy-web-server/. [Accessed Oct. 16, 2023].

[3] Shadow_8472 et al., “Tp-Link isn’t sending traffic to containerized Pi-Hole (I think??),” reddit.com, Oct. 11, 2023. [Online]. Available: https://www.reddit.com/r/pihole/comments/175wp2l/tplink_isnt_sending_traffic_to_containerized/. [Accessed: Oct. 16, 2023].