Rocky Server Stack Deep Dive: 2023 Part 4

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am exploring fuse-overlayfs as a possible patch between Podman and NFS. Last week’s post was practically a freebie, but I expect this one to be a doozy if it’s even possible. Let’s get started!

Context

For my homelab, I want to run Nextcloud in a rootless Podman 3.0.1 container with storage volumes on our NFS. For logistical reasons, Nextcloud needs to be on RedLaptop running Debian 11 (Linux kernel 5.10.0-26-amd64 x86_64). The NFS share I wish to use is mounted via systemd.
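For reference, “mounted via systemd” here means a mount unit roughly along these lines; the server name, export, and mount point below are placeholders rather than my actual values:

sudo tee /etc/systemd/system/mnt-nfs.mount >/dev/null <<'EOF'
[Unit]
Description=NFS share for container volumes

[Mount]
What=nas.lan:/export/containers
Where=/mnt/nfs
Type=nfs

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-nfs.mount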

My most promising lead is from Podman GitHub maintainer rhatdan on October 28, 2023, where he made a comment about a “fuse file system” and asked his colleague, @giuseppe, for thoughts; there has been no reply as of the afternoon of November 10 [1]. I documented a number of major milestones there, which I’ll be covering here.

File System Overlays

The “fuse file system” turned out to be fuse-overlayfs, a userspace (FUSE) implementation of an overlay file system. Basically: there are times when it’s useful to view two or more file systems at once. A file system overlay designates a lower file system and an upper file system. Any changes (file creation, deletion, movement, etc.) made in this combined file system manifest in the upper file system, leaving the lower file system[s] alone.
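As a minimal sketch of how such an overlay is assembled with fuse-overlayfs (the directory names are illustrative, not my actual paths):

mkdir -p ~/overlay/{lower,upper,work,merged}
fuse-overlayfs -o lowerdir=$HOME/overlay/lower,upperdir=$HOME/overlay/upper,workdir=$HOME/overlay/work $HOME/overlay/merged
# Anything written under merged/ lands in upper/; lower/ is never modified.
# Unmount with: fusermount -u ~/overlay/merged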

Through a lot of trial and error, I set up a lower directory, an upper directory, a work directory, and a mountpoint. My upper directory and work directory had to be on the NFS, but I ran into an error about setting times. I double-checked that there were no major problems related to Daylight Saving Time ending, but wasn’t able to clear the error. I sent out some extra help requests, but got no replies (Sunday, Nov. 12). A third of my search results are in Chinese, and the rest are either not applicable or locked behind a paywall. Unless something changes, I’m stuck.

Quadlets

GitHub user eriksjolund got back to me with another idea: quadlets [1]. Using this feature, merged into Podman 4.4 and above, he demonstrated a Nextcloud/MariaDB/Redis/Nginx setup that saves all files as the underprivileged user running the containers. In theory, this sidesteps the NFS incompatibilities I’ve been experiencing altogether.
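I haven’t built his setup myself, but going by the Podman documentation, a single rootless quadlet boils down to something like the sketch below; the file name, image, port, and volume name are illustrative, not his exact configuration:

mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/nextcloud.container <<'EOF'
[Unit]
Description=Nextcloud container

[Container]
Image=docker.io/library/nextcloud:latest
PublishPort=8080:80
Volume=nextcloud-data:/var/www/html

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload
systemctl --user start nextcloud.service

The generator turns each .container file into an ordinary user service, which is why allow-lingering still matters for keeping it running while logged out.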

The first drawback from my perspective is that I need to re-define all my containers as systemd services, which is something I’ve admittedly been meaning to do anyway. A second is that, again, this is a feature merged into Podman much later than the version I’m working with. Unless I care to go digging through the Podman GitHub myself, I’m stuck with old code people will be reluctant to support.

Distro Hunt

Why am I even using Debian still? What is its core purpose? Stability. Debian’s philosophy is to provide proven software with few or no surprises, leaving the user to polish it to taste. As my own sysadmin, I can afford a little downtime. I don’t need the stability of a distro supporting the most diverse Linux family tree. Besides, this isn’t the first time community support has suggested features that only exist in the future of my installation’s code base. Promising solutions end in broken links. RAM is becoming a concern. The APT package manager has proven more needy than I care to babysit. If I am to be honest with myself, it’s time to start sunsetting Debian on this system and find something more up-to-date for RedLaptop. I’ll keep it around for now just in case.

My first choice was Fedora, to get to know the Red Hat family better. Fedora 39 CoreOS looked perfect for its focus on containers, but it looks like it would require a week or two to configure and might not agree with installing non-containerized software. Fedora 39 Server was more feature-complete, but wouldn’t load on my BIOS-based machine (as opposed to the newer UEFI standard); I later learned that new BIOS-based installations were dropped on or around Fedora 37.

I carefully considered other distributions with the aid of pkgs.org. Debian/Ubuntu family repositories only go up to Podman 4.3. Alpine Linux lacks systemd. Solus is aimed at desktops. openSUSE Tumbleweed comes with warnings about being prepared to compile kernel modules. Arch is… Arch.

Fresh Linux Installation

With time running out in the week, I decided to forgo sampling new distros and went with a minimal Rocky 9 install. Installation went about as well as can be expected. I added and configured cockpit, podman, cockpit-podman, nfs-utils, and nano. I added a podmanuser account, set it up to allow lingering, and downloaded the container images I plan on working with on this machine: PiHole, Unbound; Caddy; Nextcloud, Redis, MariaDB; busybox.
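Reconstructed from memory, the setup amounted to roughly the following; the image list is abbreviated, and the registry paths are my best guess at what I pulled:

sudo dnf install -y cockpit podman cockpit-podman nfs-utils nano
sudo systemctl enable --now cockpit.socket
sudo useradd podmanuser
sudo loginctl enable-linger podmanuser
# Then, as podmanuser:
podman pull docker.io/pihole/pihole:latest docker.io/library/busybox:latest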

Takeaway

I write this section on Friday afternoon, and I doubt I have enough time remaining to properly learn Quadlets and rebuild my stack, so I’m going to cut it off here. From what I’ve gathered already, Quadlets mostly use systemd unit files, a format I’ve apparently worked with before, but they also need Kubernetes syntax to define pods. I don’t know a thing about using Kubernetes. If nothing else, perhaps this endeavor will prepare me for a future project where larger-scale container orchestration is needed.

Final Question

Do you know of a way I might have interfaced Podman 3 with NFS? Did I look in the wrong places for help (Debian forums, perhaps)?

I look forward to hearing from you on my Socials!

Works Cited

[1] Shadow_8472, D. “rhatdan” Walsh, E. Sjölund, “Rootless NFS Volume Permissions: What am I doing wrong with my Nextcloud/MaraiDB/Redis pod? #20519,” github.com, Oct. 27, 2023-Nov. 10, 2023. [Online]. Available: https://github.com/containers/podman/discussions/20519#discussioncomment-7410665. [Accessed Nov. 12, 2023].

Rocky Server Stack Deep Dive: 2023 Part 1

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am revisiting my Vaultwarden server to better understand the process. Let’s get started!

Monday Evening (October 9, 2023)

OK, so here’s the deal from memory: I want to put Bitwarden on my tablet. My Bitwarden server is a 3rd-party implementation called Vaultwarden. Official Bitwarden clients, such as the one I want to put on my Android tablet, use the same encryption scheme as your average browser: a system of certificates chaining back to a root authority. I made my own authority to get my project working, so my computers will complain by default about the connection being insecure. My desktops have been content to warn me every so often, but Android is fresh ground to cover. This past week, I noticed an expired certificate, so it’s time for maintenance.

It’s early in the week, but this has been a historically difficult topic. I’ll be using my December 13, 2021 post as a guide [1].

Monday Night

I’ve assessed this project’s state: “dusty.” Vaultwarden is containerized on ButtonMash in a locked-down account configured to allow lingering (without lingering, a non-root user’s services eventually get kicked after logout). The last time I touched Vaultwarden was before I learned the virtue of scripting as opposed to copy-pasting complex commands I will need again in months to years. I foresee the next topic in my Relearning Linux series.

My goal for tomorrow is to brush up on my terminology and generate a new certificate authority.
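For my own notes, the general shape of that task with OpenSSL looks something like this; the file names and subject are placeholders:

openssl genrsa -out myCA.key 4096
openssl req -x509 -new -key myCA.key -sha256 -days 825 -out myCA.crt -subj "/CN=Homelab Root CA"
# Server certificates then get signed against this CA, along the lines of:
#   openssl x509 -req -in server.csr -CA myCA.crt -CAkey myCA.key -CAcreateserial -out server.crt -days 398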

Tuesday Afternoon

While reviewing my above-mentioned post, I remembered Caddy Web Server and looked it up. Its self-signed security certificate functionality was a little too much hassle for me to work out around June 6, 2022 [2]. I shelved it pending a better router before further experimenting with Let’s Encrypt for real certificates, which still hasn’t happened.

Also in my June 6, 2022 post, I attributed my disappointment to the official Caddy tutorials assuming loopback access; running on a headless server, I was left without clear instructions for finding the certificate I need to install into my browser on another computer. One idea coming to mind would be to find a browser container to VNC into. But then I’d still be left without the automatic certificate unless I made my own container with Caddy+Firefox+VNC server. An excellent plan Z if I ever heard one: I am not looking to explore that right now, but it isn’t not an option.

VNC: Virtual Network Computing, i.e. remote desktop. A desktop environment hosted in one place, but rendered and operated from across a network.

For a plan A, I’d like to try the documentation again. Perhaps the documentation will be more clear to my slightly more experienced eyes. For what it’s worth, Caddy is still installed from before.
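If the docs cooperate, I believe the missing piece boils down to a sketch like this: serve the site with Caddy’s internal CA, then copy its root certificate to whichever machine runs the browser. The site name, ports, and destination are assumptions, and the certificate path assumes Caddy 2 running as a regular user:

cat > Caddyfile <<'EOF'
buttonmash.lan:8443 {
    tls internal
    reverse_proxy localhost:8000
}
EOF
caddy run --config Caddyfile &
# Caddy's locally generated root certificate should land here:
ls ~/.local/share/caddy/pki/authorities/local/root.crt
# Copy it to the browser machine and import it as a trusted authority:
scp ~/.local/share/caddy/pki/authorities/local/root.crt user@desktop:~/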

Tuesday Night

I’ve explored my old Caddy installation, and my limited understanding is starting to return. My big snag was thinking I needed to use the default HTTP ports while running Caddy rootless. Subdomains may have been involved. I’m interested in trying Caddy containerized, where I have a better grasp on managing ports. Furthermore, if domain names are of concern, I believe involving Pi-Hole may be my best bet, and with that I fear this project just became a month long or longer.
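For the record, the containerized route would look roughly like this under rootless Podman, with everything on unprivileged ports (ports and paths are illustrative):

podman run -d --name caddy \
  -p 8080:80 -p 8443:443 \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile:Z" \
  -v caddy_data:/data \
  docker.io/library/caddy:latest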

Anyway, I’m going to look into Pi-Hole again. I have an instance up and passing along domain name lookups, but I’ve not gotten it to filter anything. Hopefully resolving a local host name won’t be too difficult a goal for tomorrow.

In the meantime, I noticed PiHole was complaining about a newer version available. I went to work on it, but the container was configured to be extremely stubborn about staying up. I force-deleted the image, and PiHole redownloaded using the updated container. I’ve left a note with instructions where I expect to find it next time I need to update.
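For that note’s benefit, the usual update dance under Podman goes roughly like this; the container and image names are whatever my script actually uses:

podman pull docker.io/pihole/pihole:latest
podman stop pihole
podman rm pihole
# ...then re-run the original podman run command (or restart the service) so it
# starts from the freshly pulled image.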

Wednesday Afternoon

To refresh myself, my current goal is to get PiHole to direct buttonmash.lan to Caddy on an unprivileged port for use with reverse proxy access within ButtonMash.
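On the Pi-Hole side, that mapping is just a local DNS record. Assuming a v5-era containerized install, it can be added through the web UI under Local DNS, or with something like this (the address is illustrative):

podman exec pihole bash -c 'echo "192.168.0.10 buttonmash.lan" >> /etc/pihole/custom.list'
podman exec pihole pihole restartdns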

I found an unambiguous tutorial affirming my general intuitions with Pi-Hole itself, meaning my problems are with directing requests to Pi-Hole. For this I went to my router, which wanted an update from July, but had trouble downloading. I hit refresh and it found a September update. It installed, and I had to manually reconnect a Wi-Fi connection (OpenWRT on a Raspberry Pi).

I’ve blown a good chunk of my afternoon with zero obvious forward progress since the last paragraph. Where to start? I’ve learned nslookup, a tool for testing DNS. With it, I validated that my issue is getting traffic to Pi-Hole in the first place. The real temptation to unleash a horde of profanities came from my home router from TP-Link. The moment I pointed DNS at ButtonMash on 192.168.0.X: “To avoid IP conflict with the front-end-device, your router’s IP address has been changed to 192.168.1.1 . Do you want to continue to visit 192.168.0.1?” That was a chore to undo. I needed a browser on a static IP – meaning my RedLaptop sitting in retirement as a server.

DNS: Domain Name System. This is the part of the Internet that translates the domain names within URLs into IP addresses so your browser can find a requested web page.
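The tests themselves look something like this; the Pi-Hole address is illustrative:

nslookup buttonmash.lan                # asks whatever DNS server the system was handed
nslookup buttonmash.lan 192.168.0.10   # asks the Pi-Hole directly, bypassing the router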

I’ve appealed to r/pihole for help [3].

Wednesday Night

I’ve gotten a few bites on Reddit, but my problem isn’t solved yet. I performed an update on my laptop and installed a similar tool to nslookup (Debian didn’t come with it) to verify that OpenWRT wasn’t the culprit. I’ve found forum posts, but nothing in them solved my problem.

No forward progress today, but I did make some “sideways progress.” At least I knew how to fix the netquake I caused.

My goal for tomorrow: I don’t know. IPv6 has been suggested. I’ll explore that. Maybe I’ll switch the OpenWRT router serving my upstairs workstation over to handing out ButtonMash’s Pi-Hole for DNS.

Thursday Afternoon

Good day! Late last night, I cross-posted my r/pihole appeal to r/techsupport’s Discord. While answering the responses I received last night, I ran my nslookup test on a 192.168.0.X workstation receiving an IP over DHCP – and I got an unexpected success. Silly me had run my earlier test from my statically addressed laptop. Also, leaving things overnight gave DHCP a chance to refresh. The test still failed on the upstairs workstation, so I pointed OpenWRT at ButtonMash instead of the router for DNS. I’ll still need to clean up my static IPs, but I should be good to play with pointing PiHole at Caddy.

My next big task is configuring Caddy.

Thursday Night

My next big task is preparing Vaultwarden for use with Caddy.

Vaultwarden is one of the longest-running and most important services on ButtonMash. It was for Vaultwarden I started learning Podman. It was for Vaultwarden I learned how to allow lingering on a server account. It was for Vaultwarden I researched NGINX and Let’s Encrypt to serve as a reverse proxy with HTTPS support, but later found Caddy to do the same things with hopefully less hassle. Vaultwarden has been with me so long, it was Bitwarden-rs (or some variant thereof) when I started using it, and I didn’t follow the name when it was changed. With Vaultwarden, I’ve grown from dumping a raw command into a terminal, to using a script, on into turning that script into a rootless systemd service that will refuse to stay down until stopped properly.

But problematically, Vaultwarden holds port 44300, which I want for Caddy. It’s also been running with its integrated HTTPS support (Rocket’s built-in TLS), which the developers strongly discourage for regular deployments. This security certificate’s expiration is what is driving me to make progress like mad this week. I’ve spent the past couple (few?) years learning the building blocks to support Vaultwarden properly, along with other services to run alongside it. It’s time to grow once again.
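None of this is running yet, but the arrangement I’m aiming for is roughly the sketch below: Vaultwarden serving plain HTTP on an internal port, with Caddy in front terminating HTTPS. The names and ports are illustrative:

podman run -d --name vaultwarden \
  -p 8081:80 \
  -v vw-data:/data:Z \
  docker.io/vaultwarden/server:latest
# Matching Caddyfile entry:
#   vault.buttonmash.lan:44300 {
#       tls internal
#       reverse_proxy localhost:8081
#   }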

I spent an hour or so re-researching how to set Podman up as a systemd service. And then I found a note pointing me towards the exact webpage that helped me conquer it last time. With that, I feel the need for a break and will take Friday off. I’ll be back after Sabbath (Saturday night) or on Sunday to write a Takeaway and Final Question. I’ve got to give all this information some time to settle.
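For future me, the rough shape of that process on the Podman versions I’m running is as follows (container name illustrative):

podman generate systemd --new --files --name vaultwarden
mkdir -p ~/.config/systemd/user
mv container-vaultwarden.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-vaultwarden.service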

Of note: I did come across a Red Hat blog post about an auto scale-down feature not present on RHEL 8, but it looks interesting for when I migrate to a newer version of Rocky Linux in seven months, when active support ends: https://www.redhat.com/en/blog/painless-services-implementing-serverless-rootless-podman-and-systemd

Sunday Morning

This has been a very successful week, but I still have a way to go before I finish my server[’s software] stack: Nextcloud on Android. I researched a couple of possible next steps as an encore. Caddy would be the next logical step, but Vaultwarden has the port I want it listening on. Alternatively, I dug up the name “Unbound,” a self-hosted recursive DNS resolver that can increase privacy from Internet Service Providers, but I’d want more research time.

Takeaway

Again: what a week! I think I have my work lined up for the month. As of Sunday night, my goal for next week is Caddy. With Caddy in place, I’ll have an easier time adding new services while punching fewer holes in ButtonMash server’s firewall.

Final Question

Even now, I’m questioning whether I want to migrate to Rocky 9 or hold out and hope Rocky 10 shows up before or around Rocky 8’s end of life. Rocky’s upstream, Red Hat Enterprise Linux (RHEL), doesn’t have a clockwork release cycle like Ubuntu, and RHEL 8 appears to be planned for support much longer than Rocky 8. That feature I learned about does sound tempting. Upgrade or hold out?

I look forward to hearing from you in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, “Self-Signed Vaultwarden Breakdown,” Dec. 13, 2021. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2021/12/13/self-signed-vaultwarden-breakdown/. [Accessed Oct. 16, 2023].

[2] Shadow_8472, “I Switched My Operations to Caddy Web Server,” June 6, 2022. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2022/06/06/i-switched-my-operations-to-caddy-web-server/. [Accessed Oct. 16, 2023].

[3] Shadow_8472, et al., “Tp-Link isn’t sending traffic to containerized Pi-Hole (I think??),” reddit.com, Oct. 11, 2023. [Online]. Available: https://www.reddit.com/r/pihole/comments/175wp2l/tplink_isnt_sending_traffic_to_containerized/. [Accessed Oct. 16, 2023].