Server Rebuild With Quadlet

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am continuing my work on my home’s network. Let’s get started!

State of the Homelab

Bitwarden. I self-host it using Vaultwarden, a third-party server implementation. Done properly, it fits nicely into a larger homelab stack, but its OCI container can stand alone in a development environment. Due to skill limitations, I’ve been using it in this configuration. My recent network work has invalidated my manually self-signed certificates, and I’d rather focus my efforts on upgrades instead of re-learning the old system to maintain it.

Today, I am working on my newest server, Joystick (Rocky Linux 9). I compiled some command-by-command notes on using `podman generate systemd` to make self-starting, rootless containers, but just as I was getting the hang of it again, a warning message encouraged me to use a newer technique: Quadlet.

Quadlet

Quadlets? I’ve studied them before, but failed to grasp key concepts. It finally clicked, though: they replace not just running `podman generate systemd` once I have a working prototype, but also everything I might want to do leading up to that point, including defining Podman networks and volumes. Just write your Quadlet definitions once, and the system handles the rest.

The tutorial I found that best matches my use case can be found at mo8it.com [1]. Follow the link under Works Cited for the full text. It’s very detailed; I couldn’t have done a better job myself. But it doesn’t cover everything, like how `sudo su user` isn’t a true login session for `systemctl --user …`. I had to use a twitchy Cockpit terminal for that (Wayland-Firefox bug).
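
For the record, the workaround I later learned is either to start a real login session or to point the environment at the user’s runtime directory by hand. The commands come from the systemd documentation; the username is a stand-in:

```shell
# Option 1: a genuine login session with its own user bus.
sudo machinectl shell podmanuser@

# Option 2: after `sudo su - podmanuser`, point systemctl at the
# user's runtime directory and session bus manually.
export XDG_RUNTIME_DIR=/run/user/$(id -u)
export DBUS_SESSION_BUS_ADDRESS=unix:path=$XDG_RUNTIME_DIR/bus
systemctl --user status
```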

Caddy

Caddy is the base of my dream homelab tech tree, so I’ll start there. My existing prototype calls for a Podman network, two Podman volumes, and a Caddyfile I’m mounting as a volume from the file system. I threw together caddy.container based on my script, but only the supporting network and volumes showed up. Systemd did, however, pick up “mysleep.container,” an example from Red Hat.
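
For anyone following along, a minimal caddy.container along the lines of what I was writing might look like this. The image tag, volume names, and ports are stand-ins, not my actual files; rootless Quadlet files live under ~/.config/containers/systemd/:

```ini
# caddy.container – sketch of a rootless Caddy Quadlet unit.
[Unit]
Description=Caddy reverse proxy

[Container]
Image=docker.io/library/caddy:latest
# These reference sibling caddy.network / caddy-data.volume Quadlet files.
Network=caddy.network
Volume=caddy-data.volume:/data
# Bind-mount the Caddyfile from the home directory (%h).
Volume=%h/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
PublishPort=8000:80
PublishPort=44300:443

[Install]
WantedBy=default.target
```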

As it turned out, caddy.container had a capitalization error. I found the problem by commenting out lines, reloading, and running `systemctl --user list-unit-files` to see when it didn’t load. Likewise, my Caddyfile volume had a file path bug to squash.
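
A trick I wish I had known sooner: the Quadlet generator can be run by hand, and in dry-run mode it prints the generated units or complains about the file it rejects. The binary’s path can vary by distribution:

```shell
# Ask Quadlet to process the user's .container/.network/.volume files
# and print the result instead of silently skipping broken ones.
/usr/libexec/podman/quadlet -dryrun -user

# Then reload and list what systemd actually picked up.
systemctl --user daemon-reload
systemctl --user list-unit-files 'caddy*'
```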

Vaultwarden

Good, that’s started and should be stable. On to Vaultwarden. I updated both ButtonMash’s and Joystick’s NFS unit files to copy over relevant files, but Joystick’s SELinux didn’t like my user’s fingerprints (owner/group/SELinux data) on the NFS definitions. I cleaned those up with a series of sudo cp and mv commands, and then I could enable the automounts.
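
One equivalent cleanup (the unit file names are stand-ins; mount unit names must mirror the mountpoint path) is to hand the files back to root and let SELinux relabel them:

```shell
# Reset owner/group on the systemd mount definitions...
sudo chown root:root /etc/systemd/system/mnt-nfs.mount \
                     /etc/systemd/system/mnt-nfs.automount
# ...and restore the default SELinux context for those paths.
sudo restorecon -v /etc/systemd/system/mnt-nfs.mount \
                   /etc/systemd/system/mnt-nfs.automount
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-nfs.automount
```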

Vaultwarden went up with simple enough debugging, but the challenge was in accessing it. I toyed with Cerberus/OPNsense (hardware firewall) DNS overrides until Caddy returned a test message from <domain.lan>:<port#>.

Everything

My next battle was with Joystick’s firewall: I forgot to forward TCP traffic from ports 80 and 443 to 8000 and 44300, respectively. Back on Cerberus, I had to actually figure out the alias system and use that. Firefox needed Caddy’s root certificate. Bouncing back to the network Quadlet, I configured it according to another tutorial doing something very similar to what I want [2]. I configured mine without an external DNS. A final adjustment to my Caddyfile corrected Vaultwarden’s fully qualified domain name, and I was in – padlock and everything.
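
For reference, the firewalld side of that forwarding looks something like this (default zone assumed):

```shell
# Redirect privileged web ports to the rootless container's high ports.
sudo firewall-cmd --permanent --add-forward-port=port=80:proto=tcp:toport=8000
sudo firewall-cmd --permanent --add-forward-port=port=443:proto=tcp:toport=44300
sudo firewall-cmd --reload

# Verify:
sudo firewall-cmd --list-forward-ports
```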

Takeaways

I come out of this project with an intuition for managing Systemd files – especially Quadlet. The Quadlet workflow makes Podman container recipes for Systemd, and a working container will keep working indefinitely – barring bad updates. I would still recommend prototyping with scripts when stuck, though. When a Quadlet fails, there is no obvious error message to look up – it just fails to show up.

Even though Joystick is still new, a lot of my time on it this week went to diagnosing my own sloppiness. Reboots helped when I got stuck, and thanks to Quadlet, I didn’t have to worry about spaghetti scripts like the ones I originally organized ButtonMash with – scripts that meant I never stabilized the victory I re-achieved today.

Final Question

Nextcloud is going to be very similar; I will make it a pod along with MariaDB and Redis containers. But I am still missing one piece: NFS. How do I do that?

I look forward to your answers below or on my Socials.

Works Cited

[1] mo8it, “Quadlet: Running Podman containers under systemd,” mo8it.com, Jan. 2-Feb. 19, 2024. [Online]. Available: https://mo8it.com/blog/quadlet/. [Accessed: Sept. 13, 2024].

[2] G. Coletto, “How to install multi-container applications with Podman quadlets,” giacomo.coletto.io, May 25, 2024. [Online]. Available: https://giacomo.coletto.io/blog/podman-quadlets/. [Accessed: Sept. 13, 2024].

Joystick Server Reinstall

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am rebuilding Joystick, Button Mash’s twin Rocky Linux server. Let’s get started!

Project Goals

This is a revival of my photo scanning project, first and foremost. I need to get it working, and while I will afford myself some time to experiment with doing it right, it’s more important I get it done. Once I have it working better than Button Mash, I can move my production flag over to Joystick and remodel Button Mash “properly.”

And by “properly,” I mean rootless Podman access to a network attached storage. I have explored NFS (Network File System) extensively, but Podman does things with userspace that NFS doesn’t support. I may need to be open to an alternative or using root. SSHFS lays a file system over SSH, but I have yet to explore it.

Installation

I did my usual stuff when installing Linux. I remembered ahead of time that Joystick doesn’t work with Ventoy, so I flashed a USB (Balena Etcher) large enough for the full 10 GB image I downloaded. I did a clean install over its previous Rocky Linux installation. I also adjusted Joystick’s boot order so Rocky 9.4 loads by default. The system is dual booted with Linux Mint, where I did an apt update/upgrade.

Booted into Rocky, I enabled the Cockpit web interface for remote management, a feature I selected for installation with the system. I created a limited user for running Podman, enabled lingering on it, got Cockpit logged into it as a “remote host,” and disabled normal password interactions. I pulled Podman containers for Caddy, Nextcloud, MariaDB, Redis, BusyBox, and Vaultwarden, and gave each a directory with a start.sh and a mountpoint. I excluded Pi-Hole and Unbound from my planned lineup because those functions are now handled by our router and would have required messing with special ports.
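
Lingering is the one non-obvious step in that list: without it, the limited user’s services die when its last login ends. It’s a one-liner (the username matches my setup notes elsewhere; yours will differ):

```shell
# Let podmanuser's user services (and containers) run while logged out.
sudo loginctl enable-linger podmanuser

# Confirm:
loginctl show-user podmanuser --property=Linger
```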

Troubleshooting

NFS fought me hard, though. It didn’t help that Cockpit’s terminal kept glitching out on me. I noticed this behavior on ButtonMash’s Cockpit as well. Rebooting and refreshing didn’t do anything, but I regained control by navigating my browser away and back. I eventually got to thinking about what other components might be at fault, and I narrowed it down to either a bad version of Firefox or Wayland. I ran Firefox over SSH from a machine still running Xorg; the culprit was Wayland.

And at this point, my workstation had a record bad meltdown, which ate my remaining blog time. Be sure to read about it next week!

Takeaway

Projects get interrupted. Something comes up and grand plans wait in the background. I’m tired of my server projects doing just that, but it’s happened again.

Final Question

My longest enemy remains: Rootless Podman vs. Network Storage. How do I do it? I look forward to hearing from you in the comments below or on my Socials!

Rocky Server Stack Deep Dive: 2023 Part 5

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am learning more about Podman Quadlets for my homelab. Let’s get started!

Systemd and Quadlets

From my incomplete research going into this topic, I already know Quadlet is a system for efficiently integrating Podman containers with Systemd. It was merged into Podman v4.4, and I had a small pain of a time finding a distribution with both that and legacy BIOS support, along with a list of other requirements.

But what is Systemd? In short: Systemd is the init process – a process that manages other processes – used by most Linux distributions that aren’t trying to optimize for a low RAM or storage footprint. As it turns out, I’ve already had minimal exposure to it while writing unit files for NFS [auto]mounts and a static IP address on Debian. Systemd in turn instantiates units from these unit files to manage the operating system.


While Systemd unit files defining Podman containers can be written by hand, Quadlets can automate their creation based off simpler unit files of its own: .container, .network, .volume, and .kube. The first three look similar enough to concepts I’m familiar enough with that I figure I could hack an example into doing what I need.
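
For instance, a .network and a .volume unit can be nearly empty. The file names, the subnet, and their placement under ~/.config/containers/systemd/ are my assumptions from the documentation, not a tested setup:

```ini
# mynet.network – Quadlet generates mynet-network.service from this.
[Network]
Subnet=10.89.0.0/24

# mydata.volume – likewise becomes mydata-volume.service.
[Volume]
```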

But I’m interested in pods. With .pod unit files only a controversial feature request at best, that leaves me to explore .kube files, which run Kubernetes YAML files. I know nothing about writing Kubernetes YAML files from scratch, and I refuse to cram for them Thanksgiving week.

My project stalled here for a few hours. One Systemd tutorial brought up Syncthing in an example, and I spent a while on a tangent looking at that, but it, too, is too large to cram this week. I unenthusiastically browsed back to Kubernetes, and found:

podman generate kube

Looks like I just might get away with adapting my scripts this week after all. With this in mind, I copied my files from my laptop’s Debian drive to its new-last-week Rocky 9 installation. Focusing on Nextcloud, I cleared out my dead-end work with Fuse, abstracted volumes, and other junk before realizing BusyBox was likely a more suitable testing ground.

My First Kubernetes File

I came up with the following bash script for such a pod:

podman pod stop busyBoxPod
podman pod rm busyBoxPod
podman pod create busyBoxPod
podman create \
--pod busyBoxPod \
--name BusyBox \
--volume fastvolume:/root/disk \
-it \
--rm \
busybox

And here is the Kubernetes YAML that `podman generate kube` produced from it:

# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-4.6.1
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2023-11-23T01:29:45Z"
  labels:
    app: busyBoxPod
  name: busyBoxPod
spec:
  containers:
  - image: docker.io/library/busybox:latest
    name: BusyBox
    stdin: true
    tty: true
    volumeMounts:
    - mountPath: /root/disk
      name: fastvolume-pvc
  volumes:
  - name: fastvolume-pvc
    persistentVolumeClaim:
      claimName: fastvolume

I saved this output as busyBoxPod.yml and returned to Nextcloud.
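
For reference, the round trip looks something like this. The pod and file names match the example above; `podman kube play` is the Podman 4 spelling of the older `podman play kube`:

```shell
# Export the running pod's definition to Kubernetes YAML...
podman generate kube busyBoxPod > busyBoxPod.yml

# ...tear down the original...
podman pod stop busyBoxPod && podman pod rm busyBoxPod

# ...and recreate the whole pod from the YAML file.
podman kube play busyBoxPod.yml
```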

Nextcloud put up a small tantrum getting re-updated for Podman 4.6.1. I had to look up how to use Podman Secrets, and apply :z to volumes to satisfy SELinux. Redis, however, refused to accept a password from Podman Secrets, so I rolled back that change. The pod should insulate it anyway. I got it to a point where it needed a domain name.
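
The Podman Secrets part, for reference: a secret is created once, then handed to a container as a file under /run/secrets/. The secret name and password here are stand-ins; the file-based environment variable comes from the MariaDB image’s documentation:

```shell
# Create a secret from stdin.
printf 'hunter2' | podman secret create mariadb_root_password -

# Mount it into the container; the MariaDB image reads the password
# from the file named by MARIADB_ROOT_PASSWORD_FILE.
podman run -d --name mariadb \
  --secret mariadb_root_password \
  -e MARIADB_ROOT_PASSWORD_FILE=/run/secrets/mariadb_root_password \
  docker.io/library/mariadb:latest
```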

Branching out to bring up Pi-Hole and Caddy, I learned how the default Unbound configuration for the container I used only forwards DNS requests to Cloudflare. I’ll want to fix this later. I used firewall-cmd to forward ports for HTTP, HTTPS, and DNS to underprivileged ports for rootless containers.

Takeaway

UNCLE! I find more and more of my time supposedly working on server is procrastinating and stressing over either minutia or blankly staring at my screens when I muster enough focus to ignore distractions. There’s no way around it; I’m officially burned out on this project. I’ll maybe come back to it after the new year. I really wanted to get my .kube files working for at least Pi-Hole and Caddy, but it’s going to be a hard pass at the moment.

Final Question

I’m considering covering a free/open source game or few over December. What are your recommendations?

I look forward to hearing from you on my Socials!

Rocky Server Stack Deep Dive: 2023 Part 4

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am exploring fuse-overlayfs as a possible patch between Podman and NFS. Last week’s post was practically a freebie, but I expect this one to be a doozy if it’s even possible. Let’s get started!

Context

For my homelab, I want to run Nextcloud in a rootless Podman 3.0.1 container with storage volumes on our NFS. For logistical reasons, Nextcloud needs to be on RedLaptop running Debian 11 (Linux kernel 5.10.0-26-amd64 x86_64). The NFS share I wish to share is mounted via systemd.

My most promising lead is from Podman GitHub maintainer rhatdan on October 28, 2023, where he made a comment about a “fuse file system,” asking his colleague @giuseppe for thoughts, to which there has been no reply as of the afternoon of November 10 [1]. I documented a number of major milestones there, which I’ll be covering here.

File System Overlays

“Fuse file system” turned out to be fuse-overlayfs, a userspace (FUSE) implementation of an overlay file system. Basically: there are times when it’s useful to view two or more file systems at once. A file system overlay designates a lower file system and an upper file system. Any changes (file creation, deletion, movement, etc.) in the combined view manifest in the upper file system, leaving the lower file system[s] alone.
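
The basic invocation is a single command once the four directories exist (paths here are stand-ins):

```shell
mkdir -p lower upper work merged

# Present lower+upper as one tree at ./merged; writes land in ./upper.
fuse-overlayfs -o lowerdir=lower,upperdir=upper,workdir=work merged

ls merged            # union view of lower and upper
fusermount -u merged # unmount when done
```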

Through a lot of trial and error, I set up a lower directory, an upper directory, a work directory, and a mountpoint. My upper directory and work directory had to be on the NFS, but I ran into an error about setting times. I double checked that there were no major problems related to Daylight Savings Time ending, but wasn’t able to clear the error. I sent out some extra help requests, but got no replies (Sunday, Nov. 12). A third of my search results are in Chinese, and the others are either not applicable or locked behind a paywall. Unless something changes, I’m stuck.

Quadlets

Github user eriksjolund got back to me with another idea: quadlets [1]. Using this project merged into Podman 4.4 and above, he demonstrated a Nextcloud/MariaDB/Redis/Nginx setup that saves all files as the underprivileged user running the containers. In theory, this sidesteps the NFS incompatibilities I’ve been experiencing all together.

The first drawback from my perspective is that I need to re-define all my containers as systemd services, which is something I’ve admittedly been meaning to do anyway. A second is again that this is a feature merged into Podman much later than what I’m working with. Unless I care to go digging through the Podman GitHub myself, I’m stuck with old code people will be reluctant to support.

Distro Hunt

Why am I even using Debian still? What is its core purpose? Stability. Debian’s philosophy is to provide proven software with few or no surprises left and the user polishes it to taste. As my own sysadmin, I can afford a little downtime. I don’t need the stability of a distro supporting the most diverse Linux family tree. Besides, this isn’t the first time community support has suggested features in the future of my installation’s code base. Promising solutions end in broken links. RAM is becoming a concern. Apt package manager has proven more needy than I’d care to babysit. If I am to be honest with myself, it’s time to start sunsetting Debian on this system and find something more up-to-date for RedLaptop. I’ll keep it around for now just in case.

My first choice was Fedora to get to know the RedHat family better. Fedora 39 CoreOS looked perfect for its focus on containers, but it looks like it will require a week or two to configure and might not agree with installing non-containerized software. Fedora 39 Server was more feature complete, but didn’t load up for my BIOS (as opposed to the new standard of UEFI); I later learned that new BIOS-based installations were dropped on or around Fedora 37.

I carefully considered other distributions with the aid of pkgs.org. Debian/Ubuntu family repositories only carry Podman up to 4.3. Alpine Linux lacks systemd. Solus is for desktops. OpenSuse Tumbleweed comes with warnings about being prepared to compile kernel modules. Arch is… Arch.

Fresh Linux Installation

With time running out in the week, I decided to forgo sampling new distros and went with minimal Rocky 9. Installation went as well as can be expected. I added/configured cockpit, podman, cockpit-podman, nfs-utils, and nano. I added a podmanuser account, enabled lingering for it, and downloaded the container images I plan on working with on this machine: Pi-Hole, Unbound; Caddy; Nextcloud, Redis, MariaDB; BusyBox.

Takeaway

I write this section on Friday afternoon, and I doubt I have enough time remaining to properly learn Quadlet and rebuild my stack, so I’m going to cut it off here. From what I’ve gathered already, Quadlet mostly uses Systemd unit files, a format I’ve apparently worked with before, but also needs Kubernetes syntax to define pods. I don’t know a thing about using Kubernetes. If nothing else, perhaps this endeavor will prepare me for a larger project where larger-scale container orchestration is needed.

Final Question

Do you know of a way I might have interfaced Podman 3 with NFS? Did I look in the wrong places for help (Debian forums, perhaps)?

I look forward to hearing from you on my Socials!

Works Cited

[1] Shadow_8472, D. “rhatdan” Walsh, E. Sjölund, “Rootless NFS Volume Permissions: What am I doing wrong with my Nextcloud/MaraiDB/Redis pod? #20519,” github.com, Oct. 27, 2023-Nov. 10, 2023. [Online]. Available: https://github.com/containers/podman/discussions/20519#discussioncomment-7410665. [Accessed: Nov. 12, 2023].

Rocky Server Stack Deep Dive: 2023 Part 3

Good Morning from my Robotics Lab! This is Shadow_8472 and today is part 3 of this year’s big server push. Let’s get started!

Tuesday

Podman NFS

I had a slow start to the week, but I decided to poke at whether Podman over NFS was actually as impossible as I thought. It wasn’t. A VERY special thank you to ikkeT, whom I made contact with over the official Podman Discord server, who pointed out, “nfs doesn’t know about selinux.”

Backing up a bit: I had a few minutes waiting for supper today, so I decided to start a clean attempt at mounting a directory over NFS. My trials leading up to creating a persistent volume worked until I tried mounting one from GoldenOakLibry over NFS. Once I removed the SELinux :z relabeling flag per ikkeT’s advice, the container started and I found a flag file I’d previously left in that directory.

Sadly, applying this quick fix did not repair my old scripts. I plan on re-building them this week.

In light of this discovery, I moved a 1TB solid state drive labeled “MineOS” from ButtonMash back to GoldenOakLibry. Around half a dozen micro-blunders later, I additionally removed its line from ButtonMash’s file system table (fstab) and noted that Debian 11 –which my RedLaptop is running– will be facing an End of Life (EoL) situation around a month after Rocky 8 on ButtonMash.

Vaultwarden Migration

It was a bit messy, but I did a “Take II” on migrating Vaultwarden into Caddy’s reach. I used root to copy the vw-data directory from Vaultwarden’s dedicated user to PiHole’s. I copied it again as PiHole’s user to correct root’s ownership of all the sub-directories, and then removed root’s copy… confirming. one. file. at. a. time.

For the migration proper (as well as my NFS testing earlier), I learned about BusyBox, a container with a minimalistic set of command line tools. I spun it up, mounting a newly created Vaultwarden volume alongside the old directory. Inside, I copied all the files over.

I started Vaultwarden using the new volume and Caddy as the reverse proxy to access it, but I also had to stop the production-grade Pi-Hole in favor of the development configuration. I navigated over to it with my local domain and found the migration was a success.

Android root.crt Import

When I got up this morning, I told myself I would install the root certificate from Caddy on my Android tablet. I copied the file over USB, and then went to Settings>Security>OTHER SECURITY SETTINGS>CREDENTIAL STORAGE>User certificates, where a pop-up menu found it.

Certificate in place and environment pointed at my Vaultwarden subdomain, Bitwarden now signs in with little issue.

Wednesday

Goals for Today

Still so much to do. My main target for today will be re-building Nextcloud’s script to store data over NFS so it can be hosted by either ButtonMash or RedLaptop.

Likewise, it will be good to have Pi-Hole’s data hosted in a common space to be hosted in parallel, though I’ll want to consider how the two instances will fight or get along with each other. I’ll want to compose a help request to a Pi-Hole community after I have Nextcloud working.

Minor Cleanup Tasks

Vaultwarden: I migrated my Bitwarden clients on my daily drivers (DerpyChips and my Upstairs Workstation) to the new Vaultwarden server. For Derpy, I did concede to installing Caddy’s root certificate directly into Firefox without pressing to make it use the system’s trust anchors. I may or may not have forgotten about that issue when logging out.

Pi-Hole: I spotted a configuration with the upstream DNS pointed incorrectly. It must not have migrated correctly. It’s now pointed at its local upstream router. I additionally did a little research on a redundant Pi-Hole, and I think I’ll not fuss with joint memory after all.

Unbound: This is more of scoping out a project closely related to Pi-Hole. Unlike many of the other projects I’m using, Unbound’s developer, NLnet Labs, does not appear to have an official Docker container.

RedLaptop’s Package Manager

Debian needs more attention than I pay it on RedLaptop. Judging by its errors, it looks stuck on invalid cryptography signatures related to repositories for OBS, a streaming program I never got working properly. I uninstalled the suite, but had difficulty determining which repository needed to be removed – if any. Follow-up investigation strongly hinted at involvement with Lutris, which I want to keep. I found a page on software.opensuse.org and puzzled out which command to apply for my own setup. This did not change anything.

I poked around further in apt’s config files. I cleared a warning by re-enabling a WINE listing and pointing it at Debian 11’s repositories.

Podman Sorely Out of Date… Maybe?

3 strikes against RedLaptop’s Podman! It doesn’t support secrets. It doesn’t support “volume export” or “volume import” for tarballing volumes (as I found out while duplicating Pi-Hole from ButtonMash just now). And the mission-critical missing piece: networking between containers and pods.

Earlier this month, I learned how Podman 4.0 brought in a new networking stack. RedLaptop has Podman 3.0.1 available per Debian’s “stability first” mentality. When I failed to find Podman’s network sub-command by tab completion, my first instinct was to try installing a newer version from a Debian backports/main or backports-sloppy/main repository. No such package exists, and I found a topic on Reddit confirming my observations.

Compiling a new version myself would be an option, but could take anywhere from an hour to a full week. Maybe a package intended for Debian 12 would work, but the path of least resistance is learning to work with the old standard, slirp4netns. Long story short, for my purposes, I didn’t need to change anything after all, though I did passively learn a little about how Podman networks have IP address ranges just like any other subnet.
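
One consolation on old Podman: containers placed in the same pod share a single network namespace, so they can reach each other over localhost with no named network at all. A quick sketch, with arbitrary image and port choices:

```shell
# Two containers in one pod talk over the shared loopback.
podman pod create --name demo

# A throwaway web server in the pod...
podman run -d --pod demo --name web docker.io/library/busybox \
    httpd -f -p 8080 -h /tmp

# ...reachable from a sibling container via localhost.
# (Even a 404 response here proves the loopback connection works.)
podman run --rm --pod demo docker.io/library/busybox \
    wget -qO- http://localhost:8080
```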

Pi-Hole and Caddy

The plan here is redundancy to cover planned downtime on ButtonMash. I have a goal of starting to scan those family photos by the end of the year.

So, continuing to work on RedLaptop. Pi-Hole is already running with copied volumes from ButtonMash. For port forwarding, I installed firewalld, used the commands from last week for port forwarding, and accidentally locked myself out when reloading the firewall. SSH was still open though, so Cockpit on ButtonMash bailed me out without needing physical access or going straight terminal. While I was at it, I opened the ports I plan on using for now.

I confirmed my work by accessing the admin panel on RedLaptop’s Pi-Hole and testing its DNS lookup with nslookup. To finish the task, I told the home router to recommend RedLaptop as a secondary DNS option over DHCP and gave the two installations in-container hostnames based on their respective host machines.

Caddy will be a different story. RedLaptop is getting its own domain, so I expect more headaches working out domain names than if I were only adapting my Caddyfile.

Goals

I didn’t reach either of my stated goals per se, but that’s OK. My understanding of running redundant Pi-Holes has grown considerably, and I did partially address a few issues on RedLaptop. I’ll work on Nextcloud tomorrow for sure.

Thursday

Nextcloud or Bust

It’s Nextcloud day. To get Nextcloud running on RedLaptop, I made a subdomain in Pi-Hole, added a reverse proxy entry in Caddy, corrected the IPs in the Caddyfile, backed up my working Nextcloud deployment, networked Nextcloud’s pod to Caddy’s and Pi-Hole’s, and added Caddy’s root certificate to my workstation’s trust stores.

Nextcloud then got back to me with “Access through untrusted domain,” pointing me toward config/config.php. I found the window from Cockpit-Podman a bit small for such a large file, so I mounted Nextcloud’s persistence volume in a BusyBox container… no nano text editor. BusyBox had vi (but not vim); Nextcloud’s container had neither. I pulled up a vi/vim cheat sheet.

There exists a meme I once saw about a young programmer trying to exit vim as part of a random text generator. It’s not far off. Even with the cheat sheet, it took me a few minutes – especially with vim having some extra and equally unintuitive shortcuts to save and quit. For the record, I found :w to save and :q to quit.

After changing out RedLaptop’s static IP for nextcloud.redlaptop.lan as a “trusted_domain,” I reached the login screen. I adjusted my password in Bitwarden accordingly.

While trying to add another entry to my Caddyfile, I decided to cement how I’m going to organize it: alphabetical by domain, then by successive subdomains. Top-level domains point at individual machines, each service gets a subdomain, and panels strictly for admin access get an admin subdomain below that. I’ll have to think about the merits of how I assign Podman network IPs to containers. Right now, it’s in the order I first set them up, but I’m thinking a hash might have merit.
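
Sketched as a Caddyfile, that scheme might read like this (the hostnames and upstream ports are hypothetical):

```
# Caddyfile sketch – alphabetical by domain, then by subdomain.
nextcloud.redlaptop.lan {
	reverse_proxy localhost:8080
}

pihole.redlaptop.lan {
	reverse_proxy localhost:8081
}

admin.pihole.redlaptop.lan {
	reverse_proxy localhost:8082
}
```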

Next, I followed up on my work to restore the solid state drive as a GoldenOakLibry NFS share. I re-enabled the automounts on ButtonMash and copied over the required files to RedLaptop. And then I did some last-minute tests on mounting volumes over NFS. I could create volumes on an NFS share no problem, but using them suddenly required root permissions. I launched a help request to the Podman Discord and continued with mounting directories.

Continuing on with an old plan involving three mounts, I aimed MariaDB and the general Nextcloud data volumes at the small, but fast NFS share. But where exactly to host the bulk storage? I examined the old volume with BusyBox and found that users each have their own photos. I’m totally making a dedicated Photo Trunk account tomorrow and mounting it in there.

Redis is recommended for caching data to maintain performance as your database grows larger. On a passing inspection, it didn’t look all that bad to set up, so I downloaded a container and added it to the Nextcloud pod.

Thinking ahead about the final presentation of the Photo Trunk project, I checked in on the image board I thought I wanted, Philomena, only to learn many such projects rely on Amazon Web Services, which is a hard pass for my open source, self-hosted approach.

Goals

I’d say I did a lot better, but I still have a bunch of cleanup to do. Tomorrow, I want to finish this new Nextcloud installation and create accounts for admin, myself, and a special one dedicated to Photo Trunk.

Friday

I started work by navigating to nextcloud.redlaptop.lan and found it totally blank. The pod wasn’t running. Whatever I did after correcting a capitalization mismatch, Podman locked up on me. I rebooted. While Pi-Hole and Caddy went up manually, I found the Nextcloud pod spazzing out. From MariaDB’s container, one representative repetition of many:

2023-10-27 22:10:57+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:11.0.2+maria~ubu2204 started.
chown: changing ownership of '/var/lib/mysql/': Operation not permitted

Nextcloud’s container was similarly complaining about blocks of failed chown operations. I reached out for help again and was recommended to look into root squashing. I tried a couple of different things, updated GoldenOakLibry, and still got the same result. Further investigation will be required.
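
Root squashing is a server-side NFS setting: by default the server remaps requests from a client’s root user to an unprivileged user, which is exactly what makes chown fail. A hypothetical /etc/exports line to relax it (path, subnet, and options are stand-ins, and no_root_squash has real security implications):

```
# /etc/exports sketch – no_root_squash lets a client's root act as
# root on the share instead of being remapped to nobody.
/srv/nfs/fastshare 192.168.0.0/24(rw,sync,no_root_squash,no_subtree_check)
```

After editing, `sudo exportfs -ra` re-reads and applies the export table.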

Goals

It grows close to sundown, so I must break it off here. My goal for Saturday night/Sunday is again Nextcloud over NFS. I’ve been having a lot of grief out of Podman locking up on me, so I might have to reset the whole thing. I’ve had to do it before, but with everything in the same place this time, I’ll want to write a script to pull all container images I use.

Saturday Night

New Automation Scripts

Podman is still giving me issues with containers stuck in improper states. I prepared a script to re-download all my container images so I don’t have to rebuild manually if this keeps happening. While this is a workaround, it may let me notice a pattern if I observe Podman failing enough times in the long run. Right now though, the only pattern I see is that rebooting semi-fixes Podman hanging within a few minutes of me working on it.

I also created a script to call each of the other startup scripts. I don’t anticipate needing it in the long term though. One thing of note for both this and the reload script is that I wrote it from my tablet, which has previously refused to load Cockpit through the browser I was using.

Podman Reset and Cleanup

STUPID! While going to do the heavy duty Podman reset, I specifically checked its warning:

$ podman system reset
WARNING! This will remove:
- all containers
- all pods
- all images
- all build cache
Are you sure you want to continue? [y/N]

NO MENTION OF VOLUMES!!! It nuked my volumes along with everything else. For what it’s worth, I can still rebuild relatively quickly. I’ll just design my system to work with “bind mounts” instead of relying so heavily on volumes this time around. I was NOT wanting a wipe this thorough!
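
On Podman 4 and later, volumes can at least be tarballed before destructive operations (the volume and file names are stand-ins; my RedLaptop’s Podman 3 lacks these sub-commands entirely):

```shell
# Back up a named volume to a tarball before anything risky...
podman volume export fastvolume --output fastvolume.tar

# ...and restore it into a (re-created) volume afterward.
podman volume import fastvolume fastvolume.tar
```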

I took this opportunity in the rough to polish a few of my file names. My biggest loss, besides my fallback Nextcloud installation, was the RedLaptop copy of Pi-Hole, which I’d modified a bit from its original. On my upstairs workstation, at least, I also updated RedLaptop’s root certificate authority certificate.

Goals

A few suggestions to try came in over Sabbath. My goal for tomorrow is to look into them if nothing else.

Sunday

I spent my mental energy on an event today, but I got a little research in nonetheless. To recap: rootless Podman does things with namespaces which NFS doesn’t support.

GitHub user rhatdan, a maintainer on Podman, speculated I could use something called a fuse file system. I don’t grasp the full concept yet, but it appears to be the better option should it develop.

Another GitHub user going by eriksjolund separately suggested manually managing my namespace and helped me clear out a fair number of errors. It took a few tries, but as of writing, I suspect this course of action may prove a dead end because it appears to conflict with how I’m interfacing my containers with Caddy. This suggestion too needs additional time to develop, but develop it sure did when I hit refresh and found a fresh reply. It was interesting – to say the least – working out for myself what some of the recommended changes to my script did.

This just in on the way to publication: it looks like manual UID/GID mapping has run its course unless someone new has a brilliant idea that’s simpler than compiling Podman from source or updating Debian.

To be honest, I’m getting burned out on server work. It’s been an awesome run seeing things start coming to life, but I feel the need for some time off from it. I’ll follow up with these two leads, but don’t be surprised if I do a Re-Learning Linux post next week.

Takeaway

I’m beginning to have moments where I see ButtonMash, RedLaptop, and GoldenOakLibry as parts of a greater whole. By working out how the former two can operate in parallel, I’m creating a system with redundancy built in. I can work on one and improve the other when I’m confident in my ability to do it correctly.

Final Question

Rootless Podman NFS. How?

Works Cited

[1] Shadow_8472, D. Walsh “rhatdan,” E. Sjölund “eriksjolund,” “Rootless NFS Volume Permissions: What am I doing wrong with my Nextcloud/MariaDB/Redis pod?,” github.com, Oct. 27, 2023. [Online]. Available: https://github.com/containers/podman/discussions/20519. [Accessed Oct. 30, 2023].

Rocky Server Stack Deep Dive: 2023 Part 2

MAJOR progress! This week, I’ve finally cracked a snag that’s been holding me back for two years. Read on as I start a deep dive into my Linux server stack.

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am continuing renovations on my home server, ButtonMash. Let’s get started!

The daily progress reports system worked out decently well for me last week, so I’ll keep with it for this series.

Caddy is an all-in-one piece of server software. My primary goal this week is to get it listening on port 44300 (the HTTPS port multiplied by 100 to get it out of privileged range) and forwarding vaultwarden.buttonmash.lan and bitwarden.buttonmash.lan to a port Vaultwarden, the third-party Bitwarden-compatible server I use, is listening on over ButtonMash’s loopback (internal) network.

Tuesday Afternoon

From my Upstairs Workstation running EndeavourOS, I started off with a system upgrade and reboot while my workspace was clean. From last week, I remembered Vaultwarden was already rigged for port 44300, but straight away, I also remembered its preferred configuration is HTTP coming into the container, so I’ll be sending it to 8000 instead.

My first step was to stop the systemd service I’d set up for it and start a container without the extra Podman volume and ROCKET arguments needed for it to manage its own HTTPS encryption. Getting my test install of Caddy going was trickier. I tried to explicitly disable its web server, but figured it was too much trouble for a mere test, so I moved on to working with containers.

While trying to spin up a Caddy container alongside Pi-Hole, I ran into something called rootlessport hogging port 8000. I ran updates and rebooted the server. And then I realized I was trying to put both Caddy and Vaultwarden on the same port! I got the two running at the same time and arrived on Caddy’s slanted welcome page both with IP and via Pi-Hole-served domain_name:port_number.

Subdomains are my next target. I mounted a simple Caddyfile pointing to Vaultwarden and got stuck for a while researching how I was going to forward ports 80 and 443 to 8000 and 44300, respectively. Long story short, I examined an old command I used to forward DNS traffic to Pi-Hole, and after much background research about other communication protocols, I decided to forward just TCP and UDP. I left myself a note in my administration home directory.

DNS: Domain Name System – Finds IP addresses for URLs.

sudo firewall-cmd --zone=public --add-forward-port=port=8000:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=8000:proto=udp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=44300:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=44300:proto=udp:toport=44300 --permanent

I still don’t get a reply from vaultwarden.buttonmash.lan. I tried nslookup, my new favorite tool for diagnosing DNS, but it was from observing Caddy’s cluttered logs that I spotted it rejecting my domain name because it couldn’t authenticate it publicly. I found a “directive” to add to my reverse proxy declaration to use internal encryption.
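For the record, the directive in question is tls internal. A minimal Caddyfile along these lines (the exact layout is my reconstruction; only the hostname and port come from this post):

```
vaultwarden.buttonmash.lan {
    # sign this site with Caddy's own local CA
    # instead of requesting a public certificate
    tls internal
    reverse_proxy localhost:8000
}
```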

But I still couldn’t reach anything of interest – because reverse-proxied traffic was just bouncing around inside the Caddy container! The easy solution, I think, would be to stack everything into the same pod. I still want to try keeping everything in separate containers though. Another easy solution would be to set the network mode to “host,” which comes with security concerns but would work in line with what I expected starting out. However, Podman comes with its own virtual network I can hook into instead of lobbing everything onto the host’s localhost as I have been doing. Learning this network will be my goal for tonight’s session.

Tuesday Night

The basic idea behind using a Podman network is to let your containers and pods communicate. While containers in a pod communicate as if over localhost, containers and pods using a Podman network communicate as if on a Local Area Network, down to IP address ranges.

My big question was whether this works across users, but I couldn’t find anyone saying one way or the other. Eventually, I worked out a control test. Adding the default Podman network, “podman,” to the relevant start scripts, I used ip a where available to find the containers’ IP addresses. Pi-Hole then used curl to grab a “Hello World!” hosted by Caddy on the same user. I then curled the same ip:port from Vaultwarden’s container under another user and failed to connect. This locked-down behavior is expected from a security point of view.
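The shape of that control test, reconstructed after the fact as a sketch (container names, images, and the address are examples, not my exact commands):

```shell
# Sketch of the cross-user connectivity check, wrapped in a function.
cross_container_check() {
    # Within ONE user account: a server container on the default 'podman' network.
    podman run -d --name webtest --network podman docker.io/library/caddy:latest
    podman exec webtest ip a   # note the container's address, e.g. 10.88.0.2
    # A second container under the SAME user can then reach it:
    podman run --rm --network podman docker.io/library/alpine:latest \
        wget -qO- http://10.88.0.2:80
    # Run that last command from a DIFFERENT user's account
    # and it fails to connect instead.
}
```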

On this slight downer, I’m going to call it a night. My goal for tomorrow is to explore additional options and settle on one even if I don’t start until the day after. In the rough order of easy to difficult (and loosely the inverse of my favorites), I have:

  1. Run Caddy without a container.
  2. Run Caddy’s container rootfully.
  3. Run Caddy’s container in network mode host.
  4. Move all containers into a single user.
  5. Perform more firewalld magic. (Possibly a flawed concept)
  6. (Daydreaming!!) Root creates a network all users can communicate across.

Whatever I do, I’ll have to weigh factors like security and the difficulty of maintenance. I want to minimize the need for using root, but I also want to keep the separate accounts for separate services in case someone breaks out of a container. At the same time, I need to ask if making these connections will negate any benefit for separating them across accounts to begin with. I don’t know.

Wednesday Afternoon

I spent the whole afternoon composing a help request.

Wednesday Night

The names I am after for higher-power networking of Podman containers are Netavark and Aardvark. Between 2018 and around February 2022, it would have been Slirp4netns and its plethora of plugins. Here, approaching the end of 2023, that leaves around four years’ worth of obsolete tutorials against not quite two years of current information – and that’s assuming everyone switched the moment the new standard was released, which is an optimistic assumption to say the least. In either case, I should be zeroing in on my search.

Most discouraging is how most of my search results involving Netavark and Aardvark end up pointing back to the Red Hat article announcing their introduction for fresh installs in Podman 4.0.

My goal for tomorrow is to make contact with someone who can point me in the right direction. Other than that, I’m considering moving all my containers to Nextcloud’s account or creating a new one for everything to share. It’s been a while since I’ve been this desperate for an answer. I’d even settle for a “Sorry, but it doesn’t work that way!”

Thursday Morning

Overnight I got a “This is not possible, podman is designed to fully isolate users from each other, that includes networking,” on Podman’s GitHub from Luap99, one of the project maintainers [1].

Thursday Afternoon

Per Tuesday Night’s entry, I have multiple known solutions to my problem. While I’d love an extended discourse about which option would be optimal from a security standpoint in a production environment, I need to remember I am running a homelab. No one will be losing millions of dollars over a few days of downtime. It is time to stop the intensive researching and start doing.

I settled on consolidating my containers into one user. The logical choice was Pi-Hole’s account: its home directory was relatively clean, and I’d only need to migrate Vaultwarden. I created base directories for each service, noting how I will need to make my own containers some day for things like game servers. For now, Pi-Hole, Caddy, and Vaultwarden are my goals.

Just before supper, I migrated my existing Pi-Hole from hard-mounted directories to Podman volumes using Pi-Hole’s Settings>Teleporter>Backup feature.

Thursday Night

My tinkerings with Pi-Hole did not go unnoticed. At family worship, I had a couple family members reporting some ads slipping through. At the moment, I’m stumped. If need be, I can re-migrate by copying the old instance with a temporary container and both places mounted. My working assumption though is that it’s normal cat-and-mouse shenanigans with blocklists just needing to update.

It’s been about an hour, and I just learned that any-subdomain.buttonmash.lan and buttonmash.lan are two very different things. Every subdomain I plan to use on ButtonMash needs to be specified on PiHole as well as Caddy. With subtest.buttonmash.lan pointed at Caddy and the same subdomain pointed at my port 2019 Hello World!, I get a new error message. It looks like port 80 might be having some trouble getting to Caddy…

$ sudo firewall-cmd --list-all

forward-ports:
port=53:proto=udp:toport=5300:toaddr=

That would be only Pi-Hole’s port forward. Looking at the note I left myself Tuesday, I can see I forwarded ports 8000 and 44300 into themselves! The error even ended up in the section above. Here’s the revised version:

sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=udp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=udp:toport=44300 --permanent

I also removed Tuesday’s flubs, but none of these changes showed up until I used

sudo firewall-cmd --reload

And so, with Pi-Hole forwarding subdomains individually and the firewall actually forwarding the HTTP and HTTPS ports (never mind that incoming UDP is still blocked for now), I went to https://vaultwarden.buttonmash.lan and was greeted with Firefox screaming at me, “Warning: Potential Security Risk Ahead” as expected. I’ll call that a good stopping point for the day.
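A note for future me: if Pi-Hole’s dnsmasq backend is in play, a single wildcard entry can cover a domain and every subdomain under it at once, sparing the one-by-one bookkeeping. A hedged sketch (file name and address are examples, and I haven’t tried this myself yet):

```
# /etc/dnsmasq.d/02-lan-wildcard.conf inside the Pi-Hole container
# answers buttonmash.lan AND every *.buttonmash.lan with one address
address=/buttonmash.lan/192.168.0.10
```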

My goal for tomorrow is to finish configuring my subdomains and extract the keys my devices need to trust Caddy’s root authority. It would also be good to either diagnose my Pi-Hole migration or re-migrate it a bit more aggressively.

Friday Afternoon

To go any farther, I need to extract Caddy’s root Certificate Authority (CA) certificate and install it into the trust store of each device I expect to access the services I’m setting up. My confidence is shaky here, but there are two layers of certificates: root and intermediate. The root key is kept secret and is used to generate intermediate certificates. Intermediate keys are issued to websites to be used for encryption when communicating with clients. Clients can then use the root certificate to verify that a site’s certificate was made from an intermediate key generated from the CA’s root key. Please, no one quote me on this – it’s only a good-faith effort to understand a very convoluted ritual our computers play to know who to trust.

For containerized Caddy installations, this file can be found at:

/data/caddy/pki/authorities/local/root.crt

This leads me to the trust command. Out of curiosity, I ran trust list on my workstation and lost count around 95, but I estimate between 120 and 150. To tell Linux to trust my CA, I entered:

trust anchor <path-to-.crt-file>

And then Firefox gave me a new warning: “The page isn’t redirecting properly,” suggesting an issue with cookies. I just had to correct some mismatched IP addresses. Now, after a couple years of working toward this goal, I finally have that HTTPS padlock. I’m going to call it a day for Sabbath.
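Incidentally, my shaky two-layer description above can be sanity-checked with a throwaway openssl experiment. This toy collapses the intermediate layer (the root signs the site certificate directly), and every name and file here is invented, but it shows the trust relationship:

```shell
# 1. A toy root CA: a private key plus a self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
    -subj "/CN=Toy Root CA" -days 1
# 2. A site key and a certificate signing request for it.
openssl req -newkey rsa:2048 -nodes -keyout site.key -out site.csr \
    -subj "/CN=site.lan"
# 3. The root CA signs the site's certificate.
openssl x509 -req -in site.csr -CA root.crt -CAkey root.key \
    -CAcreateserial -out site.crt -days 1
# 4. A client holding only root.crt can verify the site's certificate.
openssl verify -CAfile root.crt site.crt
```

The final command reports the site certificate as OK, using nothing but the root certificate, which is exactly the job trust anchor does for my devices.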

My goal for Saturday night and/or Sunday is to clean things up a bit:

  • Establish trust on the rest of the home’s devices.
  • Finish Vaultwarden migration
  • Reverse proxy my web UIs to go through Caddy: GoldenOakLibry, Pi-Hole, Cockpit (both ButtonMash and RedLaptop)
  • Configure Caddy so I can access its admin page as needed.
  • Remove -p ####:## bound ports from containers and make them go through Caddy. (NOT COCKPIT UNTIL AVAILABLE FROM REDUNDANT SERVER!!!)
  • Close up unneeded holes in the firewall.
  • Remove unneeded files I generated along the way.
  • Configure GoldenOakLibry to only accept connections through Caddy. Ideally, it would only accept proxied connections from ButtonMash or RedLaptop.
  • Turn my containers into systemd services and leave notes on how to update those services
  • Set up a mirrored Pi-Hole and Caddy on RedLaptop

Saturday Night

Wow. What was I thinking? I could spend a month in and of itself chewing on that list, and I don’t see myself as having the focus to follow through with everything. As it was, it took me a good half hour to just come up with the list.

Sunday

I didn’t get nearly as much done as I envisioned over the weekend because of a mental crash.

Nevertheless, I did do a little additional research. Where EndeavourOS accepted the root certificate immediately, such that Firefox displayed an HTTPS padlock, the process remains incomplete where I tried it on PopOS today. I followed the straightforward instructions for Debian-family systems found on the Arch Wiki, but when I run update-ca-certificates, it claims to have added something no matter how many times I repeat the command, without any of the numbers actually changing. I’ve reached out for help.

Monday Morning

I’ve verified that my certificate shows up in /etc/ssl/certs/ca-certificates.crt. This appears to be an issue with Firefox and KDE’s default browser on Debian-based systems. I’ll decide another week if I want to install the certificate directly to Firefox or if I want to explore the Firefox-Debian thing further.
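From what I can tell, Firefox keeps its own per-profile certificate database (NSS) and ignores the system trust store, which would explain a certificate sitting happily in /etc/ssl/certs while Firefox still screams. One command-line way in, assuming the libnss3-tools package provides certutil; the nickname and profile glob are placeholders I made up:

```shell
# Sketch: add a CA certificate directly into each Firefox profile's NSS store.
add_cert_to_firefox() {
    local cert="$1"
    local profile
    for profile in "$HOME"/.mozilla/firefox/*.default*; do
        # -t "C,," marks the certificate as a trusted CA for SSL
        certutil -A -n "Caddy Local CA" -t "C,," -i "$cert" -d sql:"$profile"
    done
}
# Usage: add_cert_to_firefox root.crt   # then restart Firefox
```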

Takeaway

Thinking back on this week, I am again reminded of the importance of leaving notes about how to maintain your system. Even a foggy AM brain can jot down the relevant URL that made everything clear, where the same page may be difficult to re-locate in half a year.

My goal for next week is to develop Nextcloud further, though I’ll keep in mind the other list items from Friday.

Final Question

What do you think of my order of my list from Friday? Did I miss something obvious? Am I making it needlessly overcomplicated?

Let me know in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, Luap99, “How Do I Network Rootless Containers Between Users? #20408,” github.com, Oct. 19, 2023. [Online]. Available: https://github.com/containers/podman/discussions/20408. [Accessed Oct. 23, 2023].

[2] Arch Wiki, “User:Grawity/Adding a trusted CA certificate,” archlinux.org, Oct. 6, 2022 (last edited). [Online]. Available: https://wiki.archlinux.org/title/User:Grawity/Adding_a_trusted_CA_certificate#System-wide_–_Debian,_Ubuntu_(update-ca-certificates). [Accessed Oct. 23, 2023].

I Have a Cloud!!!

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am celebrating my Project of the Year. I have Nextcloud installed! Let’s get started!

My NextCloud Experience in Brief

This week, I achieved a major milestone for my homelab – one I’ve been poking at since at least early March of this year, when I noted, “Nextcloud has been a wish list item since I gave up using Google’s ecosystem” [1].

As I learned more about the tech I’m working with, I added specifications for storing scanned pictures on high-capacity but slow hard disks while making smaller files available with the speed of solid state – only to learn later how rootless Podman is incompatible with NFS. I studied “Docker Secrets” to learn best practices for password management – only to move the project to an older version of Podman without that feature. One innovation to survive this torrent of course corrections is running the containers in a pod, but even that was shifted around in the switch from Rocky 8 to Debian.

Two major trials kept me occupied for a lot of this time. The first involved getting the containers to talk to each other, which they weren’t willing to do over localhost, but they were when given a mostly equivalent IPv4 loopback address.

The second was the apparent absence of a database despite multiple attempts to debug my startup script. Nextcloud was talking to MariaDB, but it was like MariaDB didn’t have the database specified in the command to create its container. For this, I created an auxiliary MariaDB container in client mode (Thank-you-dev-team-for-this-feature!) and saw enough to make me think there was none. I wasn’t sure though.

One Final Piece

#remove MariaDB’s “dirty” volume
podman volume rm MariaDBVolume
#reset everything
./resetVolumes
./servicestart

There was no huge push this week. I came across an issue on GitHub [2] asking why no database was being created. By starting a bash shell within the MariaDB container, I found there were some files from some long-forgotten attempt at starting a database. All I had to do was completely reset the Podman volume instead of pruning empty volumes as I had been doing.
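The underlying rule, as I now understand it: the MariaDB image only acts on MARIADB_DATABASE and friends during first-time initialization, which only happens when the data directory is empty, so stale files silently suppress database creation. A sketch for checking before blaming the container (volume name from my script; not something I actually ran at the time):

```shell
# If the volume's mountpoint already has files in it, the MARIADB_*
# environment variables will be ignored on the next container start.
check_mariadb_volume() {
    local mnt
    mnt=$(podman volume inspect MariaDBVolume --format '{{.Mountpoint}}') || return 1
    if [ -n "$(ls -A "$mnt" 2>/dev/null)" ]; then
        echo "volume not empty: first-run initialization will be skipped"
    else
        echo "volume empty: MARIADB_DATABASE will be created on first start"
    fi
}
```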

Future Nextcloud Work

Now that I have the scripts to produce fresh instances, I still have a few work items I want to keep in mind.

I expect to wipe this instance clean and create a proper admin account separate from my main username, a practice I want to get better at when setting up services.

Adjacent to that, I’ll want access on my tablet, which will entail getting the Bitwarden client to cooperate with my home server.

I still want my data on GoldenOakLibry. I’m hoping I can create a virtual disk or two to satisfy Podman, which RedLaptop can in turn relay over NFS to network storage.

Final Question

Have you taken the time to set up your own personal cloud? I look forward to hearing about your setup in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, “I Studied Podman Volumes,” letsbuildroboticswithshadow8472.com, March 6, 2023. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2023/03/06/i-studied-podman-volumes/. [Accessed Oct. 2, 2023].

[2] rosshettel, thaJeztah, et al., “MYSQL_DATABASE does not create database,” github.com, July 9, 2016. [Online]. Available: https://github.com/MariaDB/mariadb-docker/issues/68. [Accessed Oct. 2, 2023].

What are Relational Databases?

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am targeting Nextcloud for major milestone completion. Let’s get started.

Previously on my Nextcloud saga, I finally managed to run a pod with both Nextcloud and MariaDB in their own containers and get them to talk. The words they exchanged were along the lines of “Access Denied.” I also have a script to add a temporary third container for database command line access.

My next immediate job is learning enough MySQL to diagnose Nextcloud’s refusal to create an admin account. I found a few commands to try and learned that the database I expected to be there on container creation didn’t appear to be. Container logs didn’t report any errors. I need some more background research, even if that’s the only thing I do this week.

Most important things first: what is the relationship between MariaDB and MySQL? I’ve been treating the former as if it were an implementation or distribution of the latter – like the difference between Debian and Linux. But according to MariaDB’s site, it’s a fork that avoids “vendor lock-in” while maintaining protocol compatibility with MySQL [1]. So MySQL help should still work for MariaDB on a “close enough” basis, sort of like how Debian solutions may still work on Ubuntu systems. When available: use a matching guide.

Contrary to what I said in my closing words last time I handled Nextcloud, writing commands in all caps is only convention – the software is usually a lot more forgiving. SELECT equals sELeCT, which equals select and SelEct, as well as the other 60 possible combinations.

MariaDB is what is called a “relational database.” Fancy phrase to me, but here goes an explanation. Data is information. Information can be organized into tables for later retrieval. Zoom out one level, and now tables with information themselves become data that interacts with information in other tables. MariaDB can track these “relations.”
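Here’s a toy of that zoomed-out idea using sqlite3 as a stand-in for MariaDB (same relational concept, zero setup); the schema is entirely invented:

```shell
# Two tables, where a column in one refers to rows in the other - the "relation".
result=$(sqlite3 :memory: "
CREATE TABLE authors(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE posts(id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
INSERT INTO authors VALUES (1, 'Shadow_8472');
INSERT INTO posts VALUES (1, 1, 'What are Relational Databases?');
SELECT authors.name, posts.title
FROM posts JOIN authors ON posts.author_id = authors.id;
")
echo "$result"
```

The JOIN walks the relation from posts back to authors, stitching the two tables into one answer.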

Takeaway

Needless to say, relational databases get messy fast. Considering how my database is meant to be locked up tight within a pod on a server that blocks direct access, a containerized web UI I can expose will do nicely. MariaDB’s website has a lengthy list of graphical clients, but only phpMyAdmin showed up as having this feature when I used “find on page” [2]. Importantly, it also shows up on docker.io as a Docker Official Image, which I can run on Podman.

Final Question

As of posting, I’m planning on spending a week or few each learning phpMyAdmin and refining my MariaDB understanding. Otherwise, maybe someone can tell me why Nextcloud isn’t finding the expected database. The following is a command from my pod creation script.

podman create \
        --pod NextcloudRedLaptop \
        --name MariaDBContainer \
        -v MariaDBVolume:/var/lib/mysql:Z \
        -e MARIADB_ROOT_PASSWORD="<gibberish1>" \
        -e MARIADB_DATABASE=NextcloudDB \
        -e MARIADB_USER=nextcloudDbUser \
        -e MARIADB_PASSWORD="Gibberish2" \
        --restart on-failure \
        docker.io/library/mariadb:latest

And here is what I think is the important output from MariaDB:

MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.010 sec)

If I understand what I need correctly, I should have had one show up titled “NextcloudDB” on container creation.

And so, my final question is this: Why doesn’t it work? Am I even looking in the right spot?

If I oversimplified or got something wrong about relational databases, be sure to tell me all about it in the comments!

Works Cited

[1] MariaDB, “MariaDB vs. MySQL The difference between choice and vendor lock-in,” MariaDB, [Online]. Available: https://mariadb.com/database-topics/mariadb-vs-mysql/. [Accessed Sept. 18, 2023].

[2] MariaDB, “Graphical and Enhanced Clients,” MariaDB, [Online]. Available: https://mariadb.com/kb/en/graphical-and-enhanced-clients/. [Accessed Sept. 18, 2023].

Nextcloud Soon?

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am pressing on towards my very own Nextcloud deployment beyond a proof of concept. Let’s get started!

Preparing an Account

Last week, I learned that rootless Podman comes with certain challenges when trying to access a network drive. ButtonMash’s Rocky Linux 8 hard drive is tiny, so I don’t want to use it for storage. That leaves my retired laptop, “RedLaptop,” for the job.

I use dedicated user accounts to isolate processes. The first order of business was to create a new account and log in through the Cockpit web interface. I had a bit of trouble when the underlying SSH couldn’t connect because Debian doesn’t like capital letters in its usernames, but the nonexistent, capital-containing username was flushed out while diagnosing user groups per advice on Discord. The account was additionally locked to prevent normal logins via password.
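The gist of that account prep, sketched after the fact (the username is a placeholder; my actual commands may have differed):

```shell
# Sketch: create and lock down a dedicated service account.
prepare_service_account() {
    local user="$1"    # keep it lowercase: Debian rejects capitals by default
    sudo useradd --create-home "$user"
    # Lock the password so nobody logs in with it directly; admin access
    # still works through Cockpit or 'su' from a sudoer session.
    sudo usermod --lock "$user"
}
# Usage: prepare_service_account nextcloudhost
```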

Preparing Containers

I’ve been developing a couple scripts for running Nextcloud on ButtonMash in a pod, so I copied them and the secrets directory (password storage) over to RedLaptop and tried them out. Errors galore! I downloaded the Nextcloud and MariaDB container images, but the older version of Podman (3.0.1 on Debian vs. 4.4.1 on Rocky 8) meant missing features – namely Podman secrets and mounting volumes on pod creation.

The secrets were easy to revert. I just had to remove the --secret flags and reinstate the passwords surrounded by quotation marks. Unlike best practices in later versions, volumes mounted nicely into their respective containers. I double checked one with

podman inspect <container ID> | grep -i volume

and found references to the correct volume.

New Territory

At long last, I made it to the Nextcloud installation screen where I need to make an admin account. From here on, all I know comes from the little scouting I did way back using the SQLite database offered by the Nextcloud container as opposed to a more capable one.

I got an error about failing to connect to the database. I found a username with a capitalization error and I didn’t actually have a named database created within MariaDB’s container. But even with both of those fixed, I kept getting kicked back to Nextcloud’s create-an-admin screen.

And here my research for the week runs out of time. MariaDB’s log shows two errors I suspect to be the root issue:

<timestamp> 0 [Warning] mariadbd: io_uring_queue_init() failed with errno 1
<timestamp> 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF

Takeaway

I so want to be done with this project. That is all.

Final Question

I am left without a solution for now. For all I know, it may be low RAM, which I’m already maxed out on for my model of laptop – meaning I’d need to start over again with something else. If anyone has any ideas, I’d be happy to hear about them in the comments below or on my Socials.

Podman-Nextcloud: Climb Shorter Mountains

Good Morning from my Robotics Lab! This is Shadow_8472 and today it’s bully or be bullied as I take another swing at getting my Nextcloud instance even partially usable. Let’s get started!

If there’s one long-term project of mine that just loves humiliating me, it’s getting Nextcloud operational. My eventual goal is to have it running in a rootless Podman container with a way to quickly move it to an auxiliary server. My strategy thus far has been to prepare three Podman volumes hosted on GoldenOakLibry (NAS) over NFS, covering the speed needs of the MariaDB and Nextcloud volumes with an SSD and the capacity needs of the PhotoTrunk volume with a RAID 5 array of HDDs.

NAS: Network Attached Storage

NFS: Network File System

SSD: Solid State Drive

HDD: Hard Disk Drive

Lowering Expectations

I’ve lost count of how many times NFS has given me grief on this project, so I eliminated it. I moved the SSD from where it was on GoldenOakLibry to ButtonMash, my main server computer. I added it to /etc/fstab – bricking Rocky Linux. ButtonMash is dual booted, so I booted to Debian for repairs.

Rocky’s installation uses an LVM2 layout, which Debian can’t read by default. An LVM2 package exists, and I installed it. LVM2 logical volumes show up in lsblk as sub-partitions of an actual partition, and it is these sub-partitions that get mounted, for example:

sudo mount /dev/rl_buttonmash/root /mnt/temp

to mount the logical volume that shows up as rl_buttonmash-root. While I did have to explore a bit for the fix, it’s a very good thing when each side of a dual-booted machine can repair the other. Mounting a file system is a very important tool in that kit.

Upon closer inspection, a contributing factor to bricking Rocky was the root account being locked. The computer booted into an emergency mode and got stuck in a loop ending with “Press ENTER to continue…” Unlocking root didn’t get me anywhere when I looked at the logs per the loop’s recommendation, but the command lsblk -f clued me in that I was mounting the drive using the wrong file system type, an error I soon remedied.
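For the record, this fstab footgun has a standard mitigation: the nofail mount option lets boot continue even when an entry is wrong or the drive is absent. A sketch of the kind of entry I needed (the UUID and file system type here are examples; lsblk -f reports the real ones):

```
# /etc/fstab - mount the transplanted data drive
# 'nofail' keeps a bad entry from dropping boot into emergency mode
UUID=1234abcd-0000-0000-0000-000000000000  /mnt/datadrive  ext4  defaults,nofail  0  2
```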

Project Impossible

The move hardly fixed anything, as I had hoped it would. I kept getting NFS-related errors when trying to run the pod, even after moving to a new mountpoint I’d never touched with NFS automounts. I even tried mounting the volumes using hard links pointed at the mounted “data drive,” and I still couldn’t get a working Nextcloud instance. Somewhere in my shambling among the apparently limited content available on this topic online, I found the following warning in Oracle’s Podman documentation:

Caution: When containers are run by users without root permissions, Podman lacks the necessary permissions to access network shares and mounted volumes. If you intend to run containers as a standard user, only configure directory locations on local file systems [1].

Rootless Podman lacks network share permissions. OK, so NFS is out unless I can selectively give Podman network permission without going full root. Until then, Podman is limited to the local disk, and if I’m understanding this warning correctly, mounted drives are also off the table. My plans for a Photo Trunk upgrade may be grounded indefinitely, and with ButtonMash’s Rocky drive being only 60GB, I’m not looking to burden it with anything resembling bulk storage.

Takeaway

The next logical move would be to rebuild the project on a computer with more storage. Barring a full makeover of ButtonMash, I do have RedLaptop as an auxiliary server. I made a new account, but in all reality, this inspiration came after my research cutoff. It’s a project for another week once again.

Final Question

My project directory is messy with scripts to the point where I started a README file. Have you ever had a project so involved as to need one?

I look forward to hearing about it in the comments below or on my Socials.

Works Cited

[1] Oracle, “Configuring Storage for Podman,” oracle.com, [Online]. Available: https://docs.oracle.com/en/operating-systems/oracle-linux/podman/podman-ConfiguringStorageforPodman.html. [Accessed July 27, 2023].