Server Rebuild With Quadlet

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am continuing my work on my home’s network. Let’s get started!

State of the Homelab

Bitwarden. I self-host using Vaultwarden, a third-party server implementation. Done properly, it fits nicely into a larger homelab stack, but its OCI container can stand alone in a development environment; due to skill limitations, that standalone configuration is what I’ve been running. My recent network work has invalidated my manually self-signed certificates, and I’d rather focus my efforts on upgrades instead of re-learning the old system just to maintain it.

Today, I am working on my newest server, Joystick (Rocky Linux 9). I compiled some command-by-command notes on using `podman generate systemd` to make self-starting, rootless containers, but just as I was getting the hang of it again, a warning message encouraged me to use a newer technique: Quadlet.

Quadlet

Quadlets? I’ve studied them before, but failed to grasp key concepts. It finally clicked, though: they replace not just running `podman generate systemd` once I have a working prototype, but also everything I might want to do leading up to that point, including defining Podman networks and volumes. Just write your Quadlet definitions once, and the system handles the rest.
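
To make the idea concrete, here is a minimal sketch of the definition types I care about, assuming a recent Podman reading rootless files from ~/.config/containers/systemd/ (every name here is a placeholder, not one of my actual files):

# example.container
[Unit]
Description=Example rootless container

[Container]
Image=docker.io/library/busybox:latest
Network=example.network
Volume=example.volume:/data

[Install]
WantedBy=default.target

# example.network needs only a [Network] section; example.volume, a [Volume] section.

After a `systemctl --user daemon-reload`, Podman’s generator turns these into example.service, example-network.service, and example-volume.service, wiring up the dependencies between them automatically.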

The tutorial I found that best matches my use case is at mo8it.com [1]; follow the link under Works Cited for the full text. It’s very detailed; I couldn’t have done a better job myself. But it doesn’t cover everything, like how `sudo su user` isn’t a true login as far as `systemctl --user …` is concerned. I had to use a twitchy Cockpit terminal for that (a Wayland-Firefox bug).
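
For anyone hitting the same wall, a sketch of the symptom and the usual workarounds (the exact error text varies by version, and the username is a placeholder):

# Not a true login session – the user’s session bus never gets set up:
sudo su poduser
systemctl --user status   # fails with a “Failed to connect to bus”-style error

# Workarounds: a genuine login over SSH...
ssh poduser@localhost
# ...or pointing the shell at the user’s runtime directory by hand:
export XDG_RUNTIME_DIR=/run/user/$(id -u)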

Caddy

Caddy is the base of my dream homelab tech tree, so I’ll start there. My existing prototype calls for a Podman network, two Podman volumes, and a Caddyfile I’m mounting as a volume from the file system. I threw together caddy.container based on my script, but only the supporting network and volumes showed up. Systemd did pick up “mysleep.container,” an example from Red Hat, so I knew the mechanism itself worked.

As it turned out, caddy.container was missing a capital letter. I found the problem by commenting out lines, reloading, and running `systemctl --user list-unit-files` to see when the unit finally appeared. Likewise, my Caddyfile volume had a file path bug to squash.
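
For reference, my caddy.container ended up shaped roughly like this (paths and ports reconstructed from memory, so treat it as a sketch). Quadlet keys are case-sensitive, and a mis-capitalized key makes the generator reject the whole file, so the unit simply never appears:

# ~/.config/containers/systemd/caddy.container
[Container]
Image=docker.io/library/caddy:latest
Network=caddy.network
Volume=caddy-data.volume:/data
Volume=caddy-config.volume:/config
Volume=%h/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
PublishPort=8000:80
PublishPort=44300:443

[Install]
WantedBy=default.target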

Vaultwarden

Good, that’s started and should be stable. On to Vaultwarden. I updated the NFS unit files on both ButtonMash and Joystick to copy over relevant files, but Joystick’s SELinux didn’t like my user’s fingerprints (owner/group/SELinux context) on the NFS definitions. I cleaned those up with a series of cp and mv commands under sudo, and then I could enable the automounts.
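
The cleanup amounted to re-creating the files as root so they carried root’s ownership and the directory’s default SELinux context; roughly the following, with placeholder paths and unit names:

# copy the unit files in as root instead of as my user
sudo cp ~/staging/*.mount ~/staging/*.automount /etc/systemd/system/
# reset SELinux contexts to the target directory’s default
sudo restorecon -v /etc/systemd/system/*.mount /etc/systemd/system/*.automount
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-library.automount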

Vaultwarden came up after simple enough debugging; the challenge was in accessing it. I toyed with DNS overrides on Cerberus/OPNsense (the hardware firewall) until Caddy returned a test message from <domain.lan>:<port#>.

Everything

My next battle was with Joystick’s firewall: I had forgotten to forward TCP traffic from ports 80 and 443 to 8000 and 44300, respectively. Back on Cerberus, I had to actually figure out the alias system and use that. Firefox needed Caddy’s root certificate. Bouncing back to the network Quadlet, I configured it according to another tutorial doing something very similar to what I want [2], minus its external DNS. A final adjustment to my Caddyfile corrected Vaultwarden’s fully qualified domain name, and I was in – padlock and everything.
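
On the Joystick side, the forwarding piece boils down to a few firewalld commands; a sketch assuming the default zone:

sudo firewall-cmd --permanent --add-forward-port=port=80:proto=tcp:toport=8000
sudo firewall-cmd --permanent --add-forward-port=port=443:proto=tcp:toport=44300
sudo firewall-cmd --permanent --add-service=http --add-service=https
sudo firewall-cmd --reload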

Takeaways

I come out of this project with an intuition for managing Systemd files – especially Quadlets. The Quadlet workflow turns Podman container recipes into Systemd units, and a working container will keep working indefinitely – barring bad updates. I would still recommend prototyping with scripts when stuck, though. When a Quadlet fails, there is no obvious error message to look up – the unit just fails to show up.

Even though Joystick is still new, a lot of my time on it this week went to diagnosing my own sloppiness. Reboots helped when I got stuck, and thanks to Quadlet, I won’t have to worry about spaghetti scripts like the ones I originally organized ButtonMash with, which never let me stabilize the victory I re-achieved today.

Final Question

Nextcloud is going to be very similar; I will make it a pod along with MariaDB and Redis containers. But I am still missing one piece: NFS. How do I do that?

I look forward to your answers below or on my Socials.

Works Cited

[1] mo8it, “Quadlet: Running Podman containers under systemd,” mo8it.com, Jan. 2-Feb. 19, 2024. [Online]. Available: https://mo8it.com/blog/quadlet/. [Accessed: Sept. 13, 2024].

[2] G. Coletto, “How to install multi-container applications with Podman quadlets,” giacomo.coletto.io, May 25, 2024. [Online]. Available: https://giacomo.coletto.io/blog/podman-quadlets/. [Accessed: Sept. 13, 2024].

Responsibility of the Network

Good Morning from my Robotics Lab! This is Shadow_8472, and today I have a doozy of a network week to cover. Let’s get started!

Meet the Computers

  • Cerberus – the main star today. It is our new hardware firewall running OPNsense
  • Red Router – a TP-Link gaming Wi-Fi router fancied up beyond what it should have been
  • LAB – my homelab with a few servers
  • LAN – everything else connected via Wi-Fi or Ethernet

Network Implosion

It all started with revisiting a .lan domain. Cerberus’ extensive webUI left me with the hunch that I’d need exactly one machine in charge of DHCP, assigning dynamic IP addresses. Red Router’s “operation mode” setting for working as an access point was hidden in literally the last menu I had left to click through.

It was afternoon, and no one would be using the network for the 30 seconds to 5 minutes I estimated switching the LAB and LAN Ethernet cables from Red Router over to Cerberus would take. Nope. No traffic made it through. DHCP misconfiguration? Cue a slow back and forth: bopping a setting from a workstation, then trying a different physical configuration. Eventually, Cerberus ended up on my desk with Red Router talking directly to our ISP’s gateway/modem.

The order of events is a bit fuzzy from here, but when the Wi-Fi stopped working, I was left without good access to online answers. I worked the problem into the night. Around 1:00 AM, I knew I had done too much for a clean reversion. For two hours, I worked in loops hoping to spot something different. So much waiting! Cerberus would behave on my desk, then fail when redeployed. Worse: when I re-connected all the wires to Red Router, it started dancing between 192.168.0.1 and 192.168.1.1 every 45 seconds or so. I configured its IP manually, but gave up on Internet at 3:00 AM, preparing to concede to my father’s earlier suggestion of hiring a professional to untangle my mess.

The Next Day

Network loop. In my brain fog, I had left Red Router talking to itself over a cable left over from removing Cerberus for the night. With the house to myself for several hours, I alternated between bursts of intense diagnostics and mental processing. Somewhere in there, I rebooted the ISP’s modem.

Around noon, I realized the extra ports on Cerberus aren’t switched together the way Red Router’s are by default; each follows its own firewall rules. That explained its behavior the previous day when I tried a computer from LAB without anything in between. At around 3 PM, I got a Discord notification while mentally checked out, letting me know the network was back up.

6 PM on the second day: I situated my workstation on Cerberus’ LAN port and a Raspberry Pi on one I named LAN2. I’d previously copied firewall rules from LAN to LAN2 and LAB, but to no obvious effect – until I had the two computers ping each other. LAN2’s ping failed as expected, but LAN’s was returned. I corrected the interfaces’ rules to allow them to reach out, and that was it.

Fallout

Without going into too much detail, a subnet shift like this is a major undertaking for networks with static-IP servers on them. Not only do the network and computers need to be adjusted, but all traces of the old subnet need to be corrected. NFS clients needed to be told where the server now lives, and the NFS server’s shares needed to be updated about which IPs were allowed to mount them. I also still have Bitwarden clients to update at my leisure.

Takeaway

OPNsense is a heavyweight in terms of configuration options. It has a learning curve compared to products simple enough for Grandma and Grandpa to use. I may have solved my own emergency, but it may be wise to have a professional look it over anyway to grade my work and give me some pointers on rootless Podman mounting NFS shares, or other long-term places where I’ve gotten stuck.

Final Question

I admit: networking is more fun than I gave it credit for back when I knew basically nothing. I still find it a bit taxing to reach around my mental map of the network, but I manage. How do you visualize networks?

Joystick Server Reinstall

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am rebuilding Joystick, ButtonMash’s twin Rocky Linux server. Let’s get started!

Project Goals

This is a revival of my photo scanning project, first and foremost. I need to get it working, and while I will afford myself some time to experiment with doing it right, it’s more important that I get it done. Once I have it working better than ButtonMash, I can move my production flag over to Joystick and remodel ButtonMash “properly.”

And by “properly,” I mean rootless Podman access to network attached storage. I have explored NFS (Network File System) extensively, but Podman does things with user namespaces that NFS doesn’t support. I may need to be open to an alternative, or to using root. SSHFS lays a file system over SSH, but I haven’t evaluated whether it fits this role yet.

Installation

I did my usual stuff when installing Linux. I remembered ahead of time that Joystick doesn’t work with Ventoy, so I flashed a USB stick (Balena Etcher) large enough for the full 10 GB image I downloaded. I did a clean install over its previous Rocky Linux installation and adjusted Joystick’s boot order so Rocky 9.4 loads by default. The system dual-boots with Linux Mint, where I did an apt update/upgrade.

Booted into Rocky, I enabled the Cockpit web interface for remote management, a feature I had selected during installation. I created a limited user for running Podman, enabled lingering on it, got Cockpit logged into it as a “remote host,” and disabled normal password interactions. I pulled Podman images for Caddy, Nextcloud, MariaDB, Redis, BusyBox, and Vaultwarden, and gave each a directory with a start.sh and a mountpoint. I excluded Pi-Hole and Unbound from my planned lineup because those functions are now handled by our router and would have required messing with special ports.
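
The limited-user setup condenses to a few commands; a sketch with a placeholder username:

# create the user and let its services run without an active login
sudo useradd pods
sudo loginctl enable-linger pods

# then, logged in as that user, pull images – for example:
podman pull docker.io/library/caddy:latest
podman pull docker.io/vaultwarden/server:latest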

Troubleshooting

NFS fought me hard, though. It didn’t help that Cockpit’s terminal kept glitching out on me, behavior I had noticed on ButtonMash’s Cockpit as well. Rebooting and refreshing didn’t do anything, but I found a workaround by navigating my browser away and back. I eventually got to thinking about what other component might be at fault, and I came up with two suspects: a bad version of Firefox, or Wayland. I ran Firefox over SSH on a machine still running Xorg; it was Wayland.

And at this point, my workstation had a record bad meltdown, which ate my remaining blog time. Be sure to read about it next week!

Takeaway

Projects get interrupted. Something comes up, and grand plans wait in the background. I’m tired of my server projects doing just that, but it’s happened again.

Final Question

My longest enemy remains: Rootless Podman vs. Network Storage. How do I do it? I look forward to hearing from you in the comments below or on my Socials!

Learning OPNsense: DNS Adblocking

Good Morning from my Robotics Lab! This is Shadow_8472 with another short update on my progress with my OPNsense firewall. Let’s get started.

I have done a substantial amount of work with DNS on my home network, but as noted in a previous post, it’s sub-optimal to run your Domain Name System exclusively from a power-hungry desktop when you sometimes have to ration electricity from your Uninterruptible Power Supply (UPS). I like PiHole’s web interface with all its fancy, moving graphs and charts, but our new firewall, Cerberus, can replicate the functionality I need.

I primarily use PiHole for DNS ad blocking, but I also explicitly blacklist a few URLs while hosting local DNS records for servers on my Local Area Network (LAN), though the latter is a work in progress.

OPNsense→Services→Unbound DNS→Blocklist→Type of DNSBL offers a drop-down checklist of block lists. This is in contrast to PiHole→Adlists, which lets you import lists from arbitrary sources (edit the day after posting: OPNsense Unbound does have a URLs of Blocklists field). Either way, it should go without saying that sites and ads only need to be blocked once; feeding your DNS service a bunch of redundant lists will only slow it down. From what I remember of installing OISD Big on PiHole, they aggregate several of these lists and remove the duplicates. PiHole also picked up a list named StevenBlack with the comment, “Migrated from /etc/pihole/adlists.list.” It sounds like a system default, but in any case, I found it had entries not on OISD Big. OPNsense Unbound has an option for it, so it got migrated.

Migrating singled-out blacklist items was as simple as adding each entry to a comma-separated list (where PiHole wants separate entries). I’m going to wait on migrating my LAN domain names though. I believe I found the place to do it, but ButtonMash isn’t running Caddy to recognize subdomain requests right now.

One last step was to get into the red gaming router we’ve been using and point its DNS at Cerberus the Firewall. I then pointed its secondary DNS at ButtonMash.

To summarize, we should have the exact same protection as before on a smaller battery footprint and within the firewall’s default attack surface to boot!

Encrypted DNS

One of my eventual goals is to run my own recursive DNS server, which seeks out a domain’s authoritative DNS record whenever it doesn’t have one cached. This will increase privacy, but I haven’t figured it out at a production grade yet. Instead, I looked up the best free, privacy-respecting DNS, and so far as I can tell, that’s Cloudflare at 1.1.1.1.

From OPNsense, it wasn’t much more trouble to encrypt using DNS over TLS. I would prefer DNS over HTTPS, which does the same thing but camouflages DNS requests as normal web traffic. For now, I’m assuming Unbound can’t do that and settling for TLS. Please tell me if I’m wrong.
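
For the curious, the raw unbound.conf equivalent of what the GUI sets up is roughly the following. OPNsense generates its own configuration, so this is only the idea, not a file to copy:

server:
  tls-cert-bundle: /etc/ssl/cert.pem

forward-zone:
  name: "."
  forward-tls-upstream: yes
  forward-addr: 1.1.1.1@853#cloudflare-dns.com
  forward-addr: 1.0.0.1@853#cloudflare-dns.com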

Takeaway

It’s slow going, but I am moving into Cerberus. While looking around, I found a module for NUT (Network UPS Tools), a utility for shutting down computers gracefully as their UPS runs down. I wanted to get it working, and for a moment after a reboot I did, but then it quit for reasons beyond me, aside from the BSD driver not agreeing well with CyberPower UPS systems; I’m at a loss. At this point, I am thinking of installing a small Linux box to do the job at a future date, even though that will be yet another load on the UPS.

Final Question

From above: Do you know of a way for OPNsense’s Unbound module to run DNS over HTTPS? I look forward to hearing from you in the comments below or on my Socials!

Hardware Firewall Up!

Good Morning from my Robotics Lab! This is Shadow_8472 with a side project of the week. Let’s get started!

I left off last week having spent four separate nights trying to get the hardware firewall online in a production context. When I tested it between my upstairs workstation and its OpenWRT+Raspberry Pi router/Wi-Fi adapter, it worked fine. Put it back in production between our ISP’s gateway and our existing gaming router, and no one got Internet.

The solution: pull the gateway’s plug for 30 seconds and let it reboot. Internet solved.

Longer explanation: my ISP box is in some sort of bridge mode, where it’s supposed to pass the external IP address to a single device (usually a router, but it can be a normal computer). In this mode, it didn’t like that device getting swapped out – possibly as a security measure. It still reserves the address 10.0.0.1 for itself throughout the network, a behavior I took to be half-bridge mode, but my surprise this week while fiddling with settings was that it did in fact pass on the external address.

Takeaway

I expected the struggle to continue a lot longer, but I actually figured it out pretty quickly once I started researching the symptoms online. I explored the settings a bit more. I’d like to move the functions of PiHole over, but the web interface has a drop-down menu for block lists instead of a text box. I’ll look into it another time. Instead, I spent a good chunk of the week weeding grass and getting a sunburn.

Final Question

Have you ever found you were rebooting the wrong thing? I look forward to hearing from you in the comments below or on my Socials!

Unboxing: Hardware Firewall (Protectli Vault)

Good Morning from my Robotics Lab! This is Shadow_8472 and today I have on my desk between my keyboard and monitors a new Protectli Vault running OPNsense. Let’s get started.

After at least a couple years tentatively researching hardware firewalls, it’s here. Let me tell you: it’s both a relief and a bit of pressure. I’m glad I’m no longer starting from scratch over and over again, but now I feel time pressure to deploy it despite my parents’ assurance that it’s much better to go at a responsible pace. And unless you’re a full time network specialist, that pace is longer than a week.

My Current Network and Its Weaknesses

At present, my home network starts with a box owned and controlled by my service provider. This gateway feeds into a gaming router before going out to a couple of switches and Wi-Fi. One of my desktops connects through OpenWRT running on a Raspberry Pi 4. ButtonMash, my home server, runs Podman containers for Vaultwarden (password vault storage) and PiHole (DNS ad blocking). We have a Network Attached Storage unit by the hostname of GoldenOakLibry. Everything minus a couple of workstations has battery backup in case the lights go out.

And when the lights do go out, the first big flaw shows. While the network closet may last several hours, power-hungry ButtonMash and GoldenOakLibry chew through their shared battery in around half an hour – and that was before I added ButtonMash’s twin, Joystick, as a development platform. When ButtonMash goes down, the network loses DNS, so we can’t resolve URLs.

Additionally, I’d like to move to a non-default set of internal IP addresses, like 10.59.102.X instead of 10.0.0.X or 192.168.0.X. While computers getting automatic IPs over DHCP will essentially take care of themselves, I have invested quite a bit of time into static IPs for NFS (Network File System), and when I move GoldenOakLibry’s IP, I’ll need to adjust the automounts for every system accessing it, and that’s just a pain. I want to learn how a home domain works.
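
To illustrate the pain: the server’s address is baked into every client’s mount unit, and the server’s exports pin the allowed clients, so both ends need edits. A sketch with made-up addresses and paths:

# client side: /etc/systemd/system/mnt-library.mount (one per machine)
[Mount]
What=192.168.0.50:/volume1/library
Where=/mnt/library
Type=nfs

# server side: /etc/exports – the allowed subnet must move too
/volume1/library 192.168.0.0/24(rw)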

I also have a number of network-related projects I’ve done research for, but burned out on before solving. From memory, here’s a checklist of partial/incomplete/need-to-redo projects:

  • Feline Observation Pi (First prototype tested, needs overhaul)
  • Website for family photo archive (Needs hardware firewall, rootless Podman/NFS, booru/wiki)
  • Nextcloud (Early prototype successful, needs rootless Podman/NFS before production)
  • Beowulf cluster (Early research)
  • Rootless Podman/NFS (Heard from a developer and solution may not exist [yet])
  • UPS battery monitoring/shutdown before power failure (Research phase)
  • Caddy (First prototype in production, needs overhaul)
  • Unbound (Incomplete prototype)
  • Reverse VPN [mobile traffic] (Need Hardware Firewall)
  • Podman systemctl --user (In production, but I cannot reproduce at will)
  • Domain/Domain Controller (Background research incomplete)

Keep in mind that the notes on each item are just the direction I’m leaning at the moment and don’t reflect the new hardware. Replacing GoldenOakLibry with a server beefy enough to run Podman would solve my current need for rootless Podman/NFS. I may find a replacement for Caddy that also works as a Domain Controller. Does Caddy even do that? Let me check… Inconclusive; probably not. I don’t know enough about what to look for in a Domain Controller besides the name. Most of my research time went to Demilitarized Zones.

Demilitarized Zone and Roadmapping

Originally, I had a goal of deploying this new firewall/router configured with a demilitarized zone network structure. With hardware in hand, I learned a lot! But as I learned, I realized I needed to learn that much more to do the job right. A DMZ is basically a low-security area of your network for serving stuff over an untrusted network (usually the wide-open Internet) while protecting your Local Area Network. Ideally, your LAN would have a separate physical router in case the one servicing the DMZ is ever compromised, but a homelab should be a small enough target that branching off a single hardened router is fine. My trouble is that I can’t fully tell where to put what.

I already know I want to move PiHole, Unbound, and similar projects related to internet traffic (plus anything else I want to last a bit longer into a power outage) onto the new router. OPNsense is a distribution of BSD, not Linux, so I expect I will need to look into a Linux virtual machine if BSD-based containers aren’t available. The gaming router I’m using now will still be our Wi-Fi access point, but I’d prefer to retire it from DHCP duty.

ButtonMash and Joystick are my enigma. I had plans of clustering them, but I may need one in the DMZ and one on the LAN. GoldenOakLibry belongs on the LAN so far as I can tell – as do all workstations.

Takeaway

I’ll be giving it more thought over another week. I went ahead and hooked it up in place, but it didn’t work despite having previously worked between my upstairs workstation and its rPi router. I’ve reverted the setup to how it was before, and I’ll need to take a closer look and do some further testing.

Final Question

What was the last piece of tech you unboxed?

Study Month: Week Four

Good Morning from my Robotics Lab! This is Shadow_8472 with a small update on what I learned this week. Let’s get started!

Not much progress happened this week. I won’t be clustering this month, but I did study the High Availability Add-On. It includes a package called Pacemaker, but I don’t know if it’s independent or a common library or what. Furthermore: I don’t know whether a high-availability cluster is even right for me. I circled back around to Beowulf clusters after over a year of struggle, but couldn’t conclusively tell whether Red Hat’s implementation is one or not. Best I can figure is that you “Beowulf” once you have your task working on the first node and need to expand.

With that said, I did have some trouble along the way. Joystick, my new Rocky Linux 9 installation, won’t boot – the computer keeps going to Mint no matter what I say in the BIOS. I tried running update-grub on Mint, but that just tidied away Debian with no new Rocky option. I vaguely remember something like this happening before, but I could be misapplying a memory. I want to try a GRUB disk; if that doesn’t solve it, I may need to reinstall. I still have the USB stick.

On a tangentially related note, I checked in on ButtonMash when Joystick didn’t boot and found it had gone without updates. It also thought it was July 2018 and that everyone’s SSL certificates were bad until I had completely fixed the time. I suspect the motherboard’s clock battery may be failing.

In conclusion for this month, I’ve been burned out. It’s been nice not having the near-constant stress of finishing a project or taking a walk of shame to split it off into [yet] another week, but at the same time, I didn’t get much done at all.

I still have a goal of combining multiple computers into a single, more powerful one behind a unified Cockpit experience. If I think about it though, I might be after some sort of Podman clustering technique, and I’ll still be no better off at getting Podman working over NFS than when I started!

Rocky Server Stack Deep Dive: 2023 Part 5

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am learning more about Podman Quadlets for my homelab. Let’s get started!

Systemd and Quadlets

From my incomplete research going into this topic, I already know Quadlet is a system for efficiently integrating Podman containers with Systemd. It was merged into Podman v4.4, and I had a small pain of a time finding a distribution with both that and legacy BIOS support, along with a list of other requirements.

But what is Systemd? In short: Systemd is the init process (a process that manages other processes) used by most Linux distributions that aren’t optimizing for a low RAM or storage footprint. As it turns out, I’ve already had minimal exposure to it while writing unit files for NFS [auto]mounts and a static IP address on Debian. Systemd in turn bases units on these unit files to manage the operating system.

While Systemd unit files defining Podman containers can be written by hand, Quadlet automates their creation based on simpler unit files of its own: .container, .network, .volume, and .kube. The first three look similar enough to concepts I’m familiar with that I figure I could hack an example into doing what I need.

But I’m interested in pods. With .pod unit files only a controversial feature request at best, that leaves me to explore .kube files, which run Kubernetes YAML files. I know nothing about writing Kubernetes YAML files from scratch, and I refuse to cram for them Thanksgiving week.

My project died here for a few hours. One Systemd tutorial brought up Syncthing in an example, and I spent a while on a tangent looking at that, but it too is too large to cram for this week. I unenthusiastically browsed back to Kubernetes, and found:

podman generate kube

Looks like I just might get away with adapting my scripts this week after all. With this in mind, I copied over my files from my laptop’s Debian drive to its new-last-week Rocky 9 installation. Focusing on Nextcloud, I cleared out my dead-end work with Fuse, abstracted volumes, and other junk before realizing BusyBox was likely a more suitable testing ground.
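
Usage turned out to be a one-liner against an existing pod, with `podman kube play` to consume the result later:

podman generate kube busyBoxPod > busyBoxPod.yml
podman kube play busyBoxPod.yml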

My First Kubernetes File

I came up with the following bash script for such a pod:

# tear down any previous instance of the pod
podman pod stop busyBoxPod
podman pod rm busyBoxPod
# recreate the pod, then a container inside it
podman pod create busyBoxPod
podman create \
--pod busyBoxPod \
--name BusyBox \
--volume fastvolume:/root/disk \
-it \
--rm \
busybox

And here is the Kubernetes YAML it generated:

# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-4.6.1
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2023-11-23T01:29:45Z"
  labels:
    app: busyBoxPod
  name: busyBoxPod
spec:
  containers:
  - image: docker.io/library/busybox:latest
    name: BusyBox
    stdin: true
    tty: true
    volumeMounts:
    - mountPath: /root/disk
      name: fastvolume-pvc
  volumes:
  - name: fastvolume-pvc
    persistentVolumeClaim:
      claimName: fastvolume

I saved this output as busyBoxPod.yml and returned to Nextcloud.

Nextcloud put up a small tantrum getting re-updated for Podman 4.6.1. I had to look up how to use Podman Secrets and apply :z to volumes to satisfy SELinux. Redis, however, refused to accept a password from Podman Secrets, so I rolled back that change; the pod should insulate it anyway. I got it to the point where it needed a domain name.
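
The Podman Secrets piece looked roughly like this; the container, pod, and secret names here are placeholders, and I’m assuming the stock Nextcloud image’s MYSQL_PASSWORD variable:

# store the password once, outside any script
printf '%s' 'changeme' | podman secret create nextcloud-db-password -
# hand it to the container as an environment variable at creation time
podman create --name nextcloud --pod cloudPod \
--secret source=nextcloud-db-password,type=env,target=MYSQL_PASSWORD \
--volume nextcloud-data:/var/www/html:z \
docker.io/library/nextcloud:latest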

Branching out to bring up Pi-Hole and Caddy, I learned that the default Unbound configuration for the container I used only forwards DNS requests to Cloudflare. I’ll want to fix this later. I used firewall-cmd to forward the privileged HTTP, HTTPS, and DNS ports to unprivileged ports rootless containers can bind.
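
A sketch of that forwarding; the high port numbers were mine to choose, so take these as examples:

sudo firewall-cmd --permanent --add-forward-port=port=53:proto=udp:toport=5353
sudo firewall-cmd --permanent --add-forward-port=port=53:proto=tcp:toport=5353
sudo firewall-cmd --permanent --add-forward-port=port=80:proto=tcp:toport=8080
sudo firewall-cmd --permanent --add-forward-port=port=443:proto=tcp:toport=44300
sudo firewall-cmd --reload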

Takeaway

UNCLE! More and more of my time supposedly working on the server goes to procrastinating and stressing over minutiae, or blankly staring at my screens when I muster enough focus to ignore distractions. There’s no way around it; I’m officially burned out on this project. I’ll maybe come back to it after the new year. I really wanted to get my .kube files working for at least Pi-Hole and Caddy, but it’s going to be a hard pass at the moment.

Final Question

I’m considering covering a free/open source game or few over December. What are your recommendations?

I look forward to hearing from you on my Socials!

I Have a Cloud!!!

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am celebrating my Project of the Year. I have Nextcloud installed! Let’s get started!

My NextCloud Experience in Brief

This week, I achieved a major milestone for my homelab – one I’ve been poking since at least early March of this year when I noted, “Nextcloud has been a wish list item since I gave up using Google’s ecosystem,” [1].

As I learned more about the tech I’m working with, I added specifications for storing scanned pictures on high-capacity but slow hard disks while making smaller files available at solid-state speed – only to learn later that rootless Podman is incompatible with NFS. I studied “Docker Secrets” to learn best practices for password management – only to move the project to an older version of Podman without that feature. One innovation to survive this torrent of course corrections is running the containers in a pod, but even that got shifted around in the switch from Rocky 8 to Debian.

Two major trials kept me occupied for a lot of this time. The first involved getting the containers to talk to each other, which they weren’t willing to do over localhost, but they were when given a mostly equivalent IPv4 loopback address.

The second was the apparent absence of a database despite multiple attempts to debug my startup script. Nextcloud was talking to MariaDB, but it was like MariaDB didn’t have the database specified in the command to create its container. For this, I created an auxiliary MariaDB container in client mode (Thank-you-dev-team-for-this-feature!) and saw enough to make me think there was none. I wasn’t sure though.
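
The client-mode trick is worth writing down; a sketch with placeholder names, joining the pod so the client reaches the server over the pod’s loopback:

podman run --rm -it --pod cloudPod docker.io/library/mariadb:latest \
mariadb -h 127.0.0.1 -u databaseUser -p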

One Final Piece

# remove MariaDB’s “dirty” volume
podman volume rm MariaDBVolume
# reset everything
./resetVolumes
./servicestart

There was no huge push this week. I came across an issue on GitHub [2] wondering why there was no database being created. By starting a bash shell within the MariaDB container, I found there were some files from some long-forgotten attempt at starting a database. All I had to do was completely reset the Podman volume instead of pruning empty volumes as I had been doing.

Future Nextcloud Work

Now that I have the scripts to produce fresh instances, I still have a few work items I want to keep in mind.

I expect to wipe this one clean and create a proper admin account separate from my main username, a practice I want to get better at when setting up services.

Adjacent to that, I’ll want access on my tablet, which will entail getting the Bitwarden client to cooperate with my home server.

I still want my data on GoldenOakLibry. I’m hoping I can create a virtual disk or two to satisfy Podman, which RedLaptop can in turn relay over NFS to network storage.

Final Question

Have you taken the time to set up your own personal cloud? I look forward to hearing about your setup in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, “I Studied Podman Volumes,” letsbuildroboticswithshadow8472.com, March 6, 2023. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2023/03/06/i-studied-podman-volumes/. [Accessed: Oct. 2, 2023].

[2] rosshettel, thaJeztah, et al., “MYSQL_DATABASE does not create database,” github.com, July 9, 2016. [Online]. Available: https://github.com/MariaDB/mariadb-docker/issues/68. [Accessed: Oct. 2, 2023].

My Nextcloud Sees its Database, but…

Good Morning from my Robotics Lab! This is Shadow_8472, and today I’m taking another shot at Nextcloud. Let’s get started!

It almost needs no introduction at this point. I’ve lost track of how long I’ve been after a Nextcloud instance – and even of the number of times I’ve complained about the same. I’ve demoed it using an integrated database, and I’ve since gotten its container into a Podman pod. My task today is getting the paired MariaDB container to behave.

Let’s start with my first error. Nextcloud is starting for the first time and it’s time to make an admin account.

Failed to connect to the database: An exception occurred in the driver: SQLSTATE[HY000] [2002] No such file or directory

The container has been launched with flags providing environment variables that point Nextcloud at the database and specify the username/password to get in. I found someone with the exact same problem four years ago on Stack Overflow, and if I understand user2880156’s explanation, “localhost” is not the same as spelling out a loopback IP address like 127.0.0.1 [1]. While some networked applications don’t care which form you use when server and client share a computer [namespace?], Nextcloud and MariaDB are not such a pair. It has something to do with types of network sockets, but I don’t understand it beyond that.
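
The fix itself was a one-word change to the container’s environment; a sketch assuming the stock image’s MYSQL_HOST variable (my actual flags may have differed):

# before: MySQL clients treat “localhost” as a request for a Unix socket
--env MYSQL_HOST=localhost
# after: a numeric address forces TCP over the pod’s shared loopback
--env MYSQL_HOST=127.0.0.1

What matters to me now is that the swap got me to my next error: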

MySQL username and/or password not valid You need to enter details of an existing account.

I’ve checked my environment variables more times than I can count. The required usernames and passwords were copy-pasted between the respective environment variables. The best lead I have is that the container may be bugged. People online have reported success with manually creating the full-permissions, non-root account the database container is supposed to create on first run.

This insight led me to seek a manual fix from within the pod – specifically from a MariaDB command line, as mentioned in the container’s documentation. After trying several ways to modify a terminal command to run with Cockpit, I gave up, re-added a couple of flags I had been avoiding, and got the interface I was after rendered to both bash and Cockpit. I additionally split the command back out into a separate script.

The MariaDB command line is unlike any I’ve used before. My first impressions are that commands are in all caps and variables are lower case. My final achievement before running out of time was to extract a list of user names (root, healthCheck, and databaseUser), where they’ve logged in from (variations on locations within the pod), and a partial hexadecimal output of their passwords.
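
If memory serves, the query behind that list was something along these lines (MariaDB keeps its account data, including password hashes, in the mysql.user table):

SELECT user, host, password FROM mysql.user;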

Takeaway

Solving localhost vs. 127.0.0.1 has given me insight into Cockpit’s curious behavior when remoting into itself with localhost. Command line access to the database from within the pod is also huge.

Final Question

I’m not taking bets, and I’m afraid to ask, but what additional roadblocks should I expect?

Works Cited

[1] akrea et al., “nextcloud and mariadb (both) on docker: SQLSTATE[HY000] [2002] No such file or directory,” stackoverflow.com, Mar. 10, 2019-Mar. 22, 2023. [Online]. Available: https://stackoverflow.com/questions/55088828/nextcloud-and-mariadb-both-on-docker-sqlstatehy000-2002-no-such-file-or-d. [Accessed: Sept. 4, 2023].