Roadmapping Switching to Linux Phones

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am dusting off my studies of Android in response to happenings around the family. Let’s get started.

Phones and Privacy

This round started with my father’s phone screen burning out. He got it repaired, but my sister’s phone screen was cracked, and I ended up giving her my otherwise unused one of the same model.

Consumer privacy has been an increasingly important question as computers integrate more tightly with our everyday lives. We now live in a time and place where most everyone old enough to read is coerced by convenience into loading themselves down with every tracker known to Man unless restrained by applicable law. And this issue will only continue to grow as technologies such as consumer VR and clinically installable neural interfaces mature and facilitate the collection of ever more intimate personal data. Legislation isn’t keeping up, but a minority of people unsatisfied with this status quo are developing open source alternatives that let end-users mitigate abuse of technology by either obfuscating their data or preventing its collection entirely.

My goal is to one day have the family on Linux phones. OK, what are the main obstacles? From previous research, cell network compatibility has been a big one: support from our current carrier drops to nil the moment I read off my prototype PinePhone’s IMEI number (an individual cell modem identifier). Additional general research turned up that banking apps have a tendency to really hate working on Android devices without Google’s seal of approval.

Phone Carriers

Most major cell companies would prefer you buy a phone on credit that’s been locked to their network for a couple years. Per contract, you don’t get admin privileges until it’s paid off and unlocked – assuming you still have a unit at the end. Even if a service has a bring-your-own-phone program, it may be limited to a short list of models, even if your unapproved unit is compatible with the wireless technology[ies] their network is built on (which I believe was the case last time we switched carriers). Even then, a phone may be only partially compatible, supporting some combination of calls, texting, data, or other wireless functionalities.

Complicating the matter for me specifically, I cannot even comfortably look at pictures where a part of the screen has been sacrificed for a camera “island” per the modern trend. I need a phone with such goodies in the bezel where they belong. After doing some research, I narrowed my choices down to the Librem 5 and the PinePhone Pro. General research on each for this week turned up year-old criticism about refunds for the American-made Librem 5 to weigh against the PinePhone being assembled in China like most other personal devices. I found a carrier compatibility chart for each and made a better informed recommendation to my parents for when we switch carriers.

Android On Linux

One high-priority feature my parents are after in their phones is mobile deposit. That’s not happening on a week’s notice, even if I already had a phone to try it on. From a software point of view though, a Linux desktop is essentially identical to a Linux phone minus a cellular modem, SIM card, and miscellaneous other peripherals.

Many tools exist specifically to run Android apps on desktop. This week, I explored running the BlissOS custom ROM on QEMU/KVM. QEMU and KVM took a while to straighten out, so I might be mistaken here, but QEMU is an emulator/hypervisor, and KVM is a Linux kernel module for virtualization that QEMU can optionally use for direct access to hardware and improved performance. Along the way, I was recommended in the direction of using an AMD graphics card instead of Nvidia. That meant using Derpy Chips, a computer from before Intel came up with the special circuitry needed for this kind of virtualization…
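For reference, here is a minimal sketch of the kind of QEMU invocation I mean – the ISO name, memory, and CPU counts are placeholders, not my exact command:

# boot an Android-x86-style ISO with KVM acceleration (file name assumed)
qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \
    -cdrom BlissOS.iso -boot d -vga virtio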

…Well, it looks like I’ve been carrying some bad info around, because this virtualization circuitry has been around for a lot longer than I thought! I navigated Derpy’s BIOS (yeah, I know I’ve made a stink about these firmwares being UEFI and not BIOS, but they’re labeled BIOS despite being UEFI) to turn on virtualization, and I got a proof of concept going. I tried using the command line installer, but couldn’t figure out how to work anything related to the hard drive. I could consistently get to –I assume– a live session run directly off the disk image. I successfully accessed the Internet from within the VM, but installing apps or even making an entry on the file system remains an unsolved issue. Nevertheless: this is major progress.
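If you want to check a machine before digging through firmware menus, a couple of stock commands are enough – any output at all means the hardware and kernel support are there:

# Intel VT-x shows up as the vmx flag; AMD-V as svm
grep -E -c 'vmx|svm' /proc/cpuinfo
# confirm the KVM kernel modules loaded after enabling virtualization in firmware
lsmod | grep kvm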

Takeaway

Android at its core was made open source by Google to catch up to the iPhone. Now that it’s ahead, Google has had seasons of moving as much of the definitive experience as possible out of the public eye, but that hasn’t kept people from making custom ROMs. While my next major goal in this project is to install an app, another blockade on my developing road map is SafetyNet. When I get there, I’ll want to look up Magisk and Shamiko, two names I came across pertaining to the Android custom ROM community.

I’ll also note that I still have additional options to try, like something based on containerization. While writing this up, I took a second look at WayDroid, which I had dismissed assuming it wouldn’t work in an X11 environment, and it just might.

Final Question

Do you have any experience with Linux phones? I would be most interested in hearing from you in the comments below or on my Socials!

Server Rebrainstorming

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am rebrainstorming how I approach my home server. Let’s get started!

Disclaimer: as I write this on Sunday evening, I’m in a power outage, and I only have 44 minutes of DNS service left on ButtonMash’s UPS. GoldenOakLibry is shut down, my progress is hung up on DerpyChips without a UPS, and I’m writing as fast as I can on my tablet, which is not cooperating well.

Over the life of this blog, my one elusive goal has been rootless Podman with mounted NFS file storage. NFS doesn’t understand namespaces, a trick Podman uses to keep containers separate from each other while still belonging to the host’s user. This project has proven resistant to my weekly goal-oriented format, so I want to spend this month just writing about my studies. No Takeaways, no Final Questions. If I get something done, awesome. If not, oh well. On weeks without power outages, I still want to aim for 300 to 1,000 words.

Fish/SSHFS

These two protocols may be useful as an alternative to NFS, though they spend encryption needlessly when employed over a trusted local network. That said, you can use a weak algorithm to accommodate old CPUs. From memory, these protocols extend SSH with a hidden file and require no special client.
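As a sketch of how little client setup is involved, mounting a remote directory with SSHFS is a one-liner – host and paths here are placeholders:

# mount a remote directory over SSH (host and paths assumed)
sshfs user@goldenoaklibry:/srv/share /mnt/share
# unmount when finished
fusermount -u /mnt/share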

Red Hat Cluster Suite

I eventually want to have multiple servers I can manage as one bigger one. This angle will only work if I can squeeze a node onto GoldenOakLibry to access the file system directly.

Rocky Server Stack Deep Dive: 2023 Part 3.1

Good Morning from my Robotics Lab! This is Shadow_8472 with a side project on my home server. Let’s get started!

Not My Problem

In the greater context of setting up Pi-Hole (a network ad blocker) on my home server, ButtonMash, I’ve learned a thing or two about how the Domain Name System (DNS) works and what happens when I break it locally. Normally, when a device connects to a network, a DHCP server (often on the router) advertises a DNS server. When resolving a URL, this DNS server either has the answer cached or asks another DNS server until a match is found or the system gives up.

My Internet Service Provider (ISP) has been having some trouble with its site (including our main e-mail). Not once, but twice my family asked if I was doing something with the Internet. Both times, I used a terminal application called “traceroute” to display [most of] the hops my requests took. Routes handled by ButtonMash were very short (I tested buttonmash.lan and a known entry on my blocklist), while others took up to 30 hops. My ISP’s site fell in the latter category.

However: one cell phone in the family was still reaching our ISP’s site while on cell data. This meant that the site was fine, but the failure was with the larger DNS system (or most of their servers were down behind their reverse proxy, but I thought of that later). In either case, I looked at a couple “Is it down?” type sites, and concluded the outage was most certainly not my problem.

Unbound

But I had a project to try. Unbound is a tool for increasing digital privacy by setting up a recursive DNS server. Every domain is registered with an authoritative DNS server. When Unbound receives a request, it finds the domain’s authoritative DNS server itself and caches the answer for later. This reduces your digital footprint in third-party DNS server logs, making you harder to track and reducing your vulnerability to a hacked or confused DNS server.

I’ve been interested in building my own Unbound OCI “Docker” container for a while, as there’s no officially maintained one, but I went ahead and downloaded an image from docker.io based on the quality of its documentation. I spun up a container in Podman and pointed Pi-Hole at it. It worked on the first try with no special configuration.
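The general shape of that setup was something like the following – treat the image name and port mapping as placeholders, since the exact image I pulled may differ:

# run Unbound from a community image (image name and ports assumed)
podman run -d --name unbound \
    -p 5335:53/tcp -p 5335:53/udp \
    docker.io/mvance/unbound:latest
# then set Pi-Hole's custom upstream DNS to 127.0.0.1#5335 in its admin panel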

It just so happened that when I brought the fix online, we were on a call with tech support, and I was able to pass my diagnosis back to our ISP to help them restore service to their customers in my community – however large or small that number may be.

Takeaway

What’s with the lack of persistence volumes on this container? If it resets, it will have to start its cache over from scratch. If/when I come back in a few months, I may take a closer look. Otherwise, this has been a big shortcut I can live with.

Final Question

Have you worked with Unbound before? Would it even benefit from a persistence volume?

I look forward to hearing from you on my Socials!

Rocky Server Stack Deep Dive: 2023 Part 3

Good Morning from my Robotics Lab! This is Shadow_8472 and today is part 3 of this year’s big server push. Let’s get started!

Tuesday

Podman NFS

I had a slow start to the week, but I decided to poke at whether Podman over NFS was actually as impossible as I thought. It wasn’t. A VERY special thank you to ikkeT, whom I made contact with over the official Podman Discord server and who pointed out, “nfs doesn’t know about selinux.”

Backing up a bit, I had a few minutes waiting for supper today, so I decided to start a clean attempt at mounting a directory over NFS. My trials leading up to creating a persistence volume worked until I tried mounting one from GoldenOakLibry over NFS. Once I removed SELinux’s :z permissions flag per ikkeT’s advice, the container started and I found a flag file I’d previously left in that directory.
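A minimal sketch of the difference, with placeholder paths – the :z suffix asks Podman to apply an SELinux relabel, which an NFS mount can’t honor:

# fails on an NFS share: Podman tries to relabel the mount for SELinux
podman run --rm -v /mnt/nfs/test:/data:z docker.io/library/busybox ls /data
# works: the same mount without the relabel flag
podman run --rm -v /mnt/nfs/test:/data docker.io/library/busybox ls /data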

Sadly, applying this quick fix did not repair my old scripts. I plan on re-building them this week.

In light of this discovery, I moved a 1TB solid state drive labeled “MineOS” from ButtonMash back to GoldenOakLibry. Around half a dozen micro-blunders later, I additionally removed its line from ButtonMash’s file system table (fstab) and noted that Debian 11 –which my RedLaptop is running– will be facing an End of Life (EoL) situation around a month after Rocky 8 on ButtonMash.

Vaultwarden Migration

It was a bit messy, but I did a “Take II” on migrating Vaultwarden into Caddy’s reach. I used root to copy the vw-data directory from Vaultwarden’s dedicated user to PiHole’s. I copied it again as PiHole’s user to correct root’s ownership of all the sub-directories, and then removed root’s copy… confirming. one. file. at. a. time.

For the migration proper (as well as my NFS testing earlier), I learned about BusyBox, a container with a minimalistic set of command line tools. I spun it up with a newly created Vaultwarden volume mounted alongside the old directory. Inside, I copied all the files over.
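Roughly, that operation looked like this – volume, path, and image names are assumed for illustration:

# old directory bind-mounted beside the new volume
podman run -it --rm \
    -v vw-data:/new \
    -v /home/pihole/vw-data:/old \
    docker.io/library/busybox sh
# then, inside the container:
cp -a /old/. /new/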

I started Vaultwarden using the new volume and Caddy as the reverse proxy to access it, but I also had to stop the production-grade Pi-Hole in favor of the development configuration. I navigated over to it with my local domain and found the migration was a success.

Android root.crt Import

When I got up this morning, I told myself I would install the root certificate from Caddy on my Android tablet. I copied the file over USB, and then went to Settings>Security>OTHER SECURITY SETTINGS>CREDENTIAL STORAGE>User certificates, where a pop-up menu found it.

With the certificate in place and the environment pointed at my Vaultwarden subdomain, Bitwarden now signs in with little issue.

Wednesday

Goals for Today

Still so much to do. My main target for today will be re-building Nextcloud’s script to store data over NFS so it can be hosted by either ButtonMash or RedLaptop.

Likewise, it will be good to have Pi-Hole’s data hosted in a common space to be hosted in parallel, though I’ll want to consider how the two instances will fight or get along with each other. I’ll want to compose a help request to a Pi-Hole community after I have Nextcloud working.

Minor Cleanup Tasks

Vaultwarden: I migrated the Bitwarden clients on my daily drivers (DerpyChips and my Upstairs Workstation) to the new Vaultwarden server. For Derpy, I did concede to installing Caddy’s root certificate directly into Firefox without pressing the issue of making it use the system’s trust anchors. I may or may not have forgotten about that issue when logging out.

Pi-Hole: I spotted a configuration with the upstream DNS pointed incorrectly. It must not have migrated correctly. It’s now pointed at its local upstream router. I additionally did a little research on a redundant Pi-Hole, and I think I’ll not fuss with joint memory after all.

Unbound: This is more of scoping out a project closely related to Pi-Hole. Unlike many of the other projects I’m using, Unbound’s developer, NLnet Labs, does not appear to publish an official Docker container.

RedLaptop’s Package Manager

Debian needs more attention than I pay it on RedLaptop. Judging by its errors, apt looks stuck on invalid cryptographic signatures related to repositories for OBS, a streaming program I never got working properly. I uninstalled the suite, but had difficulty determining which repository needed to be removed – if any. Follow-up investigation strongly hinted at involvement with Lutris, which I want to keep. I found a page on software.opensuse.org and puzzled out which command to apply for my own setup. This did not change anything.

I poked around further in apt’s config files. I cleared a warning by re-enabling a WINE listing and pointing it at Debian 11’s repositories.

Podman Sorely Out of Date… Maybe?

Three strikes against RedLaptop’s Podman! It doesn’t support secrets. It doesn’t support “volume export” or “volume import” for tarballing volumes (as I found out while duplicating Pi-Hole from ButtonMash just now). And the mission-critical third: networking between containers and pods.

Earlier this month, I learned about how Podman 4.0 brought in a new networking stack. RedLaptop has Podman 3.0.1 available per Debian’s “stability first” mentality. When I failed to find Podman’s network sub-command by tab-completion, my first instinct was to try installing a newer version from a Debian backports/main or backports-sloppy/main repository. I found no such package exists, and a topic on Reddit confirmed my observations.

Compiling a new version myself would be an option, but could take anywhere from an hour to a full week. Maybe a package intended for Debian 12 would work, but the path of least resistance is learning to work with the old standard, slirp4netns. Long story short, for my purposes, I didn’t need to change anything after all, though I did passively learn a little about how Podman networks have IP address ranges the same as any other subnet.

Pi-Hole and Caddy

The plan here is redundancy to cover planned downtime for ButtonMash. I have a goal of starting to scan those family photos by the end of the year.

So, continuing to work on RedLaptop: Pi-Hole is already running with volumes copied from ButtonMash. For port forwarding, I installed firewalld, used the commands from last week, and accidentally locked myself out when reloading the firewall. SSH was still open though, so Cockpit on ButtonMash bailed me out without needing physical access or dropping to a straight terminal. While I was at it, I opened the ports I plan on using for now.

I confirmed my work by accessing the admin panel on RedLaptop’s Pi-Hole and testing its DNS lookup with nslookup. To finish the task, I told the home router to recommend RedLaptop as a secondary DNS option over DHCP and gave the two installations in-container hostnames based on their respective host machines.
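nslookup makes this kind of check painless because a query can be aimed at a specific server – something like this, with a placeholder address:

# ask RedLaptop's Pi-Hole directly instead of the default resolver (IP assumed)
nslookup buttonmash.lan 192.168.0.X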

Caddy will be a different story. RedLaptop is getting its own domain, so I expect more headache working out domain names than if I were only adapting my Caddyfile.

Goals

I didn’t reach either of my stated goals per se, but that’s OK. My understanding of running redundant Pi-Holes has grown considerably, and I did partially address a few issues on RedLaptop. I’ll work on Nextcloud tomorrow for sure.

Thursday

Nextcloud or Bust

It’s Nextcloud day. In sequence, to get Nextcloud running on RedLaptop, I made a subdomain in Pi-Hole, added a reverse proxy entry in Caddy, corrected the IPs in the Caddyfile, backed up my working Nextcloud deployment, networked Nextcloud’s pod to Caddy’s and Pi-Hole’s, and added Caddy’s root certificate to my workstation’s trust stores.

Nextcloud then got back to me with “Access through untrusted domain,” pointing me toward config/config.php. I found the window in Cockpit-Podman a bit small for such a large file, so I mounted Nextcloud’s persistency volume in a BusyBox container… no nano text editor. BusyBox had vi, but not vim; Nextcloud’s container has none of the above. I pulled up a vi/vim cheat sheet.

There’s a meme I once saw where a young programmer trying to exit vim serves as a random text generator. It’s not far off. Even with the cheat sheet, it took me a few minutes – especially with vim having some extra and equally unintuitive shortcuts to save and quit. For the record, I found :w to save and :q to quit.

After changing out RedLaptop’s static IP for nextcloud.redlaptop.lan as a “trusted_domain,” I reached the login screen. I adjusted my password in Bitwarden accordingly.

While trying to add another entry to my Caddyfile, I decided to cement how I’m going to organize it: alphabetical by domain, then by successive subdomains. Top-level domains point at individual machines, each service gets a subdomain, and panels strictly for admin access get an admin subdomain from there. I’ll have to think about the merits of how I assign Podman network IPs to containers. Right now, it’s in the order I first set them up, but I’m thinking a hash might be of merit.

Next, I followed up on my work to restore the solid state drive as a GoldenOakLibry NFS share. I re-enabled the automounts on ButtonMash and copied the required files to RedLaptop. Then I did some last-minute tests on mounting volumes over NFS. I could create volumes on an NFS share no problem, but using them suddenly requires root permissions. I launched a help request to the Podman Discord and continued with mounting directories.

Continuing with an old plan involving three mounts, I aimed MariaDB and the general Nextcloud data volumes at the small but fast NFS share. But where exactly to host the bulk storage? I examined the old volume with BusyBox and found that users each have their own photos. I’m totally making a dedicated Photo Trunk account tomorrow and mounting the bulk storage in there.

Redis is recommended for caching data to maintain performance as a database grows larger. On a passing inspection, it didn’t look all that bad to set up, so I downloaded a container image and added it to the Nextcloud pod.
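Adding it amounted to one more container in the pod – a sketch along these lines, borrowing the pod name from my creation script:

# add Redis to the existing Nextcloud pod (restart policy assumed to match the others)
podman create --pod NextcloudRedLaptop --name RedisContainer \
    --restart on-failure docker.io/library/redis:latest
podman start RedisContainer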

Thinking ahead about the final presentation of the Photo Trunk project, I checked in on the image board I thought I wanted, Philomena, only to learn many such projects rely on Amazon Web Services, which is a hard pass for my open source, self-hosted approach.

Goals

I’d say I did a lot better, but I still have a bunch of cleanup to do. Tomorrow, I want to finish this new Nextcloud installation and create accounts for admin, myself, and a special one dedicated to Photo Trunk.

Friday

I started work by navigating to nextcloud.redlaptop.lan and found it totally blank. The pod wasn’t running. Whatever I did after correcting a capitalization mismatch, Podman locked up on me. I rebooted. While Pi-Hole and Caddy came up manually, I found the Nextcloud pod spazzing out. From MariaDB’s container, one representative repetition of many:

2023-10-27 22:10:57+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:11.0.2+maria~ubu2204 started.
chown: changing ownership of '/var/lib/mysql/': Operation not permitted

Nextcloud’s container was similarly complaining about blocks of chown operations not working. I reached out for help again and was recommended to look into root squashing. I tried a couple different things, updated GoldenOakLibry, and still got the same result. Further investigation will be required.
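For context: root squashing is an NFS export option that remaps a client’s root user to an unprivileged anonymous UID, which collides with the chown calls rootless Podman makes from inside its user namespace. The knob lives in /etc/exports on the NFS server – a hypothetical illustration, not my working config:

# /etc/exports on the NFS server – path, subnet, and options assumed
/srv/nfs/podman  192.168.0.0/24(rw,sync,no_root_squash,no_subtree_check)
# re-export after editing
exportfs -ra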

Goals

It grows close to sundown, so I must break it off here. My goal for Saturday night/Sunday is again Nextcloud over NFS. I’ve been having a lot of grief out of Podman locking up on me, so I might have to reset the whole thing. I’ve had to do it before, but with everything in the same place this time, I’ll want to write a script to pull all container images I use.

Saturday Night

New Automation Scripts

Podman is still giving me issues with containers stuck in improper states. I prepared a script to re-download all my container images so I don’t have to rebuild manually in case this keeps happening. While this is a workaround, it may let me notice a pattern if I observe Podman failing enough times in the long run. Right now though, the only pattern I see is that rebooting semi-fixes the hang for a few minutes of work.
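The script itself is nothing fancy – a loop over every image I depend on, along these lines (the list is abridged and assumed):

#!/bin/bash
# re-pull all container images after a Podman wipe (image list assumed)
images=(
    docker.io/pihole/pihole:latest
    docker.io/library/caddy:latest
    docker.io/vaultwarden/server:latest
    docker.io/library/mariadb:latest
    docker.io/library/nextcloud:latest
    docker.io/library/redis:latest
)
for image in "${images[@]}"; do
    podman pull "$image"
done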

I also created a script to call each of the other startup scripts. I don’t anticipate needing it in the long term though. One thing of note for both this and the re-download script is that I wrote them from my tablet, which had previously refused to load Cockpit through the browser I was using.

Podman Reset and Cleanup

STUPID! While going to do the heavy duty Podman reset, I specifically checked its warning:

$ podman system reset
WARNING! This will remove:
- all containers
- all pods
- all images
- all build cache
Are you sure you want to continue? [y/N]

NO MENTION OF VOLUMES!!! It nuked my volumes along with everything else. For what it’s worth, I can still rebuild relatively quickly. I’ll just design my system to work with “bind mounts” instead of relying so heavily on volumes this time around. I was NOT wanting a wipe this thorough!
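The distinction, sketched with placeholder names: a named volume lives inside Podman’s own storage and dies with it, while a bind mount is just a host directory that a reset never touches:

# named volume: managed by Podman, removed by a system reset
podman create -v MariaDBVolume:/var/lib/mysql docker.io/library/mariadb:latest
# bind mount: a plain host directory that survives a reset (path assumed)
podman create -v /home/pihole/mariadb-data:/var/lib/mysql docker.io/library/mariadb:latest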

I took this opportunity in the rough to polish a few of my file names. My biggest loss besides my fallback Nextcloud installation was the RedLaptop copy of Pi-Hole, which I’d modified a bit from the original. On my upstairs workstation at least, I also updated RedLaptop’s root certificate authority certificate.

Goals

A few suggestions to try came in over Sabbath. My goal for tomorrow is to look into them if nothing else.

Sunday

I spent my mental energy on an event today, but I got a little research in nonetheless. To recap: rootless Podman does things with namespaces which NFS doesn’t support.

GitHub user rhatdan, a maintainer on Podman, speculated I could use something called a FUSE file system. I don’t get the full concept, but it appears to be the better option should it develop.

Another GitHub user going by eriksjolund separately suggested manually managing my namespace and helped me clear out a fair number of errors. It took a few tries, but as of writing, I suspect this course of action may prove a dead end because it appears to conflict with how I’m interfacing my containers with Caddy. This suggestion too needs additional time to develop, but develop it sure did when I hit refresh and found a fresh reply. It was interesting –to say the least– working out for myself what some of the recommended changes to my script did.

This just in on the way to publication: it looks like manual UID/GID mapping has run its course unless someone new has a brilliant idea that’s simpler than compiling Podman from source or updating Debian.

To be honest, I’m getting burned out on server work. It’s been an awesome run seeing things start coming to life, but I feel the need for some time off from it. I’ll follow up with these two leads, but don’t be surprised if I do a Re-Learning Linux post next week.

Takeaway

I’m beginning to have moments where I see ButtonMash, RedLaptop, and GoldenOakLibry as parts of a greater whole. By working out the former two to operate in parallel, I’m creating a system with redundancy built in. I can work on one and improve the other when I’m confident in my abilities to do it correctly.

Final Question

Rootless Podman NFS. How?

Works Cited

[1] Shadow_8472, D. Walsh “rhatdan,” E. Sjölund “eriksjolund,” “Rootless NFS Volume Permissions: What am I doing wrong with my Nextcloud/MaraiDB/Redis pod?,” github.com, Oct. 27, 2023–. [Online]. Available: https://github.com/containers/podman/discussions/20519. [Accessed Oct. 30, 2023].

Rocky Server Stack Deep Dive: 2023 Part 2

MAJOR progress! This week, I finally cracked a snag that’s been holding me up for two years. Read on as I start a deep dive into my Linux server stack.

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am continuing renovations on my home server, ButtonMash. Let’s get started!

The daily progress reports system worked out decently well for me last week, so I’ll keep with it for this series.

Caddy is an all-in-one web server. My primary goal this week is to get it listening on port 44300 (the HTTPS port multiplied by 100 to get it out of privileged range) and forwarding vaultwarden.buttonmash.lan and bitwarden.buttonmash.lan to a port Vaultwarden, the Bitwarden-compatible server I use, is listening on over ButtonMash’s loopback (internal) network.

Tuesday Afternoon

From my Upstairs Workstation running EndeavourOS, I started off with a system upgrade and reboot while my workspace was clean. From last week, I remembered Vaultwarden was already rigged to hold port 44300, but straight away, I also remembered its preferred configuration takes HTTP coming into the container, so I’ll be sending it to 8000 instead.

My first step was to stop the systemd service I’d set up for it and start a container without the extra Podman volume and ROCKET arguments needed to manage its own HTTPS encryption. Getting my test install of Caddy going was trickier. I tried to explicitly disable its web server, but figured it was too much trouble for a mere test, so I moved on to working with containers.

While trying to spin up a Caddy container alongside Pi-Hole, I ran into something called rootlessport hogging port 8000. I ran updates and rebooted the server. And then I realized I was trying to put both Caddy and Vaultwarden on the same port! I got the two running at the same time and arrived at Caddy’s slanted welcome page both by IP and via a Pi-Hole-served domain_name:port_number.

Subdomains are my next target. I mounted a simple Caddyfile pointing to Vaultwarden and got stuck for a while researching how I was going to forward ports 80 and 443 to 8000 and 44300, respectively. Long story short, I examined an old command I used to forward DNS traffic to Pi-Hole, and after much background research about other communication protocols, I decided to forward just TCP and UDP. I left myself a note in my administration home directory.

DNS: Domain Name System – Finds IP addresses for URLs.

sudo firewall-cmd --zone=public --add-forward-port=port=8000:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=8000:proto=udp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=44300:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=44300:proto=udp:toport=44300 --permanent

I still didn’t get a reply from vaultwarden.buttonmash.lan. I tried nslookup, my new favorite tool for diagnosing DNS, but it was from observing Caddy’s cluttered logs that I spotted it rejecting my domain name because it couldn’t authenticate it publicly. I found a “directive” to add to my reverse proxy declaration to use internal encryption.
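The directive in question is presumably tls internal, which tells Caddy to sign the site with its own local CA instead of trying a public ACME authority. A sketch of the relevant Caddyfile entry, with the backend port assumed:

vaultwarden.buttonmash.lan {
    tls internal
    reverse_proxy 127.0.0.1:8080   # Vaultwarden's published port, assumed
}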

But I still couldn’t reach anything of interest – because reverse-proxied traffic was just bouncing around inside the Caddy container! The easy solution –I think– would be to stack everything into the same pod. I still want to try keeping everything in separate containers though. Another easy solution would be to set the network mode to “host,” which comes with security concerns, but would work in-line with what I expected starting out. However, Podman comes with its own virtual network I can hook into instead of lobbing everything onto the host’s localhost as I have been doing. Learning this network will be my goal for tonight’s session.

Tuesday Night

The basic idea behind using a Podman network is to let your containers and pods communicate. While containers in a pod communicate as if over localhost, containers and pods on a shared Podman network communicate as if on a Local Area Network, down to IP address ranges.

My big question was whether this works across users, but I couldn’t find anyone saying one way or the other. Eventually, I worked out a control test. Adding the default Podman network, “podman,” to the relevant start scripts, I used ip a where available to find the containers’ IP addresses. Pi-Hole then used curl to grab a “Hello World!” hosted by Caddy under the same user. I then curled the same ip:port from Vaultwarden’s container under its own user and failed to connect. This locked-down behavior is expected from a security point of view.
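A compressed version of the test, with names and addresses as placeholders – the default network on my install hands out 10.88.0.X-style addresses, though yours may differ:

# both containers started by the same user on the default network
podman run -d --network podman --name caddy-test docker.io/library/caddy:latest
# from a shell inside the Pi-Hole container:
ip a                        # note this container's own address
curl http://10.88.0.X:80    # Caddy answers; the same curl from another user's container fails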

On this slight downer, I’m going to call it a night. My goal for tomorrow is to explore additional options and settle on one even if I don’t start until the day after. In the rough order of easy to difficult (and loosely the inverse of my favorites), I have:

  1. Run Caddy without a container.
  2. Run Caddy’s container rootfully.
  3. Run Caddy’s container in network mode host.
  4. Move all containers into a single user.
  5. Perform more firewalld magic. (Possibly a flawed concept)
  6. (Daydreaming!!) Root creates a network all users can communicate across.

Whatever I do, I’ll have to weigh factors like security and the difficulty of maintenance. I want to minimize the need for using root, but I also want to keep the separate accounts for separate services in case someone breaks out of a container. At the same time, I need to ask if making these connections will negate any benefit for separating them across accounts to begin with. I don’t know.

Wednesday Afternoon

I spent the whole session composing a help request.

Wednesday Night

The names I am after for higher-power networking of Podman containers are Netavark and Aardvark. Between 2018 and around February 2022, it would have been Slirp4netns and its plethora of plugins. Here approaching the end of 2023, that leaves around four years’ worth of obsolete tutorials against not quite two years with the current information – and that’s assuming everyone switched the moment the new standard was released, which is an optimistic assumption to say the least. In either case, I should be zeroing in on my search.

Most discouraging is how most of my search results involving Netavark and Aardvark end up pointing back to the Red Hat article announcing their introduction for fresh installs in Podman 4.0.

My goal for tomorrow is to make contact with someone who can point me in the right direction. Other than that, I’m considering moving all my containers to Nextcloud’s account or creating a new one for everything to share. It’s been a while since I’ve been this desperate for an answer. I’d even settle for a “Sorry, but it doesn’t work that way!”

Thursday Morning

Overnight I got a “This is not possible, podman is designed to fully isolate users from each that includes networking,” on Podman’s GitHub from Luap99, one of the project maintainers [1].

Thursday Afternoon

Per Tuesday Night’s entry, I have multiple known solutions to my problem. While I’d love an extended discourse about which option would be optimal from a security standpoint in a production environment, I need to remember I am running a homelab. No one will be losing millions of dollars over a few days of downtime. It is time to stop the intensive researching and start doing.

I settled on consolidating my containers into one user. The logical choice was Pi-Hole’s: the home directory was relatively clean, and I’d only need to migrate Vaultwarden. I created base directories for each service, noting how I will need to make my own containers some day for things like game servers. For now, Pi-Hole, Caddy, and Vaultwarden are my goals.

Just before supper, I migrated my existing Pi-Hole from hard-mounted directories to Podman volumes using Pi-Hole’s Settings>Teleporter>Backup feature.

Thursday Night

My tinkerings with Pi-Hole did not go unnoticed. At family worship, I had a couple family members reporting some ads slipping through. At the moment, I’m stumped. If need be, I can re-migrate by copying the old instance with a temporary container and both places mounted. My working assumption though is that it’s normal cat-and-mouse shenanigans with blocklists just needing to update.

It’s been about an hour, and I just learned that any-subdomain.buttonmash.lan and buttonmash.lan are two very different things. Every subdomain I plan to use on ButtonMash needs to be specified in Pi-Hole as well as Caddy. With subtest.buttonmash.lan pointed at Caddy and Caddy pointing the same subdomain at my port 2019 Hello World!, I get a new error message. It looks like port 80 might be having some trouble getting to Caddy…

$ sudo firewall-cmd --list-all

forward-ports:
port=53:proto=udp:toport=5300:toaddr=

That would be Pi-Hole’s port forward, and only that. Looking at the note I left myself Tuesday, I can see I forwarded ports 8000 and 44300 into themselves! The error even ended up in the section above. Here’s the revised version:

sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=udp:toport=8000 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=44300 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=udp:toport=44300 --permanent

I also removed Tuesday’s flubs, but none of these changes showed up until I used

sudo firewall-cmd --reload

And so, with Pi-Hole resolving subdomains individually and the firewall actually forwarding the HTTP and HTTPS ports (never mind that incoming UDP is still blocked for now), I went to https://vaultwarden.buttonmash.lan and was greeted with Firefox screaming at me, “Warning: Potential Security Risk Ahead” as expected. I’ll call that a good stopping point for the day.

My goal for tomorrow is to finish configuring my subdomains and extract the keys my devices need to trust Caddy’s root authority. It would also be good to either diagnose my Pi-Hole migration or re-migrate it a bit more aggressively.

Friday Afternoon

To go any farther, I need to extract Caddy’s root Certificate Authority (CA) certificate and install it into the trust store of each device I expect to access the services I’m setting up. I’m shaky on my confidence here, but there are two layers of certificates: root and intermediate. The root key is kept secret and is used to generate intermediate certificates. Intermediate keys are issued to websites to be used for encryption when communicating with clients. Clients can then use the root certificate to verify that a site’s intermediate certificate was made with an intermediate key generated from the CA’s root key. Please no one quote me on this – it’s only a good-faith effort to understand a very convoluted ritual our computers perform to know whom to trust.

For containerized Caddy installations, this file can be found at:

/data/caddy/pki/authorities/local/root.crt
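Getting it out of a container is a single copy – something like this, with the container name assumed:

# copy the root CA certificate out of the running Caddy container
podman cp caddy:/data/caddy/pki/authorities/local/root.crt ./caddy-root.crt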

This leads me to the trust command. Out of curiosity, I ran trust list on my workstation and lost count around 95 entries, but I estimate between 120 and 150. To tell Linux to trust my CA, I entered:

trust anchor <path-to-.crt-file>

And then Firefox gave me a new warning: “The page isn’t redirecting properly,” suggesting an issue with cookies. I just had to correct some mismatched IP addresses. Now, after a couple years of working toward this goal, I finally have that HTTPS padlock. I’m going to call it a day for Sabbath.

My goal for Saturday night and/or Sunday is to clean things up a bit:

  • Establish trust on the rest of the home’s devices.
  • Finish Vaultwarden migration
  • Reverse proxy my webUIs through Caddy: GoldenOakLibry, PiHole, Cockpit (both ButtonMash and RedLaptop)
  • Configure Caddy so I can access its admin page as needed.
  • Remove -p ####:## bound ports from containers and make them go through Caddy. (NOT COCKPIT UNTIL AVAILABLE FROM REDUNDANT SERVER!!!)
  • Close up unneeded holes in the firewall.
  • Remove unneeded files I generated along the way.
  • Configure GoldenOakLibry to only accept connections through Caddy. Ideally, it would only accept proxied connections from ButtonMash or RedLaptop.
  • Turn my containers into systemd services and leave notes on how to update those services
  • Set up a mirrored Pi-Hole and Caddy on RedLaptop

Saturday Night

Wow. What was I thinking? I could spend a month in and of itself chewing on that list, and I don’t see myself as having the focus to follow through with everything. As it was, it took me a good half hour to just come up with the list.

Sunday

I didn’t get nearly as much done as I envisioned over the weekend because of a mental crash.

Nevertheless, I did do a little additional research. Where EndeavourOS immediately accepted the root certificate such that Firefox displayed an HTTPS padlock, the process remains incomplete where I tried it on PopOS today. I followed the straightforward instructions for Debian-family systems found on the Arch Wiki, but when I run update-ca-certificates, it claims to have added something no matter how many times I repeat the command, without any of the numbers actually changing. I’ve reached out for help.
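For reference, the standard Debian-family procedure I was following looks like this – note the certificate has to land in this exact directory with a .crt extension:

# Debian family: install a root CA system-wide (file name assumed)
sudo cp caddy-root.crt /usr/local/share/ca-certificates/caddy-root.crt
sudo update-ca-certificates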

Monday Morning

I’ve verified that my certificate shows up in /etc/ssl/certs/ca-certificates.crt. This appears to be an issue with Firefox and KDE’s default browser on Debian-based systems. I’ll decide another week whether I want to install the certificate directly into Firefox or explore the Firefox-Debian thing further.

Takeaway

Thinking back on this week, I am again reminded of the importance of leaving notes about how to maintain your system. Even a foggy AM brain is better able to jot down a relevant URL that made everything clear, where the same page may be difficult to re-locate in half a year.

My goal for next week is to develop Nextcloud further, though I’ll keep in mind the other list items from Friday.

Final Question

What do you think of my order of my list from Friday? Did I miss something obvious? Am I making it needlessly overcomplicated?

Let me know in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, Luap99, “How Do I Network Rootless Containers Between Users? #20408,” github.com, Oct. 19, 2023. [Online]. Available: https://github.com/containers/podman/discussions/20408. [Accessed Oct. 23, 2023].

[2] Arch Wiki, “User:Grawity/Adding a trusted CA certificate,” archlinux.org, Oct. 6, 2022 (last edited). [Online]. Available: https://wiki.archlinux.org/title/User:Grawity/Adding_a_trusted_CA_certificate#System-wide_–_Debian,_Ubuntu_(update-ca-certificates). [Accessed Oct. 23, 2023].

Rocky Server Stack Deep Dive: 2023 Part 1

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am revisiting my Vaultwarden server to better understand the process. Let’s get started!

Monday Evening (October 9, 2023)

OK, so here’s the deal from memory: I want to put Bitwarden on my tablet. My Bitwarden server is a third-party implementation called Vaultwarden. Official Bitwarden clients, such as the one I want to put on my Android tablet, use the same encryption scheme as your average browser, which relies on a system of certificates going back to a root authority. I made my own authority to get my project working, so my computers will complain by default about the connection being insecure. My desktops have been content warning me every so often, but Android is fresh ground to cover. This past week, I noticed an expired certificate, so it’s time for maintenance.

It’s early in the week, but this has been a historically difficult topic. I’ll be using my December 13, 2021 post as a guide [1].

Monday Night

I’ve assessed this project’s state: “dusty.” Vaultwarden is containerized on ButtonMash in a locked-down account configured to allow lingering (non-root users’ services otherwise get killed eventually). The last time I touched Vaultwarden was before I learned the virtue of scripting as opposed to copy-pasting complex commands I will need again in months to years. I foresee the next topic in my Relearning Linux series.
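Lingering itself is a one-command affair through loginctl – the account name here is a placeholder:

# let the service account keep its user services running with no open session
sudo loginctl enable-linger vaultwarden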

My goal for tomorrow is to brush up on my terminology and generate a new certificate authority.

Tuesday Afternoon

While reviewing my above-mentioned post, I remembered Caddy Web Server and looked it up. Its self-signed security certificate functionality was a little too much hassle for me to work out around June 6, 2022 [2]. I shelved it pending a better router before further experimenting with Let’s Encrypt for real certificates, which still hasn’t happened.

Also in my June 6, 2022 post, I owed my disappointment to the official Caddy tutorials assuming loopback access, while I was running a headless server and left without clear instructions for finding the certificate I needed to install into my browser on another computer. One idea coming to mind would be to find a browser container to VNC into. But then I’d still be left without the automatic certificate unless I made my own container with Caddy+Firefox+VNC server. An excellent plan Z if I ever heard one, because I am not looking to explore that right now, but it isn’t not an option.

VNC: Virtual Network Computing, remote desktop. A desktop environment hosted in one place, but rendered and operated from across a network.

For a plan A, I’d like to try the documentation again. Perhaps it will be clearer to my slightly more experienced eyes. For what it’s worth, Caddy is still installed from before.

Tuesday Night

I’ve explored my old Caddy installation, and my limited understanding is starting to return. My big snag was thinking I needed to use the default HTTP ports while running Caddy rootless. Subdomains may have been involved. I’m interested in trying Caddy containerized, where I have a better grasp on managing ports. Furthermore, if domain names are of concern, I believe involving Pi-Hole may be my best bet, and with that I fear this project just became a month long or longer.

Anyway, I’m going to look into Pi-Hole again. I have an instance up and passing domain names, but I’ve not gotten it to filter anything. Hopefully interpreting a host name won’t be too difficult a goal for tomorrow.

In the meantime, I noticed Pi-Hole was complaining about a newer version being available. I went to work on it, but the container was configured to be extremely stubborn about staying up. I force-deleted the image, and Pi-Hole re-downloaded the updated one on restart. I’ve left a note with instructions where I expect to find it next time I need to update.

Wednesday Afternoon

To refresh myself, my current goal is to get Pi-Hole to direct buttonmash.lan to Caddy on an unprivileged port for use with reverse proxy access within ButtonMash.

I found an unambiguous tutorial affirming my general intuitions about Pi-Hole itself, meaning my problems lie in directing requests to Pi-Hole. For this I went to my router, which had wanted an update since July but had trouble downloading it. I hit refresh and it found a September update. It installed, and I had to manually reconnect a Wi-Fi connection (OpenWRT on a Raspberry Pi).

I’ve blown a good chunk of my afternoon with zero obvious forward progress since the last paragraph. Where to start? I’ve learned nslookup, a tool for testing DNS. With it, I validated that my issue is getting traffic to Pi-Hole in the first place. The real temptation to unleash a horde of profanities was my home router from TP-Link. The moment I pointed DNS at ButtonMash on 192.168.0.X: “To avoid IP conflict with the front-end-device, your router’s IP address has been changed to 192.168.1.1. Do you want to continue to visit 192.168.0.1?” That was a chore to undo. I needed a browser on a static IP – meaning my RedLaptop sitting in retirement as a server.

DNS: Domain Name System. This is the part of the Internet that translates the domain names within URLs into IP addresses so your browser can find a requested web page.

I’ve appealed to r/pihole for help [3].

Wednesday Night

I’ve gotten a few bites on Reddit, but my problem isn’t solved yet. I performed an update on my laptop and installed a tool similar to nslookup (Debian didn’t come with it) to verify that OpenWRT wasn’t the culprit. I’ve found forum posts…

No forward progress today, but I did make some “sideways progress.” At least I knew how to fix the netquake I caused.

My goal for tomorrow: I don’t know. IPv6 has been suggested; I’ll explore that. Maybe I’ll switch the OpenWRT router for my upstairs workstation over to assigning ButtonMash’s Pi-Hole.

Thursday Afternoon

Good day! Late last night I posted my appeal from r/pihole to r/techsupport’s Discord. While answering the responses I received last night, I ran my nslookup test on a 192.168.0.X workstation receiving an IP over DHCP – and I got an unexpected success. Silly me had run my earlier tests on my statically addressed laptop. Also, leaving things overnight gave DHCP a chance to refresh. The test still failed on my upstairs workstation, so I pointed OpenWRT at ButtonMash instead of the router for DNS. I’ll still need to clean up my static IPs, but I should be good to play with pointing Pi-Hole at Caddy.

My next big task is configuring Caddy.

Thursday Night

My next big task is preparing Vaultwarden for use with Caddy.

Vaultwarden is one of the longest-running and most important services on ButtonMash. It was for Vaultwarden I started learning Podman. It was for Vaultwarden I learned how to allow lingering on a server account. It was for Vaultwarden I researched NGINX and Let’s Encrypt to serve as a reverse proxy with HTTPS support, before later finding Caddy to do the same things with hopefully less hassle. Vaultwarden has been with me so long, it was Bitwarden-rs (or some variant thereof) when I started using it, and I didn’t follow the name when it changed. With Vaultwarden, I’ve grown from dumping a raw command into a terminal, to using a script, and on into turning that script into a rootless systemd service that will refuse to stay down until stopped properly.

But problematically, Vaultwarden holds port 44300, which I want for Caddy. It’s also been running with an integrated HTTPS support module called ROCKET, which the developers strongly discourage for regular deployments. This security certificate’s expiration is what is driving me to make progress like mad this week. I’ve spent the past couple (few?) years learning the building blocks to support Vaultwarden properly, along with other services to run alongside it. It’s time to grow once again.

I spent an hour or so re-researching how to set Podman up as a systemd service. And then I found a note pointing me towards the exact webpage that helped me conquer it last time. With that, I feel the need for a break and take Friday off. I’ll be back after Sabbath (Saturday night) or on Sunday to write a Takeaway and Final Question. I’ve got to give all this information some time to settle.
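For my own future reference, the rough shape of that process with Podman’s built-in generator – container and account names assumed:

# run as the service account: generate a user-level unit for an existing container
podman generate systemd --new --name vaultwarden \
    > ~/.config/systemd/user/vaultwarden.service
systemctl --user daemon-reload
systemctl --user enable --now vaultwarden.service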

Of note: I did come across a Red Hat blog post about an auto scale-down feature not present on RHEL 8, but it looks interesting for when I migrate to a newer version of Rocky Linux in seven months when active support ends: https://www.redhat.com/en/blog/painless-services-implementing-serverless-rootless-podman-and-systemd

Sunday Morning

This has been a very successful week, but I still have a way to go before I finish my server[’s software] stack: Nextcloud on Android. I researched a couple possible next steps as an encore. Caddy would be the next logical one, but Vaultwarden has the port I want it listening on. Alternatively, I dug up the name “Unbound,” a self-hosted DNS server to increase privacy from Internet Service Providers, but I’d want more research time.

Takeaway

Again: what a week! I think I have my work lined up for the month. As of Sunday night, my goal for next week is Caddy. With Caddy in place, I’ll have an easier time adding new services while punching fewer holes in ButtonMash server’s firewall.

Final Question

Even now, I’m questioning whether I want to migrate to Rocky 9 or hold out and hope Rocky 10 shows up before/around Rocky 8’s end of life. Rocky’s upstream, Red Hat Enterprise Linux (RHEL), doesn’t have a clockwork release cycle like Ubuntu, and RHEL 8 appears to be slated for support much longer than Rocky 8. That feature I learned about does sound tempting. Upgrade or hold out?

I look forward to hearing from you in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, “Self-Signed Vaultwarden Breakdown,” Dec. 13, 2021. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2021/12/13/self-signed-vaultwarden-breakdown/. [Accessed Oct. 16, 2023].

[2] Shadow_8472, “I Switched My Operations to Caddy Web Server,” June 6, 2022. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2022/06/06/i-switched-my-operations-to-caddy-web-server/. [Accessed Oct. 16, 2023].

[3] Shadow_8472, et al., “Tp-Link isn’t sending traffic to containerized Pi-Hole (I think??),” reddit.com, Oct. 11, 2023. [Online]. Available: https://www.reddit.com/r/pihole/comments/175wp2l/tplink_isnt_sending_traffic_to_containerized/. [Accessed Oct. 16, 2023].

I Have a Cloud!!!

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am celebrating my Project of the Year. I have Nextcloud installed! Let’s get started!

My NextCloud Experience in Brief

This week, I achieved a major milestone for my homelab – one I’ve been poking at since at least early March of this year, when I noted, “Nextcloud has been a wish list item since I gave up using Google’s ecosystem” [1].

As I learned more about the tech I’m working with, I added specifications for storing scanned pictures on high-capacity but slow hard disks while making smaller files available with the speed of solid state – only to later learn how rootless Podman is incompatible with NFS. I studied “Docker Secrets” to learn best practices for password management – only to move the project to an older version of Podman without that feature. One innovation to survive this torrent of course corrections is running the containers in a pod, but even that was shifted around in the switch from Rocky 8 to Debian.

Two major trials kept me occupied for a lot of this time. The first involved getting the containers to talk to each other, which they weren’t willing to do over localhost, but they were when given the mostly equivalent IPv4 loopback address, 127.0.0.1.

The second was the apparent absence of a database despite multiple attempts to debug my startup script. Nextcloud was talking to MariaDB, but it was as if MariaDB didn’t have the database specified in the command that created its container. For this, I created an auxiliary MariaDB container in client mode (thank-you-dev-team-for-this-feature!) and saw enough to make me think there was none. I wasn’t sure though.

One Final Piece

# remove MariaDB’s “dirty” volume
podman volume rm MariaDBVolume
# reset everything
./resetVolumes
./servicestart

There was no huge push this week. I came across an issue on GitHub [2] wondering why no database was being created. By starting a bash shell within the MariaDB container, I found there were some files left over from some long-forgotten attempt at starting a database. The image only initializes the database named in its environment variables when the data directory starts out empty, so those leftovers were silently skipping that step. All I had to do was completely reset the Podman volume instead of pruning empty volumes as I had been doing.

Future Nextcloud Work

Now that I have the scripts to produce fresh instances, I still have a few work items I want to keep in mind.

I expect to wipe this one clean and create a proper admin account separate from my main username, a practice I want to get better at when setting up services.

Adjacent, but I’ll want access on my tablet, which will entail getting the Bitwarden client to cooperate with my home server.

I still want my data on GoldenOakLibry. I’m hoping I can create a virtual disk or two to satisfy Podman, which RedLaptop can in turn relay over NFS to network storage.

Final Question

Have you taken the time to set up your own personal cloud? I look forward to hearing about your setup in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, “I Studied Podman Volumes,” letsbuildroboticswithshadow8472.com, March 6, 2023. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2023/03/06/i-studied-podman-volumes/. [Accessed Oct. 2, 2023].

[2] rosshettel, thaJeztah, et al., “MYSQL_DATABASE does not create database,” github.com, July 9, 2016. [Online]. Available: https://github.com/MariaDB/mariadb-docker/issues/68. [Accessed Oct. 2, 2023].

What are Relational Databases?

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am targeting Nextcloud for major milestone completion. Let’s get started.

Previously on my Nextcloud saga, I finally managed to run a pod with both Nextcloud and MariaDB in their own containers and get them to talk. The words they exchanged were along the lines of “Access Denied.” I also have a script to add a temporary third container for database command line access.
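The trick behind that third container is that the MariaDB image doubles as a client – a sketch with my pod name reused and the credentials assumed:

# temporary client container joined to the pod; removed when the shell exits
podman run -it --rm --pod NextcloudRedLaptop docker.io/library/mariadb:latest \
    mariadb -h 127.0.0.1 -u root -p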

My next immediate job is learning enough MySQL to diagnose Nextcloud’s refusal to create an admin account. I found a few commands to try and learned that the database I expected to be there on container creation didn’t appear to exist. Container logs didn’t report any errors. I need some more background research, even if that’s the only thing I do this week.

Most important things first: what is the relationship between MariaDB and MySQL? I’ve been treating the former as if it were an implementation or distribution of the latter – like the difference between Debian and Linux. But according to MariaDB’s site, they’re a fork that avoids “vendor-lock” while maintaining protocol compatibility with MySQL [1]. So MySQL help should still work for MariaDB on a “close enough” basis, sort of like how Debian solutions may still work for Ubuntu systems. When available: use a matching guide.

Contrary to what I said in my closing words last time I handled Nextcloud, writing commands in all caps is only convention – the software is usually a lot more forgiving. SELECT equals sELeCT, which equals select and SelEct, as well as the other 60 possible combinations.

MariaDB is what is called a “relational database.” Fancy phrase to me, but here goes an explanation. Data is information. Information can be organized into tables for later retrieval. Zoom out one level, and tables of information themselves become data that interacts with information in other tables. MariaDB can track these “relations.”

Takeaway

Needless to say, relational databases get messy fast. Considering how my database is meant to be locked up tight within a pod on a server that blocks direct access, a containerized webUI I can expose will do nicely. MariaDB’s website has a lengthy list of graphical clients, but only phpMyAdmin showed up as having this feature when I used “find on page” [2]. Importantly, it also shows up on docker.io as a Docker Official Image, which I can run on Podman.

Final Question

As of posting, I’m planning on spending a week or few each on learning phpMyAdmin and refining my MariaDB understanding. Otherwise, perhaps someone can tell me why Nextcloud isn’t finding the expected database. The following is a command from my pod creation script.

podman create \
        --pod NextcloudRedLaptop \
        --name MariaDBContainer \
        -v MariaDBVolume:/var/lib/mysql:Z \
        -e MARIADB_ROOT_PASSWORD="<gibberish1>" \
        -e MARIADB_DATABASE=NextcloudDB \
        -e MARIADB_USER=nextcloudDbUser \
        -e MARIADB_PASSWORD="Gibberish2" \
        --restart on-failure \
        docker.io/library/mariadb:latest

And here is what I think is the important output from MariaDB:

MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.010 sec)

If I understand what I need correctly, I should have had one show up titled “NextcloudDB” on container creation.
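One lead I have yet to verify: my understanding is that the MariaDB image only runs its first-time setup – including creating MARIADB_DATABASE – when its data directory is empty, so a volume left over from an earlier attempt would silently skip that step. If so, the check and the fix would go something like this:

podman volume inspect MariaDBVolume   # a CreatedAt older than the pod hints at leftover data
podman volume rm MariaDBVolume        # CAUTION: wipes the database; re-run the creation script after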

And so, my final question is this: Why doesn’t it work? Am I even looking in the right spot?

If I oversimplified or got something wrong about relational databases, be sure to tell me all about it in the comments!

Works Cited

[1] MariaDB, “MariaDB vs. MySQL: The difference between choice and vendor lock-in,” mariadb.com, [Online]. Available: https://mariadb.com/database-topics/mariadb-vs-mysql/. [Accessed Sept. 18, 2023].

[2] MariaDB, “Graphical and Enhanced Clients,” mariadb.com, [Online]. Available: https://mariadb.com/kb/en/graphical-and-enhanced-clients/. [Accessed Sept. 18, 2023].

Nextcloud Soon?

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am pressing on towards my very own Nextcloud deployment beyond a proof of concept. Let’s get started!

Preparing an Account

Last week, I learned that rootless Podman comes with certain challenges when trying to access a network drive. ButtonMash’s Rocky 8 Linux hard drive is tiny, so I don’t want to use it for storage. That leaves my retired laptop, “RedLaptop,” for the job.

I use dedicated user accounts to isolate processes. The first order of business was to create a new account and log in through the Cockpit web interface. I had a bit of trouble when the underlying SSH couldn’t connect because Debian doesn’t like capital letters in its usernames; the nonexistent, capital-containing username got flushed out while I was diagnosing user groups per advice on Discord. I additionally locked the account to prevent normal logins via password.
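For anyone following along, the setup amounted to something like this – the username is a stand-in:

sudo adduser nextcloudpod     # lower case only under Debian’s default rules
sudo passwd -l nextcloudpod   # lock the password against normal logins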

Preparing Containers

I’ve been developing a couple scripts for running Nextcloud on ButtonMash in a pod, so I copied them and the secrets directory (password storage) over to RedLaptop and tried them out. Errors galore! I downloaded the Nextcloud and MariaDB container images, but the older version of Podman (3.0.1 on Debian vs. 4.4.1 on Rocky 8) meant missing features – namely Podman secrets and mounting volumes on pod creation.

The secrets were easy to revert: I just had to remove the --secret flags and reinstate the passwords surrounded by quotation marks (the secrets version is sketched at the end of this section). And while later versions treat mounting volumes at pod creation as best practice, the volumes mounted nicely into their respective containers. I double checked one with

podman inspect <container ID> | grep -i volume

and found references to the correct volume.
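As promised, here is roughly what the secrets version looked like on newer Podman, reconstructed from memory – names and option details may be off:

printf '%s' '<gibberish1>' | podman secret create mariadb_root_password -
podman create \
        --pod NextcloudRedLaptop \
        --name MariaDBContainer \
        --secret source=mariadb_root_password,type=env,target=MARIADB_ROOT_PASSWORD \
        docker.io/library/mariadb:latest

The point of the exercise is keeping passwords out of the scripts themselves.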

New Territory

At long last, I made it to the Nextcloud installation screen where I need to make an admin account. From here on, all I know comes from the little scouting I did way back using the SQLite database offered by the Nextcloud container as opposed to a more capable one.

I got an error about failing to connect to the database. I found a username with a capitalization error, and I didn’t actually have a named database created within MariaDB’s container. But even with both of those fixed, I kept getting kicked back to Nextcloud’s create-an-admin screen.
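Creating the missing database and user by hand from the client container went roughly like this, reusing the names from my pod creation script:

mariadb -h 127.0.0.1 -u root -p -e "
CREATE DATABASE NextcloudDB;
CREATE USER 'nextcloudDbUser'@'%' IDENTIFIED BY 'Gibberish2';
GRANT ALL PRIVILEGES ON NextcloudDB.* TO 'nextcloudDbUser'@'%';
FLUSH PRIVILEGES;
"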

And here my research time for the week runs out. MariaDB’s log shows two warnings I suspect to be the root issue:

<timestamp> 0 [Warning] mariadbd: io_uring_queue_init() failed with errno 1
<timestamp> 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF

Takeaway

I so want to be done with this project. That is all.

Final Question

I am left without a solution for now. For all I know, it may be low RAM, which I’m already maxed out on for my model of laptop – meaning I’d need to start over again with something else. If anyone has any ideas, I’d be happy to hear about them in the comments below or on my Socials.

Podman-Nextcloud: Climb Shorter Mountains

Good Morning from my Robotics Lab! This is Shadow_8472 and today it’s bully or be bullied as I take another swing at getting my Nextcloud instance even partially usable. Let’s get started!

If there’s one long term project of mine that just loves humiliating me, it’s getting Nextcloud operational. My eventual goal is to have it running in a rootless Podman container with a way to quickly move it to an auxiliary server. My strategy thus far has been to prepare three Podman volumes hosted on GoldenOakLibry (NAS) over NFS, accounting for the speed needs of the MariaDB and Nextcloud volumes with an SSD versus the capacity needs of the PhotoTrunk volume with a RAID 5 array of HDDs.

NAS: Network Attached Storage

NFS: Network File System

SSD: Solid State Drive

HDD: Hard Disk Drive

Lowering Expectations

I’ve lost count of how many times NFS has given me grief on this project, so I eliminated it. I moved the SSD from where it was on GoldenOakLibry to ButtonMash, my main server computer. I added it to /etc/fstab – bricking Rocky Linux. ButtonMash is dual booted, so I booted to Debian for repairs.

Rocky’s installer uses an LVM2 format, which Debian can’t read by default; an lvm2 package exists, and I installed it. LVM2 volumes show up in lsblk as sub-partitions of an actual partition, and it is these sub-partitions that get mounted. For example:

sudo mount /dev/rl_buttonmash/root /mnt/temp

to mount the sub-partition that shows up as rl_buttonmash-root. While all I wanted was a quick fix, it’s a very good thing when each side of a dual booted machine can repair the other, and mounting a file system is a very important tool in that kit.
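The whole rescue sequence, roughly:

sudo apt install lvm2                         # teach Debian to read LVM2
sudo vgchange -ay                             # activate any volume groups found
lsblk                                         # rl_buttonmash-root should now be listed
sudo mkdir -p /mnt/temp
sudo mount /dev/rl_buttonmash/root /mnt/temp  # repairs happen in here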

Upon closer inspection, a contributing factor to bricking Rocky was the root account being locked. The computer booted into an emergency mode and got stuck in a loop ending with “Press ENTER to continue…” Unlocking the account didn’t get me anywhere when I looked at the logs per the loop’s recommendation, but the command lsblk -f clued me in that I was mounting the drive using the wrong file system type – an error soon remedied once I spotted it.
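In hindsight, checking the file system type before editing /etc/fstab – and adding the nofail option so a bad entry can’t hang boot – would have saved me the trip. The UUID, mount point, and file system below are placeholders:

lsblk -f   # lists FSTYPE and UUID for every partition

# /etc/fstab entry:
UUID=<drive UUID>  /mnt/data  ext4  defaults,nofail  0  2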

Project Impossible

The move hardly fixed anything. I kept getting NFS related errors when trying to run the pod, even after moving to a new mountpoint I’d never touched with NFS automounts. I even tried mounting the volumes using hard links pointed at the mounted “data drive,” and I still couldn’t get a working Nextcloud instance. Somewhere in my shambling through the apparently limited content available on this topic online, I found the following warning in Oracle’s Podman documentation:

Caution: When containers are run by users without root permissions, Podman lacks the necessary permissions to access network shares and mounted volumes. If you intend to run containers as a standard user, only configure directory locations on local file systems [1].

Rootless Podman lacks network share permissions. OK, so NFS is out unless I can selectively give Podman network share permission without going full root. Until then, Podman is limited to the local disk, and if I’m understanding this warning correctly, mounted drives are also off the table. My plans for a Photo Trunk upgrade may be grounded indefinitely, and with ButtonMash’s Rocky drive being only 60GB, I’m not looking to burden it with anything resembling bulk storage.
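If I understand the problem, it’s largely one of user namespaces: rootless Podman remaps user IDs, so ownership on a root-squashed NFS share stops lining up with who the container thinks it is. Peeking at the share from inside Podman’s namespace shows the mismatch:

podman unshare ls -ln /mnt/nfs   # numeric IDs here are the remapped versions of mine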

Takeaway

The next logical step would be to rebuild the project on a computer with more storage. Barring a full makeover of ButtonMash, I do have my Red Laptop as an auxiliary server. I made a new account there, but in all reality, this inspiration came after my research cutoff. It’s a project for another week once again.

Final Question

My project directory is messy with scripts to the point where I started a README file. Have you ever had a project so involved as to need one?

I look forward to hearing about it in the comments below or on my Socials.

Works Cited

[1] Oracle, “Configuring Storage for Podman,” oracle.com, [Online]. Available: https://docs.oracle.com/en/operating-systems/oracle-linux/podman/podman-ConfiguringStorageforPodman.html. [Accessed July 27, 2023].