Podman-Nextcloud: Climb Shorter Mountains

Good Morning from my Robotics Lab! This is Shadow_8472 and today it’s bully or be bullied as I take another swing at getting my Nextcloud instance even partially usable. Let’s get started!

If there’s one long-term project of mine that just loves humiliating me, it’s getting Nextcloud operational. My eventual goal is to have it running in a rootless Podman container with a way to quickly move it to an auxiliary server. My strategy thus far has been to prepare three Podman volumes hosted on GoldenOakLibry (NAS) over NFS, accounting for the speed needs of the MariaDB and Nextcloud volumes with an SSD and the capacity needs of the PhotoTrunk volume with a RAID 5 array of HDDs.

NAS: Network Attached Storage

NFS: Network File System

SSD: Solid State Drive

HDD: Hard Disk Drive

Lowering Expectations

I’ve lost count of how many times NFS has given me grief on this project, so I eliminated it. I moved the SSD from where it was on GoldenOakLibry to ButtonMash, my main server computer. I added it to /etc/fstab – bricking Rocky Linux. ButtonMash is dual booted, so I booted to Debian for repairs.

Rocky’s installer formats its drive with LVM2, which Debian can’t read by default. An lvm2 package exists, and I installed it. LVM2 logical volumes show up in lsblk looking like sub-partitions of an actual partition, and it is these logical volumes that get mounted, for example:

sudo mount /dev/rl_buttonmash/root /mnt/temp

to mount the logical volume that shows up as rl_buttonmash-root. While I did explore for a quick fix, it’s a very good thing when each side of a dual booted machine can repair the other. Mounting a file system is a very important tool in that kit.
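
For my notes, the recovery sequence on the Debian side amounted to something like this (the volume group name is as reported on my system; yours will differ):

sudo apt install lvm2
sudo vgchange -ay rl_buttonmash    # activate the volume group so its logical volumes appear
lsblk                              # the logical volumes now show up, ready to mount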

Upon closer inspection, a contributing factor to bricking Rocky was the root account being locked. The computer booted into an emergency mode and got stuck in a loop ending with “Press ENTER to continue…” Unlocking root and checking the logs per the loop’s recommendation didn’t get me anywhere, but the command lsblk -f clued me in that I had been mounting the drive using the wrong file system type – an error soon remedied once I discovered it.
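
Lesson learned: lsblk -f prints each partition’s file system alongside its label and UUID, and fstab’s nofail option keeps a bad entry from halting boot. A safer version of the line that bricked things might have looked like this (mount point and file system hypothetical):

UUID=<from lsblk -f>  /mnt/datadrive  xfs  defaults,nofail  0  2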

Project Impossible

The move hardly fixed anything, as I had hoped it would. I kept getting NFS-related errors when trying to run the pod, even after moving to a new mountpoint I’d never touched with NFS automounts. I even tried mounting the volumes using hard links pointed at the mounted “data drive,” and I still couldn’t get a working Nextcloud instance. Somewhere in my shambling among the apparently limited content available on this topic online, I found the following warning in Oracle’s Podman documentation:

Caution: When containers are run by users without root permissions, Podman lacks the necessary permissions to access network shares and mounted volumes. If you intend to run containers as a standard user, only configure directory locations on local file systems [1].

Rootless Podman lacks network share permissions. OK, so NFS is out unless I can selectively give Podman network permission without going full root. Until then, Podman is limited to the local disk, and if I’m understanding this warning correctly, mounted drives are also off the table. My plans for a Photo Trunk upgrade may be grounded indefinitely, and with ButtonMash’s Rocky drive being only 60 GB, I’m not looking to burden it with anything resembling bulk storage.

Takeaway

The next logical step would be to rebuild the project on a computer with more storage. Barring a full makeover of ButtonMash, I do have my Red Laptop as an auxiliary server. I made a new account, but in all reality, this inspiration came after my research cutoff. It’s a project for another week once again.

Final Question

My project directory is messy with scripts to the point where I started a README file. Have you ever had a project so involved as to need one?

I look forward to hearing about it in the comments below or on my Socials.

Works Cited

[1] Oracle, “Configuring Storage for Podman,” oracle.com, [Online]. Available: https://docs.oracle.com/en/operating-systems/oracle-linux/podman/podman-ConfiguringStorageforPodman.html. [Accessed July 27, 2023].

I Survived Self-Hosting a Wiki With Podman!

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am setting up not one, but two personal wikis on my home network. Let’s get started!

A Personal Wiki

Wikis are the reference material of choice for the casual researcher in this day and age. The content of the subject encyclopedia is turbocharged by the power of the hyperlink when compared to a volume/page reference that can take minutes to weeks to “load” depending on circumstantial accessibility. Community contributions allow for information to be updated in a timely manner, while built-in version control helps admins quickly repair sabotage.

This technology can easily be deployed to a closed-off environment for personal, group, or enterprise use. I know I could use one to organize my role play games, and my sister is after one to help with her writing. My goal for this month’s large project is to get both these wikis operational within our home network.

Wiki Planning

The first list of open source wiki software I found showed Wiki.js as supporting deployment on Docker. If my Rocky Linux 8 experience with ButtonMash has taught me anything, it’s that OCI containers are good for easy cleanup of botched installations, though challenges can arise when using Podman instead of Docker.

I spent a day studying Wiki.js off and on. My basic understanding is that you need three things for a wiki: the web server, the database, and the wiki software itself. I already understand a bit about the relationship between website and web server. Database to website is a relationship similar to website to browser: the database is an independent process that serves data, which the website garnishes before presenting it to a browser. It’s even possible for multiple websites to share a common database. While Wiki.js currently supports a few SQL (Structured Query Language) databases, PostgreSQL is the only one it will support in future versions.
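
To illustrate that sharing point with entirely made-up names: two wikis could each keep their own database on one PostgreSQL server, differing only in their connection strings:

postgres://rp_wiki:secret@goldenoaklibry:5432/rp_wiki
postgres://writing_wiki:secret@goldenoaklibry:5432/writing_wiki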

Sparing a thought for my photo trunk project, I believe a wiki has potential for distribution once we learn more about how to use one. ButtonMash is configured to use the scanner on its Debian install, though, not the Rocky 8 one, so I’ll need a different machine to host. GoldenOakLibry comes to mind, as its primary function is to host and serve files.

My First Operable Prototype Wiki

GoldenOakLibry is a Network Attached Storage (NAS) unit by Synology running a custom version of Linux called Disk Station Manager. I found MediaWiki, the wiki software powering Wikipedia.org, in its package manager and chose the easy route. For the most part, I did not know what I was doing – just that I was glad I had re-enabled the old Vaultwarden container to store the new passwords I made as I passed through Password Purgatory: database, wiki root, wiki admin (a user), database user – and I had to name them all before I understood what each one did or how many more were needed.

There was a slight snag when the wiki wanted the database password and I wanted a new one, but someone blissfully using the same password for everything wouldn’t have noticed. A less tech-savvy individual wouldn’t have thought to look for where to copy the wiki’s configuration file via command line. Once I figured that out though, I landed on a fresh wiki.

The snag that caught me was the mission-critical “What You See Is What You Get” editor. Whenever I tried saving changes made with it, it returned “[<RANDOM HEX NUMBER>] Caught exception of type Error.” A help topic on MediaWiki.org [1] reported fixing the same error by installing a package called php7-zlib. That package is not in the Synology-approved repository, and I found no other package managers I’m familiar with when I connected over SSH. That’s… understandable, I suppose. The product is aimed at homes and small businesses too small for dedicated IT, after all.

An Alternative MediaWiki Host

A couple of weeks ago, I had the misfortune of breaking one of the hooks securing the bezel around my laptop’s screen. Without it, I have to be extremely careful opening and closing the lid. I’m in the market for a new laptop, but in the meantime, the machine’s mind, as it were, is intact; I just can’t use it for computing anywhere I like anymore.

I learned a lot from my first successful prototype wiki. The database-website distinction and multi-site databases come to mind as relevant to my use case. I’m imagining a system where I run each website in an OCI container with Podman on my laptop, and they reach out to a database on GoldenOakLibry for content.

…Podman isn’t in the Debian 10 repositories. There is a way to install it that involves a lot of hullabaloo, but https://pkgs.org/ says it is in Debian 11, and I’ve had the computer upgrade bug as of late. My recent experience upgrading Mint primed me to locate a tutorial and upgrade to Debian 11. The process was the same (Timeshift, shift repositories, upgrade), just a bit less automated [2]. I took the opportunity to clean up after a failed project or two that involved repositories, though I think I ran into issues with Lutris’s repository GPG key (it updated later, so I’m not sure). I’m leaving it for now.
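
If memory serves, the manual version amounts to something like this after the Timeshift snapshot (the security suite was also renamed in Bullseye, so that sources.list line needs hand-editing):

sudo sed -i 's/buster/bullseye/g' /etc/apt/sources.list
# security entry becomes: deb http://security.debian.org/debian-security bullseye-security main
sudo apt update
sudo apt full-upgrade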

The packages podman, cockpit, and cockpit-podman went on easily. Getting a static IP for the laptop was another story. Its official position within the house is under the TV, out of range of any free Ethernet cables we have laying about. After a few hours trying to understand how its Wi-Fi is even connected, I chose to move it next to ButtonMash and configure a static IP that way.

I started and enabled Cockpit with systemctl. It complained without a proper config file, but a browser on another computer made it to laptop’s Cockpit login screen. I told ButtonMash to link Cockpits, and it gave me a command I’ve been looking for for years.

ssh-keyscan -t ecdsa-sha2-nistp256 localhost | ssh-keygen -lf -

Admittedly, this only hints at a formula, but I saved it to a special directory on GoldenOakLibry anyway.
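
Generalizing the formula –hostname being the placeholder– gives a way to print any machine’s host key fingerprint for checking against what SSH reports on first connect:

ssh-keyscan -t ecdsa-sha2-nistp256 <hostname> | ssh-keygen -lf -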

My Second Operable Prototype Wiki

With a more malleable host than GoldenOakLibry that wasn’t ButtonMash, I scrapped what I could of my first setup and started over. MediaWiki lists four packages as dependencies, and I removed three of them that related directly to serving web pages. MariaDB 10 stayed because I know for sure that it is compatible.

Unlike my experience with Rocky Linux 8, Podman on Debian 11 did not come with any unqualified registries configured, so I was getting fast searches with no results when pulling an image in Cockpit. I took a break for Sabbath, even though I felt I could keep the progress coming. When I got back, I almost immediately found a tutorial recommending a couple of Red Hat container registries to add in addition to docker.io [3]. I spotted registry.centos.org in ButtonMash’s registries.conf; given the warnings in the file headers about who you trust, I removed it over the slim chance it gets compromised in the future. Worst case scenario: I have to re-add it later.
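
The relevant setting lives in /etc/containers/registries.conf; mine ended up with a line along these lines (exact registry list per the tutorial, so treat this as a sketch):

unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io", "docker.io"]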

Acquiring docker.io’s official image was easy next to telling MariaDB to let it in. I spent around seven hours inching through assorted tutorials tangential to setting up MediaWiki in a Podman container with [important keywords here:] remote access to a MariaDB database on a Synology device. It was slow going – I could have written a post about this paragraph alone – but I learned enough to understand the provided instructions (key tutorial: [4]). I braved Vim to write a needed config file and learned about the MySQL CLI client to make a pseudo-root account. And of course this was after locking things down to the static IP addresses I set up earlier.
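
The gist –with every name, address, and database here invented– was to let MariaDB listen beyond loopback (a bind-address setting in its config) and then create an account allowed to connect from the container host’s IP:

mysql -u root -p
# then, at the MariaDB prompt:
CREATE USER 'wikiuser'@'10.0.1.5' IDENTIFIED BY 'SomethingUnder80Characters';
GRANT ALL PRIVILEGES ON wikidb.* TO 'wikiuser'@'10.0.1.5';
FLUSH PRIVILEGES;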

Once MediaWiki was happy with its access to MariaDB, setup was similar to my first time, though I paid closer attention this time around and included all the editors – skipping them was the mistake that sent me on this side quest in the first place. The containerized setup will still come in handy, so it was not all for nothing. As a final, problematic sendoff, MediaWiki’s setup file, LocalSettings.php, remembers the port number it was installed to: future wiki installations will happen in the containers they’re meant to run in, not some baseline install I’ll be keeping around.

It was cause for celebration when I made the first edit and it stuck.

Project Notes

Given the right circumstances, I would have to say it’s possible for about anyone to bumble his or her way into a working self-hosted wiki on a Synology NAS, as I sort of did. Don’t get me wrong: even this is not an impatient beginner’s project! This week I learned that databases stand alongside websites, not inside them – a very important distinction for a sysadmin to know.

I’ve seen the Cockpit functionality to switch hosts since first installing Rocky 8 on ButtonMash. It was a pleasant surprise to find it worked over SSH and had a ready command for generating SSH host key fingerprints. DSM sadly does not have that functionality.

My opinion of Synology’s DSM began strong after a slow start, but it’s been fading. Stray one command outside their intended use case and it has DON’T TOUCH THAT! signs waiting everywhere. It’s still production grade, and that I can respect. I just won’t be asking for a similar system in the future.

The database password was extremely difficult to get right. No errors were ever thrown when entering 100+ characters of gibberish from Bitwarden, but 79 appears to be the maximum password length MySQL can swallow.

Takeaway

My progress this project does not represent a production-ready environment. I fully expect to have to tweak things before I have each wiki sequestered to its own user while still running happily. Website administration will be a whole other matter to conquer, but that is an exercise for another week.

Final Question

What kind of information might you organize with a wiki?

Works Cited

[1] Winel10, “Caught exception of type Error when saving changes in VisualEditor,” MediaWiki.org, Feb. 4, 2019 and June 8, 2022. [Online]. Available: https://www.mediawiki.org/wiki/Topic:Uuk96xjvh0ukaci2. [Accessed June 27, 2022].

[2] AM, “How to upgrade to Debian 11 from Debian 10,” AtechTown.com, 2022. [Online]. Available: https://www.atechtown.com/upgrade-debian-10-to-debian-11/. [Accessed June 27, 2022].

[3] J. Arthur, “How to Install Podman on Debian 11,” LinOxide.com, Sept. 20, 2021. [Online]. Available: https://linoxide.com/install-podman-on-debian/. [Accessed June 27, 2022].

[4] TechNotes, “How to run Mariadb in Docker and Connect Remotely,” YouTube.com, Dec. 15, 2020. [Online]. Available: https://youtu.be/OabTOPOU2RU. [Accessed June 27, 2022].

I Switched My Operations to Caddy Web Server

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am rebuilding my home server, Button Mash, from the operating system up. Let’s get started!

Caddy Over Nginx

I spent well over a month obsessing over Nginx, so why would I start over now? As far as I am concerned, Caddy is the piece of software for my use case. While I am sure Nginx is great at what it does, I keep slamming into its learning curve – especially when integrating Let’s Encrypt, a certificate authority used to automate SSL encryption (HTTPS/the green padlock). Caddy builds that functionality in while still doing everything I wanted Nginx for.

The official Caddy install instructions [1] for Fedora, Red Hat, and CentOS systems are as follows:

$ dnf install 'dnf-command(copr)'
$ dnf copr enable @caddy/caddy
$ dnf install caddy

First of all, new command: copr. Background research time! COPR (Cool Other Package Repositories) is a Fedora project I feel comfortable comparing to the Arch User Repository (AUR) or Personal Package Archive (PPA): it lets users make their own software repositories.

Installation went smoothly. When I enabled the repository, I had to accept a GPG key that wasn’t mentioned in the instructions at all. From a user’s point of view, GPG keys appear to fill a purpose similar to SSH keys: special numbers that use math to prove you are still you in case you get lost.

Caddy exposes an HTTP interface (a REST API – don’t ask, I don’t understand it myself) on the computer’s internal network, known as loopback or localhost, on port 2019. Caddy additionally serves everything over HTTPS by default. If it cannot convince Let’s Encrypt to give it a security certificate, it will sign one itself and tell the operating system to trust it. In other words, if I were not running ButtonMash headless (without a graphical interface), I’d be able to try connecting to localhost:2019 with a favorite browser, like at least one of the limited supply of Caddy tutorials did.
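
Headless, the nearest equivalent I’ve found to pointing a browser at it is curl:

curl http://localhost:2019/config/    # dumps the current configuration as JSON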

IP Range Transplant

I should have just done my experimentation on DerpyChips or something. Instead, I pressed on with trying to point a family-owned domain name at ButtonMash. This side adventure sprouted into last week’s post. In short: ButtonMash’s static IP was in conflict with what my ISP-provided equipment kept trying to assign it, resulting in an estimated 50% downtime from a confused router. Upgrading to the next gateway may have allowed us to free up the IP range for the gaming router’s use, but it’s not out for our area yet. My father and I switched our network connections over to a “gaming router” we had laying about and enabled bridge mode on the gateway to supposedly disable its router part. I have my doubts about how it’s actually implemented.

Most of our computers gladly accepted the new IP range, but GoldenOakLibry and ButtonMash –having static IPs– were holdouts. I temporarily reactivated a few lines of configuration on my laptop to set a static IP so I could talk with them directly and manually transfer them over to the new IP range, breaking NFS shares and Vaultwarden respectively.

In the confusion, ButtonMash lost its DNS settings; those were easy enough to fix by copying a config line to point those requests to the router. GoldenOakLibry took a bit longer to figure out because the NFS shares themselves had to accept traffic from the new IP range with settings buried deep within the web interface. Once that was sorted, I had to adjust the .mount files in or around /etc/systemd/system on several computers. Editing note: While trying to upload, I found I could not access GoldenOakLibry on at least a couple of my machines. Note 2: I had to change the DHCP settings to the new IP range on my Raspberry Pi reverse Wi-Fi router. Systemd on both goofed systems needed a “swift kick” to fix them.

sudo systemctl start <mount-path-with-hyphens>.mount
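
When the kick alone doesn’t take after editing unit files, systemd usually wants to re-read them first:

sudo systemctl daemon-reload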

Repairs Incomplete

That left Vaultwarden. I was already in an it’s-broken-fix-it-properly mentality from the modem/router spinoff project. I got as far as briefly forwarding the needed ports for an incompletely configured Caddy to respond with an error message before deciding I wanted to ensure Bitwarden was locked down tightly before exposing it to the Internet. That wasn’t happening without learning Caddy’s reverse proxy, as I had put Vaultwarden exclusively onto a loopback port.

Speaking of loopback, I found the official Caddy tutorials lacking. They –like many others after them– never consider a pupil with a headless server. I have not yet figured out how to properly convince my other computers to trust Caddy’s self-signed certificates and open up the administration endpoint. That will come in another post. I did get Caddy to serve content over HTTP by listing addresses as http://<LAN address> and to confine the annoying log to a file, but Bitwarden/Vaultwarden won’t let me log in over plain HTTP, even on a trusted network.
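
For reference, the shape of the Caddyfile I’m working toward is short; the hostname and port here are placeholders, not my real setup:

vaultwarden.example.com {
	reverse_proxy 127.0.0.1:8080
}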

As far as I can tell, the administration API on port 2019 does not serve a normal web page. Despite my efforts, the most access I have gotten to it was for it to error out with “host not allowed.” I haven’t made total sense of it yet. I recognize some of the jargon being used, but its exact mechanics are beyond me for the time being.

Takeaway

Caddy is a powerful tool. The documentation is aesthetically presented and easy enough to understand if you aren’t skimming. But you will have a much better time teaching yourself when you aren’t trying to learn it over the network like I did.

Final Question

Do you know Caddy? I can tell I’m close, but I can’t know for sure if I’m there in terms of the HTTP API and just don’t recognize it yet. I look forward to hearing from you in the comments below or on my Discord server.

Works Cited

[1] “Install,” Caddy Documentation. [Online]. Available: https://caddyserver.com/docs/install. [Accessed: June 6, 2022].

Installing NUT UPS Driver on Rocky Linux 8

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am installing the Network UPS Tool on my Rocky Linux 8 Button Mash server. Let’s get started!

A Package Exists

In a previous push on my Button Mash server, I talked about getting an Uninterruptible Power Supply (UPS) so ButtonMash could shut itself down in case of a power failure. If memory serves, I also talked about an open source driver called Network UPS Tools (NUT). At the time, I was under the impression it was exclusively available via source code and I would have to compile it to make it work.

I’ve suffered no fewer than four power outages since installing the UPS. A couple of long ones while everyone was in bed would have outlasted the UPS’s endurance had someone not happened to be awake each time to gracefully shut things down manually. I want the process automated.

And so I started the grind. The first thing the installation instructions tell me is to check for a package. Sign me up!

dnf search nut

I got several results, but with such a simple package name, the letters n-u-t turned up many false positives. NUT’s companion packages come with names of the form ‘nut-*’, so I often filtered with ‘nut-’. My refined searches came back empty.

Installing EPEL and NUT

If the backbone of a distribution is its package manager, repositories would be its ribs. Not every piece of software gets compiled and packaged for every architecture/package manager. I get that. It was a lesson I had to learn last time I played with optimizing MicroCore Linux and why I’m going with Arch if there ever is a next time.

When I learned NUT was widely available in package form, I went looking again with Rocky Linux’s dnf: still nothing. Debian has a nice package viewer [1], so I looked for something similar for Red Hat distros. I wanted to be sure I wasn’t missing something before concluding that no package existed for me. One exists, but I’d need to make an account. However, I found something even better for my purposes.

pkgs.org[2] is a website that lists packages organized by several different major distributions. I was quickly able to find NUT in the CentOS 8 section for the Intel CPU architecture, but not anywhere under Rocky Linux.

A closer look after hours of confusion introduced me to the EPEL repository (Extra Packages for Enterprise Linux). Apparently, it’s held in high regard among the Red Hat branch; many enterprise Linux users consider it almost mandatory to offset the smaller offering of the default repositories. I was uneasy about it at first because it showed up under CentOS, the now-discontinued RHEL downstream, but EPEL is maintained by the Fedora community, which isn’t going anywhere for the foreseeable future: I’m calling it safe to use.

sudo dnf install epel-release
dnf search nut

NUT was then as simple to install as any other program from a repository.
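
Concretely, the last step was no more than:

sudo dnf install nut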

Side Project

Podman pranks again! While testing my Bitwarden login from my laptop, I got myself permanently logged out. I traced the problem back to my Podman container on ButtonMash getting corrupted during one of those power outages from earlier. I sent a discouraging error off to the search engine and found my exact issue on the Podman GitHub [3]. I wasn’t happy with the explanation, but it was the best one I found: systemd didn’t like an under-privileged user doing things without at least a recent login, so it messed with Vaultwarden’s Podman container. The messed-up container had to be forcefully deleted and remade. I also needed to remember to specify https:// when looking for the server via browser. To make sure it doesn’t happen again, I followed a piece of advice found later in the discussion and permitted the login to linger.
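
That advice boils down to loginctl’s linger feature, which lets a user’s services –rootless containers included– keep running without an active session; the account name here is a placeholder:

sudo loginctl enable-linger podmanuser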

Takeaway

I honestly expected this week’s progress to take at least a month. When I first looked into NUT, all I saw was source code ready to download and compile, and honestly, I’m having trouble getting excited about mastering the art of compiling other people’s code. If there’s a way to install via a compatible repository, I’m all for it.

I am especially thankful for pkgs.org [2]. They helped me reduce my problem to one I’ve at least blindly followed a tutorial for before: adding a repository. You typically won’t find the full, non-free version of Chrome on Linux, so when I was setting up Mint for my father, I had to explicitly add one.

While NUT may be installed, configuration is not happening this week if I expect to understand my system when I’m done. I blitzed the first expected month of work and only stopped because the next bit is so intimidating. Here’s to a quick understanding within the next month.

Final Question

NUT has proved difficult to locate assistance for, as I haven’t figured out how to use their internal support system. Do you have any idea where I can find support for when I need it?

Works Cited

[1] Debian, “Packages,” Debian, July 2019. [Online]. Available: https://packages.debian.org. [Accessed: Jan. 10, 2022].

[2] M. Ulianytskyi, “Packages for Linux and Unix,” pkgs.org, 2009-2022. [Online]. Available: https://pkgs.org/. [Accessed: Jan. 10, 2022].

[3] balamuruganravi, “rootless podman ERRO[0000] error joining network namespace for container #6800,” github.com, Jun. 2020. [Online]. Available: https://github.com/containers/podman/issues/6800. [Accessed: Jan. 10, 2022].

ButtonMash’s Solid Foundation on Rocky Linux

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am still working on my Rocky Linux server. Let’s get started!

Project Selection

One would think it shouldn’t take a month to set up a server, but the vast bulk of that is research. What all do I want the server to do? What services do I need to set up to include it? When I know more than one way to do something, which way do I want to do it? The questions don’t end until work has progressed beyond the point of answering differently.

My goal for today is to get a few things running: I want to mount the GoldenOakLibry NFS server. I want to update-grub so I can properly dual boot with Debian. I want to install BitWarden. These three things are probably the most important end-goal tasks remaining for configuring ButtonMash’s Rocky install.

Package Managers

Before I can really work on my target goals, I need to know some of the basic specifics. Every major branch has its own compatible package managers. Debian has DPKG and Apt (Snap for the Ubuntu sub-family) while Arch has Pacman and AUR. Wrappers and cross-compatibility tools exist as a zoo of possibilities that will not be fully sorted out here, today.

My first impression as I research the Red Hat branch’s solution is the striking parallels to Debian, though it is also experiencing a stir. RPM (Red Hat Package Manager) is like DPKG in that it works directly with individual packages. YUM (Yellow dog Updater, Modified) was the package manager along the lines of Apt I’ve been hearing about in association with the branch. It has since been replaced by DNF (DaNdiFied YUM) for installing Package X and everything Package X needs to run (called “resolving dependencies”). Both YUM and DNF are present on my install, though.

Cockpit

I’ve had a chance to look over this web interface that came with Rocky Linux. By default, there doesn’t appear to be much to it after logging in beyond information readouts, an interactive firewall, and most importantly: an in-browser terminal. There appears to be a whole ecosystem to learn about, but it’s beyond me for now. I will want to look deeper into this subject when I move in to disable password authentication over the network.

Note about the terminal: it’s a little quirky from sharing its inputs with the browser. Nano’s save command also tells FireFox to “Open” and copy-paste commands don’t always work the same.

NFS Mount

From experience, I know that NFS is a royal pain to learn how to set up. On top of that, I know of at least two ways to automount network drives: during boot with fstab, and dynamically with systemd. Mounting it with fstab is annoying on my laptop because it halts boot for a minute and a half before giving up if GoldenOak is unreachable. More annoying is that this appears to be the more well documented method between the two. For an always-on server, though, it may not be a concern.

Not helping systemd’s case is/are the additional way/ways I’m discovering to set its automount functionality up. I don’t even know the proper name for the method I’ve used before – just that I didn’t mess with /etc/fstab whereas another systemd method does. It is a great challenge finding a source that compares more than a single mounting method. The good news is that aside from installation, I should be able to disregard what distro the tutorial was intended for.

While researching this section, I rediscovered autofs and saw mention of still other automount methods. I’m avoiding autofs because the more I read about it, the more complex it appears. In this instance, it would behoove me to just leave a line in /etc/fstab, because I don’t expect to be booting this server outside the context of the GoldenOak NAS. But as this is more or less the centerpiece of my home’s network, I’m going with systemd mount files, as per the blog by Ray Lyon I referenced last February when I first learned about it. I’ll leave a link to his post in my Works Cited [1].

NFS Automount is tricky stuff, but each time I study it, I retain a little more. I can barely remember how to mount a share manually – let alone configure systemd automounts. It took me several days to find a copy of the files I needed, even after looking back at my above mentioned post from February[2]. My best guess is that I got lost in my own filesystem. I’m taking notes and organizing them in my home directory on this new install.
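
For future me: Ray Lyon’s method [1] boils down to a pair of unit files named after the mount point. Everything below is a sketch with made-up addresses and paths:

# /etc/systemd/system/mnt-goldenoak.mount  (file name must match the Where= path)
[Unit]
Description=GoldenOakLibry NFS share

[Mount]
What=10.0.1.2:/volume1/share
Where=/mnt/goldenoak
Type=nfs

# /etc/systemd/system/mnt-goldenoak.automount
[Unit]
Description=Automount for the GoldenOakLibry NFS share

[Automount]
Where=/mnt/goldenoak

[Install]
WantedBy=multi-user.target

# enable the .automount (not the .mount) so the share mounts on first access:
sudo systemctl enable --now mnt-goldenoak.automount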

Update-Grub

When I installed Rocky Linux, I was all nice and safe by not letting it see any drives it wasn’t installing over, but the host machine still has a job to do on the photo trunk project; I need it to dual boot. I read up on a command called update-grub I could just run once everything was installed and physically reconnected. First of all, update-grub is a script, and second of all, it’s notoriously absent.

A variety of help topics exist on what command to run on RHEL instead of update-grub. From what I can tell, it’s pretty universally present on Debian-based systems and when I checked Manjaro (Arch family) just now, it was there too.

Update-grub itself is pretty simple. It’s three lines long and serves as an easy-to-remember proxy command for actually updating your GRUB boot loader. The exact command may differ between computers depending on whether they boot with BIOS or its newer equivalent, UEFI. I assume it is generally generated during package installation.
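
On Debian-family systems, the script is essentially this wrapper:

#!/bin/sh
set -e
exec grub-mkconfig -o /boot/grub/grub.cfg "$@"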

Once I had my bearings, it was fairly easy to update GRUB on my own. I found my configuration file at /boot/grub2/grub.cfg because I am using BIOS. An effectively empty directory stump existed for the UEFI branch, cluing me in that this operation is one you should understand before copy-pasting into a terminal. A StackExchange thread [3] has several individual explanations, including reference to what I take to be a catch-all I am not using.
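
In my BIOS case, the regenerate step reduces to a single command (UEFI systems point at a different path, so don’t copy this blindly):

sudo grub2-mkconfig -o /boot/grub2/grub.cfg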

So… I go to verify everything is working, and it’s not. A simple reboot loaded Rocky’s GRUB, but the Debian kernel refused to load over the USB 3 PCI card. So much for that idea. I moved the Debian drive to a motherboard USB port; BIOS found it and loaded Debian’s GRUB, which doesn’t know about Rocky Linux. I tried running update-grub in Debian and… it didn’t work. I wasn’t looking to spend even more time on this part of the project, so after confirming that Rocky’s GRUB could boot Debian, I got into BIOS and told it to prefer the internal Rocky drive over anything on USB.

BitWarden False Alarm

I’m super-excited about putting my self-hosted BitWarden server back up. I’ve already started researching, but the topic still feels like it’s expanding when I need to be getting ready for publishing this already lengthy post full of amazing progress. BitWarden will need to wait until I can better teach myself how to properly take care of it.

Takeaway

The Red Hat branch of Linux is in a notable state of flux. Key fundamental elements of the family like CentOS and YUM are everywhere in old tutorials, and that is bound to make for a frustrating time trying to learn Red Hat for a while to come – especially if you’re new to Linux. Here, more than anywhere else, learning the history of the branch is vital to teaching yourself how to sysadmin.

Side Project

A while ago, I thought Derpy’s RAM was failing because Kerbal Space Program kept crashing the whole system. I’ve been running the three 4 GB sticks on my Manjaro workstation for a month or two, and they appear fine. In the meantime, my father ordered up a pair of 8 GB sticks. This week, I installed them, displacing one of the 4 GB sticks. Passive testing will now commence.

Final Question

Have you ever had a project take a discouragingly large amount of research time then suddenly come into focus in a single day?

Works Cited

[1] R. Lyon, “On-Demand NFS and Samba Connections in Linux with Systemd Automount,” Ray Against the Machine, Oct. 7, 2020. (Edited Aug. 8, 2021). [Online]. Available: https://rayagainstthemachine.net/linux%20administration/systemd-automount/. [Accessed Nov. 7, 2021].

[2] Shadow_8472, “Stabilizing Derpy Chips at Last,” Let’s Build Robotics With Shadow8472, Feb. 22, 2021. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2021/02/22/stabilizing-derpy-chips-at-last/. [Accessed Nov. 7, 2021].

[3] “Equivalent of update-grub for RHEL/Fedora/CentOS systems,” StackExchange, Aug. 26, 2014 - Oct. 10, 2021. [Online]. Available: https://unix.stackexchange.com/questions/152222/equivalent-of-update-grub-for-rhel-fedora-centos-systems. [Accessed Nov. 7, 2021].

Squashing All My Computers into One: Part 1

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am centralizing storage across my several computers. Let’s get started!

Computer Drift

One of my favorite things about Linux is exploring the possibility space of designs for what a computer operating system can look like. But maintaining multiple workstations can and will leave you wondering where that one picture is saved or whatever happened to that document you know you saved under blog drafts. I have no fewer than three computers –four or more if you count my laptop and ButtonMash as separate given their common install and/or my dual booted machines– so it’s high time I consolidated my computers’ respective identities to reflect me as a single user, given my access to GoldenOakLibry, the family network storage.

Project Overview

One would think the process would be as simple as dumping everything in a central location and spreading it all back around, garbage or not. Alas, subtle differences in installed programs or versions of programs make this approach unsuitable.

My best bet will be to think backwards. Not everything will be shuffled around; directories supporting install-specific programs should stay on their specific computer. Backups for such files are fine, but I can accidentally damage other instances if I’m not careful. I’ll need to tailor a number of Rsync commands and schedule them to run automatically with Cron. As this topic is basically day-of filler while I work on a larger project, the full job is a little out of my scope for today.

My goal for today is to make a backup I can operate manually and later automate. If things go well, I can see what I can do about Rsync, but Cron will need to wait for another day.
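
A first manual pass might look like the following –host and paths invented for illustration– with a dry run before the real copy and a cron line saved for later:

rsync -avn ~/Documents/ shadow@goldenoaklibry:/volume1/backup/derpy/Documents/    # -n: dry run, preview only
rsync -av ~/Documents/ shadow@goldenoaklibry:/volume1/backup/derpy/Documents/
# eventual crontab entry, daily at 02:00:
# 0 2 * * * rsync -a /home/shadow/Documents/ shadow@goldenoaklibry:/volume1/backup/derpy/Documents/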

GUI File Transfer

The terminal is an important skill to have when managing a Linux ecosystem of multiple computers. However, there are some things, such as managing picture files, that inherently work better over a graphical file manager. While preparing for writing today, I noticed places like my respective Downloads directories are quite messy after a few years of Linux.

I wasn’t the biggest fan of jumping workstations all day, so I searched for a way to have the Dolphin file manager operate over SSH. The first result to catch my attention was called FISH (Files transferred over SHell protocol). SFTP (SSH File Transfer Protocol) appears to fill a similar computing niche. Each would be an interesting research topic, but for my purposes today, they both work equally well as long as SSH is configured to use authentication keys.
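
In Dolphin’s address bar, both take URL form; with SSH keys in place, either of these (hostname and user made up) opens a remote home directory like a local folder:

fish://shadow@buttonmash/home/shadow
sftp://shadow@buttonmash/home/shadow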

Derpy’s Backup

The easiest place to start would be my DerpyChips workstation as that’s the one I’m working from starting off. Documents was fairly easy to clean out. I had some Blog drafts and some other stuff I sorted into proper places on the drive.

The dreaded Downloads directory was relatively tame on Derpy. Nevertheless, I still spotted elements from at least four distinct projects ranging from incomplete to long done or abandoned. I even found an instance of GraalVM I may have been running straight from Downloads. My goal is an empty directory. If it will update before I need it again or I won’t need it ever again, it’s gone. If I’m unsure, I’ll find another home for it. I similarly emptied out any directory intended for file storage. Pictures was simple this time, but I expect I’ll need a more elaborate structure once I start trying to organize additional computers’ worth of memories.

ButtonMash’s Backups (Debian and MineOS)

Things were a little more interesting when I started moving things over from ButtonMash. At first, I set a Dolphin instance up with ButtonMash’s home on the left and its view of GoldenOak on the right, but when I got a warning about not being able to undo a delete, I thought twice. I did have a deletion accident last phase and used an undo action, so I put Derpy’s view of it on the right instead.

I was right about needing to take pictures slowly on this one. Some pictures fit in better with my blog, while memes I felt worth saving went in their own directory within the more general Pictures one. But I don’t need copies of everything everywhere if I can just access the drive. Possibly just my favorite desktop and my avatar, if that. I made a directory for those two and any others I may want to spread around.

File management over SFTP understandably has limitations. Not all files can be directly accessed –particularly audio files– and previews don’t render for some graphical files. When I try to preview an archive, it must first be copied over as a temp file.

I had another accident while moving some old Python projects over. For whatever reason –be it permissions or simple corruption– some files didn’t copy over cleanly. I fiddled with it a little more, then gave up and deleted both source and destination, as I expect another copy was made when I cloned my laptop to its internal drive.

Thanks to this blunder, though, I was more careful when it came to the family’s Minecraft servers from when we were running MineOS. I encountered an error while copying, so I reverted to rsync directly from ButtonMash. Even then, I had to elevate permissions with sudo to finish the job.

Takeaway

I’d like to say I’m somewhere around halfway with my goal for today, but if I am to take this task seriously, I’ll need to go back further and reintegrate any old backups I may have laying around. By that count, I have at least eight computers to consider – more if I count Raspberry Pi’s and any recursive backups I may find.

In some ways, this project is not unlike my experience with synchronizing Steam games manually, but on a larger scale. I’m having to re-think my structure for what I want backed up where as well as how I’m planning to access it. This is not a simple grab and dump.

Final Question

Have you ever made a comprehensive and accessible backup of all your computers, present and surviving?

Stabilizing Derpy Chips at Last

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m addressing an annoying trio of issues I’ve had with Derpy Chips since I installed PopOS on it. Let’s get started!

The Problems

I have a number of gripes to myself about Derpy. I frequently have to stare at an ugly, gray login screen for up to a minute and a half before I can select a user account. Tabs sometimes crash in FireFox, but only while I’m using it. Discord sometimes blinks, and I lose any posts in progress – possibly representing minutes of work.

Additionally, my mother uses a separate account to play Among Us on Derpy, and I have my account set up with a left-handed mouse she can’t use easily. Unfortunately, Derpy tends to crash whenever I try switching users, so I’ve been using a full power cycle. And that means we need another long, featureless login screen before the actual login. Some day, I really want to figure out how to change a login screen. Aside from how long this one takes, I’d much rather use the KDE one over GNOME 3.

The Plan

Of the three issues I’m setting out to address, long login is the most reproducible. Fickle FireFox and Ditzy Discord happen often enough to make Derpy frustrating to use as a daily driver, but sporadically enough to resist debugging on-demand. So I am planning on spending up to the full week on Derpy ready to catch the errors when they happen.

Going off what I have to start with, I’m assuming my FireFox and Discord issues are related. Both use the Internet for their every function, and the glitching tends to happen at times when a packet is logically being received: for FireFox, when a page is either loading or reloading, and Discord when someone is typing or has sent a post. If I had to hazard a guess, I would have to say Lengthy Login is directly caused by my NFS being mounted in /etc/fstab, and I’m not sure if there’s anything to be done about it except working the surrounding issues.

For this week, I am reaching out to the Engineer Man Discord and a Mattermost community I found for PopOS. I don’t know much about the latter, but I heard the PopOS dev team frequents that forum.

The Research

I started by posting about my issues. Help was super-slow, and I often got buried. I don’t remember any self-research making any sense. Anyone helping me in the PopOS support chat seemed obsessed with getting me to address Blank Login first, even though it was the least annoying of my three chosen issues – or it would have been, if the other stuff hadn’t kept bugging out on me.

Someone gave me a journalctl command to check my logs, and I did so shortly after a target glitch. It came back with a segfault error of some kind. I added this to my help thread and humored them about disabling my NFS fstab lines.
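
I didn’t save the exact command, but something along these lines pulls recent high-priority log entries:

journalctl -b -p err    # errors and worse from the current boot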

RAM or Motherboard?

When researching further for myself, I came across a number of topics I didn’t understand. I didn’t make any progress until someone told me to try memtest86+. What a headache! I installed the package, but had to dip into GRUB settings so I could boot into the tool. Even then, it kept crashing whenever I tried to run it with more than one stick of RAM at a time, as in the whole thing froze within 8 seconds save for a blinking + sign as part of the title card.

I was hoping at this point it was just a matter of reseating RAM. Best case: something was in there and just needed to be cleaned off. Worst case: a slot on the motherboard might have gone bad, meaning repair might be one of tedious, expensive, or impossible.

I tried finding the manual of Derpy’s motherboard, but the closest was the one for my personal motherboard, a similar model. Both come with 4 slots of RAM: two blue, two black. I used the first blue slot to make sure each stick of RAM passed one minute of testing, followed by a full pass of testing, which typically took between 20 and 30 minutes. I wasn’t careful with keeping my RAM modules straight, in part because I helped clean my church while leaving a test running.

I identified the fourth stick from a previously tested one I’d mixed it up with by how it lit up the error counter, starting just past one minute in. I tried reseating it several times, with similar results: the same few bits would sometimes fail when either reading or writing. If I had more time, I would have a program note the failing addresses and see if they were the same each pass as they kept adding up.

Further testing on the motherboard involved putting a good stick of RAM into each slot. Three worked, but one of the black slots refused to boot, as did filling the other three slots. I landed with leaving one blue slot empty for a total of 12 out of 16 gigs of RAM.

NFS Automount with Systemd

I still want relatively easy access to the NAS from a cold boot. “Hard mount in fstab has quite a few downsides…” cocopop of the PopOS help thread advised me. Using the right options helps, but ‘autofs’ was preferred historically and systemd now has a feature called automounts. I thought I might as well give the latter a try. cocopop also linked a blog post On-Demand NFS and Samba Connections in Linux with Systemd Automount.

I won’t go into the details here, but I highly recommend the above linked blog. It didn’t make sense at first, but after leaving it for a day, my earlier experiences with fstab translated to this new method within the span of about an hour total. I missed an instruction where I was supposed to enable automounting once configured, but it felt almost trivial.

Results

I haven’t had any problems with Discord or FireFox since setting the defective RAM aside in the anti-static bag it came in. As a bonus, switching users works correctly now as well.

NFS mounting is now much more streamlined with systemd. While I cannot say which method would have been more challenging to learn first, the tutorial I was following made this new method feel way more intuitive, even if file locations were less obvious. I didn’t even need any funny business with escape characters or special codes denoting a space in a file share name.

Takeaway

It really should go without mention that people will only help each other with what they know. I find myself answering rookie questions all the time when I’m after help with a more difficult one. Working side by side this week on a future topic, I had such a hard question that people kept coming in with easier ones, and I ended up asking mine enough times that someone commented on the cyclic experience. The same thing kept happening with the easy part of my question about login.

Final Question

Do you ever find yourself asking a multi-part question, only to have everyone helping you with just the easiest parts you’ve almost figured out?

Family Photo Chest Part 9: NFS

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am exploring my preferred method of accessing network drives. Let’s get started!

Storage of any kind is only as good as your ability to access it. On your typical modern, end-user computer, long-term storage is typically limited to a hard drive of some kind, and possibly some sort of cloud storage solution.

On another inbound tangential subject, file transfers within my family’s home network have thus far been limited to using a USB thumb drive or bouncing files off an online host. But thumb drives are often already laden with data, and size limitations plague e-mail and chat services. SSH and SCP have helped, but they are a bit of a pain to get working smoothly.

File sharing has been around almost as long as computers could communicate. Different protocols have different strengths and weaknesses, and the best one for you can differ depending on your situation. I’m largely dealing with Linux, and NFS speaks Linux/Unix natively, or so I hear. The other easy choice would be SMB, a protocol with more overhead – and the one Microsoft customers are stuck using for file sharing unless they upgrade to Pro or Server editions. And according to data gathered over at Furhatakgun, I am drawing my own conclusion that SMB has more overhead per file than NFS.

If I would just follow a tutorial, I could have a much faster time with a lot less understanding. My target project was to back up my laptop’s home directory in preparation for migrating my drive from external to a newly installed internal drive.

I would have to say enabling NFS was easy only in the shallowest of terms. After enabling the protocol overall, I found my way over to the appropriate network share and had to resort to whitelisting my IP to mount that share (as root). And at that, I literally had no permissions to read, write, or execute that share — even as root. chmod!
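
For reference, the manual version I was fumbling toward goes roughly like this, with the server address and export path invented:

showmount -e 10.0.1.2                        # list what the NAS exports
sudo mount -t nfs 10.0.1.2:/volume1/photos /mnt/photos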

All I know is that I am on the way to understanding, but I have much to learn before I can properly report back on it. For example, I’ve read that I need to have my NAS account name match my local user name. I’ve also read some about hard vs soft mounting, and how setting it up right can minimize the chance of data corruption.

Final question: Have you ever recognized that you know something, but not well enough to teach it?

Family Photo Chest Part 8: NAS Software

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I only noticed last minute that it’s time for this month’s edition of Family Photo Chest, so I’ll probably be on the short and wordy side. Let’s get started!

While I explored around a little in the Network Attached Storage (NAS) documentation last time, I rightly came to the conclusion that this system has more features than I will ever need. I knew I would need time –days perhaps– to scout the documentation.

Fortunately, I found a YouTube channel, mydoodads, with an awesome overview of Synology’s NAS operating system, Disk Station Manager. I highly recommend his series I’ve been watching for today’s post: How to Setup and Configure a Synology NAS. Topics are broken into 6 to 12 minute videos, and more importantly to me, his audio is clean and understandable. My one complaint is that he doesn’t always act like Linux is a thing. If you’re here to follow along, just go check his videos. He’s more set up for actual instruction than I am.

In the meantime, my vision for this system was to just have a simple external hard drive I can see from whatever file browser I like, sort of like the “K drive” at my university. My first impression upon logging in over a browser and seeing a full desktop was that this was the only way to use it. Watching mydoodads’ tutorial, I learned about the Server Message Block (SMB) protocol, which looks very much like my memories of the K drive.

There are other ways to access the device, and I’m still deciding how I’m going to get everyone using it. Right now, it’s connected to my personal subnet that won’t let anyone outside look at it, and I’ve given it a static IP of 10.0.1.2. I’ll need to see if it will automatically adjust to the different netmask (I hope I’m using that word correctly).

The one thing I haven’t come across in this video series is setting up RAID, which I already did before. He did go over shared folders, but I still need to set up other basic stuff, like user accounts and groups. Part of why I was getting lost was because I found and started messing with storage pools, which appear to be for when you’re dealing with multiple logical NAS setups on the same network. I still have so much to learn.

Final Question: Have you ever used network storage? If so, did you understand it at the time?