Alternative 3D Slicing Arrangements

Good Morning from my Robotics Lab! This is Shadow_8472, and today, it’s been a busy week. Derpy needed work, but I still wanted to 3D print. Let’s get started!

New Computer Screen

A couple weeks ago, Derpy’s monitor failed. I pulled the graphics card and brought in a cathode-ray tube based monitor from the garage. Integrated graphics aren’t good enough for slicing 3D prints though.

Between bugging out from construction noise next door this week and taking evenings off to recover, a good chunk of the week passed before I made a move to switch back. My father and I took the opportunity for a deep dusting. With the graphics card physically removed, it wasn’t much more hassle to remove the outer cover… The radiator decoupled from the GPU chip at the heart of the graphics card, and we couldn’t find any thermal paste to put it back together properly.

With some electrically non-conductive paste on order, I felt my desk was overdue for a cleaning. Special thanks to my whole family for helping out in some capacity or another. I sorted the stuff into Keep, Trash, and ??????! Keep stayed with the desk, Trash got sorted for disposal, and the rest was tagged for dispersal according to where it belonged. My father and I further took the opportunity to remove my monitor shelf, dust the desktop thoroughly, and polish it.

I also addressed the cable management situation. With no electronics at Derpy’s workstation, I was free to swap out the power strip for something with a bit longer of a cord (I swiped it from the 3D printer, which had plenty of cord). As I re-assembled my desk, I was sure to tuck my cables at least somewhat out of sight.

A Light Mode Program on a Dark Mode Theme

This section through to the side project started off as the combined 3D printing corner/side project sections before it grew, at one point, to half the post.

With Derpy down for service this week, I couldn’t 3D print. Instead, I explored the option of using PrusaSlic3r on my Manjaro workstation.

My graphical package manager presented me with three versions: an out-of-date version from the official repositories lacking features I’m after, an AUR (Arch User Repository) beta version built from git, and one AUR entry with a good version number but GTK-2 in the name. I chose the GTK-2 one, and it appeared to work perfectly, save for defaulting to light mode.

There was no dark mode override in the options. The documentation said Linux versions of Slic3r hook into the global theme, which I most definitely have set to dark. No matter how much I played with KDE’s themes in System Settings, Slic3r refused to play nicely. I tried switching to the AUR version labeled git, but it didn’t even compile (two attempts).

Assistance With Diagnostics

During research, I came across some discouraging bits about themes not always working on bleeding edge systems. I was about to give up when I brought this matter up on Engineer Man’s Discord server.

Server regular localhost took notice and suggested I install lxappearance, a theme manager designed with the LXDE desktop environment in mind. It too popped up in light mode, but it exposed its own theme selector on the Widget tab. Hitting Apply didn’t have any apparent system-wide effects until I restarted the graphical package manager, which came back in my chosen dark theme.

I rebuilt the GTK-2 version of Slic3r, and it greeted me with dark mode. During the 70-minute wait, though, I researched the why of the situation and pinned the culprit to GTK. Aside from its association with GNOME, I had the hardest time piecing together its purpose with certainty. I figured it was some incompatibility with KDE, and I was half right.

It Works, but Why?

Lxappearance was actually my biggest clue. With both it and KDE’s theme manager open side by side, I noticed a button under System Settings > Global Theme > Application Style called Configure GNOME/GTK Application Style. It led to an unassuming dropdown menu titled GTK theme with a nondescript preview button. The dropdown had the same list as lxappearance.

Researching GTK yields no shortage of results, but what does GTK do? GTK actually stands for GIMP ToolKit – it originated with the GIMP image editor – though today it’s best known as GNOME’s toolkit. I know KDE doesn’t use GTK at its core, so I conducted my research looking for whatever counterpart it did use: Qt.

I ran a combined search on GTK vs Qt and learned about their shared history [1]. Qt is to KDE what GTK is to GNOME. Qt is older, but GTK was fully opened up first and became more widely adopted. GTK and Qt serve the purpose of drawing the parts of user interfaces you use but don’t think about – from buttons to save dialogs. When developers use them, they provide a unified appearance an end user can easily configure.

Side Project (Blog Site Building)

I’ve begun research into improving the presentation of this blog. It’s hard to know where to begin. I’m coming up on four and a half years on the job, and I have yet to formalize my niche. There are blogs out there for privacy. There are blogs out there for technology and Linux. There are fewer blogs out there for home robotics and AI. This blog is all of those, to an extent. That’s why I’m consolidating my niche to Home Computing for the Privacy-Minded Roboticist. I don’t expect things to change all of a sudden, because I’m still technically covering the supporting technologies, namely Linux. But I want to aim to be doing more with actual robotics from here on.

Over the next several weeks, I intend to make small changes to the site, starting with the “About the Author” page. I’m also working on a community Discord server so we can finally get the conversation going somewhere that doesn’t require the patience to get WordPress forums working.

Takeaway

Even though I was out and about, I still managed to find stuff to write about. That said, modular systems can be both a blessing and a curse. Feature A may be implemented any number of times – each implementation can work equally well and provide redundancy in case one project goes unmaintained or unforked – but clashing systems can lead to confusion when an end user finds himself diagnosing the wrong backbone without realizing there are multiple in the first place.

Final Question

What do you think of my stated niche: Home Computing for the Privacy-Minded Roboticist?

Works Cited

[1] B. King, “What’s the Difference Between GTK+ and Qt?,” makeuseof.com, Feb. 20, 2019. [Online]. Available: https://www.makeuseof.com/tag/difference-gtk-qt/. [Accessed: Feb. 21, 2022].

Containerized Development Environment

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am building my own dice bot for Discord using containers. Let’s get started!

Overview

In last week’s post, I learned how difficult it is to find a Discord dice roller that is either already in a container or suitable to install into one. However, I did develop my skills along the way, and I believe I’m ready to make a bot of my own from scratch.

Development Environment

None of my machines had Python on them (except perhaps the odd Raspberry Pi I have laying around), so the first order of business was to set up my own environment inside a container. There are a number of base images for running Python, but for development, I chose the latest official Python image.

podman pull python:latest

I went back and forth several times with the geometry of my bot project. I ended up stashing last week’s work into a directory and making another to hold this round. When I was done, I had three items of consequence: a Dockerfile, a requirements.txt file, and a directory called home.

When I tell Buildah to make an image, it first goes to the Dockerfile:

FROM python:latest
COPY requirements.txt .
RUN python -m pip install -r requirements.txt

This takes the Python image, copies in requirements.txt, and references it to install any dependencies that don’t come with Python by default – namely discord.py. As such, requirements.txt had a single line:

discord

Container base images leave out a lot of quality of life programs one might expect on a normal Linux install, like a text editor. That’s where the home directory comes in.

podman run -t -v /home/shadow8472/<bot project>/home:/root:Z -i dice_development:latest /bin/bash

This command has Podman use that image to make a container, mount that home directory into the container as /root (:Z is to tell SELinux it’s all good), and start a bash shell. From here, I can

podman attach <container name>
cd /root

and run the bot’s .py file. With the shared directory, I can edit it from the relative comfort of the host computer, ButtonMash, and go straight to running it inside the container. Best of all, the container will keep running with the bot if my connection to the host is broken, like if the network drops out or if I willingly close the terminal window. When I want to keep working, I can reattach the container.

One open question: I want to pass the login token in as an environment variable. Will my idea of using the Dockerfile to assemble it into a format Python understands pan out?
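For what it’s worth, the Python side of that plan is simple; here’s a minimal sketch (the variable name DISCORD_TOKEN is my own placeholder, not anything mandated by discord.py):

```python
import os

def get_token(var_name="DISCORD_TOKEN"):
    """Read the bot's login token from an environment variable.

    Raises a clear error instead of letting the bot fail later
    with a cryptic login message when the token is missing.
    """
    token = os.environ.get(var_name)
    if not token:
        raise RuntimeError(
            f"{var_name} is not set; pass it in when starting the container"
        )
    return token
```

On the container side, the matching flag would be something like -e DISCORD_TOKEN=<token> on the podman run line.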

Development

I don’t have many comments about the actual development of the bot. It’s been a while since I actually programmed, and it’s been longer still since Python. I learned the basics of async programming a few years ago, and discord.py is building on that introduction, but I’m not setting out to master it just yet.

On my way to the first stable “release,” I was coding up a loop meant to fill an array with die rolls. A bit of C++ grammar slipped in, and only after asking around was I able to identify and fix it myself. Another time, I coded an echo! function to practice passing command extensions into my code.

Most of my time went into constantly overhauling the internal logic. My early prototype was hardcoded to build its output string one die at a time, but I had to rework the back end to store the dice as they came in so I could sum them up and display that separately. All this was done while trying to keep it in a near-functional state.
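The rework described above boils down to something like this – a rough sketch with my own function names, not the bot’s actual code: store the dice as they come in, then build the string and the sum from the stored list.

```python
import random

def roll_dice(count, sides):
    """Roll `count` dice with `sides` faces, keeping each result."""
    return [random.randint(1, sides) for _ in range(count)]

def format_result(rolls):
    """Build the output string from stored rolls instead of
    appending one die at a time, so the sum can be shown too."""
    return f"{' + '.join(str(r) for r in rolls)} = {sum(rolls)}"
```

For example, format_result(roll_dice(3, 6)) might produce something like "4 + 2 + 6 = 12".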

But there are times a feature needs invasive re-working. I have some code in my comments that eliminates string manipulation from a function that should only be dealing with numbers. I’ve tried learning a popular version control system called Git in the past, but I can feel that I might actually stick with it this time. I’ve started reviewing commands, but I have yet to start using it as of writing.

Side Project

My week started off rough. I found my Derpy Chips workstation with a blank screen when I went to pull up last week’s post for some last-minute proofreading. Initial diagnostics showed the screen blanking 0-30 seconds after making a picture upon waking up. I got some help poring over journalctl and Xorg logs, but everything looked normal. The problem persisted through rebooting, pulling the GPU (graphics card), booting to an external drive, driving the monitor with my laptop, and even using a DVI cable instead of HDMI. Final diagnostics turned up a dying backlight. I re-enlisted a personal museum piece –a vintage cathode-ray tube model– to fill in until a suitable replacement can be procured.

3D Printing

Turns out that while I can play Minecraft just fine on integrated graphics, Slic3r pushes a few too many polygons for a smooth experience. I can’t run the old monitor off the graphics card because the card doesn’t have a VGA connector. Until I get a replacement monitor –or at the very least an adapter– I’ll be stuck with printing pre-made .stl objects or else installing Slic3r on another workstation.

What I did instead this week was make my own profile picture for the dice bot. I found a few dice I used to role play with and set them up in my photo booth. I even used a bead from printing the Z-braces to support a die in the back. I tweaked the picture a bit in GIMP to make it look more like the actual dice before uploading it for my friends’ enjoyment. It was not fun to fight the .jpg compression the entire way.

Takeaway

Programming is a completely different side of technology from systems management. It’s been a rewarding experience assembling my own environment to work in. It will need even more tweaking if/when I adapt it for use with an IDE, but those are for people who either already know everything that’s going on or don’t care to learn what’s going on at a basic level.

Final Question

Async functions are weird. I am after a small function to both log and send a response message. How do I pass an async call to another function? Am I even asking the right question?
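Here is my working theory as a plain-asyncio sketch (no discord.py involved): pass the async callable itself, uncalled, and let the helper await it. All the names here are hypothetical.

```python
import asyncio

async def log_and_send(send, message, log):
    """Log a response, then deliver it via whatever async `send` we were given."""
    log.append(message)   # logging itself is synchronous
    await send(message)   # `send` is an async callable; calling it makes a
                          # coroutine, and `await` actually runs it

async def main():
    log = []
    sent = []

    async def fake_send(msg):   # stands in for something like channel.send
        sent.append(msg)

    await log_and_send(fake_send, "rolled 4 + 2 + 6 = 12", log)
    return log, sent

log, sent = asyncio.run(main())
```

The key design point is that fake_send is handed over without parentheses; log_and_send decides when to call and await it.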

I Buildah Discord Bot

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am working on a dice bot for a role play on Discord for a few friends. Let’s get started!

Seeking a Container

Many projects are like a tree: there’s a general sense of progression up from the root, and the fine details take most of the work to perfect. This week, my project was a rose bush. I have an end goal in mind, but I want to run it from a container – a technology I still haven’t fully introduced myself to.

My plan started out simple: find a Discord dice bot already in a container and get it on the server. I searched DockerHub, a sizable repository of OCI (Open Container Initiative) container images. (Most places call them Docker containers due to Docker’s popularity over the past ten-plus years; searches for help should be worded accordingly.) I located about the only one that met all my simple criteria and made myself a bot account for it.

Discord bot accounts are more similar to human accounts than I gave them credit for. The only big differences are that bot accounts log in with a token instead of a username/password and the interface is different to accommodate a program instead of a human. Just as you only interface with your account to make posts, so too does a bot interface with its account to make posts. Give the login to someone or something else, and Discord shouldn’t care.

Long story short, the bot was set up to use an environment variable when using the image to make a container. Once I got it logged in, it ran into other problems I wasn’t prepared to diagnose.

Learning to Build Containers

With my only obvious option a bust, I set my sights on a Discord dice roller I’ve used in the past, made by SkyJedi for the Star Wars tabletop RPGs and a generic spinoff called Genesys. The game I’ll be running is GURPS, which focuses on 6-sided dice. Plus, the bot has an interesting turn counter system for combat I’d like to try. The rest is fluff to me this time around, so long as I get it working.

A quick tell of an intense week is when I spend a whole day watching tutorials on the same subject over and over again until I find one at my level. This was one of those weeks. I remember there being one breakthrough video right at my beginner level that let me go back and follow the logic of more advanced tutorials that previously dazed me. During this intense study session, I puzzled together how to use Buildah at a beginner level, what a Dockerfile does, and went from muck to murk in my understanding of Javascript.

Buildah is a tool for making OCI containers that Podman can later run and that another tool I haven’t yet explored can edit. Like Podman, Buildah’s corner of the ecosystem aims to replace Docker commands one for one. It’s a win for people switching from Docker, and a win for newcomers like me who can look back at pertinent Docker documentation.

Docker –at least– takes a layered approach to making containers. Using a Dockerfile (one word, also its file name, with no extension) lets you start with a base image and build it up layer by layer, one command at a time. Parts can then be swapped out and the image rebuilt as needed.

The Star Wars dice bot is written in Javascript. If there’s one thing I learned about Javascript this week, it’s that there are several “frameworks” for both front-end development (used to make things look nice for users) and back-end development (used for internal logic). More specifically, I needed to look into NodeJS, a back-end runtime. Node has a few different base images on DockerHub, so I worked with the full one, intending to swap it out later for a much smaller image.

Assembling My Own Containers

It took a very special set of circumstances during my self-study for containers to really click. First, I’ve been working through the Podman plugin for Cockpit. Individual images and containers are presented in an intuitive way. Each container has tabs for info, logs, and console access. The life cycle of a container is tied to executing a single command, but if that command is to start a shell, I can interact directly with the container like I would at any other command line. I could test commands from there, and if I got something right, I could add it to the Dockerfile for my next iteration.

I was actually able to find SkyJedi’s Discord server and contact him directly. He wasn’t familiar enough with containers to help me, but he did direct me to a much simpler bot of his, which I was rapidly able to package into a Node container – and later a Node:Alpine container.

The difference was the login token. No matter what I tried, the full dice bot would not accept its login token. I was not able to figure this out, though I came quite close.

Side Project

I’m considering a new weekly segment where I work with my 3D printer a little bit each week. This week, I printed up a calibration cube using everything I’ve learned so far. It’s still far from perfect. I still need to tune in my Z-braces, as I can tell from some faint ringing I can more easily feel by running my thumbnail across it. I still need to calibrate my E-steps, a process I may need to repeat a few times as I come up with interesting, new ways to improve my printer.
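For reference, the E-step calibration I’m preparing for is just a ratio; here’s a quick sketch of my understanding (double-check your printer’s documentation before writing new values to firmware):

```python
def new_esteps(current_esteps, requested_mm, actual_mm):
    """Scale the extruder's steps-per-mm so a requested extrusion
    moves the right amount of filament.

    E.g. if you asked for 100 mm but only 95 mm went through,
    the steps-per-mm value needs to grow by 100/95.
    """
    return current_esteps * requested_mm / actual_mm
```

For example, new_esteps(93.0, 100, 95) comes out to roughly 97.9 steps/mm. Repeating the measure-and-adjust cycle converges on a stable value, which matches the “repeat a few times” plan above.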

Takeaway

After all that, I looked ahead at the database I’d need to deal with once I got the bot logging in, and I’m now almost certain my time will be better served making my own bot in Python. Of minor interest, I got to know the curl command better. Rocky Linux apparently doesn’t come with wget, the terminal program I usually use to download files. curl –on the other hand– copies the contents of a link to standard output. When directed to a file, it does the same job as wget. A cool trick I pulled off involved manually following a redirect link.

Final Question

I feel like a rose bush gardener with nothing to show for his work but the tools he’s learned to use along the way. What projects have you farmed up a bunch of dead ends on?

Installing NUT UPS Driver on Rocky Linux 8

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am installing the Network UPS Tool on my Rocky Linux 8 Button Mash server. Let’s get started!

A Package Exists

In a previous push on my Button Mash server, I talked about getting an Uninterruptible Power Supply (UPS) so ButtonMash could shut itself down in case of a power failure. If memory serves, I also talked about an open source driver called Network UPS Tools (NUT). At the time, I was under the impression it was exclusively available via source code and I would have to compile it to make it work.

I’ve suffered no fewer than four power outages since installing the UPS. A couple of long ones while everyone was in bed would have outlasted the UPS’s endurance had someone not been awake each time to gracefully shut things down manually. I want the process automated.

And so I started the grind. The first thing the installation instructions tell me is to check for a package. Sign me up!

dnf search nut

I got several results, but with such a simple package name, the letters n-u-t turned up many false positives. NUT’s companion packages come with names of the form ‘nut-*’, so I often filtered with ‘nut-’. My refined searches came up empty.

Installing EPEL and NUT

If the backbone of a distribution is its package manager, repositories would be its ribs. Not every piece of software gets compiled and packaged for every architecture/package manager. I get that. It was a lesson I had to learn last time I played with optimizing MicroCore Linux and why I’m going with Arch if there ever is a next time.

When I learned NUT was widely available in package form, I went looking again in Rocky Linux’s dnf: still nothing. Debian has a nice package viewer [1], so I looked for something similar for Red Hat distros. I wanted to be sure I wasn’t missing something before concluding that no package exists for me. Such a viewer exists, but I’d need to make an account. However, I found something even better for my purposes.

pkgs.org[2] is a website that lists packages organized by several different major distributions. I was quickly able to find NUT in the CentOS 8 section for the Intel CPU architecture, but not anywhere under Rocky Linux.

A closer look after hours of confusion introduced me to the EPEL (Extra Packages for Enterprise Linux) repository. Apparently, it’s held in high regard in the Red Hat branch of the Linux family. Many enterprise Linux users consider it almost mandatory to offset the smaller offerings of the default repositories. I was uneasy about it at first because it showed up for the now-deprecated CentOS, RHEL’s former downstream, but EPEL is maintained by the Fedora community, which isn’t going anywhere for the foreseeable future: I’m calling it safe to use.

sudo dnf install epel-release
dnf search nut

NUT was then as simple to install as any other program from a repository.

Side Project

Podman pranks again! While testing my Bitwarden login from my laptop, I found myself permanently logged out. I traced the problem back to my Podman container on ButtonMash getting corrupted during one of those power outages mentioned earlier. I sent a discouraging error off to the search engine and found my exact issue on the Podman GitHub [3]. I wasn’t happy with the explanation, but it was the best one I found: systemd didn’t like an under-privileged user doing things without at least a recent login, so it messed with Vaultwarden’s Podman container. The messed-up container had to be forcefully deleted and remade. I also needed to remember to specify https:// when looking for the server in a browser. To make sure it doesn’t happen again, I followed a piece of advice found later in the discussion and permitted the login to linger.

Takeaway

I honestly expected this week’s progress to take at least a month. When I first looked into NUT, all I saw was source code ready to download and compile, and honestly, I’m having trouble getting excited about mastering the art of compiling other people’s code. If there’s a way to install via a compatible repository, I’m all for it.

I am especially thankful for pkgs.org [2]. They helped me reduce my problem to one I’ve at least blindly followed a tutorial for before. You typically won’t find the full, non-free version of Chrome in a Linux distribution’s default repositories, so when I was setting up Mint for my father, I had to explicitly add a repository.

While NUT may be installed, configuration is not happening this week if I expect to understand my system when I’m done. I blitzed the first expected month of work and only stopped because the next bit is so intimidating. Here’s to a quick understanding within the next month.

Final Question

NUT has proved difficult to locate assistance for, as I haven’t figured out how to use their internal support system. Do you have any idea where I can find support for when I need it?

Works Cited

[1] Debian, “Packages,” packages.debian.org, July 2019. [Online]. Available: https://packages.debian.org. [Accessed: Jan. 10, 2022].

[2] M. Ulianytskyi, “Packages for Linux and Unix,” pkgs.org, 2009-2022. [Online]. Available: https://pkgs.org/. [Accessed: Jan. 10, 2022].

[3] balamuruganravi, “rootless podman ERRO[0000] error joining network namespace for container #6800,” github.com, Jun. 2020. [Online]. Available: https://github.com/containers/podman/issues/6800. [Accessed: Jan. 10, 2022].

Self-Signed Vaultwarden Breakdown

Good Morning from my Robotics Lab! This is Shadow_8472 and today, I am going over creating a self-signed certificate for my Vaultwarden. Let’s get started!

I’ve spent a long time trying to figure out proper HTTPS, but slapping on a solution and moving on without understanding the underlying workings doesn’t feel right. I don’t even have that much yet. As long as I learn something each attempt, that should be good enough. I’ll be following the tutorial from Vaultwarden [1] with commentary from censiCLICK’s video [2]. My commentary here will be largely guesswork based off those and the associated manual pages [which I have no idea how to properly cite, but they are available by typing man <command> in most Linux terminals].

Step 1: Generate Key

openssl genpkey -algorithm RSA -aes128 -out private-ca.key -outform PEM -pkeyopt rsa_keygen_bits:2048
openssl genpkey

This base command generates a private key for OpenSSL.

-algorithm RSA -aes128

RSA is the algorithm for the key pair being generated – a public/private key system. The -aes128 flag tells OpenSSL to additionally encrypt the saved private key with AES-128, a powerful single-key algorithm, under a passphrase so the key file isn’t stored in the clear. Together they create strong encryption without having to find a relatively private back alley to exchange keys.

-out private-ca.key -outform PEM

These flags specify where to save the key after it’s generated and what format to save it as.

-pkeyopt rsa_keygen_bits:2048

(Public KEY OPTion) This flag passes options to the key generation algorithm – in this case, requesting a 2048-bit RSA key.

Step 2: Generate Certificate

openssl req -x509 -new -nodes -sha256 -days 3650 -key private-ca.key -out self-signed-ca-cert.crt
openssl req

(REQuest) This command obtains certificates. In this case, it’s generating one itself, but as the name implies, it’s aimed more at requesting them from an authority.

-x509 -new -nodes -sha256 -days 3650

-x509 specifies a self-signed certificate instead of a certificate request. The -days flag sets it to expire in 3650 days – ten years minus leap days. The -new flag has the user fill in some additional information for the certificate, -nodes leaves the private key unencrypted, and -sha256 selects the hash function used for the signature.

-key private-ca.key -out self-signed-ca-cert.crt

These final flags are I/O: -key loads the key from the previous command, and -out names the certificate file.

Step 3: Preparing to Sign

openssl genpkey -algorithm RSA -out bitwarden.key -outform PEM -pkeyopt rsa_keygen_bits:2048
openssl req -new -key bitwarden.key -out bitwarden.csr

These commands are similar to the earlier ones, but for Bitwarden; they lack the components needed to make a root certificate authority. There’s also a special configuration file (the bitwarden.ext referenced below) I’m not looking to break down, but it’s available on Vaultwarden’s GitHub [1].

Step 4: Signing the Certificate

openssl x509 -req -in bitwarden.csr -CA self-signed-ca-cert.crt -CAkey private-ca.key -CAcreateserial -out bitwarden.crt -days 365 -sha256 -extfile bitwarden.ext

Finally, it’s time to bring everything together and sign the certificate. Many of these flags are familiar from previous commands. Reading through it feels like the last stop to make sure all your papers are in order. Note the shorter -days 365: some operating systems are rightfully cautious about certificates signed for an overly lengthy time.

From here, it’s a matter of starting the Vaultwarden container with its new certificate and assuring whichever browsers you’re using that you trust the new certificate authority [2].

Practice to Practical

I’m glad I took the time to study this a little more closely than blindly following instructions this time. When using openssl req, I was able to confidently backtrack by deleting a few files so I could give different common names to the root CA and Vaultwarden’s certificates respectively.

The next challenge was successfully launching the Podman container. Following along with the censiCLICK tutorial, I had three new flags relative to the last time I worked with Podman. One was to restart the container unless stopped (no elaboration provided).

The second flag tripped me up. I confused a pair of default SSL certificates for the self-signed ones required later on – bitwarden.crt and bitwarden.key, created in the earlier steps. I copied those two files into their own Podman-mountable directory. Once again, I added the :Z flag to tell SELinux it’s OK.

-e ROCKET_TLS='{certs="/ssl/bitwarden.crt",key="/ssl/bitwarden.key"}'

The final flag sets an environment variable as the container finishes starting. This particular one tells Vaultwarden where the files for HTTPS encryption are. If they aren’t there –as I found out while I was still sorting out the system certificates– something inside the container shuts it down. It was not a fun combination with the restart-unless-stopped flag, as I had trouble removing the container so I could create a new one for my next attempt. I knew I was done when podman ps returned a container running for longer than a second or two…

…or so I thought. I went to import my root certificate authority to Firefox, and I still can’t connect even when specifying https://<ButtonMashIP>:44300.

Long Story Short:

podman run -d --name vaultwarden --restart unless-stopped -v /home/vaultwardenUsr/<path/to/vw-data>/:/data/:Z -v /home/vaultwardenUsr/<path/to/private/certs>/:/ssl/:Z -e ROCKET_TLS='{certs="/ssl/bitwarden.crt",key="/ssl/bitwarden.key"}' -p 44300:443 vaultwarden/server:latest
Edit Jan. 6 2022: Vaultwarden listens on port 80, so I'm using -p 44300:80 now. And when you go to verify in a browser, be sure to use https:// or you get "The connection was reset".

This is my current command to generate a Vaultwarden container with Podman and no root privileges. In the end, the only major differences from Docker containers are the paths to mount the volumes Vaultwarden needs from the host machine and the :Z flags for SELinux. Currently, I’m not able to establish a secure connection. I have a help request out and will edit if I get an update later today; otherwise, I already know what next week’s side project will be.

Side Project

Thursday held a startling surprise as a new zero-day exploit appeared affecting Minecraft, among other things. I must have found out within a few hours of it going public. After doing my research and checking sources, I concluded it was real and with the help of tech support, I was on a patched version of Paper within an hour or so of finding out.

Log4Shell (as this one has come to be called) is scary both because an attacker can take full control of a vulnerable computer and how common vulnerabilities are. On the other hand, once such exploits go public, things get updated pretty fast.

Here is the best article I’ve seen as of about ten hours of the exploit going public: https://www.lunasec.io/docs/blog/log4j-zero-day/

The moral of this story is to keep your software up to date, especially if you see any big stories about computer security.

Takeaway

All the HTTPS literature I found appears to be aimed at the curious pedestrian or the seasoned system administrator. This made it very difficult for someone at an in-between level of understanding. On a personal note, I learned that pressing the / key while in a man page lets me search the document – a feature I really wish I had known about two years ago.

One important critique I’d offer the censiCLICK video: the tutorial dumped everything straight into the home directory and made no effort to change default usernames/passwords, which I would consider very important for a monolithic tutorial.

Final Question

Have you ever had a project fight you to the bitter end?

Works Cited

[1] “Private CA and self signed certs that work with Chrome,” github.com. [Online]. Available: https://github.com/dani-garcia/vaultwarden/wiki/Private-CA-and-self-signed-certs-that-work-with-Chrome. [Accessed: Dec. 13, 2021].

[2] censiCLICK, “Full Guide to Self-hosting Password Manager Bitwarden on Raspberry Pi,” YouTube, Nov. 15, 2020. [Online video]. Available: https://www.youtube.com/watch?v=eCJA1F72izc. [Accessed: Dec. 13, 2021].

I’m Learning Vaultwarden and Podman!

Good Morning from my Robotics Lab! This is Shadow_8472, and today –with a heap of luck– I’ll be putting a Bitwarden server on ButtonMash (or getting so close I can’t help but finish next week). Let’s get started.

Vaultwarden

I’ve already talked about the importance of password strength before. Longer is better, but a unique password per login is more important in case one gets compromised. But who has the attention span to remember fifty passwords across every obscure site, app, or game he’s ever interacted with? A good password manager solves this by organizing your passwords so you can easily access them from a client, but anyone without your key cannot.

I started researching for this project by revisiting the first time I switched to Bitwarden, when I decided to self-host a server from a Raspberry Pi [1] following a straightforward tutorial by censiCLICK [2]. My SD card corrupted one day, and I’ve been out a password server ever since, despite efforts to repair it. I’ve been covering my exploration of Rocky Linux, a RHEL-family OS, on my ButtonMash server/workstation, and now I’m ready to start putting it to work.

The tutorial by censiCLICK was well presented. It takes you from a bare Raspberry Pi 3B+ and layers on Raspberry Pi OS, Docker, and finally Bitwarden_RS, all while giving basic introductions to skills you’ll need along the way, like SSH and security certificates. It is unfortunately out of date. Around six weeks after I started using it, the project leader announced that there was some confusion over trademark [3], so he was renaming the project to Vaultwarden…

Odd… Looking through my posts shortly after the name change, I was already having issues with my Bitwarden server. It could still have been card corruption or me trying to play with Git. I guess I’ll never know…

…In any case, ButtonMash is ready for the next step.

Docker or Something Else?

Docker is a technology I still haven’t fully visualized. While researching instructions to install it on Red Hat systems, I stumbled across a mention of Podman. Online hosting solution Liquid Web provided a decently clear explanation [4]: containerization essentially makes single-purpose VM’s without the overhead of full operating systems. Docker has a master process that runs Docker containers. Podman runs its containers separately and doesn’t require root, but it relies on a separate piece of software called Buildah to build containers and has no professional support available.

Further research confirms that Red Hat now endorses Podman over Docker, so Podman I will use. Even so, I had to install it separately along with a Cockpit plugin to manage it. From there, it took just a few well-researched clicks to download Vaultwarden. The Docker-Podman plugin had a lot of fields I didn’t recognize, so I installed the Docker hello-world container to play with. I had to run it from the terminal, but it appeared to work. I expect running a Vaultwarden container will be my side project next week.
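As a sketch of what that next step might look like – the image name, port, and volume here are my assumptions, not commands I’ve run yet:

```shell
# The hello-world test, run rootless from a normal user account
podman run --rm hello-world

# Hypothetical Vaultwarden equivalent for next week
podman pull docker.io/vaultwarden/server:latest
podman run -d --name vaultwarden -v vw-data:/data -p 8080:80 docker.io/vaultwarden/server:latest
```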

Side Project

Last week for my side project, I set up a Wi-Fi gaming router to hopefully reduce downtime on my Wi-Fi catcher Pi. This week, I made the two get along. At first, I thought the problem might be Wi-Fi drivers, so I updated, getting myself into a tedious cycle of incomplete updates failing whenever the file system flipped to read-only against a background of Wi-Fi dropouts. I had to flip the power switch because the reboot command broke, then reconfigure packages to clean things out before continuing.

My real problem was the static IP landing outside the router’s 192.168.X.X range. Attempts to manually change the IP kept failing, and at one point I backed up a known-good config file over the very file I needed to go back to a dynamic IP, then spent many hours piecing it back together. In the end, I was finally able to connect.

Takeaway

Polished computer tutorials are great for catapulting students of tech over barriers of entry, but they’re each anchored to a fixed point in time: lessons of the recent past compiled for the near future. As much of an accomplishment as making a definitive guide to subject X might be, it will only ever be a single reference point for future users to look back on when compiling their own procedures.

Final Question

Have you ever gone back to old project notes for insights for follow up projects?

Works Cited

[1] Shadow_8472, “BitWarden: My New Password Manager,” Let’s Build Robotics With Shadow8472, March 15, 2021. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2021/03/15/bitwarden-my-new-password-manager/. [Accessed Nov. 22, 2021].

[2] censiCLICK, “Full Guide to Self-hosting Password Manager Bitwarden on Raspberry Pi,” on YouTube, Nov. 15, 2020. [Online video]. Available: https://www.youtube.com/watch?v=eCJA1F72izc. [Accessed Nov. 22, 2021].

[3] d. garcia, “1.21.0 release and project rename to vaultwarden #1642,” on GitHub, Apr. 19, 2021. [Online forum]. Available: https://github.com/dani-garcia/vaultwarden/discussions/1642. [Accessed Nov. 22, 2021].

[4] Liquid Web, “Podman vs Docker: A Comparison,” Liquid Web, Sept. 10, 2021. [Online]. Available: https://www.liquidweb.com/kb/podman-vs-docker/. [Accessed Nov. 22, 2021].

Building A Fake Computer to Split

Good Morning from my Robotics Lab! This is Shadow_8472 and today, I am building a Linux virtual machine for my mother and sister to split. Let’s get started.

Machines Within Machines

Switching operating systems is like moving to a new house. It’s intimidating. Things are arranged in different spots. The pattern of your daily life will shift and there will be an uncomfortable adjustment period.

But at least with computers, anyone with a semi-recent CPU and enough other system resources can host a “guest” operating system for evaluation. While I previously had no experience with this method, for others it serves as a sandbox where they can try things with Linux without the pressure of learning everything at once, or the risk of being out a computer if a problem demands a chunk of research time.

VirtualBox

I’ve done my share of research on Virtual Machines (VM’s) in the past. VirtualBox is a well-respected name, and I can see why. Once I installed it on my sister’s Windows machine, I didn’t have to research anything about it specifically until I was looking at a desktop and my sister wanted the VM to use both screens. Otherwise, the experience was intuitive.

PopOS is quickly becoming my go-to easy mode for Linux. Their downloads come with shasum verification hashes, which I made use of. In one way, it was even easier to install because I could install straight from the disk image without any physical install media. I did have one problem during installation where the installer window rendered larger than the screen resolution. Instead of brute-forcing a virtual screen size from VirtualBox, I just used Super (Windows key)+click&drag, as I learned to do while working with GIMP on a boxy tube monitor with a similarly nostalgic resolution.
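Checking a hash against the published value is a one-liner. Here it is demonstrated on a throwaway file; swap in the real ISO name and the hash from the PopOS download page:

```shell
# Demo with a throwaway file standing in for the downloaded image
printf 'demo contents\n' > pop-os_demo.iso
published=$(sha256sum pop-os_demo.iso | awk '{print $1}')  # pretend this came from the website
echo "$published  pop-os_demo.iso" | sha256sum --check     # prints: pop-os_demo.iso: OK
```

Note the two spaces between the hash and the filename; sha256sum --check is picky about that format.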

Dual screens had me stumped for a while. From what I can tell, I had to insert a virtual CD that came with VirtualBox and install its contents into PopOS. A bit of computer wizardry happened, involving some sudo password prompts that crashed (and duplicates thereof), and I seemingly needlessly rebooted the VM several times before I unlocked the necessary options to enable dual screens. I will want to pay more attention next time.

The default desktop environment for PopOS was based on GNOME 3, but it’s not for us. System76, the makers of PopOS, provided an awesome command-by-command guide for installing a large selection of alternate desktop environments, so I loaded a few my mother and sister should feel most comfortable with. KDE is my favorite, but Cinnamon and MATE are other names I recognize.

Speaking of KDE, if Linux is the OS of customization and decision fatigue, KDE complements it perfectly. I spent more of my blog project time this week than I would have liked trying to chase down the color settings. I was hoping for some sort of base color picker that would then populate the rest of the theme with different shades, but I found only options to pick each shade individually. Unfortunately, you’d have to be an artist to make something that looks decent that way. I was able to find a user-submitted theme with an acceptable color palette.

Side Project

My Manjaro workstation has been getting its Internet through a Raspberry Pi for a while now, but lately the Pi’s Wi-Fi connection has been dropping randomly. My father picked up a special gaming Wi-Fi router, and I set it up today after months of other projects constantly taking priority. Long story short: I was easily able to use my laptop to connect and arrange default configurations on the router, but I have yet to get it to agree with the Pi 4. I’ve tried looking into possible inherent compatibility issues, but all the guides for finding information on Wi-Fi from Linux assume the presence of tools that aren’t present in Raspbian. I thought this was small enough for a side project, but it appears I was wrong.

Takeaway

Setting up a new computer and getting it tweaked properly takes a while and a VM is no exception. One point I didn’t go into was how our NFS drive didn’t admit the VM on account of its IP address. I also learned that one of the intended host machines sits a little too heavy on its existing RAM, so it will need an upgrade for comfortable VM operation. I expect a follow up to this project at a later date.

Final Question

Do you have any tips for working with virtual machines?

ButtonMash’s Solid Foundation on Rocky Linux

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am still working on my Rocky Linux server. Let’s get started!

Project Selection

One would think it shouldn’t take a month to set up a server, but the vast bulk of that is research. What all do I want the server to do? What services do I need to set up to include it? When I know more than one way to do something, which way do I want to do it? The questions don’t end until work has progressed beyond the point of answering differently.

My goal for today is to get a few things running: I want to mount the GoldenOakLibry NFS server. I want to run update-grub so I can properly dual boot with Debian. I want to install BitWarden. These three things are probably the most important end-goal tasks remaining for configuring ButtonMash’s Rocky install.

Package Managers

Before I can really work on my target goals, I need to know some basic specifics. Every major branch has its own compatible package managers. Debian has DPKG and Apt (plus Snap for the Ubuntu sub-family), while Arch has Pacman and the AUR. Wrappers and cross-compatibility tools exist as a zoo of possibilities that will not be fully sorted out here today.

My first impression as I research the Red Hat branch’s solution is the striking parallels to Debian, though it is also experiencing a stir. RPM (Red Hat Package Manager) is like DPKG in that it interfaces with packages directly. YUM (Yellowdog Updater, Modified) was the Apt-like package manager I’d been hearing about in association with the branch. It has since been replaced by DNF (DaNdiFied YUM) for installing Package X and everything Package X needs to run (called “resolving dependencies”). Both YUM and DNF are present on my install, though.
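As a rough cheat sheet mapping the Debian-family commands I already know to their Red Hat counterparts (a sketch from my notes; exact flags may vary):

```shell
# Debian family                      Red Hat family
# apt update && apt upgrade     ->   dnf upgrade
# apt install htop              ->   dnf install htop
# dpkg -l                       ->   rpm -qa
# dpkg -S /usr/bin/htop         ->   rpm -qf /usr/bin/htop
```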

Cockpit

I’ve had a chance to look over this web interface that came with Rocky Linux. By default, there doesn’t appear to be much to it after logging in beyond information readouts, an interactive firewall, and most importantly an in-browser terminal. There appears to be a whole ecosystem to learn about, but it’s beyond me for now. I will want to look deeper into this subject when I move on to disabling password authentication over the network.

Note about the terminal: it’s a little quirky from sharing its inputs with the browser. Nano’s save shortcut also tells Firefox to “Open” a file, and copy-paste shortcuts don’t always work the same.

NFS Mount

From experience, I know that NFS is a royal pain to learn how to set up. On top of that, I know of at least two ways to automount network drives: during boot with fstab, and dynamically with systemd. Mounting with fstab is annoying on my laptop because it halts boot for a minute and a half before giving up if GoldenOak is unreachable. More annoying is that this appears to be the better documented of the two methods. For an always-on server, though, it may not be a concern.
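For reference, the fstab route is a single line per share (the host and paths below are hypothetical stand-ins):

```
# /etc/fstab entry; _netdev marks this mount as needing the network up
goldenoak.local:/srv/library  /mnt/goldenoak  nfs  defaults,_netdev  0  0
```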

Not helping systemd’s case are the additional ways I keep discovering to set up its automount functionality. I don’t even know the proper name for the method I’ve used before – just that I didn’t mess with /etc/fstab, whereas another systemd method does. It is a great challenge finding a source that compares more than a single mounting method. The good news is that aside from installation, I should be able to disregard what distro a tutorial was intended for.

While researching this section, I rediscovered autofs, and saw mention of still other automount methods. I’m avoiding autofs because the more I read about it, the more complex it appears. It would behoove me to just leave a line in /etc/fstab, since I don’t expect to be booting this server outside the context of the GoldenOak NAS; but as this is more or less the centerpiece of my home network, I’m going with systemd mount files, as per the blog by Ray Lyon I referenced last February when I first learned about them. I’ll leave a link to his post in my Works Cited [1].

NFS automount is tricky stuff, but each time I study it, I retain a little more. I can barely remember how to mount a share manually – let alone configure systemd automounts. It took me several days to find a copy of the files I needed, even after looking back at my above-mentioned post from February [2]. My best guess is that I got lost in my own filesystem. This time, I’m taking notes and organizing them in my home directory on the new install.
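As best I can reconstruct it, the method boils down to a pair of unit files named after the mount point – host, share, and paths below are all hypothetical stand-ins:

```ini
# /etc/systemd/system/mnt-goldenoak.mount
[Unit]
Description=GoldenOakLibry NFS share

[Mount]
What=goldenoak.local:/srv/library
Where=/mnt/goldenoak
Type=nfs

# /etc/systemd/system/mnt-goldenoak.automount
[Unit]
Description=Automount GoldenOakLibry on first access

[Automount]
Where=/mnt/goldenoak
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target
```

Enabling the .automount unit (systemctl enable --now mnt-goldenoak.automount) rather than the .mount unit is what keeps an unreachable server from stalling boot; the share only mounts when something first touches /mnt/goldenoak. Note that systemd requires the unit file names to match the mount path.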

Update-Grub

When I installed Rocky Linux, I played it nice and safe by not letting it see any drives it wasn’t installing over, but the host machine still has a job to do on the photo trunk project; I need it to dual boot. I read up on a command called update-grub that I could just run once everything was installed and physically reconnected. First of all, update-grub is a script, and second of all, it’s notoriously absent on Red Hat systems.

A variety of help topics exist on what command to run on RHEL instead of update-grub. From what I can tell, it’s pretty universally present on Debian-based systems and when I checked Manjaro (Arch family) just now, it was there too.

Update-grub itself is pretty simple. It’s three lines long and serves as an easy-to-remember proxy command to actually update your GRUB boot loader. The exact underlying command may differ between computers depending on whether they boot via BIOS or its newer replacement, UEFI. I assume it is generated during package installation.
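On Debian, in fact, the whole script reads roughly like this (quoting from memory, so treat it as approximate):

```shell
#!/bin/sh
set -e
exec grub-mkconfig -o /boot/grub/grub.cfg "$@"
```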

Once I had my bearings, it was fairly easy to update GRUB on my own. I found my configuration file at /boot/grub2/grub.cfg because I am using BIOS. An effectively empty directory stump existed for the UEFI branch, cluing me in that this is an operation you should understand before copy-pasting it into a terminal. A StackExchange thread has several individual explanations, including reference to what I take to be a catch-all I am not using [3].
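The equivalent one-liner for my BIOS-based install uses the path I found above – double-check where your own grub.cfg lives before pasting anything:

```shell
# BIOS systems (my case):
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# UEFI systems keep the config under /boot/efi instead, e.g.:
# sudo grub2-mkconfig -o /boot/efi/EFI/rocky/grub.cfg
```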

So… I went to verify everything was working, and it wasn’t. A simple reboot loaded Rocky’s GRUB, but the Debian kernel refused to load over the USB 3 PCI card. So much for that idea. I moved the Debian drive to a motherboard USB port; BIOS found it and loaded Debian’s GRUB, which doesn’t know about Rocky Linux. I tried running update-grub in Debian and… it didn’t work. I wasn’t looking to spend even more time on this part of the project, so after confirming that Rocky’s GRUB could boot Debian, I got into BIOS and told it to prefer the internal Rocky drive over anything on USB.

BitWarden False Alarm

I’m super-excited about putting my self-hosted BitWarden server back up. I’ve already started researching, but the topic still feels like it’s expanding when I need to be getting ready for publishing this already lengthy post full of amazing progress. BitWarden will need to wait until I can better teach myself how to properly take care of it.

Takeaway

The Red Hat branch of Linux is in a notable state of flux. Key elements of the family like CentOS and YUM are everywhere in old tutorials, and that is bound to make for a frustrating time trying to learn Red Hat for a while to come – especially if you’re new to Linux. Here, more than anywhere else, learning the history of the branch is vital to teaching yourself how to sysadmin.

Side Project

A while ago, I thought Derpy’s RAM was failing because Kerbal Space Program kept crashing the whole system. I’ve been running the three 4 GB sticks on my Manjaro workstation for a month or two, and they appear fine. In the meantime, my father ordered a pair of 8 GB sticks. This week, I installed them, displacing one of the 4 GB sticks. Passive testing will now commence.

Final Question

Have you ever had a project take a discouragingly large amount of research time then suddenly come into focus in a single day?

Works Cited

[1] R. Lyon, “On-Demand NFS and Samba Connections in Linux with Systemd Automount,” Ray Against the Machine, Oct. 7, 2020. (Edited Aug. 8, 2021). [Online]. Available: https://rayagainstthemachine.net/linux%20administration/systemd-automount/. [Accessed Nov. 7, 2021].

[2] Shadow_8472, “Stabilizing Derpy Chips at Last,” Let’s Build Robotics With Shadow8472, Feb. 22, 2021. [Online]. Available:https://letsbuildroboticswithshadow8472.com/index.php/2021/02/22/stabilizing-derpy-chips-at-last/. [Accessed Nov. 7, 2021].

[3] “Equivalent of update-grub for RHEL/Fedora/CentOS systems,”StackExchange, Aug. 26, 2014-Oct. 10, 2021 [Online]. Available:https://unix.stackexchange.com/questions/152222/equivalent-of-update-grub-for-rhel-fedora-centos-systems. [Accessed Nov. 7, 2021].

Rocky Linux: Looking Around

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am installing Rocky Linux on ButtonMash. There’s a lot to learn and a bit more to do, so let’s get started!

Checklists and Notepads

A home server is useful. However, if you ask me what one is good for, I’ll struggle to come up with an answer before the conversation stalls. I’ll come across as simply begging for another expensive toy, and you’ll be even less interested in one than before.

To remedy the stress of the moment, I opened a text buffer and slapped in a few uses I had in mind. Over the next several days, I added some more for a total of seven or eight so far. None of them were new per se, but it was the first time I had them all in the same place at once.

On the topic of brainstorming, I’m considering developing my own checklist for installing Linux no matter the distro. Watch for it in a future topic once I’m half-satisfied with it.

Installation

As stated in my last post, I had already flashed a thumb drive with Rocky Linux. I considered using optical media this time because of the expected long-term support for this install, but even the minimal option I ended up downloading was too large for a CD, and we’re seemingly out of blank DVDs. When I did make my download, I accepted Firefox’s offer to open it with Popsicle, a USB flasher utility that came with either PopOS or KDE (I have reason to think either is likely). I overwrote the Debian install media from my laptop.

Slated for overwriting was a previous ButtonMash SSD (Solid State Drive) with MineOS on it. I had already cleared stuff off of it, but after working on the family’s Minecraft server on Apex, I started having second thoughts. I sought out and found an even older and smaller MineOS SSD, originally from DerpyChips. My father and I connected it up and booted to the install media.

By this point, I knew this Linux installation would be provisional at best – to my relief. Without the pressure of getting a “forever server” going, I can further refine my approach until I’m satisfied. In the meantime, I can load up some lightweight services.

The installer was one of the smoothest I’ve ever seen. All the usual elements like time zone, user accounts/passwords, and partitioning were linked from a main menu. My one complaint is the full screen slide animation blasting my eyes whenever I clicked on something. It’s not worth my time to recompile the installer, though.

There were a couple unfamiliar panels in the installer menu. One appeared to be some sort of privacy policy configuration screen. I had no idea what most of the options were about, but I could still recognize the value in it. The other screen had options for a selection of software to install. We read through each option, deciding whether or not I wanted each piece. Stuff like networking tools for SSH or NFS was included. Stuff a headless workstation doesn’t need, like GNOME, stayed off. If I didn’t recognize something, I left it alone. Some of what I opted to include with installation were things I knew I’d be installing anyway, so that’s a little time and effort saved.

Configuration

SSH is an easy skill to learn but a difficult one to master; I’ve poked at it this week, but I’ll need more time with it before I can consider myself safe using it on an unsecured line. I had a little trouble matching key fingerprints when SSH’ing into ButtonMash from my Manjaro workstation versus having ButtonMash SSH into itself over localhost. I quickly realized they were using different hash algorithms, but I had to give up on forcing them into alignment for now. I was able to verify the code on DerpyChips, though.
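For future reference, the fingerprints weren’t different keys – just different hash presentations of the same key, and ssh-keygen can print either one. Demonstrated here on a throwaway key so it’s safe to try anywhere:

```shell
# Throwaway key for demonstration; on a real server, point -lf at the
# host key instead, e.g. /etc/ssh/ssh_host_ed25519_key.pub
ssh-keygen -q -t ed25519 -N '' -f demo_key

ssh-keygen -lf demo_key.pub          # SHA256:... (default on newer OpenSSH)
ssh-keygen -lf demo_key.pub -E md5   # MD5:...    (what older clients print)
```

Comparing the two machines under the same -E algorithm should make the strings line up.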

As soon as I got myself SSH’ed into ButtonMash, I received a prompt to launch a webUI called Cockpit. I don’t know much about it, but I recognized the name from my research last week and the interface feels familiar from some of my previous experiences with server management over browser. The interface came back online after a reboot, so there’s that. I will note that Firefox wasn’t happy about its self-signed security certificate. I have fixed that in the past, but I’m ignoring it for now.

Takeaway

I feel like I’ve come a long way since I first started with Linux. Each major jump feels like I’m landing in a less unfamiliar place, though there are still surprises. To answer one of my own early “Final Questions:” results are not as important as learning why you got those results in the first place. Though there are plenty of places that make no assumptions about prior skill, general experience will still be of benefit when working with such systems.

Side Note

After I was done with last week’s post, I poked around a bit more at my Manjaro workstation’s spell check for LibreOffice Writer. I was able to get it working by installing a package called hunspell-en_us, as no language libraries were included by default.
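On Manjaro, that fix came down to a single command (package name as I found it in the repos; other languages have their own hunspell-* packages):

```shell
sudo pacman -S hunspell-en_us
```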

Final Question

What would you do with a home server?

Picking Out a Red Hat Style

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am reconfiguring ButtonMash to run some Red Hat family distribution. Let’s get started!

My Early Impressions of Linux

When I was taking my first deep dive into the Linux operating system, I was amazed and overwhelmed with the sheer diversity and customization to be found. Between the soup of permissive licenses and modularity of GNU/Linux (pure Linux does not a complete operating system make), Linux isn’t one operating system: it’s thousands. And if that’s not good enough, you can always make a new one.

I quickly found representations of the Linux family tree listing several popular distributions spawned over time as people forked projects, swapped code, and in some cases ceased development. And while there are several names that have stood the test of time so far, I was introduced to three branches, each revolving around a particular distribution: Debian, Arch, and Red Hat Enterprise Linux (RHEL). Ubuntu is large enough to receive an honorary mention within the Debian family. Most of my computers run Debian or a derivative thereof. My flagship computer runs Manjaro, of the Arch family. I would like some experience on the RHEL branch.

The Red Hat Family

The modern Red Hat branch feels different compared to Debian and Arch. The titular distribution, RHEL, is sold on a subscription basis. Red Hat, the company, sponsors a distinct, community-supported, upstream distro called Fedora, where programs can be tested before being deployed to customers’ production environments, where downtime can cost a lot of money. Per the permissive licenses of software going into RHEL, anyone can view, modify, and redistribute its source code – just respect the Red Hat trademark. Do know that actually subscribing comes with technical support.

Historical and editorial note: from what I can tell, Red Hat Linux used to be the branch root, if you will. Red Hat reorganized things in 2003, adopting Fedora while discontinuing Red Hat Linux in favor of Red Hat ENTERPRISE Linux. The way these three terms are used almost interchangeably made this section very frustrating to research, but I will try to use the proper terms: Red Hat is the company, Red Hat Linux was Red Hat’s flagship product sold on store shelves from the mid-90’s until 2003, and Red Hat Enterprise Linux (RHEL for short) is Red Hat’s modern OS that users subscribe to.

Looking deeper into different distros based off RHEL source code, you will find that 100% binary compatibility is huge. You can develop something on a RHEL downstream and it should work for a paying RHEL subscriber. If you find a clever use for a bug –it has happened before in the tech world– that bug will be there in RHEL.

CentOS

CentOS has been an important name in Linux for a while. Had I done this week’s research for a Red Hat branch distro a year ago, I have no doubt it would have been my pick for use on a home server.

Despite CentOS’s long history as the go-to RHEL downstream, the CentOS I was looking forward to getting to know has a short future. Just as Red Hat Linux was discontinued in favor of RHEL, CentOS is to be discontinued in a couple months, on this coming New Year’s Eve (December 31, 2021), and repurposed. The future CentOS Stream will sit between Fedora and RHEL, making it an unsuitable distro for a server I expect to run for at least the next few years.

The niche CentOS is vacating already has new distros vying to be the de facto replacement. The leading contenders are Alma Linux and Rocky Linux. Alma Linux has the backing of a large company, while Rocky Linux is being done by the guy who originally started CentOS. So far as I can tell, they’re a coin flip away from each other. If they both work out, more power to the end-users.

Even as I write, I’m unsure what I’ll be running a year from now. For no reason in particular, I’m leaning towards Rocky Linux. I’ve already flashed a thumb drive with the install media, but setup will have to wait until next week.

Takeaway

I picked a horrible time to get into free Red Hat distros. One chapter in the branch’s history is drawing to a close, and the opening of the next is still going through revisions. However, I’m not looking to wait a year for that retrospective. I’ll re-evaluate as needed.

Final Question

Have you ever started a project during a sub-optimal time?