I Studied Podman Volumes

Good Morning from my Robotics Lab! This is Shadow_8472 with a side project of the week. Let’s get started!

Nextcloud has been a wish list item since I gave up using Google’s ecosystem (Drive, Calendar, Office, etc.). This open source almost-drag-and-drop alternative proved above my skill level at first, but I’ve learned a lot about server management and running OCI “Docker” containers in Podman in the years since.

Demo of Nextcloud

Nextcloud was relatively simple to demo: one non-privileged port forwarded. During my self-guided tour, I was amazed at the potential power there. In addition to the calendar, office, and file storage functions I expected, its recommended suite of apps includes email, chat, and contacts servers – with more apps available for download.

As much as I can see myself moving in right now, it’s important that I master how its persistent data is stored. Ideally, everything would live on GoldenOakLibry, my home network storage configured with RAID 5. But I hate waiting for HDD spin-up. If it had an SSD out the back (it has two USB ports), I could mount a directory there from a Nextcloud container and back it up to deep storage on a weekly or monthly basis. At the same time, I may want the capacity of the main disks if Nextcloud turns out to be good for hosting the family’s photo archive.

The solution: use two “volume” structures like the ones I started looking at last week. In theory, they work similarly to the direct directory mounts I’ve been doing, but they are abstracted much like containers themselves. See my Tangent heading later on for more information.
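For anyone following along, here is a minimal sketch of the difference as I currently understand it. The image name and host paths are placeholders, and the volume location is simply where rootless Podman keeps named volumes by default:

# Bind mount: I pick and manage the host directory myself (what I've been doing).
podman run -d --name nextcloud -v /home/nextcloudUsr/nc-data:/var/www/html:Z docker.io/library/nextcloud

# Named volume: Podman manages the storage itself
# (under ~/.local/share/containers/storage/volumes for rootless Podman).
podman volume create nextcloud-data
podman run -d --name nextcloud -v nextcloud-data:/var/www/html docker.io/library/nextcloud
podman volume inspect nextcloud-data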

With this purpose in mind, we ordered a USB–SATA adapter and dug up our MineOS SSD from once upon a time. I archived around 411 GB worth of Minecraft worlds to free up space. I got the poor idea to try compressing it to both .tar.gz and .zip, two widely used formats which turn out to be implementations of the same underlying compression algorithm. GoldenOakLibry had ZIP, but not TAR, so I tarballed/compressed it to 393.5 GB in about an hour, while the NAS struggled for a full day to produce a 393.77 GB .zip. With savings that small, I’m probably best off curating it uncompressed – especially if I have family members interested in seeing it again.

RAID 5: A hard drive redundancy scheme resistant to a single drive failure. In my case: four matching disks with three drives’ worth of usable space.

TAR: Tape ARchive: An early archive tool often paired with the GNU Zip compression program.

To Do List

Regretfully, I have to split this topic just as it’s getting good. I was running a test to see if GoldenOakLibry can respond on the USB share without spinning up, but creating new network shares is not yet a skill I can perform reliably. I had it working once, rebooted, and now it won’t re-connect like the proven ones. If for whatever reason I can’t get GoldenOakLibry to share from MineOS’s SSD without spinning up, I’ll have to mount it internally to ButtonMash and play the BIOS game to disable booting to it.

Even if I were making good enough time with the USB share, I’d still need to study up on databases. The lightweight one included with the Nextcloud container I’m working with is meant for browser traffic only, and I want to try out its clients.

Tangent

I started with the working theory that volumes needed to be started and stopped like containers, and I would therefore need one of Podman’s signature pods to manage them alongside Nextcloud’s. It’s basically a container for organizing and running containers on a homelab scale. I rigged up a script to automate my attempts with it, but eventually realized that volumes are passive, and I had just learned the wrong tool for the job. I’m sure I’ll make use of it eventually.
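For the record, here is roughly what I was scripting, with made-up names; the pod owns the published port, and member containers share its network:

podman pod create --name cloudpod -p 8080:80
podman run -d --pod cloudpod --name nextcloud docker.io/library/nextcloud
podman pod stop cloudpod && podman pod start cloudpod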

Final Question

I’d like to open up the discussion. Two big choices remain: Where do I host my fast Nextcloud SSD (GoldenOakLibry or ButtonMash), and how do I host my archive pictures (Nextcloud, Mediawiki, something else)?

I look forward to hearing your answers in the comments below or on my Socials.

My Podman Containers Boot With Systemd

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am reasonably sure my Podman containers won’t be randomly going down anymore. Let’s get started!

I enjoy using Podman as a stand-in for Docker, but its rootless approach to running containers inherently challenges sysadmins facing Docker’s help and tutorial legacy. The most problematic difference I’ve experienced has been keeping containers running long-term. Months ago, I learned how to enable account lingering, which lets Podman containers keep running without someone staying logged in as their respective users. I’ve been living with manually restarting containers as needed. Well, since I decided to enable automatic security updates, starting containers automatically would be prudent before expecting other family members to rely on them.

Against all odds, my initial search this past Wednesday yielded a blog article from Red Hat about integrating Podman containers into Systemd [1] to start them at boot. It was posted the day before.

Podman and Systemd

I trust Red Hat to not post malicious commands, but it’s still a good idea to learn about strange commands before running them. Red Hat’s tutorial starts with making a new user, enabling linger, and running a containerized web server. The first important command I ran was

$ podman stop httpd && podman rm -a && podman volume prune

This command appears to thoroughly clean out Podman. I’ve mounted volumes from the host before to persist data, but there’s a more flexible volume structure I only just learned about while researching another section I had to spin off into a near-future post. I haven’t used them yet, but I’m sure they’ll be useful once I learn how.

$ podman generate systemd --new --files --name httpd

This command generates a new systemd unit file. The --new option recreates the container fresh each time it’s brought online. --files sends the configuration to a file instead of the terminal. --name must be the name of a running container or pod.

$ cp -Z container-httpd.service ~/.config/systemd/user/

The file generated previously goes in a directory where systemd will find it when used with the --user flag. The -Z flag sets the file’s SELinux context to match the destination directory. The tutorial finishes with a daemon-reload followed by starting and enabling the user’s service.
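If I read the article right, those last steps look something like the following (file name taken from the generate command above):

$ systemctl --user daemon-reload
$ systemctl --user enable --now container-httpd.service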

Takeaway

This is a resource for my bookmarks. That is all.

Final Question

I took the opportunity during this project to put a Minetest server on ButtonMash, but I’m having difficulty obtaining permissions. I can see its logs in Cockpit-Podman, but I don’t have access to the server command line. How am I supposed to get started with adminning Minetest?

I look forward to hearing your answers in the comments below or on my Socials.

Work Cited

[1] A. Oliveira, “Configure a container to start automatically as a systemd service,” redhat.com, Feb. 21, 2023. [Online]. Available: https://www.redhat.com/sysadmin/container-systemd-persist-reboot [Accessed Feb. 27, 2023].

My PiHole is “Half Baked”

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am installing PiHole. With luck, I’ll be configuring some of its other functions to augment my home’s network as well. Let’s get started!

PiHole, Take II

I can rant about the evils of Google ‘til boredom do its part. However, this search giant is somewhere between inconvenient and impossible to ignore, given its impressive list of “hobbies” from STEM projects to smartphones. It’s an open secret few care to think about that their empire is built off user exploitation. I installed ad blocker browser plugins over their aggression last presidential election cycle.

Earlier this month, I read about Manifest v3, the new browser-extension interface created by Google. Their precautions against spyware just so happen to cripple ad blockers, among other legitimate plugins. This walking conflict of interest is set to roll out in January 2023, and Firefox is going along with it.

When a browser loads a web page, it asks a DNS service to translate the page’s URL into an IP address. It then finds, loads, and renders the page at that IP. This may involve loading other pages –such as ads– as elements of the original page. Network ad blockers protect you by fudging the addresses of known bad URLs.
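A quick way to see that in action – assuming a PiHole answering at 192.168.0.2 and some domain on its blocklist – is to query it directly and watch the blocked name resolve to a dead-end address:

dig @192.168.0.2 ads.example.com
# A domain on the blocklist should come back as 0.0.0.0 instead of its real address.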

Objectives

My main goal this week is to kill ads across my home network. Follow-up objectives include advanced PiHole features and a private DNS for even better protection.

Night 1

My first attempt at PiHole was messy. I set up PiHole OCI/“Docker” containers across my two servers – ButtonMash and my old laptop. Like before, the main router skipped IP’s on me. I had it repaired within an hour thanks to that same laptop functioning as a workstation with a static IP. With the router upgrade to my upstairs workstation, I easily archived its settings and outfitted it with its own wider-network static IP – complete with a netmask wide enough to chase down its rogue counterpart should it shift again (Did I have the laptop’s static IP netmask configured incorrectly this whole time?!).

Surprise! The expanded subnet didn’t work because the rogue router had its own subnet mask I was outside of. The dance was too involved for a play-by-play, but I only really felt helpless while trying to avoid hiking around to different workstations to clean up after this failed networking spell. As I reassembled the router for normal operation, I reasoned out that my router’s firmware is hardwired not to accept a DNS server on a LAN address, which is what I’m trying to set up.

Flashing open source firmware is out of the question. For one, I wouldn’t know how to fix it and don’t have a replacement. Two: apparently its chipset manufacturer isn’t a fan of open source – the help thread I spotted recommended contacting OP’s government representative if he wanted to do anything about it.

Night 2

I did a bit of research before dismantling the network again. DHCP settings include an optional field for specifying DNS servers. This should let me direct computers straight to PiHole instead of relaying requests through a convoluted workaround involving a NAT table and possibly causing a network loop.
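On an OpenWrt router, my understanding is that this amounts to handing out PiHole’s address as DHCP option 6; something like the following, where the PiHole IP is a placeholder:

uci add_list dhcp.lan.dhcp_option='6,192.168.0.2'
uci commit dhcp
/etc/init.d/dnsmasq restart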

This means each router is now a separate task. The responsible thing to do now is ensure my subnet router can behave before working on the main one. It’s not long before I fry my DNS settings. Navigation around my local network remains unaffected, but I eventually resort to restoring my backup from yesterday, re-applying the static IP, and updating the backup.

My best bet from here is to finalize my PiHole install. My initial container creation was the absolute minimum: port 80 web interface, port 53/TCP+UDP. There’s a lengthy list of environment variables to browse.

A Few Days Later

Jackpot! My mind cleared enough before bed to skim PiHole Docker’s documentation on GitHub. It has a list of example deployments – including a shell script. I converted it for Podman, entered my environment variables, and –during debugging– axed the logic for relaying logs as it was causing problems and I can view them directly with Cockpit-Podman.

PiHole User

But where to land it? I’ll eventually integrate it as I master Caddy. Leaving the container running as root lets it use the proper ports, but I know better. Thanks to discoveries I spun off into last week’s project, I can now make more underprivileged, Cockpit-enabled users than I will ever need by using the loopback address range (127.0.0.1/8).

The run script was easy to copy over to my new PiHole user. I gave it the directories it wanted as mountable volumes and shifted ports around until I was happy. I took the time to tidy up my firewall, combining a couple related entries and reclosing the normal DNS port.
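My run script ended up along these lines – the ports, paths, timezone, and password shown here are stand-ins rather than my real values:

podman run -d --name pihole \
    -p 8053:80/tcp -p 5300:53/tcp -p 5300:53/udp \
    -e TZ='America/Los_Angeles' \
    -e WEBPASSWORD='ChangeMe' \
    -v /home/piholeUsr/etc-pihole:/etc/pihole:Z \
    -v /home/piholeUsr/etc-dnsmasq.d:/etc/dnsmasq.d:Z \
    --restart unless-stopped \
    docker.io/pihole/pihole:latest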

I remember having issues with Vaultwarden’s stability over the course of days to weeks. The problem was occasionally annoying as Bitwarden only requires its home server when modifying the password vault, but PiHole will be sorely missed the moment it goes down. The one place I found the solution was in the official Podman troubleshooting guide on their GitHub [1]:

loginctl enable-linger userName

I sadly could not verify this was my previous solution to Vaultwarden’s long-term issues, but it’s not entirely unfamiliar, and it’s my best-informed guess.

DNS Port Forwarding

With PiHole secured in its own, easily accessible account, I soon experienced how picky DNS requests are about using the privileged port 53. All my attempts at manually telling OpenWRT to use port 5300 failed. I expect the story will be the same if I try on my main router.

I found the solution where Woody from b-woody.com blogged about almost the exact same project last May [2]: forward port 53 to port 5300. Paranoid about goofing my firewall over the command line, I ran my version of Woody’s commands past r/TechSupport’s Discord channel. Moderator Donjuanal confirmed my omission of a trailing “:toaddr=”, but questioned my blind use of tcp, explaining how DNS clients default to udp for speed.

sudo firewall-cmd --zone=public --add-forward-port=port=53:proto=udp:toport=5300 --permanent
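For completeness, the TCP twin of that rule plus the reload that permanent rules need before they apply to the running firewall (my best understanding of firewalld, not a transcript of my exact session):

sudo firewall-cmd --zone=public --add-forward-port=port=53:proto=tcp:toport=5300 --permanent
sudo firewall-cmd --reload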

Even with this measure in place, I had to access the web console and tick Settings>DNS>Interface settings>Potentially dangerous options>Permit all origins before my local requests made it through. This may need to be addressed later.

Takeaway

I am so glad to have PiHole installed, even if it doesn’t appear to be doing much more than the uBlock Origin Firefox plugin. I’m researching the next segment though, and I estimate another week or more of work before it’s configured alongside a private DNS server. Worth noting is that Firefox is leaving in the features ad blockers require, despite potential security concerns. This is a good enough stopping point.

Final Question

Do you use PiHole? I’d be happy to hear about your experience.

I look forward to hearing your answers in the comments below or on my Socials.

Works Cited

[1] eriksjolund, “Podman Troubleshooting: A list of common issues and solutions for Podman,” github.com, Nov. 19, 2022. [Online]. Available: https://github.com/containers/podman/blob/main/troubleshooting.md [Accessed Jan. 30, 2023].

[2] Woody, “Run PiHole in a rootless Podman container,” b-woody.com, May 12, 2022. [Online]. Available: https://b-woody.com/posts/2022-05-12-pihole-on-a-rootless-podman-container/ [Accessed Jan. 30, 2023].

[3] Can You Block It, “Can You Block It? A Simple Ad Block Tester,” canyoublockit.com, 2021. [Online]. Available: https://canyoublockit.com/ [Accessed Jan. 30, 2023].

I Glitched Cockpit and Discovered Multi-user Login

Good Morning from my Robotics Lab! This is Shadow_8472 with a side project for the week. Let’s get started!

My mother needed an extra browser, so I installed Firefox and hardened it a little. I took the liberty of adding the Bitwarden plugin and encouraging her to make an account on my self-hosted instance. Remembering my failure so far to diagnose the “Network Error” blocking login, I spared the time to learn how new Bitwarden clients are slightly incompatible with old Vaultwarden servers.

I easily could have just updated Vaultwarden with maybe a note on the blog Discord. Instead, I felt like adding VaultwardenUsr@localhost to Cockpit with “Add new host.” This stunt worked at the cost of forwarding shadow8472@ButtonMash to VaultwardenUsr@ButtonMash when logging in. Relogging didn’t help, and the hosts list saw VaultwardenUsr both as the primary login – disallowing me from removing it – and as a remote login – blocking my attempts to add my real primary account back in with the same stunt.

While exploring this bug, I logged into my old laptop server and linked its Cockpit back into ButtonMash without getting forwarded to VaultwardenUsr. At this point, I submitted a bug report to Cockpit’s GitHub. I soon found the malformed host list at /etc/cockpit/machines.d/99-webui.json. I backed it up, purged the malformed entry, and updated GitHub with my workaround.

Out of curiosity, I added VaultwardenUsr@192.168.0.— as an alternate host. This sends packets on an extra detour, but it works as required. Only after all this did I update my Vaultwarden image from Docker Hub and deploy a new container from it using the same command as the last two successful times.

Note: While working on next week’s project, I logged into VaultwardenUsr@127.0.0.1 and other loopback IP’s with no problems. It’s just name@localhost that causes problems.

Takeaway

1 day for the win! My push for PiHole and supporting network projects has been intense lately, so it’s great to have a smaller project where I still learn while doing something important.

Final Question

Have you ever misused a software feature successfully? What challenges did you face before getting it to work how you had in mind?

I look forward to hearing your answers in the comments below or on my Socials.

Joining the Let’s Encrypt Help Forum

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am finding help towards getting an SSL certificate from Let’s Encrypt. Let’s get started!

The Time to Get Help

Manually setting up an HTTPS-secured service from your home is not beginner level by any stretch of the disillusioned imagination. In many ways, it reminds me of installing Linux for the first time. The system as a whole is irreducibly complex; multiple project-sized milestones rely on each other for usefulness, so I basically won’t see any results until I’m done.

So far, ButtonMash is running Rocky Linux 8. I have NGINX installed, but it can’t be properly configured to serve HTML over HTTPS until I have an SSL certificate. SSL certificates are available for free from Let’s Encrypt, but the process for getting and renewing them is reportedly labor intensive even once you do know what you’re doing. ACME (Automatic Certificate Management Environment) clients can automate this work, but the installation options alone are exhaustive.

Joining Let’s Encrypt’s Community

I have made a good faith effort to self-educate, but I’ve slowed down to the point where I feel like I’m posting the same thing week after week with dribbles of progress. The documentation has far exceeded my attention span. It’s time to look for help.

Let’s Encrypt –like many well-respected technical projects– has a designated community support forum [1]. It’s just not on Discord or some other platform I’m already on. After weeks of self-research, I made an account and started looking around.

Unsurprisingly, the people I found in such a niche community are more knowledgeable about all things related to security certificates. The more I talk about my project there, the more important concepts are brought to my attention. For example, I keep coming across terms I’ve seen before but have so far remained clueless about. When those come up in conversation, I look them up and only ask if I can’t find the answer in a reasonable amount of time.

3D Printing Corner

My brim decision is really backfiring now. I might even say it’s a worse idea than using a raft at this point. For what it’s worth, I made the time to glue a couple of those calibration cubes together. One drop, then press together. My father used a pencil on Sonic during a final dry fit to help with gluing the two halves together.

Side Project

My mother’s new sewing table has a fancy elevator platform to hide away her machine. This week, she got a power cord stuck in its mechanism where a couple of clips jammed against it and each other. Once we dislodged it, I was quick to find a 3D-printed solution to keep it from happening again [2]. I settled on a design aimed at holding phone chargers, but it was about the right size when I scaled it up to 200% and told the slicer to use solid infill on the clip. My father and I installed it under the elevator and used a couple of Velcro straps to lock the cords in so they don’t fall out.

Takeaway

I have never been excited about mastering a network backbone. It’s been one of those things that always feels simple enough to reach for, but complex enough to challenge my perseverance. I’m glad I’ve found a place that seems friendly enough.

Final Question

Certbot is the preferred ACME client, but there’s a list with dozens of them on it [3]. Someone name-dropped Caddy, but I’ve been studying NGINX. Have you gone through Let’s Encrypt before? If so, what ACME client do you use?

Works Cited

[1] Internet Security Research Group, community.letsencrypt.org, [Online]. Available: https://community.letsencrypt.org/ [Accessed Mar 25, 2022].

[2] TJH5, “Cable Holder,” thingiverse.com, Aug. 13, 2017. [Online]. Available: https://www.thingiverse.com/thing:2481258 [Accessed Mar. 25, 2022].

[3] Internet Security Research Group, “ACME Client Implementations,” letsencrypt.org, Mar. 6, 2022. [Online]. Available: https://letsencrypt.org/docs/client-options/ [Accessed Mar. 25, 2022].

Misadventures in Studying NGINX

Good Morning from my Robotics Lab. This is Shadow_8472, and today I am getting lost while exploring SSL certificates… again. Let’s get started!

Installing NGINX

At last count, I had about six or seven projects I should hook into NGINX, but most of them are on hold because I don’t want some stranger finding his way into my home network and rearranging things without permission. I set up Vaultwarden to manage its own HTTPS connections, and I learned a lot about what SSL is and how it works. But that is not a recommended configuration, and I want to learn the proper, more advanced way of doing things.

I ignored plenty of guides’ advice on my path to a Vaultwarden server. They recommend some sort of reverse proxy, and I’m currently exploring one called NGINX. I’ve come across quite the debate as to whether to use a container or install it natively. The tutorials for the container edition all use Docker, but I’m using Podman, and I’m uneasy about root permission nuances between the two projects making things needlessly more challenging, so I installed the package on ButtonMash.

sudo dnf install nginx

To confirm installation, I enabled the web server with a few systemctl commands and opened a port in ButtonMash’s firewall. NGINX now proudly displays its welcome page.
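I didn’t keep a transcript, but the commands would have been roughly these, assuming firewalld and the stock nginx service name:

sudo systemctl enable --now nginx
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload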

A Web of Dependencies

NGINX does not lend itself to solo study. It is a do-everything solution for networking. With so many use cases, from serving HTML pages to load balancing containers, I have spent weeks poring through tutorials without finding a keystone lesson for my use case. Some of that time was spent looking into some sort of web interface I falsely believed was included. See NGINX Proxy Manager vs. NGINX for details. I will stick with bare NGINX for now, mainly because NGINX Proxy Manager’s website ironically has an expired SSL certificate.

I got lost researching what I would need for a project this week. Proper HTTPS for Vaultwarden is a good choice of target. That will require an SSL certificate, and that means Let’s Encrypt. An SSL certificate requires either a domain name or a subdomain, so that means arranging one of those.

Somewhere along the way, I got sidetracked and visited this blog’s host’s cPanel in the interest of moving its SSL to Let’s Encrypt. The experience was unexpectedly surreal, like paging through a book written in a language I’m trying to learn – there was a flood of jargon, but the bits I recognized made for moments of satisfaction.

3D Printing Corner

I want to glue the Sonic figure I printed, but I’d just as soon have some experience gluing large, flat surfaces together before I go smashing a larger project together and hoping it sticks (literally). I had the idea to print up eight calibration cubes for practice. I tried some lower infill settings and got inferior, but adequate, results. My biggest complaint was how many tries it took to get a decent first layer. I had to settle for a couple of curling corners, but a perfect print wasn’t the goal anyway. Gluing will have to wait until next week though.

Side Project

Also on the topic of 3D printing, my mother has been into quilting as of late and she commissioned some more bias tape makers like the ones I made during the early stages of the pandemic. I found what I thought was the model on Thingiverse and its description linked a revision that folds it in half again. I used a spreadsheet to scale the model to a couple different sizes.

Takeaway

I feel like I am assembling a jigsaw puzzle without the box. Each piece must be studied and understood before placing it. Half the challenge is knowing what pieces need to be in place before it’s time to begin studying others. Placing more than one at a time is very difficult, but the HTTPS piece interlocks with so many others that its ecosystem doesn’t lend itself to the project-of-the-week format of study I have going on here.

Final Question

The most important lesson in tech is to know where to seek help. I had to seek out a new Discord server familiar with NGINX this week when I should have looked one up a week ago. How long does it take before you look for specialized help?

Installing NUT UPS Driver on Rocky Linux 8

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am installing the Network UPS Tool on my Rocky Linux 8 Button Mash server. Let’s get started!

A Package Exists

In a previous push on my Button Mash server, I talked about getting an Uninterruptible Power Supply (UPS) so ButtonMash could shut itself down in case of a power failure. If memory serves, I also talked about an open source driver called Network UPS Tools (NUT). At the time, I was under the impression it was exclusively available via source code and I would have to compile it to make it work.

I’ve recently suffered no fewer than four power outages since installing the UPS. A couple of long ones while everyone was in bed would have outlasted the UPS’s endurance had someone not been aware each time to gracefully shut things down manually. I want the process automated.

And so I started the grind. The first thing the installation instructions tell me is to check for a package. Sign me up!

dnf search nut

I got several results, but with such a simple package name, the letters n-u-t turned up many false positives. NUT’s companion packages come with names of the form: ‘nut-*’, so I often filtered with ‘nut-’. My refined searches remained empty.

Installing EPEL and NUT

If the backbone of a distribution is its package manager, repositories would be its ribs. Not every piece of software gets compiled and packaged for every architecture/package manager. I get that. It was a lesson I had to learn last time I played with optimizing MicroCore Linux and why I’m going with Arch if there ever is a next time.

When I learned NUT was widely available in package form, I went looking again in Rocky Linux’s dnf: still nothing. Debian has a nice package viewer [1], so I looked for something similar for Red Hat distros. I wanted to be sure I wasn’t missing something before concluding that no package existed for me. One such viewer exists, but I’d need to make an account. However, I found something even better for my purposes.

pkgs.org[2] is a website that lists packages organized by several different major distributions. I was quickly able to find NUT in the CentOS 8 section for the Intel CPU architecture, but not anywhere under Rocky Linux.

A closer look after hours of confusion introduced me to the EPEL repository (Extra Packages for Enterprise Linux). Apparently, it’s held in high regard among the Red Hat branch; many enterprise Linux users consider it almost mandatory to offset the smaller offering of the default repositories. I was uneasy about it at first because it showed up under the now-deprecated CentOS, RHEL’s former downstream, but EPEL is maintained by the Fedora community, which isn’t going anywhere for the foreseeable future: I’m calling it safe to use.

sudo dnf install epel-release
dnf search nut

NUT was then as simple to install as any other program from a repository.
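For completeness, that boils down to one more line, the main package simply being named nut:

sudo dnf install nut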

Side Project

Podman pranks again! While testing my Bitwarden login from my laptop, I got myself permanently logged out. I traced the problem back to my Podman container on ButtonMash getting corrupted during one of those power outages from earlier. I sent a discouraging error off to the search engine and found my exact issue on the Podman GitHub (see Works Cited) [3]. I wasn’t happy with the explanation, but it was the best one I found: systemd didn’t like an under-privileged user doing things without at least a recent login, so it messed with Vaultwarden’s Podman container. The messed-up container had to be forcefully deleted and remade. I also needed to remember to specify https:// when looking for the server via browser. To make sure it doesn’t happen again, I followed a piece of advice found later in the discussion and permitted the login to linger.
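In my case, that advice amounts to a single loginctl command run with root privileges, naming whichever account owns the container:

sudo loginctl enable-linger vaultwardenUsr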

Takeaway

I honestly expected this week’s progress to take at least a month. When I first looked into NUT, all I saw was source code ready to download and compile, and honestly, I’m having trouble getting excited about mastering the art of compiling other people’s code. If there’s a way to install via a compatible repository, I’m all for it.

I am especially thankful for pkgs.org [2]. They helped me reduce my problem to one I’ve at least blindly followed a tutorial for: you typically won’t find the full, non-free version of Chrome in default Linux repositories, so when I was setting up Mint for my father, I had to explicitly add a repository.

While NUT may be installed, configuration is not happening this week if I expect to understand my system when I’m done. I blitzed the first expected month of work and only stopped because the next bit is so intimidating. Here’s to a quick understanding within the next month.

Final Question

NUT has proved difficult to locate assistance for, as I haven’t figured out how to use their internal support system. Do you have any idea where I can find support for when I need it?

Works Cited

[1] Debian, “Packages,” packages.debian.org, July 2019. [Online]. Available: https://packages.debian.org [Accessed Jan. 10, 2022].

[2] M. Ulianytskyi, “Packages for Linux and Unix,” pkgs.org, 2009-2022. [Online]. Available: https://pkgs.org/ [Accessed Jan. 10, 2022].

[3] balamuruganravi, “rootless podman ERRO[0000] error joining network namespace for container #6800,” github.com, Jun. 2020. [Online]. Available: https://github.com/containers/podman/issues/6800 [Accessed Jan. 10, 2022].

Self-Signed Vaultwarden Breakdown

Good Morning from my Robotics Lab! This is Shadow_8472 and today, I am going over creating a self-signed certificate for my Vaultwarden. Let’s get started!

I’ve spent a long time trying to figure out proper HTTPS, but slapping on a solution and moving on without understanding the underlying workings doesn’t feel right. Right now, I don’t even have that much. As long as I learn something each attempt, that should be good enough. I’ll be following the tutorial from Vaultwarden [1] with commentary from censiCLICK’s video [2]. My commentary here will be largely guesswork based off those and associated manual pages [that I have no idea how to properly cite, but which are available by typing man <command> in most Linux terminals].

Step 1: Generate Key

openssl genpkey -algorithm RSA -aes128 -out private-ca.key -outform PEM -pkeyopt rsa_keygen_bits:2048
openssl genpkey

This base command generates a private key for OpenSSL.

-algorithm RSA -aes128

RSA and AES-128 are the cryptographic algorithms involved in generating the key. RSA is a public/private key system, and AES is a powerful single-key (symmetric) algorithm. Here, RSA generates the key pair itself while AES-128 encrypts the resulting private key file behind a passphrase so it isn’t sitting around in plain text.

-out private-ca.key -outform PEM

These flags specify where to save the key after it’s generated and what format to save it as.

-pkeyopt rsa_keygen_bits:2048

(Private KEY OPTion) This flag lets you pass options to the key generator algorithm, in this case telling RSA to use a 2048-bit key.

Step 2: Generate Certificate

openssl req -x509 -new -nodes -sha256 -days 3650 -key private-ca.key -out self-signed-ca-cert.crt
openssl req

(REQuest) This command deals with certificate requests. In this case, it’s generating a certificate itself, but as the name implies, it’s aimed more at requesting them from an authority.

-x509 -new -nodes -sha256 -days 3650

-x509 specifies that this root certificate will be self-signed. The -days flag sets it to expire in ten years (minus leap days). The -new flag has the user fill in some additional information for the certificate, -nodes leaves the private key unencrypted, and -sha256 selects the hash function used in the signature.

-key private-ca.key -out self-signed-ca-cert.crt

These final flags are I/O. -key loads the key generated by the previous command, and -out names the resulting certificate file.

Step 3: Preparing to Sign

openssl genpkey -algorithm RSA -out bitwarden.key -outform PEM -pkeyopt rsa_keygen_bits:2048
openssl req -new -key bitwarden.key -out bitwarden.csr

These commands are similar to before, but for Bitwarden. They lack the components needed to make a root certificate authority. There’s also some sort of special configuration file I’m not looking to break down, but it’s available on Vaultwarden’s GitHub [1].

Step 4: Signing the Certificate

openssl x509 -req -in bitwarden.csr -CA self-signed-ca-cert.crt -CAkey private-ca.key -CAcreateserial -out bitwarden.crt -days 365 -sha256 -extfile bitwarden.ext

Finally, it’s time to bring everything together to sign the certificate. Many of these flags are familiar from previous commands. Reading through it, it feels like the last stop to make sure all your papers are in order. Some operating systems are rightfully cautious about certificates signed for an overly lengthy time.

From here, it’s a matter of starting the Vaultwarden container with its new certificate and assuring whichever browsers you’re using that you trust the new certificate authority [2].

Practice to Practical

I’m glad I took the time to study this a little more closely than blindly following instructions this time. When using openssl req, I was able to confidently regress by deleting a few files so I could give different common names to the root CA and Vaultwarden’s certificates respectively.

The next challenge was successfully launching the Podman container. Following along with the censiCLICK tutorial, I had three new flags relative to last time I was working with Podman. One was to restart the container unless stopped (no elaboration provided).

The second flag tripped me up. I confused a pair of default SSL certificates for the self-signed ones required later on, bitwarden.crt and bitwarden.key, created in earlier steps. I copied those two files into their own Podman-mountable directory. Once again, I added the :Z flag to tell SELinux it’s OK.

-e ROCKET_TLS='{certs="/ssl/bitwarden.crt",key="/ssl/bitwarden.key"}'

The final flag sets an environment variable as the container finishes starting. This particular one tells Vaultwarden where the files are for encrypting HTTPS. If they aren’t there – as I found out while I was still sorting the system certificates – something inside the container shuts it down; it was not a fun combo with the restart-unless-manually-stopped flag, as I had trouble removing the container so I could create a new one for my next attempt. I knew I was done when podman ps returned a container running for longer than a second or two…

…or so I thought. I went to import my root certificate authority to Firefox, and I still can’t connect even when specifying https://<ButtonMashIP>:44300.

Long Story Short:

podman run -d --name vaultwarden --restart unless-stopped -v /home/vaultwardenUsr/<path/to/vw-data>/:/data/:Z -v /home/vaultwardenUsr/<path/to/private/certs>/:/ssl/:Z -e ROCKET_TLS='{certs="/ssl/bitwarden.crt",key="/ssl/bitwarden.key"}' -p 44300:443 vaultwarden/server:latest
Edit Jan. 6 2022: Vaultwarden listens on port 80, so I'm using -p 44300:80 now. And when you go to verify in a browser, be sure to use https:// or you get "The connection was reset".

This is my current command to generate a Vaultwarden container with Podman and no root privileges. In the end, the only major differences with Docker containers are the paths to mount the volumes Vaultwarden needs from the host machine and the :Z flags for SELinux. Currently, I’m not able to establish a secure connection. I have a help request out, and will edit if I get an update later today, otherwise, I already know what next week’s side project will be.

Side Project

Thursday held a startling surprise as a new zero-day exploit appeared affecting Minecraft, among other things. I must have found out within a few hours of it going public. After doing my research and checking sources, I concluded it was real and with the help of tech support, I was on a patched version of Paper within an hour or so of finding out.

Log4Shell (as this one has come to be called) is scary both because an attacker can take full control of a vulnerable computer and how common vulnerabilities are. On the other hand, once such exploits go public, things get updated pretty fast.

Here is the best article I’ve seen as of about ten hours of the exploit going public: https://www.lunasec.io/docs/blog/log4j-zero-day/

The moral of this story is to keep your software up to date, especially if you see any big stories about computer security.

Takeaway

All the HTTPS literature I found appears to be aimed at either the curious pedestrian or the seasoned system administrator. This made it very difficult for someone at an in-between level of understanding. On a personal note, I learned that pressing the / key while in a man page lets me search the document, a feature I really wished I knew about two years ago.

One important critique I’d offer the censiCLICK video is that the tutorial dumps everything straight into the home directory, and no effort was given to changing default usernames/passwords, which I would consider very important for a monolithic tutorial.

Final Question

Have you ever had a project fight you to the bitter end?

Works Cited

[1] “Private CA and self signed certs that work with Chrome,” github.com, [Online]. Available: https://github.com/dani-garcia/vaultwarden/wiki/Private-CA-and-self-signed-certs-that-work-with-Chrome [Accessed Dec. 13, 2021].

[2] censiCLICK, “Full Guide to Self-hosting Password Manager Bitwarden on Raspberry Pi,” YouTube, Nov. 15, 2020. [Online video]. Available: https://www.youtube.com/watch?v=eCJA1F72izc [Accessed Dec. 13, 2021].

I’m Learning Vaultwarden and Podman!

Good Morning from my Robotics Lab! This is Shadow_8472, and today –with a heap of luck– I’ll be putting a Bitwarden server on ButtonMash (or getting so close I can’t help but finish next week). Let’s get started.

Vaultwarden

I’ve already talked about the importance of password strength before. Longer is better, but a unique password per login is more important in case one gets compromised. But who has the attention span to remember fifty passwords across every obscure site, app, or game he’s ever interacted with? A good password manager solves this by organizing your passwords so you can easily access them from a client, but anyone without your key can’t.

I started researching for this project by revisiting the first time I switched to Bitwarden, when I decided to self-host a server on a Raspberry Pi [1] following a straightforward tutorial by censiCLICK [2]. My SD card corrupted one day, and I’ve been without a password server ever since, despite efforts to repair it. I’ve been covering my exploration of Rocky Linux, a RHEL-family OS, on my ButtonMash server/workstation, and now I’m ready to start putting it to work.

The tutorial by censiCLICK was well presented. It takes you from a Raspberry Pi 3B+ and layers on Raspberry Pi OS, Docker, and finally Bitwarden_RS, all while giving basic introductions to skills you’ll need along the way like SSH and security certificates. It is unfortunately out of date. Around six weeks after I started using it, the project leader announced that there was some confusion over trademark [3], so he was renaming it to Vaultwarden…

Odd… Looking through my posts shortly after the name change, I was already having issues with my Bitwarden server. It could still have been card corruption or me trying to play with Git. I guess I’ll never know…

…In any case, ButtonMash is ready for the next step.

Docker or Something Else?

Docker is a technology I still haven’t fully visualized. While researching instructions to install it on Red Hat systems, I stumbled across a mention of Podman. Online hosting provider Liquid Web provided a decently clear explanation [4]: containerization essentially makes single-purpose VMs without the overhead of full operating systems. Docker has a master process that runs Docker containers. Podman runs containers separately, doesn’t require root, but leans on a separate piece of software called Buildah to build images, and doesn’t have professional support available.

Further research confirms that Red Hat now endorses Podman over Docker, so Podman I will use. Even so, I had to install it separately along with a Cockpit plugin to manage it. From there, I made just a few well-researched clicks to download Vaultwarden. The Cockpit Podman plugin had a lot of fields I didn’t recognize, so I installed Docker’s hello-world container to play with. I had to run it from the terminal, but it appeared to work. I expect running a Vaultwarden container will be my side project next week.
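That terminal test was a one-liner, pulling the classic test image by its full registry path so Podman knows exactly where to look:

podman run docker.io/library/hello-world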

Side Project

Last week for my side project, I set up a Wi-Fi gaming router to hopefully reduce downtime on my Wi-Fi catcher Pi. This week, I made the two get along. First, I thought it might be Wi-Fi drivers, so I updated, getting myself into a tedious cycle of incomplete updates failing when the file system flipped to read-only against the background of Wi-Fi dropouts. I had to flip the power switch because the reboot command broke and reconfigure packages to clean things out before continuing.

My real problem was the static IP landing outside the router’s 192.168.X.X range. Attempts to manually change the IP kept failing, so I ended up backing a known-good config file up over the top of the file I actually needed to go back to a dynamic IP, and spent many hours piecing it back together. In the end, I was finally able to connect.

Takeaway

Polished computer tutorials are great for catapulting students of tech over barriers of entry, but they’re each anchored to a fixed point in time: lessons of the recent past compiled for the near future. As much of an accomplishment as making a definitive guide to subject X might be, it will only ever be a single focus point for future users to look back on when compiling their own procedures.

Final Question

Have you ever gone back to old project notes for insights for follow up projects?

Works Cited

[1] Shadow_8472, “BitWarden: My New Password Manager,” Let’s Build Robotics With Shadow8472, March 15, 2021. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2021/03/15/bitwarden-my-new-password-manager/ [Accessed Nov. 22, 2021].

[2] censiCLICK, “Full Guide to Self-hosting Password Manager Bitwarden on Raspberry Pi,” YouTube, Nov. 15, 2020. [Online video]. Available: https://www.youtube.com/watch?v=eCJA1F72izc [Accessed Nov. 22, 2021].

[3] D. Garcia, “1.21.0 release and project rename to vaultwarden #1642,” GitHub, Apr. 19, 2021. [Online forum]. Available: https://github.com/dani-garcia/vaultwarden/discussions/1642 [Accessed Nov. 22, 2021].

[4] Liquid Web, “Podman vs Docker: A Comparison,” Liquid Web, Sept. 10, 2021. [Online]. Available: https://www.liquidweb.com/kb/podman-vs-docker/ [Accessed Nov. 22, 2021].

ButtonMash’s Solid Foundation on Rocky Linux

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am still working on my Rocky Linux server. Let’s get started!

Project Selection

One would think it shouldn’t take a month to set up a server, but the vast bulk of that is research. What all do I want the server to do? What services do I need to set up to include it? When I know more than one way to do something, which way do I want to do it? The questions don’t end until work has progressed beyond the point of answering differently.

My goal for today is to get a few things running: I want to mount the GoldenOakLibry NFS server. I want to update-grub so I can properly dual boot with Debian. I want to install BitWarden. These three things are probably the most important end-goal tasks remaining for configuring ButtonMash’s Rocky install.

Package Managers

Before I can really work on my target goals, I need to know some of the basic specifics. Every major branch has its own compatible package managers. Debian has DPKG and Apt (plus Snap for the Ubuntu sub-family) while Arch has Pacman and the AUR. Wrappers and cross-compatibility tools exist as a zoo of possibilities that will not be fully sorted out here today.

My first impression as I research the Red Hat branch’s solution is the striking parallels to Debian, though it is also experiencing a stir. RPM (Red Hat Package Manager) is like DPKG in that it works directly with individual package files. YUM (Yellowdog Updater, Modified) was the Apt-like package manager I’ve been hearing about in association with the branch. It has since been replaced by DNF (DaNdiFied YUM) for installing Package X and everything Package X needs to run (called “resolving dependencies”). Both YUM and DNF are present on my install, though.
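In practice, the day-to-day commands map over almost one-to-one; a rough comparison from my notes, with nano as a stand-in package:

# Debian family:
sudo apt update && sudo apt upgrade
sudo apt install nano
# Red Hat family (dnf refreshes its own metadata):
sudo dnf upgrade
sudo dnf install nano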

Cockpit

I’ve had a chance to look over this web interface that came with Rocky Linux. By default, there doesn’t appear to be much to it after logging in beyond information readouts, an interactive firewall, and most importantly: an in-browser terminal. There appears to be a whole ecosystem to learn about, but it’s beyond me for now. I will want to look deeper into this subject when I move in to disable password authentication over the network.

Note about the terminal: it’s a little quirky from sharing its inputs with the browser. Nano’s save command also tells Firefox to “Open,” and copy-paste commands don’t always work the same.

NFS Mount

From experience, I know that NFS is a royal pain to learn how to set up. On top of that, I know of at least two ways to automount network drives: during boot with fstab, and dynamically with systemd. Mounting it with fstab is annoying on my laptop because it halts boot for a minute and a half before giving up if GoldenOak is unreachable. More annoying is that this appears to be the more well documented method between the two. For an always-on server, though, it may not be a concern.

Not helping systemd’s case is the additional way, or ways, I’m discovering to set up its automount functionality. I don’t even know the proper name for the method I’ve used before – just that I didn’t mess with /etc/fstab, whereas another systemd method does. It is a great challenge finding a source that compares more than a single mounting method. The good news is that aside from installation, I should be able to disregard what distro a tutorial was intended for.

While researching this section, I rediscovered autofs and saw mention of still other automount methods. I’m avoiding autofs because the more I read about it, the more complex it appears. In this instance, it might behoove me to just leave a line in /etc/fstab because I don’t expect to be booting this server outside the context of the GoldenOak NAS, but as this is more or less the centerpiece of my home’s network, I’m going with systemd mount files, as per the blog by Ray Lyon I referenced last February when I first learned about them. I’ll leave a link to his post in my Works Cited [1].
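As a sketch of where I’m headed (the hostname, export path, and mount point are placeholders, and the unit file names have to match the mount point), it comes down to a pair of unit files, one describing the mount and one telling systemd to mount it on demand:

# /etc/systemd/system/mnt-GoldenOak.mount
[Unit]
Description=GoldenOakLibry NFS share

[Mount]
What=goldenoaklibry.lan:/volume1/share
Where=/mnt/GoldenOak
Type=nfs

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/mnt-GoldenOak.automount
[Unit]
Description=Automount GoldenOakLibry NFS share

[Automount]
Where=/mnt/GoldenOak
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target

Enabling the .automount unit instead of the .mount one (systemctl enable --now mnt-GoldenOak.automount) should keep boot from hanging when the NAS is unreachable, since the share only mounts when something actually touches the path.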

NFS automount is tricky stuff, but each time I study it, I retain a little more. I can barely remember how to mount a share manually – let alone configure systemd automounts. It took me several days to find a copy of the files I needed, even after looking back at my above-mentioned post from February [2]. My best guess is that I got lost in my own filesystem. I’m taking notes and organizing them in my home directory on this new install.

Update-Grub

When I installed Rocky Linux, I was all nice and safe by not letting it see any drives it wasn’t installing over, but the host machine still has a job to do on the photo trunk project; I need it to dual boot. I read up on a command called update-grub I could just run once everything was installed and physically reconnected. First of all, update-grub is a script, and second of all, it’s notoriously absent.

A variety of help topics exist on what command to run on RHEL instead of update-grub. From what I can tell, it’s pretty universally present on Debian-based systems and when I checked Manjaro (Arch family) just now, it was there too.

Update-grub itself is pretty simple. It’s three lines long and serves as an easy-to-remember proxy command to actually update your GRUB boot loader. The exact command it wraps may differ between computers depending on whether they boot with BIOS or the newer UEFI. I assume it is generated during package installation.

Once I had my bearings, it was fairly easy to update GRUB on my own. I found my configuration file at /boot/grub2/grub.cfg because I am using BIOS. An effectively empty directory stump existed for the UEFI branch, cluing me in that this operation is one you should understand before copy-pasting it into a terminal. This StackExchange thread has several individual explanations, including reference to what I take to be a catch-all I am not using [3].
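On a BIOS install like mine, the underlying command boils down to something like the line below; UEFI systems point the output at a grub.cfg under /boot/efi instead:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg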

So… I go to verify everything is working, and it’s not. A simple reboot loaded Rocky’s GRUB, but the Debian kernel refused to load over the USB 3 PCI card. So much for that idea. I moved the Debian drive to a motherboard USB port, and the BIOS found it and loaded Debian’s GRUB, which doesn’t know about Rocky Linux. I tried running update-grub in Debian and… it didn’t work. I wasn’t looking to spend even more time on this part of the project, so after confirming that Rocky’s GRUB could boot Debian, I went into the BIOS and told it to prefer the internal Rocky drive over anything on USB.

BitWarden False Alarm

I’m super-excited about putting my self-hosted BitWarden server back up. I’ve already started researching, but the topic still feels like it’s expanding when I need to be getting ready for publishing this already lengthy post full of amazing progress. BitWarden will need to wait until I can better teach myself how to properly take care of it.

Takeaway

The Red Hat branch of Linux is in a notable state of flux. Key fundamental elements of the family like CentOS and YUM are everywhere in old tutorials, and that is bound to make for a frustrating time trying to learn Red Hat for a while to come – especially if you’re new to Linux. Here, more than anywhere else, learning the history of the branch is vital to teaching yourself how to sysadmin.

Side Project

A while ago, I thought Derpy’s RAM was failing because Kerbal Space Program kept crashing the whole system. I’ve been running the three 4 GB sticks on my Manjaro workstation for a month or two, and they appear fine. In the meantime, my father ordered up a pair of 8 GB sticks. This week, I installed them, displacing one of the 4 GB sticks. Passive testing will now commence.

Final Question

Have you ever had a project take a discouragingly large amount of research time then suddenly come into focus in a single day?

Works Cited

[1] R. Lyon, “On-Demand NFS and Samba Connections in Linux with Systemd Automount,” Ray Against the Machine, Oct. 7, 2020. (Edited Aug. 8, 2021). [Online]. Available: https://rayagainstthemachine.net/linux%20administration/systemd-automount/. [Accessed Nov. 7, 2021].

[2] Shadow_8472, “Stabilizing Derpy Chips at Last,” Let’s Build Robotics With Shadow8472, Feb. 22, 2021. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2021/02/22/stabilizing-derpy-chips-at-last/ [Accessed Nov. 7, 2021].

[3] “Equivalent of update-grub for RHEL/Fedora/CentOS systems,” StackExchange, Aug. 26, 2014-Oct. 10, 2021. [Online]. Available: https://unix.stackexchange.com/questions/152222/equivalent-of-update-grub-for-rhel-fedora-centos-systems [Accessed Nov. 7, 2021].