Squashing All My Computers into One: Part 1

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am centralizing storage across my several computers. Let’s get started!

Computer Drift

One of my favorite things about Linux is exploring the possibility space of what a computer operating system can look like. But maintaining multiple workstations can and will leave you wondering where that one picture is saved or whatever happened to that document you know you filed under blog drafts. I have no fewer than three computers –four or more if you count my laptop and ButtonMash as separate despite their common install, or count my dual-booted machines– so it’s high time I consolidate my computers’ respective identities to reflect me as a single user, especially given my access to GoldenOakLibry, the family network storage.

Project Overview

One would think the process would be as simple as dumping everything into a central location and spreading it back around, garbage and all. Alas, subtle differences in installed programs, or in versions of the same program, make this approach unsuitable.

My best bet will be to think backwards. Not everything will be shuffled around; directories supporting install-specific programs should stay on their specific computer. Backups for such files are fine, but I can accidentally damage other instances if I’m not careful. I’ll need to tailor a number of Rsync commands and schedule them to run automatically with Cron. As this topic is basically day-of filler while I work on a larger project, the full job is a little out of my scope for today.

My goal for today is to make a backup I can operate manually and later automate. If things go well, I can see what I can do about Rsync, but Cron will need to wait for another day.
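
Something like the following is the kind of command I have in mind for the manual pass, with a cron line sketched in for later. All the paths here are stand-ins, and mounting GoldenOakLibry at /mnt/GoldenOakLibry is an assumption, not my actual layout:

    # Mirror Derpy's Documents to the NAS, preserving permissions and timestamps.
    # --dry-run first to see what would change; drop it to actually copy.
    rsync -avh --dry-run /home/shadow8472/Documents/ /mnt/GoldenOakLibry/backups/derpychips/Documents/

    # Once trusted, a crontab entry (crontab -e) could run it nightly at 2 AM:
    # 0 2 * * * rsync -a /home/shadow8472/Documents/ /mnt/GoldenOakLibry/backups/derpychips/Documents/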

GUI File Transfer

The terminal is an important skill to have when managing a Linux ecosystem of multiple computers. However, some things, such as managing picture files, inherently work better in a graphical file manager. While preparing to write today, I noticed that places like my respective Downloads directories have gotten quite messy after a few years of Linux.

I wasn’t the biggest fan of jumping workstations all day, so I searched for a way to have the Dolphin file manager operate over SSH. The first result to catch my attention was FISH (Files Transferred over SHell protocol). SFTP (SSH File Transfer Protocol) fills a similar computing niche. Each would be an interesting research topic, but for my purposes today, they both work equally well as long as SSH is configured to use authentication keys.
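
In practice, that just means typing a remote URL into Dolphin’s location bar once the keys are in place. A minimal sketch (hostnames and usernames below are placeholders, not my real ones):

    # On the local machine: generate a key pair and copy it to the remote box.
    ssh-keygen -t ed25519
    ssh-copy-id shadow8472@buttonmash

    # Then either of these in Dolphin's location bar opens the remote home directory:
    #   fish://shadow8472@buttonmash/home/shadow8472
    #   sftp://shadow8472@buttonmash/home/shadow8472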

Derpy’s Backup

The easiest place to start would be my DerpyChips workstation, as that’s the one I’m working from to begin with. Documents was fairly easy to clean out. I had some blog drafts and some other stuff I sorted into proper places on the drive.

The dreaded Downloads directory was relatively tame on Derpy. Nevertheless, I still spotted elements from at least four distinct projects ranging from incomplete to long done or abandoned. I even found an instance of GraalVM I may have been running straight from Downloads. My goal is an empty directory. If it will update before I need it again or I won’t need it ever again, it’s gone. If I’m unsure, I’ll find another home for it. I similarly emptied out any directory intended for file storage. Pictures was simple this time, but I expect I’ll need a more elaborate structure once I start trying to organize additional computers’ worth of memories.

ButtonMash’s Backups (Debian and MineOS)

Things were a little more interesting when I started moving things over from ButtonMash. At first, I set a Dolphin instance up with ButtonMash’s home on the left and its view of GoldenOakLibry on the right, but when I got a warning about not being able to undo a delete, I thought twice. I had a deletion accident last phase that I recovered with an undo action, so it’s Derpy’s view of the share on the right instead.

I was right about needing to take pictures slowly on this one. Some pictures fit better in with my blog, while memes I felt worth saving went in their own directory within the more general Pictures one. But I don’t need copies of everything everywhere if I can just access the drive. Possibly just my favorite desktop wallpaper and my avatar, if that. I made a directory for those two and any others I may want to spread around.

A file manager over SFTP understandably has limitations. Not all files can be opened directly –particularly audio files– and some image files don’t render previews. When I try to preview an archive, it must first be copied over as a temp file.

I had another accident while moving some old Python projects over. For whatever reason –be it permissions or simple corruption– some files didn’t copy over cleanly. I fiddled with it a little more, then gave up and deleted both source and destination, since I expect another copy was made when I cloned my laptop to its internal drive.

Thanks to this blunder, though, I was more careful when it came to the family’s Minecraft servers from when we were running MineOS. I encountered an error while copying, so I reverted to rsync directly from ButtonMash. Even then, I had to elevate permissions with sudo to finish the job.
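
For the record, the working approach looked something like this. The source path is roughly where MineOS keeps its servers by default, and the destination is wherever the NAS share is mounted; both are stand-ins here rather than my exact paths:

    # Run from ButtonMash; sudo lets rsync read server files owned by other users.
    sudo rsync -avh --progress /var/games/minecraft/ /mnt/GoldenOakLibry/backups/mineos/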

Takeaway

I’d like to say I’m somewhere around halfway to my goal for today, but if I am to take this task seriously, I’ll need to go back farther and reintegrate any old backups I may have lying around. By that count, I have at least eight computers to consider – more if I count Raspberry Pis and any recursive backups I may find.

In some ways, this project is not unlike my experience with synchronizing Steam games manually, but on a larger scale. I’m having to re-think my structure for what I want backed up where as well as how I’m planning to access it. This is not a simple grab and dump.

Final Question

Have you ever made a comprehensive and accessible backup of all your computers, present and surviving?

Family Photo Chest Part 13: Early Prototype Workflow

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am merging a couple projects into one and hoping they stick. Let’s get started!

Overview

I’ve been piecing bits of my photo trunk project workflow together for way too long now. Right now, the architecture is looking like I’ll be scanning sets of pictures to a directory on a Network Attached Storage system, then using a cluster of dedicated microcomputers from another unfinished project to separate and deskew the raw data into individual files. These files will then live permanently on the NAS.

Progress is rarely linear, though. My end goal for this week hopped around quite a bit, but in the end I felt like I did nothing but figure out how not to proceed: groundwork with no structure. Routers are hard when you’re trying to learn them on a schedule!

Lack of Progress

In a perfect world, I would have been well on the way to configuring a cluster node by now. In a less unreasonable one, I would have had OpenWRT on my Pi 4 ready to support the cluster. Late-cycle diagnostics chased me into an even more fundamental problem with the system: Wi-Fi connectivity.

During diagnostics, I’m learning about how different parts of the system work. Physical connection points can be bridged into a single logical interface, and an Ethernet cable can carry separate IPv4 and IPv6 connections. I can’t configure the Ethernet (on either logical interface) the way I want because that’s how I’m connecting to the web UI and SSH. I end up stuck using two computers besides the two router Pis (OpenWRT and a Raspbian hack router that actually works) because I don’t like switching my Ethernet cables around on the switch, but I need to do that anyway whenever I have to copy a large block of text. In short: the sooner Wi-Fi gets working, the better.

I understand I am essentially working with a snapshot. It’s been tidied up a bit, but bugs still exist. Wi-Fi is apparently one of those things that’s extra delicate; each country has its own regulatory domain, among other complexities. On the other hand, I don’t know if that’s actually the case, as diagnostics are ongoing.

Takeaway

I’m probably going to work on this one in the background for a while. The OpenWRT help forum’s polish at least in part makes up for routers being dull to learn. If it takes too long, I do have other projects, so I may need to replace the cluster with a more readily available solution.

Final Question

Have you ever had upstream bugs that kept you from completing a task?

Stabilizing Derpy Chips at Last

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m addressing an annoying trio of issues I’ve had with Derpy Chips since I installed PopOS on it. Let’s get started!

The Problems

I have a number of gripes to myself about Derpy. I frequently have to stare at an ugly, gray login screen for up to a minute and a half before I can select a user account. Tabs sometimes crash in Firefox, but only while I’m using it. Discord sometimes blinks, and I lose any posts in progress – possibly representing minutes of work.

Additionally, my mother uses a separate account to play Among Us on Derpy, and I have my account set up with a left-handed mouse she can’t use easily. Unfortunately, Derpy tends to crash whenever I try switching users, so I’ve been doing a full power cycle instead. And that means another long, featureless login screen before the actual login. Someday, I really want to figure out how to change a login screen. Aside from how long this one takes, I’d much rather use the KDE one over GNOME 3.

The Plan

Of the three issues I’m setting out to address, long login is the most reproducible. Fickle FireFox and Ditzy Discord happen often enough to make Derpy frustrating to use as a daily driver, but sporadically enough to resist debugging on-demand. So I am planning on spending up to the full week on Derpy ready to catch the errors when they happen.

Going off what I have to start with, I’m assuming my Firefox and Discord issues are related. Both use the Internet for their every function, and the glitching tends to happen at times when a packet is logically being received: for Firefox, when a page is loading or reloading, and for Discord, when someone is typing or has sent a post. If I had to hazard a guess, I would say Lengthy Login is directly caused by my NFS share being mounted in /etc/fstab, and I’m not sure there’s anything to be done about it except working the surrounding issues.
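
For reference, the kind of line I’m talking about looks roughly like this (server name and paths are placeholders). With a plain hard mount, boot can stall waiting on the server to answer:

    # /etc/fstab - NFS hard mount; boot waits on the server unless options such as
    # noauto,x-systemd.automount (or soft/bg) are added.
    goldenoaklibry:/volume1/share  /mnt/GoldenOakLibry  nfs  defaults  0  0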

For this week, I am reaching out to the Engineer Man Discord and a Mattermost community I found for PopOS. I don’t know much about the latter, but I heard the PopOS dev team frequents that forum.

The Research

I started by posting about my issues. Help was super slow, and I often got buried. I don’t remember any of my self-research making any sense. Anyone helping me in the PopOS support chat seemed obsessed with getting me to address Blank Login first, even though it would be the least annoying of my three chosen issues if only the other stuff didn’t bug out on me.

Someone gave me a journalctl command to check my logs, and I did so shortly after a target glitch. It came back with a segfault error of some kind. I added this to my help thread and humored them about disabling my NFS fstab lines.
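
I no longer have the exact invocation on hand, but it was along these lines (a sketch, not necessarily the command I was given):

    # Show this boot's log entries at priority "error" and worse:
    journalctl -b -p err
    # Or follow the log live and wait for the next glitch:
    journalctl -f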

RAM or Motherboard?

When researching further for myself, I came across a number of topics I didn’t understand. I didn’t make any progress until someone told me to try memtest86+. What a headache! I installed the package, but had to dip into GRUB settings so I could boot into the tool. Even then, it kept crashing whenever I tried to run it with more than one stick of RAM at a time, as in the whole thing froze within 8 seconds save for a blinking + sign as part of the title card.
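
On Debian-family systems the dance goes roughly like this, assuming GRUB is the bootloader in play, as it was here (a sketch, not my exact steps):

    sudo apt install memtest86+   # install the memory tester
    sudo update-grub              # regenerate the GRUB menu so it lists memtest
    # Reboot, bring up the GRUB menu, and pick the memtest86+ entry to run the test.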

I was hoping at this point it was just a matter of reseating RAM. Best case: something was in there and just needed to be cleaned off. Worst case: a slot on the motherboard might have gone bad, meaning repair might be one of tedious, expensive, or impossible.

I tried finding the manual of Derpy’s motherboard, but the closest was the one for my personal motherboard, a similar model. Both come with 4 slots of RAM: two blue, two black. I used the first blue slot to make sure each stick of RAM passed one minute of testing, followed by a full pass of testing, which typically took between 20 and 30 minutes. I wasn’t careful with keeping my RAM modules straight, in part because I helped clean my church while leaving a test running.

I identified the fourth stick, which I’d mixed up with a previously tested one, by how it lit up the error counter starting just past one minute in. I tried reseating it several times, with similar results: the same few bits would sometimes fail when either reading or writing. If I had more time, I would have had a program note the failing addresses and checked whether they were the same on each pass as the errors kept adding up.

Further testing on the motherboard involved putting a good stick of RAM into each slot. Three worked, but one of the black slots refused to boot, as did filling the other three slots at once. I ended up leaving one blue slot empty for a total of 12 out of 16 gigs of RAM.

NFS Automount with Systemd

I still want relatively easy access to the NAS from a cold boot. “Hard mount in fstab has quite a few downsides…” cocopop of the PopOS help thread advised me. Using the right options helps, but ‘autofs’ was preferred historically and systemd now has a feature called automounts. I thought I might as well give the latter a try. cocopop also linked a blog post On-Demand NFS and Samba Connections in Linux with Systemd Automount.

I won’t go into the details here, but I highly recommend the above linked blog. It didn’t make sense at first, but after leaving it for a day, my earlier experience with fstab translated to this new method within the span of about an hour total. Aside from missing an instruction to enable the automount once configured, it felt almost trivial.
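
For my own notes, the shape of it is a pair of unit files named after the mount point. Everything below is a sketch with placeholder names, not the blog post’s exact content:

    # /etc/systemd/system/mnt-GoldenOakLibry.mount
    [Unit]
    Description=NFS share from GoldenOakLibry
    [Mount]
    What=goldenoaklibry:/volume1/share
    Where=/mnt/GoldenOakLibry
    Type=nfs
    [Install]
    WantedBy=multi-user.target

    # /etc/systemd/system/mnt-GoldenOakLibry.automount
    [Unit]
    Description=Automount GoldenOakLibry on first access
    [Automount]
    Where=/mnt/GoldenOakLibry
    [Install]
    WantedBy=multi-user.target

    # The step I missed: enable and start the automount unit, not the mount unit.
    sudo systemctl enable --now mnt-GoldenOakLibry.automount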

Results

I haven’t had any problems with Discord or FireFox since setting the defective RAM aside in the anti-static bag it came in. As a bonus, switching users works correctly now as well.

NFS mounting is now much more streamlined with systemd. While I cannot say which method would have been more challenging to learn first, the tutorial I was following made this new method feel way more intuitive, even if file locations were less obvious. I didn’t even need any funny business with escape characters or special codes denoting a space in a file share name.

Takeaway

It really should go without saying that people will only help each other with what they know. I find myself answering rookie questions all the time when I’m after help with a more difficult one. Working side by side this week on a future topic, I had such a hard question that people kept coming in with easier ones, and I ended up re-asking mine enough times that someone commented on the cyclic experience. The same thing kept happening with the easy part of my question about login.

Final Question

Do you ever find yourself asking a multi-part question, only to have everyone helping you with just the easiest parts you’ve almost figured out?

Family Photo Chest Part 9: NFS

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am exploring my preferred method of accessing network drives. Let’s get started!

Storage of any kind is only as good as your ability to access it. On your typical modern, end-user computer, long-term storage is limited to a hard drive of some kind, and possibly some sort of cloud storage solution.

On another inbound tangential subject, file transfers within my family’s home network have thus far been limited to using a USB thumb drive or bouncing files off an online host. But thumb drives are often already laden with data, and size limitations plague e-mail and chat services. SSH and SCP have helped, but they are a bit of a pain to get working smoothly.

File sharing has been around almost as long as computers have been able to communicate. Different protocols have different strengths and weaknesses, and the best one for you can differ depending on your situation. I’m largely dealing with Linux, and NFS speaks Linux/Unix natively, or so I hear. The other easy choice would be SMB, a protocol with more overhead – the one Microsoft would rather its customers use for file sharing unless they pay for Pro or Server editions. And based on data gathered over at Furhatakgun, I’ve drawn my own conclusion that SMB has more overhead per file than NFS.

If I would just follow a tutorial, I could have a much faster time with a lot less understanding. My target project was to back up my laptop’s home directory in preparation for migrating my drive from external to a newly installed internal drive.

I would have to say enabling NFS was easy only in the shallowest of terms. After enabling the protocol overall, I found my way over to the appropriate network share and had to resort to whitelisting my IP just to mount that share (as root). And even then, I literally had no permission to read, write, or execute on that share – even as root. chmod to the rescue!
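
For anyone scratching their head along with me, the rough shape of the server and client sides looks like this. The paths, IPs, and share name are placeholders, and a Synology box handles the export part through its web UI rather than a hand-edited /etc/exports:

    # Server side (conceptually): export a directory to one whitelisted client IP.
    # /etc/exports
    #   /volume1/share  10.0.1.15(rw,sync,no_subtree_check)

    # Client side: mount the share, then fix permissions once so a normal
    # user can actually read and write it.
    sudo mount -t nfs 10.0.1.2:/volume1/share /mnt/nas
    sudo chmod 775 /mnt/nas    # or chown the directory to the matching user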

All I know is that I am on the way to understanding, but I have much to learn before I can properly report back on it. For example, I’ve read that I need to have my NAS account name match my local user name. I’ve also read some about hard vs soft mounting, and how setting it up right can minimize the chance of data corruption.

Final question: Have you ever recognized that you know something, but not well enough to teach it?

Family Photo Chest Part 8: NAS Software

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I only noticed last minute that it’s time for this month’s edition of Family Photo Chest, so I’ll probably be on the short and wordy side. Let’s get started!

While I explored around a little in the Network Attached Storage (NAS) documentation last time, I rightly came to the conclusion that this system has more features than I will ever need. I knew I would need time –days perhaps– to scout the documentation.

Fortunately, I found a YouTube channel, mydoodads, with an awesome overview of Synology’s NAS operating system, Disk Station Manager. I highly recommend his series I’ve been watching for today’s post: How to Setup and Configure a Synology NAS. Topics are broken into 6 to 12 minute videos, and more importantly to me, his audio is clean and understandable. My one complaint is that he doesn’t always act like Linux is a thing. If you’re here to follow along, just go check his videos. He’s more set up for actual instruction than I am.

In the meantime, my vision for this system was to just have a simple external hard drive I can see from whatever file browser I like, sort of like the “K drive” at my university. My first impression, upon logging in over a browser and being greeted with a full desktop, was that that was the only way to use it. Watching mydoodads’ tutorial, I learned about the Server Message Block (SMB) protocol, which looks very much like my memories of the K drive.

There are other ways to access the device, and I’m still deciding how I’m going to get everyone using it. Right now, it’s connected to my personal subnet that won’t let anyone outside look at it, and I’ve given it a static IP of 10.0.1.2. I’ll need to see if it will automatically adjust to the different netmask (I hope I’m using that word correctly).

The one thing I haven’t come across in this video series is setting up RAID, which I already did before. He did go over shared folders, but I still need to set up other basic stuff, like user accounts and groups. Part of why I was getting lost was because I found and started messing with storage pools, which appear to be for when you’re dealing with multiple logical NAS setups on the same network. I still have so much to learn.

Final Question: Have you ever used network storage? If so, did you understand it at the time?

Family Photo Chest Part 7: NAS Hardware

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m assembling a NAS (Network Attached Storage) system. Let’s get started!

These things don’t come cheap. Between the case and four large hard disks, the whole system costs as much as a decent computer. It’s also worth noting that this is the first new system I’ve covered on this blog, aside from Raspberry Pis or similarly powered units.

In a way, it was actually a small blessing that this aspect of the project was delayed. In the time since I started researching NAS solutions, Western Digital was found to be selling some of their archive-quality drives as SMR (Shingled Magnetic Recording) instead of PMR (Perpendicular Magnetic Recording). This is bad because shingled drives overlap their magnetic tracks while writing, leaving only enough width for the narrower read head once the deed is done, which drags down sustained write performance – and I intend to store more than just static data on here.

We bought four of the smallest non-shingled model. At seven TB a pop, we should be able to dump all our existing data onto these things two or three times over should we see fit, but only after it’s set up, and that’s after reserving a full drive’s worth of space for parity.

Parity is a redundancy technique that provides some room for error. RAID 5 (Redundant Array of Independent Disks) does this by taking the bits in the same position on each data drive, adding them all up while ignoring the carries, and storing the result as a parity bit. It then distributes this parity data across all the drives, so the more drives in the array, the more efficient your data loss protection. If one drive is suddenly zapped out of [electronic] existence, you just need to replace it with another one of equal or greater storage capacity and the array can repair itself from there.
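
That “adding without carries” is just XOR, which is also why recovery works. Here’s a toy illustration with one byte standing in for each drive (the numbers are purely made up):

    # Three "data drives," one byte each:
    d1=$(( 0xA7 )); d2=$(( 0x3C )); d3=$(( 0x51 ))
    parity=$(( d1 ^ d2 ^ d3 ))          # what RAID 5 stores on the parity stripe

    # Lose d2 entirely; XOR the survivors with the parity to rebuild it:
    rebuilt=$(( d1 ^ d3 ^ parity ))
    printf 'original d2=0x%x  rebuilt=0x%x\n' "$d2" "$rebuilt"   # both print 0x3c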

The NAS system we got was made by Synology –just to be clear, this is not a sponsorship, and I have never had a sponsorship– and I had some quality time with my father as we assembled it. Assembly wasn’t toolless, and the case was a little hard to get back on, but the photographic instructions were easy enough to understand, though they could have labeled their screws as being for either HDD or SSD.

Software wise… I’ll be spending another post setting things up, but I did take a self-guided tour of the place already. Synology has their own operating system called DSM (DiskStation Manager). I saw that and panicked. My shtick has been Linux, and at least trying to do it myself until I stop learning and get frustrated. Long story short, I got bored and found myself reading the license agreement. I didn’t understand it all, but I found references to the GNU license in there.

According to one post on Reddit I have since lost track of, people should be cautioned against toying around with different operating systems on this NAS solution because there’s a chance it could brick the system; I really don’t want that. Besides, it looks to me like DSM is built on at least the Linux kernel, and if that thread I found is to be trusted, it’s a stripped-down version of Debian.

While writing this, I poked around in SSH and confirmed Linux, but determining the parent distro is outside my abilities right now. It sure didn’t feel like a normal SSH experience. It dumped me straight into the root directory instead of a home folder –understandably without the disclaimer about free software and no warranty– and when I got around to sudo whoami, it lectured me about the basics of superuser privileges, mirroring my earlier experiences in the graphical web interface.

From the moment I started installing the DSM operating system, I noticed how they managed to design their user experience in such a way that anyone with basic computer literacy can use it, yet somehow avoided insulting the intelligence of their power users. I didn’t need the utility they provide to find the device’s local IP. I don’t need their online services to bypass port forwarding or their picture and video galleries, but they’re there for people who want them.

My only substantial complaint so far is that the interface looks like it wants to be Windows, but clearly isn’t. The blue color scheme is there, but all the icons are off. I’m also constantly rummaging through documentation for advanced features I don’t have a reason to know about yet, let alone a reason to rule out for the time being. I suppose I was a little miffed about GoldenOakLibrary being one character too long for the 15-character name cap, but by removing a couple letters near the end, it’s still readable if you know what you’re looking at.

My big plans for this system at present are to make some sort of division between family photos, computer backups, and general storage. Once that’s done, we can finally start scanning, among other things.

Final Question: What systems have you used that don’t baby down their interfaces with scary warnings to the point of alienating power users who already knew what was going on?

“Beowulf Cluster:” Part 5: DHCP and NAT

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am getting a head start on this post, so I don’t yet know where the end point for this segment is. Let’s get started!

I hate networking. This router project is really taking a lot to learn. I’m losing track of where I was when the official break between weeks happened. Right now, I’m just looking forward to the time when I can say I have a custom router, and setting up the “Beowulf Cluster” I originally set out to build –not that I can really call it that anymore– should be one more week™ after that.

Once I had the network interfaces set up correctly, DHCP wasn’t far behind. I had a few semicolons missing after some lines, but that was about it. I plugged my laptop into my little subnet and the Pi assigned it an IP in the range I had told it to. It just wouldn’t let me at the Internet at large, even though I hadn’t yet changed iptables off its default of accepting everything. Of note: I was unable to use the Internet even while still connected wirelessly. Two weeks ago (was it only that short ago?) I would have just poked around with “black box reasoning” before concluding that Ethernet superseded Wi-Fi because it is faster, all other factors being equal.
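
For reference, the part of the ISC DHCP config that hands out addresses looks something like this, semicolons and all. The subnet numbers below are placeholders, not my exact values:

    # /etc/dhcp/dhcpd.conf (isc-dhcp-server)
    subnet 10.0.1.0 netmask 255.255.255.0 {
        range 10.0.1.100 10.0.1.150;          # addresses handed to clients
        option routers 10.0.1.1;              # the Pi itself, acting as gateway
        option domain-name-servers 1.1.1.1;   # any reachable DNS server
    }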

Further investigation led me to believe I should look into NAT (Network Address Translation). It’s disabled by default, likely because if you need it, you’ll soon learn about it like I’m doing now.

A couple guides later, and I had uncommented a line in a config file to allow IP forwarding, but when I used modprobe ip_forward to add NAT abilities to the kernel, listing out my iptables gave a warning about legacy software.
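
For my own future reference, the usual recipe for forwarding plus NAT on a setup like this looks roughly like the following; treating wlan0 as the upstream side is an assumption based on my layout. The “legacy” warning just means the iptables command on newer systems is a compatibility front end to nftables:

    # Enable IPv4 forwarding now (and uncomment net.ipv4.ip_forward=1
    # in /etc/sysctl.conf to make it persistent):
    sudo sysctl -w net.ipv4.ip_forward=1

    # Masquerade traffic leaving on the upstream interface and allow forwarding:
    sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
    sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
    sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT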

***

Well, this write up is coming due, so here’s what I’ve got.

I worked with a couple friends on my private Minecraft server –yes the one I’ve covered here before– by the names of doflagie and TheRSC. We didn’t go messing around with NAT tables, but we did inspect the DHCP configuration files some more.

At least, that was the plan at first. I accidentally broke an interface by editing the file there. Rebooting cut off my connection. I SSH’ed in over the other interface and promptly broke that one too. I had to swap around cables to fix those config files, so it was all good.

With my little review from last week out of the way, I went to weed through the actual DHCP config file. I wasn’t sure where it was, but before I resorted to finding it again online, I noticed one terminal on my workstation was already in a local DHCP directory when it wasn’t SSH’ed into somewhere else.

As part of my workflow, I hooked BlinkyPie up to the Ethernet subnet. Thanks to the default Raspbian settings, eth0 overrode wlan0 as the default network interface, so I didn’t have to disable or disconnect Wi-Fi. Otherwise, Blinky is standing in for a generic microcomputer on this micronetwork.

It took several hours, but when I rearranged the DHCP config on the Pi 4 to reflect the subnet and not the wider home network, I was able to reach out from Blinky through the Pi 4. However, I could only reach as far as the gateway, and no farther.

These results are underwhelming. Each last problem feels like it is made up of a chain of smaller problems, leading to another such chain. I’m making progress, but it feels like the goalpost is moving on me.

Final question: How hard should this really be? Sure, there are higher-level programs, but how long should a low-level approach like what I’m doing take to figure out?

“Beowulf Cluster:” Part 4: Network Interface Connections

Good Morning from my Robotics Lab! This is Shadow_8472 and today, I am working on that Pi4 network piece again. Let’s get started!

I am having the hardest time with networking. What could have been a confusing but possibly straightforward black-box operation turned into a subject of study I’m likely going to be able to build on later. With that better foundation, I just might add DHCP back into my general plan, as such a system would be a much more useful tool for an upcoming project I’ve been looking forward to.

As stated in an earlier post, I’ve been cobbling my way together using a series of incomplete tutorials and seeking help understanding the gaps. Many video tutorials for networking are either old, made with bad audio quality, have a presenter with a thick accent I can’t make out, or a combination of the three. Text is unfortunately the way to go.

My functional goal for this week was to get the Pi 4 functioning normally online with a static IP. Through whatever mishmash of tutorials, the following is what I’ve cobbled together as I understand it.

The Pi 4 has two network interfaces, one called eth0 and another called wlan0. These names represent an old standard on the way out, but for this week at least, I’m sticking with them. I started by going to /etc/network/interfaces and setting it up so each interface has its own configuration file in the directory /etc/network/interfaces.d/. eth0 was straightforward to set to static, but wlan0 had another pointer elsewhere for the WiFi name and passkey.
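
Pieced together, the relevant files end up looking something like this. The addresses are examples, and the layout follows my reading of the tutorials rather than any canonical reference:

    # /etc/network/interfaces - hand each interface its own file
    source-directory /etc/network/interfaces.d

    # /etc/network/interfaces.d/eth0 - the straightforward static one
    auto eth0
    iface eth0 inet static
        address 10.0.1.1
        netmask 255.255.255.0

    # wlan0 gets a similar file, plus a line like
    #     wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
    # pointing at the WiFi name and passkey.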

My first big problem was in how I had to copy configuration file examples by hand. At my worst point, I had some equal signs where spaces belonged and spaces where underscores belonged. The former was a mistake I made when looking at two related, but dissimilar files, and I diagnosed it by paying attention to the error messages when I tried sending sudo ifup wlan0.

The latter was a mistake I made because I was copying things by hand without SSH, and a dark mode plugin I use actually cut off the bottom row of each line of text in a code box, including the entirety of each underscore. This one only told me the interface failed to start. Someone going by Arch on the EngineerMan Discord pointed me to journalctl, some sort of log viewing utility. I piped the wall of text through grep to pick out wpa_supplicant, a utility that kept complaining. journalctl also pointed out errors around the WiFi SSID and psk fields, where I needed quoted strings.
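
The command I ended up leaning on was something along these lines:

    # Show messages from this boot and pull out the wpa_supplicant complaints:
    journalctl -b | grep wpa_supplicant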

Assorted errors along the way included a mistaken time zone that was easily taken care of. I also had a surprise when the contents of /etc/wpa_supplicant/wpa_supplicant.conf vanished on a reboot: turns out I had a blank copy in /boot and as Raspbian boots, it will move certain configuration files in that folder around to overwrite their permanent counterparts, leaving me guessing as to what’s already been done.

With stable network interfaces, I started testing them, but I wasn’t able to ping anywhere successfully. I noticed that it was using its IP from the eth0 socket, and not wlan0 for some reason. Eventually, I either found or was directed to route or route -n: basically the table where Linux decides what door to send packets out of as they leave. Eth0 was set up as default if no other matches on the table were found; all my ping packets were being sent inward and getting lost instead of getting sent out, into the wild.

I spent time studying this new table and found a clear, black box style tutorial I was able to reverse engineer LINK. Manipulating the table directly was straightforward enough, but what I failed to realize was that parts of the table are generated fresh each time a network interface goes down or comes up. My work totally collapsed when I rebooted, reverting the default interface back from wlan0 to eth0.

The fix (shown as “Part 2” in the black box tutorial linked above) led me back around to the beginning. Part of the configuration of different interfaces includes a setting called gateway. I actually had to move it from eth0’s config file over to wlan0’s config file.
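
In config terms, the end state looks roughly like this; the addresses are examples rather than my real ones:

    # Check which interface owns the default route (the 0.0.0.0 destination line):
    route -n

    # /etc/network/interfaces.d/wlan0 - the gateway now lives with the interface
    # that actually reaches the Internet:
    allow-hotplug wlan0
    iface wlan0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf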

Admittedly, I am a bit disappointed in this week’s results, but I learned a lot more than a total black box experience would have afforded me.

Final Question: Have you ever poked at a “black box” and learned how it worked?

“Beowulf Cluster:” Part 3: Networking Nightmare

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I actually spent the better part of the week on this segment and still didn’t get as far as I would have liked. Let’s get started!

There is a reason networking scared me off when I first looked into it back when I was in university. Every computer needs a way to introduce itself to not only its communication counterpart, but all the message carriers along the way. It’s almost unbelievable how easily it all can be smoothed over so even a toddler can get him or herself into trouble without proper supervision. But all the inner workings are still there, churning away, ready for someone to go monkeying around until something breaks.

Last week, I talked about the hardware side of things. I have all the equipment I need ready to use, but it’s waiting on a digital support structure to hold the worker nodes together. My goal for this week was to build that support structure. I would have declared success when I could connect my laptop to an Ethernet network served by the new Pi 4 and access a web page.

Originally, I was aiming to set the router up as a DHCP server so it could hand out IP addresses on the fly. Along the way, I learned how to set up a static IP. Did I mention how ugly raw networking configuration looks to complete beginners?

I honestly don’t know half the stuff I did, as I was trying to follow multiple tutorials at once, hoping one would fill in for another when it didn’t apply to my present situation. One blank in a text-based tutorial was elaborated on by a quick comment in a YouTube video, and I ended up stumbling in a third direction in the hopes I was doing the right thing for my project.

And that’s when I can even follow the tutorials! Half the time I landed on a video where some would-be instructor may know what he’s talking about, but his audio quality says he’s working with what he’s got, and that can be a bit sharp on the ears. Another good chunk of the time, it’s someone with a thick accent I wouldn’t be able to understand unless I could ask him to repeat multiple words. Unfortunately, text is going to be the way to go for tutorials at my level of specialization.

I wasn’t able to get the DHCP server sorted out. Something about there only being allowed one per network, and I had no idea how to confine it to one network interface – that’s another thing I learned about.

Network interfaces: the whole reason this project is a thing is because the Pi 4 has two separate network interfaces, or ways to access a network, such that it can participate in two distinct networks at once: one for the Ethernet, called eth0, and one for the WiFi, called wlan0. If that weren’t confusing enough, there’s a new standard that names network interfaces after the hardware they’re connected to. For my intents and purposes, where I only have a single interface per kind of connection, it only serves to confuse when it shows up elsewhere – namely my laptop – but I imagine it would make handling a 50-port switch a little more bearable.

At some point, I gave myself a reality check. I only had five computers to contend with. DHCP is a little bit much for what I’m looking to do. Extra overhead: needs to be axed. I’ll just use static IP’s all around. I went ahead and set my laptop’s Ethernet network interface up with one and rebooted, since I couldn’t seem to restart just the network settings. I had to fix a couple settings, but it was pretty easy to get it to ping –and even ssh into– the Pi 4 over the Ethernet network I had going. The trouble was that I couldn’t access a web page like I wanted.

I messed around a bit and when I was done, I unplugged and connected to my home network, only to find it was being difficult. The best I can figure is that by editing one of the configuration files to give me a static IP, I was accidentally voiding some default for WiFi settings as well. I made an attempt to copy what I had going on from a tutorial on the Pi 4, but due to a series of disconnection issues, I ended up restarting the Pi and losing it on the network.

I was sure to make notes on how to reverse the network damage to my laptop as I made it. I commented out the altered lines and rebooted again. Still no Pi 4 on the network. I looked on the miniature network again, and eventually resolved to swapping the HDMI cable over to the back of the monitor, only to find a line in ITS network configuration files left over from when I was experimenting with iptables to solve the present issue.

I’m laying this issue to rest where it is for now. I had some measure of tangible success, but the majority of my efforts were sunk into learning new skills. My laptop is in a usable state, at least.

Final Question: What have you cut out to make things easier on yourself?

Beowulf Cluster: Part 2: Not a Beowulf Cluster Yet

Good Morning from my Robotics Lab! This is Shadow_8472 and today, I learned a lot toward the next step in my project and backed out at the last minute. Let’s get started!

You know the thing about clusters: it takes some overhead to run them? The computers I’m working with in my Third Workshop care package don’t have much CPU to begin with. Special thanks to kevrocks67 on the EngineerMan Discord server for getting that point through my head and pointing me in the right direction: subnetting.

My understanding of subnetting is lacking to say the least. Perhaps if I back up and just tell the story.

This week saw most of the parts I need for my cluster come in. Shopping for the Pi 4, I compared different sellers and chose one to go along with a separate heat sink case I picked out from a different distributor. I managed to save a little money by assembling my own kit instead of getting one that included some small, aluminum heat sinks.

When the case arrived, it came with some thick thermal pads. Yeah, those won’t be separating once the case is fully installed and operational temperatures melt the pads into place – or they’re just sticky. I did a dry fit, and it doesn’t look like the case will operate correctly without them. In the meantime, I noticed the camera cable slot and the similar-looking display slot will likely be inaccessible once I assemble the case, though the GPIO pins will still be open.

My father picked up a couple outdoor-grade 3-way power splitters for power management. The minicomputers I’m working with have bulky transformers right on the plug, and they are such a shape as to make it difficult to creatively arrange more than three on a seven-plug power strip.

The Ethernet cable we picked up was paid for with a little effort. One of my church members was wiring up the church a while back for Cat6, and he has some left on the spool. I stayed topside while my father donned a jumpsuit and finished running the cable to just below the office, where he cut the cable. There was enough to reach the office, but that was it: maybe about 30 feet max: more than double my very rough estimation of what I’d need. A package of Cat6 ends and a crimper will see me learning yet another new skill in the near future.

About the only piece I don’t have picked out yet is a board to mount things on. It doesn’t look like the heat sink case has any good places to mount, though the switch and minicomputers do.

The next logical thing to do is assemble an early prototype! One control node and one worker node should be a good start… if only I could get the Pi to boot from an SD card.

Lacking the desire to format any drives where a valued drive could possibly be hit in the crossfire, paired with the inability to read SD cards on anything but my laptop, I tried using an app on my tablet to flash a card. For one reason or another, it wasn’t able to boot. I suspect I need root privileges to properly format anything with Android, unless I want to format as internal or removable storage. Lacking access to an old phone nobody has qualms about me “voiding the warranty” on, a multi-SD reader is on order.

Speaking of SD readers, turns out each of the five little helpers has one built in. I’ll be using one of those along with a live USB for formatting fun in the near future with no chance of fireworks at the end, just as soon as I figure out why they look exactly the same with lsblk -l. I suspect each has its own SD-card-like storage device in there. Ironically enough, I suppose the difference should make a slip-up of the same caliber nearly impossible this go around. Nevertheless, I still want to practice isolating whatever I format whenever possible. In either case, what I am trying to think of as equivalent to a hard drive is not labeled sdX anymore.
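
If it helps anyone puzzling over the same output, the command I’ll be staring at is below; the column choices are just my preference:

    # List block devices with a few useful columns. Built-in eMMC/SD-style storage
    # typically shows up as mmcblk0 (partitions mmcblk0p1, mmcblk0p2, ...) rather
    # than sda/sdb, which is likely why the listings look so similar.
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT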

Somewhere, I made the switch in short-term project aim from cluster to subnet to reclaim some of that overhead. I still want to end up with a cluster for the experience, but for now my token effort among token efforts will be more efficient with less overhead.

To help with overhead costs, I found a site that compares lightweight distros: Thishosting.rocks/best-lightweight-linux-distros/. They have a sortable chart of about 30 distros and their rough minimum system requirements, including RAM, CPU, and space on disk. Folding@Home is likely to use CPU above anything else, I suppose, and RAM can’t be easily expanded, so my dump stat is disk space.

I optimized for CPU and found Debian was a near-perfect choice. Most, if not all, of my Linux experience is in Debian or a descendant of Debian. Its required RAM is among the lower rankings while still needing an almost non-existent CPU. I even already have the install media plugged into the thing and booted! Though I may want to look into a server installation option.

In terms of mentally tangible progress toward a prototype, I set the Pi 4 aside for the moment and brought in BlinkyPie, my Pi 3B+. My goal: turn a Raspberry Pi into a router. I’ll know everything works when I have Internet access on the minicomputer that is otherwise not connected through any other means.

I viewed a few networking videos that were either dated, covering basic material I already knew, had bad audio quality, or showed what to do without giving any theory behind their actions. I eventually settled on a clear, step-by-step tutorial at Linuxhint.com/raspberry_pi_wired_router that does not explain its actions, but the steps are short enough that I can seek out my own explanations as I go.

The tutorial starts with installing Raspbian and getting SSH going. As anyone who’s been following my blog long term knows, SSH was a real pain for me to figure out, and if I didn’t know what I was doing, starting over when things fail might as well be a small element of a magic ritual. Since I intended to work with a Pi that was already established, I glossed over those steps and started in on configuring the network.

The whole reason this project of a subnet is possible with the hardware I have is because the Pi can participate in two networks at once: one via Ethernet, and one via WiFi. To make a long story short: this procedure requires the Pi to establish itself at a special IP address as the router, implying a static IP. I sometimes take this Pi elsewhere (or at least its chip), and I don’t want to forget and accidentally mess up someone’s network, so I backed out in favor of waiting on a properly running Pi 4.

As always, this project is evolving as I work on it. The Gigabit Ethernet speeds I thought I would use for a cluster won’t be necessary for the time being, as there’s a slightly lower bottleneck at the Pi’s WiFi network adapter. Next week should be interesting.

Final Question: When was your last project where you lowered the specs and found yourself overbuilt?