New OS Smell: Manjaro Linux

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m starting my first expedition into another branch of the Linux family. Let’s get started!

Moving away from Windows involved one of the larger culture shocks I’ve ever experienced without leaving home. The concept of software package managers was something I had only seen on phones. The file system left me wondering where my secondary drive was. And the downright alien sense of open-ended desktop environment customization options almost sent me reeling with decision fatigue. But the mindset surrounding Linux lends itself to self-guided study, an element that is sorely absent from a number of other mainstream operating systems.

The switch to Manjaro has brought back some of that sense of culture shock, but the similarities are already surfacing. Of note, I am also finally trying out this fancy KDE desktop environment now that I’m on something with a discrete graphics card, so I won’t always be sure which change comes from where. The file system has some subtle differences, such as /run/.local, a directory I am otherwise used to finding in ~/.local. Software comes either from an official repository, or else as code downloaded and compiled from the AUR (Arch User Repository, similar to Ubuntu’s PPAs, but supposedly safer to use). Furthermore, Arch-based distros have a rolling release schedule for their software, delivering the “bleeding edge” experience.
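
For the record, the difference between the two install paths looks something like this on my system (the package names here are just examples, and yay is only one of several AUR helpers, assuming you have one installed at all):

```shell
# Official repository: pacman installs a prebuilt, signed package
sudo pacman -S firefox

# AUR: a helper like yay downloads the PKGBUILD recipe and compiles
# the package locally before handing it to pacman to install
yay -S some-aur-package
```

Either way, everything ends up tracked by the same package database, which I gather is part of why the AUR has a tidier reputation than stacking PPAs.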

Installation was typical of what I’ve come to expect from distros meant for a desktop experience. I downloaded an ISO, “restored” it to a thumb drive, and booted to a live session with a few essential programs and an installer application, being sure to first physically detach any other drives I wanted intact before installing to the intended drive. The installer even offered a choice between a couple of office suites, which was nice.

Of note, I did have to install some proprietary drivers to get my NVIDIA graphics card to run Minecraft, or at least a custom mod pack. I hear it’s a bit of a gamble with KDE, and that there are workarounds that tend to break with updates. I was cautioned to enable SSH before flipping the proverbial switch, but fortunately I got lucky and didn’t need it for any extensive repairs.

I gave the system a few days to ease in before getting my father to rustle up my old PCI Wi-Fi card from somewhere in the garage. We got this card long ago, when the machine was relatively new. I was in a dorm setting, and my wireless printer was particular about only serving computers connected to the same wireless network. When I upgraded from Windows 7 to 10, I was having the hardest time getting the upgrade to take. I even reached out to “tech support,” actually some scammers, who showed me a network log or something and said, “These errors are what is keeping your computer from (what was it your problem was?) upgrading to Windows 10.” (The aside is paraphrased; the rest is as close as I remember.) My parents were ready to pay these guys a reasonable-sounding fee, but the whole thing smelled off, and I backed out. Our own internal diagnostics pointed to the Wi-Fi card, and I had to give it up.

My Wi-Fi card has had one of its detachable twin antennas smashed and trashed. In the meantime, I had need for my desktop to go wireless again. We bought a USB3 based antenna that served me well until the day I decided to use it with Linux. Since then, I’ve been using my Raspberry Pi 4 as a reverse Wi-Fi hotspot, and I’m quite proud of it.

Today, I decided to install the old card back into its slot, and lo and behold, it worked straight away. I tried to run some speed tests to see whether the newer Pi 4 was outpacing the older, more direct PCI card, but it was peak usage hours in my area, so I’ll need to run something across my local network instead. Whatever the case, my father had the idea to stick the double-length antenna from the new Wi-Fi adapter onto the vacant attachment point on the old card.
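
If I ever get around to that local test, something like iperf3 (assuming it’s in the repos) should take my ISP out of the equation entirely; the address below is a stand-in for whichever machine runs the server end:

```shell
# On the machine at the far end of the link being tested:
iperf3 -s

# On the machine under test, pointing at the first machine's LAN address:
iperf3 -c 192.168.1.10
```

That would measure the raw Pi-versus-PCI-card throughput without the neighborhood’s peak-hour traffic muddying the numbers.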

Final Question: Have you ever reinstated anything you thought was hopelessly obsolete?

Family Photo Chest Part 9: NFS

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am exploring my preferred method of accessing network drives. Let’s get started!

Storage of any kind is only as good as your ability to access it. On a typical modern, end-user computer, long-term storage is limited to a hard drive of some kind and possibly some sort of cloud storage solution.

On another inbound tangential subject, file transfers within my family’s home network have thus far been limited to using a USB thumb drive or bouncing files off an online host. But thumb drives are often already laden with data, and size limitations plague e-mail and chat services. SSH and SCP have helped, but they are a bit of a pain to get working smoothly.

File sharing has been around almost as long as computers could communicate. Different protocols have different strengths and weaknesses, and the best one for you can differ depending on your situation. I’m largely dealing with Linux, and NFS speaks Linux/Unix natively, or so I hear. The other easy choice would be SMB, a protocol with more overhead, and one Microsoft would rather its customers upgrade to Pro or Server editions than rely on for file sharing. And from data gathered over at Furhatakgun, I’m drawing my own conclusion that SMB has more overhead per file than NFS.

If I would just follow a tutorial, I could have a much faster time with a lot less understanding. My target project was to back up my laptop’s home directory in preparation for migrating my drive from external to a newly installed internal drive.

I would have to say enabling NFS was easy only in the shallowest of terms. After enabling the protocol overall, I found my way over to the appropriate network share and had to resort to whitelisting my IP just to mount that share (as root). And even then, I literally had no permission to read, write, or execute on that share, even as root. chmod to the rescue!
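
For anyone retracing my steps, the mounting part boiled down to something like the following; the server address, export path, and mount point are placeholders for my own setup, not anything universal:

```shell
# Make a mount point and attach the NAS export to it (root required)
sudo mkdir -p /mnt/nas
sudo mount -t nfs 192.168.0.50:/volume1/backup /mnt/nas

# When finished, detach it cleanly
sudo umount /mnt/nas
```

The permissions fight happens on top of this: even a successful mount only shows you whatever the server-side ownership and mode bits allow.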

All I know is that I am on the way to understanding, but I have much to learn before I can properly report back. For example, I’ve read that I need my NAS account name to match my local user name. I’ve also read a bit about hard vs. soft mounting, and how setting it up right can minimize the chance of data corruption.

Final question: Have you ever recognized that you know something, but not well enough to teach it?

A Pesky Game to Run in WINE

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am revisiting a very old topic from a few days short of two years ago. Spoiler alert: I don’t have a full solution for running SimCoaster (aka Theme Park Inc.), but I might have a clue or two. With that depressing ending out of the way, let’s get started!

Sim Coaster is a special game to me for the nostalgia factor. Read any write-up by a well-read reviewer, and he’ll say it was a meh sequel released while a better competing game was on the market. Love for this game is so rare, I’ve even been referred back to my own posts while researching how to get it to work (see hyperlink in opening paragraph).

Wine is not an emulator, nor is it a toxic beverage in this context; it is a compatibility layer. It looks at a Windows executable’s logic, makes its best guess at what the equivalent logic would be on the local Linux or Mac operating system, and runs those instructions. Windows’ long legacy means it has an almost unfathomable library of software, so there is a mind-boggling number of settings to ensure maximum compatibility with as much of it as possible.

Lutris is a tool primarily used for playing games on Linux. It provides a nice graphical interface that configures and launches a number of “runners,” whether they be compatibility layers like Wine or emulators like MAME, DOSBox, or Dolphin. I don’t even recognize most of them, but I doubt anyone outside development uses everything on a regular basis, if that. The community can contribute installers for other people to use, but if your game doesn’t have one, you can still configure it manually.

From what I can tell, the Sim Coaster (sometimes spelled without the space: SimCoaster) installer has always been very stable in Wine; I’ve never had a problem with it. The WineHQ page on the game says the full game should be stable as of Wine 2.5. (For reference, I would have to go out of my way to find a version older than Wine 4.0.) I have it on credible authority that once a program earns full compatibility status, it usually doesn’t get worse.

Yet that’s where I’ve been stuck for years. I’ll update my progress, but I feel like I’ve been poking at a wall for loose bricks, only to find one with another wall behind it.

My biggest development was getting into the Lutris community’s Discord server. Linux is expansive to the point no one person can know everything, and the more focused of a community you can find for help, the more effective any help you find will generally be.

My second development was learning about the inner workings of Wine. Windows programs expect a certain file structure, so Wine creates a directory at ~/.wine called a Wine prefix (sometimes called a “Wine bottle”) to contain this file structure. In this bottle, it provides free and open source equivalents to libraries Windows programs commonly expect, among other things, to fool its programs into believing they’re running on an actual Microsoft Windows operating system.

I eventually made some tangible progress when I successfully created a 32 bit Wine bottle. While most programs can be made to work in 64 bit bottles, a few, like Sim Coaster, just crash without much interesting to say in the debug log, accessed by launching Lutris with lutris -d (not just from the GUI). For all my trouble, I ended up with a text box titled “A debugger has been detected” saying “Unload the debugger and try again.”
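
In case it helps anyone, creating a fresh 32 bit bottle outside of Lutris comes down to two environment variables; the prefix path here is my own choice, not anything standard:

```shell
# WINEARCH fixes the architecture at the moment the prefix is first
# created; winecfg initializes it and opens the settings dialog
WINEARCH=win32 WINEPREFIX="$HOME/.wine32" winecfg

# Later, run a program inside that same bottle:
WINEPREFIX="$HOME/.wine32" wine setup.exe
```

Lutris can do the same through its GUI, but knowing the underlying variables made the debug logs easier to follow.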

And here are some miscellaneous clues I hope people like me might find useful:
1. 64 bit Wine bottles can only go back as far as XP, while 32 bit go back to Windows 2.
2. Sim Coaster only gives me its debugger error box when it’s trying to run in Win98 or ME, but not 2000 or XP.
3. It’s been suggested I may be bumping into DRM here. I have the CD, and Wine has that mapped to drive I (as in indigo).

Future things to try are getting the development version of Wine and making a bug report based on that. By the time I try this project again, I’m likely going to be trying on a distro friendlier to gaming. I’ve been interested in learning Arch, and I hear Manjaro is a good trade off for my needs. I’d just like to finish a project or two before learning another major Linux branch.

Final Question: What game from your past do you wish you could play again?

Minecraft NoVillage Datapack

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I was going to cover work on my cluster, but I needed an easier week, so I worked on sysadmin stuff for my family’s Minecraft server. Let’s get started!

Minecraft is a constantly changing game. I don’t always like change. My ideal creative world is a clean superflat world with a normal Nether and End, but while working on a resetting End city with a friend, we found my creative server wasn’t just generating a clean overworld, but the End and Nether were also devoid of structures.

Structure generation in Minecraft is controlled by a single flag in the initial config file. Historically, it only affected the overworld, but as I recently found out, it now has power over the other dimensions as well. I wasn’t happy.

Plains villages are the only structure that can actually spawn that I don’t want. My plan was simple: create an empty structure file, and replace the contents of all the unwanted structure files with it while preserving their names.

I started with a working datapack that fires off a series of fireworks in the shape of an American flag. I unpacked the server jar file and cringed at possibly having to do 50 or so of these structure file swaps by hand. Fortunately, I’ve recently been working with the find command in Linux, so I had an idea of the sorts of things it can do, and such a massively parallel operation is second nature to it. I had to look up a little syntax to target all files except certain matches (the -not flag negates the next test), but I got the feeling I was over the initial learning curve.
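
The core of the trick looks like this; the directory layout and file names below are a toy reconstruction for demonstration, not my actual pack:

```shell
# Set up a stand-in for the unpacked structure files
mkdir -p demo/data/minecraft/structures
printf 'EMPTY' > demo/empty.nbt
printf 'VILLAGE' > demo/data/minecraft/structures/plains_village.nbt
printf 'MANSION' > demo/data/minecraft/structures/mansion.nbt

# Overwrite every structure file, except the template itself,
# with the empty template, preserving all the file names
find demo/data -name '*.nbt' -not -name 'empty.nbt' \
    -exec cp demo/empty.nbt {} \;
```

After running it, both structure files hold the empty template under their original names, which is exactly the mass swap I wanted.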

In practice, this project was almost self-assembling. The biggest hitch was when structures failed to generate even after I told them to do so in the config options, and NBTExplorer was required to edit level.dat. I had to install Mono, a compatibility layer for .NET Framework applications. I had previously had issues getting it to work while following the programmer’s instructions, but sudo apt-get install mono-complete was suggested on an old forum post from around 2011-2013, followed by a dead link. I also had to look up how to extract a .exe file from the given .msi package using a version of 7zip that I’m pretty sure came with Debian.

It felt like a small miracle when NBTExplorer showed up properly. I have no clue if I can zoom in, but I was able to get in there and do what I needed to.

In short: I pulled off something satisfying using mostly skills I already had.

Of note: datapacks use something called a namespace that can be turned on or off. In order to override any default asset, you must place your version in a namespace called minecraft and in an appropriately titled sub-directory. It does not matter what your datapack itself is called. Voiding structures will only work with structures that aren’t hard coded into the game; nether fortresses, strongholds, and desert and jungle temples are baked in.

Final Question: If I submit this datapack for publication, what should I name it?

Recovery

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am trying to recover my Laptop’s Windows drive. Let’s get started!

This project… this project I never wanted to need… has bullied around other stuff, and I’ll be glad when the equipment I’m using for it is freed up.

What felt like ages ago, I was formatting a drive so I could install MineOS on a 1 TB drive and move my family’s Minecraft server over from a smaller drive. The tutorial I was following said to format sdb, and I formatted sdb, but sdc was the one I really wanted to format. I missed the warning signs, and by the time I noticed, the deed was done. Normal procedure would dictate that I immediately shut down and remove the affected drive, but it’s screwed in and the computer is presently functioning as my primary workstation. Aside from mounting it once or twice right away to verify the damage, I’ve mostly left it alone.

Ideally, I would have backed everything up right away and only worked on a copy. For that, we ordered an eSATA to SATA cable (with power included), and it came on a slow boat from Taiwan. In the meantime, doflagie, a friend from my family’s server who has a few decades in IT, told me how handy USB to SATA cords are and “[wished] you were next door, I’d throw you one!”

With the cord in hand, I hooked it up and quickly learned to treat it more like an internal SATA connection and less like an external USB drive. I still have a question out about USB to eSATA. I had to rearrange the BIOS boot order again to prefer USB over eSATA, but that wasn’t anything new to me.

I learned my way around the dd command. There’s a reason it’s nicknamed Disk Destroyer, and I’m thankful I haven’t learned that lesson firsthand yet; I hope the lesson I’m going through now is close enough. Data Duplicator, the actual name, is a rather odd command when you look at it. Where most commands would have you order your parameters, dd makes you explicitly state the input and output files.
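
Happily, dd is just as willing to work on plain files, which makes for much safer practice than aiming it at /dev/sdX:

```shell
# Create a 1 MiB practice "disk" and clone it; if= and of= must be
# spelled out explicitly, which is exactly where sdb/sdc mix-ups happen
head -c 1048576 /dev/zero > disk.img
dd if=disk.img of=copy.img bs=4096

# Verify the clone is bit-for-bit identical
cmp disk.img copy.img && echo identical
```

Swap the file names for device paths and the same invocation clones a whole drive, which is why double-checking with lsblk first is non-negotiable.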

I eventually dumped the formatted disk straight onto a waiting 1 TB SSD and took it to my tower upstairs. I don’t want any chance of wiping another important drive, so I grabbed the original HDD from ButtonMash, the drive with my first Linux install, and put it in my personal desktop instead of another Windows drive. I tried installing TestDisk from a package, but I was missing dependencies, and Ubuntu didn’t like my USB Wi-Fi dongle (What is it with that machine and OS changes needing different Wi-Fi? I mean, first Win10 hates my internal Wi-Fi card and now this?).

This is where the project lingered while I worked on my Pi 4 Wi-Fi to Ethernet router. I broke down and finished that the easy way shortly after posting about it last, and due to fatigue and a photo chest week, I bumped this post a couple weeks and used something I had in reserve about a scam. I had another one, by the way. This time it was only about half a Bitcoin, but I reported it right away.

With a working Internet solution, I installed TestDisk and let it run. All I know about this utility is that if you’re doing free and open source data recovery, you’re dealing with TestDisk. I didn’t really see another option. I used it to find missing partitions, and I didn’t understand a lick of what I was doing. I ended up giving up and making an ISO file of the original drive over the copy I had made.

ISO copies don’t appear to be readily editable. I’ve since been working with the main drive. Most of my work toward this project has been the slog through the hard drive, looking for partitions, multiple times over.

I tried burning a Win10 recovery CD, which needed a few megabytes more than a single layer, single sided DVD could hold. A new, low-end USB drive has joined the fold, and it’s the biggest thumb drive I now have, weighing in at 16 GB. The ISO Microsoft gave me didn’t work: “operation [sic] not found”. I tried booting to it from my GRUB CD, but was told something or other was invalid. I also tried some small distro called Trinity Rescue Kit, but again, it’s the right tools in inexperienced hands.

I’m getting tired of this. I want to move on, and doflagie even told me I could be on this for months. I just want to run a general recovery program and see what I can grab from the mess of things I have now, then try again after putting the ISO back for another pass. After that, this project needs to go rest in peace.

Addendum: I was going to make this post a two parter, but in all reality, I’m done with this project. I don’t have enough content for a second half.

I carved out what I could with some program I forget the name of. It combed through the remains of my hard disk and spat out a bunch of files. At least it had the courtesy to separate them by file type, because when I opened the PNG and JPG folders to sift through the ashes with a GUI viewer, my laptop chugged at tens of thousands of tiny, little files.

I had to learn the find command to weed out the smaller ones. I figure, in the hands of someone who knows what they’re doing, it can fully replace all the functions you would expect from a GUI file viewer except actually generating a preview. One little adventure here was when I had around 22,000 or so PNG’s larger than one kilobyte out of roughly 50,000 PNG’s total, yet zero of them registered as smaller than 1k. The difference sat exactly at 1k.
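
The likely culprit is that find rounds file sizes up to whole units before comparing, so even a few-hundred-byte file counts as one full 1k block. A quick demonstration (the file names here are made up):

```shell
mkdir -p sizedemo
head -c 2048 /dev/zero > sizedemo/big.png
head -c 500  /dev/zero > sizedemo/small.png

# big.png rounds up to 2 blocks of 1k, so it counts as "larger than 1k"
find sizedemo -name '*.png' -size +1k

# small.png rounds UP to 1 block, so it is NOT "smaller than 1k";
# -size -1k only ever matches completely empty files
find sizedemo -name '*.png' -size -1k
```

That rounding would explain a “smaller than 1k” search coming back with zero hits despite tens of thousands of tiny files.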

Another small adventure was when I moved the 1k files into their folder, but then it tried moving each file over and over again in an infinite loop. I immediately knew I was dealing with a recursive directory error.

When I finally went and combed through the reasonably sized PNG’s, it was mostly stuff I probably had lying around in a cache or swap at some point. Other bits were system icons, like forward and back arrows. The JPG folder looks more promising, so I hope to recover more memories from there.

I’m done with this project. As with a few of my other projects, I need to release this one before all possible progress is made. Data recovery is expensive for a reason. I’ve salvaged what it was worth salvaging. Any additional data the professionals might glean from my drive isn’t worth the price.

Final Question: What projects have you had to lay to rest with no intent to ever finish?

Family Photo Chest Part 7: NAS Hardware

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m assembling a NAS (Network Attached Storage) system. Let’s get started!

These things don’t come cheap. Between the case and four large hard disks, the whole system costs as much as a decent computer. It’s also worth noting that this is the first new system I’ve covered on this blog, aside from Raspberry Pis or similarly powered units.

In a way, it was actually a small blessing that this aspect of the project was delayed. In the time between when I started researching NAS and now, Western Digital was found to be quietly shipping SMR (Shingled Magnetic Recording) drives where buyers expected PMR (Perpendicular Magnetic Recording). This is bad for me because shingled drives are designed to overlap their magnetic tracks while writing so only the narrow read head can fit when the deed is done, and I intend to store more than just static data on here.

We bought four of the smallest non-shingled model. At seven TB a pop, we should be able to dump all our existing data onto these things two or three times over should we see fit, but only after everything is set up, and that’s after reserving a full drive’s worth of capacity for parity.

Parity is a redundancy technique that provides some room for error. RAID 5 (Redundant Array of Independent Disks) takes the bits in the same position on each drive, adds them up while ignoring all carries, and stores the result as a parity bit. It then distributes this parity data across all the drives, so the more drives in the array, the more efficient your data loss protection. If one drive is suddenly zapped out of [electronic] existence, you just need to replace it with another of equal or greater storage capacity, and the array can repair itself from there.
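
That “add and ignore carries” operation is better known as XOR, and the rebuild trick is easy to demonstrate with shell arithmetic on three one-byte stand-in “drives”:

```shell
# Three data bytes and their parity (the XOR of all three)
d1=$(( 0x0f )); d2=$(( 0xf0 )); d3=$(( 0x33 ))
parity=$(( d1 ^ d2 ^ d3 ))

# Pretend drive 2 died: XOR the survivors with the parity to rebuild it
rebuilt=$(( d1 ^ d3 ^ parity ))
printf 'parity=%#x rebuilt=%#x\n' "$parity" "$rebuilt"
```

Real RAID 5 does this per bit position across entire drives and rotates which drive holds the parity for each stripe, but the arithmetic is the same.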

The NAS system we got was made by Synology (just to be clear, this is not a sponsorship, and I have never had a sponsorship), and I had some quality time with my father as we assembled it. Assembly wasn’t toolless, and the case was a little hard to get back on, but the photographic instructions were easy enough to understand, though they could have labeled their screws as being for either HDD or SSD.

Software wise… I’ll be spending another post setting things up, but I did take a self-guided tour of the place already. Synology has their own operating system called DSM (DiskStation Manager). I saw that and panicked. My shtick has been Linux, and at least trying to do things myself until I stop learning and get frustrated. Long story short, I got bored and found myself reading the license agreement. I didn’t understand it all, but I found references to the GNU license in there.

According to one post on Reddit I have since lost track of, people should be cautioned against toying around with different operating systems on this NAS solution because there’s a chance it could brick the system; I really don’t want that. Besides, it looks to me like DSM is built on at least the Linux kernel, and if that thread I found is to be trusted, it’s a stripped down version of Debian.

While writing this, I poked around over SSH and confirmed Linux, but determining the parent distro is outside my abilities right now. It sure didn’t feel like a normal SSH experience. It dumped me straight into the root directory instead of a home folder (understandably without the disclaimer about free software and no warranty), and when I got around to sudo whoami, it lectured me about the basics of superuser privileges, mirroring my earlier experiences in the graphical web interface.

From the moment I started installing the DSM operating system, I noticed how they managed to design their user experience so that anyone with basic computer literacy can use it, yet without insulting the intelligence of their power users. I didn’t need the utility they provide to find the device’s local IP. I don’t need their online services to bypass port forwarding or their picture and video galleries, but they’re there for people who want them.

My only substantial complaint so far is that the interface looks like it wants to be Windows, but clearly isn’t. The blue color scheme is there, but all the icons are off. I’m also constantly rummaging through documentation for advanced features I don’t have a reason to know about yet, let alone safely disregard for the time being. I suppose I was a little miffed about GoldenOakLibrary being one character over the 15 character length cap, but by removing a couple letters near the end, it’s still readable if you know what you’re looking at.

My big plans for this system at present are to make some sort of division between family photos, computer backups, and general storage. Once that’s done, we can finally start scanning, among other things.

Final Question: What systems have you used that don’t baby down their interfaces with scary warnings to the point of alienating power users who already knew what was going on?

Minecraft Server Graduation

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am reviewing the history of the Minecraft server my family started for us and a couple friends. Let’s get started!

It all started when we left a larger community when its figurehead started appealing towards a different audience and the culture shifted a little too far for comfort. My sister assembled a series of datapacks from the Hermitcraft server that would give a Vanilla feel to the game while adding a few fun things, resulting in the “Creepers and Cream” pack.

Using a few old parts, I spent a few intense weeks working on Micro Core Linux, trying to squeeze every last drop of performance out of the aged CPU. The biggest takeaway from that time was learning about the root directory, /. In the end, Java did not want to work for me, so I capitulated and used a ready made distro, MineOS.

MineOS is a Linux distribution built to host Minecraft. All overhead is trimmed down, and the firewall is sealed tight, only opening ports absolutely vital for operation: port 22 for SSH, port 25565 as the default Minecraft port, and port 8443 for HTTPS access to the WebUI. While I wasn’t too sure how much this WebUI would cut into the precious resources of the host machine, I will say it has been worth it.

One of the early optimizations was adding G1GC to the Java arguments. With the default Java garbage collection, Minecraft would just keep carving out more and more RAM and then try to clean it all out at once: literally the worst case possible for a game where things are constantly being loaded and unloaded. I added all the spare RAM I had laying around, and I still got a predictable big crash after a couple weeks of nonstop play.
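
For reference, the relevant part of a launch line looks something along these lines (the heap sizes here are examples, not a recommendation for every machine):

```shell
# Fixed-size heap plus the G1 collector, which reclaims memory in small
# incremental steps instead of one giant stop-the-world pause
java -Xms4G -Xmx4G -XX:+UseG1GC -jar minecraft_server.jar nogui
```

Pinning -Xms and -Xmx to the same value keeps the heap from growing and shrinking on its own, which is one less source of hitches mid-game.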

Things were largely smooth after that. I set up automatic daily restore points and weekly archives. We ended up investing in a 1 TB drive when the archives filled the old one and the server declined to continue running. My laptop’s Windows drive was formatted in the crossfire.

Over time, our community grew. We aren’t a huge server, but we have a few regulars. The limiting factor I cannot control is the CPU. It has four cores, but get just two people on there in different parts of the map, and it maxes out a single core, and the server starts skipping ticks every so often. It’s still very playable, but I’d like to not see that message, if possible. There are three other cores sitting idle, but Minecraft servers are still unable to use more than one core at a time. So that is our limiting factor…

At least it was, until people started having random connection issues. It looks like our ISP has seen fit to make things easier for the general public, which is fine, as long as you don’t alienate your power users. Long story short, they’ve moved things around and made their online protection more aggressive, labeling at least one of our players’ IP addresses as malicious. We told it to allow access, and it only gave him a month. I tried looking into port forwarding, but the setting had been moved somewhere I couldn’t find. Tech support was no help because the line dropped during a hold and nobody ever called back.

In the face of these challenges outside our control, we are now considering moving to a hosting service. We don’t know which one yet, or when we will move over, but in the meantime, I’m trying to open a creative world on the same machine. The first server rarely uses more than 4GB of RAM anymore, and there are still three other cores, though I will be saving at least one for normal operations and miscellaneous overhead.

Final Question: If you play Minecraft, what version did you start playing on?

“Beowulf Cluster:” Part 5: DHCP and NAT

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am getting a head start on this post, so I don’t yet know where the end point for this segment is. Let’s get started!

I hate networking. This router project is really taking a lot to learn. I’m losing track in my memory of what point I was on when the official break in weeks happened. Right now, I’m just looking forward to the time when I can say I have a custom router, and setting up the “Beowulf Cluster” I originally set out to build (not that I can really call it that anymore) should be one more week™ after that.

Once I had the network interfaces set up correctly, DHCP wasn’t far behind. I had a few semicolons missing after some lines, but that was about it. I plugged my laptop into my little subnet, and the Pi assigned it an IP in the range I had told it to. It just wouldn’t let me at the Internet at large, even though I hadn’t yet changed iptables off its default of accepting everything. Of note: I was unable to use the Internet even while still connected wirelessly. Two weeks ago (was it only that short ago?), I would have just poked around with “black box reasoning” before concluding that Ethernet superseded Wi-Fi because it is faster, all other factors being equal.
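
For the curious, the semicolon-hungry part of a dhcpd.conf looks roughly like this (the addresses are examples, not my real subnet):

```
subnet 192.168.2.0 netmask 255.255.255.0 {
    range 192.168.2.100 192.168.2.200;   # the pool my laptop drew from
    option routers 192.168.2.1;          # the Pi itself, as the gateway
    option domain-name-servers 1.1.1.1;  # miss one semicolon and dhcpd balks
}
```

Every statement inside the braces ends in a semicolon, and dhcpd refuses to start over a single missing one, which is how those errors announced themselves.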

Further investigation led me to believe I should look into NAT (Network Address Translation). It’s disabled by default, likely because if you need it, you’ll soon learn about it, like I’m doing now.

A couple guides later, I had uncommented a line in a config file to allow IP forwarding, but when I used modprobe ip_forward to add NAT abilities to the kernel, listing out my iptables rules gave a warning about legacy software.
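
The usual forwarding-plus-NAT recipe I’ve pieced together amounts to two steps; wlan0 here is the upstream interface on my Pi, and the name may differ elsewhere:

```shell
# 1. Let the kernel forward packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1

# 2. Masquerade traffic leaving the upstream interface so replies
#    can find their way back through the Pi
sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
```

I suspect the legacy warning relates to the newer nftables backend for iptables rather than these rules themselves, but that’s a question for a future week.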

***

Well, this write up is coming due, so here’s what I’ve got.

I worked with a couple friends on my private Minecraft server –yes the one I’ve covered here before– by the names of doflagie and TheRSC. We didn’t go messing around with NAT tables, but we did inspect the DHCP configuration files some more.

At least, that was the plan at first. I accidentally broke an interface by editing its file in place. Rebooting cut off my connection. I SSH’ed in over the other interface and promptly broke that one too. I had to swap around cables to fix those config files, but after that, it was all good.

With my little review from last week out of the way, I went to weed through the actual DHCP config file. I wasn’t sure where it was, but before I resorted to finding it again online, I noticed one terminal on my workstation was already in a local DHCP directory when it wasn’t SSH’ed into somewhere else.

As part of my workflow, I hooked BlinkyPie up to the Ethernet subnet. Thanks to the default Raspbian settings, eth0 overrode wlan0 as the default network interface, so I didn’t have to disable or disconnect Wi-Fi. Otherwise, Blinky is standing in for a generic microcomputer on this micronetwork.

It took several hours, but when I rearranged the DHCP config on the Pi 4 to reflect the subnet and not the wider home network, I was able to reach out from Blinky through the Pi 4. However, I was only allowed as far as the gateway, and no farther.

These results are underwhelming. Each problem feels like it is composed of a chain of smaller problems, each leading to another such chain. I’m making progress, but it feels like the goalpost is moving on me.

Final Question: How hard should this really be? Sure, there are higher-level programs, but how long should a low-level approach like mine take to figure out?

“Beowulf Cluster:” Part 4: Network Interface Connections

Good Morning from my Robotics Lab! This is Shadow_8472 and today, I am working on that Pi4 network piece again. Let’s get started!

I am having the hardest time with networking. What could have been a confusing, if possibly straightforward, black box operation has turned into a subject of study where I’m likely to be able to do my own thing later on. With that better foundation, I just might add DHCP back into my general plan, since such a system would be a much more useful tool in an upcoming project I’ve been looking forward to.

As stated in an earlier post, I’ve been cobbling my way along using a series of incomplete tutorials and seeking help understanding the gaps. Many video tutorials for networking are old, have bad audio quality, feature a presenter with a thick accent I can’t make out, or suffer some combination of the three. Text is unfortunately the way to go.

My functional goal for this week was to get the Pi 4 functioning normally online with a static IP. Through whatever mishmash of tutorials, the following is what I’ve cobbled together as I understand it.

The Pi 4 has two network interfaces, one called eth0 and another called wlan0. These names represent an old standard on the way out, but for this week at least, I’m sticking with them. I started in /etc/network/interfaces and set things up so each interface has its own configuration file in the directory /etc/network/interfaces.d/. Eth0 was straightforward to set to static, but wlan0 needed another pointer elsewhere for the WiFi name and passkey.
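The split looks something like this in ifupdown syntax (the addresses are placeholders, and the main /etc/network/interfaces file needs a source line to pick up the directory):

```
# /etc/network/interfaces -- just pull in the per-interface files
source /etc/network/interfaces.d/*

# /etc/network/interfaces.d/eth0 -- static address (example values)
auto eth0
iface eth0 inet static
    address 192.168.10.1
    netmask 255.255.255.0

# /etc/network/interfaces.d/wlan0 -- WiFi name and passkey live elsewhere
auto wlan0
iface wlan0 inet static
    address 192.168.1.20
    netmask 255.255.255.0
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```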

My first big problem was in how I had to copy configuration file examples by hand. At my worst point, I had some equal signs where spaces belonged and spaces where underscores belonged. The former was a mistake I made while looking at two related, but dissimilar, files, and I diagnosed it by paying attention to the error messages when I ran sudo ifup wlan0.

The latter was a mistake I made because I was copying things by hand without SSH, and a dark mode plugin I use actually cut off the bottom row of each line of text in a code box, including the entirety of each underscore. This one only told me the interface failed to start. Someone going by Arch on the EngineerMan Discord pointed me to journalctl, some sort of log viewing utility. I piped the wall of text through grep to pick out wpa_supplicant, a utility that kept complaining. Journalctl also pointed out errors around the WiFi SSID and psk fields, which need to be quoted strings.
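For reference, a working wpa_supplicant.conf follows this shape (the SSID and passphrase here are obviously made up). Note that unlike the interfaces files, this format uses equal signs, which is exactly the kind of dissimilarity that tripped me up:

```
# /etc/wpa_supplicant/wpa_supplicant.conf -- sketch with placeholder credentials
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="MyHomeNetwork"                  # string fields need the quotes
    psk="correct-horse-battery-staple"
}
```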

Assorted errors along the way included a mistaken time zone, which was easily taken care of. I also had a surprise when the contents of /etc/wpa_supplicant/wpa_supplicant.conf vanished on a reboot: it turns out I had a blank copy in /boot, and as Raspbian boots, it moves certain configuration files from that folder to overwrite their permanent counterparts, leaving me guessing as to what had already been done.

With stable network interfaces, I started testing them, but I wasn’t able to ping anywhere successfully. I noticed the Pi was using its IP from the eth0 socket, and not wlan0, for some reason. Eventually, I either found or was directed to route or route -n: basically the table where Linux decides which door to send packets out of as they leave. Eth0 was set as the default when no other match on the table was found, so all my ping packets were being sent inward and getting lost instead of being sent out into the wild.
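To illustrate, a routing table in that broken state would look something like this (addresses made up). The 0.0.0.0 destination is the default route, and here it points out the wrong door:

```
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.10.1    0.0.0.0         UG    202    0        0 eth0
192.168.10.0    0.0.0.0         255.255.255.0   U     202    0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     303    0        0 wlan0
```

Any packet that doesn’t match a specific subnet row falls through to the default line, so everything bound for the Internet was leaving through eth0.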

I spent time studying this new table and found a clear, black-box-style tutorial I was able to reverse engineer LINK. Manipulating the table directly was straightforward enough, but what I failed to realize was that parts of the table are generated fresh each time a network interface goes down or comes up. My work totally collapsed when I rebooted, reverting the default interface from wlan0 back to eth0.

The fix (shown as “Part 2” in the black box tutorial linked above) led me back around to the beginning. Part of the configuration of different interfaces includes a setting called gateway. I actually had to move it from eth0’s config file over to wlan0’s config file.
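In ifupdown terms, the fix amounts to moving a single line; with made-up addresses, wlan0’s file ends up looking something like this:

```
# /etc/network/interfaces.d/wlan0 -- the gateway line now lives here,
# not in eth0's file
auto wlan0
iface wlan0 inet static
    address 192.168.1.20
    netmask 255.255.255.0
    gateway 192.168.1.1        # the home router; this makes wlan0 the default route
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```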

Admittedly, I am a bit disappointed in this week’s results, but I learned a lot more than a total black box experience would have afforded me.

Final Question: Have you ever poked at a “black box” and learned how it worked?

Family Photo Chest Part 7: How to Plug In a Scanner

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I would like to share with you the inSANE time I had trying to set up a scanner without root privileges. Let’s get started!

SANE is a resource for Linux to streamline the setup of scanners. From what I understand, it’s supposed to just make things easy.

Unfortunately, there is (or was) a bug in the version written for Debian Buster. I honestly thought this would be easy; instead, I spent the whole week trying not only to fix the issue, but to understand what I was doing as I was doing it. Special thanks go to all the people who at least tried to help me on the various Discord servers I broadcast my distress calls on.

As with any problem, diagnosis is the first step. Tutorials, in general, expect a clean installation, but I’m past that point. I’ve already tried, and failed, to complete this procedure. I at least know how to look around with ls -l and other commands.

I started out with sane-find-scanner and lsusb, and eventually figured out that while I (my user account) had permission to use the scanner, I didn’t have permission to access the USB bus it was connected to. A little research led me down the path of exploring the groups mechanic of Linux. Groups appear very useful, but I have yet to figure them out fully. What I did learn was that membership in the scanner group wasn’t enough to let me scan in and of itself.
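The basic group commands are easy enough to sketch; the user and group names below are examples, and the usermod line needs root and a fresh login to take effect:

```shell
# List the groups my account currently belongs to
groups

# Same information via id, one name per line
id -nG | tr ' ' '\n'

# Adding a user to a supplementary group (requires root; shown only):
#   sudo usermod -aG scanner shadow8472
```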

It took me a while to find that bug report, but since it was crossed out and a note said it was fixed in some future build, I updated everything from the apt repository and… nothing! Reboot: nothing!

I re-explained my support issue several times across multiple Discord servers, and I sometimes got someone to try and help me, but eventually, I ended up going back to that bug report and reading up on it.

Buried within what felt like an endless amount of terminal output with the occasional e-mail header, I came across a workaround. It called for making a symbolic link between a couple of files, then putting some text in a file elsewhere in the filesystem. The instructions sounded familiar from my last attempt, and they were consistent with other Linux help forum posts from different places.

I spent about a day trying to figure out why the symbolic link wasn’t showing up. The folder where I thought it was supposed to appear already contained a link. There were three files in the directory, and when I used the --verbose flag, I saw three entries. I rummaged around in ln --help and found that one of the other flags causes ln to overwrite existing links. I figured that must be happening here, but the timestamps weren’t changing from a date last year when I had supposedly updated them.

I eventually decided to play around with a symbolic link in an isolated spot in my home directory. I had my sandbox directory with another directory in it. I made a test .txt file in the sub directory, and tried making a link directly in the sandbox’s root using the same flags as in my problem link. The link didn’t appear in the root, but it instead replaced the text file, and the terminal warned me about it being a broken link by rendering the link in red instead of the normal cyan. I fiddled around with it some more by remaking the test .txt file in the root and leaving the link in the sub directory. I had a few missteps, but I managed to access the file by using the link.
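Here’s a minimal reconstruction of the kind of symlink gotcha I was fighting (file names made up, and the exact failure I hit may have differed). The catch is that a symbolic link’s relative target is resolved from the link’s own location, not from wherever you were standing when you created it:

```shell
# Build a sandbox with a subdirectory and a test file
mkdir -p sandbox/sub
echo "hello" > sandbox/sub/test.txt

# Pitfall: this target is resolved relative to sandbox/sub/, so the link
# actually points at sandbox/sub/sub/test.txt -- a broken link
# (ls renders it in red instead of the normal cyan)
ln -s sub/test.txt sandbox/sub/broken.txt

# Correct: the target is valid relative to the directory the link lives in
ln -s test.txt sandbox/sub/good.txt
cat sandbox/sub/good.txt    # prints: hello
```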

Back in the main area, I found the links I had been overwriting in the other directory by listing them with ls -l and scanning for a timestamp that wasn’t in 2019. Three entries popped out at me. On closer inspection, the link I thought was a rewrite dud was, in fact, linking to a file within the same directory, but with a different name. It looked like it was meant to catch programs calling for the old version of a program and redirect them to a slightly updated copy, just like forwarding mail.

The rest of the workaround was simple. I learned about the touch command, a little program to update a file’s timestamp or create a file if it isn’t there. I rebooted and had a good feeling when I heard the scanner move a little as Linux was starting up. Sure enough, I was able to start the GUI interface without superuser.
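Touch is simple enough to demonstrate (the file name here is arbitrary):

```shell
# Create an empty file if it doesn't exist yet
touch notes.txt

# Running touch on an existing file only bumps its modification timestamp
touch notes.txt
ls -l notes.txt
```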

All in all, this week taught me a near-miss lesson about one of the more interesting features in Linux, and apparently Windows as well.

Final Question: Who’s ready to hook up a printer?