Squashing All My Computers into One: Part 1

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am centralizing storage across my several computers. Let’s get started!

Computer Drift

One of my favorite things about Linux is exploring the possibility space of what a computer operating system can look like. But maintaining multiple workstations can and will leave you wondering where that one picture is saved, or whatever happened to that document you know you filed under blog drafts. I have no fewer than three computers (four or more if you count my laptop and ButtonMash as separate given their common install, or count my dual-booted machines). It’s high time I consolidated my computers’ respective identities to reflect me as a single user, given my access to GoldenOakLibry, the family network storage.

Project Overview

One would think the process would be as simple as dumping everything in a central location and spreading it all back around, garbage and all. Alas, subtle differences in installed programs, or in versions of the same program, make this approach unsuitable.

My best bet will be to think backwards. Not everything will be shuffled around; directories supporting install-specific programs should stay on their respective computers. Backups of such files are fine, but I can accidentally damage other instances if I’m not careful. I’ll need to tailor a number of rsync commands and schedule them to run automatically with cron. As this topic is basically day-of filler while I work on a larger project, the full job is a little out of scope for today.

My goal for today is to make a backup I can operate manually and later automate. If things go well, I can see what I can do about Rsync, but Cron will need to wait for another day.

GUI File Transfer

The terminal is an important skill when managing a Linux ecosystem of multiple computers. However, some things, such as managing picture files, inherently work better in a graphical file manager. While preparing to write today, I noticed that places like my various Downloads directories have gotten quite messy after a few years of Linux.

I wasn’t the biggest fan of jumping between workstations all day, so I searched for a way to have the Dolphin file manager operate over SSH. The first result to catch my attention was FISH (Files transferred over Shell protocol). SFTP (SSH File Transfer Protocol) fills a similar niche. Each would be an interesting research topic, but for my purposes today, they both work equally well as long as SSH is configured to use authentication keys.
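For the record, both protocols are reached the same way from Dolphin; the username and hostname below are stand-ins for my actual machines.

```
# Typed into Dolphin's location bar (Ctrl+L); user and host are stand-ins:
fish://user@buttonmash/home/user
sftp://user@buttonmash/home/user
```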

Derpy’s Backup

The easiest place to start was my DerpyChips workstation, as that’s the one I’m working from. Documents was fairly easy to clean out: I had some blog drafts and some other odds and ends I sorted into proper places on the drive.

The dreaded Downloads directory was relatively tame on Derpy. Nevertheless, I still spotted elements from at least four distinct projects ranging from incomplete to long done or abandoned. I even found an instance of GraalVM I may have been running straight from Downloads. My goal is an empty directory. If it will update before I need it again or I won’t need it ever again, it’s gone. If I’m unsure, I’ll find another home for it. I similarly emptied out any directory intended for file storage. Pictures was simple this time, but I expect I’ll need a more elaborate structure once I start trying to organize additional computers’ worth of memories.

ButtonMash’s Backups (Debian and MineOS)

Things got a little more interesting when I started moving things over from ButtonMash. At first, I set up a Dolphin instance with ButtonMash’s home directory on the left and ButtonMash’s view of GoldenOakLibry on the right, but when I got a warning about not being able to undo a delete, I thought twice. I had a deletion accident last phase and relied on an undo action, so now it’s Derpy’s view of the share on the right instead.

I was right about needing to take pictures slowly on this one. Some pictures fit in better with my blog, while memes I felt worth saving went in their own directory within the more general Pictures one. But I don’t need copies of everything everywhere if I can just access the drive; possibly just my favorite desktop wallpaper and my avatar, if that. I made a directory for those two and any others I may want to spread around.

A file manager over SFTP understandably has limitations. Not all files can be opened directly, audio files in particular, and some graphical files don’t render previews. When I try to preview an archive, it must first be copied over as a temporary file.

I had another accident while moving some old Python projects over. For whatever reason, be it permissions or simple corruption, some files didn’t copy over cleanly. I fished around a little more, then gave up and deleted both source and destination, as I expect another copy was made when I cloned my laptop to its internal drive.

Thanks to this blunder, though, I was more careful when it came to the family’s Minecraft servers from when we were running MineOS. I encountered an error while copying, so I fell back to running rsync directly from ButtonMash. Even then, I had to elevate permissions with sudo to finish the job.

Takeaway

I’d like to say I’m somewhere around halfway to my goal for today, but if I am to take this task seriously, I’ll need to go back further and reintegrate any old backups I may have lying around. By that count, I have at least eight computers to consider, more if I count Raspberry Pis and any recursive backups I may find.

In some ways, this project is not unlike my experience with synchronizing Steam games manually, but on a larger scale. I’m having to re-think my structure for what I want backed up where as well as how I’m planning to access it. This is not a simple grab and dump.

Final Question

Have you ever made a comprehensive and accessible backup of all your computers, present and surviving?

Linux 101 with Leo_8472: Part 3: Minecraft

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am helping my father get his Debian install to a livable level. Let’s get started!

Reality Checks and Cancellations

My goal with talking my father through a Linux install is to grow my own abilities as well, but our time together is often limited, leaving little room for research during our work sessions.

Debian Testing sounded like a good idea until we started reading about it. I knew there would be warnings, but the booklet’s worth of information about the special care it requires was more than a little intimidating. It’s still something I’d like to try some day, just not while back-seat computing.

Likewise, setting up a Windows look-alike desktop environment was beyond the scope of our allotted time. The options I had located were simply themes built upon actual desktop environments. Again: something I’d like to try when I have a machine to do more advanced research on.

A lot of our work this week was spent reading on these canceled side projects. We unfortunately had to reduce our scope, but we now understand our limitations better for it.

MultiMC with GraalVM

Overall, my family’s favorite game is Minecraft. I’m trying to move the family away from the default launcher, so we installed MultiMC, a third party launcher that doesn’t confuse your versions.

One download option for MultiMC is a .deb file labeled as if for Ubuntu. Installation took a few tries: we went through aptitude, apt, and finally dpkg before it installed locally, and we had to look up how to address dependencies. While it wasn’t painless, it wasn’t memorable either, but it was one example of why straight Debian may not be the best fit for someone looking to get into Linux without a guide.
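The sequence that finally worked went something like this; the .deb file name is a stand-in for whatever MultiMC actually ships.

```shell
# Install the downloaded package directly; dpkg does not resolve
# dependencies, so this can leave the package half-configured...
sudo dpkg -i multimc_1.4-1_amd64.deb
# ...and this asks apt to pull in whatever dependencies are missing.
sudo apt-get -f install
```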

Minecraft uses Java. The current version of the game uses the relatively new Java 16, which isn’t a common find on Debian-based systems unless purposefully installed. GraalVM is my preferred open source distribution of Java, and they have experimental builds for Java 16. With my recent return to the game, I’ve been through the following procedure a few times already.

GraalVM can be downloaded from their GitHub. They provide builds for both amd64 and aarch64 architectures. We didn’t happen to know which one we needed, so we just chose one with a mental note to be prepared in case it wasn’t recognized, which it wasn’t. While GraalVM will work from anywhere as long as it’s called, the proper place to install it is /usr/lib/jvm/. On my other installations, the jvm/ directory already existed and was host to other Java installations, but this time we had to make it ourselves. MultiMC recognized GraalVM shortly after we installed it properly.
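Our install boiled down to a few commands. The tarball name is from memory and the release numbers are stand-ins, so check GraalVM’s releases page for the real thing.

```shell
# Which build do we need? x86_64 means the amd64 build; aarch64 means aarch64.
uname -m
# Create the conventional home for Java installs if it doesn't exist...
sudo mkdir -p /usr/lib/jvm
# ...and unpack GraalVM there. MultiMC can then be pointed at
# /usr/lib/jvm/<graalvm directory>/bin/java
sudo tar -xzf graalvm-ce-java16-linux-amd64-21.2.0.tar.gz -C /usr/lib/jvm/
```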

Takeaway

There’s a special feeling when you can reference your own work for solutions to problems you’ve already solved. The standard Java path is something that took me a while to learn, and I still had to look it up on my own system while writing. A week or two ago, I was happy to dump Graal wherever I could find it –one instance was even running it from my Downloads directory– but it’s best to put utilities on a computer where others can expect to find them.

In the meantime, we are practicing terminal with that… hard to read terminal emulator that came with LXDE. My father tried to change the colors, but his settings didn’t stick on restarting the program.

Final Question

Have you ever self-referenced for answers?

Linux 101 with Leo_8472: Part 1

Good morning from my Robotics Blog! This is Shadow_8472, and today I am talking my father through his first Linux install. Let’s get started!

Installing Linux isn’t as difficult as people think it is. If you ask me, the hardest part is the research. You need at least a little experience to know what your priorities in an operating system are or even could be, but you won’t get that experience without exploring the possibility space.

For this project, I’ll be helping my father install Linux. However, he will be the one at the controls and I’ll be “over the phone” from the same room, telling him what hardware to use, what software to download, and what commands to enter.

Step 1: Hardware

Linux doesn’t run on thin air, but it can run on otherwise low-spec hardware. For today’s project, we’ll be installing over my first Linux install (from before I really knew what I was doing), which lives on the hard disk drive originally from ButtonMash, my server/photo trunk workstation. For the rest of the system, we will be using his existing computer, ButtonMash’s hardware sister. I’ve selected one of my thumb drives to host our install media.

Very important: back up your data. THE LINUX INSTALLATION PROCESS DESTROYS ANY DATA IT TOUCHES. The USB we’re using is just outdated installation media for a previous system. I started talking my father through the backup process, going through a number of commands, but when I remembered an existing backup, I went and laid eyes on it before giving the final clearance to overwrite it.

This section went well, but I did have to fetch an extra SATA cable to install the drive with.

Step 2: Installation Media

Linux is often installed from some sort of installation media, usually a USB stick. Normally, I’d use the dd command and a setup where I won’t nuke a different drive by accident. I’d rather not risk an accident, though, so I spent a while searching for some sort of Linux USB flasher.
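For reference, the dd route looks like this. The demo writes only to temporary files; with real media, of= would point at something like /dev/sdX, and getting that one letter wrong is exactly the drive-nuking accident I wanted to avoid.

```shell
#!/bin/sh
# Safe demonstration of the dd flashing pattern using plain files.
set -e
iso=$(mktemp)   # stands in for a real installer image
dd if=/dev/zero of="$iso" bs=1M count=4 status=none

usb=$(mktemp)   # stands in for /dev/sdX, the target USB device
# bs=4M copies in large blocks; conv=fsync flushes to the device
# before dd reports success.
dd if="$iso" of="$usb" bs=4M conv=fsync status=none

cmp "$iso" "$usb" && echo "images match"
```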

Turns out installing Linux from Windows is a well-documented rite of passage, and dd is what you’re expected to use when jumping from one Linux to another. I eventually dropped a question in the Engineer Man Discord, and user localhost recommended balenaEtcher, so we’ll be trying that. I chose Debian to install because I’ve had some success with it in the past, but to keep things fresh, I’ll be moving him to the testing branch.

balenaEtcher came without any sort of checksum, but Debian provides one. When given the option, always verify the checksum, especially for something as core as an operating system image.
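The checksum ritual itself is short. Here the SHA256SUMS file is generated on the spot against a throwaway file purely to show the mechanics; in real use, that file comes from debian.org alongside the image.

```shell
#!/bin/sh
# Verify a download against a published checksum list.
set -e
cd "$(mktemp -d)"
echo "pretend ISO contents" > debian.iso
sha256sum debian.iso > SHA256SUMS   # normally published upstream
sha256sum -c SHA256SUMS             # prints: debian.iso: OK
```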

This section went smoothly, aside from an overstuffed Downloads directory providing us with plenty of distractions and command line practice. balenaEtcher came in a .zip file, so we created a decompression chamber my father dubbed his Bombshelter. Once we were ready, balenaEtcher made it clear which drives were which, as advertised. The only thing remotely worth complaining about was the self-promotion for premium and other products. Of note: it’s multiplatform; if you’re looking to switch from Windows, it’s a viable choice.

Step 3: Installation

Now for the main event and my father’s first important choice: dual booting. When multiple bootable drives are present, the BIOS selects which drive to boot, and the bootloader on that drive can provide a list of operating systems to boot into. When I wanted to set up one of my computers for multiboot, I wasn’t able to configure it manually, even after physically disconnecting all the drives I cared about.

Debian 11 booted straight into an installer, where the graphical installation process mostly took care of itself. Overall, though, this step was a little harder. Names are hard, even for a computer. I know about Logical Volume Management (LVM), but not enough to recommend it for this particular project; when we tried it, it looked ready to mess with the existing disk.

We had our pick of desktop environments to install, but as I plan on taking him to a place where he can further customize it, it doesn’t matter so much. I nudged him toward trying KDE in the name of exposing him to something new, even though it’s typically a little heavier.

We had some trouble getting the computer to boot into Debian proper. I suspected the GRUB list was only outputting to VGA, until we pulled up a BIOS-level boot list and forced the machine to boot the new Debian drive, at which point the GRUB menu offered access to Debian or Mint.

As I feared, KDE was a bit slow.

Takeaway

Congratulations to my father on his first Linux install. I had more planned, but we simply ran out of time. I expect a part 2 to follow shortly where we’ll focus on getting from a point-and-click environment to something he can feel more comfortable in.

Final Question

Have you ever tried to pass on a skill to another person?

A Minecraft Server Sent Me Source Diving

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am building a private Minecraft server for plugin testing. Let’s get started!

Minecraft Paper Server

Any sufficiently popular computer game will eventually attract someone with the ability and drive to modify it. Others will come along and make tools to lower the barrier of entry so even more people can customize their experiences. Barring legal action from parent companies or a drop in popularity, the cycle continues to the point someone with basic computer literacy can find the resources he needs to join the modding community as I did almost ten years ago after my friend introduced me to Minecraft Bukkit servers. Shortly afterwards, I had myself a Last Airbender inspired Minecraft server.

In the year I was away from Minecraft, my mother and sister have been helping on DS9Fireblade’s moderation team on PhoenixCraft. DS9 has selected a number of plugins to manage the chaos that often accompanies publicly available servers, but it’s hard to master all the commands when you have to worry about not breaking anything.

Server Building

Little has changed about the fundamentals of Minecraft server construction. A modded server provides a stable platform imitating the vanilla game while giving plugin or mod makers a space to hook into without interfering with one another.

DS9’s server runs Paper, a fork of Spigot from the Bukkit family of modded servers. Ideally, I would take the time to track down the exact version for 100% compatibility, but I was having trouble finding the download. I made the executive decision to just use the newest version of each piece of software and adjust things as needed.

I loaded the server onto ButtonMash, even though it’s still technically on photo trunk duty until that project is done, idle as it is. I remembered a series of topics I covered a while ago about how Minecraft doesn’t do well with default settings, and how G1GC (the Garbage-First Garbage Collector) makes things go more smoothly in terms of long-term problems. I wasn’t fond of doing all that research again, so I reviewed this site [1], which I do not look forward to citing, but which offers a list of Java flags to use and what each one does.

Months of idling were not kind to Button’s RAM: it was about maxed out with Xorg (the graphical server), even after I closed everything, so I rebooted. It defaulted to its internal Minecraft server drive, which I have slated for a future Linux install some day. Around ten minutes of digital technical taps to the BIOS and the removal of a bent thumb drive later, I got it back into Debian.

The server still refused to start. Java was up to date with the repositories at Java 11, but the server wanted Java 16. I just so happened to have solved the exact same issue with a Minecraft client earlier this week, so I downloaded an appropriate version of GraalVM [2]. Since I don’t plan on this server going anywhere, I saved it within the server’s main directory and edited my serverstart command accordingly.

The server was a bit more cooperative after that. I signed the EULA and modified a comment about tacos supposedly being the best food (why is that even in there?). Once I confirmed the server was running, I started adding plugins, beginning with the modern bending plugin and following it up with tools from PhoenixCraft.

mv Goof

My workflow settled into a pattern: look up a plugin from the list, go to the download page, copy the actual download link, then use wget to download it onto ButtonMash, since I’m working over SSH. I wasn’t impressed when I had to rename each file as it came in, but I figured it wasn’t worth my time to puzzle out immediately. I did, however, make a Downloads directory to isolate incoming files so they didn’t get lost.
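The fetch step, roughly. The URL and jar name here are made up for illustration, since each plugin page has its own link; wget’s -O flag would have saved me the renaming step by naming the file up front.

```shell
#!/bin/sh
# Hypothetical example of the per-plugin fetch over SSH:
# -O names the output file directly instead of renaming afterward.
mkdir -p "$HOME/minecraft/plugins/Downloads"
cd "$HOME/minecraft/plugins/Downloads"
wget -O SomePlugin.jar "https://example.com/plugins/download/12345"
```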

Things were going well until I found one plugin that didn’t want to be downloaded via the command line. I waited a few minutes, but it still reported itself as temporarily unavailable. I was able to download it in Firefox though, so I saved it to GoldenOakLibry, my family’s network storage. Soon I was copying its containing directory over into plugins.

Oops.

The containing directory held gigabytes of information at the least. I knew my Unsteam games project was in there, but I also found an old backup from my laptop. The connections are all hard-wired, but I didn’t feel like waiting an unknown amount of time just to reach the halfway point, so I canceled the command with CTRL+C to assess damages.

The move had only really started, with two visible files transferred at the very least. They appeared to still be in their original spot, but I wanted to be sure. I looked up the mv command’s inner workings, but my search results were filled with helpful information for someone learning normal terminal operations, not an unusual situation like my own.

With few places left to turn, I went source code diving. The hardest part was finding the code, but dpkg -S is the tool for that job. I zeroed in on the mv source and found the exact file [3] on the Debian website, written in C. My mission at this point was to answer one basic question: does mv delete pieces of a directory tree as it moves them between physical disks, or does it copy everything and clean up at the end?
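The lookup itself is a one-liner. On a Debian system it reports the owning package, which for mv is coreutils; from there, the source package name leads to the code.

```shell
# Ask dpkg which installed package provides the mv binary.
# On Debian this prints something like: coreutils: /usr/bin/mv
dpkg -S "$(command -v mv)"
```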

I found what appeared to be a loop structure at line 364 in main(), but it didn’t appear to be traversing any sort of file system structure. Further study led me to line 173, in do_move(), where it copies the file in question before flagging the whole thing for deletion on line 224. And with that, I had answered my own question: cleanup is done after everything is safely moved.

Takeaway

This post was entertaining to make. It was supposed to be a boring, but quick and easy job I didn’t need to research much for after a week of stalling for topics. It was also the first time I went looking into the Linux source code, and while it makes poor skimming material, it was insightful. Find In Page was my best friend.

Cleaning up in post makes sense though. Everything in Linux is a file, even directories that contain other files.

Final Question

Have you ever studied the laid out inner workings of anything?

Works Cited

[1] lechowski (author unclear), March 5 or May 3, 2021. [Online]. Available: https://lechowski.info/gry/minecraft/modded-mc-and-memory-usage-history-crappy-graph [Accessed Sept. 12, 2021]

[2] GraalVM, 2018-2021. [Online]. Available: https://www.graalvm.org/ [Accessed Sept. 12, 2021]

[3] M. Parker, D. MacKenzie, and J. Meyering, “mv.c” 1986-2018. [Source code]. Available: https://sources.debian.org/src/coreutils/8.30-3/src/mv.c/ [Accessed Sept. 12, 2021]

Unsteam Games

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am doing a brief followup to my work on moving Kerbal Space Program (KSP) between computers. Let’s get started!

Author’s Note

My experience is with Linux; I make no claims about how Steam handles games on Windows. Furthermore, this post is for educational purposes only. Don’t take advantage of DRM-free games by pirating them. Make an effort to see that your money goes to the legitimate owners of a game when playing.

Steam

Like many computer gamers since 2003, I have a Steam account and I have a number of games through the platform. It’s a nice way to make sure you don’t lose physical copies, and it adds a number of achievements for completionists to strive for.

But that doesn’t mean I have to like getting ads shoved in my face every time I already know what I want to play, nor do I need to like the feeling of being watched as I play. I like the feeling of achievements, but for some games they’re unironically tacked on decades after release (I’m looking at you, Sonic CD). Finally, there’s the overhead from the client itself. Other than achievements or tracking how much of my life I’ve spent in which games, there are very few obvious reasons to keep Steam running in the background when it isn’t strictly necessary.

Home Cloud Saving

Admittedly, Steam does provide a few reasonable features you can’t easily get while cutting them out (assuming the game can even be started). User created content can be added through the Steam workshop. Some games’ multiplayer modes were made specifically to go through Steam’s servers. Saves can be automatically synchronized from one computer to another.

That last one is a feature I’ve managed to replicate, or at least have well underway. For single player games I can run without Steam, I’ve been able to use rsync to quickly copy progress from one computer to another. Last time I talked about it, I discovered it’s safest to always spell out as much of the respective file paths as possible for the directories being synced.

By default, Steam puts games in ~/.steam/steam/steamapps/common/, but any game that doesn’t care whether Steam is running should be happy running from anywhere in the file system; just keep in mind Steam won’t update what it can’t see. This time, however, I’ve upped it a little. I moved my copy of KSP to a directory I called ~/Games/UnsteamGames/ and added two more titles from my collection I wish to synchronize: Starbound and Stardew Valley. I had to reflect the changes on GoldenOakLibry, the network storage, but I also wrote simple scripts with the synchronization commands embedded.

I also addressed Steam updates. I have my preferred daily driver where I install all my games. On that machine, I used a symbolic link for each game I’m relocating, so Steam can still find the files where it expects them.
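The idea, sketched in a sandbox with stand-in paths: Steam keeps watching its usual steamapps/common/ entry, but that entry is now a symlink into UnsteamGames, so updates land on files that actually live elsewhere. My assumption here is that Steam follows the link transparently, which matched my experience but isn’t something I’ve seen documented.

```shell
#!/bin/sh
# Sandbox demo: relocate a game directory, leave a symlink behind.
# Real paths would be ~/.steam/steam/steamapps/common/<Game> and
# ~/Games/UnsteamGames/<Game>.
set -e
home=$(mktemp -d)
mkdir -p "$home/Games/UnsteamGames/Kerbal Space Program"
mkdir -p "$home/.steam/steam/steamapps/common"
ln -s "$home/Games/UnsteamGames/Kerbal Space Program" \
      "$home/.steam/steam/steamapps/common/Kerbal Space Program"
ls -l "$home/.steam/steam/steamapps/common/"
```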

Rsync is a powerful tool. Perhaps my next improvement will involve a hidden file my script ignores.

Takeaway

Rsync wasn’t the kindest to Starbound. The process kept hanging, and I have no idea why. I just know it wasn’t the large file sizes my research suggested, as it was having issues when all I told it to copy was a directory of MIDI-like files and low-resolution sprites stored as PNGs.

Final Question

What other games might I isolate and re-home for ease of synchronization?

A Self-Guided Rsync Lesson

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am following up on last week’s topic and actually bouncing the game Kerbal Space Program (KSP) back and forth across a couple computers. Let’s get started!

Project Introduction

KSP is a rocket simulator. Rsync is an advanced copy tool. I don’t understand rsync well enough to use it to its full potential, but this week I’m answering three questions in order: 1. Is the project possible? 2. Is the project feasible? 3. Is the project optimizable?

The goal is a smooth transition from one computer to another: when I need to move, I just save and quit my game, move to the other computer, and keep playing. To do this, I’m copying the files to a network share both machines have mounted, as pushing the game around ad hoc would make for a messy network once a third workstation joins.

Project Possibility

The main goal here is to get my game running on a different computer, regardless of how ugly the process looks or how long it takes, and ugly it looked and long it took! That’s understandable, though: the first move copies everything from scratch, but after that a noticeable speedup should take place, because rsync should only need to copy the differences between files.

Besides taking longer than I’d consider feasible, my game also crashed, locking up Derpy, a workstation I want to play on. The window dressing appeared, but I couldn’t move my mouse or bring up a TTY terminal with CTRL+ALT+<function keys 1-6>. I had an existing SSH (Secure Shell) connection, and the top command told me KSP was still trying to run, even though I kept trying to shut it down with kill, even with sudo; I wasn’t even given an error. I stumbled across the -9 flag, as in kill -9. That worked, but I have yet to fully understand why. Presumably it’s a more strongly worded way to stop a program.
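For what it’s worth, the strongly-worded guess is basically right: plain kill sends SIGTERM, a polite request a process may catch or ignore, while kill -9 sends SIGKILL, which the kernel enforces without asking the process. A small demonstration, safe to run since it only targets its own throwaway subprocess:

```shell
#!/bin/sh
# Start a process that ignores SIGTERM, the way a hung program
# effectively does, then compare kill with kill -9.
sh -c 'trap "" TERM; sleep 30' &
pid=$!
sleep 1

kill "$pid"                 # SIGTERM: ignored
sleep 1
kill -0 "$pid" 2>/dev/null && echo "still running after SIGTERM"

kill -9 "$pid"              # SIGKILL: cannot be caught or ignored
wait "$pid" 2>/dev/null     # reap it
kill -0 "$pid" 2>/dev/null || echo "gone after SIGKILL"
```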

My exploration was interrupted an hour or so later by an unexpected success: my game launched, I took a rocket to orbit, confirmed it had enough delta-v to complete its mission, and I saved and quit.

Is the project possible? YES!

Project Feasibility

The next step was to go back and forth a few times. In my previous step, I had difficulty when my game ended up nested inside my existing game directory instead of syncing into it. I did the same thing again on my first attempt to retrieve it from Derpy, and it took just as long as the first time.

My second attempt started with a successful synchronization from Derpy to the network share, with the addition of the --delete flag in case a file is ever removed. I erred on the side of caution and made a copy of my KSP directory inside another directory I called KSPLand. Good thing I did. The way I set up rsync, the second phase of copying wiped out my backup and stuffed the game directly into KSPLand instead of KSPLand/Kerbal\ Space\ Program/. I’m attributing this oversight to trailing slashes at the end of file paths. I would have said I just had a lack of caution, but at least I didn’t wipe out my other games!

Attempt 2.1 went much more smoothly. I compared the exact commands and eliminated the offending slash. I briefly got confused looking through my real game directory and saw one of those recursive copies. This back and forth is mentally exhausting, but it eventually went through.

Rsync Sandbox

At this point, I backed out and made a directory where I can practice rsync without a half-hour reset between trials. In it, I made two directories representing my two workstations. In each of those, I made a notKSP directory with files named after the numbers 1-10, as well as an “innocent” game directory I didn’t want messed with. I lined up a dozen possible variations, controlling for whether my source path ending in path/notKSP was followed by nothing, a /, or /*, as well as whether my destination’s path was path/, path/notKSP, or path/notKSP/. I made a chart using the --dry-run flag. About half of them appeared to work, but to my surprise, the actual runs didn’t match up.

Every source written as path/notKSP/* got the data over, but nothing was ever deleted. Also, trailing /’s on the destination directory didn’t affect my results. The frustrating part is that I had four commands (two if I ignore destination trailing /’s) give me exactly what I want, and each is a single / away from an unsatisfactory command: rsync -avh --delete source/notKSP/ destination/ will add notKSP’s contents to destination and make sure they match, while rsync -avh --delete source/notKSP destination/ will make sure the final directory in destination/ contains notKSP and nothing else.

In my opinion, the safest form is rsync -avh --delete source/notKSP/ destination/notKSP/, because if I mess up and forget the trailing /, I end up with Russian nesting directories and have to fish a freshly created copy out of the directory it was supposed to update. Furthermore, tab completion prefers to add those trailing / characters (unless an extended directory name is present), and I can specify an alternate directory name for the destination should I choose, as I discovered while conducting another test in my rsync sandbox.

Project Feasibility Cont.

For how useful the --delete flag is, rsync oddly doesn’t give it a single-character shortcut. Otherwise, I would have written about implementing it here. There does exist a --del alias, but I’ll pass for now.

With a properly calibrated rsync command, I ran a dry run from my tower to the network share. Where I previously had a wall of text detailing all game files being compared, I now only had a few files, many of which were deletions of a test game I had purposely planted, totaling a mere 1.69 megabytes as opposed to the 3 gigabytes of the full game. Even over Wi-Fi, I was done in seconds.

Is the project feasible? YES!

Project Optimizability

As long as the possibility remains that a single character can blow up a hand-entered command, I won’t be comfortable. The plan from here is to curate a set of four commands and plant them in scripts so I don’t have to retype them each time. One thing is for sure: I’ll be keeping the container directories until I have my scripts working.

I’ll probably set each one up in a script and use the appropriate one when I need it. A more complex script might automatically pull any changes from the NAS when I start and push them when I’m done. Once feature creep sets in, I might as well add checks for if the share is down, or if I even need to sync at all. It might even be useful to add a hidden file next to the directory in the share to track who has the game open and error out if I try to open it twice. But then again, those are all dreams. I’d only go to that extent if/when I give the same treatment to my other entries in my offline/LAN compatible Steam library.
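A first pass at those scripts might look like this pair of commands. The mount point and game path are stand-ins for mine, and the trailing slashes follow the safest form worked out in the sandbox.

```shell
#!/bin/sh
# Sketch of the push/pull pair. NAS path and game name are stand-ins.
NAS="/mnt/GoldenOakLibry/UnsteamGames"
GAME="Kerbal Space Program"

# pull.sh: bring the share's copy down before playing
rsync -avh --delete "$NAS/$GAME/" "$HOME/Games/UnsteamGames/$GAME/"

# push.sh: the mirror-image command, run after quitting
rsync -avh --delete "$HOME/Games/UnsteamGames/$GAME/" "$NAS/$GAME/"
```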

Is the project optimizable? The sky’s the limit, and I don’t have a rocket.

Takeaway

As I worked, I learned a bit more about the command line. For example, I discovered how much easier it is to work with multiple terminals in the same window. I also learned that ls will let you peer into multiple directories at once, and that when a directory is deleted and a backup restored, the normally useless command cd . will update your terminal from the deleted directory to the new one. I was able to work on my rsync commands in one terminal, view results in another, and use a third to reset between trials. With a little work, I got my two accessory terminals down to a single line each, with a little help from the && operator to run multiple commands at once.

Final Question

While working on the verification step, I ran out of time to post and declared victory a little prematurely. I had trouble with my rsync NAS-to-Derpy dry run looking like a full loadout; something I did goofed the metadata. Have you ever declared victory early?

Stabilizing Derpy Chips at Last

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m addressing an annoying trio of issues I’ve had with Derpy Chips since I installed PopOS on it. Let’s get started!

The Problems

I have a number of gripes to myself about Derpy. I frequently have to stare at an ugly, gray login screen for up to a minute and a half before I can select a user account. Tabs sometimes crash in FireFox, but only while I’m using it. Discord sometimes blinks, and I lose any posts in progress – possibly representing minutes of work.

Additionally, my mother uses a separate account to play Among Us on Derpy, and I have my account set up with a left-handed mouse she can’t use easily. Unfortunately, Derpy tends to crash whenever I try switching users, so I’ve been using a full power cycle. And that means we need another long, featureless login screen before the actual login. Some day, I really want to figure out how to change a login screen. Aside from how long this one takes, I’d much rather use the KDE one over GNOME 3.

The Plan

Of the three issues I’m setting out to address, the long login is the most reproducible. Fickle FireFox and Ditzy Discord happen often enough to make Derpy frustrating to use as a daily driver, but sporadically enough to resist debugging on demand. So I am planning on spending up to the full week on Derpy, ready to catch the errors when they happen.

Going off what I have to start with, I’m assuming my FireFox and Discord issues are related. Both use the Internet for their every function, and the glitching tends to happen at times when a packet is logically being received: for FireFox, when a page is either loading or reloading, and Discord when someone is typing or has sent a post. If I had to hazard a guess, I would have to say Lengthy Login is directly caused by my NFS being mounted in /etc/fstab, and I’m not sure if there’s anything to be done about it except working the surrounding issues.

For this week, I am reaching out to the Engineer Man Discord and a Mattermost community I found for PopOS. I don’t know much about the latter, but I heard the PopOS dev team frequents that forum.

The Research

I started by posting about my issues. Help was super-slow, and I often got buried. I don’t remember any of my self-research making sense either. Anyone helping me in the PopOS support chat seemed obsessed with getting me to address Blank Login first, even though it would have been the least annoying of my three chosen issues if the other stuff didn’t bug out on me.

Someone gave me a journalctl command to check my logs, and I did so shortly after a target glitch. It came back with a segfault error of some kind. I added this to my help thread and humored them about disabling my NFS fstab lines.
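I no longer have the exact command, but it was along these lines; the standard journalctl flags below are real, while the little grep wrapper is my own addition for illustration:

```shell
# Show this boot's high-priority messages (standard journalctl usage):
#   journalctl -b -p err

# Narrowing the log down to segfault reports can be done with grep:
find_segfaults() {
    grep -i 'segfault' "$@"
}

# Example usage: journalctl -b | find_segfaults
```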

RAM or Motherboard?

When researching further for myself, I came across a number of topics I didn’t understand. I didn’t make any progress until someone told me to try memtest86+. What a headache! I installed the package, but had to dip into GRUB settings so I could boot into the tool. Even then, it kept crashing whenever I tried to run it with more than one stick of RAM at a time: the whole thing froze within 8 seconds save for a blinking + sign in the title card.

I was hoping at this point it was just a matter of reseating RAM. Best case: something was in there and just needed to be cleaned off. Worst case: a slot on the motherboard might have gone bad, meaning repair might be one of tedious, expensive, or impossible.

I tried finding the manual for Derpy’s motherboard, but the closest I found was the one for my personal motherboard, a similar model. Both come with 4 RAM slots: two blue, two black. I used the first blue slot to make sure each stick of RAM passed one minute of testing, followed by a full pass, which typically took between 20 and 30 minutes. I wasn’t careful with keeping my RAM modules straight, in part because I helped clean my church while leaving a test running.

I identified the fourth stick, which I’d mixed up with a previously tested one, by how it lit up the error counter starting just past one minute in. I tried reseating it several times with similar results: the same few bits would sometimes fail when either reading or writing. If I had more time, I would have a program note the failing addresses and see if they were the same each pass as they kept adding up.

Further testing on the motherboard involved putting a good stick of RAM into each slot. Three worked, but one of the black slots refused to boot, as did filling the other three slots. I landed with leaving one blue slot empty for a total of 12 out of 16 gigs of RAM.

NFS Automount with Systemd

I still want relatively easy access to the NAS from a cold boot. “Hard mount in fstab has quite a few downsides…” cocopop of the PopOS help thread advised me. Using the right options helps, but autofs was historically the preferred tool, and systemd now has a feature called automount units. I thought I might as well give the latter a try. cocopop also linked a blog post, On-Demand NFS and Samba Connections in Linux with Systemd Automount.

I won’t go into the details here, but I highly recommend the above linked blog. It didn’t make sense at first, but after leaving it for a day, my earlier experiences with fstab translated to this new method within the span of about an hour total. Aside from missing an instruction to enable automounting once it was configured, the process felt almost trivial.
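For reference, the systemd approach pairs a .mount unit with a matching .automount unit; the server address and paths below are placeholders, not my actual share:

```ini
# /etc/systemd/system/mnt-nas.mount
# The filename must encode the mount path (slashes become dashes).
[Unit]
Description=NAS share

[Mount]
What=192.168.1.10:/export/share
Where=/mnt/nas
Type=nfs

# /etc/systemd/system/mnt-nas.automount
[Unit]
Description=Automount NAS share on first access

[Automount]
Where=/mnt/nas
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target
```

The easy-to-miss final step is activating the automount unit itself: `sudo systemctl daemon-reload` followed by `sudo systemctl enable --now mnt-nas.automount`. Only the .automount gets enabled; systemd triggers the .mount on demand.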

Results

I haven’t had any problems with Discord or FireFox since setting the defective RAM aside in the anti-static bag it came in. As a bonus, switching users works correctly now as well.

NFS mounting is now much more streamlined with systemd. While I cannot say which method would have been more challenging to learn first, the tutorial I was following made this new method feel way more intuitive, even if file locations were less obvious. I didn’t even need any funny business with escape characters or special codes denoting a space in a file share name.

Takeaway

It really should go without saying that people will only help each other with what they know. I find myself answering rookie questions all the time when I’m after help with a more difficult one. While working side by side this week on a future topic, I had such a hard question that people kept coming in with easier ones, and I ended up re-asking mine enough times that someone commented on the cyclic experience. The same thing kept happening with the easy part of my question about login.

Final Question

Do you ever find yourself asking a multi-part question, only to have everyone helping you with just the easiest parts you’ve almost figured out?

Virtual Machines: a Preliminary Exploration

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m teaching myself a long-overdue skill for Linux: Virtual Machines. Let’s get started!

Overview

Virtual machines are like a computer within a computer; a host operating system allocates some of its resources to a less powerful computer, often with an entirely different operating system. Applications vary from getting a feel for a new Linux distribution before/without installing it on baremetal to giving scammers a sandbox to waste their time destroying without risking your actual system.

Failing other methods with less overhead, virtual machines are a more brute force way to run old or incompatible software on your machine. One personal example from my past was a 16 bit Bible program I liked the interface for. Windows 7 wasn’t happy running it, but there was a special XP mode I could boot into and run my program. I found the solution slow and clunky, and I didn’t use it but twice. Furthermore: the license didn’t extend for Windows 10, so I refused to use it on principle when I downgraded.

Choosing a VM

Wikipedia is a great place for finding software comparisons. Their list of VMs is quite lengthy, but I wanted a general purpose VM solution I could use anywhere and run anything, as I had an idea I wanted to try on a Windows machine, but my main focus would be running one Linux from another. I was also trying and failing to keep an eye on whether a VM was using a type 1 hypervisor (better performing) or a type 2 hypervisor (more portable/debuggable – I think) to run a guest OS.

Looking into individual results, Oracle VirtualBox came out as having a reputation for being easy, even for beginners, though it does lock away some features behind a premium version. The free and open source KVM (Kernel-based Virtual Machine) also came up as better performing, but with a harsher barrier to entry. Further research in the LinuxConfig article “Virtualization solutions on Linux systems – KVM and VirtualBox” warned me that KVM may not be as enthusiastic as advertised when it comes to running non-Linux guests, and that I’ll probably want to learn both at some point when I revisit this topic to straighten out QEMU and other elements I glossed over while reading.

Installation: First Attempt

While my goal when starting research was putting a VM on my father’s Mint machine for unrelated –and soon outdated– reasons, I started on my older, but more capable Manjaro machine. I found VMM, a package in a community repository for managing VMs, so I installed it, though poking about only yielded error messages.

It took a while, but it looks like my CPU doesn’t support everything I need for running VMs. None of my personal computers do. During my initial investigation, I looked up my CPU model on Intel’s site. From what I saw, it supported Intel Virtualization Technology (VT-x), but not Intel Virtualization Technology for Directed I/O (VT-d). One guide only mentioned the former as a requirement, but no package for KVM proper showed up when I searched for it. Furthermore: any commands that inspect my CPU’s reported features don’t see the required component.
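The usual check on Linux is to grep the CPU flags in /proc/cpuinfo (vmx for Intel VT-x, svm for AMD's equivalent); the wrapper function here is my own illustrative packaging of that one-liner:

```shell
# Report whether the CPU advertises hardware virtualization support.
# vmx = Intel VT-x, svm = AMD-V; if neither flag shows up, KVM can't
# use hardware acceleration on this machine.
cpu_virt_support() {
    if grep -q -E 'vmx|svm' /proc/cpuinfo; then
        echo "supported"
    else
        echo "not supported"
    fi
}

# Example usage: cpu_virt_support
```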

Takeaway

So, no. I’m not doing a VM today, but when I looked at my father’s Mint box, the newer CPU did support virtualization, and by extension, ButtonMash should too, though their other resources may limit useful applications.

This week’s research has also given me insight as to why XP Mode was so clunky those years ago. I was sending my hardware in directions it wasn’t designed to go. It can still pretend like it’s up to the task, and for old enough applications it doesn’t matter. But hosting a modern OS on top of a modern OS is not for me at present.

Final Question

Have you ever gotten closure to an old project years after laying it to rest?

Family Photo Chest Part 12: Early Prototype Workflow

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m recounting the tale of my first working prototype, and how I ruined it before getting it to actually work. Let’s get started!

Pi4 8GB and Cards

I am now the owner of a Pi4 with 8GB of memory, the highest end Pi available at the present time. When I unboxed it, I put it directly into a case with a fan powered by GPIO pins. Some day, I’ll want to benchmark its cooling against my Pi 4 with the huge heatsink case and passive airflow.

The cards I had on order never came in, and their listing vanished in the meantime. I ended up with some 64GB cards from Gigastone that are supposed to be better on paper, but I’m not in a position to benchmark the Raspberry Pi use case. While these new cards only have one SD adapter between them –I’ve been using SD adapters for labels– they did include a USB adapter. It’s only 2.0, though.

Manjaro ARM

I have not fully settled on what distro to go with for this project. TinyCore is great for starting projects, but I have a hard time finishing them there. For the time being, I’ll be prototyping from my Manjaro ARM card. Whenever I need to reset, I can boot a Pi to Raspberry OS and arrange for a direct dd from my master Manjaro card to the newer, larger card.

Side note: While performing my first wipe, I noticed dd did NOT automatically expand the partition like I thought it did. Once I have things working better, that may be a possible improvement to look at for a new master Manjaro card.
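The clone itself is a plain dd from one card to the other; the device names in the usage comment are placeholders, and on a real system getting them wrong destroys data:

```shell
# Clone one block device (or image file) onto another, byte for byte.
# Because dd copies the partition table verbatim, the filesystem is NOT
# expanded to fill a larger card; a partition editor (or tools like
# growpart plus resize2fs) has to do that afterward.
clone_card() {
    # $1 = source device/image, $2 = destination device/image
    dd if="$1" of="$2" bs=4M conv=fsync status=progress
}

# Example usage (placeholder device names, identify with lsblk first):
#   clone_card /dev/sdX /dev/sdY
```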

Prototype GIMP Configuration

First order of business was installing GIMP and the XSANE plugin for GIMP. They worked first try, but XSANE only recognized a shared all-in-one network printer with a built-in scanner, ignoring the local USB scanner I cannot seem to arrange access permissions for correctly. I think I spent half my time this week on this problem exploring dead ends.

The most important missing piece of the puzzle has been a script called Divide Scanned Images. I can feed it scans containing multiple pictures, and it separates and crops them automatically, with the option to deskew (rotate to vertical) if an appropriate utility is found. Link to blog about script: how-to-batch-separate-crop-multiple-scanned-photos. Linked on that page in a comment by user Jan is a Linux-compatible version of Deskew (I have yet to get it to work).

Eager to test what I did have working, I went ahead with using the scanner on the network. I had someone put some picture stand-ins on the scanner bed; I got two seed packs. To my annoyance, the separation script appears to only work with pictures already on file and not ones freshly scanned in, making it a completely separate process. As mentioned above, Deskew refused to work. I suspect I either didn’t put it in the right place or I was working with a copy compiled for an x86 processor while on an ARM based system, though it could be as simple as shadows from the seed packs.

Struggling With the Scanner

I find SANE to be an ironic acronym: Scanner Access Now Easy. I still don’t have Linux scanners figured out. I know there’s an easy way with security implications I’ve stumbled my way through before. I also have learned that differences between distros make the Ubuntu page I keep getting thrown to for help useless in parts. Whenever I post for help on Discord, someone else comes along with another question and mine gets buried.

Along my journeys, I’ve learned about a scanner group. I tried adding it to my profile, and somehow managed to add what appears to be a fake group. After a long time trying to figure out how to safely remove it so I could add the real one, I managed to remove myself from all my default groups, including my ability to use sudo, and I don’t believe a root account is set up on this card. It even said the incident would be reported. Any such black mark never had a chance to be transmitted over the Internet –WiFi was down for USB scanner diagnostics– before I dd’ed the master copy back over it.
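For anyone following along, the safe way to join a group is usermod with the -a (append) flag; leaving -a off is roughly the trap I fell into, since -G by itself replaces the entire secondary group list:

```shell
# List current group memberships BEFORE touching anything, so there is
# a record to restore from:
id -nG

# Append the scanner group without dropping existing ones. The -a flag
# is the critical part: "usermod -G scanner user" alone would wipe
# every other secondary group, including sudo/wheel.
# (Commented out here so the sketch is safe to run as-is.)
#   sudo usermod -aG scanner "$USER"
```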

Another attempt had me searching through the local files for answers. sane-find-scanner and anything looking at the USB port directly can see the scanner right away, but scanimage -L, which lists the devices SANE sees, comes up with nothing when off the network. I can’t reproduce my exact path on the laptop I’m working from, but I found a tip to check /etc/sane.d/ for appropriate config files. If I understand epson.conf there correctly, my problem is either elsewhere, or I need both a product ID and a device ID, the latter of which I still have no idea how to locate.
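The USB vendor and product IDs that SANE backend configs ask for can be read out of lsusb; this little parser is my own sketch, assuming output lines in the usual `Bus 001 Device 004: ID 04b8:0142 ...` shape (the Epson ID shown is a made-up example):

```shell
# Pull the vendor:product ID pair out of an lsusb-style line.
usb_id() {
    sed -n 's/.* ID \([0-9a-f]\{4\}:[0-9a-f]\{4\}\).*/\1/p'
}

# Example usage: lsusb | grep -i epson | usb_id
# A backend config like epson.conf then takes the pair as, e.g.:
#   usb 0x04b8 0x0142
```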

Revised Workflow Proposition

In light of GIMP seemingly not wanting to split pictures live in memory, it may be a good idea to offload that task to a more powerful computer. That machine could handle two Pis operating scanners and saving to a network share hosted on an SSD before the pictures move to GoldenOakLibry. Touch up can then happen in place.

Takeaway

While I’m glad to have gotten a subpar prototype operational, it only demonstrates about 60% of the process at once. Still, it demonstrated the missing pieces, and the toughest spot was exactly where I expected from past experience. This is already Part 12 of this ongoing series, and I want to finish before another 12 parts go by. Any month may be the month, and no matter how down I feel at the end of work on it, I still met a major goal this week.

Final Question

What was the biggest mistake you’ve ever made but still had a minimal effort fix for?

Space Engineers: WINE Is Not an Emulator

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am forcing a Windows game to run on Linux. Let’s get started!

WINE Is Not an Emulator

Gaming on Linux hasn’t always had the best reputation. Software written with one operating system in mind may require the presence of libraries only found in that system, and for the past few decades, that’s meant Microsoft Windows has become the standard for PC gaming.

WINE is a compatibility layer. The project’s mission is to replace the proprietary Windows libraries with open source ones that anyone can freely use. Using WINE, Windows exclusive titles run in “Wine bottles” (or Wine prefixes) that translate Windows system calls into the correct ones for Linux or Mac.

For copyright reasons, WINE developers aren’t allowed to copy-paste any Windows code they look at, so imperfections in execution are an inevitability. Some programs work no questions asked, while others refuse to work altogether; compatibility is rated on a scale from Platinum down to Garbage.

Steam Proton

I remember seeing someone play Portal 1 on Linux about ten years ago. It was one of the earliest titles Steam had running on Linux, but all the walls were blackened like some sort of dark mode. It’s been fixed by now, and Steam will now happily run thousands of titles through Proton, their custom version of WINE. For tech savvy users, they have an option to let any game try to run, with no promises provided.

Space Engineers: A Pain to Run

The first time I successfully beat a game into running on Linux with no promises given was Among Us. That game only needed an environment variable or something Proton couldn’t find on its own. Space Engineers, though… it’s something else. ProtonDB lists the game as Silver meaning “[A program] works excellently for normal use, but has some problems for which there are no workarounds” according to WINE’s online database, WINEHQ.

I don’t fully understand what I did in my success. No, Space Engineers will not run easily. My journey started in this community thread on Steam, though the GitHub page they link to is referenced several times over. Just follow the instructions there, and you have a portable black box solution.

But it wasn’t so easy for me. I believe I kept having trouble creating my WINE prefix. I tried the usual black box rites like rebooting, reinstalling the program, and trying multiple solutions. It wasn’t until I slowed down and found a process hanging in the log that Engineer Man user jeffz instructed me in how to kill it. Prior to this, I was just using Ctrl+C in the WINE setup terminal, which would appear to work without the actual game ever going. I think I managed to rack up a total of 8 to 10 minutes of “play time” in Steam while debugging because not every attempt closed itself.
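For the record, killing a hung Wine process cleanly looks roughly like this sketch; `wineserver -k` is Wine's documented way to shut down everything in a prefix, while pkill is the generic name-based hammer:

```shell
# Shut down every process in the active Wine prefix (documented flag):
#   wineserver -k

# Or hunt down one specific hung process by matching its command line:
kill_by_name() {
    pkill -f "$1"
}

# Example usage: kill_by_name wineboot
```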

Followup Work Potential

I pretty much called it good as soon as the game started. Linux discussions kept mentioning “flying grass,” so I was prepared when I saw that, though I have it from a fellow player that there are many bugs even when running it in its native Windows. The only bug I keep running into is an early audio cutoff, usually between the words “Inventory Full.”

My computer is barely fast enough to run Space Engineers smoothly. One of my usual optimizations is ditching the Steam client and running whatever game directly. For a WINE game, though, I’m guessing I’d need to go through the whole process again with another setup, avoiding Proton (I’m using a custom Proton fork by Glorious Eggroll, a recommendation from the ProtonDB page). Unless there’s a DRM requirement where I must use Steam, I’m fine not getting the achievements or other end-user visible hooks. I expect I’ll first try getting Among Us to run from Lutris before tackling Space Engineers.

Takeaway

It is my belief that nothing is impossible to get running on Linux given enough time, talent, and compatible hardware. It is also not unheard of for some games to run better on Linux/WINE than on native Windows thanks to the lower overhead from the operating system.

I’ve also noticed a discrepancy where WINEHQ has Space Engineers rated as low as Bronze or Garbage. Looking at both the age and the source of the available reports, one shouldn’t draw quick conclusions about the runnability of a particular piece of software in WINE. ProtonDB had reports near both extremes; most are summed up as, “It doesn’t work!” but more than a few people gave detailed specifications on how they got it running. I have a working assembly thanks to a lot of hammering until I found the right spot.

Final Question

If you could run any game on a platform it has no business running on, what game would you get running on what system?