Virtual Machines: a Preliminary Exploration

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m teaching myself a long-overdue skill for Linux: Virtual Machines. Let’s get started!

Overview

Virtual machines are like a computer within a computer; a host operating system allocates some of its resources to a less powerful computer, often with an entirely different operating system. Applications vary from getting a feel for a new Linux distribution before/without installing it on baremetal to giving scammers a sandbox to waste their time destroying without risking your actual system.

When methods with less overhead fail, virtual machines are a more brute-force way to run old or incompatible software on your machine. One personal example from my past was a 16-bit Bible program whose interface I liked. Windows 7 wasn’t happy running it, but there was a special XP Mode I could boot into and run my program. I found the solution slow and clunky, and I only used it twice. Furthermore, the license didn’t extend to Windows 10, so I refused to use it on principle when I downgraded.

Choosing a VM

Wikipedia is a great place for finding software comparisons. Their list of VMs is quite lengthy, but I wanted a general-purpose VM solution I could use anywhere to run anything; I had an idea I wanted to try on a Windows machine, but my main focus would be running one Linux from another. I was also trying, and failing, to keep an eye on whether a VM uses a type 1 hypervisor (better performing) or a type 2 hypervisor (more portable/debuggable, I think) to run a guest OS.

Looking into individual results, Oracle VirtualBox came out as having a reputation for being easy, even for beginners, though it does lock some features behind a premium version. The free and open source KVM (Kernel-based Virtual Machine) also came up as better performing, but with a higher barrier to entry. The LinuxConfig article “Virtualization solutions on Linux systems – KVM and VirtualBox” warned me that KVM may not be as capable as advertised when it comes to running non-Linux guests on a Linux host, and that I’ll probably want to learn both at some point. When I revisit this topic, I’ll need to straighten things out with QEMU and the other elements I’ve glossed over while reading.

Installation: First Attempt

While my goal when starting research was putting a VM on my father’s Mint machine for unrelated (and soon outdated) reasons, I started on my older, but more capable, Manjaro machine. I found VMM, a package in a community repository for managing VMs, so I installed it, though poking about only yielded error messages.

It took a while, but it looks like my CPU doesn’t support everything I need for running VMs; none of my personal computers do. During my initial investigation, I looked up my CPU model on Intel’s site. From what I saw, it supports Intel Virtualization Technology (VT-x), but not Intel Virtualization Technology for Directed I/O (VT-d). One guide only mentioned the former as a requirement, but no package for KVM proper showed up when I searched for it. Furthermore, the commands that inspect my CPU’s capabilities don’t see the required component.
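For the record, the basic check can be run from any Linux terminal. This is a minimal sketch; the `has_virt` helper is my own illustration, fed the flags field from /proc/cpuinfo:

```shell
# Sketch: classify the "flags" field of /proc/cpuinfo.
# vmx = Intel VT-x, svm = the AMD equivalent (AMD-V).
has_virt() {
    case " $1 " in
        *" vmx "*) echo "VT-x" ;;
        *" svm "*) echo "AMD-V" ;;
        *)         echo "none" ;;
    esac
}

# On a live machine, feed it the real flags line:
#   has_virt "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
has_virt "fpu vme de vmx sse sse2"   # prints: VT-x
```

VT-d (the part my CPU lacks) doesn’t show up in /proc/cpuinfo at all; the kernel reports it at boot, so something like `dmesg | grep -i -E 'DMAR|IOMMU'` is the usual check there.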

Takeaway

So, no. I’m not doing a VM today, but when I looked at my father’s Mint box, the newer CPU did support virtualization, and by extension, ButtonMash should too, though their other resources may limit useful applications.

This week’s research has also given me insight as to why XP Mode was so clunky those years ago. I was sending my hardware in directions it wasn’t designed to go. It can still pretend like it’s up to the task, and for old enough applications it doesn’t matter. But hosting a modern OS on top of a modern OS is not for me at present.

Final Question

Have you ever gotten closure to an old project years after laying it to rest?

Family Photo Chest Part 12: Early Prototype Workflow

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m recounting the tale of my first working prototype, and how I ruined it before getting it to actually work. Let’s get started!

Pi4 8GB and Cards

I am now the owner of a Pi 4 with 8GB of memory, the highest-end Pi available at the present time. When I unboxed it, I put it directly into a case with a fan powered by the GPIO pins. Some day, I’ll want to benchmark its cooling against my other Pi 4 with the huge heatsink case and passive airflow.

The cards I had on order never came in, and their listing vanished in the meantime. I ended up with some 64GB cards from Gigastone that are supposed to be better on paper, but I’m not in a position to benchmark the Raspberry Pi use case. While these new cards only have one SD adapter between them (I’ve been using SD adapters as labels), they did include a USB adapter. It’s only USB 2.0, though.

Manjaro ARM

I have not fully settled on a distro for this project. TinyCore is great for starting projects, but I have a hard time finishing them there. For the time being, I’ll be prototyping from my Manjaro ARM card. Whenever I need to reset, I can boot a Pi into Raspberry Pi OS and arrange a direct dd from my master Manjaro card to the newer, larger card.

Side note: While performing my first wipe, I noticed dd did NOT automatically expand the partition like I thought it did. Once I have things working better, that may be a possible improvement to look at for a new master Manjaro card.
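For that future improvement, the usual fix is a two-step grow after the clone. This is an untested sketch: the device names are placeholders, `growpart` comes from the cloud-guest-utils package, and the `expand_cmds` helper only prints the commands rather than running them.

```shell
# Print the two commands that grow partition N of DEVICE and the ext4
# filesystem inside it. On SD cards named like /dev/mmcblk0, the partition
# name gets a "p" separator, hence the third argument.
expand_cmds() {   # expand_cmds DEVICE PARTNUM SEPARATOR
    echo "growpart $1 $2"      # grow the partition table entry to fill the card
    echo "resize2fs $1$3$2"    # grow the ext4 filesystem to fill the partition
}

expand_cmds /dev/sdb 2 ""      # USB card reader style naming
expand_cmds /dev/mmcblk0 2 p   # built-in SD slot style naming
```

The real versions would be run with sudo after the dd finishes, while the card’s partitions are unmounted.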

Prototype GIMP Configuration

The first order of business was installing GIMP and the XSANE plugin for GIMP. They worked on the first try, but XSANE only recognized a shared all-in-one network printer with a built-in scanner, ignoring the local USB scanner I cannot seem to arrange access permissions for correctly. I think I spent half my time this week exploring dead ends on this problem.

The most important missing piece of the puzzle has been a script called Divide Scanned Images. I can feed it scans containing multiple pictures, and it separates and crops them automatically, with the option to deskew (straighten tilted images) if an appropriate utility is found. Link to the blog about the script: how-to-batch-separate-crop-multiple-scanned-photos. Linked on that page, in a comment by user Jan, is a Linux-compatible version of Deskew (I have yet to get it to work).

Eager to test what I did have working, I went ahead with the scanner on the network. I had someone put some picture stand-ins on the glass; I got two seed packs. To my annoyance, the separation script appears to only work with pictures already on file, not freshly scanned ones, making it a completely separate process. As mentioned above, Deskew refused to work. I suspect I either didn’t put it in the right place or was working with a copy compiled for an x86 processor on an ARM-based system, though it could be as simple as shadows from the seed packs.

Struggling With the Scanner

I find SANE to be an ironic acronym: Scanner Access Now Easy. I still don’t have Linux scanners figured out. I know there’s an easy way, with security implications, that I’ve stumbled my way through before. I’ve also learned that differences between distros make the Ubuntu help page I keep getting referred to useless in parts. Whenever I post for help on Discord, someone else comes along with another question and mine gets buried.

Along my journeys, I’ve learned about a scanner group. I tried adding it to my profile, and somehow managed to add what appears to be a fake group. After a long time trying to figure out how to safely remove it so I could add the real one, I managed to remove myself from all my default groups, including the one granting my ability to use sudo, and I don’t believe a root account is set up on this card. It even said the incident would be reported. Any such black mark never had a chance to be transmitted over the Internet (Wi-Fi was down for USB scanner diagnostics) before I dd’ed the master copy back over the card.
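Having since learned what went wrong, I can name the trap: `usermod -G` REPLACES the supplementary group list, while `usermod -aG` appends to it. Here is a hedged sketch of the safe sequence (shown as comments, not run), plus a toy function of my own modeling why the flag matters:

```shell
# Safe sequence (commands shown, not run):
#   getent group scanner               # confirm the group really exists first
#   sudo usermod -aG scanner "$USER"   # -a is the crucial append flag
#   then log out and back in for the new group to take effect.

# Toy model of the two behaviors:
groups_after() {   # groups_after "CURRENT GROUPS" NEWGROUP append|replace
    if [ "$3" = append ]; then echo "$1 $2"; else echo "$2"; fi
}
groups_after "wheel video" scanner append    # keeps wheel: sudo survives
groups_after "wheel video" scanner replace   # wheel gone: my exact accident
```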

Another attempt had me searching through local files for answers. sane-find-scanner and anything looking at the USB port directly can see the scanner right away, but scanimage -L, which lists the devices SANE sees, comes up with nothing when off the network. I can’t reproduce my exact path on the laptop I’m working from, but I found a tip to check /etc/sane.d/ for the appropriate config files. If I understand epson.conf correctly, my problem is either elsewhere, or I need both a product ID and a device ID, the latter of which I still have no idea how to locate.
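On the ID question: if epson.conf takes a usb line, the two numbers it normally wants are the USB vendor and product IDs, which `lsusb` prints after the word ID. A sketch, using a hypothetical Epson line (the extraction helper is my own; 04b8 is Epson’s usual vendor ID, the product ID here is made up):

```shell
# usb_ids "LSUSB LINE" -> "0xVENDOR 0xPRODUCT"
usb_ids() {
    echo "$1" | sed -n 's/.*ID \([0-9a-f]*\):\([0-9a-f]*\).*/0x\1 0x\2/p'
}

# On the real machine:  lsusb | grep -i epson
usb_ids "Bus 001 Device 004: ID 04b8:0142 Seiko Epson Corp."
# which would give an epson.conf line like:  usb 0x04b8 0x0142
```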

Revised Workflow Proposition

In light of GIMP seemingly not wanting to split pictures live in memory, it may be a good idea to offload that task to a more powerful computer. It could handle two Pis operating scanners and saving to a network share hosted on an SSD, then save the separated pictures to GoldenOakLibry. Touch-up can then happen in place.

Takeaway

While I’m glad to have gotten a subpar prototype operational, it demonstrates only about 60% of the process at once. Still, the missing pieces were the ones demonstrated, and the toughest spot was exactly where past experience told me to expect it. This is already Part 12 of this ongoing series, and I want to finish before another 12 parts go by. Any month may be the month, and no matter how down I feel at the end of working on it, I still met a major goal this week.

Final Question

What was the biggest mistake you’ve ever made but still had a minimal effort fix for?

Manjaro: Kernel Error

Good Morning from my Robotics Lab! This is Shadow_8472, and my main tower broke last night, so I’m getting a head start on this week’s post. So today, I’m teaching myself how to fix it. Let’s get started!

The Error

I wish I was writing this week about my experience getting Space Engineers to run in WINE. I had spent my Monday in another game only for it to glitch and lose a bunch of stuff. I needed a break, but not like this:

Warning: /lib/modules/5.10.2-2-MANJARO/modules.devname not found - ignoring
mount: /new_root: unknown filesystem type 'ext4'.
You are now being dropped into an emergency shell.
sh: can’t access tty; job control turned off
[rootfs ]# _

A lack of keyboard response wasn’t encouraging. Neither was the lack of response from my usual Linux help places. Still, it’s polite to at least run the error through an Internet search, but that’s a lot of text to throw around, and any relevant results I got were obscured by my inability to understand them.

As if I weren’t having a bad enough day, I went to try booting from an install disk, thinking it was for Manjaro. It was for Ubuntu — and it was corrupt. All I saw was what looked like spaghettified terminal text wherever glitched spots appeared on the screen.

Understanding the Error

I tried finding my GRUB disk, but it’s lost. After eliminating bad USB ports as the cause of the corruption, I burned a new Manjaro drive over Ubuntu using my Manjaro ARM card in the Pi 400.

Line-by-line Internet searching for the error didn’t get me much farther than before, but breaking the first line apart was the key because it contains a file path. I booted to my newly reeducated drive and mounted my file system to look around.

/lib/modules/5.10.2-2-MANJARO/modules.devname sounds like a file path, so that was a good place to start. Sure enough, no such file exists, but there is a modules.devname.old file. Looking one directory up, there are a few other directories named in a similar format, the next most recent being 5.9.16-1-MANJARO, and it did contain a modules.devname. I considered renaming the old version or copying in the previous one, but I’d have had no idea what I was doing, and there were slight differences I later found by overlapping transparent terminals rather than a side-by-side comparison.

I already knew Manjaro has the ability to install and switch between multiple kernels, but I had no idea where they’re kept or how they work. Using this new information, I looked up the names, and I’m reasonably certain I ended up with a bad kernel on my system.

Fixing the Problem

Now that I had a lead on what to look for in search results, I learned that people are complaining a lot about this kernel and similar ones released around New Year’s. I wanted to try reverting to an earlier kernel to see how that worked.

One problem: every guide I’m finding appears to assume I’ve already booted to Manjaro and gives either a GUI or command line options for reverting.

In my experience to date, I would have expected I could just go in, flip some strings in config files around and call it a day. It appears I have no such luxury though. That or the kernel is so central to the system that “luxury” is not having to flip 45 such config files, not all of which are meant to be human readable (I have no idea here). Nevertheless, the tool of choice appears to be a command called chroot.

Chroot –as I understand it– is essentially used to CHange ROOT (the directory, not the superuser). It opens a new shell with a root directory of your choosing within reason. It can be used to repair broken operating systems using an external kernel, but there’s a similar concept I am not covering today called chroot jail where users accessing a system from the outside are only shown a subset of the system’s actual root directory.
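A sketch of the mechanics, as I pieced them together (device names are placeholders; the `chroot_prep` helper is my own and only prints the bind mounts so the sequence is visible without root):

```shell
# Typical sequence from a live USB (shown, not run):
#   sudo mount /dev/sdXn /mnt       # the broken system's root partition
#   ...bind mounts printed below...
#   sudo chroot /mnt /bin/bash      # now commands run "inside" that system
#   exit, then unmount everything in reverse order when done.

chroot_prep() {   # print the bind mounts a working chroot usually needs
    for d in proc sys dev dev/pts; do
        echo "mount --bind /$d $1/$d"
    done
}
chroot_prep /mnt
```

The bind mounts are what let tools inside the chroot see real hardware and processes; skipping them is a common reason package managers misbehave in a chroot.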

For this post, I will be using the guide “How to: Chroot into a broken system via live CD/ISO or alternate Linux system” on turnkeylinux.org to set up a chroot, and the command line instructions from “How to Switch Kernels in Manjaro Linux” to revert my kernel.

Side note: a different guide than those linked brought up a program called os-prober. I found it in both Manjaro and Debian. When I ran it on my tower, it quickly found my main Manjaro install, but not the Windows drive I believe is connected but unmounted. Other methods showed me evidence that it’s still there.

Attempt 1

I didn’t completely follow the instructions on how to set up the chroot; I performed them from my root directory, /. Things started going wrong when my second chosen set of instructions wasn’t clear at all about how to manage multiple kernels from the command line.

I started improvising. I tried to launch the GUI of systemsettings5, but going through the menu pointed straight at the live USB’s kernel, and no doubt all its settings. Going through the command line was fruitless until I had exited the chroot and unmounted all the extra directories, even when I used su shadow8472 to switch to my normal account, a procedure root could not reproduce after the chroot ended. In short: I did something right, but I wasn’t done yet.

Attempt 2

This time, my plan was to proceed with the same chroot setup, then work my way into a graphical interface hosted with enough elements from my actual tower to work. Any attempt at a shortcut using Ctrl+Alt+<function keys> only switched to close representations of the true console while in reality running on top of the X server I needed to free up. I followed another set of instructions found here, set up the chroot, and it failed.

Third Try

Left with no other options I understood, I moved on to the command line method of actually uninstalling a kernel. I received a stern warning that removing the suspect kernel, linux510, would break a dependency required by linux-latest.
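For anyone following along, Manjaro’s command line tool for this is mhwd-kernel. This is a hedged sketch from my notes (linux59/linux510 are the package names as I saw them), with a helper of my own that only prints the calls:

```shell
# Shown, not run (the real versions need root on a Manjaro system):
#   mhwd-kernel -li               # list installed kernels
#   sudo mhwd-kernel -i linux59   # install the older 5.9 series
#   sudo mhwd-kernel -r linux510  # remove the suspect 5.10 kernel

swap_kernel() {   # print the install-then-remove pair for a kernel swap
    echo "mhwd-kernel -i $1"
    echo "mhwd-kernel -r $2"
}
swap_kernel linux59 linux510
```

Installing the replacement before removing the broken one matters: a system with zero kernels installed will not boot at all.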

This attempt was interrupted by someone who sounded like he knew what he was talking about suggesting a few things.

The Actual Solution

Changing direction again, I found this Reddit thread that showed me how to expose a GRUB menu for which kernel to boot. From there, I was able to boot a working kernel and get in.
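As I understand it, the change the thread pointed me to boils down to two lines in /etc/default/grub followed by regenerating the config. A sketch on a sample config (the `show_menu` filter is my own illustration; check your own file before editing):

```shell
# Un-hide the GRUB menu so "Advanced options" and older kernels are pickable.
show_menu() {
    sed -e 's/^GRUB_TIMEOUT_STYLE=.*/GRUB_TIMEOUT_STYLE=menu/' \
        -e 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=5/'
}

printf 'GRUB_TIMEOUT=0\nGRUB_TIMEOUT_STYLE=hidden\n' | show_menu

# Real usage: apply the same edits to /etc/default/grub, then:
#   sudo update-grub
# One-off alternative: hold Shift (BIOS) or tap Esc (UEFI) during boot.
```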

Out of curiosity, I did try to remove the bad kernel from the GUI tool; it appeared to work, reporting success, yet the kernel was still there afterward. From the command line, I was able to bring up the same failure message in a dialogue box.

Minor Developments

I’ve had tons of issues with my Wi-Fi card. This week, while the tower had my focus, I opened the case, cleaned it, and pulled the card. Maybe it’s software, maybe it’s hardware, but as of this week, I’m using my Pi 400 as a Wi-Fi-to-Ethernet receiver full time until a suitable replacement can be arranged.

Takeaway

I never realized how far I could get without help on a problem I’d never seen before. Perhaps I’ve reached a point where my skills are not readily surpassed by those around me. The one recommendation from the person who sounded like he knew what he was talking about (removing the .old extensions from all the kernel modules) turned out to be a dead end.

Computer problems can happen to anyone. They’re just easier to learn how to fix without special training on Linux.

Final Question

When was the first time you solved a complex problem you had never encountered before without help?

Family Photo Chest Part 11: PiCore

Good Morning from my Robotics Lab! This is Shadow_8472, and what a week it’s been! Today, I’m telling you the story of how I’m building up an operating system just to scan photos. This is also a follow up to last week’s topic, where I introduced PiCore as a base. Special thanks to Rich and Juanito of the TinyCore forum who have been answering my questions as I learn about this amazing, little Linux distro. Let’s get started!

Window Manager

I feel like I could do a post or two just on window managers, and I may do so in the future. Long story short, a window manager is responsible for drawing windows within a window server, and not much else.

I now know I used to judge a desktop environment by its window manager without much thought to all the other individual programs going on in the background: a file manager, menus, text and document editors, sometimes even a browser, and often a calculator. All of that is bloatware for my purposes. My goal is to have as slim a system as possible for running GIMP without spending days on end squeezing the last megabyte out of it. After all, I still have to look at it.

Which brings me to my choice of window manager for PiCore. With a smaller tce repository, due to being off the main TinyCore branch for x86, I only found a few options. FLWM (Fast Light Window Manager) ships with TinyCore in the main branch. I installed it and hated all the stuff on the left side. I suppose someone must like it, and I can admit it takes up less vertical space, but I do need at least a few familiar points of reference.

I started researching window managers, but only when I searched tce for the keywords ‘window manager’ did I realize just how short my list was. The least promising was a package that looks like it only multiplexes terminals. One of the admins helping me pointed out FLWM_Topside, but I’m not really a fan of jumping cursors, and I had already selected JWM (Joe’s Window Manager).

I found JWM simplistic and appealing. I had already used it in the past as part of the MATE desktop environment, like what I’m using now. The base version is configured by a text file, which I modified by hand to display a panel at the bottom as opposed to the top. There isn’t even support for calculating that automatically!

I was also sure to remove FLWM packages from tce/optional. There are other spots where it’s left hooks for proper execution, and I’ll weed those out when I see them.

NFS Auto-Mount

Network File System: I’m getting the basics, but on TinyCore, my usual method of mounting shares gets flushed on reboot. I can work with the system I have, though. I can’t give the TinyCore people enough credit: they produced a shell script I could customize and told me how to install it.
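The script amounts to something like this hedged sketch; the server address, export path, and mount point are placeholders for my setup, and on TinyCore it would be called from /opt/bootlocal.sh so it runs at every boot:

```shell
#!/bin/sh
# Placeholder values: adjust for your own server and share.
SERVER=192.168.0.10
EXPORT=/photos
MOUNTPOINT=/mnt/goldenoak

# Build the mount command (printed here; on the Pi it would be executed).
cmd="mount -t nfs $SERVER:$EXPORT $MOUNTPOINT"
echo "$cmd"
# mkdir -p "$MOUNTPOINT" && $cmd    # the live version, run as root
```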

Color Depth

Just a note, really: color depth is a thing. The scanners I’m working with each use 16 bits to say how much of each primary color is present in a pixel. I was warned by user barefootliam on the GIMP Discord server that the XSANE plugin may only use 8 bits. I don’t know for sure at this time, but it’s worth noting.

Amusing Errors

Along the way, the system broke a number of times. Early on, I found I could hold Ctrl+C to close Xorg as it started and drop to the command line. Sometimes it was root, other times it was the default user, tc. Now that root has a password, it won’t let me pull that one, thankfully.

Another time, I was changing the default GIMP icons from plain white to color, and it crashed JWM. I tried several times, and the result was always the same: all the window decorations were gone, and I was left with GIMP selected, unable to switch to the terminal to save the changes with filetool.sh -b as you do on TinyCore. I found the sleep command and set a delay to restart JWM using sleep 15s && jwm. At first, I was adding a restart flag to jwm, but without JWM running, I was left with a dud command and a brick to forcefully reboot.

Toward the end of this development cycle, I was debugging JWM, moved the config files to a separate directory, and restarted the program. Yikes! It turns out it came from the tce repository with a theme of its own. Rebooting once again restored the theme to default, but didn’t solve the empty config file issue.

Future Plans

A computer system is hardly ever a static setup. When I have another SD card, I will want to look into a clean build of the aarch64 version of PiCore; it turns out I’m on the 32-bit version. My dream, though, is to build it on the 8 GB Pi 4 once I know what I want here.

I may not end up using a Pi at all if it just can’t keep up with my old laptop on its scanner.

Takeaway

TinyCore is a fun operating system. It may be a bit limited due to low popularity, but the community I’ve now contacted gets back pretty quickly. Just know that connections to their site are unencrypted, so if you go there, USE A UNIQUE PASSWORD! It’s just good practice anyway. Here’s a link to the help thread I’ve been lurking in.

Final Question

What kind of long duration projects have you completed?

Raspberry Pi 400: First Impressions

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am covering the new Raspberry Pi 400 I have. Let’s get started!

The Raspberry Pi 400 is the latest in the Raspberry Pi lineup. Unlike all its brethren, there will be no aftermarket case for this small computer: the PCB (Printed Circuit Board) has been redesigned to fit inside an official Raspberry Pi keyboard. The specs are comparable to the Pi 4 Model B, but with a heat sink built in under the keyboard and a newer CPU revision with hardware bug fixes, the Pi 400 runs a little faster than its Pi 4 siblings out of the box.

Unboxing

I made my own Pi 400 kit because I needed some extra peripherals for my other Pis, like an extra, longer micro HDMI to HDMI cable and a USB-C power switch. I already have a number of SD cards floating around, so I didn’t need one included.

As I opened the box, I noticed only three USB ports: two blue and one black. I was otherwise going to use the fourth for a keyboard, so it works out. Still, universal receivers are a thing, and in such a case a user would be down a port relative to normal operation. Everything else from the sides appears to now be on the back. Note, I am not intimately familiar with the GPIO pins, but there is a bank of them present.

I am, however, familiar with the camera port normally found in the middle of the board, and no such port is apparent anywhere on the case. This is most likely because it’s being marketed more as a desktop on the go, though the educational/experimental reasons for owning one are still present.

LibreELEC

One of the immediate reasons we got the Pi 400 was so we could have our entertainment system with a keyboard. (For those keeping record, the HDMI port by the USB-C power port is the one LibreELEC wants.) Eventually, I want to rig an IR sensor to respond to a TV remote so I can pull the Pi 400 in favor of a regular Pi 4.

But the problems start here. It’s a small complaint, but the time isn’t quite right. No matter where I look, there’s no place to set the time manually. It wants an Internet connection, and when I looked into it, it was as if the operating system (which was fine on a regular Pi 4) didn’t see the Wi-Fi circuitry.

Debugging

I’m disappointed I have to write this section. I put my Manjaro card in for easier terminal access, and the Wi-Fi doesn’t show up no matter how many recommended commands I try. With such a new computer on the market, the first help forum topics are still being written.

What is still beyond me is that when I put my Raspbian SD card in, the thing worked normally. I was able to go online and perform a search; I even pulled up footage of SpaceX’s latest fireball, SN8, fresh that day. The circuitry works, but two of three cards say it’s not there.

I’m afraid I was unable to carry diagnostics past this point. The two cards that didn’t see the wireless were 64-bit, and my Raspbian card is 32-bit. I had an image of the lost card (supposedly 64-bit Raspberry Pi OS), so I tried burning it to my last clean micro SD card. It took a while to flash, but the Internet worked. getconf LONG_BIT said it was 32-bit, though. Same story when I tried downloading a fresh copy of Raspberry Pi OS.
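Two commands would have saved me some confusion here, and both are safe to run anywhere. `getconf LONG_BIT` reports the userland word size, while `uname -m` reports the kernel architecture; on a Pi the two can legitimately disagree, since a 64-bit kernel happily runs a 32-bit userland:

```shell
getconf LONG_BIT   # 32 or 64: the word size of the installed userland
uname -m           # armv7l suggests a 32-bit kernel, aarch64 a 64-bit one
```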

I don’t know if I have a defective unit. What I do know is that any warranty I have is ticking. I’d rather not ship it off for a replacement until I’m sure of what’s going on. The closest match to these symptoms I found was maybe the Pi 4 jamming its own Wi-Fi somehow. I know I’ve been focusing on 64-bit vs 32-bit, but my tests could just as easily be splitting on official vs third-party. My next test should probably be a 32-bit third-party OS. If it doesn’t work, maybe some drivers just need tweaking.

Final Question

I’ll make it simple this time: Where is an official 64 bit OS for the Pi? I believe it exists, but the site led me to believe I was downloading it and not something 32 bit.

When To Use LTS

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m moving Derpy over to yet another installation of PopOS. Let’s get started!

PopOS is an Ubuntu-derived Linux distro. Its website has install images for both the LTS and latest versions, each with options for NVIDIA or other graphics cards. Not knowing anything about which one to pick, I installed the NVIDIA and, later, the generic latest-release image.

To date, I’ve found PopOS to be the easiest Linux distro to install. I don’t find it perfect: their GNOME 3 based desktop environment isn’t aimed at the Windows-like workflow I’m used to, but their website has step-by-step instructions for installing a number of well-known alternative desktop environments. The GUI package manager has a few issues, but I’m convinced that’s normal in the Debian family, and the proper command isn’t all that hard to learn. My biggest complaint, though, is random Discord/Internet blinks: common enough to be annoying, but too rare to readily diagnose.

The Structure of Maintenance

In one vision of a perfect world, people would only ever download raw source code to compile locally, a process Manjaro has streamlined. The Debian family’s apt repository nicely emulates another “perfect” vision where software is curated – the correct precompiled package is downloaded and installed automatically.

Neither format has an infinite capacity for continued support. Where Arch-family distros simply run whatever is newest, Debian-family distros tend to accumulate changes until they produce a new, discrete version. For added stability, LTS versions are maintained so users have the option to go several years without the hassle of a major upgrade.

Popularity of Ubuntu

Anyone who knows anything more than the name Linux has almost certainly also heard the name Ubuntu. For some, the name Linux only popped up after looking into that Ubuntu machine in the library that somehow isn’t a Windows or a Mac. Its popularity is due in part to its wide software base, drawing from both Debian’s and its own official repositories, as well as any number of PPAs people have set up. This popularity snowball extends to software available only by download from trusted websites.

Derpy’s Final? Form

One of my primary reasons for installing PopOS on Derpy Chips was having an Ubuntu-compatible research desktop that can access this software base. Unfortunately, while investigating software I want to use in worldbuilding, I found my version of PopOS simply lacked prerequisite packages; backports were hard to find, and compilation was taking too long.

What I didn’t realize my first time installing PopOS was that downloadable third-party software isn’t always compiled for the latest and greatest versions of Ubuntu. It could also be a question of maintenance: if you only have the resources to keep up a few versions of your software, it makes the most sense to focus on LTS releases. This is why I downgraded to the LTS version of PopOS. I didn’t have much to back up, and I was most of the way back in a single evening.

Takeaway

If you are ever faced with the option of installing an LTS release, consider the application. If you depend on third-party software, you may find yourself more readily at home on an LTS version. If you don’t care about any special software and want the latest and greatest without giving up too much stability, a major release may be for you.

Final Question

Have you ever taken measures to make a correct decision, only for another factor to muddle things up anyway?

I Have a Laptop Again

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am putting my laptop back in its own case and providing my thoughts on my first week of using PopOS on a desktop. Let’s get started!

Laptop History

I’ve said it before: my laptop is now nine years old, and it has no business still running, yet here it is. Trouble started when the power port broke and I replaced it with a cheap knockoff; the BIOS refused to charge any batteries through it. I resigned myself to operating from the power cord on a permanent basis.

As I became more interested in using Linux as my daily driver, I saw no harm in installing to a large, external USB 3 SSD. It actually performed better than the internal hard disk. The only downside was one more cable, and that seemed like nothing next to the power cord; I even opted for a second monitor and a separate keyboard and mouse.

Things started turning around when I got a hold of a genuine power port. Installation was a success, but shortly afterward, I accidentally nuked my Windows drive during an unrelated project. Much later, I swapped out the wiped hard disk for a more modern SSD during an emergency teardown in preparation for this week’s simple, but monumental project.

dd if=/dev/sdd of=/dev/sda status=progress

The dd command is easily the single most inherently dangerous command I know. As such, I insisted on backing up my home directory in case I had another “nuclear accident.” I developed a series of prerequisites, such as getting an NFS share working as I expected with login credentials of one sort or another. I researched that for a while, but made not quite enough progress in that department to justify continued intense focus.

The typical go-to terminal program for do-it-yourself backups is rsync. I would have used cp, as I’m more familiar with it, but rsync has a number of improvements: it’s supposed to be faster, and it intelligently skips files already up to date in the target directory. It also has an overwhelming number of additional options I couldn’t begin to cover.

Once I had a copy safely stored on network storage, I had rsync double-check it, and I loaded up my PopOS install media (also on a USB 3 drive) to prepare for the final copy. The BIOS kept loading back into my main install, but instead of arguing with it, I just unplugged the wrong drive until it was strictly needed.

Both my source and destination drives are 1 terabyte apiece, but there’s a little room for variation in the specs. As a final check, I used lsblk --bytes to examine the exact sizes, and the relatively tiny difference leaned in my favor. As is becoming my custom for dangerous commands with elevated permissions, I only prefixed the command with sudo after checking and double-checking it. I REALLY don’t like commands where a single character off could be a valid command I do not want to run, especially with drive designations /dev/sdd and /dev/sdb present at the same time.
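Scripted, the pre-flight size check looks something like this (device names are placeholders; the comparison helper is my own, and the dd line is shown rather than run):

```shell
# Refuse to clone when the destination is smaller than the source.
size_ok() {   # size_ok SOURCE_BYTES DEST_BYTES
    if [ "$2" -ge "$1" ]; then echo "ok to clone"; else echo "destination too small"; fi
}

# Real usage:
#   src=$(lsblk --bytes -dno SIZE /dev/sdd)
#   dst=$(lsblk --bytes -dno SIZE /dev/sda)
#   size_ok "$src" "$dst"
#   sudo dd if=/dev/sdd of=/dev/sda bs=4M status=progress

size_ok 1000204886016 1000204886016   # equal sizes: fine
size_ok 1000204886016 1000170586112   # smaller destination: refuse
```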

I only triple checked before executing the critical command; I should have quadruple checked, but everything was in place. I had dd report its progress so I could estimate its time until completion, which turned out to be about 6 hours. I was not there when it finished, but it booted up just fine on the first try. My laptop is now a normal laptop again.

PopOS Meltdown and Recovery

PopOS gave me a bit of a scare this week. I customized it by installing KDE, but it started acting really weird when I was trying to play a game with my sister. The trouble started after an update: Discord and Firefox were on the fritz, sometimes glitching out two or three times a minute. Rebooting didn’t help. Loading up GNOME didn’t help. I had to put it away over Sabbath, and when I upgraded packages again, the issues stopped immediately. I am very thankful to the teams who provided a quick turnaround for whatever bug was making things unusable.

Final Question

I have been looking forward to finishing this milestone for a really long time. I feel like a soft chapter in my hobby here has come to an end. Which of my long-term projects should I return to next?

PopOS and X Drive Recovery

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am reviewing my first impressions of PopOS and compiling another short story I have about data recovery. Let’s get started!

Derpy’s Re-refurbishment

One thing I’ve learned while poking into Manjaro is that sometimes a “Linux” program really just means it’s packaged for Debian or Ubuntu, and other branches are expected to be tech savvy enough to bend compilers to their will. For that reason, I am re-refurbishing Derpy Chips with PopOS, an Ubuntu-based distro made with both privacy and Linux beginners in mind.

When the tale of Derpy Chips was last laid to rest, I had slated it for a new cooling system, as well as a new hard disk and RAM. The latter two are trivial for anyone who isn’t afraid of the inside of a computer case. When the parts arrived, they basically went straight in.

The water cooling system, on the other hand, is a lot more involved. I waited until my father was available to do it with him, and I’m glad I did. We unmounted the radiator, and the thing looked as if it had ten years’ worth of dust preventing any and all airflow! The fans weren’t even blowing in the correct direction. After applying a generous amount of vacuum cleaner and toothbrush, we put the radiator and fans back on correctly, but the old, gummed up case fan was on the brittle side. It’s now secured with a length of wire so it won’t spin off balance.

The cooling system is now working fine. We figure that problem is solved now, but we have a spare just in case.

PopOS

I had a few bumps installing PopOS, but those were all on me. I have a new favorite USB drive to install Linux from, mainly because it’s USB 3. I forgot to umount everything before writing the install image to it, and things seemed to hang for an unreasonable amount of time. Otherwise, it was the smoothest Linux installation I’ve ever done.

I went into this knowing I’d probably be switching out the desktop environment. While the PopOS branding emphasizes the polish they’ve put into tweaking GNOME 3 to boost productivity, I am very particular about my desktop computers looking and feeling like desktop computers and not phones or tablets. Perhaps one day, I’ll get around to using it, but not at this time.

Minor complaint, but for seemingly no reason, Ctrl+Alt+T doesn’t bring up the terminal. I have absolutely no idea why not, as that is the number one most important key combination in Linux, far more important to me than the Windows “Three Finger Salute” used to bring up Task Manager.
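For the record, GNOME does let you wire a shortcut like this up yourself through its custom-keybinding settings. The following is an untested sketch; the slot path is the conventional one, and the shortcut name and terminal command are my assumptions about a stock setup:

```shell
# Register one custom shortcut slot, then fill in its three fields.
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings \
  "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"
KEY=org.gnome.settings-daemon.plugins.media-keys.custom-keybinding
SLOT=/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/
gsettings set $KEY:$SLOT name 'Open Terminal'
gsettings set $KEY:$SLOT command 'gnome-terminal'
gsettings set $KEY:$SLOT binding '<Ctrl><Alt>T'
```

The same settings are reachable through the graphical Keyboard Shortcuts panel, which is probably the saner route for a one-off fix.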

X Drive

Well, it seems I had just gotten a dinosaur of a network storage device working when it died, without fanfare, not two weeks later. I spent a short while applying my knowledge from researching NFS to make it properly accessible. I went through and taught myself how to connect to it as an SMB drive. I even made a link to it from the desktop. It worked. Now the drive won’t even announce itself to the system.

Along the way, I was imagining a hidden clock in the drive: if the clock reaches zero, the drive stops working. The goal, then, is to get in, get the data, and hope there’s enough time. I posted for help, and Discord user Ghostrunner0808 walked me through the basics of single-use rsync. It uses the same syntax as cp, the copy program I was otherwise going to use. It can also pick up where it left off, in case operations are interrupted.

I set up a little sandbox directory and experimented. While I wasn’t able to get all my root level hidden files to copy, I was able to get everything else. I also looked through the whole help output and settled on the flags -rtUv: r for recursive, t and U to preserve modification and access times, and v for verbose so I know it’s still doing something.

One of the network shares is meant for general access for all family members. Permissions were such that it made the perfect place to dump X Drive. I copied and modified an appropriate line from a working /etc/fstab to mount it on boot.
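For illustration, a mount-on-boot entry of the kind I copied and modified might look like this. Everything here is a placeholder (server, share, mount point, credentials path), and the options assume an SMB/CIFS share:

```
# Hypothetical /etc/fstab line: mount the family share at boot.
//fileserver/family  /mnt/family  cifs  credentials=/etc/samba/family.cred,uid=1000,gid=1000  0  0
```

Keeping the username and password in a root-readable credentials file, rather than inline in fstab, stops them from being world-readable.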

The whole reason I had conceived of this idea was that X Drive is little more than a regular hard disk in a plastic case, plus a stand to hold it on end. The plastic case was keeping the SATA cable from connecting. We forcefully removed it, only to find someone showing how we could have done it without damaging the snaps.

DO NOT TRY THE FOLLOWING AT HOME. One trick I was thinking about trying was triple bagging the disk and sticking it in either the fridge or freezer. Some people have reported saving a failing drive this way, but further research found this is only effective when the read head is sticking to the platter; I could feel the platter spinning inside the case.

This one is above me. I need some form of professional help.

Sad to end on a bum note, but two out of three isn’t bad. I will add that PopOS has only two downloads: one for systems with NVIDIA cards, and one for all other systems. I downloaded the NVIDIA one, thinking Derpy’s card qualified, but I was wrong. Good thing the installation process is super easy.

Final Question

What priorities do you value in an operating system?

Parts Ordered: Derpy’s Second Makeover

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am shopping for parts to re-refurbish Derpy Chips, an older tower I turned into an Ubuntu box for a while. Let’s get started!

Overview

My computer tower is old, going on nine years now. Some day it will be up for total replacement, but not today. Still, I find myself in need of a second tower for my personal use, something with a little more heft than my laptop can offer.

My father built a computer tower long before I got into Linux. It served as the family’s main computer for several years, but it had a habit of crashing with the vague error: “Kernel Power Failure.”

I eventually took over the machine to turn it into a Minecraft server. I swapped out its 2 TB HDD for a much more modest 250 GB SSD, and the crashes stopped. I still named it Derpy Chips, part in endearment, part in reference to its past. It has since fallen back into disuse for lack of RAM after I swiped its sticks for Button Mash, our new dedicated Minecraft server, and more recently, I grabbed the replacement SSD to install Manjaro on my production machine after its liquid cooling pump started making a horrendous noise (presumed worn out).

That about brings it to today. I would like to move Derpy back into use, but it needs cooling, a hard drive, and RAM. I did some preliminary investigations, and estimated a cost of about $250 with minimal thought for compatibility. All I need is a holdover to the next rig.

CPU cooling

The options are almost limitless, and almost all of them are incompatible. Derpy’s case is pretty deep, so I’m not terribly worried about height, but different CPU sockets have slightly different shapes.

While it’s entirely possible to hire someone to pour liquid nitrogen on the CPU all day, even a fancy water cooling system would be more reasonable. A properly functioning liquid cooling system would definitely be quieter, which is something I need. All I have to say is that I was impressed with my sister’s Noctua cooler, so we ordered one from them.

Hard Drive

As stated earlier, Derpy’s first hard drive, a spinning platter HDD, was defective and the problem wasn’t isolated within the warranty period. Lacking a cooling block that wasn’t the size of a teacup, I yoinked its second hard drive, an SSD, for another computer I have yet to finish stabilizing.

Western Digital is a respected brand, and I’m happy with their products thus far. I don’t need a full terabyte for a holdover machine, but it would be nice to have one in my production model. We therefore went ahead and bought a 1 TB drive with the plan of migrating the original replacement to it and reworking the smaller SSD again.

RAM

Probably the pickiest of the components I’m looking at today, RAM is the sort of thing I hear you can just try and see if it works; as long as you aren’t using so much force that things are physically cracking, you should be fine come your “smoke test.”

The most noticeable factor in RAM is what kind it is. The current standard is DDR4, but the technosystem I’m tending is all on DDR3. Other factors come into play as well, such as how fast the motherboard runs it, the voltage supplied, how much memory per stick there is, and in the case of matched sets, how much memory there is total. When shopping online, many sites will have some sort of filters so you can hopefully find the perfect match for your needs.

One important consideration I wanted to take into account was how Derpy’s original four 4 GB sticks of RAM compared against Button Mash’s motherboard and CPU. Obviously they’re compatible, but I wanted to know if the RAM was too good for the computer it was in. We ended up looking up the specific parts and found it was a perfect match for max clock speed and overall capacity, leaving me free to explore other options.

We pulled out the original box for Derpy’s motherboard and looked up the specs. It turns out the board can take up to 32 GB of RAM across four slots, where it was previously fully booked at a total of 16. While shopping, I considered buying either another set of four 4 GB sticks or a set of two 8 GB sticks. Hypothetically speaking, if I expected to rotate a future new tower in, I could move my relatively newer tower over to where Derpy is now being staged, and the possibility for a future upgrade could be left open.

This plan ran into a time wall and was laid aside before being fully explored. I learned that CPU generation matters to RAM: Derpy has a 2nd generation i7 while I’m presently on a 3rd generation i5. Such a set of RAM would need to service both, and while such a product exists, we ended up just grabbing a four-stick set intended for Derpy alone, without making sure it shipped from the US.

Of note, we also learned about RAM latency timings. Long story short: while lower numbers are better here, they aren’t as important. “Latency for the less-complex DDR2 RAM can be lower, but it can’t process data nearly as quickly as a modern DDR4 chip.” [MakeTechEasier (author comment)]

Of course, some motherboards might be able to auto-detect the voltage and adapt. All I know is that I’m tired of numbers for now, and I’ll be happy when I see everything working smoothly.

Final Question

Have you ever built a computer from components and had something go wrong, even if things technically worked?

A Collection of Raspberry Pi Projects: Volume 1

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am playing with some SD cards I got to try out additional systems for my Raspberry Pi 4. Let’s get started!

The Plan

I would like a computer where I can format drives without having to worry about nuking any drives I can’t easily repair should I get a single keystroke wrong again. I need a quarantine machine.

Earlier this year, I got a Raspberry Pi 4 to serve as a head/firewall for a model supercomputer (still pending). Back then, I found three microSD cards, but only one was good for booting Raspbian, and I ended up with a neat, little Wi-Fi to Ethernet router. I like my reverse wireless router. I don’t want to give up my reverse wireless router.

I started shopping for microSD cards, and came across a then-recent Tom’s Hardware article where they tested several brands for use with Raspberry Pis and compared them in different areas. Going off their recommendation, I selected the Silicon Power 3D NAND. I figured I may want more than one, so I got a 5 pack. The next size up was 10, and I’m not quite that avid a Pi user at the present time.

Besides a Quarantine Machine, additional applications include:
a general purpose operating system,
an actual firewall/supercomputer head,
a media center,
and a home-network wide ad blocker.
It would also be good to have a backup of my Quarantine card in case I really goof it.

Manjaro ARM

While I was installing Manjaro on my desktop, I noticed they have a version for ARM processors, such as the Raspberry Pi. They even maintain an image for installation on the Pi. I went with XFCE to preview for an upcoming project.

Installation was a nightmare due to user error. I must have tried three or four times to load Manjaro onto the first partition of a microSD card. Along the way, I found a thread where someone was reporting issues with installing the then-current version, 20.08 (named after the year.month), and people told him to try 20.06. My advice: if you’re thinking about installing any version of Manjaro, don’t bother trying to downgrade. I don’t remember exactly how, but I got an older image that I would update later.

Once I went back to the official documentation and saw I was supposed to aim the dd command directly at the drive itself, I got it on the next try using the older image. The interface to finish installing felt unintuitive, leaving me to research keyboard standards. I would not recommend it for anyone new to Linux.
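To make that lesson concrete, here is the shape of the command. The image and device names are placeholders (check lsblk first!), and the runnable demo below substitutes scratch files for the real device so it is safe to try:

```shell
# Real usage (commented out -- destructive; names are placeholders):
#   sudo dd if=manjaro-arm.img of=/dev/sdX bs=4M status=progress conv=fsync
# The key point: of= targets the WHOLE device (/dev/sdX), never a
# partition (/dev/sdX1), so the image's own partition table is written.
# Safe stand-in: scratch files play the roles of image and device.
dd if=/dev/zero of=fake-device.img bs=1M count=4 2>/dev/null
printf 'BOOTIMG' > manjaro.img
dd if=manjaro.img of=fake-device.img conv=notrunc 2>/dev/null
head -c 7 fake-device.img    # prints: BOOTIMG
```

The conv=notrunc in the demo keeps the "device" its original size, mirroring how a real block device keeps its capacity no matter what is written to it.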

Where the installer lacked polish, the XFCE desktop environment made up for it with some nicely preconfigured settings. Perhaps I was a bit harsh on it before. I was especially happy to be rid of the ugly, black lines around the screen present on Raspbian. And of course, once I found the pacman command to update and checked the version, 20.10 had been released.

I went on to fine-tune this install, reviewing some of my past lessons. A brief search didn’t help me set a static IP, but I moved on anyway. SSH was enabled by default, but it gave me an infuriating time confirming the host key fingerprint. I ended up caving to move on, but I did learn something about the improved security of a newer standard called Ed25519.
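For anyone else fighting host key verification: the fingerprint ssh shows on first connect can be checked against the key on the machine itself using ssh-keygen. On a stock install the host key lives at /etc/ssh/ssh_host_ed25519_key.pub; the sketch below generates a throwaway key instead so it can run anywhere:

```shell
# On the real machine you would run:
#   ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
# Throwaway key so this demo is self-contained:
ssh-keygen -t ed25519 -f demo_key -N '' -q
ssh-keygen -lf demo_key.pub    # prints the key size, SHA256 fingerprint, and type
```

If the SHA256 string printed on the machine matches what your ssh client shows, the connection is talking to the box you think it is.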

Raspup

Puppy Linux is, in theory, an excellent choice for an expendable Linux install on a Quarantine Machine: it’s small, it’s enough to get you by as a daily driver if you can stand its slightly offbeat control scheme, and most importantly, it’s easy to reinstall. It’s also made for x86.

That’s where the people over at Raspup stepped in earlier this year. Installation was much easier than Manjaro’s, but that was where the polish ended. The project is so new it doesn’t show up on DistroWatch. Its site doesn’t have a recognized security certificate, and it has some oddball domain going on.

As for the operating system itself, I found the lack of Ctrl+Alt+T bringing up a terminal to be the greatest shortcoming in terms of my user experience. It also seemed obsessed with using linked GUI windows for every step of initial setup, and it took way too long to boot. While it did have those black bars around the screen, it also had a utility to adjust them between reboots. I don’t have the patience for it right now.

The most impressive thing about Raspup is their claim to work on any Raspberry Pi version, though the compute module remains untested. I honestly wish this project the best of luck, but at present, I can only recommend this cute, little project if you’re bored, want to poke around with something new, and have a spare microSD card for your pi. [Link to Raspup]

Other Projects

My goal was to also include a media station, but that didn’t install correctly as my research window for this week was closing. I’d also like to see about extracting an IR sensor from a dead piece of hardware, but that project can easily fill its own month of blogs.

While doing my write-up, I considered Tiny Core again, and there appears to be one for the Pi. I may do this one on the sly without reporting on it.

Six-plus microSD cards is a lot to manage for a single Raspberry Pi. The cards in the five pack each came with an adapter, so I borrowed a label maker and applied labels to those.

Closing thoughts

One of the quirks I noticed with Manjaro on the Pi was that my USB SD card reader was showing up as /dev/sdc. Normally, SD family cards, such as the one inserted directly into the Pi’s onboard slot at the time, have a different designation, so that’s something to look into. On the other hand, this discrepancy may be just what I’m after in terms of a safe computer to blast away with disk destroying operations. It only took one wrong keystroke last time, and if I don’t pursue Tiny/MicroCore Linux again, this may provide the safety margin I need, where I can disconnect any unneeded drives without opening any cases.

Final Question

What other Pi distros would you like to see reviewed on here?