When To Use LTS

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m moving Derpy over to yet another installation of PopOS. Let’s get started!

PopOS is an Ubuntu-derived Linux distro. Its website has install images for both the LTS and latest versions, each with options for NVIDIA or other graphics cards. Not knowing anything about which one to pick, I installed the NVIDIA and later the generic latest release images.

To date, I’ve found PopOS to be the easiest Linux distro to install. I don’t find it perfect: their GNOME 3-based desktop environment isn’t aimed at the Windows-like workflow I’m used to, but their website has step-by-step instructions for installing a number of well-known alternate desktop environments. The GUI package manager has a few issues, but I’m convinced that’s normal in the Debian family, and the proper command isn’t all that hard to learn. My biggest complaint, though, is that I’m having random Discord/Internet blinks. They’re common enough to be annoying, but too rare to readily diagnose.

The Structure of Maintenance

In one vision of a perfect world, people would only ever download raw source code to compile locally, a process Manjaro has streamlined. The Debian family’s apt repository nicely emulates another “perfect” vision where software is curated – the correct precompiled package is downloaded and installed automatically.

Neither format has an infinite capacity for continued support. Where Arch family distros simply run whatever is newest, Debian-family distros tend to accumulate changes until they produce a new, discrete version. For added stability, LTS versions are maintained so users have an option to go several years without having to go through the hassle of a major update.

Popularity of Ubuntu

Anyone who knows anything more than the name Linux almost certainly has also heard the name Ubuntu. For some, the name Linux only popped up after looking into that Ubuntu machine in the library that somehow isn’t a Windows or a Mac. Its popularity is in part due to its wide software base, drawing from both Debian’s and its own official repositories, as well as any number of PPAs people have set up. This popularity snowball extends to software available only by download from trusted websites.

Derpy’s Final? Form

One of my primary reasons for installing PopOS on Derpy Chips is having an Ubuntu-compatible research desktop that can access this software base. Unfortunately, while investigating software I want to use in worldbuilding, I found my version of PopOS simply lacked prerequisite packages. Backports were hard to find and compilation was taking too long.

What I didn’t realize my first time installing PopOS was that downloadable 3rd party software isn’t always compiled for the latest and greatest versions of Ubuntu. It could also be a question of maintenance. If you only have the resources to upkeep a few versions of your software, it makes the most sense to focus on LTS releases. This is why I downgraded to the LTS version of PopOS. I didn’t have much to back up, and I was most of the way back in a single evening.

Takeaway

If you are ever faced with the option of installing an LTS release, you really need to consider the application. If you have a dependence on 3rd party software, you may find yourself more readily at home using an LTS version. If you don’t care about any special software and want the latest and greatest without giving up too much stability, a major release may be for you.

Final Question

Have you ever taken measures to make a correct decision, only for another factor to muddle things up anyway?

I Have a Laptop Again

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am putting my laptop back in its own case and providing my thoughts on my first week of using PopOS on a desktop. Let’s get started!

Laptop History

I’ve said it before: my laptop is now nine years old, and it has no business still running, yet here it is. Trouble started when the power port broke, and I replaced it with a cheap knockoff. The BIOS refused to charge any batteries from it. I resigned myself to operating from the power cord on a permanent basis.

As I became more interested in using Linux as my daily driver, I saw no harm in installing to a large, external USB 3 SSD. It actually performed better than the internal hard disk. The only downside was one more cable, which seemed like nothing next to the power cord; I even opted for a second monitor and a separate keyboard/mouse.

Things started turning around when I got a hold of a genuine power port. Installation was a success, but shortly afterward, I accidentally nuked my Windows drive during an unrelated project. Much later, I swapped out the wiped hard disk for a more modern SSD during an emergency teardown in preparation for this week’s simple, but monumental project.

dd if=/dev/sdd of=/dev/sda status=progress

The dd command is easily the single most inherently dangerous command I know. As such, I insisted on backing up my home directory in case I had another “nuclear accident.” I developed a series of prerequisites, such as getting an NFS share working as I expected with login credentials of one sort or another. I researched that for a while, but made not quite enough progress in that department to justify continued intense focus.

The typical go-to terminal program for do-it-yourself automatic backups is rsync. I would have used cp, as I’m more familiar with it, but rsync has a number of improvements. It’s supposed to be faster, and it intelligently ignores files already in the target directory. It also has an overwhelming number of additional options I couldn’t begin to cover.

Once I had a copy safely stored on network storage, I had rsync double-check it, and I loaded up my PopOS install media (also on a USB 3 drive) to prepare for the final copy action. The BIOS kept loading back to my main install, but instead of arguing with the BIOS, I just unplugged the wrong drive until it was strictly needed.

Both my source and my receiving drive are 1 terabyte apiece, but there’s a little room for variation in the specs. As a final check, I used lsblk --bytes to examine the exact sizes, and the relatively tiny difference leaned in my favor. As is becoming my custom for dangerous commands with elevated permissions, I only prefixed the command with sudo after typing it out and double-checking it. I REALLY don’t like commands where a single character off could be a valid command I do not want to run, especially when I have the hard drive designations /dev/sdd and /dev/sdb present at the same time.
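That size comparison is easy to script. Below is a minimal sketch using made-up byte counts where real lsblk output would go; querying the actual drives needs the hardware present, so the numbers are placeholders:

```shell
# Stand-ins for: lsblk --bytes --nodeps --noheadings --output SIZE /dev/sdX
src_bytes=1000204886016   # source drive (placeholder value)
dst_bytes=1000215724032   # receiving drive, slightly larger in my favor

# Refuse to clone unless the destination can hold every byte of the source.
if [ "$dst_bytes" -ge "$src_bytes" ]; then
  verdict="safe to clone"
else
  verdict="destination too small"
fi
echo "$verdict"
```

Only after that check passes would the sudo dd command itself get typed in.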

I only triple checked before executing the critical command. I should have quadruple checked, but everything was in place. I had dd report its progress so I could estimate its time until completion, which turned out to be about six hours. I was not there when it finished, but the laptop booted up just fine on the first try. My laptop is now a normal laptop again.

PopOS Meltdown and Recovery

PopOS gave me a bit of a scare this week. I customized it by installing KDE, but it started acting really weird when I was trying to play a game with my sister. The trouble started after an update. Discord and Firefox were going on the fritz, sometimes spazzing out two or three times a minute. Rebooting didn’t work. Loading up GNOME didn’t work. I had to put it away over Sabbath, and when I upgraded packages again, the issues stopped immediately. I am very thankful to the teams who provided a quick turnaround for whatever bug was making things unusable.

Final Question

I have been looking forward to finishing this milestone for a really long time. I feel like a soft chapter in my hobby here has come to an end. Which of my long-term projects should I return to next?

PopOS and X Drive Recovery

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am reviewing my first impressions of PopOS and compiling another short story I have about data recovery. Let’s get started!

Derpy’s Re-refurbishment

One thing I’ve learned while poking into Manjaro is that sometimes “Linux” support for a program really just means it’s packaged for Debian or Ubuntu, and other branches are expected to be tech savvy enough to bend compilers to their will. For that reason, I am re-refurbishing Derpy Chips with PopOS, an Ubuntu-based distro made with both privacy and Linux beginners in mind.

When the tale of Derpy Chips was last laid to rest, I had slated it for a new cooling system, as well as a new hard disk and RAM. The latter two are trivial for anyone who isn’t afraid of the inside of a computer case. When the parts arrived, they basically went straight in.

The water cooling system, on the other hand, is a lot more involved. I waited until my father was available to do that with him, and I’m glad I did. We unmounted the radiator, and the thing looked as if it had ten years’ worth of dust preventing any and all airflow! And the fans weren’t even blowing in the correct direction. After applying a generous amount of vacuum cleaner and toothbrush, we put the radiator and fans back on correctly, but the old, gummed-up case fan was on the brittle side. It’s now secured with a length of wire so it won’t spin off-balance.

The cooling system is now working fine. We figure that problem is solved now, but we have a spare just in case.

PopOS

I had a few bumps installing PopOS, but those were all on me. I have a new favorite USB drive to install Linux from, mainly because it’s USB 3. I forgot to unmount everything before writing the install image to it, and things seemed to hang for an unreasonable amount of time. Otherwise, it was the smoothest Linux installation I’ve ever done.

I went into this knowing I’d probably be switching out the desktop environment. While the PopOS branding emphasizes the polish they’ve put into tweaking GNOME 3 to boost productivity, I am very particular about my desktop computers looking and feeling like desktop computers and not phones or tablets. Perhaps one day, I’ll get around to using it, but not at this time.

Minor complaint, but for seemingly no reason, CTRL+ALT+T doesn’t bring up the terminal. I have absolutely no idea why not, as that is the number one most important key combination in Linux — far more important than the Windows “Three Finger Salute” used to bring up Task Manager.

X Drive

Well, it seems I had just gotten a dinosaur of a network storage device working when it died, without fanfare, not two weeks later. I spent a short while applying my knowledge from researching NFS to make it properly accessible. I went through and taught myself how to connect to it as an SMB drive. I even made a link to it from the desktop. It worked. Now it won’t even acknowledge itself as an internal drive.

Along the way, I was imagining a hidden clock in the drive. If the clock reaches zero, the drive stops working. The goal then is to get in, get the data, and hope there’s enough time. I posted for help, and Discord user Ghostrunner0808 walked me through the basics of single-use rsync. It uses the same syntax as cp, the copy program I was otherwise going to use. It can also start again where it left off, in case operations are interrupted.

I set up a little sandbox directory and experimented. While I wasn’t able to get all my root-level hidden files to copy, I was able to get everything else. I also looked through the help output and settled on using the flags -rtUv: r for recursive, t and U for preserving modification and access times, and v for verbose so I know it’s still doing something.

One of the network shares is meant for general access for all family members. Permissions were such that it made the perfect place to dump X Drive. I copied and modified an appropriate line from a working /etc/fstab to mount it on boot.
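For reference, an fstab entry for mounting an SMB share at boot generally looks something like the line below; the server name, share, mount point, and credentials file are all placeholders, not my actual setup:

```
# /etc/fstab — mount the family SMB share at boot (all names are examples)
//server/family  /mnt/family  cifs  credentials=/etc/samba/creds,uid=1000,_netdev  0  0
```

The _netdev option tells the system to wait for the network before attempting the mount.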

The whole reason I conceived of this idea is that X Drive is little more than a regular hard disk in a plastic case, plus a special stand to hold it on end. The plastic case was keeping the SATA cable from connecting. We forcefully removed it, only to later find someone showing how we could have done it without damaging the snaps.

DO NOT TRY THE FOLLOWING AT HOME. One trick I was thinking about trying was triple bagging the disk and sticking it in either the fridge or freezer. Some people have reported saving a failing drive this way, but further research found this is only effective when the read head is sticking to the platter; I could feel the platter spinning inside the case.

This one is above me. I need some form of professional help.

Sad to end on a bum note, but two out of three isn’t bad. I will add, though, that PopOS has only two downloads: one for systems with NVIDIA cards, and one for all other systems. I downloaded the NVIDIA one, thinking Derpy’s card was such, but I was wrong. Good thing the installation process is super easy.

Final Question

What priorities do you value in an operating system?

Trust and Privacy in a Digital World

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am doing a one-off about digital privacy. Let’s get started!

In all reality, this subject is totally improvised. My projects for this week never reached a satisfying milestone, but my father kept insisting I may have something here. Point being: this is highly underdeveloped, and my views will likely change in the upcoming months.

Most people I know appreciate at least a little privacy every now and again. Social norms vary depending on time and place, but in general, it’s as easy as shutting a door, closing the curtains, or hollering at violators when an honest mistake happens. Businesses partition off appropriate areas for customers to enact transactions. Doctor-patient confidentiality holds medical professionals accountable in case they disperse any of a treasure trove of possible gossip their jobs afford them. We’re good at intuitively carrying on reasonable privacy measures in the real world because our physical world has been engineered to suit our privacy needs.

But for all the privacy measures average Internet goers know, they may as well be jogging naked through the park wearing nothing but a fashionable belt advertised as guarding privacy when surfing the web (paraphrased from countless VPN ads).

The digital world is constantly being restructured and redesigned, and as such, so are the tools to compromise end-user privacy. Without naming any companies or products, more than a couple companies come out and outline their intrusions in that agreement hardly anybody has the attention span to digest, while others are caught committing outright espionage against their clients for foreign countries.

As I said at the beginning, this is still a rough rough draft, but I’d like to propose a number of categories to use when evaluating a piece of software:

Category 1. Malware

Nobody who even thinks he or she understands what is happening here will mistake it for anything else. Barring a user deliberately installing one of these programs to study its operation, these programs are unwelcome, and they’re out to get you.

Category 2. Trojans

This range describes software that performs a desired function, but only as a cover for spiriting in undesirable code. Such programs rely on people blindly installing them, either by lying about the payload or by hiding the truth in ten pages of tiny type and calling it an EULA or a privacy statement, either of which may as well be written in ancient Sumerian for all the typical user can digest.

Category 3. Trust

In the middle of this spectrum are programs that should be safe if they’re from a trustworthy party. Wise computer users evaluate how much trust a program needs vs how much it deserves.

The more access a program needs on the system or network, the more the user should trust it before letting it loose. The longer people have used it without someone sounding the alarm, the safer the program is. Transmission confirmation is also an important factor.

Category 4. Open Source

Open source is your friend. By exposing the source code for all to vet, a developer can get tens of thousands more eyes looking for bugs that a relatively tiny professional team may have missed. Once a piece of software is in circulation, bad people will start prying at it, and while open sourcing makes their job much easier, it also invites good people to do the same on an equal footing.

Category 5. Learning Tools

There is little more you can do to be worthy of trust in computing than to not only expose your well-documented code, but also use it as an example for students to learn from. By stepping through the source line by line, a lesson explicitly aims to demystify the program in question.

Conclusion

While these categories describe general regions along a spectrum, they have a lot of overlap. An open source virus is still a virus — but I have heard of an online museum displaying neutered malware that people spent artistic energy on, landing an otherwise category 1 program among category 4 or 5 programs. A technically savvy user can theoretically use some category 2 programs by controlling them with a firewall.

Most people feel safe as long as they don’t have category 1 software infecting their computers. Ideally, I would like to see everyone going up to category 4, but 3 is more realistic. Unfortunately, digital landscaping companies these days pump out category 2 software, putting profits above user safety. Without a working, up-to-date intuition on active threats, digital privacy is something that takes months or years of study to obtain.

As for me, I feel like the dim-sighted leading the blind in this matter. I’ve been upset for a while about certain companies with seemingly no accountability for the digital gossip they scoop up wholesale and sell to advertisers. I don’t like being sold.

Final Question:

How well dressed are you really when it comes to living in the digital world?

Kerberos: the Three Headed Authenticator

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I believe I finally have the key to a number of my stalled projects. I haven’t finished this week’s research as of starting writing, so I have no idea how it turns out. Let’s get started!

Critical Path

Right now, my immediate long-term goal is getting pictures from the photo archive scanned, preferably with a custom script to streamline the process. For that, I need a reliable computer for processing photos — my laptop would work, but I’d feel a lot better if I had its long-term memories inside the case instead of hanging out for the world to snag. I would clone them inside with dd, but I’ve been holding out until I can perform a backup to the NAS. And that is where I’ve been stuck for a month or two: authenticating to the NAS with credentials instead of by individual IP… until recently.

Kerberos

I often work on topics in parallel, switching on the last day if I don’t complete something. Last week, someone brought up Kerberos, an authentication protocol that’s been around since the mid-’80s, and if the Computerphile video, “Taming Kerberos,” is to be believed, it’s resistant to quantum attacks. Another video I watched explains Kerberos while addressing its weak points, something most explanations overlook.

Named after Cerberus, the three-headed dog of Greek mythology, Kerberos follows a three-part model for authentication: client, host, and a mutually trusted authentication server. In simplified terms, the exchange between you and the authentication server has two steps: one that uses cryptographic keys so both client and authentication server are sure they know each other, and another that issues tickets, or temporary passes, for desired hosts. Given a ticket, the host then does some final checks before conducting its normal business with the client.

Kerberos has more practical applications than just NAS authentication. It would appear OpenSSH is perfectly happy authenticating with it. I already have around 7 to 9 SSH-enabled devices, and I don’t care to manually generate and distribute keys amongst all of them. And then there are conflicting host keys when not everything is tied down to its own static IP, plus the Pi 4 I’m now swapping MicroSD cards on. For now, I’ve resolved to having just one machine I SSH into that holds most of the keys I need, and then SSHing into the appropriate machine from there. In the future, I may even apply this technology to that website I’ve been working towards for almost a year now.
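For scale, the per-machine chore I’m trying to avoid is only a couple of commands, but it repeats for every pair of machines. This sketch generates a throwaway key; the ssh-copy-id step is left commented out because it needs a live host, and the user/host names are made up:

```shell
# Generate an SSH keypair non-interactively in a scratch directory.
# -N '' means no passphrase (fine for a demo, questionable for real use).
work="$(mktemp -d)"
ssh-keygen -t ed25519 -N '' -f "$work/id_demo" -C "demo key" -q

# Distributing the public key would then be (commented: needs a real host):
# ssh-copy-id -i "$work/id_demo.pub" user@host
```

Multiply that by every client/server pair and centralized authentication starts looking attractive.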

Compiling Kerberos

There are plenty of explanations of how Kerberos works out there. So many, in fact, that they clogged up my searches for an actual implementation to use as a server. I eventually found what I believe to be the preferred download over at MIT. It does, however, come with a warning about exporting cryptographic source code from the US without permission, and that it’s illegal for it to go to a short list of countries or their nationals, though I did see some possible leeway for Canadians.

I intend to set my Pac-Man-ghost-themed Raspberry Pi 3 (codename: BlinkyPie) up as the host for this authentication server. With a significantly more powerful Pi in the house, it’s no longer my go-to for small scale computing, but since this is a very small scale job, processing speed and Ethernet bandwidth shouldn’t matter so much. I pulled it out of storage and set it in plain sight next to the NAS, near where I’ll need it for another project I’ve been putting off for years now. Of course, I’ll want to lock that thing up tight once I have it going long-term. For now, I’ve set it up with a static IP where such a future system will go — right in there with our other network services.

The source code came in a tar file. I extracted it, but then remembered that it’s good practice to verify the download when possible. MIT provides a PGP signature. I’ve only worked with SHA sums before, but it’s a similar idea: protect against corruption and tampering.

The PGP implementation I found to read about is called GPG. I had to look no farther than Blinky’s own Raspbian operating system to find it was already included. It’s usually nice when a little tool you learn about is already within arm’s reach. I struggled through reading the manual, its internal help page, and looking up use cases online. I found the missing public key, tracked down an instruction set under “documentation” on how to compile from scratch, and was finally able to manually verify the signature for the tarball. Trying to verify the signature in a separate directory without the public key failed, as expected.
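The verification workflow can be rehearsed end to end with a throwaway key. In the real workflow, I imported MIT’s public key and verified their detached signature against the krb5 tarball; here, everything (key, file, signature) is generated locally as a stand-in:

```shell
# Rehearse PGP verification with a throwaway key in a temporary keyring.
work="$(mktemp -d)"
export GNUPGHOME="$work/gnupg"
mkdir -p "$GNUPGHOME" && chmod 700 "$GNUPGHOME"

# Create a passphrase-less demo key (never do this for a real identity).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key demo@example.com default default never

# Stand-in for the release tarball, plus a detached signature over it.
echo "tarball stand-in" > "$work/pkg.tar"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign --output "$work/pkg.tar.sig" "$work/pkg.tar"

# The step that matters: verify the signature against the file.
gpg --verify "$work/pkg.tar.sig" "$work/pkg.tar"
```

With MIT’s real key imported, the same --verify invocation is what confirms the tarball is untampered.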

My first attempt at building Kerberos failed. I stepped back to research details in the instruction set I previously glazed over, such as requiring ANSI C compliance.

At this point, I started to get some direct answers from members of the EngineerMan Discord server. EngineerMan himself personally explained how ANSI C is just a longer name for the C language, mostly referring to older standards. He named c89, but said, “you won’t hear people call c18 ansi c though.”

My understanding of the situation is this: imagine I have a big program I intend to maintain for 50+ years. I can write it to the standards present here in 2020. But if a new standard comes out in the meantime, I can either go through the bother of updating EVERYTHING each time (leaving myself open to any bugs that might crop up as a result), or I can just keep using the 2020 standards. If someone needs to compile it, the newer compiler can always be manually configured to build against the older standard. Now just backdate the situation: swap 2020 for 1989 or so, and today for October 2020.

I tried recompiling a few times without changes, trying to sift through the overabundant console output. I think it was on my third attempt, when I stretched my terminal across two screens and beyond, that I noticed several messages referencing missing files.

One iteration, I redirected the first phase’s output to a log file using the > operator, but two warnings about missing files went to the console anyway. One was for something called Tcl, and the other for OpenSSL. OpenSSL showed up when I looked for it, but Discord user jeffz popped in and talked me through getting the developer version. Additional packages had to be added, preferring development versions when available.

Eventually, I stabilized the configuration script (I think), at which point I copied the whole build directory to a backup. With help, I worked on brute forcing the actual compilation. Every time, a huge error log would plague me, sometimes with lines so long it would take four screens to show a full line in tiny text. User yemou on the Nixhub Discord server spotted a tiny typo in the command fragment make CFLAGS=-std=c89. Most, but not all, log activity went away after that. Oh, and jeffz pointed out some MIT Kerberos compilation instructions tailored for Debian-based distros.

Installing Kerberos From a Repository

I got close (real close) to successfully compiling. I learned a lot to file away, but when I explored the link jeffz gave me about MIT Kerberos installation on Debian, it pointed straight to the apt repository. I installed it and promptly messed up the admin-server package configuration.

Initial MIT Kerberos 5 configuration is unforgiving. I tried removing the packages and putting them back, but they found some other remnants to latch on to. I used the page linked above to track down and remove more mentions, but each time I plucked something out, it broke even worse. I saw someone in a similar situation asking for help. I quickly figured out how to purge packages when dealing with the repository, and was soon greeted with the configuration screen, though it still must have found something, because it didn’t ask everything again. I read further and found where they suggested a clean install. Not what I wanted to hear, but it’s probably for the best.

This is where I branched off to work on last week’s topic, where I arranged for a personal master image of Raspberry Pi OS. On taking another look at the install script for Kerberos, I feel like I’m trying to install an engine in a horse and buggy — in theory it could work, but I’m being asked questions about where the intake manifold and the gas tank are going to go. I’m spending more effort dodging the work of setting up static IPs than I would spend just doing the original work.

Kerberos needs to go wait out in the dog house for a while. Who knows? Maybe I’ll revisit it in five years or so and play ball then.

Final Question

While exploring available Kerberos downloads, I happened across a development branch. They warned multiple times against the wrong people using it with decreasingly relevant warnings. They finally skill gated it by giving a set of instructions to keep the hopeless beginners out of trouble. Have you ever run across any skill gates?

Raspberry Pi OS: Review and Clean Install Image

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am going over the new official Raspberry Pi operating system and producing a custom image I can deploy in case I ever need a clean install. Let’s get started!

Raspberry Pi OS Review

BlinkyPie is my Raspberry Pi 3B+ with a Pac-Man ghost case I printed and finished myself (with help). It’s supposed to eventually host an OpenCV-powered feline deterrent system I’m still aiming to deploy some day. Today, I’m starting by installing the new official operating system for all Raspberry Pis, Raspberry Pi OS, on a fresh MicroSD card.

Installation was fraught with several simple mistakes. My proper procedure is to: 1. Have an empty MicroSD card. 2. Download the OS image (over a WIRED connection) and verify it with its hash. 3. Quarantine the image and SD card on a computer I don’t mind rebuilding (no /dev/sdX devices other than the target SD card in an adapter; the internal MicroSD reader is mmcblk0). 4. dd the image to /dev/sdX. 5. Boot the image. 6. If it doesn’t work, lower the block size for the dd command (yes, that saved the project this time).
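Step 2’s hash check can be rehearsed without a real image. In this sketch, a throwaway file stands in for the downloaded image; real releases publish the checksum file alongside the download rather than generating it locally:

```shell
# Stand-ins for the downloaded OS image and its published checksum file.
work="$(mktemp -d)"
echo "pretend OS image" > "$work/raspios.img"
sha256sum "$work/raspios.img" > "$work/raspios.img.sha256"

# The actual verification step: prints "OK" and exits 0 on a match.
sha256sum --check "$work/raspios.img.sha256"
```

Only once the check reports OK does the image go anywhere near dd and a /dev/sdX target.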

I didn’t skimp on resources this time: I got the full desktop image with recommended software. Long-term, I figure it will be easier to maintain. Since I’ll be using this image several times over, I took a day or two to do things like localization settings, a lefty mouse, enabling SSH, and customizing the UI.

I’m looking to move away from using default, non-root accounts. Raspberry Pi OS (I’m just going to call it rPiOS from now on, if that’s okay) and Raspbian before it come with one called ‘pi.’ To change it, I had to first enable SSH, log out any and all sessions for the pi user, and change over the account and home directory names — preferably without logging into the GUI as root and creating a bunch of normal user files.

I poked around rPiOS without an agenda for a bit. There was always the games folder I never paid much attention to before, so I checked that out. Of special interest was the collection of simple Python games. They had no fewer than three Tetris clones: a normal one, one with one-block pieces, and one with five-block pieces, like the original Soviet precursor. The five-block game runs a little fast, so after some time, I realized I’d like to try and slow it down a bit. I also found a “Bookshelf” where you can read up on Pi projects. The OS also came preloaded with a bunch of programming tools. I know it should be obvious, but rPiOS is built for learning. It has just enough there to be plug and play as a browsing machine. I know the Pi 3 chugs a little under watching my church’s livestream on Sabbath mornings, but it should be more than enough computer for anyone without serious computation needs.

A Small, Custom rPiOS Image

I have to admit, this week’s topic was a node for a much larger project I’ll be covering next week. Suffice it to say, I’m in a position where I may need to reinstall rPiOS several times in rapid succession. That is why I’m making an image file.

Once I had tweaked rPiOS mostly to my liking, I shut it down and brought the card over to my Manjaro Pi 4. dd was not happy with me backing up my MicroSD to a 1 TB external hard disk for some reason. Everyone helping me seemed sure it was corruption on the MicroSD card. As much as I respect their advice, I can’t help but be skeptical this time. It’s a new card with low usage so far. It boots fine, and I was trying to copy between two devices on the same USB 3 component. I’ll file this away as a mystery unsolved for the time being, but for whatever reason, it confused BASH to the point where ls wouldn’t work with relative paths until I changed directories.

I eventually used the internal rPiOS card duplicator and made a physical copy over the top of my RasPup install. I tested it, and it booted normally. Moving things back over to the original card, I followed a tip I found in a video and used dd if=/dev/sdX | gzip > imageName.gz to make a compressed copy of my Pi SD directly on itself. I copied it over the network with scp and unzipped my image.
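That pipeline works on any stream of bytes, so it can be demonstrated with a small file standing in for the SD card device; with the real card, the input would be /dev/sdX or /dev/mmcblk0 instead:

```shell
# Stand-in for the SD card contents: 1 MiB of zeros (i.e. "empty space").
work="$(mktemp -d)"
head -c 1048576 /dev/zero > "$work/card.bin"

# The same trick as above: stream through gzip so empty space compresses away.
dd if="$work/card.bin" | gzip > "$work/card.img.gz"

# Round-trip check: the decompressed image matches the original byte for byte.
gunzip -c "$work/card.img.gz" | cmp - "$work/card.bin"
```

Since most of a lightly used card is empty, the compressed image comes out far smaller than the raw device.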

Now, many guides on rPi backups invoke partition tools I’m still too scared to touch, especially on a timeframe. That’s why I downloaded and used PiShrink, a shell script available on GitHub that downsizes your Pi images so you’re not saving empty space in storage — or even worse, transmitting it to a friend. Apparently it’s popular with RetroPie, a Pi distro for emulating old games.

My MicroSD cards are 32 GB each. My final image is down to 9 GB. If I had more time, I might try again without the default desktop background, since I’m planning on never using it, and I think my custom desktop image is a good chunk of the added size.

Final Question:

What useful finds have you made when looking for something similar?

Parts Ordered: Derpy’s Second Makeover

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am shopping for parts to re-refurbish Derpy Chips, an older tower I turned into an Ubuntu box for a while. Let’s get started!

Overview

My computer tower is old, going on nine years now. Some day it will go up for total replacement, but not today. Still, I find myself in need of a second tower for my personal use, something with a little more heft than my laptop can offer.

My father built a computer tower long before I got into Linux. It served as the family's main computer for several years, but it had a habit of crashing with the vague error: “Kernel Power Failure.”

I eventually took over the machine to turn it into a Minecraft server. I swapped out its 2 TB HDD for a much more modest 250 GB SSD, and the crashes stopped. I still named it Derpy Chips, part in endearment, part in reference to its past. It has since fallen back into disuse: it lost its RAM when I swiped it for Button Mash, our new dedicated Minecraft server, and more recently, I grabbed the replacement SSD to install Manjaro on my production machine after its liquid cooling pump started making a horrendous noise (presumably worn out).

That about brings us to today. I would like to move Derpy back into use, but it needs cooling, a hard drive, and RAM. I did some preliminary investigation and estimated a cost of about $250 with minimal thought for compatibility. All I need is a holdover until the next rig.

CPU cooling

The options are almost limitless, and almost all of them are incompatible. Derpy’s case is pretty deep, so I’m not terribly worried about height, but different CPU sockets have slightly different shapes.

While it's entirely possible to hire someone to pour liquid nitrogen on the CPU all day, even a fancy water cooling system would be more reasonable, and a properly functioning liquid cooling system would definitely be quieter, which is something I need. All I can say to that is that I was impressed with my sister's Noctua cooler, so we ordered one from them.

Hard Drive

As stated earlier, Derpy’s first hard drive, a spinning platter HDD, was defective and the problem wasn’t isolated within the warranty period. Lacking a cooling block that wasn’t the size of a teacup, I yoinked its second hard drive, an SSD, for another computer I have yet to finish stabilizing.

Western Digital is a respected brand, and I'm happy with their products thus far. I don't need a full terabyte for a holdover machine, but it would be nice to have one in my production model. We therefore went ahead and bought a 1 TB drive with the plan of migrating the original replacement to it and reworking the smaller SSD again.

RAM

Probably the pickiest of the components I'm looking at today, RAM is the sort of thing I hear you can simply try and see if it works: as long as you aren't using so much force that something physically cracks, you should be fine come your “smoke test.”

The most noticeable factor in RAM is what kind it is. The current standard is DDR4, but the technosystem I’m tending is all on DDR3. Other factors come into play as well, such as how fast the motherboard runs it, the voltage supplied, how much memory per stick there is, and in the case of matched sets, how much memory there is total. When shopping online, many sites will have some sort of filters so you can hopefully find the perfect match for your needs.

One important consideration I wanted to take into account was how Derpy's original four 4 GB sticks of RAM compared against Button Mash's motherboard and CPU. Obviously, they're compatible, but I wanted to know if the RAM was too good for the computer it was in. We ended up looking up the specific parts and found it was a perfect match for max clock speed and overall capacity, leaving me free to explore other options.

We pulled in the original box for Derpy's motherboard and looked up the specs. Turns out it can take up to 32 GB of RAM across four slots, where it previously was fully booked with a total of 16. While shopping, I considered buying either another set of four 4 GB sticks or a set of two 8 GB sticks. Hypothetically speaking, if I expected to rotate a future new tower in, I could move my relatively newer tower over to where Derpy is now being staged, and the possibility of a future upgrade could be left open.

This plan ran into a time wall and was set aside before being fully explored. I learned how CPU generation matters to RAM. Derpy has a 2nd generation i7 while I'm presently on a 3rd generation i5. Such a set of RAM would need to serve both, and while such a product exists, we ended up just grabbing a four-stick set intended for Derpy alone, without making sure it shipped from the US.

Of note, we also learned about RAM latency timings. Long story short: while lower numbers are better here, they aren’t as important. “Latency for the less-complex DDR2 RAM can be lower, but it can’t process data nearly as quickly as a modern DDR4 chip.” [MakeTechEasier (author comment)]

Of course, some motherboards might have a capacity to auto detect the voltage and adapt. All I know is that I’m tired of numbers for now and I’ll be happy when I see everything working smoothly.

Final Question

Have you ever built a computer from components and had something go wrong, even if things technically worked?

A Collection of Raspberry Pi Projects: Volume 1

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am playing with some SD cards I got to try out additional systems for my Raspberry Pi 4. Let’s get started!

The Plan

I would like a computer where I can format drives without having to worry about nuking any drives I can’t easily repair should I get a single keystroke wrong again. I need a quarantine machine.

Earlier this year, I got a Raspberry Pi 4 to serve as a head/firewall for a model supercomputer (still pending). Back then, I found three microSD cards, but only one was good for booting Raspbian, and I ended up with a neat, little Wi-Fi to Ethernet router. I like my reverse wireless router. I don't want to give up my reverse wireless router.

I started shopping for microSD cards, and came across a then-recent Tom’s Hardware article where they tested several brands for use with Raspberry Pis and compared them in different areas. Going off their recommendation, I selected the Silicon Power 3D NAND. I figured I may want more than one, so I got a 5 pack. The next size up was 10, and I’m not quite that avid a Pi user at the present time.

Besides a Quarantine Machine, additional applications include:
- a general purpose operating system
- an actual firewall/supercomputer head
- a media center
- a home-network-wide ad blocker
And it would be good to have a backup of my Quarantine card in case I really goof it.

Manjaro ARM

While I was installing Manjaro on my desktop, I noticed they have a version for ARM processors, such as the Raspberry Pi. They even maintain an image for installation on the Pi. I went with XFCE to preview for an upcoming project.

Installation was a nightmare due to user error. I must have tried three or four times to load Manjaro onto the first partition of a microSD card. Along the way, I found a thread where someone was reporting issues with installing the then-current version, 20.08 (named after the year.month), and people told them to try 20.06. My advice if you're installing any version of Manjaro: don't try to downgrade. I don't remember how I did it, but I got hold of an older image to update later.

Once I went back to the official documentation and saw I was supposed to aim the dd command directly at the drive itself, I got it on the next try using the older image. The interface to finish installing felt unintuitive, leaving me to research keyboard standards. I would not recommend it for anyone new to Linux.
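
The fix, for anyone else making my mistake: the image ships with its own partition table, so dd has to target the whole device node, not a partition on it. The device names and image name below are hypothetical:

```shell
# Wrong: writing the image into an existing partition discards the
# image's own partition table.
#   sudo dd if=Manjaro-ARM.img of=/dev/sdc1 bs=4M
#
# Right: write to the bare device and let the image's table take over.
#   sudo dd if=Manjaro-ARM.img of=/dev/sdc bs=4M status=progress
#
# Quick sanity check: whole-disk names like sdc carry no trailing
# partition number (mmcblk-style names are the exception).
dev="sdc"
verdict=$(case "$dev" in *[0-9]) echo "partition";; *) echo "device";; esac)
echo "$dev looks like a whole $verdict"
```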

Where the installer lacked polish, the XFCE desktop environment made up for it with some nicely preconfigured settings. Perhaps I was a bit harsh on it before. I was especially happy to be rid of the ugly, black lines around the screen present on Raspbian. And of course, once I found the pacman command to update and checked the version, 20.10 had been released.

I went to hone in on this install, reviewing some of my past lessons. A brief search didn't help me set a static IP, but I moved on anyway. SSH was enabled by default, but it gave me an infuriating time confirming the host key fingerprint. I ended up caving so I could move on, but I did learn something about the improved security of a newer standard called Ed25519.
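
For the record, checking a host key fingerprint is less painful than it felt at the time. A sketch: generate a throwaway Ed25519 key locally just to see what a fingerprint looks like, then compare against what the server advertises (the Pi hostname in the comment is hypothetical):

```shell
# Make a throwaway Ed25519 key pair and print its SHA256 fingerprint,
# purely to see the format.
tmpdir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -q -f "$tmpdir/demo_key"
fp=$(ssh-keygen -lf "$tmpdir/demo_key.pub")
echo "$fp"

# Against a real machine, fetch and fingerprint its host key with
# something like (hostname hypothetical):
#   ssh-keyscan -t ed25519 manjaro-pi.local | ssh-keygen -lf -
# then match the output against the fingerprint shown on the Pi's
# own console.
```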

Raspup

Puppy Linux is, in theory, an excellent choice for an expendable Linux install on a Quarantine Machine: it's small, it's enough to get you by as a daily driver if you can stand its slightly offbeat control scheme, and most importantly, it's easy to reinstall. The catch is that it's made for x86.

That's where the people over at Raspup stepped in earlier this year. Installation was much easier than Manjaro's, but that was where the polish ended. It's so new that it doesn't show up on DistroWatch. Their site doesn't have a recognized security certificate and has some oddball domain going on.

As for the operating system itself, I found its lack of Ctrl+Alt+T bringing up a terminal to be the greatest shortcoming in terms of my user experience. It also seemed obsessed with using linked GUI windows for every step of initial setup, and it took way too long to boot up. It did have those black bars around the screen, but at least it shipped a utility to adjust them between reboots. I don't have the patience for that right now.

The most impressive thing about Raspup is its claim to work on any Raspberry Pi version, though the compute module remains untested. I honestly wish this project the best of luck, but at present, I can only recommend this cute, little project if you're bored, want to poke around with something new, and have a spare microSD card for your Pi. [Link to Raspup]

Other Projects

My goal was to also include a media station, but that didn’t install correctly as my research window for this week was closing. I’d also like to see about extracting an IR sensor from a dead piece of hardware, but that project can easily fill its own month of blogs.

While doing my write-up, I considered Tiny Core again, and there appears to be one for the Pi. I may do this one on the sly without reporting on it.

Six-plus microSD cards is a lot to manage for a single Raspberry Pi. The cards from the five pack each came with an adapter, so I borrowed a label maker and applied labels to those.

Closing Thoughts

One of the quirks I noticed with Manjaro on the Pi was that my USB SD card reader was showing up as /dev/sdc. Normally, SD family cards, such as the one inserted directly into the board's slot at the time, have a different designation, so that's something to look into. On the other hand, this discrepancy may be just what I'm after in terms of a safe computer for blasting away with disk-destroying operations. It only took one wrong keystroke last time, and if I don't pursue Tiny/Micro Core Linux again, this setup may have the safety margin I need, where I can disconnect any unneeded drives without opening any cases.

Final Question

What other Pi distros would you like to see reviewed on here?

Family Photo Chest Part 10: SANE Front Ends

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I'm trailblazing the next phase of this project. Let's get started!

For ten months now, I've trickled along with this project, all so I don't burn out on it. And as always, just as I crest one peak, another valley of research awaits me on the other side. Hopefully, I'll get around to scanning the first photo before this project's first birthday.

My goal for this project is preservation. I’m obsessed with not losing any details and getting the cleanest image possible, and even I don’t fully understand what that means yet. In theory, that could mean grabbing the best scanner on the market and cranking it up to max. That’s more or less what happens to old movies when they’re being digitally remastered for use on HD or 4K big screens. But once something has been digitized, the amount of captured information is locked (barring AI upscaling, but that’s a topic for another time), and I want this project to be able to stand up in a generation or two, even if all physical originals are lost.

Early Research

This week's research focused on the one known missing link: the software. I would like a feature to keep front and back scans together, and that probably means either writing my own interface or modifying an existing program. I went to find the source for the scanner software for Linux I otherwise have a modicum of experience with, and I can't find it. The closest I can find is a package for Arch, and I read that that's not a guarantee: such packages can contain precompiled machine code, and editing that would be a bit above my skill level.

I started thinking about the chain of software that accomplishes the task of getting anything scanned. Down at the bottom, just above the operating system, there’s the driver controlling the scanner. Higher up, there’s a middleware layer called SANE. On top of that is a front end in either a GUI or the terminal. If I am to write my own program from scratch, it would be tailored for only this one scanner and be feeding commands to the terminal front end.
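
SANE's stock terminal front end is the scanimage command, so "feeding commands to the terminal front end" would look something like the sketch below. The flags are real scanimage options as far as I know, but the resolution and file names are assumptions, and the calls are guarded so this is a no-op on a machine without SANE:

```shell
# Enumerate scanners, then grab two pages as the front/back pair a
# custom wrapper interface could keep together.
if command -v scanimage >/dev/null 2>&1; then
    scanimage -L                                    # list attached devices
    scanimage --resolution 600 --format=tiff > front.tiff
    scanimage --resolution 600 --format=tiff > back.tiff
    result="scanned front and back"
else
    result="scanimage not installed (it ships with SANE)"
fi
echo "$result"
```

A wrapper script along these lines would be the "tailored for only this one scanner" program described above, with the pairing logic living around the two scanimage calls.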

I dug up a few alternate front ends for SANE with the intention of attempting modification. Options include XSane, a program I gather works but is a little generic, and SwingSane, a project that aims to unlock all options for any compatible scanner but hasn't been maintained in five years.

GIMP

I was a little surprised to hear that GIMP has a plugin for scanning pictures directly in for prompt, user-guided restoration work to correct scratches, dust, fading, or other problems without a potentially over-aggressive fully automatic algorithm giving you “scans that look worse than the original” (Link to citation). As an added bonus, a picture can believably be scanned in as multiple layers and easily saved to file as front and back with no extra modification needed. The same goes for if I want to scan a whole album.

Where Are the Bottlenecks?

Scratch that last detail. My whole robotics blog here has turned into being more about how I'm using Linux to keep aging computers out of the e-waste bin, often with a touch of Frankenstein thrown in. If I want to preserve every little detail I can, down to the grain, these scans WILL pack away the RAM and chew through any remaining swap space, probably crashing any machine I have to throw at them. I'm afraid a more modern machine will need to be specced out.

Such a machine would be built for the task and intended to have a nice, long retirement as a regular gaming rig. I'd like this thing to still be in service ten years from now. As I understand it, we're at the relative beginning of Intel's LGA 1200 socket, so a late-life, second-hand upgrade isn't out of the question, and M.2 is still fairly new relative to how long PATA and SATA lasted. For all I know, I'll be running out its final clock cycles in some kind of cluster or a cloud computing application twenty or thirty years from now.

Final Question

What is the longest project you’ve ever persevered at before any results started showing?

Miscellaneous Bits and Bobs

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I have a bunch of unfinished projects I'd like to write about, but none are really ready, so I'm picking one or two and expanding on them with a mini-update. Let's get started!

NFS

In my last photo archive post, I talked about mounting a file share manually. This is fine and all, but my intention this week was to get it connecting automatically on boot. I hit a bit of a bump when I tried connecting to a share called “Photo Trunk” or something like that. When mounting manually, I would properly escape the space in the file path with a backslash, but the file system table didn't get the memo. Long story short, it needs “040” instead of an escaped space. I don't know why, but it just does. Shortly thereafter, I ran into a permissions barrier I have yet to resolve.

Update:
In a random search I wasn’t expecting to go anywhere, I managed to find that I needed to replace the space with “\040” and not just “040”. I tested it with sudo mount -va. It’s working now.
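
The reason \040 works is nothing fstab-specific beyond where it's documented: it's simply the octal code for the space character. A quick sketch, with a hypothetical fstab entry matching my share's name:

```shell
# Hypothetical /etc/fstab line for an NFS share with a space in its name.
# fstab fields are whitespace-separated, so the space is written as its
# octal escape \040 instead of "\ ":
#
#   server:/volume/Photo\040Trunk  /mnt/phototrunk  nfs  defaults  0 0
#
# printf demonstrates that \040 is just octal for a space (ASCII 0x20):
decoded=$(printf 'Photo\040Trunk')
echo "$decoded"
```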

Manjaro Shakedown

Arch has a reputation of being unstable, but so up to date that anything newer has yet to be compiled. Manjaro Linux aims to shift that balance to be more friendly to people who don't know a compiler from a package manager. I'm also being a tad dangerous –or so I hear– by running the KDE desktop environment on an NVIDIA card, especially with the proprietary drivers.

I've noticed several times while playing modded Minecraft that my desktop environment decides, “You know what? We're just going to crash and start over from the login screen, even if login on boot is disabled. Oh, and we're going to make it look like it might be a power blink and spook Shadow each time it happens.” (Note: I now have login at boot.) I've reached out for help across Discord, but nothing conclusive has shown up. I managed to capture a log file right after a crash, but nobody helping me found anything definitive either. The best guess was that, because the traceback for some problem in the Xorg logs was handled by a library that manages input, these crashes may be related to the random stutter my wireless keyboard is experiencing, though I'm skeptical.

I've had this kind of keyboard for years, and I've always had stuttering problems, regardless of OS. Nevertheless, whenever we looked, it was the only wireless, ergonomic keyboard on the market we could find. I've even experienced these frustrating stutters while writing here, now, yet I've put up with this particular keyboard until I've worn the matte finish on many keys mirror smooth. My N and Numpad_2 keys are totally missing their decals, and several other keys are already unrecognizable.

It's possible my keyboard is crashing the desktop environment. Every time I've had the crash, I've been actively playing, and at least two of those times, I was fighting keyboard/mouse lag. I've had another small glitch where a small, almost square rectangle of pixels along the bottom of my screen goes black for everything but my cursor. Unplugging my monitor and plugging it back in didn't help, but the issue was gone after a reboot.

The more I use Manjaro, the more it feels familiar. Really, the biggest difference from Debian I know about is the repositories inherited from Arch. The terminal acts a little differently in a way I don't like, but I know that if I bothered to spend the time changing it, I could. On the whole, I'm familiarizing myself with both a distribution and a desktop environment. A far-future project may be to assemble my own experience from elements of everything I've seen.

Modded Minecraft Update

This one would never get a post to itself, but I'm after filler. Just tonight, I added a couple mods we were talking about: one to make the ender dragon leave an egg each time, and another to prevent endermen from making off with your grass, dirt, TNT, or other blocks they list as portable. The hardcore modding culture hasn't moved past 1.12.2 yet, and it was 1.13 that introduced the datapacks that really opened up lightly modified gameplay to otherwise strictly vanilla servers. The built-in mob griefing gamerule is great, but it nerfs creepers and ghasts, two mobs that are otherwise harmless when totally ignored. To ambitious players, the endermen's ability to move blocks is more annoying than fun.

The Importance of Help

I've been chugging along regularly for three years now, and I only remember being late on a post by a single day once. I have proven, at least to myself, that I can write regularly. Along the way, I've found communities to solicit help from. At some point, I'd like to get a little more serious about my presentation, because a default WordPress theme and a weekly wall of text wouldn't inspire confidence in me if I were looking at a tech blog. I know I chose this platform because I could get access to the low-level code, but I've only glanced at it once or twice. I don't know any web design, and I can't get excited about learning it. I'm not even sure I have the comments section working correctly!

Final Question

I decided to include the extra formatting in this post on a whim to better separate the discrete partial projects, but I think I might use it in the future. What do you think of it?