Never Buy Junky New Computers

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am seriously thinking about a long-awaited computer upgrade. Let’s get started!

I’ve had a lot of fun learning Linux, and along the way I’ve doubled (or more) the useful life of my workstations. Without the full bulk of Windows, they’ve kept up with my modest needs decently well, if I do say so myself. Through the occasional upgrade, I figure I’ve made the systems under my care last at least one extra generation of computers by my household’s usual standards – maybe two.

But over the past year, I’ve come across the stray project or game… As it stands, working with AI art on an NVIDIA GeForce GTX 970 is like trying to race go-karts on pedal power. Some interesting applications may as well be a NASCAR event. A few I’ve been wise enough not to approach yet. So I asked my father whether a graphics card would be right for me while I save up for a nicer desktop, or whether I should just go for a full workstation now.

Shopping for a GPU

I came up with three tasks I want a new graphics card to handle: AI art, VR, and gaming. My current card got me into AI art, but renders take minutes – too long to hold my interest long-term. I’ve used a Virtual Reality headset exactly once; while it was fun, I’ll be after an open source driver (privacy concerns), probably on a tethered model, which my card likely isn’t up to rendering satisfactorily. Finally, there’s a recent game I wish to bring to Linux for an upcoming blog post without exploring the depths of its “potato mode.”

AI art relies on something called CUDA, which I know very little about except that it’s NVIDIA’s technology, so I’m shopping NVIDIA. The big question is whether I want a $300 card to tide me over until my next big machine, or whether I want to blow around $1,600 on a card I will carry over to the next system. My father pulled up a benchmark by Tom’s Hardware showing that only the top-of-the-market cards are gaming at 4K and 60 fps. And while the bottom of the market may be an upgrade from where I am now, it didn’t look like as massive a leap forward as I took it to be at first glance the night before.

But of course, I’ll get the best experience with a balanced system. PCIe slots may be the same shape across generations, but my motherboard has PCIe generation 3 while the spec has since moved on to generation 6. I’d need a whole new machine to take proper advantage. So, why not look for one?

Shopping for a Full System

I’ve been interested in getting a System76 machine with Linux preinstalled for a while now. It wasn’t long before we were looking at the laptops, as that’s another gap to fill in my present tech loadout. For Autism reasons, I need a heavy laptop. I find the weight comforting. These ultra-portables weighing less than a Frisbee won’t do it for me. This leaves the top of the line “Desktop Replacements.” And would you know it, System76’s top tier laptop is listed as having the kind of graphics card I was looking at.

Takeaway

Growing up, I had the following rule drilled into me regarding computers: don’t buy junk. If you buy the hardware you will need X years down the road, you’ll end up saving money compared to the cost of a parade of budget computers bought in the meantime – and you’ll enjoy the turbo-boosted performance along the way.

Final Question

What factors do you consider when shopping for computer hardware?

I look forward to hearing from you in the comments below or on my Socials!

I Have a Cloud!!!

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am celebrating my Project of the Year. I have Nextcloud installed! Let’s get started!

My NextCloud Experience in Brief

This week, I achieved a major milestone for my homelab – one I’ve been poking at since at least early March of this year, when I noted, “Nextcloud has been a wish list item since I gave up using Google’s ecosystem” [1].

As I learned more about the tech I’m working with, I added specifications: store scanned pictures on high-capacity but slow hard disks while keeping smaller files on speedy solid state – only to learn later that rootless Podman is incompatible with NFS. I studied “Docker Secrets” to learn best practices for password management – only to move the project to an older version of Podman without that feature. One innovation to survive this torrent of course corrections was running the containers in a pod, but even that was shifted around in the switch from Rocky 8 to Debian.

Two major trials kept me occupied for a lot of this time. The first involved getting the containers to talk to each other, which they weren’t willing to do over localhost, but they were when given a mostly equivalent IPv4 loopback address.

The second was the apparent absence of a database despite multiple attempts to debug my startup script. Nextcloud was talking to MariaDB, but it was like MariaDB didn’t have the database specified in the command to create its container. For this, I created an auxiliary MariaDB container in client mode (Thank-you-dev-team-for-this-feature!) and saw enough to make me think there was none. I wasn’t sure though.

One Final Piece

#remove MariaDB’s “dirty” volume
podman volume rm MariaDBVolume
#reset everything
./resetVolumes
./servicestart

There was no huge push this week. I came across an issue on GitHub [2] asking why no database was being created. By starting a bash shell within the MariaDB container, I found files left over from some long-forgotten attempt at starting a database. All I had to do was completely reset the Podman volume instead of pruning empty volumes as I had been doing.
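For reference, this is roughly how I poked around – a shell inside the running container, then a look at the directory where the volume mounts per my create script:

podman exec -it MariaDBContainer bash
ls -la /var/lib/mysql    # typed at the container's prompt; leftover files here
                         # keep the image from initializing a new database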

Future Nextcloud Work

Now that I have the scripts to produce fresh instances, I still have a few work items I want to keep in mind.

I expect to wipe this one clean and create a proper admin account separate from my main username, a habit I want to build when setting up services.

Adjacent to that, I’ll want access on my tablet, which will entail getting the Bitwarden client to cooperate with my home server.

I still want my data on GoldenOakLibry. I’m hoping I can create a virtual disk or two to satisfy Podman, which RedLaptop can in turn relay over NFS to network storage.

Final Question

Have you taken the time to set up your own personal cloud? I look forward to hearing about your setup in the comments below or on my Socials!

Works Cited

[1] Shadow_8472, “I Studied Podman Volumes,” letsbuildroboticswithshadow8472.com, March 6, 2023. [Online]. Available: https://letsbuildroboticswithshadow8472.com/index.php/2023/03/06/i-studied-podman-volumes/. [Accessed Oct. 2, 2023].

[2] rosshettel, thaJeztah, et al., “MYSQL_DATABASE does not create database,” github.com, July 9, 2016. [Online]. Available: https://github.com/MariaDB/mariadb-docker/issues/68. [Accessed Oct. 2, 2023].

How I would Relearn Linux #4: Package Management

Good Morning from my Robotics Lab! This is Shadow_8472 with part four of How I Would Relearn Linux, where I answer the questions I had and offer the tips I wish I knew a few years ago when I started exploring Linux. Let’s get started!

Why Package Managers are Important

Linux software is often distributed in modular units known as packages, be they applications, drivers, or kernel modules. If you’ve ever assembled a modpack for a game, there’s a chance you’ve had to chase down dependencies or address conflicts before you could use it. Now imagine doing that for a full operating system, and you get an idea of what using early Linux was like. A single packaged game might require tens or hundreds of prerequisite packages by the time everything is settled. This problem was quickly automated away with package managers.

How to Use a Package Manager

Specific features vary between package managers, but their primary goal is to install, update, and remove packages, and each keeps a repository list telling it where to fetch software from. Android is Linux… kinda. Its default package manager is the Play Store, whose job is to install, update, and remove packages per user direction while keeping the process as invisible as possible. Unlike on Android, a package manager on Linux is typically invoked through another program called sudo (my preferred mnemonic is “Super User DO,” even though I use the su-dough pronunciation).

When installing a package, a package manager consults its list of repositories. It looks up the requested package there, usually asking for clarification if multiple matches are found. It then does the same for all the listed prerequisites until it has a full list of the missing packages needed to make your desired package run. On user confirmation, it begins downloading and installing.
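As a concrete sketch – assuming a Debian/Ubuntu family system with apt, and using an arbitrary example package – an installation looks something like this:

# Refresh the repository lists, then install a package; apt prints the full
# dependency list it calculated and waits for confirmation before downloading.
sudo apt update
sudo apt install vlc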

Updating software is similar to initial installation. New versions may add features that rely on new or different library packages. A package manager will recalculate the missing packages and notify the user if any installed packages are either missing or no longer needed (this is true for the Debian/Ubuntu family’s package manager, apt; I’m not familiar enough with pacman or dnf to speak for their specific behavior – they might just auto-remove the leftovers and I’d be none the wiser).

Uninstalling packages is similar to updating, only an explicitly installed package is being removed along with its unneeded dependencies. This may also be a necessary step if a conflict arises which the package manager cannot automatically resolve, though additional force may be required to bypass dependency checking.
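Again with apt as the example (a sketch rather than an exact recipe), the update and removal sides look like this:

# Pull down new package lists, then apply available upgrades.
sudo apt update && sudo apt upgrade
# Remove a package, then clean up dependencies nothing else needs anymore.
sudo apt remove vlc
sudo apt autoremove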

Secondary Package Managers

Going back to Android for a moment, it is possible to “sideload” apps by downloading them from outside the Play Store. It’s even possible to download other app stores, such as F-Droid, which specializes in Free and Open Source Software, or the Aurora Store, which uses the Play Store’s repositories with an emphasis on privacy. Similarly, there exist secondary Linux package managers aimed at less-trusted community packages or at packages that bundle more of their own prerequisites. In this way, multiple versions of a program may end up installed simultaneously, which can lead to confusion if a broken copy is left around in the hope that an update will fix it.
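Flatpak is one common example of such a secondary package manager (my pick for illustration – the paragraph above doesn’t name one). A rough sketch:

# Add the Flathub repository once, then install an app from it.
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.mozilla.firefox
# Flatpak apps live alongside any distro-packaged copy of the same program.
flatpak list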

Caution is often advised, though, when considering adding multiple primary package managers. I’ve not seen or looked for a solid explanation, but I can guess it’s so you don’t break your system by installing everything twice and having things call the wrong programs through tangled system links.

Takeaway

Whatever package manager your installed distro uses, you will get more out of your experience if you take the time to learn how to search for packages (a case where you don’t need sudo). When an exact version matters, you can usually extract that information from your package manager. If you cannot find a package, www.pkgs.org is also your friend. Your package may be in the repositories for the next or previous distribution, or in a repository you have to add separately.
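With apt, for instance, a search and version check might look like this (a sketch; substitute your own package manager’s equivalents):

# Search the repositories without needing sudo.
apt search podman
# Show details, including the exact version the repository offers.
apt show podman
# Check the version actually installed.
podman --version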

In another project of mine, I looked up the version of Podman I had on Debian and learned it doesn’t support the Secrets feature I had been using on Rocky Linux; I had to adjust my approach accordingly.

Final Question

What package manager(s) came with your system(s)?

I look forward to hearing from you in the comments below or on my Socials!

What are Relational Databases?

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am targeting Nextcloud for major milestone completion. Let’s get started.

Previously on my Nextcloud saga, I finally managed to run a pod with both Nextcloud and MariaDB in their own containers and get them to talk. The words they exchanged were along the lines of “Access Denied.” I also have a script to add a temporary third container for database command line access.

My next immediate job is learning enough MySQL to diagnose Nextcloud’s refusal to create an admin account. I found a few commands to try and learned that the database I expected to be there on container creation didn’t appear to be. Container logs didn’t report any errors. I need some more background research, even if that’s the only thing I do this week.

Most important things first: what is the relationship between MariaDB and MySQL? I’ve been treating the former as if it were an implementation or distribution of the latter – like the difference between Debian and Linux. But according to MariaDB’s site, it’s a fork that avoids “vendor lock-in” while maintaining protocol compatibility with MySQL [1]. So MySQL help should still work for MariaDB on a “close enough” basis, sort of like how Debian solutions may still work on Ubuntu systems. When available, use a matching guide.

Contrary to what I said in my closing words last time I handled Nextcloud, writing commands in all caps is only a convention – the software is usually a lot more forgiving. SELECT equals sELeCT, which equals select and SelEct, as well as the other 60 possible combinations.

MariaDB is what is called a “relational database.” Fancy phrase to me, but here goes an explanation: data is information. Information can be organized into tables for later retrieval. Zoom out one level, and the tables themselves become data that relates to information in other tables. MariaDB keeps track of these “relations.”
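Here is a minimal sketch of what one such relation could look like, piped into the client from a shell. The database and table names are made up for illustration, and the root password placeholder mirrors my create script:

podman exec -i MariaDBContainer mariadb -u root -p"<gibberish1>" <<'SQL'
CREATE DATABASE IF NOT EXISTS RelationDemo;
USE RelationDemo;
-- One table holds users...
CREATE TABLE users (
    id INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(64)
);
-- ...and a second table relates each file entry back to its owning user.
CREATE TABLE files (
    id INT PRIMARY KEY AUTO_INCREMENT,
    owner_id INT,
    path VARCHAR(255),
    FOREIGN KEY (owner_id) REFERENCES users(id)
);
SQL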

Takeaway

Needless to say, relational databases get messy fast. Considering how my database is meant to be locked up tight within a pod on a server that blocks direct access, a containerized web UI I can expose will do nicely. MariaDB’s website has a lengthy list of graphical clients, but phpMyAdmin was the only one that showed up when I used “find on page” to search for that feature [2]. Importantly, it is also available on Docker Hub (docker.io) as a Docker Official Image, which I can run on Podman.
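A rough sketch of what adding it to the existing pod might look like – the PMA_* variables come from the official image’s documentation, and exposing its web port would still need to be handled when the pod itself is created, so treat this as a starting point rather than a recipe:

# phpMyAdmin joins the pod and reaches MariaDB over the shared loopback.
podman create \
        --pod NextcloudRedLaptop \
        --name PhpMyAdminContainer \
        -e PMA_HOST=127.0.0.1 \
        -e PMA_PORT=3306 \
        docker.io/library/phpmyadmin:latest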

Final Question

As of posting, I’m planning on spending a week or a few each on learning phpMyAdmin and refining my MariaDB understanding. Otherwise, perhaps someone can tell me why Nextcloud isn’t finding the expected database. The following is a command from my pod creation script.

podman create \
        --pod NextcloudRedLaptop \
        --name MariaDBContainer \
        -v MariaDBVolume:/var/lib/mysql:Z \
        -e MARIADB_ROOT_PASSWORD="<gibberish1>" \
        -e MARIADB_DATABASE=NextcloudDB \
        -e MARIADB_USER=nextcloudDbUser \
        -e MARIADB_PASSWORD="Gibberish2" \
        --restart on-failure \
        docker.io/library/mariadb:latest

And here is what I think is the important output from MariaDB:

MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.010 sec)

If I understand what I need correctly, I should have had one show up titled “NextcloudDB” on container creation.

And so, my final question is this: Why doesn’t it work? Am I even looking in the right spot?

If I oversimplified or got something wrong about relational databases, be sure to tell me all about it in the comments!

Works Cited

[1] MariaDB, “MariaDB vs. MySQL The difference between choice and vendor lock-in,” MariaDB, [Online]. Available: https://mariadb.com/database-topics/mariadb-vs-mysql/. [Accessed Sept. 18, 2023].

[2] MariaDB, “Graphical and Enhanced Clients,” MariaDB, [Online]. Available: https://mariadb.com/kb/en/graphical-and-enhanced-clients/. [Accessed Sept. 18, 2023].

Patriot Day, 2023

This morning of September 11 marks the 22nd anniversary of one of the greatest tragedies within living memory for the American people. In 2001, Islamic terrorists hijacked airplanes and crashed them into the Twin Towers in New York, leading to the collapse of both and the loss of many lives. People around the world came out in solidarity with the US, as was fitting. Queen Elizabeth broke with centuries of tradition by having the Star-Spangled Banner played at Buckingham Palace, not just on September 13, 2001, but again on the 20th anniversary of the attacks.

I pitched the idea of a Stable Diffusion image and a short post, and my sister, Tzarina8472, suggested a firefighter saluting a US flag at half mast.

Producing this image required an update and a reboot (something I’d put off for weeks). There were around a couple dozen “solid nope!” attempts before it as I fought with Automatic1111, the Stable Diffusion web UI I am learning. The flag came out backwards and the firefighter has some metal socket where his backbone should be – not to mention his shoulders are all wrong – but the piece evokes the emotional response I set out for, so I’m happy enough with how it turned out and not upset that my graphics card refused to refine it further without optimizations I didn’t have time for.

Thank you to firefighters and other first responders everywhere for your public service.

We never know what tomorrow may bring. We may wake up and it will be a whole new world.

– Leo_8472, on the night of Sept. 10, 2001.

My Nextcloud Sees its Database, but…

Good Morning from my Robotics Lab! This is Shadow_8472, and today is another Nextcloud shot. Let’s get started!

It almost needs no introduction at this point. I’ve lost track of how long I’ve been after a Nextcloud instance – and even of the number of times I’ve complained about it. I’ve demoed it using an integrated database, and I’ve since gotten its container into a Podman pod. My task today is getting the paired MariaDB container to behave.

Let’s start with my first error. Nextcloud is starting for the first time and it’s time to make an admin account.

Failed to connect to the database: An exception occurred in the driver: SQLSTATE[HY000] [2002] No such file or directory

The container has been launched with flags providing environment variables pointing Nextcloud at the database and specifying the username/password to get in. I found someone hitting the exact same problem four years ago on Stack Overflow, and if I understand user2880156’s explanation, “localhost” is not the same as spelling out a loopback IP address like 127.0.0.1 [1]. While some networked applications don’t care how they interact as server and client on the same computer [namespace?], Nextcloud and MariaDB are not such a pair. As best I can tell, the MySQL-style client treats “localhost” as a request for a local Unix socket file (hence “No such file or directory”), while a numeric address forces a normal TCP connection – but I don’t understand it much beyond that. What matters to me now is that replacing “localhost” with “127.0.0.1” got me to my next error.

MySQL username and/or password not valid You need to enter details of an existing account.
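For reference, the database-related flags on the Nextcloud container would look something like this – a sketch rather than my exact script, with the MYSQL_* variable names taken from the Nextcloud image’s documentation:

# Point Nextcloud at the database over the pod's IPv4 loopback instead of
# "localhost" so the connection goes over TCP.
podman create \
        --pod NextcloudRedLaptop \
        --name NextcloudContainer \
        -e MYSQL_HOST=127.0.0.1 \
        -e MYSQL_DATABASE=NextcloudDB \
        -e MYSQL_USER=nextcloudDbUser \
        -e MYSQL_PASSWORD="Gibberish2" \
        docker.io/library/nextcloud:latest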

I’ve checked my environment variables more times than I can count. The required usernames and passwords were copy-pasted between the respective environment variables. The best lead I have is that the container may be bugged. People online have reported success with manually creating the full permissions/non-root account the database container is supposed to create on initial creation.

This insight led me to seek a manual fix from within the pod – specifically from a MariaDB command line as mentioned in the container’s documentation. After trying several different ways to modify a terminal command to run with Cockpit, I gave up, re-added a couple flags I was avoiding, and got the interface I was after rendered to both bash and Cockpit. I additionally split the command back into a separate script.

The MariaDB command line is unlike any I’ve used before. My first impressions are that commands are in all caps and variables are lowercase. My final achievement before running out of time was to extract a list of user names (root, healthCheck, and databaseUser), which hosts they may log in from (variations on locations within the pod), and a partial hexadecimal readout of their password hashes.
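For the record, getting in looked roughly like this, along with the kind of statements I would expect for hand-creating the account the container should have made. Names and passwords mirror the create script’s placeholders; treat the whole thing as an untested sketch:

# An interactive MariaDB client inside the running container (-it provides
# the terminal that Cockpit kept fighting me over).
podman exec -it MariaDBContainer mariadb -u root -p

# Then, at the MariaDB [(none)]> prompt:
#   SELECT User, Host FROM mysql.user;                  -- list accounts
#   CREATE DATABASE IF NOT EXISTS NextcloudDB;
#   CREATE USER 'nextcloudDbUser'@'%' IDENTIFIED BY 'Gibberish2';
#   GRANT ALL PRIVILEGES ON NextcloudDB.* TO 'nextcloudDbUser'@'%';
#   FLUSH PRIVILEGES;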

Takeaway

Solving localhost vs. 127.0.0.1 has given me insight into Cockpit’s curious behavior when remoting into itself with localhost. Command line access to the database from within the pod is also huge.

Final Question

I’m not taking bets, and I’m afraid to ask, but what additional roadblocks should I expect?

Works Cited

[1] akrea, et al., “nextcloud and mariadb (both) on docker: SQLSTATE[HY000] [2002] No such file or directory,” stackoverflow.com, Mar. 10, 2019 – Mar. 22, 2023. [Online]. Available: https://stackoverflow.com/questions/55088828/nextcloud-and-mariadb-both-on-docker-sqlstatehy000-2002-no-such-file-or-d. [Accessed Sept. 4, 2023].

Happy Birthday Stable Diffusion!

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am spending a week with Stable Diffusion to improve my skills at it. Let’s get started!

The science of AI art goes back to around the time complete CPUs were first integrated onto a single chip in the late ’60s/early ’70s. At least a couple of waves of AI craze have come and gone since, but on August 22, 2022, Stable Diffusion was released as free and open source software.

In the year since, Stable Diffusion has proven to be quite the disruptive technology. I’ve never had the cash to commission an online artist, but with a little effort, a decent amount of patience, and only an ounce of experience, I’ve gotten subjectively better results than commissioned works posted by low-end digital artists. I feel sorry for the people losing their dream jobs to machines, but at the same time this is a frontier I can have fun exploring.

One Week of Study

I’m setting myself a goal of spending two hours dedicated to learning Stable Diffusion every day this week. We’ll see what happens.

Monday

We won’t talk about what didn’t happen on Monday.

Tuesday

I finally started researching this topic after midnight. I started up Easy Diffusion, an intuitive web UI for Stable Diffusion, and generated a number of images with a project for my sister in mind.

I ended up looking up tips and tutorials. Looks like the hot-shot web UI these days is Automatic1111. It has more options, but is proportionally harder to use. I might try it later in the week. Otherwise, most of my time actually working today was writing the introduction.

Wednesday

Easy Diffusion is definitely the way to go if all you’re looking to do is goof around, because that is exactly what I did. So far as I can tell, my graphics card sits at the exact bottom of what can run this. I’m finding it useful to generate smaller images for faster feedback while learning to prompt. Conclusion: img2img has a tendency to muddle things.

Still, the draw of potentially more powerful techniques is calling. I found a piece of software called Stability Matrix, which supports a number of web UI’s – including Automatic1111, which every Stable Diffusion tutorial out there tends to go after. I ran into trouble with its integrated Python while setting it up (portable, in the end). I’m hoping I can replace it with a later version tomorrow.

Thursday

I switched approaches from the night before and did an online search for my error:

error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory

Multiple results pointed to people trying Python projects on Arch family systems like the one I’m on. One source from December 2022 recommended a multi-step process involving the AUR. I figured rifling through the project’s GitHub issues was worth a shot – to report it, if nothing else. I searched for ‘libcrypt.so.1’, and the fix was to install libxcrypt-compat, which I found in the more trusted pacman repositories [1].

AUR: Arch User Repository
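In command form, that fix amounts to something like this on an Arch family system:

# libxcrypt-compat provides the legacy libcrypt.so.1 the bundled Python wanted.
sudo pacman -S libxcrypt-compat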

I installed Automatic1111 using Stability Matrix and loaded it up. My first impression when compared to Easy Diffusion: Wall of controls. Easy is easy in both the setup AND the relatively intuitive control scheme, but it seemingly doesn’t support a lot of the tools I’ve seen and want to learn.

Per tradition, I made a photo of an astronaut riding a horse. It was a flop, but I got an image nonetheless. Its immediate follow-up didn’t finish: I told it to fix faces, and I ran out of vRAM on my graphics card (to be fair, I was far from having everything else closed).

Sabbath starts tomorrow, and I’ve been writing these mostly late at night. I can tell I’m not likely to meet my time goal of a couple hours every day, but I feel getting to this step is a major accomplishment. Word count says 700+ words, so I could end it here and feel fine about it. I’ll see what happens. I want to find the control that tells it my graphics card is barely up to this stuff.

Friday

Time to start optimizing! For context, I’m on an NVIDIA graphics card with 4GB of vRAM, which is enough to get a feel for the software if you have a minute or two of patience per image, but having more would be prudent. After trying a couple of online videos, I found that AUTOMATIC1111’s GitHub has a list of optimizations [2] I’ll be adding as --flags to the COMMANDLINE_ARGS variable in my start script (a sketch of that line follows the flag notes below). I don’t have time this evening for a full test, but perhaps tomorrow night or Sunday I can do some benchmarking.

vRAM: Video RAM (Random Access Memory) *So glad to have finally looked this one up!*

xformers

For NVIDIA cards, there is a library called xformers. It speeds up image generation and lowers vRAM usage, but at the cost of consistency between runs, which may not be a bad thing depending on the situation.

opt-split-attention/opt-sub-quad-attention/opt-split-attention-v1

A “black magic” optimization that should be automatically handled. I’ll be selecting one via the webUI, though.

medvram/lowvram

This optimization breaks up the model to accommodate lesser graphics cards. The smaller the pieces, though, the more time it will spend swapping them in and out. Side note: I believe it’s MEDvram as in MEDium, as opposed to the naive pronunciation I heard, MEDvram as in MEDical.

opt-channelslast

An experimental optimization; whether it’s worth it is literally unknown at this time. I’m skipping it.
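Put together, a minimal sketch of the resulting line in my start script might look like this – assuming the standard webui-user.sh launcher, and with the caveat that which flags actually suit a 4GB card is exactly what the benchmarking is for:

# Flags handed to the AUTOMATIC1111 web UI at launch.
export COMMANDLINE_ARGS="--xformers --medvram --opt-split-attention"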

Saturday Night

I took the day off.

Sunday

I joined my father on a shopping trip, and we ran out of gas at a car wash. By the time I sat down to work on Stable Diffusion, I wasn’t up to much more than an unguided self-tour of the settings. I don’t know what most of the categories are supposed to do! I’ll look each one up in time.

Monday Morning

As usual in recent months, I spend a while writing the Takeaway and Final Question, dressing up the citations, and copying everything out of LibreOffice and into WordPress for publication at noon.

Takeaway

Progress! It might not be what I expected this week, but I’m still satisfied that I have something to show off. My next step is to get back to the same place I was with Easy Diffusion before looking up the toys I came to Automatic1111 for.

As one final note, this week is also the anniversary of this blog. It caused a bit of a delay in getting this post scheduled by noon, but that would make it the third instance I can remember of a late post in twice as many years. I feel bad about it, but at the same time, it’s still a decent track record.

Final Question

Do you have a favorite interface for using Stable Diffusion?

Works Cited

[1] PresTrembleyIIIEsq, et al., “SD.Next / ComfyUI Install: Unexpected Error #54,” github.com, July 30, 2023. [Online]. Available: https://github.com/LykosAI/StabilityMatrix/issues/54. [Accessed Aug. 8, 2023].

[2] AUTOMATIC1111, “Optimizations,” github.com, August 2023. [Online]. Available: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Optimizations. [Accessed Aug. 8, 2023].

Browser Tabs vs. Bookmarks

Good Morning from my Robotics Lab! This is Shadow_8472 with a side project of the week. Let’s get started!

When you’ve got RAM to spare, tabs and bookmarks seem almost interchangeable.

A few favorites kept open long-term turn into forgotten research left open, with a few really good memes mixed in. Let’s just agree that a hundred or more tabs constitutes a problem I’ve been shirking for a while.

I’m working on just my primary daily driver, my upstairs workstation running EndeavourOS. My Internet tab organization on this machine uses three windows: one for ‘fun’, one for ‘blog’, and a ‘???’ one because I sometimes need two pages at once – such as an online game and a special calculator for it.

As with any other downsizing project, I used three categories: Close, Bookmark, and Recycle. Some tabs were obviously irrelevant, so I weeded them out. Most were saved as a bookmark organized by topic. Other tabs I wasn’t sure about or didn’t have a category, so I left them open for a while.

In total, I created categories for around 25 blog projects in phases ranging from concept to needing follow up or maintenance. These will need a second pass as many tabs only got a few seconds’ judgment call and in several cases, I made a “close enough” match.

Other finds of interest included memes I stashed away separately, duplicates I closed once I noticed they were already bookmarked, and half a dozen web interfaces from my homelab alone that I found and consolidated.

Takeaway

Heavy-duty tab management is something I should be doing more often than once every few years – or once per operating system, as has been the case on this machine. I managed to get my three browser windows down to where two don’t have to scroll and the “fun” window only spans two screen widths.

Final Question

How do you organize your online browsing habits?

What I do with Trash Computers

Good Morning from my Robotics Lab! This is Shadow_8472 and today I am exploring a computer my father found. Let’s get started!

Taking Inventory

While attending a work bee at my church’s school, my father was throwing something away in the dumpster when he noticed an old computer with the hard drive still inside. He took it home for me to look at, with the stipulation that I’d need to send it along within a week.

The computer in question didn’t promise much on the outside by today’s standards: 1GB of memory, 250GB of storage, and an OEM sticker for Windows XP Media Center Edition 2005. Least promising of all was the sunken power socket. Two optical drives are present – one a LightScribe CD burner and the other a DVD reader. It has no interesting PCI expansion cards. The front panel is down to two of its original six plastic clips holding it on and is covered in several unsightly stickers. The rest of the case has some scratches, but the side panel opens (and closes) beautifully.

Powering Up

The first thing to do with an unknown system is to try booting it. My first instinct was to locate and install a backup power supply – and no thank you, Dell, for the non-standard connectors in our stockpile.

A previous project of mine had left me a free power supply. I confirmed its compatibility (the only difference being 600 watts vs. 300), swapped it out, screwed it in, and connected it up to all the components – until I got to the dedicated CPU power socket. The old motherboard adheres to a standard with 2×2 pins, while the newer supply’s connector has 2×3. While the connector itself might have fit in terms of power and pin shape, a stray capacitor blocked this approach.

As noted before, the original power supply’s socket was sunken. Its plastic “screw wings” are broken, and there’s nothing good inside to brace it against, as there was for my red laptop. Now, this next part is generally inadvisable, but my father grabbed a spare power cord and we carefully opened up the power supply. The broken part would only need a few solder points to work again. As a proof of concept, we connected the cord back up, and I re-installed the original supply.

The computer booted into an admin account on Windows 8.1.

Digital Archeology

First of all, I noticed that the system clock (time and date included) was only around half an hour slow. Also: props to whoever cleaned off this computer. The only clues to its history were in the system logs and the product ID keys for Internet Explorer and Windows.

This computer appears to have had a service life starting some time around 2004/2005 (OEM sticker). It was upgraded – presumably to Windows 7 – around 2013 (IE key), after which it was used regularly until being upgraded to Windows 8.1 in January 2018. That upgrade must not have gone so well, as it was only booted a couple of times afterwards, for around four days total in May and July 2018, before my activity in 2023 showed up.

At this point, I easily could have run some more invasive file recovery program as a demonstration of why you shouldn’t dispose of your computers improperly, but I decided against it before I even began work on it.

Installing Linux [Hard Mode]

I grabbed the keys and loaded the computer into its BIOS to point it at my trusty Ventoy USB. Only, the BIOS this thing runs is so old it only supports booting from the internal hard drive and CD. On a whim, I located my GRUB disc and force-fed it to the machine after opening the DVD drive by sticking a dulled safety pin up its manual eject hole. From there, I was able to load Ventoy, and from Ventoy, I was able to attempt loading one of my images.

This process took several minutes where it should have taken seconds. I had many failed attempts (one casualty of which was a Debian installation on a USB stick I repurposed into installation media), but I eventually managed to load Bodhi Linux 7 – specifically a late-cycle release candidate. For whatever reason – I was never able to figure it out – I had to boot using GRUB2 mode as opposed to “normal mode” from Ventoy. My guess as I write this is that it has something to do with the BIOS not supporting USB booting.

The live experience wasn’t impressive. I chose to do a split-partition install simply because I’d never done one before. Once installed, it booted Linux and ran poorly, though better than it had over USB 2.0. I installed Firefox with the intention of the machine serving as a backup again, but is such a system even needed?

Disposal

Technology marches on. While 10-15 year-old desktops built near state-of-the-art quality may remain relevant as adequate get-me-online machines with a new hard drive, a graphics card, and progressively slimmer versions of Linux, there comes a time when an almost 20-year-old budget-crunch system struggles to land even a kiosk job – that is, if it’s worth the owner’s time to find a buyer.

For this tower, it is time to send it on its way along the Great Material Continuum – in this case, to a local e-waste drop-off location. My father and I spent some time sorting our tech stockpile and labeling things as broken, working, or unknown. We stripped the systems we were sending off for what RAM they had remaining, a couple of CPUs, a bunch of computer screws, and the hard disk from the star of our goodbye party.

An hour or two before we left, I realized I’d lost my de-pointed safety pin from earlier, so I had to partially disassemble the tower to extract my boot CD. I ended up prying open the back of the case and tilting the unit around until my disc fell out the back.

Takeaway

I don’t know what secrets this computer might have had on its hard drive before I started poking at it. Perhaps nothing. It was certainly tidied up beyond what a casual inspection would reveal before it found its way to me, but the criminally curious might have applied more powerful recovery tools in the hopes of finding some personal information to steal. This is why some computer recyclers will destroy your hard drive with a drill press while you watch. Better still would be scrambling the hard drive with random bits a few times beforehand.
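For the curious, a hypothetical example of that scrambling – the device name /dev/sdX is a placeholder, and pointing this at the wrong drive will happily destroy it:

# Overwrite the entire disk with random data, three passes, showing progress.
sudo shred -v -n 3 /dev/sdX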

Final Question

Would you have had a better re-use for the computer I worked on this week?

I look forward to hearing from you in the comments below or on my Socials.

How I would Relearn Linux #3: Terminal Navigation

Good Morning from my Robotics Lab! This is Shadow_8472 with my third installment of How I would Relearn Linux, a series where I pick an entry-level skill I would want to re-learn first if I were to begin my Linux journey over again. The terminal is the heart of the Linux admin experience. Let’s get started!

Command Line Interface (CLI)

The most familiar paradigm of using a computer is the GUI, or Graphical User Interface. Before the GUI, computers more closely resembled increasingly fancy typewriters, writing from left to right, top to bottom. Just as GUIs are still here after years of VR on the market, so too has the Command Line Interface (CLI) endured. Modern terminal emulator programs are wildly customizable, but that is beyond the scope of this post as well as my capabilities.

Basic Navigation

I’ll assume you have experience with a GUI file manager. The Linux terminal can do everything those do, but instead of hitting buttons or double clicking folders, you use commands to jump around to directories. My experience is with a popular default shell known as bash.

The Command Prompt

Most beginner-level Linux distributions’ terminals display a prompt like this:

<username>@<hostname>:<directory>$ 

There may be variations, like different colors or square brackets. Prompts are heavily customizable, and distribution authors often do customize them. The point of the prompt is to display who is logged in to what machine and where they are working. For example:

[shadow8472@ButtonMash ~]$ ▯

In this case, username shadow8472 is logged in to ButtonMash. The ~ is a shortcut for the current user’s home directory. The hollowed out cursor means I clicked out of my terminal emulator to another window.

File Paths

Similar to other operating systems, Linux has a file system resembling the trunk, limbs, branches, etc. of a tree. Starting at the root directory, /, (often simply called “root”) absolute file paths list directories separated by additional /’s.

/home/shadow8472/Downloads/Games/TicTakToeStrat.sh

Relative paths start at the present working directory and work from there. So, if I were in my Downloads directory, the following path would point to the same file:

Games/TicTakToeStrat.sh

A leading . (period) in a file name means the file is hidden. A lone . points to the present working directory, and a pair of periods .. points to the parent directory. These are mostly used when working with a file in the present working directory or moving the present working directory up one level, respectively, though if so inclined, a user can use them as part of a larger path:

/home/././shadow8472/Downloads/./Games/../Games/../../Downloads/Games/./TicTakToeStrat.sh

Spaces in file names cause trouble because the space character separates a command’s arguments on the command line. To get around this, the \ (backslash) may be used to escape the space, as in “\ ”, or the entire file path may be placed in quotation marks. I find it much simpler to start new words with capital letters instead.

Commands

Linux commands are special system programs accessible from anywhere. As a general rule, they follow the syntax (pattern):

$ <name> -<flags> <additional parameters>

If everything before the $ is who and where you are, everything after is what you do. I like to imagine myself walking around the file system as though it were a maze of rooms. More practically, pressing tab will complete a command or file name (or list the possibilities if you press tab multiple times).

Flag and parameter syntax will be specific to an individual command/program. In general, flags control the mode a program operates in; -h or --help is a common flag to quickly access a brief program description, acceptable syntax, other flags, and whatever the programmer thinks is most important for users to know at a glance. Parameters are more about data – especially file paths. And if you weren’t confused enough, flags can have parameters of their own.

Point of clarification: with the exception of Java, multi-letter flags use two hyphens (--help); WordPress’s automatic conversion to a dash is wrong.

ls (LiSt the contents of a directory)
cd <file path> (Change Directory)

As mentioned above, I imagine myself walking around the file system as though it were a maze of rooms. I use ls to “Look Surroundings.” The command cd steps from one room to another – either by naming a directory or by using .. to take one step backwards, toward the root. While not a command per se, I think of tab completion of directory names as feeling around with my hands; three characters before pressing tab is usually enough to identify a name, or at least save a lot of time and mental energy typing and spell-checking an exact one.

cd -

I properly learned this trick specifically for this week’s post. It returns you to the previous present working directory. I’ve known about something like this for a while and I intend to use it more when “teleporting.”
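Here is a short, made-up session tying those together (directory names are examples only):

[shadow8472@ButtonMash ~]$ ls
Desktop  Documents  Downloads  Pictures
[shadow8472@ButtonMash ~]$ cd Downloads/Games
[shadow8472@ButtonMash Games]$ cd ..
[shadow8472@ButtonMash Downloads]$ cd -
/home/shadow8472/Downloads/Games
[shadow8472@ButtonMash Games]$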

pwd (Displays absolute path of Present Working Directory)
whoami (Who Am I? outputs who you are working as)

These commands may seem redundant with the prompt, but some bash prompts reduce clutter by omitting the path leading up to the present working directory, which can get confusing when working around multiple instances of a program. whoami exists because some prompts show nothing before the dollar sign, $.

man <program name> (MANual)

The man command brings up the “man page[s]” for a program, detailing exactly how to use it when -h or --help aren’t enough. If a program still eludes you, do an Internet search and hope your results aren’t just the man page all over again.

Takeaway

The Linux command line cannot be done justice in a single blog post. Manipulating it effectively can be considered a form of art at higher skill levels – one best practiced on a VM or spare machine you can afford to break. Shell scripts effectively put the computer at its own terminal, and there exist command-line programs written specifically for that situation. I’m planning a follow-up where I go over file manipulation. In the meantime, feel free to look up a cheat sheet and keep it somewhere safe.
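As a toy sketch of that scripting idea (hypothetical paths, nothing fancy):

#!/usr/bin/env bash
# A script is just commands "typed" at the computer's own terminal.
cd ~/Downloads || exit 1   # bail out if the directory doesn't exist
echo "Now in: $(pwd)"
ls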

Final Question

Are you familiar with the terminal? What assumptions would you say are safe to make about people who have only ever used a GUI file manager?

I look forward to hearing about them in the comments below or on my Socials.