Family Photo Chest Part 14.2: Prepared to Scan At Last

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am bringing my workstations online. Let’s get started!

Calibration

Last week, I detailed a workstation I assembled using found parts. I’ve gotten used to the trackball mouse, but click and drag is next to useless.

Also, just now while writing, I was looking for a place to put my tablet – between two scanner workstations, the desk is full – and thought it would be a good idea to slap it on this monitor from a more cubic era. Moments later, I switched to dark mode and noticed it messing with the colors of some icons in the upper left part of my screen. Turns out it was magnets in the tablet. Degaussing fixed it, but when I removed the tablet, the colors went off again. I have since degaussed again and everything is normal.

Workflow And Training

I’d rather look back wishing I had done a better job than look back wishing I had done any job at all. For this reason, I am shelving a lot of the research and deliberation I got myself lost in. Perhaps in a few years, I can redo the master digital archive with better-supported equipment. In the meantime, I’ve selected a resolution that should be good enough to enlarge from, yet small enough to store.

I wrote a set of instructions detailing my prototype workflow and started training family members in how to operate the scanner. As I went, I noted where they got confused and adjusted the instructions accordingly.

My first set of instructions covers how to start a work session: make sure the scanner is on, start XSANE, set the resolution correctly, and check that the persistent settings are correct.

Structure in the analog archive I’m digitizing is sporadic, but when it’s present, I’d like to respect it. Work will be divided into batches. My instructions detail how to name each batch and to make a metadata file describing the batch and the container it was found in, like B&W vs color and print sizes.

Finally, a third set of instructions is all about individual scans. Line things up, don’t go over the scan area, get a preview, and don’t bother with zooming in on that preview because there’s no sideways scrolling and no way to quickly alternate between zooming in and out. A final inspection checks for dust or hair/fur, and I have a little something in there for when pictures have notes on the back.

Challenges

I’m scanning to TIFF files, but I want the ability to include the backs of prints too. Ideally, I would just add .front or .back to the filename, but XSANE’s automatic numbering is stubborn. It wants a four (or more) digit number at the end of the file name, and it refuses to recognize multiple file extensions. I’ve resorted to manually setting the file type to TIFF and adding the front/back extensions myself.
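Since XSANE won’t cooperate, the naming has to happen by hand. Here’s a minimal sketch of a scheme that keeps fronts and backs paired; the batch name and numbering format are my own assumptions, not anything XSANE produces:

```shell
# Build matching front/back filenames by hand; XSANE's auto-numbering
# won't tolerate a second "extension" like .front or .back.
batch="B012"   # hypothetical batch name
n=37           # hypothetical photo number within the batch
front=$(printf '%s-%04d.front.tiff' "$batch" "$n")
back=$(printf '%s-%04d.back.tiff' "$batch" "$n")
echo "$front"   # B012-0037.front.tiff
echo "$back"    # B012-0037.back.tiff
```

Keeping the counter in the filename proper (before the first dot) also keeps the pairs sorted together in a file listing.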

XSANE has a preview feature. I am using it to select occupied parts of the scan bed to reduce scanning time. But that doesn’t work without click and drag. I’ve since added a wireless USB mouse, and the trackball is good for 2D scrolling.

Speaking of scrolling in all four directions, while setting up my laptop for the same procedure, I had to get into the touchpad settings. It was something I had found a little annoying, but it was an easy fix when I bothered to look for it.

Final Question

Have you ever needed to write instructions for others to follow? How much did you need to change, even though you thought you had everything figured out ahead of time?

Family Photo Chest Part 14.1: A Workstation From the Parts I Have at Home

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am setting up my second scanner workstation. Let’s get started!

Prologue

As seems to be my new tradition when getting/assembling a new workstation, I am recording this week’s observations from said machine. It’s been interesting, especially since I don’t have a budget for new hardware.

The Tower

Once upon a time, I set up a Minecraft server for my family to play on. I named it ButtonMash. I did a lot of maintenance to keep it running, but eventually, we stopped playing on our own family-owned servers. Ever since, ButtonMash has been running in the background, collecting dust.

With a renewed effort towards finally starting production on scanning, ButtonMash is the perfect fit for a workstation. It’s presently running on 16 GB of RAM, and while I thought it might benefit from an old graphics card I found lying around, a quick test proved it to be a downgrade from integrated.

The Hard Drive

A long time ago, I talked my laptop into scanning without sudo. So long ago, in fact, that it was before I cloned Debian to run internally, so that install still lives on an external SSD over USB 3.0.

ButtonMash’s motherboard lacks a USB 3 port, so we ordered a card specially for it. Unfortunately, it straight up refuses to boot from either of the card’s USB slots. No configuration works. lspci will detect the card once booted, but not even my GRUB disk can find my Debian external drive. I could get lost trying to troubleshoot, but for now, the bootable USB 2 slots must suffice.

The Keyboard and Mouse

There must be about seven wireless keyboards hovering about my house, each of which is orphaned from its dongle and paired mouse. Each of them always seems to have some sort of buffering problem.

On the other hand, I managed to spot a keyboard from when I was little hanging out in our treasure trove of old tech stored in the garage. It’s in relatively good shape, most likely because it’s from the days before USB became king. Fortunately, ButtonMash is old enough to have not one, but two PS/2 ports: one for keyboard, one for mouse. Unfortunately, the integrated tickle pad uses a different standard I don’t have the proper port for.

My greatest gripe is that it’s a little hard to write on because it has the pipe and backslash key to the right of the Right Shift key. My computing habits have evolved since using this keyboard. When selecting text, I move my right hand so my thumb presses Ctrl and Shift, and my other fingers operate the arrow keys. On this model, I have to angle my hand at awkward angles to not hit a key I never use outside the command line.

The mouse was a little harder to find, but I managed to pull up a trackball with three less-than-perfectly-reliable buttons. It too is on a PS/2 connector, so it wasn’t designed for “plug and play.” However, when I tested it for myself, ButtonMash’s motherboard was able to pick it back up after a replug without rebooting.

The trackball is different to use. I’m a lefty, and while I can normally manipulate the thumb buttons with my ring finger, I don’t have the dexterity to spin a sphere in the same place with my pinkie. I’d also have to contend with more extreme button placement, in addition to said buttons being unreliable. I’m finding I get a more consistent click if I press near the base of the button, though click and drag requires a particularly good streak of luck. Either way, it’s pretty fun to spin the ball super fast and watch the pointer not know what to make of it until things slow down.

The Display

ButtonMash has been running headless, only sharing a monitor with my father’s tower when needed. However, I found another treasure from the garage from when I was little: an old CRT monitor I remember being amazed with because of how flat the surface was. It’s just an extra panel of glass in front of the tube.

The long-standing VGA standard means ButtonMash has no problem outputting to this old monitor. Honestly, it has no business having lasted as long as it has.

I started the monitor for the first time in what must have been around 10 to 15 years, give or take, and cringed, thinking it was burning itself out as it crackled to life. It’s probably just thermal expansion. The picture still looks good, though there is noticeable flickering in large patches of white. I calibrated the screen so I’m using the full usable area, but I noticed it drifting a little after a while. My particular monitor has a degauss option hidden in its menus, and in addition to making everything go funny for a few seconds, it removes latent magnetic interference that builds up over time. It also produces a satisfying TWANG! After a few days of regular use, the picture appears to have mostly stabilized.

One thing I was sure to enable is a screensaver. This art form isn’t as important with the Liquid Crystal Displays permeating the market these days, but on old Cathode Ray Tubes, they would prevent a static image from burning itself into the surface of the vacuum tube if left alone too long. The pickings were a little slim on Debian, so I installed the xscreensaver package and picked one called Galaxy.

Using this older aspect ratio brings to light a few design decisions I’ve called into question on a few older pieces of software, namely XSANE, GIMP, and the default panel settings on GNOME2/MATE. On a wide screen, XSANE feels needlessly skinny and lost. GIMP looks silly with its tools detached from the main program body to the point where it doesn’t look like a cohesive program. And panels in MATE flat out waste space when hogging both the top and bottom.

But with a narrower aspect ratio, I find I’m only paying attention to the middle strip of the screen. XSANE windows can be rearranged with the important one taking up roughly half the screen and the others put where needed on the other. While I’ve yet to try GIMP in fragmented mode, I expect it too makes more sense this way. I even found myself restoring MATE’s upper panel to alleviate the icon traffic jam my one panel at the bottom was suffering from.

The Internet Connection

The Internet connection was about the one part I had already figured out months ago. I grabbed my Pi 400 router from where it had been serving a more permanent workstation. I had already copied the card and assigned it a different IP address.

I tried going straight from Pi to tower without a switch in between. I wasn’t expecting it to work, since a direct link like that traditionally needed a crossover cable, but it would seem one port or the other sorted the arrangement out automatically (a feature called Auto MDI-X).

Takeaway

The nineties… I can hear them calling… but no, they can’t have their workstation back because I have enough of an anchor in the twenties that spontaneous, meme-powered time travel will not be happening in the foreseeable future. I’m just excited to finally see the last bits of hardware fitting into place after over a year of planning.

Final Question

Have you ever amused yourself by spinning a trackball faster than the computer knew what to do with it?

Family Photo Chest Part 14: The Tracks are Built, Bring on the Locomotive

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am technically ready to start my first batch of scanning photos. Let’s get started!

Early Start

I am tired of this project dragging on seemingly forever. Whatever I get done, I want it working this week. My plan has been to scan directly into the DivideScannedImages script for GIMP, and for that I need the XSANE plugin (XSANE being the X GUI frontend for SANE, Scanner Access Now Easy). Every version I found was ancient and obsolete. It turns out installing plain XSANE includes its own GIMP plugin, as confirmed by running xsane -v and looking for a line about GIMP. Just know that if you’re trying to check the version over SSH, it really wants an X server: export DISPLAY=:0 worked for me.
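For anyone else checking over SSH, the whole dance is just two lines (this assumes an X server is already running on the machine’s display :0):

```shell
export DISPLAY=:0   # point the SSH session at the machine's local X server
xsane -v            # look for a line mentioning GIMP in the output
```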

GIMP has a powerful scripting language built in. With it, you can automate almost anything, albeit with a little difficulty. You can even use it to script events when launching GIMP with the -b flag (b as in batch). I took a look at it, and it doesn’t look that bad to learn. It’s heavy on the parentheses, which makes sense: the language, Script-Fu, is based on TinyScheme, a dialect of Lisp.

I got as far as calling the XSANE plugin at launch from within the DivideScannedImages script. I was a little short on time to struggle through getting it just right, so I reached out for help on a GIMP Discord, but then I began to reconsider everything.

Progress Rejected

I’ve been a bad programmer. So many dead ends. So many side projects distracting from the main goal. I have an unknown deadline for this project, and I really need to cut the fancy stuff I’ve been working on and do something that actually works.

I also got to thinking, Who am I designing this for? I had been working on a command line setup for me, and my mother has graciously offered to help with scanning a little bit at a time. She doesn’t do the command line outside Minecraft. As a good programmer, I need to consider my end user’s needs, and she needs a graphical workspace.

Thinking like my mother would think, I made a directory on my desktop for shortcuts related to this project. So far, I’ve made launchers for XSANE, the network share for the pictures, and GIMP. I may develop it later.
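For the curious, a launcher on Linux is just a small .desktop text file. A minimal sketch of the XSANE one might look like this; the Comment and Icon values are my guesses, not what I actually used:

```
[Desktop Entry]
Type=Application
Name=XSANE Scanner
Comment=Scan photos for the family archive
Exec=xsane
Icon=xsane
Terminal=false
```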

My new vision is to just use the tools I have: XSANE to scan and save locally and GIMP to separate and deskew pictures automatically and store them in the digital archive before someone either deletes or offloads the original scans. I can make a text file with miscellaneous metadata for each batch. A manual review can flag photos that will need additional touchup.
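The per-batch metadata file can be as simple as a few lines of plain text. A sketch of starting a batch, with hypothetical names, fields, and contents:

```shell
# Start a new batch: make its directory and drop in a metadata stub.
# "B012" and the field names here are hypothetical.
batch_dir="$HOME/scans/B012"
mkdir -p "$batch_dir"
cat > "$batch_dir/metadata.txt" <<'EOF'
batch: B012
container: shoebox from the photo trunk
type: color prints
sizes: 4x6
EOF
cat "$batch_dir/metadata.txt"
```

Plain text keeps it readable by whoever inherits the archive, no special software required.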

Testing the Workflow

I used a couple pictures from when I was little to test my workflow. I laid them on the scanner slightly askew. Over several attempts, I learned about the limits of the scanner. The scan head doesn’t actually reach the full scan bed. If I’m not careful, pictures will get pinched under the edges. It’s very easy to accidentally overlap pictures. All good reasons for taking a preview of each scan.

Deskew isn’t a miracle, but when I did a side-by-side comparison on a sample size of my two test pictures, it got one almost perfect and reduced the skew in the other, though my sister said the deskewed one might be a little fuzzier.

Takeaway

I cannot emphasize this point enough: good programmers build software for end users. It’s fine to hack together a piece of software you understand, but if you want to share your creation with someone else, you’ve got to make a relatable front end.

Final Question

What elements of a project have you given up for the sake of an end-user?

BitWarden Operational and SSH Housecleaning

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am giving my BitWarden server a bit of a shakedown, and since that didn’t take as long as expected, I have a story or two from rearranging my SSH keys. Let’s get started!

Server Fully Operational

Picking up from last week, I installed a BitWarden home server on BlinkiePi and set it up with a static IP, making sure it had a unique hostname. To test it, I plugged it directly into my home router. I had to generate and install a self-signed security certificate so the browser plugin could recognize my server once I had directed its traffic appropriately.

I started early this week, expecting the firewall to be crazy complicated and maybe an exercise in futility, but that wasn’t the case. I found a package literally named “Uncomplicated Firewall” (ufw). It installed no problem, and I was easily able to reject unrecognized traffic by default, then allow ports for SSH and BitWarden.
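My rule set boiled down to a handful of commands, roughly like these; the BitWarden port here is an assumption, so allow whatever port your install actually serves on:

```shell
sudo apt install ufw
sudo ufw default deny incoming   # reject unrecognized traffic
sudo ufw allow 22/tcp            # SSH
sudo ufw allow 443/tcp           # BitWarden web vault over HTTPS (assumed port)
sudo ufw enable
```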

I then went ahead and installed BitWarden plugins on my remaining computers, trying and failing to follow all the important steps from memory until I gave in and looked up the tutorial again. Later in the week, I wanted to ensure my setup could withstand a power blink, so I cut power and later restored it. I expected I’d need to spend a few hours figuring out how to get it auto-started, but it’s almost like this project wants to short me of content, because I was able to reach its web interface no problem.

SSH Keys Between My Computers

I don’t like entering passwords every time I want to log into a system. SSH keys are way faster and more secure: the host machine essentially lets you in because you scan an ID, instead of stopping to perform a secret handshake that can be faked far more easily.

I did some research a while ago and found questions about whether the RSA method of making keys was still okay to use. To be honest, if it weren’t, OpenSSH would probably push an update blocking its usage, or at least notifying users that it’s been cracked wide open.

Nevertheless, when I redid my SSH easy-access network, I used ed25519 to make my keys, and I used ssh-copy-id to move the public keys from one computer to another. I have three workstations I flip-flop between, as well as my new password server and my Pi 400 hack router. Now that I think about it, I could include the NAS and the Pi 4 serving as our entertainment center, but that will wait for a later date.
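The whole process is only two commands per target machine. A sketch, with a stand-in username and hostname:

```shell
ssh-keygen -t ed25519                 # generate the key pair (run once per workstation)
ssh-copy-id user@buttonmash.local     # append the public key to the target's authorized_keys
```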

One nice surprise: while I was copying a key from my main desktop on the 400’s subnet to one of my machines on the wider home network, my desktop didn’t recognize the target computer, but the Pi 400 did, and the router vouched for the host I was reaching out to.

Takeaway

I suppose I could improve my setup with auto updates. That will mean another hole punched in the firewall, but in all reality, that’s a topic across my network for another day.

Final Question

If you were to spend a week in space, what games would you feel obliged to play along the way?

BitWarden: My New Password Manager

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am switching my password manager from LastPass to Bitwarden. Let’s get started!

Introduction: Password Strength

It’s almost comical when a digital security expert starts a talk in a packed auditorium and asks, “How many of you use the same password everywhere you go?” and half the people raise their hands. A facepalm or two later and the speaker may start comparing it to how that’s like a company keying all their locks to the same key, regardless of department or security level. It’s a stupid, stupid, stupid idea, and I am guilty of doing it up until two or three years ago.

The absolute worst password you can use is one someone else already has without your permission. The next worst is one someone else can quickly guess. The web comic XKCD, in “Password Strength,” gives a concise explanation: long, simple passwords are easier to remember and harder to guess than short passwords butchered by special characters.

But you could have the strongest password in the world and still be vulnerable if you’re using that password for all your accounts. If just one of your sites is compromised, an attacker now has a key ring to try on all the popular sites, and you will need to spend a long time cleaning up.

Password Managers

But then, convenience. The human mind would rather not remember tens or hundreds of passwords that may or may not still be current. That is where a password manager comes in. You log in with your one master password, and it automatically fills in passwords as you go. Set up properly, it’s even faster than entering your one password everywhere you go, and a basic setup isn’t all that hard to do.

At this point, a password manager should sound like a major security vulnerability, akin to a nicely organized key cabinet in the lobby, but a properly designed password manager never knows your passwords except when and where they’re needed. Your master password is used to help scramble and unscramble your passwords on your own computers. The rest of the time, it’s a bunch of otherwise meaningless garbage to anyone trying to poke at it.

Furthermore: don’t “log in with <Platform X>.” Ever, or at least only if there’s no other way, and even then, take pause. Merged accounts are worse than reusing a password because they are by definition using the same username as well. A break-in to one is a break-in to all linked accounts.

From LastPass to BitWarden

I am displeased to announce that LastPass is now chasing off a lot of their free users by making them choose between types of devices: desktop/laptop or mobile. I personally only use a tablet for one or two things, like reading my Bible or viewing PDFs. This will affect me maybe once a month or two, when I’m not bothering to walk to a desktop. Still, I don’t like it. It’s not like they’re getting any of my money anyway.

I chose BitWarden because it kept coming up as a good alternative. Not only is it open source, but their code has been audited, and I can self-host it as well: all highly desirable features, whereas LastPass is, at most, only audited.

The actual switch once I had my personal server up was easier than getting the dogs ready for a walk. All my passwords were moved in a single transaction, categories and all.

Personal BitWarden Server

First of all, IF YOU DON’T KNOW WHAT YOU’RE DOING, JUST SET UP A REGULAR ACCOUNT! That said, I want to challenge myself, and I believe this is reasonably within my grasp. I closely followed sensiCLICK’s Full Guide to Self-Hosting Password Manager Bitwarden on Raspberry Pi on BlinkiePi, my Pi 3B+ running a fresh, minimal install of Raspberry Pi OS.

I don’t really have much to say here because I don’t understand a lot of the new stuff I did. Some instructions had changed in the months since the video was released, but there were notes in the chapter titles. Ironically, the tutorial didn’t encourage its viewers to change the default password of ‘raspberry,’ as you absolutely should. I changed the hostname, gave it a static IP, and not much else. I’ll save locking it down for another week, when I have more time to propagate BitWarden across the rest of my devices that need it.
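For reference, a static IP on Raspberry Pi OS of that era meant a few lines in /etc/dhcpcd.conf. All the addresses below are placeholders for whatever fits your network:

```
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```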

Takeaway

Passwords, like locks, are a balance between how badly people want in versus how badly you want to keep them out. Short passwords are easier to enter (if they can be remembered), while long passwords keep attackers out longer.

Final Question

How many unique passwords do you use?

Family Photo Chest Part 13: Early Prototype Workflow

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am merging a couple projects into one and hoping they stick. Let’s get started!

Overview

I’ve been piecing bits of my photo trunk project workflow together now for way too long. Right now, the architecture is looking like I’ll be scanning sets of pictures to a directory on a Network Attached Storage, then I can use a cluster of dedicated microcomputers from another unfinished project to separate and deskew the raw data into individual files. These files will then live permanently on the NAS.

Progress is rarely linear, though. My end goal for this week hopped around quite a bit, but in the end I felt like I did nothing but figure out how not to proceed: groundwork with no structure. Routers are hard when you’re trying to learn them on a schedule!

Lack of Progress

In a perfect world, I would have been well on the way to configuring a cluster node by now. In a less unreasonable one, I would have had my Pi 4 OpenWRT ready to support the cluster. Late-cycle diagnostics chased me into an even more fundamental problem with the system: Wi-Fi connectivity.

During diagnostics, I’m learning about how different parts of the system work. Physical connection points can be bridged into a single logical interface, and Ethernet cables can support separate IPv4 and IPv6 connections. I can’t configure the Ethernet (on either logical interface) the way I want because that’s how I’m connecting to the web UI and SSH. I end up stuck using two computers besides the two router Pis (OpenWRT and a Raspbian hack router that actually works) because I don’t like switching my Ethernet cables around on the switch, but I need to do that anyway whenever I have to copy a large block of text. In short: the sooner Wi-Fi gets working, the better.

I understand I am essentially working with a snapshot. It’s been tidied up a bit, but bugs still exist. Wi-Fi is apparently one of those things that’s extra delicate; each country has its own regulatory domain, among other complexities. On the other hand, I don’t know if that’s actually the case, as diagnostics are ongoing.

Takeaway

I’m probably going to work on this one in the background for a while. The OpenWRT help forum’s polish at least in part makes up for routers being dull to learn. If it takes too long, I do have other projects, so I may need to replace the cluster with a more readily available solution.

Final Question

Have you ever had upstream bugs that kept you from completing a task?

“Beowulf Cluster” Part 6: OpenWRT Installed

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m installing the OpenWRT Linux router distribution on a card for a Raspberry Pi 4. Let’s get started!

Background

A while ago, at the beginning of lockdown, I was gifted a few microcomputers I wanted to arrange into a cluster, maybe even turn into a model supercomputer. I was planning on using OpenWRT, but it wasn’t –and technically still isn’t– available for the Pi 4 outside the use of snapshots. I compromised by configuring a minimal Raspbian installation, but I’ve yet to figure out how to program the firewall to disallow computers on its Local Area Network (LAN) from going anywhere online without my say-so, in addition to keeping them hidden from the Wide Area Network (WAN).

My efforts back then were still possibly the most useful project I’ve done to date: I’ve been using that card as my main Wi-Fi receiver for my workstation. I conjecture it should be just fine with a Pi 4 (1 GB RAM), but since all my more qualified Pi 4’s are busy, my fancy Pi 400 has been serving in that capacity.

Installation

As noted above, OpenWRT for the Pi 4 is only officially available as a snapshot. These builds often lack recommended packages, including any GUI I might want to explore. This is where community builds come in. My research converged on one by wulfy23.

The GitHub repo’s READMEs took me a while to understand, in part because of all the options. I gathered that there were “factory” builds and “system” builds. Factory builds are for fresh installs, and system builds are for upgrading existing systems. At that time, there were as many as three builds for download, and choosing the right one seemed almost arbitrary.

My first time installing, I totally forgot to check the provided SHA256SUM before unzipping the image, dd’ing it to an SD card, and booting. I landed in a terminal that kept mixing the prompt with other messages. Reaching out to a support thread on the OpenWRT forum, I learned about the web interface and how to connect to it.

The URL I was given failed every time, even with my workstation alone with the router on my switch. I ended up going directly for the IP: 192.168.1.1. I was met with an inadequate dark mode I couldn’t find the settings for. I expect they’re probably there, and I spent a small amount of time looking for them by tossing reasonable-sounding URLs around and hoping for the best, but comparing notes among other tabs in the interface, I think the chances of happening across the specific one I need are slim.

Installation Take Two

I went through the same process another day, and found only a single version for download from the same place. The SHA256SUM checked out, and instead of unzipping it first, I learned about zcat, a little command line utility that can unzip a file to be piped into another command. I piped it directly into dd per an example installation I spotted my first go around installing OpenWRT.
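The write command looked roughly like the comment below. Since dd straight to a card is destructive, the runnable part of this sketch pipes through a scratch file instead; the image name and device node are placeholders:

```shell
# Real usage (double-check the device name before running!):
#   zcat openwrt-rpi4-factory.img.gz | sudo dd of=/dev/sdX bs=4M conv=fsync status=progress

# Harmless demonstration of the same zcat-into-dd pipe:
printf 'pretend-image-bytes' | gzip > /tmp/demo.img.gz
zcat /tmp/demo.img.gz | dd of=/tmp/demo.img status=none
cat /tmp/demo.img
```

Skipping the intermediate unzipped file saves both a step and several gigabytes of scratch space.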

I provided a root password and, in short order, found a different theme that didn’t force a partial dark mode on me. I found built-in tools for network-wide ad blocking, settings for managing network interfaces, and, most importantly for this project, a firewall. Alas, the firewall remains something I have little practical understanding of. I’d like to believe I have a mid-range understanding of what it can do, but my only real hope is copying lines and hoping they do what I want: the last thing one wants in a firewall intended for actual security. No. A custom firewall is at least a week in and of itself.

Takeaway

I really like trying to do two weeks’ worth of topics in a single week. It usually doesn’t work. Granted, I did have a rusty introduction to both parts of the topic I wished to fuse into one this week. I’m looking forward to remembering zcat in the future.

Final Question

What neat, little tips and tricks have you picked up during a larger project?

Stabilizing Derpy Chips at Last

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m addressing an annoying trio of issues I’ve had with Derpy Chips since I installed PopOS on it. Let’s get started!

The Problems

I have a number of gripes to myself about Derpy. I frequently have to stare at an ugly, gray login screen for up to a minute and a half before I can select a user account. Tabs sometimes crash in FireFox, but only while I’m using it. Discord sometimes blinks, and I lose any posts in progress, possibly representing minutes of work.

Additionally, my mother uses a separate account to play Among Us on Derpy, and I have my account set up with a left-handed mouse she can’t use easily. Unfortunately, Derpy tends to crash whenever I try switching users, so I’ve been using a full power cycle. And that means we need another long, featureless login screen before the actual login. Some day, I really want to figure out how to change a login screen. Aside from how long this one takes, I’d much rather use the KDE one over GNOME 3.

The Plan

Of the three issues I’m setting out to address, the long login is the most reproducible. Fickle FireFox and Ditzy Discord happen often enough to make Derpy frustrating to use as a daily driver, but sporadically enough to resist debugging on demand. So I am planning on spending up to the full week on Derpy, ready to catch the errors when they happen.

Going off what I have to start with, I’m assuming my FireFox and Discord issues are related. Both use the Internet for their every function, and the glitching tends to happen at times when a packet is logically being received: for FireFox, when a page is either loading or reloading, and Discord when someone is typing or has sent a post. If I had to hazard a guess, I would have to say Lengthy Login is directly caused by my NFS being mounted in /etc/fstab, and I’m not sure if there’s anything to be done about it except working the surrounding issues.

For this week, I am reaching out to the Engineer Man Discord and a Mattermost community I found for PopOS. I don’t know much about the latter, but I heard the PopOS dev team frequents that forum.

The Research

I started by posting about my issues. Help was super slow, and I often got buried. I don’t remember any of my self-research making sense. Anyone helping me in the PopOS support chat seemed obsessed with getting me to address Blank Login first, even though it was the least annoying of my three chosen issues, if only the other stuff didn’t bug out on me.

Someone gave me a journalctl command to check my logs, and I did so shortly after a target glitch. It came back with a segfault error of some kind. I added this to my help thread and humored them about disabling my NFS fstab lines.

RAM or Motherboard?

When researching further for myself, I came across a number of topics I didn’t understand. I didn’t make any progress until someone told me to try memtest86+. What a headache! I installed the package, but had to dip into GRUB settings so I could boot into the tool. Even then, it kept crashing whenever I tried to run it with more than one stick of RAM at a time, as in the whole thing froze within 8 seconds save for a blinking + sign as part of the title card.
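On a Debian-family system, getting memtest86+ into the GRUB menu is usually just this; versions and menu behavior vary, so treat it as a sketch:

```shell
sudo apt install memtest86+
sudo update-grub    # memtest86+ then shows up as a GRUB menu entry on the next boot
```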

I was hoping at this point it was just a matter of reseating RAM. Best case: something was in there and just needed to be cleaned off. Worst case: a slot on the motherboard might have gone bad, meaning repair might be one of tedious, expensive, or impossible.

I tried finding the manual for Derpy’s motherboard, but the closest I found was the one for my personal motherboard, a similar model. Both come with four RAM slots: two blue, two black. I used the first blue slot to make sure each stick of RAM passed one minute of testing, followed by a full pass, which typically took between 20 and 30 minutes. I wasn’t careful about keeping my RAM modules straight, in part because I helped clean my church while leaving a test running.

I identified the fourth stick from a previously tested one I’d mixed it up with by how it lit up the error counter, starting just past one minute in. I tried reseating it several times with similar results: the same few bits would sometimes fail when either reading or writing. If I had more time, I would have had a program note the failing addresses and checked whether they were the same each pass as they kept adding up.

Further testing on the motherboard involved putting a known-good stick of RAM into each slot. Three slots worked, but one of the black slots refused to boot, as did filling the other three slots at once. I landed on leaving one blue slot empty, for a total of 12 out of 16 gigs of RAM.

NFS Automount with Systemd

I still want relatively easy access to the NAS from a cold boot. “Hard mount in fstab has quite a few downsides…” cocopop of the PopOS help thread advised me. Using the right options helps, but ‘autofs’ was historically the preferred alternative, and systemd now has a feature called automount units. I thought I might as well give the latter a try. cocopop also linked a blog post, “On-Demand NFS and Samba Connections in Linux with Systemd Automount.”

I won’t go into the details here, but I highly recommend the blog linked above. It didn’t make sense at first, but after leaving it for a day, my earlier experience with fstab translated to the new method within about an hour total. I missed an instruction where I was supposed to enable automounting once configured, but fixing that felt almost trivial.
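As a taste, here is roughly what the pair of unit files looks like; the host and paths below are placeholders, not my actual setup. The unit file names must match the mount point with slashes turned into dashes, so /mnt/nas needs mnt-nas.mount and mnt-nas.automount:

```
# /etc/systemd/system/mnt-nas.mount
[Unit]
Description=NFS share from the NAS

[Mount]
What=nas.local:/export/share
Where=/mnt/nas
Type=nfs

# /etc/systemd/system/mnt-nas.automount
[Unit]
Description=Automount the NFS share on first access

[Automount]
Where=/mnt/nas
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target
```

The instruction I originally missed boils down to `sudo systemctl enable --now mnt-nas.automount`: it is the .automount unit that gets enabled, and systemd activates the matching .mount unit the first time anything touches the path.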

Results

I haven’t had any problems with Discord or FireFox since setting the defective RAM aside in the anti-static bag it came in. As a bonus, switching users works correctly now as well.

NFS mounting is now much more streamlined with systemd. While I cannot say which method would have been more challenging to learn first, the tutorial I was following made this new method feel way more intuitive, even if file locations were less obvious. I didn’t even need any funny business with escape characters or special codes denoting a space in a file share name.
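For the curious, the fstab funny business I’m referring to looks like this; the share name here is a made-up example:

```
# /etc/fstab treats whitespace as a field separator, so a space in a
# share name has to be written as the octal escape \040:
nas:/volume1/Family\040Photos  /mnt/photos  nfs  defaults  0 0
```

The systemd unit files take the ordinary human-readable path instead, with the escaping pushed into the unit’s file name.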

Takeaway

It really should go without saying that people can only help each other with what they know. I find myself answering rookie questions all the time when I’m after help with a more difficult one. While working side by side this week on a future topic, I had such a hard question that people kept coming in with easier ones, and I ended up asking mine enough times that someone commented on the cyclic experience. The same thing kept happening with the easy part of my question about login.

Final Question

Do you ever find yourself asking a multi-part question, only to have everyone helping you with just the easiest parts you’ve almost figured out?

Virtual Machines: a Preliminary Exploration

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m teaching myself a long-overdue skill for Linux: Virtual Machines. Let’s get started!

Overview

Virtual machines are like a computer within a computer: a host operating system allocates some of its resources to a less powerful guest machine, often running an entirely different operating system. Applications vary from getting a feel for a new Linux distribution before (or without) installing it on bare metal to giving scammers a sandbox to waste their time destroying without risking your actual system.

Failing other methods with less overhead, virtual machines are a more brute-force way to run old or incompatible software on your machine. One personal example from my past was a 16-bit Bible program whose interface I liked. Windows 7 wasn’t happy running it, but there was a special XP Mode I could boot into to run my program. I found the solution slow and clunky, and I didn’t use it but twice. Furthermore, the license didn’t extend to Windows 10, so I refused to use it on principle when I downgraded.

Choosing a VM

Wikipedia is a great place for finding software comparisons. Its list of VMs is quite lengthy, but I wanted a general-purpose VM solution I could use anywhere and run anything, as I had an idea I wanted to try on a Windows machine, though my main focus would be running one Linux from another. I was also trying, and failing, to keep an eye on whether a VM was using a type 1 hypervisor (better performing) or a type 2 hypervisor (more portable and debuggable, I think) to run a guest OS.

Looking into individual results, Oracle VirtualBox came out as having a reputation for being easy, even for beginners, though it does lock some features away behind a premium version. The free and open source KVM (Kernel-based Virtual Machine) also came up as better performing, but with a steeper barrier to entry. Further research, via the LinuxConfig article “Virtualization solutions on Linux systems – KVM and VirtualBox,” warned me that KVM may not be as capable as advertised when it comes to running non-Linux guests on Linux, and that I’ll probably want to learn both at some point when I revisit this topic to straighten out QEMU and the other elements I glossed over while reading.

Installation: First Attempt

While my goal when starting research was putting a VM on my father’s Mint machine for unrelated (and soon outdated) reasons, I started on my older, but more capable, Manjaro machine. I found VMM (Virtual Machine Manager), a package in a community repository for managing VMs, so I installed it, though poking about only yielded error messages.

It took a while, but it looks like my CPU doesn’t support everything I need for running VMs. None of my personal computers do. During my initial investigation, I looked up my CPU model on Intel’s site. From what I saw, it supported Intel Virtualization Technology (VT-x), but not Intel Virtualization Technology for Directed I/O (VT-d). One guide only mentioned the former as a requirement, but no package for KVM proper showed up when I searched for it. Furthermore, any command that properly inspects my CPU’s info fails to see the required feature.
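The quickest checks I found boil down to grepping the CPU flags; this is a sketch of that, not a full KVM readiness test:

```shell
# vmx = Intel VT-x, svm = AMD-V; an empty result means the CPU (or
# the firmware) exposes no hardware virtualization support at all:
grep -o -E 'vmx|svm' /proc/cpuinfo | sort -u

# lscpu summarizes the same information more readably on most distros:
lscpu | grep -i virtualization || echo "no virtualization line reported"
```

Note that a flag can also be hidden by a BIOS/UEFI setting, so a missing flag is worth double-checking in firmware before blaming the CPU.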

Takeaway

So, no. I’m not doing a VM today, but when I looked at my father’s Mint box, its newer CPU did support virtualization, and by extension, ButtonMash should too, though their other resources may limit useful applications.

This week’s research has also given me insight as to why XP Mode was so clunky those years ago. I was sending my hardware in directions it wasn’t designed to go. It can still pretend like it’s up to the task, and for old enough applications it doesn’t matter. But hosting a modern OS on top of a modern OS is not for me at present.

Final Question

Have you ever gotten closure to an old project years after laying it to rest?

Family Photo Chest Part 12: Early Prototype Workflow

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m recounting the tale of my first working prototype, and how I ruined it before getting it to actually work. Let’s get started!

Pi4 8GB and Cards

I am now the owner of a Pi 4 with 8GB of memory, the highest-end Pi available at present. When I unboxed it, I put it directly into a case with a fan powered by the GPIO pins. Some day, I’ll want to benchmark its cooling against my Pi 4 with the huge heatsink case and passive airflow.

The cards I had on order never came in, and their listing vanished in the meantime. I ended up with some 64GB cards from Gigastone that are supposed to be better on paper, but I’m not in a position to benchmark the Raspberry Pi use case. While these new cards only have one SD adapter between them (I’ve been using SD adapters for labels), they did include a USB adapter. It’s only USB 2.0, though.

Manjaro ARM

I have not fully settled on what distro to go with for this project. TinyCore is great for starting projects, but I have a hard time finishing them there. For the time being, I’ll be prototyping from my Manjaro ARM card. Whenever I need to reset, I can boot a Pi to Raspberry Pi OS and arrange for a direct dd from my master Manjaro card to the newer, larger card.

Side note: while performing my first wipe, I noticed dd did NOT automatically expand the partition like I thought it did. Once I have things working better, that may be a possible improvement to look at for a new master Manjaro card.
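Expanding on that side note, here is roughly what the reset-plus-expand would look like; the device names are placeholders, so check lsblk first:

```
# Clone the master card onto the larger card (this DESTROYS /dev/sdY):
sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync

# dd copies the partition table byte for byte, so the extra space is
# left unallocated. Grow the root partition, then its filesystem:
sudo growpart /dev/sdY 2     # growpart ships in cloud-guest-utils
sudo resize2fs /dev/sdY2     # assumes an ext4 root, as on most Pi images
```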

Prototype GIMP Configuration

The first order of business was installing GIMP and the XSANE plugin for GIMP. They worked on the first try, but XSANE only recognized a shared all-in-one network printer with a built-in scanner, ignoring the local USB scanner I cannot seem to arrange access permissions for correctly. I think I spent half my time this week exploring dead ends around this problem.

The most important missing piece of the puzzle has been a script called Divide Scanned Images. I can feed it scans containing multiple pictures, and it separates and crops them automatically, with the option to deskew (straighten slightly rotated pictures) if an appropriate utility is found. Link to the blog about the script: how-to-batch-separate-crop-multiple-scanned-photos. Linked on that page, in a comment by user Jan, is a Linux-compatible version of Deskew (I have yet to get it to work).

Eager to test what I did have working, I went ahead with using the scanner on the network. I had someone put some picture stand-ins on; I got two seed packs. To my annoyance, the separation script appears to only work with pictures already on file, not ones freshly scanned in, making it a completely separate process. As mentioned above, Deskew refused to work. I suspect I either didn’t put it in the right place, or I was working with a copy compiled for an x86 processor while on an ARM-based system, though it could be as simple as shadows from the seed packs.

Struggling With the Scanner

I find SANE to be an ironic acronym: Scanner Access Now Easy. I still don’t have Linux scanners figured out. I know there’s an easy way with security implications I’ve stumbled my way through before. I also have learned that differences between distros make the Ubuntu page I keep getting thrown to for help useless in parts. Whenever I post for help on Discord, someone else comes along with another question and mine gets buried.

Along my journeys, I’ve learned about a scanner group. I tried adding it to my profile, and somehow managed to add what appears to be a fake group. After a long time trying to figure out how to safely remove it so I could add the real one, I managed to remove myself from all my default groups, including my ability to use sudo, and I don’t believe a root account is set up on this card. It even said the incident would be reported. Any such black mark never had a chance to be transmitted over the Internet (WiFi was down for USB scanner diagnostics) before I dd’ed the master copy back over the card.
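For the record, the trap here is the difference between usermod’s -G and -aG flags; the group name below assumes the real group really is called scanner:

```
# See current memberships before touching anything:
groups

# WRONG: -G by itself REPLACES the supplementary group list, which is
# exactly how sudo access can evaporate in one command:
#   sudo usermod -G scanner "$USER"

# RIGHT: -a appends to the existing list instead:
sudo usermod -aG scanner "$USER"
```

Group changes only apply to new logins, so log out and back in (or use newgrp) before testing the scanner again.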

Another attempt had me searching through local files for answers. sane-find-scanner, and anything else looking at the USB port directly, can see the scanner right away, but scanimage -L, which lists the devices SANE sees, comes up with nothing when off the network. I can’t reproduce my exact path on the laptop I’m working from, but I found a tip to check /etc/sane.d/ for appropriate config files. If I understand epson.conf there correctly, my problem is either elsewhere, or I need both a product ID and a device ID, the latter of which I still have no idea how to locate.
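To summarize the diagnosis commands for future me (the vendor:product pair below is illustrative, not my scanner’s actual IDs):

```
# Probes USB hardware directly; usually finds the scanner even when
# permissions or backends are misconfigured:
sane-find-scanner -q

# Asks the SANE backends instead; this is the one coming up empty:
scanimage -L

# lsusb prints each device's vendor:product ID pair, e.g. 04b8:013a:
lsusb

# /etc/sane.d/epson.conf (or epson2.conf) expects that pair on a
# line like the following:
#   usb 0x04b8 0x013a
```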

Revised Workflow Proposition

In light of GIMP seemingly not wanting to split pictures live in memory, it may be a good idea to offload that task to a more powerful computer: two Pis operate the scanners and save to a network share hosted on an SSD, the bigger machine splits the scans, and the results are then saved to GoldenOakLibry. Touch-up can then happen in place.

Takeaway

While I’m glad to have gotten a subpar prototype operational, it only demonstrates about 60% of the process at once. Still, it demonstrated the missing pieces, and the toughest spot was exactly where I expected from past experience. This is already Part 12 of this ongoing series, and I want to finish before another 12 parts go by. Any month may be the month, and no matter how down I feel at the end of work on it, I still met a major goal this week.

Final Question

What was the biggest mistake you’ve ever made but still had a minimal effort fix for?