Family Photo Chest: Part 15.1: Incremental Improvement

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am working with my father, Leo_8472, on creating a master archive of our family photo collection. Let’s get started!

A Project in Motion

I’m starting this post off with a section written by Leo as my first-ever guest writer, since he has been doing most of the scanning.

Hi. Leo_8472 here, and today I am writing about my learning curve using the scanning system set up by Shadow_8472 for our family photo archive. Getting set up has been a long process, and now we are starting production scanning. Shadow_8472 showed me the basics of the XSANE scanning software, but here is where I go solo.

The first thing XSANE does when starting is to look for scanning devices. So, the flatbed scanner needs to already be on and fully booted up, or else XSANE will not find it. My next battle was to turn on the ‘Acquire Preview’ so that I can scan only the area of the flatbed with my photo, not the entire glass.

I eventually found the Acquire Preview Window option, but not before going down several rabbit holes. One of these pulled me in when I clicked a gamma adjustment control on the XSANE menu and the gamma controls exploded, making the menu extend off the bottom of the screen. XSANE would not let me move the menu up high enough to reach any controls at the bottom, like the scan button. So, in an effort to get more screen real estate out of the garage-salvaged VGA monitor, I swapped in an HD LCD monitor from my usual computer. Ahhh, an HD image at last. Almost. The image Button Mash was sending to my HD LCD monitor was exactly the same as what was sent to the VGA monitor. ARRGH! To get higher resolution I had to dive into the operating system’s monitor settings and raise the resolution to the maximum for my new monitor. This mostly worked, and I was able to see more of the XSANE menu, but not all of it.

Shadow_8472 came to my rescue to help me resolve the menu difficulties. He did not know the solution offhand, but started looking up instructions for the gamma controls on the internet, and we found a “Candelabra” toggle in the XSANE menu that shrank the menu to a civilized size.

With the menus tamed and the Acquire Preview window showing, it is time to scan. I select a photographic print from the archive and examine the front and back of the print. Some of the photos have valuable information written on the back. (Old style metadata.) I can often decode some of the info, such as a date or a name; sometimes the information is in Russian, which gives me a hard time. Anyway, photos with writing on the back get scanned both front and back. We decided to add the letter “F” for “front” and “B” for “back” to the end of the file name so that we can keep the files together.

I scan the front of the photo first, save it to a descriptive directory and then scan the back of the photo. We started scanning small photos at 1600 dpi and found the result was huge and filled an HD monitor display. 1600 dpi will pick up the texture of the surface of the photographic paper, so there is plenty of latitude for future cropping, if anyone wants to make an enlargement of this photo in the future.

The scanning process is repetitive, so I try to get into a rhythm of the steps required. Saving the scan to our Network Attached Storage (NAS) was taking at least as long as scanning, making for longer wait times. To shorten the wait, we repaired a length of CAT6 cable with a new end connector and put the scanning system on a hard-wired cable rather than relying on Wi-Fi.

We performed a test to determine our improvement in network speed by scanning a sample snapshot and timing the save over Wi-Fi and over the CAT6 cable. Our sample snapshot of about 70 MB took about 34.5 seconds to save over Wi-Fi, and a second scan of 90 MB took about 4.5 seconds to save over the CAT6 cable. Saving roughly 30 seconds per scan on files this size is a great improvement; the hard-wired connection moves data almost 10 times faster.
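Since the two test files weren’t the same size, throughput is the fairer comparison. A quick awk one-liner, using the numbers from our test, shows the wired link moving data roughly ten times faster:

```shell
# Throughput from our two timed saves (megabytes / seconds)
awk 'BEGIN {
  wifi  = 70 / 34.5   # Wi-Fi save: ~2 MB/s
  wired = 90 / 4.5    # CAT6 save: 20 MB/s
  printf "Wi-Fi: %.1f MB/s, wired: %.1f MB/s, speedup: ~%.0fx\n", wifi, wired, wired / wifi
}'
# prints: Wi-Fi: 2.0 MB/s, wired: 20.0 MB/s, speedup: ~10x
```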

Future potential improvements to the scanning process would be to implement our ideas on the physical handling of the prints that we are scanning. Another important thing for this scanning project is to try to make progress every day to keep the momentum going. Eventually we will get through the whole archive.

Ethernet Enabled

Shadow back here. A while back, I was given the remnant of a spool left over from when someone ran a network wire under our church. Originally, I was going to use it for a model supercomputer, but that project is on hold pending a better understanding of packet routing; I don’t want the individual nodes seeing other computers on the home network.

While originally brainstorming the setup, I had the idea to run that cable from the router to the room where we’re set up for scanning, preferably before we cut it up into a bunch of little patch cords. We learned how to put the connectors on, but the cable didn’t work.

We eventually got confirmation from a continuity tester we ordered that a couple wires were switched, and one didn’t connect at all. In an effort to speed up saving each individual picture, we pulled out the tools to redo the ends. Leo managed to find the exact video for our crimping kit, and when the first one was done, I went ahead and tested continuity again. By chance we had fixed the bad end, and the results matched a known good cable.

I ran the cable from the Button Mash workstation directly to the router. Once I had it adjusted and everything, it had maybe a few inches to spare. We ran two tests: a ping test, and a speed test. When pinging the router at 10.0.0.1, we were hearing back about four times faster. The speed test had Leo scanning a picture and saving it once over Ethernet, and once over Wi-Fi. Ethernet finished saving in about 3 seconds. When we went for Wi-Fi, XSANE crashed while switching over, so we rescanned and saved a similar file, and it took around 30 seconds. My Pi 400 is stepping down from Wi-Fi duty.
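For anyone repeating these tests, both are simple from a terminal. The router address is from our network, and the file and mount-point paths below are placeholders for our setup, so substitute your own:

```shell
# Latency: ten pings to the router, then compare the average round-trip
# time reported for Wi-Fi vs. Ethernet
ping -c 10 10.0.0.1

# Save speed: time copying a finished scan to the NAS mount
# (both paths here are placeholders)
time cp sample-scan.tiff /mnt/nas/photo-archive/
```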

AI Enhancement

And now for something I spent a while on, but have yet to get working. I thought it would be funny to pull a fast one on Leo by using an AI covered in one of Two Minute Papers’ YouTube videos last year. He explained how researchers came up with a new way to colorize black and white photos that addressed many issues with missing data, citing subsurface scattering where light bounces around within a subject’s skin before coming back out.

I managed to find the project’s GitHub repository and clone it. I tripped during configuration, not understanding that the dependencies had to be installed using an environment tool called Conda until someone pointed it out to me. Conda was a bit of trouble in and of itself. I don’t know exactly how I got it working, but at one point I repeated an instruction and got a working result when I was expecting another failure.
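For anyone else tripping over the same step, the usual Conda workflow for a cloned research repository looks something like this. The environment name and the script name here are guesses on my part, so check the repo’s README for the real ones:

```shell
# Build the environment from the spec file shipped with the repository
conda env create -f environment.yml

# Activate it by the name given in the YAML's "name:" field
# (the name "colorization" here is an assumption)
conda activate colorization

# Run the project's demo script inside the activated environment
# (script name is also an assumption)
python demo.py
```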

I set up the environment and switched to it, but eventually ran into an error I’m not able to blitz my way through: CUDA out of memory. All four gigabytes of my GPU (Graphics Processing Unit or graphics card) were gobbled up by the test pictures, and the model kept asking for more.

I even went to the lengths of asking my sister if I could put my hard drive in her computer to borrow her graphics card, but the thing ate through all 8 GB of hers and burped out the same error without seemingly making any additional progress.

My trouble is I don’t know what kind of hardware I need for this. I’m imagining a dedicated server with four top-of-the-line GPUs running headless (no monitors, accessed over SSH). There’s still the chance it could be looking at my card, thinking it can take everything, and making up the difference with relatively slower resources elsewhere on the system. In that case, I’d be pouting over having to share with my GUI (Graphical User Interface).

Takeaway

You don’t need to wait for perfect conditions to start a project; otherwise nothing will ever start, as new ideas sit half-developed and never tested. Think small thoughts starting out as you continue dreaming big. Invest slowly, as you need to.

Final Question

What sort of features would you like to see if/when I see about a site redesign?

Family Photo Chest: Part 15: Day 1, for Real This Time

Good Morning from my Robotics Lab! This is Shadow_8472, and today is officially –no questions asked– Day 1 of actually scanning my family’s photo archive. Let’s get started!

Scanner in Motion

We’re scanning! We’re actually, finally scanning! It stings a little that most of my research on fancy techniques is in the bit bucket. It’s been a long year for me, yet when I picked this project up, my father had already had it for quite a bit longer.

It came to my attention that my main hangup is names. I don’t know who most of these people are, where they are, or when they were there. Meanwhile my father keeps doing show and tell. In the end, this is still his project just as much as it has been mine. I have researched and assembled the equipment and software, but he is going to be the main force behind putting names to everything.

Our System

We are taking some advice we found about grouping photos by location and approximate year. Batches are now defined by these groupings in their own directories, for example: LosAngeles-1952. We’ll also have a metadata file we should be able to do something with at a later time if we need it.
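As a concrete sketch of the layout (all names here are illustrative), each batch lives in its own directory with the metadata file alongside the scans:

```shell
# One batch directory: place plus approximate year
mkdir -p LosAngeles-1952

# Plain-text notes transcribed from the backs of the photos
touch LosAngeles-1952/metadata.txt

# Front/back pairs share a base name and differ only in the F/B suffix, e.g.:
#   LosAngeles-1952/beach-picnic-F.tiff
#   LosAngeles-1952/beach-picnic-B.tiff
```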

While the microfiber gloves aren’t working out for us to wear, they’re still microfiber, and they can still clean dust. Even then, there’s only so much that can realistically be done. Rips, scratches, and mold have destroyed original data.

Scanning is unfortunately picture by picture. There is a possibility I might try something with the GIMP XSANE plugin later, but for now, bare XSANE works well when used as intended: preview, select scan area, scan. We’re dumping our scans directly onto the NAS (Network Attached Storage), changing the name of XSANE’s preview as we save it to denote front or back when necessary.

Scanning at 1600 dpi, a postage stamp displayed at full resolution would nicely fill the old, boxy, VGA tube monitor I pulled in from the garage; a postcard natively fills a more conventional HD monitor. I was dreaming it might go up to 4K or 8K; technology progresses, but the archive will not.

To come is digital editing for the best photos. Acidic paper slowly eats itself away, giving old photos that characteristic yellow. Rips, scratches, mold, and dust will take time to remove, but it’s perfectly doable with a brush. And of course, edited photos will get a manual deskew and crop when needed, which is most of the time as I can’t stand to lose detail off the scan area, and it doesn’t line up with the glass perfectly.

Takeaway

Writing on the back is what really killed most of my grand plans. XSANE really didn’t want to accommodate in that department, my hack confused my mother when she saw my work in progress, and I never got around to touching the script I saw mentioned for putting front and back side by side in the same file. On top of that, scanning backs is prone to mismatching when using the script Divide Scanned Images.

This project is a team effort now. As noted earlier, I’m being the tech support here, and my father is moving to be the star of the show. We’ll be trying for at least a package of pictures per day, and if it all goes well, we’ll bring the second workstation into the mix.

Final Question

Have you ever worked on a project, only to have someone seemingly take over, but realized it was turning into more of a group project with distinct roles?

I Got Git, But No Pi

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am starting out with Git. Let’s get started!

Previously…

Last week, I was all set to install Git on BlinkyPie, my Raspberry Pi 3B+, but it had some problems accessing the apt repositories. I felt lucky getting away with barely fixing what I broke, but the cause remains unidentified.

Git

Disclaimer: this will not be a comprehensive tutorial. There are plenty of those already. If you’re just starting out, I recommend a shorter one. I viewed three videos ranging from 15 minutes to an hour (I still haven’t finished the hour-long one) and found the information density roughly the same, with longer tutorials covering features and configurations I’m guessing aren’t important for a single-user first project. Focus on the main loop, then expand. For anyone who missed my last post, GitHub is a site hosting the most well-known and widely used Git server. I won’t be touching it today.

Every time I do any research about Git, it seems about ten times bigger than I last remembered. It’s free, it’s open source, and it’s the industry standard for tracking changes in a project (version control). I should have been using it ten years ago, but like pretty much any tool originally written with Unix or Linux in mind, my initial impression using it on Windows was that of an aquarium at a dog shelter; it’s been made to work, but it hardly looks like it belongs. I strongly recommend a project or two in a Unix-like environment to learn the command line if you want to become fluent in Git without relying on a graphical interface.

Git Server Installed, Not Configured

I have yet to see what has become of BlinkyPie, but I was easily able to install a Git server onto GoldenOakLibry, the Network Attached Storage (NAS), from its repositories over the web interface. Had this been restricted, like what happened with my Bitwarden server, I would have proceeded with rigging a Pi to host while storing the data proper on the NAS.

Proper server side configuration will be a matter for another week. My goal was to have a test repository accessible from all my workstations. However, it appears I need to do such things as make a git user and set up credentials and user keys. I’m not prepared to do that this week.

A Simple Start

I know it doesn’t sound like much, but I have taken my first steps toward intuitively understanding Git. I made a directory and used git init to tell Git it is a repository. Git in turn made a hidden administrative directory called .git, which I don’t touch on pain of messing up the project history. I made a simple text file and walked myself through adding it and committing it, with some difficulty following the on-screen instructions when it wanted a note about the commit.
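The whole loop fits in a handful of commands; passing -m with the commit supplies the note inline and skips the editor step that gave me trouble:

```shell
mkdir my-project && cd my-project
git init                        # creates the hidden .git directory
echo "hello" > notes.txt
git add notes.txt               # stage the file for the next commit
git commit -m "First commit"    # -m supplies the commit note, no editor
```

If Git complains about an unknown identity on the first commit, it wants git config --global user.name and user.email set once.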

The next step would have been to push it to a repository I’m at a loss for setting up. While writing, I went through the same steps again on GoldenOakLibry with no idea how I’m supposed to use a server, except possibly by using clone on all my client machines. I even went through the confusion that is the vim terminal text editor, since nano wasn’t around and won’t be for the time being.
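From what I’ve read since, the usual pattern is a bare repository on the server, which holds only history and exists to be cloned from and pushed to. The paths and user name here are assumptions for my setup:

```shell
# On the server (GoldenOakLibry): a bare repo has no working files
# (the path is an assumption)
git init --bare /volume1/git/test-repo.git

# On each workstation: clone over SSH, work, then push changes back
git clone ssh://git@goldenoaklibry/volume1/git/test-repo.git
cd test-repo
# ...edit files, git add, git commit...
git push origin master
```

Depending on the Git version, the default branch may be named main rather than master.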

Takeaway

Git is big. Every programmer should learn it. Just don’t go learning both Git and the command line at the same time. I had trouble learning which was which way back when. And if you reasonably can, install Linux. All it takes is a small distribution on a thumb drive; an old computer is even better.

Final Question

Got Git yet?

I Didn’t Git It

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I tried adding a Git server to my Bitwarden server, and blew most of my time on a detour. Let’s get started!

Git

Git is an open source version control platform, not to be confused with GitHub, a website and hosting service implementing the software. Git is sort of like an Undo/Redo history, except you can switch back to old versions and layer new changes on top. It will be very useful –if not project saving– while building my social robot. The only problem is that I need to learn how to use it from a developer’s point of view.

I want to install a Git server. My target is my Raspberry Pi 3B+ (hostname: BlinkyPie, operating system: Raspberry Pi OS Lite). It’s already running my Bitwarden server to host passwords, and it has plenty of space on its SD card for starting out and ample spare CPU cycles.

Failed installation

As always, the first step in installing any piece of software on Debian-family Linux distributions (distros) is to update repository information with sudo apt-get update. The log put up a warning about the locale. I didn’t think much of it until Git didn’t install.
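I still haven’t pinned down the cause, but the standard way to inspect and regenerate locales on a Debian-family system is below; as far as I can tell, the locale step in raspi-config is a wrapper around the same mechanism:

```shell
# Show what the shell currently thinks the locale is
locale

# Regenerate and pick a default locale interactively (Debian family)
sudo dpkg-reconfigure locales

# Per-session workaround while debugging: fall back to a safe locale
export LANG=C.UTF-8 LC_ALL=C.UTF-8
```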

Logs are not always the easiest things to understand. They’re also not the easiest for search engines to decode when nobody has attended to the exact wording. Even now, I have no idea what’s going wrong. What I do know is that when I tried messing with the locale configuration settings under raspi-config, the whole system would stop responding to any command I sent while remotely logged in over SSH, other than exit (ending the session). Further attempts to connect would fail until I rebooted from the physical switch, and then I would repeat the cycle.

System Failure and Backup

Things got really messed up to the point where I started worrying if I’d ever get my passwords back. I mentally went into recovery mode, trying to salvage what I could. I started by using dd to Duplicate the Disk onto the NAS so I could restore it if I messed it up worse. I also observed that my Bitwarden plugin has a limited-time retention of my password vault in case it can’t reach its host server. My vault was soon exported, where it would be safe in case of a total system wipe and reset.
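For the record, the backup itself is a single dd command. The device name and NAS mount point below are assumptions from my setup, so check lsblk before copying anything:

```shell
# Identify the SD card's device name first (mmcblk0 on most Pis)
lsblk

# Copy the whole card, byte for byte, to an image file on the NAS mount
# (both paths are assumptions -- adjust for your system)
sudo dd if=/dev/mmcblk0 of=/mnt/nas/backups/blinkypie.img bs=4M status=progress

# Restoring is the same command with if= and of= swapped
```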

With the only irreplaceable nugget safely tucked away, I unsuccessfully turned my attention to extracting the Docker image hosting Bitwarden, settling for the contents of the home directory. This whole time, I was also researching what operating system I was going to replace Raspberry OS with. I seriously looked at Arch, Manjaro, and Apache, though that’s not something I can just do in a single day and expect satisfactory results, so I gave repairing what I had one more chance.

Net Work: Nothing

After sifting through tons of free tools for checking SD card integrity, I landed on linux.goeszen.com/how-to-test-an-sd-card-on-ubuntu. It tells of a tool that “comes with most debian based *nix systems” called dosfsck (also present on Manjaro). I told it to repair the SD card and test for bad clusters. It found a “dirty bit,” which, as best I can tell, is a flag set when the filesystem is mounted for writing and cleared again on a clean unmount; if it’s still set, the card wasn’t shut down properly.

There was another error it found and repaired that I wasn’t able to understand, but when I booted it back up, everything was back to the way it was before. The next step is to make another image and do some research before hacking another OS in there.

Takeaway

In retrospect, I probably should have checked whether GoldenOakLibry, my Network Attached Storage (NAS) unit, can do Git easily. I’ll probably need more than the 64 GB of storage on the server’s card. Besides, I’m asking for trouble if I don’t at least schedule regular backups for something as important as passwords, even if I weren’t thinking in terms of a potentially faulty SD card.

Final Question

On a scale of annoyed to paralyzed, if you had to roll a many sided die and yank that hard drive as if it were now destroyed, how much pain would you be in?

Let’s Build Robotics…

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I have a very exciting project announcement, one I’ve been looking forward to working on since I started this blog. Let’s get started!

Introduction

I’ve always dreamed of having my own social robot, and while I’ve spent the past few months researching the subject in my spare time, I still feel like I don’t know the second thing about actually building one. Way back when, I heard Linux was the operating system of choice for the craft; I’ve been here ever since, and now I’d like to move on.

I want to make an open-source, ambulatory social robot that can hold a conversation, operate independently of a host computer, and be buildable and affordable by an avid robotics hobbyist. I know I don’t have the skills, and technology may need to march on before all my goals are feasible at once, so I fully expect this project to take over ten years. I respect the open source community, and I want to start giving back.

Background

The field of robotics is about as developed as computers were 50 to 60 years ago: large, clunky models confined by their price tags to businesses or universities. Commercial applications cultivate public awareness, leading up to early adoption in the home, with the number of use cases blossoming as the technology develops.

One future of robotics under development has robots built for socialization. Where robots of today may feel like little more than computers personified, social robots can bring a character to life in the real world. Imagine: an automated service may benefit from a relatable face. One robotic puppy now on the market is aimed at forgetful seniors who might be at risk of neglecting a living therapy animal.

Who says a robot must have a rigid frame, run on servos, or use other parts that boost its cost? A number of artificial muscle designs are cheaper, lighter, and simpler to make. I’ll go into more detail in future posts, but the kind I’ve been studying offers a world of design possibilities motors struggle to replicate.

Finishing Thoughts

It is my hope to work on this mega project as I’m able, posting video updates every three months or whenever I reach a major milestone. I will still have other topics as projects come up, and I may even do side projects directly intended to practice and develop skills for this one. For this reason, I won’t be using my usual numbering system. The parts for this robot are mostly out there. I just need to assemble them and make up the difference.

Final Question

What would you do with a personality robot you could take anywhere?