Attempting a Personal Wiki

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am trying to set up a personal wiki to understand my own site better. Let’s get started!

Searching for a Wiki

Pick a game, show, or fandom, and chances are you will find a wiki for it — sometimes more. These online subject encyclopedias are normally open for readers to edit, and otherwise need no introduction.

I already knew at least one wiki package was free and open source, but Wikipedia has a whole list of them [1]. The table was static [note: only on mobile], so I had to mentally track my process of elimination across categories like open source vs. proprietary and even target audience.

My use case is to make a wiki about my sister's fanfiction series. She's built up her own take on the world of Sonic, and we decided it could be fun to learn how to organize elements of her story into a familiar format. If successful, the same process will make it easy for us to grow similar wikis from other stories, like when we're worldbuilding for a possible role-playing game.

I landed on pursuing Foswiki [2]. It's not limited to a single user, it's free and open source, and it has most or all of the features tracked in the comparison cited above. I did not notice at the time that it expects you to work with Perl, a programming language of which I know little more than the name. I am open to learning, though.

Foswiki Host

For the record, I currently lack a dedicated home server I have complete control over: ButtonMash is on picture-scanning duty, and my fleet of Pis is dealing with unknown issues, either a poorly chosen brand of SD cards or one of the machines going bad. Besides, I was hoping some of you finding this post might be interested in doing this at home, so I wanted to use one of my workstations.

I landed on using DerpyChips. Even though it's not always the most stable, it is the most readily accessible of my workstations from any point on the home network. I encountered some trouble updating, though: a repository in its listings was no longer signed. It wasn't until the next day of work that I noticed it was just a repository for Celestia, a computerized home planetarium. So far, the issue has been ignored, but it ate up a lot of the time I otherwise would have used toward the main project. In the meantime, I had reached out to three or four places, including the Pop!_OS Mattermost chat and the Celestia Discord once I identified the problem.

Apache2

A website cannot run on a bare operating system alone, apparently. There's an extra layer called a web server. Apache Web Server is a name I've heard in relation to the web for a while, and I gather it's a popular choice. It was also mentioned in the installation documentation for Foswiki, though only after I installed it from the repositories did I notice it's just one of a few supported options.

Apache installed smoothly: sudo apt-get install apache2, and Firefox was able to reach the Ubuntu test page at Derpy's IP.
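
For anyone following along at home, here's a minimal sketch of that install plus a quick check that the server is actually answering; the IP address is a placeholder for whatever your machine uses.

sudo apt-get update
sudo apt-get install apache2
systemctl status apache2     # should report the service as active (running)
curl http://192.168.0.10/    # placeholder IP; expect the distribution's default test page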

My only reservation at this point is a possible mix-up between Apache and Apache2. It's just a version difference, but I have not spotted Apache2 being called out in Foswiki's documentation.

Takeaway

I fully hoped this would be a one-and-done project. Once I understood the problem and scooted it to the side, Foswiki appeared more involved than I expected. I don't want this to become another long-term project, so I may take another look at that list and see if there's something geared a little more toward a brother and sister organizing thoughts.

I find it odd that Foswiki doesn't capitalize its W when most other wikis appear to.

Final Question

How would you use a personal wiki to organize information?

Works Cited

[1] “Comparison of Wiki Software,” Wikipedia. Accessed: June 13, 2021. [Online]. Available: https://en.wikipedia.org/wiki/Comparison_of_wiki_software

[2] Foswiki. Accessed: June 14, 2021. [Online]. Available: https://foswiki.org/

Family Photo Chest: Part 15.1: Incremental Improvement

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am working with my father, Leo_8472, on creating a master archive of our family photo collection. Let’s get started!

A Project in Motion

I’m starting this post off with a section written by Leo as my first-ever guest writer, since he has been doing most of the scanning.

Hi, Leo_8472 here, and today I am writing about my learning curve on the scanning system Shadow_8472 set up for our family photo archive. Getting set up has been a long process, and now we are starting production scanning. Shadow_8472 showed me the basics of the XSANE scanning software, but here is where I go solo.

The first thing XSANE does when starting is to look for scanning devices. So, the flatbed scanner needs to already be on and fully booted up, or else XSANE will not find it. My next battle was to turn on the ‘Acquire Preview’ so that I can scan only the area of the flatbed with my photo, not the entire glass.

I eventually found the Acquire Preview window option, but not before going down several rabbit holes. One of these pulled me in when I clicked a gamma adjustment control on the XSANE menu and the gamma controls exploded, making the menu extend off the bottom of the screen. XSANE would not let me move the menu up high enough to reach any of the controls at the bottom, like the scan button. So, in an effort to get more screen real estate out of the garage-salvaged VGA monitor, I swapped in an HD LCD monitor from my usual computer. Ahhh, an HD image at last. Almost. The image Button Mash was sending to my HD LCD monitor was exactly the same as what it sent to the VGA monitor. ARRGH! To get higher resolution, I had to dive into the operating system's monitor settings and raise the resolution to the maximum for my new monitor. This mostly worked, and I was able to see more of the XSANE menu, but not all of it.

Shadow_8472 came to my rescue to help me resolve the menu difficulties. He did not know the solution offhand, but he started looking up instructions for the gamma controls on the internet, and we found a "Candelabra" toggle in the XSANE menu which shrinks the menu to a civilized size.

With the menus tamed and the Acquire Preview window showing, it is time to scan. I select a photographic print from the archive and examine its front and back. Some of the photos have valuable information written on the back (old-style metadata). I can often decode some of that info, such as a date or a name, but sometimes it is in Russian, which gives me a hard time. Anyway, photos with writing on the back get scanned both front and back. We decided to add the letter "F" for "front" and "B" for "back" to the end of the file name so that we can keep the files together.

I scan the front of the photo first, save it to a descriptive directory and then scan the back of the photo. We started scanning small photos at 1600 dpi and found the result was huge and filled an HD monitor display. 1600 dpi will pick up the texture of the surface of the photographic paper, so there is plenty of latitude for future cropping, if anyone wants to make an enlargement of this photo in the future.

The scanning process is repetitive, so I try to get into a rhythm of the required steps. Saving each scan to our Network Attached Storage (NAS) was taking at least as long as the scan itself, making for longer wait times. To shorten the wait, we repaired a length of CAT6 cable with a new end connector and put the scanning system on a hard-wired connection rather than relying on Wi-Fi.

We performed a test to measure the improvement in network speed by scanning a sample snapshot and timing the save over Wi-Fi and over the CAT6 cable. A sample snapshot of about 70 MB took about 34.5 seconds to save over Wi-Fi, while a second scan of 90 MB took about 4.5 seconds over the CAT6 cable. Saving roughly 30 seconds per scan for files this size is a great improvement; the hard-wired connection is almost 10 times faster.
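
A quick back-of-the-envelope check of those timings, using nothing but the numbers above:

echo "scale=1; 70 / 34.5" | bc    # ~2.0 MB/s over Wi-Fi
echo "scale=1; 90 / 4.5" | bc     # 20.0 MB/s over the CAT6 run, roughly ten times the throughput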

Future potential improvements to the scanning process would be to implement our ideas on the physical handling of the prints that we are scanning. Another important thing for this scanning project is to try to make progress every day to keep the momentum going. Eventually we will get through the whole archive.

Ethernet Enabled

Shadow back here. A while back, I was given the leftover remnant of a spool from when someone ran network wire under our church. Originally, I was going to use it for a model supercomputer, but that project is on hold pending a better understanding of packet routing; I don't want the individual nodes seeing other computers on the home network.

While originally brainstorming the setup, I had the idea to run that cable from the router to the room where we're set up for scanning, preferably before we cut it up into a bunch of little patch cords. We learned how to put the connectors on, but the cable didn't work.

We eventually got confirmation from a continuity tester we ordered that a couple of wires were switched and one didn't connect at all. In an effort to speed up saving each individual picture, we pulled out the tools to redo the ends. Leo managed to find the exact video for our crimping kit, and when the first end was done, I went ahead and tested continuity again. By chance, we had fixed the bad end, and the results matched a known good cable.

I ran the cable from the Button Mash workstation directly to the router. Once I had it adjusted and everything, it had maybe a few inches to spare. We ran two tests: a ping test and a speed test. When pinging the router at 10.0.0.1, we were hearing back about four times faster. For the speed test, Leo scanned a picture and saved it once over Ethernet and once over Wi-Fi. Ethernet finished saving in about 3 seconds. When we went for Wi-Fi, XSANE crashed while switching over, so we rescanned and saved a similar file, and it took around 30 seconds. My Pi 400 is stepping down from Wi-Fi duty.
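
The latency half of that comparison is as simple as pinging the router before and after the switch; the address below is just our router's from the paragraph above.

ping -c 10 10.0.0.1    # note the average round-trip time over Wi-Fi, then repeat over Ethernet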

AI Enhancement

And now for something I spent a while on, but have yet to get working. I thought it would be funny to pull a fast one on Leo by using an AI covered in one of Two Minute Papers’ YouTube videos last year. He explained how researchers came up with a new way to colorize black and white photos that addressed many issues with missing data, citing subsurface scattering where light bounces around within a subject’s skin before coming back out.

I managed to find the project's GitHub repository and clone it. I tripped during configuration, not realizing the dependencies had to be installed through an environment tool called Conda until someone pointed it out to me. Conda was a bit of trouble in and of itself. I don't know exactly how I got it working, but at one point I repeated an instruction and got a working result when I was expecting another failure.
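
The usual Conda dance looks roughly like this sketch; the environment file name and environment name are assumptions, since every repository picks its own.

conda env create -f environment.yml    # assumes the repository ships an environment.yml
conda activate colorization            # assumed name; check the "name:" field in that file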

I set up the environment and switched to it, but eventually ran into an error I’m not able to blitz my way through: CUDA out of memory. All four gigabytes of my GPU (Graphics Processing Unit or graphics card) were gobbled up by the test pictures, and the model kept asking for more.

I even went to the lengths of asking my sister if I could put my hard drive in her computer to borrow her graphics card, but the thing ate through all 8 GB of hers and burped out the same error without seemingly making any additional progress.

My trouble is I don't know what kind of hardware I need for this. I'm imagining a dedicated server with four top-of-the-line GPUs running headless (no monitors, accessed over SSH). There's still the chance it could be looking at my card and thinking it can take everything, making up the difference with relatively slower resources elsewhere on the system. In that case, it would be pouting at me over having to share with my GUI (Graphical User Interface).

Takeaway

You don't need to wait for perfect conditions to start a project; otherwise things will never start, and new ideas will stay half-developed and untested. Think small thoughts starting out as you continue dreaming big. Invest slowly, as you need it.

Final Question

What sort of features would you like to see if/when I see about a site redesign?

Family Photo Chest: Part 15: Day 1, for Real This Time

Good Morning from my Robotics Lab! This is Shadow_8472, and today is officially, no questions asked, Day 1 of actually scanning my family's photo archive. Let's get started!

Scanner in Motion

We're scanning! We're actually, finally scanning! It hardly seems to matter now that most of my research on fancy techniques ended up in the bit bucket. It's been a long year for me, yet my father had been sitting on this archive for quite a bit longer before I picked it up.

It came to my attention that my main hangup is names. I don’t know who most of these people are, where they are, or when they were there. Meanwhile my father keeps doing show and tell. In the end, this is still his project just as much as it has been mine. I have researched and assembled the equipment and software, but he is going to be the main force behind putting names to everything.

Our System

We are taking some advice we found about grouping photos by location and approximate year. Batches are now defined by these groupings, each in its own directory, for example: LosAngeles-1952. We'll also keep a metadata file we should be able to do something with later if we need it.
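
Concretely, a batch might look something like the sketch below; the NAS path is hypothetical, and the F/B suffixes are the front/back convention from the last post.

mkdir -p /mnt/nas/photo-archive/LosAngeles-1952                   # hypothetical mount point and batch name
cd /mnt/nas/photo-archive/LosAngeles-1952
echo "Found in shoebox 3; mostly B&W 3x5 prints" > metadata.txt   # free-form notes about the batch and its container
# scans then land here as, e.g., LosAngeles-1952-0001F.tif and LosAngeles-1952-0001B.tif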

While the microfiber gloves aren’t working out for us to wear, they’re still microfiber, and they can still clean dust. Even then, there’s only so much that can realistically be done. Rips, scratches, and mold have destroyed original data.

Scanning is unfortunately picture by picture. There is a possibility I might try something with the GIMP XSANE plugin later, but for now, bare XSANE works well when used as intended: preview, select scan area, scan. We’re dumping our scans directly onto the NAS (Network Attached Storage), changing the name of XSANE’s preview as we save it to denote front or back when necessary.

Scanning at 1600 dpi, a postage stamp displayed at full resolution would nicely fill the old, boxy, VGA tube monitor I pulled in from the garage; a postcard natively fills a more conventional HD monitor. I was dreaming it might go up to 4K or 8K; technology progresses, but the archive will not.

To come is digital editing for the best photos. Acidic paper slowly eats itself away, giving old photos that characteristic yellow. Rips, scratches, mold, and dust will take time to remove, but it’s perfectly doable with a brush. And of course, edited photos will get a manual deskew and crop when needed, which is most of the time as I can’t stand to lose detail off the scan area, and it doesn’t line up with the glass perfectly.

Takeaway

Writing on the backs is what really killed most of my grand plans. XSANE really didn't want to accommodate in that department, my half-finished hack confused my mother, and I never got around to touching the script I saw mentioned for putting front and back side by side in the same file. On top of that, scanning backs is prone to mismatches when using the Divide Scanned Images script.

This project is a team effort now. As noted earlier, I’m being the tech support here, and my father is moving to be the star of the show. We’ll be trying for at least a package of pictures per day, and if it all goes well, we’ll bring the second workstation into the mix.

Final Question

Have you ever worked on a project, only to have someone seemingly take over, but realized it was turning into more of a group project with distinct roles?

I Got Git, But No Pi

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am starting out with Git. Let’s get started!

Previously…

Last week, I was all set to install Git on BlinkyPie, my Raspberry Pi 3B+, but it had some problems accessing the apt repositories. I felt lucky getting away with barely fixing what I broke, but the cause remains unidentified.

Git

Disclaimer: this will not be a comprehensive tutorial. There are plenty of those already. If you're just starting out, I recommend a shorter one. I viewed three videos ranging from 15 minutes to an hour (I still haven't finished the hour-long one) and found the information density roughly the same, with the longer tutorials covering features and configurations I'm guessing aren't important for a single-user first project. Focus on the main loop, then expand. For anyone who missed my last post, GitHub is the best-known and most widely used site for hosting Git repositories. I won't be touching it today.

Every time I do any research about Git, it seems about ten times bigger than I last remembered. It's free, it's open source, and it's the industry standard for tracking changes in a project (version control). I should have been using it ten years ago, but like pretty much any tool originally written with Unix or Linux in mind, my initial impression of it on Windows was that of an aquarium at a dog shelter: it's been made to work, but it hardly looks like it belongs. I strongly recommend a project or two in a Unix-like environment to learn the command line if you want to become fluent in Git without relying on a graphical interface.

Git Server Installed, Not Configured

I have yet to see what has become of BlinkyPie, but I was easily able to install a Git server onto GoldenOakLibry, the Network Attached Storage (NAS), from its repositories over the web interface. Had this been restricted, like what happened with my Bitwarden server, I would have proceeded with rigging a Pi to host while storing the data proper on the NAS.

Proper server side configuration will be a matter for another week. My goal was to have a test repository accessible from all my workstations. However, it appears I need to do such things as make a git user and set up credentials and user keys. I’m not prepared to do that this week.

A Simple Start

I know it doesn't sound like much, but I have taken my first steps toward intuitively understanding Git. I made a directory and used git init to tell Git it is a repository. Git in turn made a hidden administrative directory called .git, which I don't touch on pain of messing up the project history. I made a simple text file and walked myself through adding it and committing it, with some difficulty following the on-screen instructions when it wanted a note about the commit.
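
Written out, that first loop amounts to the following sketch; the file name is just an example, and the -m flag sidesteps the editor that tripped me up.

mkdir test-repo && cd test-repo
git init                          # creates the hidden .git directory
echo "hello" > notes.txt
git add notes.txt                 # stage the file
git commit -m "Add notes.txt"     # record it, supplying the commit note inline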

The next step would have been to push it to a remote repository, which I'm at a loss for setting up. While writing, I went through the same steps again on GoldenOakLibry with no idea how I'm supposed to use a server, except possibly by cloning it onto all my client machines. I even went through the confusion that is the vim terminal text editor, since nano wasn't around and won't be for the time being.
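
From what I've read so far, one common arrangement is a bare repository on the server that every client clones from and pushes to over SSH. The user name, host name, and path below are assumptions about my own setup, not tested instructions.

ssh git@goldenoaklibry "git init --bare /volume1/git/test.git"    # hypothetical user and path on the NAS
git remote add origin git@goldenoaklibry:/volume1/git/test.git    # run inside the local test repository
git push -u origin master                                         # the default branch may be main on newer Git
git clone git@goldenoaklibry:/volume1/git/test.git                # what the other workstations would run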

Takeaway

Git is big. Every programmer should learn it. Just don't go learning both Git and the command line at the same time; I had trouble telling which was which way back when. And if you reasonably can, install Linux. All it takes is a small distribution on a thumb drive; an old computer is even better.

Final Question

Got Git yet?

Family Photo Chest Part 14.4: Assembling a Scanning System

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I've studied another photo scanning system available on the web. Let's get started!

Introduction

Scanning seems ever more to be a topic I'm learning about without ever entering production. I've lost count of how many workflow iterations I've come up with, yet each time I feel closer than ever to that most elusive batch #0001.

I've now studied the free portions of two workflows from people who claim years of experience: scanyourentirelife.com and howtoscan.ca, with howtoscan being my favorite of the two. Both give away enough of their respective systems to be functional, and each offers additional step-by-step, click-by-click training priced within reach of anyone who can afford the dedicated scanner recommended for scanning thousands of photos at home.

My outline today is based on what I’ve gleaned from freely available sources, though influences of the above two are more prominent in my mind.

1. Have a System

Anyone can throw a few prints in a scanner and share them online. Hundreds or thousands of pictures taken and organized by people spanning generations mandate some form of structure.

My system at this point looks like it may be sorting pictures by immediate family of origin and further sorting by year. Eventually, each item will end up in one of several 10-12 gallon buckets that can be scanned and moved to a neat “completed” stack.

2. Select Hardware Wisely

You don't need a $50,000 professional scanner if you don't know how to use the $150 one intended for home use that you, like me, may already have lying around.

If, on the other hand, you are buying a scanner new and already have scanner software picked out, like XSANE, you should make an effort to find a fully compatible model. Remember: marketing for home-grade scanners doesn't always have quality results in mind; it's there to sell a product to the general public. That probably isn't you if you're taking the time to research.

Thought should also be given to storage. You may choose to store your photos in the cloud or locally, but if you're planning on your digital archive lasting another 3 to 5 generations, I would recommend some form of protection against hardware failure, such as a RAID configuration featuring redundancy.

3. Assemble a Workstation

Less important is the workstation you'll be scanning at. In my experience so far, RAM is the limiting factor. Right now, I'm working with a keyboard, mouse, and monitor straight out of the 90's. Other than that, your average workstation should be fine.

I will note that since I have animals in the house, I have a room with a door to keep them out. I will also note that I feel safe enough leaving the door open when I’m in the room with all the pictures in the buckets mentioned above.

4. Capture Natural Scans and Fix in Post

Again, standalone scanners aimed at households will have modes invented by the marketing department to sell scanners. Scan at your final resolution, dust and all. Anything you adjust at scan time is permanent in the scan proper. Take advantage of automatic numbering in file names. Work on a copy in a program designed to manipulate images, not one intended to sell scanners. Scan in batches with similar settings to save time fiddling with software settings.

My setup uses GIMP. I’ve taken a class on the basics in Photoshop, and about the only thing I miss is something like the Quick Select tool, but GIMP beta has an early version out for testing before their big 3.0 release. I also plan on using a script called DivideScannedImages so I can scan multiple pictures in a single pass.

Now this last part will hurt, but not everyone will be interested in his or her 8th grade school photos or the backside of a motorcycle frame. Pay the most attention to the pictures people will want to see.

5. Manage Your File System

Whatever file system you use, learn it. Metadata is your friend. Learn how to add tags and dates; for all I know, you can even attach whole pictures (which would be nice for photo backs annotated in foreign scripts, but probably not). HowToScan recommends learning how to use batch handling, and the Linux command line usually has it baked in.
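
As one illustration of command-line metadata handling, here's a sketch using exiftool; the tool, the tag, and the file name are my own example, not something either site specifically recommends.

exiftool -ImageDescription="Back reads: Los Angeles, 1952" scan-0001F.tif    # write the note into the TIFF's ImageDescription tag
exiftool -ImageDescription scan-0001F.tif                                    # read it back to confirm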

Takeaway

I can see why people might spend years figuring this stuff out and value it at X hundred dollars to sell the specifics. Some people need step-by-step, click-by-click instructions, but I don't feel that need myself. I had a class in Photoshop, and most of the important points in one of the howtoscan free ebooks were about using the "healing brush," a technique I once used to remove a power line from a church building photo, the line art of which is still used in the directory to this day.

This has been a hard push of a month. I expect to return to assorted topics next week.

Final Question

How much would you pay to learn how to scan?

Family Photo Chest Part 14.3: Into Production, Right?

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m making the first official scans for this project. Even then, I might have farther still to go. Let’s get started!

False Starts and Setbacks

I’ve wanted little more all month than to finally get some momentum going on this project — to actually have some results I can point to and say, “This is a sample of my finished product.” I want to know tangible progress is being made. Unfortunately, I still have much to learn.

I wrote a series of instructions on how to use the XSANE scanning software with the setup I've assembled. I started with a set that covered opening the program and making sure it was configured correctly. A second batch of instructions covered preparing for a new batch of scans: how to name the directory and reset the name counter. Finally, a third and most important set covered actually scanning each set of pictures, working through the envelope, rubber band, album, or whatever constitutes a manageable grouping. I got through the instructions the first time, and my trainee went back to open XSANE fresh again.

While attempting to push into production, I quickly found the DivideScannedImages script doesn’t do so well with picture backs. I don’t even know how I want to display such pictures with their backs. It’s a topic for some time after I’ve reverted to monthly updates. I’ll probably end up scanning and retouching such pictures manually.

I decided to narrow the focus of which batches were eligible to scan. If you have the negatives, that's all the detail there ever was to capture right there. I was going to save batches containing that original data for another week, but the archive has way more negatives than I anticipated, and my focus was now too narrow.

One of the scanners I'm working with came with a slide/negative holder, and XSANE has a host of negative presets. I ran some numbers, and I figure I'm capturing about the same level of detail scanning prints at 1200 dpi as I would maxing out the scanner on negatives at 6400 dpi (XSANE does not appear to give access to the interpolated resolutions listed in the scanners' specifications).
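
Here are the rough numbers I mean, assuming a standard 35 mm negative (about 1.4 inches on its long edge) and a 6-inch print made from it:

echo "6400 * 1.4" | bc    # ~8960 pixels across the negative's long edge
echo "1200 * 6" | bc      # 7200 pixels across the print; the two land in the same ballpark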

But the process was way more involved than I thought. The colors were way off once inverted, and there was some pretty bad speckling that overpowered the filter on XSANE’s post-scan viewer. My father and I tried again on a more recent and hopefully less faded negative, and the color sort of came through… after I had poked at it with GIMP, but it wasn’t something I would put on display.

My shortcomings don't end there. I have albums wider than the scan bed to consider. There was also a time when photos came with a CD, and I might be able to harvest metadata from those. Duplicate detection and CD metadata harvesting both remain on the to-do list.

A Footnote on UI

On a more positive note, I've been learning more about other aspects of using retro hardware. The division script's dialog in GIMP is too tall to display on this old monitor, even when hiding the panels ("start bars") at the top and bottom of my screen. This is more a desktop environment problem, but by digging in my settings, I found that Alt + Drag (or Super/Windows + Drag) moves windows around without the need to grab the title bar. I put the script's dialog up and out of normal bounds, and it failed spectacularly.

Takeaway

These setbacks are the very reason you would pay good money for a professional who already knows how to scan. Besides, properly scanning negatives means illuminating from one side and scanning from the other, and I'm working with a purely reflective system. Suffice it to say, negatives are beyond the scope of this project, and I'll be focusing on prints until otherwise stated.

Final Question

What project have you tested your patience with? How is it coming along?

Family Photo Chest Part 14.2: Prepared to Scan At Last

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am bringing my workstations online. Let’s get started!

Calibration

Last week, I detailed a workstation I assembled using found parts. I’ve gotten used to the trackball mouse, but click and drag is next to useless.

Also, just now while writing, I was looking for a place to put my tablet (between two scanner workstations, the desk is full) and thought it would be a good idea to slap it on this monitor from a more cubic era. Moments later, I had switched to dark mode, and something appeared to be messing with the colors on some of the icons in the upper left part of my screen. It turned out to be the magnets in the tablet. Degaussing fixed it, but when I removed the tablet, the colors went off again. I have since degaussed again, and everything is normal.

Workflow And Training

I'd rather look back wishing I had done a better job than look back wishing I had done any job at all. For this reason, I am shelving a lot of the research and deliberation I got myself lost in. Perhaps in a few years, I can redo the master digital archive with better-supported equipment. In the meantime, I've selected a resolution that should be good enough to enlarge, yet small enough to store.

I wrote a set of instructions detailing my prototype workflow and started training family members in how to operate the scanner. As I went, I noted where they got confused and adjusted the instructions accordingly.

My first set of instructions covers how to start a work session: make sure the scanner is on, start XSANE, set the resolution correctly, and check that the persistent settings are correct.

Structure in the analog archive I’m digitizing is sporadic, but when it’s present, I’d like to respect it. Work will be divided into batches. My instructions detail how to name each batch and to make a metadata file describing the batch and the container it was found in, like B&W vs color and print sizes.

Finally, a third set of instructions is all about individual scans. Line things up, don’t go over the scan area, get a preview, and don’t bother with zooming in on that preview because there’s no sideways scrolling and no way to quickly alternate between zooming in and out. A final inspection checks for dust or hair/fur, and I have a little something in there for when pictures have notes on the back.

Challenges

I'm scanning to TIFF files, but I want the ability to include the backs of prints too. Ideally, I would just add .front or .back to the filename, but XSANE's automatic numbering is stubborn: it wants a four-or-more-digit number at the end of the file name and refuses to recognize multiple file extensions. I've resorted to manually setting the file type to TIFF and using the front/back extensions.

XSANE has a preview feature. I am using it to select occupied parts of the scan bed to reduce scanning time. But that doesn’t work without click and drag. I’ve since added a wireless USB mouse, and the trackball is good for 2D scrolling.

Speaking of scrolling in all four directions, while setting up my laptop for the same procedure, I had to get into the touchpad settings. It was something I had found a little annoying, but it was an easy fix when I bothered to look for it.

Final Question

Have you ever needed to write instructions for others to follow? How much did you need to change, even though you thought you thought everything out ahead of time?

Family Photo Chest Part 14: The Tracks are Built, Bring on the Locomotive

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am technically ready to start my first batch of scanning photos. Let’s get started!

Early Start

I am tired of this project dragging on seemingly forever. Whatever I'm getting done, I want it working this week. My plan has been to scan directly into the DivideScannedImages script for GIMP, and for that I need the XSANE plugin (XSANE being the X front end for SANE, Scanner Access Now Easy). Every standalone version of the plugin I found was ancient and obsolete. It turns out installing plain XSANE includes its own GIMP plugin, as confirmed by running xsane -v and looking for a line about GIMP. Just know that if you're trying to check the version over SSH, it really wants an Xorg display: export DISPLAY=':0' worked for me.
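
Spelled out, that over-SSH check looks roughly like this; the display number is whatever the machine's local Xorg session is using, :0 in my case.

export DISPLAY=:0               # point the command at the machine's local Xorg display
xsane -v 2>&1 | grep -i gimp    # a line mentioning GIMP means the plugin support is present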

GIMP has a powerful scripting language built in. With it, you can automate most anything, albeit with a little difficulty. You can even use it to script events when launching GIMP with the -b flag (b as in batch). I took a look at it. It doesn't look that bad to learn. It's heavy on the parentheses, but I'd hesitate to directly call it LISP.
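
As a taste of what batch mode looks like (a trivial sketch, not my dividing script), this just asks GIMP for its version and exits:

gimp -i -b '(gimp-version)' -b '(gimp-quit 0)'    # -i skips the interface; each -b runs one Script-Fu expression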

I got as far as calling the XSANE plugin on startup from within the DivideScannedImages script. I was a little short on time to struggle through getting it just right, so I reached out for help on a GIMP Discord, but then I began to reconsider everything.

Progress Rejected

I’ve been a bad programmer. So many dead ends. So many side projects distracting from the main goal. I have an unknown deadline for this project, and I really need to cut the fancy stuff I’ve been working on and do something that actually works.

I also got to thinking, Who am I designing this for? I had been working on a command line setup for me, and my mother has graciously offered to help with scanning a little bit at a time. She doesn’t do the command line outside Minecraft. As a good programmer, I need to consider my end user’s needs, and she needs a graphical workspace.

Thinking like my mother would think, I made a directory on my desktop for shortcuts related to this project. So far, I’ve made launchers for XSANE, the network share for the pictures, and GIMP. I may develop it later.

My new vision is to just use the tools I have: XSANE to scan and save locally and GIMP to separate and deskew pictures automatically and store them in the digital archive before someone either deletes or offloads the original scans. I can make a text file with miscellaneous metadata for each batch. A manual review can flag photos that will need additional touchup.

Testing the Workflow

I used a couple of pictures from when I was little to test my workflow. I laid them on the scanner with a bit of a tweak. It took several attempts to learn the limits of the scanner: the scan head doesn't actually reach the full scan bed, pictures get pinched under the edges if I'm not careful, and it's very easy to accidentally overlap pictures. All good reasons to preview each scan.

Deskew isn't a miracle, but when I did a side-by-side comparison on a sample size of my two test pictures, it got one almost perfect and reduced the tweak on the other, though my sister said the deskewed one might be a little fuzzier.

Takeaway

I cannot emphasize this point enough: good programmers build software for end users. It’s fine to hack together a piece of software you understand, but if you want to share your creation with someone else, you’ve got to make a relatable front end.

Final Question

What elements of a project have you given up for the sake of an end-user?

BitWarden Operational and SSH Housecleaning

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am giving my BitWarden server a bit of a shake down, and since that didn’t take as long as expected, I have a story or two from rearranging my SSH keys. Let’s get started!

Server Fully Operational

Picking up from last week, I installed a BitWarden home server on BlinkiePi and set it up with a static IP making sure it had a unique hostname. To test it, I plugged it directly into my home router. I had to generate and install a self-signed security certificate so the browser plugin could recognize my server once I had directed its traffic appropriately.

I started early this week, expecting the firewall to be crazy complicated and maybe an exercise in futility, but that wasn't the case. I found a package literally named Uncomplicated Firewall (ufw). It installed no problem, and I was easily able to reject unrecognized traffic by default, then allow ports for SSH and BitWarden.
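
The rules I ended up with amount to something like the sketch below; the BitWarden port shown is the standard HTTPS port and is an assumption about my particular setup.

sudo apt-get install ufw
sudo ufw default reject incoming    # turn away anything not explicitly allowed
sudo ufw allow ssh                  # port 22
sudo ufw allow 443/tcp              # BitWarden web vault over HTTPS; port assumed
sudo ufw enable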

I then went ahead and installed BitWarden plugins on my remaining computers, trying and failing to follow all the important steps from memory until I gave in and looked up the tutorial again. Later in the week, I wanted to ensure my setup could withstand a power blink, so I cut power and later restored it. I expected I'd need to spend a few hours figuring out how to get it to auto-start, but it's almost like this project wants to short me of content, because I was able to reach its web interface no problem.

SSH Keys Between My Computers

I don't like entering passwords every time I want to log into a system. SSH keys are way faster and more secure: the host machine essentially lets you in by scanning an ID instead of stopping for a secret handshake that can be more easily faked.

I did some research a while ago and found questions as to whether the RSA method of making keys was still okay to use. To be honest, if it weren't, OpenSSH would probably push an update blocking its usage, or at least notify users that it had been cracked wide open.

Nevertheless, when I redid my SSH easy-access network, I used ed25519 to make my keys and used ssh-copy-id to move them from one computer to another. I have three workstations I flip-flop between, as well as my new password server and my Pi 400 hack router. Now that I think about it, I could include the NAS and the Pi 4 serving as our entertainment center, but that will wait for a later date.
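
Per target machine, those two steps look roughly like this; the user and host names are placeholders for whichever box I'm granting access to.

ssh-keygen -t ed25519                 # generate the key pair, accepting the default location
ssh-copy-id shadow@derpychips.local   # append the public key to the target's ~/.ssh/authorized_keys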

One nice surprise came while I was copying a key from my main desktop on the 400's subnet to one of my machines on the wider home network: my desktop didn't recognize the target computer, but the Pi 400 did, so the router effectively vouched for the host I was reaching out to.

Takeaway

I suppose I could improve my setup with auto updates. That will mean another hole punched in the firewall, but in all reality, that’s a topic across my network for another day.

Final Question

If you were to spend a week in space, what games would you feel obliged to play along the way?

Raspberry Pi 400: Another Chance

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am re-evaluating my Raspberry Pi 400 unit I covered last week. Let’s get started!

The Story So Far…

The Raspberry Pi 400 is the Pi 4 rearranged and built into a Raspberry Pi keyboard unit, available with multiple standard keyboard layouts from around the world… but it's not quite the same board. I already knew about some changes they made to the CPU, but in theory, that shouldn't have affected the 400's wireless abilities the way I was observing on my 64-bit installations: LibreELEC and ManjaroARM.

But a sample size of 3 (the 32-bit Raspbian images work properly) is too small to claim a manufacturer defect. If I was going to pay for warranty shipping, I wanted to see the failure on their own software. That is why I went looking for the elusive 64-bit Raspberry Pi OS. I kept finding the 32-bit version and had to call it quits last week.

Installation and Testing

But this week, I tried the official card imager with its list of first- and third-party images to install; none were 64-bit Raspberry Pi OS. There's plenty of material out there about the 64-bit version, but the download link is well hidden from people who won't understand what beta software is all about. In the end, I located the correct image, and between downloading, flashing, and testing, it took half an hour of low engagement.

Finally, I booted the Pi 400 using that card, and it unexpectedly just worked. I went ahead with the setup ritual: telling it the country, time zone, passwords, and the like. Options for making Wi-Fi connections came up before I even got around to asking it how many bits it was running on (see last week's post): 64.
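
If you want to run the same check, one quick way (not necessarily the one from last week's post) is:

uname -m    # aarch64 means a 64-bit kernel; armv7l means 32-bit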

Further Testing

I honestly was expecting to cover the Pi 400’s exchange before this point. I just so happened to have updated my Manjaro card in an attempt to fix a problem keeping us from streaming my church’s Christmas program on Sabbath (turns out https was complaining about system time being set manually, and a couple months behind at that). The freshly updated card worked as well.

At this point, I was convinced it was just a matter of software. I booted into LibreELEC, which amusingly requires inserting the card part way through booting, and ran into the same problem as before. I even tried with an Ethernet cord we used to download the service, and saw no evidence of the clock updating.

Conclusion

The Raspberry Pi 400 is a relatively new product. With a better idea of where to look, I landed on a LibreELEC forum post. It states that the Pi 400 has a "somewhat different [Wi-Fi/Bluetooth] chip," but that by manually updating the kernel somehow, it is possible to hack something together. Honestly, I have other projects I'd rather do, and according to the same post, they're already planning on supporting the Pi 400 in the next major release. I can wait.

Final Question

Have you ever looked at something as you were about to replace it just to poke at it a bit, only to find there was nothing wrong with it in the first place?