Blinkie Pie Case Repair: Stage 1, Part 2 (of 1.5)

Good Morning from my Robotics Lab! This is Shadow_8472, and I am never using Stage/Part sections again. Let’s get started!

Last week, I tried super gluing Blinkie Pie’s base back together, with unpromising results. After two successful runs gluing segments of an interrupted print to its own raft that I somehow still had around, I went on to the main event, but it failed hard, the broken tab springing away from its base.

I hardly touched my project all week.

Sunday came and I forced myself to work on it. I was expecting to clean it off with some acetone and start over again, this time with the baking soda method that gives the super glue something to latch onto.

I pulled my project out and took inventory. I was surprised to find that not only had the broken piece returned to its proper seat, but it was holding up to a semi-reasonable amount of pressure. No way am I stress testing it, but I’m hopeful this case will continue to serve me for a long time to come.

I set the base aside, then my father helped me tend to the top half. I took the camera out last week, so it wasn’t attached to any electronics or anything. As of writing, the glue is not quite to the 24 hour mark needed for full strength.

Next week, I expect to do a little sanding and painting on Blinkie’s dome head, but only if there’s glue oozing out the back.

I’ve also had a vague thought that I’ll need to get into soldering some day. It’s no time soon, but I really hope it’s not too much of a challenge.

Final Question: If you were a real-life spacecraft, what would you be, and why?

Blinkie Pie Case Repair: Stage 0, Part 1

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I can’t find my underscore key! I must have tried at least ten touch type attempts trying to find that stickler before I hit it. Today, I am actually planning out a new project: repairing my Pi case. Let’s get started!

Update on the bot: Project Canceled. Everything –and I mean everything– I saw said Discord audit logs would be available for 90 days after the incident. They vanished, and a screenshot I had of the logs from the day of was only 55 days old. I contacted their support, but they eventually told me, “The team has confirmed that it is an intended behavior with audit logs. And we can’t offer troubleshooting steps to fix it,” and to “Please update us here [Discord feedback] if you have further inquiries.”

I really need to pay closer attention before I say something to the effect that they are doing a terrible job. The support was friendly, and usually got back to me within a day. It was only after I replied to that last message and never heard back that my opinion tanked. I might be coming back to it yet.

Moving on to the subject I actually did my research for today:

I crushed my Pac-Man Ghost case while transporting it a month or so ago. I want to repair it. I went into the workshop to brainstorm. I learned about techniques for joining PLA pieces, but I can tell this will take a lot longer than I had hoped.

Both the top and bottom are damaged. The head piece only has a crack, though it goes clean through the paint, while the bottom has a broken tab hanging on by a few lines of filament.

The quick and obvious solution is super glue. I’ve been warned against it, but I think that may be the way to go for the head piece because it’s otherwise mostly structurally sound. The bottom piece will require a little more creativity.

I read about two other repair techniques, one of which I was already speculating about. Friction welding would involve spinning a bit of filament in a small motor to heat the joint, then quickly pressing it back together. Soldering would also work, and hopefully not lose any material, but would require additional dexterity.

Worst case scenario: I botch a repair of the bottom and end up printing a new one. If that were the case, I’d want to get in there with Blender and adjust the inside to actually match my Pi 3B+ model instead of the Pi 2 it was designed for. I’m expecting this project to last at least until Christmas, even if it’s not interrupted by something more worthy, like picking up that bot project again.

Final Question: What PLA repair techniques should I research?

A Seemingly Complete and Utter Waste of a Blog Post

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m finishing my Discord bot! And today, I’m getting back into my Pi Spy: Feline Edition. And today, I’m solving, er, exploring the pseudoscience behind a chapter in a book I’m writing. It’s not quite on topic, but let’s get started anyway.

I was supposed to have 90 days of the audit log. They dissipated after 55 days. I took screenshots the day of, and the file will be 56 days old as of this posting. I’m a tad annoyed, but at least I learned something along the way.

I would have been interested in getting back into dealing with Blinkie and the kitty caller, but the case is broken and it still needs to be repaired. I’m hoping I don’t have to reprint the whole thing. Maybe I will see about using a soldering iron… I might actually try that one.

OK, onto business. About two years ago, I was in a role play set in the world of My Little Pony: Friendship is Magic where I took the role of two characters: a little unicorn foal by the name of Applesauce, and an elderly Crystal Pony by the name of Professor Stone W. Jay, PhD. I eventually settled on calling him Dr. Stone. I later found out about the manga of the same name. There is no relation between them.

At the end of the first big section of the game, Dr. Stone is in a small town when a dimensional rift opens up and starts spewing enemy soldiers. Having had military experience in the past, Dr. Stone takes command and MacGyvers a temporary victory.

In my novelization, I decided I wanted to add a nod to Physics without kowtowing to it. Since this is a magical setting, I decided I wanted a time dilation field so that as you approach the rift without the proper spells on you, your personal clock slows down, but you don’t suffer the tidal effects one would normally expect from several solar masses crushed to a point mass and placed in the town square.

My original question was along these lines:

  • Dr. Stone is at 50 <= r <= 150 feet from the rift.
  • Dr. Stone experiences 15 <= t(observer) = t(o) <= 30 minutes.
  • The world outside the significant effects of the time dilation field experiences 3 <= t(fast) = t(f) <= 6 hours.
  • The rift in the town square respects time dilation as if it were a singularity with enough mass to dilate Dr. Stone’s frame of reference by the factors given.
  • The rift’s opening is no bigger than a pickup truck, but no smaller than a human.
  • Does Dr. Stone see an Einstein ring?

I’ve researched by posting in forums and over Discord. I’ve gotten a whole array of people responding from one or two overly blunt Discord users who wouldn’t buy into my fantasy to an Oxford professor who unfortunately didn’t know anything either.

I did manage to get the attention of a few helpful individuals; one suggested lighting effects while another got me thinking about temporal gradients internal to the body. One individual going by Needling Haystacks has been particularly helpful as someone who knows a bit more than me on the topic. Turns out, the air around the singularity would mess with any Einstein ring in the first place.

In the search for a workable equation, I found one for stationary satellites above a non-rotating sphere, then I spent a whole evening learning about the universal gravitational constant “big G” and not finding it expressed in Imperial units, likely because 99.99 percent of all people with any business even trying to understand equations with big G are already familiar with the SI system.
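For anyone wanting to follow along, the calculation can be sketched in a few lines of Python. This is only a back-of-the-envelope sketch, assuming the Schwarzschild formula for a stationary observer, a distance of 100 feet (the middle of my range), and the 12x dilation factor implied by the numbers above (15 minutes inside to 3 hours outside):

```python
import math

G = 6.674e-11   # universal gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s
FEET_TO_M = 0.3048

def mass_for_dilation(r_m, factor):
    """Mass whose Schwarzschild time dilation at radius r_m slows a
    stationary clock by the given factor: t_o/t_f = sqrt(1 - r_s/r)."""
    # sqrt(1 - r_s/r) = 1/factor  ->  r_s = r * (1 - 1/factor**2)
    r_s = r_m * (1.0 - 1.0 / factor**2)
    # invert the Schwarzschild radius formula r_s = 2GM/c^2
    return r_s * C**2 / (2.0 * G)

r = 100 * FEET_TO_M       # Dr. Stone's rough distance from the rift, meters
factor = (3 * 60) / 15    # 15 minutes inside vs. 3 hours outside
M = mass_for_dilation(r, factor)
print(f"required mass: {M:.3e} kg ({M / 1.989e30:.4f} solar masses)")
```

Conveniently, the other bounds in my list (30 minutes to 6 hours) give the same factor of 12, so only the radius moves the answer around.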

At this point, I managed to plug all the variables in, and my graphing software gave me a shape I wasn’t expecting. I’ll need to run some more tests.

Final Question: Have you ever gotten overzealous over what’s supposed to be a small detail, but ended up with something much different?

Programming a Pi With a Discord Bot Part 1 (of 1 hopefully)

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m inching back into actual programming, with the SSH skills I’ve built up over the past month paving the way for a bit smoother workflow. Let’s get started!

A few weeks back, I was made an admin on a Discord server after an unsavory character I shall only refer to as Z nuked it majorly. He took advantage of the owner’s weakened state after constant raids, unbanned pretty much everyone, and kicked about two thirds of the total population. Z’s rampage went on for around 10 minutes, but he caused massive damage.

As an admin, I’m helping rebuild the server. One of the tasks is rebanning everyone Z unbanned. Oops. Discord only gives you the option to ban someone if they have some sort of presence on the server by way of being a member or having a message.

When Discord bans someone, it offers to permanently delete a few days of messages from them. Troll tactics often involve getting a bunch of alt accounts on at a time for wide-spread harassment. When the server staff is doing a good job, alt accounts get banned quickly, and all traces of them are incidentally removed from the server. Bad news for me when trying to reban.

The good news is that bots have a little more access to Discord’s full feature set, and one of those tools is banning accounts without needing a membership or a post. However, with a new goal in a slightly unfamiliar field, that implies new learning to be had.

The first step was defining what exactly I want it to do: Reban the unreachable. Next, I settled on hard coding each name that still needed banning. I’ve had issues with bot stability when running someone else’s dice bot myself, but I don’t need to worry since the finished bot need only run once and it’s done, ready to archive.

The hardest part would have been getting Python 3 onto it. Lucky me, I already had the earliest version of Python supported by Discord’s Python wrapper installed on Blinkie from my naughty kitty project. I just had to install a few dependencies.

I met someone on Discord by the handle of Damaged, who gave me the majority of my assistance. They actually pointed me toward programming my own bot instead of just trusting someone else’s. “What I can say is that you need to be very careful of any bot you want to get to [ban people]. You’re effectively going to give them the keys to the castle.” Security is a concern. I once hacked a new bot (with permission from an admin) into pinging everyone, something a normal account doesn’t let you do. The last thing I want is to leave devastating tools laying around for possible trolls to find.

Banning doesn’t sound like anything too fancy to program, and the skills and tools I develop for this project should be useful later. I learned how Discord bots log into bot accounts with some kind of token. I learned how to tell the bot to print its account ID to the command line and use that to add its account to a server. As of the writing of this post, I have gotten a sample bot debugged and running and responding to a simple command.

The next step was a little revision in plans. I have about 50 names or so to reban in total, and a bot can theoretically do the deed quickly and more thoroughly than I can by hand. I figured it was likely somewhere around a wash, so I’m going to program the bot to look for all of Z’s unbans and reban them.
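The heart of that plan can be sketched without touching the Discord API at all, which keeps the idea visible. In the real bot, the entries would come from the audit log via the Python wrapper, and each returned ID would get an actual ban call; everything here (the tuple layout, the names) is just my stand-in for illustration:

```python
# Sketch of the reban logic. Each audit-log entry is modeled as a
# simple (actor_name, action, target_id) tuple; the real bot would
# build these from the server's audit log instead.

def targets_to_reban(entries, actor="Z", already_banned=()):
    """Collect IDs the given actor unbanned that are not currently
    banned, preserving order and skipping duplicates."""
    seen = set(already_banned)
    result = []
    for actor_name, action, target_id in entries:
        if actor_name == actor and action == "unban" and target_id not in seen:
            seen.add(target_id)
            result.append(target_id)
    return result

log = [
    ("Z", "unban", 111),
    ("Z", "kick", 222),            # not an unban; ignore
    ("Moderator", "unban", 333),   # not Z's doing; leave alone
    ("Z", "unban", 444),
    ("Z", "unban", 111),           # duplicate entry
]
print(targets_to_reban(log))  # -> [111, 444]
```

Keeping the decision logic separate from the API calls also means I can test it without pointing a live bot at the server.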

Other possible bot features I might like include the ability to nuke someone’s entire history with the server. Another person who left now has a crude name I wouldn’t mind seeing gone.

Final Question: Not all bots need potentially dangerous permissions to run. What are some niche moderation bots you’ve seen/used?

Programming a Pi to Deter Cats: Part 9.3

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m off on another tangent away from my main goal of Kitty-Cam security. Let’s get started!

Last week, I went over the basics of logging in securely without a password. This week, I tried practicing my skills by setting up an SSH key pair. Somewhere, I messed up royally, and I broke the connection from the laptop as well.

It feels like my knowledge in computers is growing like a pile of sand, with the goal being to reach some arbitrary height. Each grain of sand higher means another shell of grains around the outside, and if one falls in just the wrong place, I have to learn how to fix it. For monolithic goals, this is bad news, but in a wider scope, it’s not so bad. If I set a new goal in a similar field, a lot of my grains of knowledge will still be applicable to this new task. I suppose that’s why they call it a learning curve, though right now, it feels like a learning slope.

I went into the workshop and spent a while banging into a learning wall. My progress from last week had seemingly broken, and I could not figure out how for the longest time, even up until closing time. Apparently, in addition to SSH public and private keys, there’s this thing called a host key. From a little more research, it looks like a host key identifies the server itself; you compare its fingerprint over a separate communication line to make sure you don’t have a man-in-the-middle attack, where someone passes all the traffic between you and the server along while taking notes on your otherwise private conversation.
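Here is a rough sketch of what that check looks like with OpenSSH’s tools. I generate a throwaway key so the example stands alone; on a real server you would point ssh-keygen at the existing host key under /etc/ssh instead:

```shell
# Start clean so the example can be rerun.
rm -f /tmp/demo_host_key /tmp/demo_host_key.pub

# Generate a stand-in host key (a real server already has one in /etc/ssh).
ssh-keygen -t ed25519 -N "" -q -f /tmp/demo_host_key

# Print its fingerprint; read this off over a separate, trusted channel...
ssh-keygen -lf /tmp/demo_host_key.pub

# ...and compare it to what the SSH client shows on first connect. If
# they match, nobody is sitting in the middle relaying your traffic.
```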

I still haven’t gotten my side project working, but now, I think I know the correct direction. I’d like to get this working, because an interesting robot platform I’ve been following for a number of years, Sweetie Bot Project, finally made their code open source. With any luck, I’ll have my skills refined to where I can build one when they release the hardware as well.

Final Question: What projects have you looked forward to working on?

Programming a Pi to Deter Cats: Part 9.2

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m exploring SSH a little farther. Let’s get started!

Just as there’s more than one way to tie a knot, there’s more than one way to secure communications between two computers. Last week, I talked about how to use SSH to log in remotely. This week, I learned how to use a pair of cryptographic keys to log in without a pesky password preventing punctual passage.

I’d like to take a break here and just say that when you have multiple implementations for the same task, it can be difficult to choose which one to go with if there isn’t already some sort of standard in place. For example, if you are just starting to look at getting a new computer and you know precious little about them, it would probably not be a good idea to go with a completely unknown operating system like PurpleDragon (fictional example). There isn’t likely much support for it, you likely won’t find much software for it outside what comes with it, and you just might have issues trying to connect with other computers if it isn’t programmed with standardized network protocols. This isn’t to say you will never want to run something like PurpleDragon, especially if you are in charge of a secret super genius’s lair, need to keep outside hackers out, and can afford a whole IT staff to maintain it.

Whatever the case, I’m now working with PuTTY, a fancier SSH client that’s supposed to work a little better than the Windows extension I’ve been working with. The cryptographic keys I mentioned earlier are generated in pairs. Oversimplified, you get a public key you can hand out freely and a private key you keep secret. You copy the public key to the server, and when you connect, the server uses it to verify that you hold the matching private key, all without a password ever crossing the wire.
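For the record, here’s what the whole dance looks like with plain OpenSSH on the command line. The Pi’s hostname and username below are placeholders I made up, and PuTTY’s companion tool PuTTYgen handles the generation step with a GUI instead:

```shell
# Start clean so the example can be rerun.
rm -f /tmp/demo_id_ed25519 /tmp/demo_id_ed25519.pub

# 1. Generate the key pair (no passphrase here, purely for the demo).
ssh-keygen -t ed25519 -N "" -q -f /tmp/demo_id_ed25519

# 2. Install the public half on the server (hostname/user are made up):
# ssh-copy-id -i /tmp/demo_id_ed25519.pub pi@blinkie.local

# 3. Log in; the server checks you hold the private half, no password asked:
# ssh -i /tmp/demo_id_ed25519 pi@blinkie.local

ls -l /tmp/demo_id_ed25519 /tmp/demo_id_ed25519.pub
```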

Final Question: It feels a little awkward learning about modern cryptography with quantum computing coming any day now (relatively speaking). Once quantum computers are easily accessible to the public, much of today’s cryptography will be obsolete, but they should also give us effectively unbreakable quantum cryptography. How long do you think it will take to effect this change?

Programming a Pi to Deter Cats: Part 9.1

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am starting a small side adventure with my Pac-Man ghost inspired Raspberry Pi project. Let’s get started!

This week, I took a break from programming. For weeks now, I’ve had to share an HDMI monitor between my desktop and my Pi, which I’ve named Blinkie. In past weeks, I’ve set up a VNC server so I can remote in and take control of Blinkie from my desktop with a VNC client, but I have a few edges to smooth out.

Firstly, the system seems to get its resolution from an attached HDMI display. If I boot it up headless (no monitor attached), then Raspbian will boot the GUI into a low-resolution mode, presumably to save on CPU time. If I connect a monitor later, it doesn’t auto-adjust, and I end up rebooting. I’ve taken to reaching behind my monitor and switching the HDMI inputs, and they can be difficult to get on the first try.
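One workaround I’ve run across is telling the Pi’s firmware to pretend a monitor is always attached by editing /boot/config.txt. The mode numbers below are an assumption for a 1080p computer monitor, so check the official mode table for yours:

```ini
# /boot/config.txt (Raspbian): act as if an HDMI display is always present
hdmi_force_hotplug=1
# group 2 = DMT (computer monitor timings); mode 82 = 1920x1080 @ 60 Hz
hdmi_group=2
hdmi_mode=82
```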

Secondly, when I’m no longer working on Blinkie, it will have two jobs: guard the kitchen, and stream the church service on Saturday morning. This implies both headless and headed operation. I can easily optimize for one or the other, but getting it just right will be tricky.

The bulk of my work this week went into turning Blinkie from a tiny desktop into more of an actual security camera arrangement. I taught myself about SSH, the most fundamental skill I’ll need for operating it at a distance. SSH feels similar to VNC in that it has a server and a client, but it’s for the command line only. Since I’m starting by optimizing for the security camera configuration, I found a way to boot straight to the CLI, and a way to enable a virtual desktop in memory from that command line. I ran into a little surprise.

The first problem I found was a quick fix. Instead of the traditional pointer I’m used to, it favored a black X. LINK to the forum thread I used. Another issue was the unexpected change in aspect ratio. I ended up just moving the whole VNC window over to my other monitor, since it about fit there and I don’t particularly relish the black border around the desktop. I can rework my workflow on Blinkie to accommodate the narrower work area.

I still have some bits to figure out. The mouse buttons are still double swapping. If I’m going to be going between headless and desktop modes, I want a consistent, swapped-button experience. The worst case scenario right now is writing a bash or other kind of script to change the setting based on whether there’s a monitor connected at startup.

I also need to find an easier SSH client for Windows or else find a way to zoom in on the text. Also against the default Windows SSH client is the fact I cannot just paste a long, super secure password into Command Prompt for logging in. In addition, I may be interested in looking into an Android SSH client for both now and future projects.

Final Question: Have you ever used SSH on an Android device before? Do you recommend any in particular?

Programming a Pi to Deter Cats: Part 9

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I feel like I’m progressing in a non-linear fashion. Let’s get started!

Last week, I cleaned up my code by moving the bits I still wanted over from one .py file to another. My goal for this week was to distinguish between objects moving in the foreground and a stationary, but adaptive background.

My progress this week doesn’t feel quite so linear. I got about halfway to my stated goal on my own while working with a premade background subtraction function, and ended up getting stuck trying to find a way to monitor my own progress. I’d like to first see my foreground, preferably in color, then I’d like to see my present background so I can hopefully understand the algorithm better.

I went into the workshop for a little help, and things really went in a lot of odd directions after that. We spent a while chasing oddball, but not quite false, leads and a couple of big bugs. We downloaded and installed an IDE called Spyder3.

As an aside, I would just like to say that IDEs are both a curse and a blessing. They are a pain to get working correctly, especially if you’re working with anything outside the core language. But they make development easier, and they are a lot friendlier than a command line for new programmers, once again provided someone who knows what they are doing sets them up, or the programmer has ample amounts of both luck and determination.

Spyder took a lot of hassle to generate a few nuggets of debug gold, and it’s still misbehaving. I plan on just using it as a nicer environment for coding, but I intend to use the command line for testing unless I need a closer look at my program’s state when it fails.

One of my big breakthroughs this week was finally recognizing a mismatch between a list of lists of lists and a list of lists (a 3D array and a 2D array). It looks like one of those bugs I had was about an improper use of a blur function. My current understanding is that it outputs a “color” frame even if it gets a grayscale input.
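The mismatch that bit me, in miniature: a grayscale image is a 2D array, while a color image is a 3D array with an extra channel axis.

```python
import numpy as np

gray = np.zeros((4, 6), dtype=np.uint8)      # rows x cols
color = np.zeros((4, 6, 3), dtype=np.uint8)  # rows x cols x (B, G, R)

print(gray.shape, gray.ndim)    # (4, 6) 2
print(color.shape, color.ndim)  # (4, 6, 3) 3

# Indexing one pixel makes the difference obvious:
print(gray[0, 0])    # a single number
print(color[0, 0])   # a 3-element array, one value per channel
```

Mixing the two shapes is exactly the kind of thing that produces confusing errors several function calls downstream of the real mistake.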

I eventually arrived at a grayscale, background-subtracted image with poorly defined edges. It seems to have a hard time with my purple shirt against another background.

My goal for next week is to stabilize the background subtraction with an edge-smoothing algorithm, and hopefully be able to draw boxes around objects.

Final Question: Did you, or would you, rather learn programming with an IDE or with a command line?

Programming a Pi to Deter Cats: Part 7

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am simplifying the path before me. Let’s get started!

I went to the workshop again this week, and I got some fairly simple advice. I have a fairly fancy background system, but do I really need it? For a first prototype, probably not.

The new way forward is going to be just keeping a running average of the background. After initialization, each frame will be fed into a function to average it with the existing background. I have my doubts, but adding some weights to the average to favor the existing background should achieve the same goal with less computing power spent and less code written.
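Here’s the gist in a few lines of NumPy. The alpha value is my own guess at a reasonable weight, and I believe OpenCV’s accumulateWeighted() does the same blend under the hood:

```python
import numpy as np

ALPHA = 0.05  # small alpha = new frames soak into the background slowly

def update_background(background, frame, alpha=ALPHA):
    """Blend the new frame into the running-average background."""
    return (1.0 - alpha) * background + alpha * frame

background = np.zeros((120, 160), dtype=np.float64)
bright = np.full((120, 160), 200.0)  # stand-in for the camera frame

# An object parked long enough eventually fades into the background.
for _ in range(200):
    background = update_background(background, bright)

print(background[0, 0])  # creeps up toward 200 after many frames
```

The appeal is that there is no frame history to store at all: one array, one multiply-add per frame.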

I also talked about a few other things while I was there. I wanted to take advantage of multiple threads, where one core would manage the background while another handled the object detection. I forgot an important detail: Python has threads, but its Global Interpreter Lock means only one of them runs Python code at a time, so they won’t spread the work across cores. Now, if I were to have it start another process… Possibly not, though.

I also brought up the unlikely possibility of changing program constants while the program is running. Turns out there are ways to do that. From my brief glance at the topic, it’s not quite the module to include I imagined, rather: it looks like it might be similar to a cheat engine I once saw being used for a game. I’ll be looking into it for next week.

Hopefully by next week, I will have something set up to change global variables on the fly. Another good boon would be fully implementing the business of fading objects into the background. I was also exposed to the sound chip, but with my case geometry, I’ll need a GPIO ribbon cable to accommodate my inverted board, plus an extra power cord. I’m still aiming to modify a buzzer to accept a pulse from the Pi instead of a finger on a button.

Final Question: I think I’ve finally settled on the format I want to use from now on: review progress, present progress, then plan ahead. I don’t always make as much progress in a given week, so I often pad it with descriptions of future plans. What do you think?

Programming a Pi to Deter Cats: Part 6

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m working on my Raspberry Pi again to make it chase cats off counters. Let’s get started.

A lot has happened this week, yet it feels like not enough. I had a Python script running OpenCV from the command line last week, and this week, I’ve started work on the dynamic background.

I’d have to say I’m not quite a third of the way to this smaller goal, but in terms of required research, I should be finished with it next week.

Right now, I have the pseudocode for this part of the project, as well as a demo highlighting the difference between the last two frames. To get to that point, I had to buffer frames to a two-frame queue, then compare them.

It is not fun when you have an error search engines don’t recognize: “Src1 is not a valid numerical tuple.” Long story short, it was the way Python arrays work. They need to be declared, then each element (or a placeholder element) added. Somehow, after giving up for the night, I halfheartedly tinkered with swapping the order of comparison, and the error followed the same element. I traced the bug back through my IO function and got a buffered video feed.

The other exciting thing was finding the comparison function. The buffer has a pointer to say which is the more recent frame, and reading the most recent frame is fine, but the trick here is to highlight what’s new and changed this frame. It took a bit to find that function, and I played with a few bitwise operators I found next to it. One strangely affected the “raw footage,” and ended up strobing. I had to slow the framerate on that one to see it was inverting everything each frame.

The very next thing to do is to boost those frames to all white, but I haven’t found how to do that yet. After that, I need to figure out how to extract the resolution and get the demo working in low resolution mode by scaling the width and height down by an integer.
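I’ll hedge since I haven’t found the real call yet, but the usual trick is an absolute difference followed by a threshold; OpenCV has absdiff() and threshold() for this, sketched here in plain NumPy with a made-up threshold of 25:

```python
import numpy as np

THRESHOLD = 25  # how big a per-pixel change counts as "changed"

def change_mask(prev_frame, frame, threshold=THRESHOLD):
    """White (255) where the frame changed, black (0) elsewhere."""
    # Cast up from uint8 so the subtraction can't wrap around.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return np.where(diff > threshold, 255, 0).astype(np.uint8)

prev_frame = np.full((120, 160), 50, dtype=np.uint8)
frame = prev_frame.copy()
frame[40:80, 60:100] = 200   # something moved into this region

mask = change_mask(prev_frame, frame)
print(mask[50, 70], mask[0, 0])  # 255 0
```

The int16 cast matters: subtracting uint8 arrays directly wraps around at zero, which is one of those bugs that only shows up when the scene gets darker.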

With a scaled down resolution, I should be able to build a heatmap as a countdown timer until areas fade into the background. Once I have all that working, I can start researching again for spawning a second thread and analyzing the full resolution picture for naughty felines.

Final Question: How many more weekly parts should it take before I have my first working prototype?