Programming a Pi to Deter Cats: Part 9.1

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am starting a small side adventure with my Pac-Man ghost inspired Raspberry Pi project. Let’s get started!

This week, I took a break from programming. For weeks now, I’ve had to share an HDMI monitor between my desktop and my Pi, which I’ve named Blinky. In past weeks, I’ve set up a VNC server so I can remote in and take control of Blinky from my desktop with a VNC client, but I still have a few edges to smooth out.

Firstly, the system seems to get its resolution from an attached HDMI display. If I boot it up headless (no monitor attached), Raspbian brings the GUI up in a low-resolution mode, presumably to save on CPU time. If I connect a monitor later, it doesn’t auto-adjust, and I end up rebooting. I’ve taken to reaching behind my monitor and switching the HDMI inputs, which can be difficult to get right on the first try.
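From what I’ve read, the usual workaround is to force an HDMI mode in /boot/config.txt so Raspbian picks a real resolution even when nothing is plugged in. I haven’t tried it yet; the values below assume a 1080p monitor and would need adjusting for anything else.

    # /boot/config.txt
    # Pretend a display is always attached, even when booting headless
    hdmi_force_hotplug=1
    # Group 2 (DMT), mode 82 = 1920x1080 @ 60 Hz; change to match your monitor
    hdmi_group=2
    hdmi_mode=82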

Secondly, when I’m no longer actively working on Blinky, it will have two jobs: guard the kitchen, and stream the church service on Saturday mornings. That implies both headless and headed operation. I can easily optimize for one or the other, but getting it to handle both gracefully will be tricky.

The bulk of my work this week went into turning Blinky from a tiny desktop into something more like an actual security camera. I taught myself about SSH, the most fundamental skill I’ll need for operating it at a distance. SSH feels similar to VNC in that it has a server and a client, but it only carries the command line. Since I’m starting by optimizing for the security camera configuration, I found a way to boot to the CLI (command line only) and a way to start a virtual desktop in memory from that command line. I ran into a couple of little surprises.

The first problem I found was a quick fix. Instead of the traditional pointer I’m used to, the virtual desktop favored a black X. LINK to the forum thread I used. Another issue was an unexpected change in aspect ratio. I ended up just moving the whole VNC window over to my other monitor, since it roughly fit there and I don’t particularly relish the black border around the desktop. I can rework my workflow on Blinky to accommodate the narrower work area.

I still have some bits to figure out. The mouse buttons are still getting double-swapped. If I’m going to be going between headless and desktop modes, I want a consistent, swapped-button experience. The worst case scenario right now is that I write a bash or other kind of script that changes the setting based on whether there’s a monitor connected at startup.
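In Python, that worst-case script might look something like the sketch below. It assumes an X session with xrandr and xmodmap available, and that swapping buttons 1 and 3 is the setting I want to toggle; I haven’t actually written or tested this yet.

    #!/usr/bin/env python3
    # Hypothetical sketch: only swap the mouse buttons when a monitor is attached.
    import subprocess

    def monitor_connected():
        # xrandr lists each output as "connected" or "disconnected";
        # the leading space keeps "disconnected" from matching.
        output = subprocess.check_output(["xrandr", "--query"]).decode()
        return " connected" in output

    if monitor_connected():
        # "pointer = 3 2 1" swaps the left and right buttons for this session
        subprocess.run(["xmodmap", "-e", "pointer = 3 2 1"])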

I also need to find an easier SSH client for Windows, or else find a way to zoom in on the text. Another strike against the default Windows SSH client is that I cannot just paste a long, super-secure password into Command Prompt when logging in. In addition, I may be interested in looking into an Android SSH client for both this and future projects.

Final Question: Have you ever used SSH on an Android device before? Do you recommend any in particular?

Programming a Pi to Deter Cats: Part 9

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I feel like I’m progressing in a non-linear fashion. Let’s get started!

Last week, I cleaned up my code by moving the bits I still wanted over from one .py file to another. My goal for this week was to distinguish between objects moving in the foreground and a stationary, but adaptive background.

My progress this week doesn’t feel quite so linear. I got about halfway to my stated goal on my own while working with a premade background subtraction function, then got stuck trying to find a way to monitor my own progress. I’d like to first see my foreground, preferably in color, and then I’d like to see the present background so I can hopefully understand the algorithm better.
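For my own reference, here is a minimal sketch of what I’m after, using OpenCV’s built-in MOG2 subtractor; my actual code is organized differently, and the camera index and window names are just placeholders.

    import cv2

    cap = cv2.VideoCapture(0)                        # default camera
    subtractor = cv2.createBackgroundSubtractorMOG2()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                          # foreground mask (255 = moving, 127 = shadow)
        foreground = cv2.bitwise_and(frame, frame, mask=mask)   # foreground, in color
        background = subtractor.getBackgroundImage()            # the model's current background
        cv2.imshow("foreground", foreground)
        cv2.imshow("background", background)
        if cv2.waitKey(30) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()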

I went into the workshop for a little help, and things went in a lot of odd directions after that. We spent a while chasing oddball, but not quite false, leads on a couple of big bugs. We downloaded and installed an IDE called Spyder3.

As an aside, I would just like to say that IDEs are both a curse and a blessing. They are a pain to get working correctly, especially if you’re working with anything outside the core language. But they make development easier, and they are a lot easier to use than a command line for new programmers, once again provided someone who knows what they are doing sets them up, or the programmer has ample amounts of both luck and determination.

Spyder took a lot of hassle to produce a few nuggets of debug gold, and it is still misbehaving. I plan on using it just as a nicer environment for writing code, but I intend to test from the command line unless I need a closer look at my program’s state when it fails.

One of my big breakthroughs this week was finally spotting a mismatch between a list of lists of lists and a list of lists (a 3D array versus a 2D array). It looks like one of the bugs I had came from an improper use of a blur function. My current understanding is that it outputs a “color” frame even if it gets a grayscale input.
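Rather than trust my memory on that, the shapes are easy to check directly; this little snippet is just an illustration, with a made-up filename.

    import cv2

    frame = cv2.imread("test.jpg")                    # any color test image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # single-channel copy

    print(frame.shape)    # e.g. (480, 640, 3), a list of lists of lists
    print(gray.shape)     # e.g. (480, 640), a list of lists

    # Printing the shape before and after each step (blur, subtraction, etc.)
    # makes this kind of channel mismatch easy to spot.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    print(blurred.shape)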

I eventually arrived at a grayscale, background-subtracted image with poorly defined edges. It seems to have a hard time with my purple shirt against another background.

My goal for next week is to stabilize the background subtraction with some kind of edge-smoothing, and hopefully to be able to draw boxes around objects.
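Something along these lines is what I have in mind: threshold the mask, clean it up with a little morphology, then let OpenCV find the blobs and box them. The kernel size and minimum area are guesses I’d have to tune.

    import cv2

    def draw_motion_boxes(frame, mask, min_area=500):
        # Drop shadows and noise, then smooth the blob edges a bit
        _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # remove speckles
        cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)  # fill small holes
        # findContours returns 2 values in OpenCV 4.x and 3 in 3.x; [-2] works for both
        contours = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        for contour in contours:
            if cv2.contourArea(contour) >= min_area:
                x, y, w, h = cv2.boundingRect(contour)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        return frame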

Final Question: Did you, or would you, rather learn programming with an IDE or with a command line?

Programming a Pi to Deter Cats: Part 8

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am droning on with my Raspberry Pi Cat Camera project.

Any sufficiently unguarded water cup is indistinguishable from a cat bowl. I had the unfortunate experience of going through that today; I gave the water to the dogs. A completed sentry would not have helped in this case, but it serves as a little reminder. Of course, as soon as I get this thing working, I’d say its expected life expectancy is about a week or two, or however long it takes before my cat stops the unwanted behavior.

I had a little bit of a tangent this week. It turns out it is very easy to get distracted when you have a special effects setup. The background I had last week, a constantly updating weighted average, was very entertaining in its own right. I could jump in front of the camera and fade in as if I were a ghost. If I jumped away, my image faded out. If I held still, the picture got sharper.

I ended up researching how to VNC into my Pi from my desktop so I don’t have to keep switching out the HDMI cord plugged into the back of my main monitor. It has actually been fairly uneventful so far. My password manager takes a while to register a new password, so I paused work on that angle of the project before I could set up a virtual desktop. The reason I want one is that the Pi checks whether it is “headless” (without any video outputs) when it boots. If there aren’t any, it goes into a minimum-resolution mode, presumably to conserve CPU power. I have since been careful to plug it in only after a monitor is connected.

I also went to the workshop again this week and promptly learned why I should always clean up after experiments; I had tried commenting out the line that converts to grayscale, and forgot to restore it. I also learned that the differing lighting conditions defeated the semi-fine-tuned settings I had for my rudimentary motion spotter, and the ghost trails were back. We also took a big pass over the old code while trying to simplify it. One major change later, and things went half nuclear. I was given homework: GitHub and code cleanup.

And by code cleanup, I mean extracting the bits of code still in use after this great restructuring and making them work again. I just need to be sure to chmod the new file so it stays executable. I’ll also be using native cv2 functions for finding the background and whatnot.

By next week, I hope to be able to remote into my Pi with full screen resolution, and get back to where I was but with native cv2 functions.

Final Question: How long will it take before the cat learns, and will I ever need to redeploy?

Programming a Pi to Deter Cats: Part 7

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am simplifying the path before me. Let’s get started!

I went to the workshop again this week, and I got some fairly simple advice. I have a fairly fancy background system, but do I really need it? For a first prototype, probably not.

The new way forward is going to be just keeping a running average of the background. After initialization, each frame will be fed into a function that averages it with the existing background. I have my doubts, but weighting the average in favor of the existing background should achieve the same goal with less computing power spent and less code written.
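OpenCV has this built in as cv2.accumulateWeighted, so the whole idea fits in a short loop. This is only a sketch of the plan, not code I’ve run on the Pi yet, and the alpha value is a guess I’d have to tune.

    import cv2

    cap = cv2.VideoCapture(0)
    average = None                                   # running background, kept as floats

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if average is None:
            average = frame.astype("float")          # seed the background with frame one
        # A small alpha means each new frame only nudges the background slightly
        cv2.accumulateWeighted(frame, average, 0.05)
        background = cv2.convertScaleAbs(average)    # back to 8-bit for display
        difference = cv2.absdiff(frame, background)  # what changed versus the background
        cv2.imshow("background", background)
        cv2.imshow("difference", difference)
        if cv2.waitKey(30) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()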

I also talked about a few other things while I was there. I wanted to take advantage of multiple threads, where one core would manage the background while another handled the object detection. I had forgotten an important detail: Python has threads, but its Global Interpreter Lock only lets one of them run Python code at a time, so they won’t spread my work across cores. Now, if I were to have it start another process… Possibly not, though.
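If I ever do go down that road, the standard-library multiprocessing module is the sort of thing I’d reach for; the sketch below is hypothetical, just to capture the shape of the idea, with a do-nothing detector standing in for the real one.

    # Hypothetical sketch: a second process for detection, fed frames over a queue.
    import multiprocessing as mp

    def detector(queue):
        while True:
            frame = queue.get()
            if frame is None:              # sentinel value: time to shut down
                break
            # ...analyze the full-resolution frame for naughty felines here...

    if __name__ == "__main__":
        queue = mp.Queue(maxsize=2)        # keep the backlog of frames small
        worker = mp.Process(target=detector, args=(queue,))
        worker.start()
        # The main loop would capture frames, maintain the background, and
        # queue.put(frame) whenever the motion looks worth a closer look.
        queue.put(None)
        worker.join()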

I also brought up the unlikely possibility of changing program constants while the program is running. It turns out there are ways to do that. From my brief glance at the topic, it’s not quite the importable module I had imagined; rather, it looks like it might be similar to a cheat engine I once saw being used for a game. I’ll be looking into it for next week.
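There is also a much less exotic fallback I could settle for: keep the tunable numbers in a small settings file and re-read it every so often while the program runs. The file name and the particular constants below are just examples.

    import json

    DEFAULTS = {"threshold": 25, "min_area": 500}    # example tunables

    def load_settings(path="settings.json"):
        try:
            with open(path) as settings_file:
                loaded = json.load(settings_file)
            return {**DEFAULTS, **loaded}            # the file overrides whatever it names
        except (OSError, ValueError):
            return dict(DEFAULTS)                    # missing or malformed file: use defaults

    # Calling load_settings() once a second or so inside the main loop would let me
    # edit settings.json and watch the program pick up the change without restarting.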

Hopefully by next week, I will have something set up to change global variables on the fly. Another good boon would be to fully implement the background fading. I also got a look at the sound chip, but with my case geometry, I’ll need a GPIO ribbon cable to accommodate my inverted board, plus an extra power cord. I’m still aiming to modify a buzzer to accept a pulse from the Pi instead of a finger on a button.

Final Question: I think I’ve finally settled on the format I want to use from now on: review progress, present progress, then plan ahead. I don’t always make as much progress in a given week, so I often pad things out with descriptions of future plans. What do you think?

Programming a Pi to Deter Cats: Part 6

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m working on my Raspberry Pi again to make it chase cats off counters. Let’s get started.

A lot has happened this week, yet it feels like not enough. I had a Python script running OpenCV from the command line last week, and this week, I’ve started work on the dynamic background.

I’d have to say I’m not quite a third of the way to this smaller goal, but in terms of required research, I should be finished with it next week.

Right now, I have the pseudocode for this part of the project, as well as a demo highlighting the difference between the last two frames. To get to that point, I had to buffer frames in a two-frame queue, then compare them.

It is not fun when you have an error search engines don’t recognize: “Src1 is not a valid numerical tuple.” Long story short, it came down to the way Python lists work: they need to be declared, and then each element added, even if only as a placeholder. Somehow, after giving up for the night, I halfheartedly tinkered with swapping the order of comparison, and the error followed the same element. I traced the bug back through my I/O function, and got a buffered video feed.
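Boiled down, the list gotcha looks something like this; the fake frame is just a stand-in so the snippet runs on its own.

    import numpy as np

    frame = np.zeros((480, 640), dtype=np.uint8)   # stand-in for a camera frame

    buffer = []
    # buffer[0] = frame    # IndexError: list assignment index out of range

    buffer = [None, None]  # declare the two slots first...
    buffer[0] = frame      # ...then assigning by index works
    buffer[1] = frame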

The other exciting thing was finding the comparison function. The buffer has a pointer to say which is the more recent frame; reading the most recent frame is fine, but the trick here is to highlight what’s new and changed this frame. It took a bit to find that function, and I played with a few bitwise operators I found next to it. One strangely affected the “raw footage” and ended up strobing. I had to slow the framerate on that one to see that it was inverting everything each frame.
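I didn’t note down the exact calls, but OpenCV’s cv2.absdiff does what I describe for the comparison, and cv2.bitwise_not would explain the strobing; consider this a reconstruction rather than my actual code.

    import cv2
    import numpy as np

    # Two stand-in frames; in the real program these come out of the two-frame buffer
    older = np.zeros((480, 640), dtype=np.uint8)
    newer = np.full((480, 640), 40, dtype=np.uint8)

    changed = cv2.absdiff(newer, older)     # bright wherever the frames differ

    # Applying an inversion like this to every frame in turn would flash
    # between a normal and a negative image: the strobing I saw.
    inverted = cv2.bitwise_not(newer)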

The very next thing to do is to boost those changed pixels to all white, but I haven’t found how to do that yet. After that, I need to figure out how to extract the resolution and get the demo working in a low-resolution mode by scaling the width and height down by an integer.
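My best guess is that cv2.threshold and cv2.resize cover both steps; the cutoff and the scale factor below are placeholder numbers I would still need to tune.

    import cv2

    def boost_and_shrink(changed, cutoff=25, scale=4):
        # Anything that changed by more than `cutoff` becomes pure white (255)
        _, boosted = cv2.threshold(changed, cutoff, 255, cv2.THRESH_BINARY)
        # The resolution comes straight off the image, then divides down by an integer
        height, width = changed.shape[:2]
        return cv2.resize(boosted, (width // scale, height // scale))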

With a scaled-down resolution, I should be able to build a heatmap that acts as a countdown timer until areas fade into the background. Once I have all that working, I can start researching again, this time into spawning a second thread and analysing the full resolution picture for naughty felines.
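The countdown idea, as I currently picture it, would be a grid of per-cell timers: cells reset whenever they change and tick down otherwise, and only cells that hit zero get folded into the background. The grid size and fade length below are made-up numbers.

    import numpy as np

    FADE_FRAMES = 90                               # roughly three seconds at 30 fps

    heat = np.zeros((120, 160), dtype=np.int32)    # one countdown per low-res cell

    def update_heat(heat, changed_mask, fade_frames=FADE_FRAMES):
        heat[changed_mask > 0] = fade_frames       # recently changed: restart the timer
        np.subtract(heat, 1, out=heat)             # everything ticks down by one frame...
        np.clip(heat, 0, fade_frames, out=heat)    # ...but never drops below zero
        return heat

    # Cells where heat == 0 have been quiet long enough to blend into the background.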

Final Question: How many more weekly parts should it take before I have my first working prototype?