Programming a Pi to Deter Cats: Part 8

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am droning on with my Raspberry Pi Cat Camera project.

Any sufficiently unguarded water cup is indistinguishable from a cat bowl. I had the unfortunate experience of learning that today; the water went to the dogs. A completed sentry would not have helped in this case, but it serves as a little reminder. Of course, as soon as I get this thing working, I’d put its life expectancy at about a week or two: however long it takes before my cat stops the unwanted behavior.

I had a little bit of a tangent this week. It turns out it is very easy to get distracted when you have a special effects setup. The background I had last week, a constantly updating weighted average, was very entertaining in its own right. I could jump in front of the camera and fade in as if I were a ghost. I jump away, and my image fades out. If I held still, the picture would get sharper.

I ended up researching how to VNC into my Pi from my desktop so I don’t have to keep switching out the HDMI cord plugged into the back of my main monitor. It has actually been fairly uneventful so far. My password manager takes a while to register a new password, so I paused work on that angle of the project before I could set up a virtual desktop. The reason I want that is because the Pi checks whether it is “headless,” or without any video outputs, when it boots. If there aren’t any, it goes into minimum resolution mode, presumably to conserve CPU power. I have since been careful to plug it in only after a monitor is connected.
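From what I’ve read, there’s a way to make the Pi skip that headless check entirely by pretending a monitor is always attached. A hedged sketch of the relevant /boot/config.txt lines; the exact mode numbers depend on the resolution being faked:

```
# /boot/config.txt -- illustrative values, not gospel
hdmi_force_hotplug=1   # act as if an HDMI display is always plugged in
hdmi_group=2           # DMT (computer monitor) timing set
hdmi_mode=82           # 1920x1080 @ 60 Hz under DMT
```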

I also went to the workshop again this week and promptly learned why I should always clean up after experiments; I had tried commenting out the line to convert to grayscale, and forgot to correct it. I also learned that the differing lighting conditions defeated the semi-fine-tuned settings I had for my rudimentary motion spotter, and the ghost trails were back. We also did a big number on the old code while trying to simplify it. One major change later, and I find things going half nuclear. I was given homework: GitHub and code cleanup.

And by code cleanup, I mean extracting the bits of code still in use after this great restructure and making them work again. I just need to be sure to chmod my new file. I’ll also be using native cv2 functions for finding the background and whatnot.
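I haven’t settled on which native functions yet, but for a sense of what’s available, here’s a minimal sketch assuming one of OpenCV’s built-in background subtractors ends up replacing my hand-rolled averaging code:

```python
# A minimal sketch, assuming cv2's built-in MOG2 background
# subtractor stands in for the hand-rolled weighted average.
import cv2

cap = cv2.VideoCapture(0)                    # default camera
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)           # white = moving foreground
    cv2.imshow("motion mask", mask)
    if cv2.waitKey(30) & 0xFF == ord('q'):   # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```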

By next week, I hope to be able to remote into my Pi with full screen resolution, and get back to where I was but with native cv2 functions.

Final Question: How long will it take before the cat learns, and will I ever need to redeploy?

Programming a Pi to Deter Cats: Part 7

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am simplifying the path before me. Let’s get started!

I went to the workshop again this week, and I got some fairly simple advice. I have a fairly fancy background system, but do I really need it? For a first prototype, probably not.

The new way forward is going to be just keeping a running average of the background. After initialization, each frame will be fed into a function to average it with the existing background. I have my doubts, but adding some weights to the average to favor the existing background should achieve the same goal with less computing power spent and less code written.
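As it happens, OpenCV ships a function built for exactly this kind of weighted running average. A minimal sketch, assuming a small alpha so the existing background dominates each update:

```python
# Weighted running-average background -- a sketch, assuming
# cv2.accumulateWeighted with a small alpha.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
background = np.float32(frame)    # the accumulator must be a float image

while ok:
    # alpha=0.05: each new frame only nudges the background slightly
    cv2.accumulateWeighted(frame, background, 0.05)
    cv2.imshow("background", cv2.convertScaleAbs(background))
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break
    ok, frame = cap.read()

cap.release()
cv2.destroyAllWindows()
```

The alpha value is the knob here: smaller values favor the existing background, which is exactly the weighting described above.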

I also talked about a few other things while I was there. I wanted to take advantage of multiple threads, where one core would manage the background while another handled the object detection. I forgot an important detail: Python threads don’t actually run in parallel, thanks to the Global Interpreter Lock, so they wouldn’t buy me a second core. Now, if I were to have it start another program… Possibly not, though.
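If I ever do go down that road, the standard trick as I understand it is the multiprocessing module, which really does start another program under the hood. A rough sketch; the worker function and queue names are made up for illustration:

```python
# Sidestepping the GIL with a second process -- a sketch only.
# background_worker and the queue names are hypothetical.
import multiprocessing as mp

def background_worker(frames, backgrounds):
    """Consume raw frames, hand back updated backgrounds."""
    while True:
        frame = frames.get()
        if frame is None:              # sentinel: time to shut down
            break
        backgrounds.put(frame)         # placeholder for real averaging work

if __name__ == "__main__":
    frames = mp.Queue()
    backgrounds = mp.Queue()
    worker = mp.Process(target=background_worker, args=(frames, backgrounds))
    worker.start()

    frames.put("frame 1")              # a real loop would send numpy arrays
    print(backgrounds.get())
    frames.put(None)                   # ask the worker to exit
    worker.join()
```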

I also brought up the unlikely possibility of changing program constants while the program is running. Turns out there are ways to do that. From my brief glance at the topic, it’s not quite the import-a-module affair I imagined; rather, it looks like it might be similar to a cheat engine I once saw being used for a game. I’ll be looking into it for next week.
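To be clear, I haven’t found the cheat-engine-style tool yet; but there’s also a much more low-tech fallback: re-read a small settings file once per loop. A sketch, with a made-up file name:

```python
# Low-tech live tuning -- not the cheat-engine approach mentioned
# above, just a fallback sketch. settings.json is a made-up name.
import json
import os

SETTINGS_FILE = "settings.json"
settings = {"threshold": 25, "alpha": 0.05}    # defaults

def reload_settings():
    """Overwrite defaults with whatever is on disk, if anything."""
    if os.path.exists(SETTINGS_FILE):
        with open(SETTINGS_FILE) as f:
            settings.update(json.load(f))

while True:
    reload_settings()
    # ...use settings["threshold"] and settings["alpha"] here...
    break    # placeholder so the sketch terminates
```

Edit the file while the program runs, and the new values take effect on the next pass through the loop.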

Hopefully by next week, I will have something set up to change global variables on the fly. Another good boon would be to fully implement the background fading. I was exposed to the sound chip, but with my case geometry, I’ll need a GPIO port ribbon to accommodate my inverted board, plus an extra power cord. I’m still aiming to modify a buzzer to accept a pulse from the Pi instead of a finger on a button.

Final Question: I think I’ve finally settled on the format I want to use from now on: review progress, present progress, then plan ahead. I don’t always make as much progress in a given week, so I often pad it with descriptions of future plans. What do you think?

Programming a Pi to Deter Cats: Part 6

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I’m working on my Raspberry Pi again to make it chase cats off counters. Let’s get started.

A lot has happened this week, yet it feels like not enough. I had a Python script running OpenCV from the command line last week, and this week, I’ve started work on the dynamic background.

I’d have to say I’m not quite a third of the way to this smaller goal, but in terms of required research, I should be finished with it next week.

Right now, I have the pseudocode for this part of the project, as well as a demo highlighting the difference between the last two frames. To get to that point, I had to buffer frames to a two-frame queue, then compare them.

It is not fun when you have an error search engines don’t recognize: “Src1 is not a valid numerical tuple.” Long story short, it was the way Python lists work: they need to be declared, and then each element, or at least a placeholder element, added. Somehow, after giving up for the night, I halfheartedly tinkered with swapping the order of comparison, and the bug followed the same element. I traced it back through my IO function, and got a buffered video feed.

The other exciting thing was finding the comparison function. The buffer has a pointer to say which is the more recent frame, and reading the most recent frame is fine, but the trick here is to highlight what’s new and changed this frame. It took a bit to find that function, and I played with a few bitwise operators I found next to it. One strangely affected the “raw footage,” and ended up strobing. I had to slow the framerate on that one to see it was inverting everything each frame.
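Putting the week’s pieces together, a minimal sketch of the two-frame buffer and the differencing; I’m assuming cv2.absdiff as the comparison function, with cv2.bitwise_not as the culprit behind the strobing inversion:

```python
# Two-frame buffer plus differencing -- a sketch, assuming
# cv2.absdiff is the comparison function in question.
import cv2

cap = cv2.VideoCapture(0)
ok, first = cap.read()
buffer = [first, first]    # pre-filled placeholders dodge the tuple error
newest = 0                 # pointer to the most recent frame

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    newest = 1 - newest    # flip the pointer instead of shuffling frames
    buffer[newest] = frame
    diff = cv2.absdiff(buffer[newest], buffer[1 - newest])
    # cv2.bitwise_not(frame) is the inversion that strobed every frame
    cv2.imshow("what changed", diff)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```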

The very next thing to do is to boost those changed pixels to all white, but I haven’t found how to do that yet. After that, I need to figure out how to extract the resolution and get the demo working in low-resolution mode by scaling the width and height down by an integer factor.
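For whatever it’s worth when I pick this up again, my best guess from skimming the docs is that cv2.threshold handles the boost-to-white step, and the frame’s shape attribute carries the resolution for the downscale:

```python
# A hedged guess at both steps: threshold the difference image to
# pure white, then downscale by an integer factor.
import cv2

def boost_and_shrink(diff, cutoff=25, factor=4):
    """Push any real change to white, then shrink the result."""
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, cutoff, 255, cv2.THRESH_BINARY)
    height, width = mask.shape                # extract the resolution
    return cv2.resize(mask, (width // factor, height // factor))
```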

With a scaled-down resolution, I should be able to build a heatmap as a countdown timer until areas fade into the background. Once I have all that working, I can start researching again for spawning a second thread and analyzing the full-resolution picture for naughty felines.
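The heatmap idea, sketched out the way I currently picture it; the fade time is an assumption:

```python
# Heatmap-as-countdown sketch: hot cells reset their timer, cold
# cells tick down, and cells at zero are ripe to join the background.
import numpy as np

FADE_FRAMES = 60    # assumed: roughly two seconds at 30 fps

def update_heatmap(heatmap, motion_mask):
    """Reset timers where motion was seen; count everything else down."""
    heatmap[motion_mask > 0] = FADE_FRAMES
    np.subtract(heatmap, 1, out=heatmap, where=heatmap > 0)
    return heatmap == 0    # True where an area may fade into the background

# usage: heatmap = np.zeros(mask.shape, dtype=np.int32), updated once per frame
```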

Final Question: How many more weekly parts should it take before I have my first working prototype?

Programming a Pi to Deter Cats: Part 5

Good Morning from my Robotics Lab! This is Shadow_8472, and I feel good about my progress this week. Let’s get Started!

Last week, I had successfully confirmed OpenCV was installed on my Raspberry Pi. The first thing to do this week was to start coding just as soon as I figured out a way to save a program file within the virtual environment, right?

Not quite. I thought a virtual environment was kind of like a virtual machine, where it has a whole little computer running within your computer, only with fewer restrictions. I still think that, but it turns out it works just fine if I run a file within the virtual environment from the Desktop or anywhere else.

I also learned exactly what a bash script is. So, I had OpenCV working in a script, and I changed permissions so I could run a .py file from the command line, but when I went to run it, I got an error about import not being supported. It turns out I forgot to run the Python interpreter with my program as an argument. That detail got me into the workshop I sometimes mention.
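For anyone hitting the same wall, the two fixes in one sketch; the file name is made up:

```python
#!/usr/bin/env python3
# hello_cv.py -- hypothetical file name, for illustration only.
# Two ways to run this:
#   python3 hello_cv.py                       (interpreter plus argument)
#   chmod +x hello_cv.py && ./hello_cv.py     (the shebang line does the rest)
# Without the shebang, ./hello_cv.py gets handed to the shell, which
# chokes on `import` -- the exact error described above.
import cv2

print("Hello from OpenCV")
```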

I actually got my base coding test harness from the YouTube channel CodingEntrepreneurs. (Link to their first OpenCV Python video) While I was at the workshop, I learned a bunch of tangential stuff, like when to pack your Pi’s power cord; I polished up my knowledge of how to make sure a substitute power supply is acceptable, and learned when it’s a good idea to go back to the tutorial instead of arbitrarily placing a missing closing parenthesis.

In the end, I feel like I had a very successful session, but since then I’ve been building a framework for my program as a whole. Hopefully, by next week, I will have a program that will look at a series of pictures, tell if the background in each of three zones (cupboard, counters, and cabinets) is occluded (blocked) by a moving object (a cat or human), and sound the alarm if it sees something in the counters’ zone. As a bonus, it can save a picture of the culprit for if/when I try my coding skills at training a neural net to tell if my cat is being naughty.
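A rough sketch of how I picture the zone check; the rectangles are made-up coordinates, and the alarm fraction is an assumption, not anything measured:

```python
# Three-zone occlusion check -- a sketch with hypothetical numbers.
# motion_mask is assumed binary: white where something is moving.
import cv2
import numpy as np

ZONES = {                       # (x, y, width, height) -- made up
    "cupboard": (0, 0, 200, 150),
    "counters": (0, 150, 640, 200),
    "cabinets": (440, 0, 200, 150),
}
ALARM_FRACTION = 0.10           # assumed: 10% of a zone moving = a hit

def check_zones(motion_mask, frame):
    for name, (x, y, w, h) in ZONES.items():
        patch = motion_mask[y:y + h, x:x + w]
        if np.count_nonzero(patch) > ALARM_FRACTION * w * h:
            print(f"{name} is occluded")
            if name == "counters":
                cv2.imwrite("culprit.jpg", frame)   # evidence for the neural net
                # ...sound the alarm here...
```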

Final Question: When was the last time you caught yourself overcomplicating a plan?

Programming a Pi to Deter Cats: Part 4

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I have a small, but important step completed. Let’s get started!

The tutorial I’ve been following gave me a hiccup after compilation. I had a passing difficulty when the commands I was copying used a different version number for an ls command.

Other than that, I had a little drama copying some expected output as a command, and when I did get the correct version number and file name in the command, there was a mismatch with a number or two. I figured it was just because of the version.

In the end, the tutorial had me test everything by running Python in the virtual environment, importing OpenCV, and having it report its version. I was so relieved when it gave something back that didn’t resemble an error.
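The whole test fits in a few lines at the virtual environment’s Python prompt:

```python
# The sanity check, more or less as the tutorial runs it.
import cv2

print(cv2.__version__)    # anything version-shaped beats a traceback
```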

Going forward, I think the next, big milestone needs to be turning on an external light when it sees my cat in the frame. Setting this goal opens up a slew of smaller goals: connecting an LED to the external pins; learning and implementing basic elements in the OpenCV API; setting up such a program to run from either a single command, an icon, or bootup.
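For the LED piece, a minimal sketch, assuming the stock RPi.GPIO library; the pin number is my assumption, since any free GPIO pin would do:

```python
# Blink an LED from the Pi's header -- a sketch, assuming an LED
# (with resistor) wired to BCM pin 18.
import time
import RPi.GPIO as GPIO

LED_PIN = 18

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    GPIO.output(LED_PIN, GPIO.HIGH)    # cat spotted: light on
    time.sleep(2)
    GPIO.output(LED_PIN, GPIO.LOW)     # all clear
finally:
    GPIO.cleanup()                     # release the pin on the way out
```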

After that, I should probably set it up as a VNC server and place it in its rightful position, where it can start learning the difference between a naughty cat on the counter and a hungry human in the kitchen.

This project feels like it’s gone on at least ten times longer than I expected. It’s good to have a small victory for once.

Final question: How do you keep track of progress in long projects?

Programming a Pi to Deter Cats: Part 3

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am finally installing OpenCV on my Raspberry Pi. And if I have a little more time, I may even try getting a Hello World type program running with it; it’s unlikely though. Let’s Get Started!

Last time I actually made any progress, I finished with making a virtual environment, meaning anything that happens in the sandbox stays in the sandbox, and I won’t have to rebuild the whole computer if things go nuclear. I think. The tutorial (LINK) wasn’t very informative, but the step was nearly instant, and it made clear it was technically optional.

The next couple steps are fuzzy in my memory. I don’t remember everything in chronological order. I kept running into errors, but I was gently pushed in the direction of trying to learn about them when I was in the local workshop (I was sure to bring a keyboard and mouse that weren’t at odds with my Pi).

My first major roadblock was when I was setting up to compile. I was sent off to research ‘cmake,’ a command line program being used in that step. I ended up on a page with a lot of text, and I’m an audio learner. I don’t remember much of what was going on, but I recognized enough to say I was only about 95% lost when it came down to fine details, like the options being used for each argument in the command.
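For context, the command in question looked something along these lines; I’m going off the general shape of these tutorials, so treat the flags as representative rather than exact:

```
# Representative of the cmake step in OpenCV build tutorials --
# illustrative flags, not the exact ones I ran.
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D BUILD_EXAMPLES=ON ..
```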

For some reason, it didn’t work, and I left stressed over it. If there’s one thing that is just infinitesimally less frustrating than something not working when everything you know says it should, it’s something working when everything you know says it shouldn’t, while you’re trying to reproduce the problem so you can fix it. That happened to me. I set up for compilation, and I ran it. Ten minutes into the expected hour-plus operation at home, it crashed. There was some library it couldn’t find based on the pregenerated header, but I only figured that out when I went back into the workshop for a second time.

Round two this week was more of the same, possibly leading to events jumping sessions in my mind, but the second session felt a lot more productive. I found the source of the problem, and I also worked on another, unrelated topic I want to save for another time.

Bonus Topic: I’ve complained a few times about how Windows sometimes is like, “By the way, we are updating your computer in 10 minutes, is that okay?” I find that highly frustrating. It popped up two or three Friday nights ago (after Sabbath started, so I didn’t want to do a whole topic about it). My father and I finally found the settings to supposedly require human input to permit an update.

Final Question: How much do you usually know about the commands you use?

Programming a Pi to Deter Cats: Part 2.01 (Robot Ethics Monologue)

Good Morning from my Robotics Lab! This is Shadow_8472, and I don’t feel like I have much progress this week. Let’s get Started!

I continued with the tutorial I started last week, and kept running into walls. I wanted a Hello World program up and running to show for this week’s post, but that simply isn’t happening unless I skip generating enough text and brute force it within the next couple hours.

I suppose I could try sorting out my feelings about robot cruelty instead.

Several years ago, I did a paper about future mistreatment of robots. The gist was that since animal abuse is strongly tied to human abuse, and the brain handles some robots more like people than it does animals, there should be some form of legal protection for lifelike robots by the time they come into common use. Robots themselves might not need the protection, but the people around would-be robot abusers would benefit from there being fewer abusers.

On the other hand, now that I know a little more about the prototyping process, I realize that a little more care would need to go into defining “abuse” lest the industry suffer.

I recently listened to a story where one of the characters was discovered to be a robot after a set of shelves fell on and damaged a limb. When her older sister questioned their parents, they said they found her in a dumpster, and didn’t show any sign of knowing she was artificial.

In the above story, there could be any number of reasons a lifelike robot could end up in a dumpster. She could have ended up there as an abandoned prototype, or even for having anomalous code that produced true emotions in a line of worker robots and someone smuggled her out instead of secretly destroying or reprogramming her. Or, she simply could have outlived her usefulness to her previous owner and they wiped her memory as they abandoned her.

In pretty much any case, under today’s laws, robots are treated like stuff, though social pressure has forced robotics company Boston Dynamics to stop showing footage of debugging their robots’ balance programs.

Where, then, should the line be drawn, if it should be drawn at all? The strongest opposing argument concerns people who would stop at abusing robots rather than moving on to people: outlaw it, and they lose legal victims whose memories can easily be erased. And without going off on a long string of research, I don’t think I could answer which way would lead to fewer living victims.

For me, if asked to draw the line right now, I’d go easy on “abuse” cases performed in a professional context, as well as on robots not designed or modified to relate as artificial people. Digital assistants are a bit fuzzy here. They are often bound as part of modern operating systems, though I try to limit their scope over privacy concerns, thereby “neglecting” them.

Final Question: Should there be any laws against mistreatment of robots, and if so, how would you weave such a law so it stops the potentially harmful stuff while permitting ethical or even necessary forms of “abuse”?

Programming a Pi to Deter Cats: Part 2

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I am moving along towards a feline safety program for my Raspberry Pi. Let’s get started!

For starters, I was having doubts about whether my Pi was even powerful enough for computer vision applications. A quick search gives plenty of hairy tutorials on how to install it. I’m following one here: LINK. I’m on the Raspberry Pi 3B+, and word has it the Pi 2 pioneered the field for its kin.

I also watched some tutorials to get an idea of how OpenCV works, and I think it should track the cabinets and counter. If it tracks something moving on the counter and there is no occlusion (nothing is blocking) of the cabinets, register a hit. On a hit, save a picture for future evaluation, and sound the alarm to get the cat off the counter if it’s a strong enough hit.

I started installing OpenCV, but I ran into some issues. First of all, I am very glad I went to that workshop I’ve mentioned before, because the tutorial just gives me a wall of text. It helps to have someone knowledgeable take a look at things when they stop working. It also would have helped if I had seen the troubleshooting section before I went and searched for one particular error I came across and ended up on another tutorial for the same thing.

Conditions were challenging. The mouse was crazy sensitive, and if I didn’t select and copy with the mouse, I ran the risk of typos from the keyboard. In the end, I believe I got a virtual environment for Python 3 to run computer vision, but I ran out of time there.

At the beginning of my post, I mentioned feline safety. It turns out my father spotted the cat walking on the stove top, and I really don’t want him getting burned, to say nothing of the health risk.

Final Question: Have you ever used automated deterrents on animals?

When Not to Fully Backup a Device

Good Morning from my Robotics Lab! This is Shadow_8472, and today, I was going to do an image of my phone without using Root and voiding the warranty, but things have changed, and I want to go into why. Let’s get started!

Modern smartphones are essentially pocket computers powerful enough to run a program pretending to be a state-of-the-art desktop from around 15-20 years ago. It used to be that you had full access to everything when you tethered a device to a desktop. I used that once to back up my first-ever tablet and restore it after some work. Much to the annoyance of more than a few power users, around 7-10 years ago it became standard practice to lock end users out of admin-level actions under pain of voiding the warranty or even breaking the law, depending on the device and year.

I sort of understand the reasoning, but I don’t like a future where I can’t have warranty work done even when anything I’ve ever done on the software side would logically have had no effect on the repair in question. For example, if I crack a screen (I tend to end up with the no-ask warranty policy), they would be well within their rights to take the phone in, plug in their diagnostic equipment, and leisurely return it unrepaired if it tells them it’s ever been rooted.

I’ve had my Galaxy S7 Edge for a while now, and shortly after I got it, it developed a narrow, pink stripe down the right side of the screen. From my experiences and research back then, between a third and half of similar phones have this defect. As I understand it, the warranty was written in a way that encouraged you to keep the phone for a while before doing anything, so it’s been a couple years. This week, I decided I had had enough.

As a side note, since I’ve gotten my tablet, I haven’t been playing games on my phone as much. The one I’m still playing, though, recently got an update that started up what look like weekly events with grand prizes if you complete enough challenges. I played that first event a lot, putting in more time than I think was reasonable, and I still only got halfway up the progressively longer ranks, just short of most of the prizes that are actually worth it. Of course, I could always spend the special currency linked to real money, but that trickles in way too slowly for normal gameplay, and I have zero cash for f2p games, lest I open Pandora’s box. In short, I lost interest because I felt cheated by a disguised pay-to-win system.

A couple weeks ago, that game would have been enough to give me the drive to make an image of my phone. According to my research, it would have taken a desktop debugger and some other low-level commands, but there does not seem to be a currently maintained GUI for what I wanted to do. Now? Now I’ll be happy just grabbing what I can and abandoning my game files hidden behind a wall of admin access.

Final Question: What laws about technology would you change if you had the opportunity?

Programming a Pi to Deter Cats: Part 1

Good Morning from my Robotics Lab! This is Shadow_8472, and today, and likely for the next couple weeks, I am studying up on how computer vision software works so I can get started on the software that will convey a message to my feline friend about venturing into places he isn’t welcome. Let’s get Started!

I decided a long time ago that I wanted to explore the free and open source option OpenCV for this project. From what I can tell, I will need to play around with OpenCV on my desktop first to get a feel of how to use it.

As of the time of writing, I believe my workflow will start with getting OpenCV onto a Linux desktop. After I’m comfortable with how it works, I can try running it on the Pi. Depending on how well the Pi takes it, I will develop the actual program on either my PacMan Ghost case, Blinkie Pie, or my Ubuntu server tower, Derpy Chips. A period of testing will involve Blinkie watching the forbidden area and logging suspicious activity (basically anything that moves), then narrowing that down to naughty cat behavior. While that’s happening, I need to develop an audio turret.

My mother actually found a dog training device that emits high frequencies, and the cats aren’t too fond of it either. I’ve already tried taking it apart, and it’s so cheap that getting into the case for tinkering will mean a new case. My goal for this phase is to get Blinkie to turn the nasty sounds on from a remote desktop.

Once I have my cat identification program working reliably, I can connect it to the frequency generator and set it loose. For the first few days and weeks, I’ll want to monitor it and evaluate its results by looking at pictures from suspect incidents. I’m looking forward to obedient cats.

Final Question: Have you worked with Neural Nets on Computer Vision problems before?