MotionEye: One Step Closer to a Critter Cam

Good Morning from my Robotics Lab! This is Shadow_8472, and today I am revisiting my naughty critter cam. Let’s get started!

Project Recap

Earlier this year, I spent a week learning about DietPi, a lightweight OS for “System on a Chip” computers such as the Raspberry Pi, and Motion, an open source home surveillance platform. The two fit my Raspberry Pi 3B+ nicely. I burned out shortly after getting it to stream video to a webpage.

Version numbers have marched on since, and updates were in order. Long story short, it was simpler to reinstall and relive that week in the span of one day. I chose the standard “new install” settings and installed Motion from within DietPi’s walled garden. I copied the default config file at /etc/motion/motion.conf to /root (I am approaching this as a short-term, low-security project) and struggled against the documentation, which led me to mess with mmalcam_params when all I needed was:

rotate 180
webcontrol_localhost false
stream_localhost false

These lines rotate the image and let machines besides the Pi itself reach Motion’s web UI and stream.

MotionEyeOS

The final push toward reinstalling came when I found a new-to-me command for listing the ports Linux will respond on:

ss -list

I found port 8765 behind a login I couldn’t get past. Only after the total reinstall did I look up motionEye and learn that the default login is “admin” with an empty password. It’s a nice webUI, but it won’t share the camera with Motion proper, and it took me a while to cycle through its options to find the feed. My favorite feature is that it passed the “Oops, it lost power” test.

Push Notifications

MotionEye can run a command when it sees something move, which could be anything from object detection to filter out false positives to an automated squirt gun that fires when it recognizes naughty behavior. My next major milestone, though, should be push notifications. On Linux, I can use notify-send over SSH, but I’d need to research an equivalent way for it to show up on Windows, Mac, and Android for other family members. In the long run, it will be simpler to dust off my Discord bot skills and have the bot send out a LAN address.
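For reference, the notify-send route might look something like the minimal sketch below. It assumes passwordless SSH to a Linux desktop on the LAN (the hostname desktop.lan is made up) and that the target session is on display :0; on some systems DBUS_SESSION_BUS_ADDRESS would also need to be set for the notification to actually appear.

#!/usr/bin/python3
# Minimal sketch: pop a desktop notification on a Linux machine over SSH.
# "desktop.lan" is a hypothetical hostname; DISPLAY (and possibly
# DBUS_SESSION_BUS_ADDRESS) must match the target machine's session.
import subprocess

subprocess.run(
    ["ssh", "desktop.lan",
     "DISPLAY=:0 notify-send 'Critter Cam' 'Motion Detected!'"],
    check=True,
)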

So, the Discord route is what I took. I made a bot that sends a hard-coded message to a hard-coded channel and closes itself as part of its startup function.

#!/usr/bin/python3

import discord

intents = discord.Intents.default()
client = discord.Client(intents=intents)

# On startup: send one message to the hard-coded channel, then shut the bot down
@client.event
async def on_ready():
    channel = client.get_channel(<channel_ID>)  # numeric channel ID goes here
    await channel.send("Motion Detected!\nhttp://192.168.0.50:8765")
    await client.close()  # lets client.run() return so the script exits

client.run('<Bot_Login_Token>')

MotionEye could run the bot once I placed it in /mnt/dietpi_userdata/, gave it permission to execute, and ensured it belonged to the dietpi user. Before too long, I had notifications through Discord to check Motion, and the hardest part of deployment was turning on the Pi’s power switch. For my “Show & Tell,” my father and I rigged up a cat tower and an LED work light on a stand to watch the stairs.

Takeaway

This temporary setup remains incomplete. For starters, the Discord bot wastes around four seconds logging in; I will eliminate that delay once I can get a signal from MotionEye into an already running bot, as sketched below. I also want the bot to ignore humans, which means object detection, a field I haven’t gotten far into yet.
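As a rough sketch of that longer-term idea (assuming discord.py 2.x), the bot could stay logged in and open a tiny listener on the Pi, so MotionEye’s motion command only has to poke a local port instead of launching and logging in a fresh bot every time. Port 9999 is an arbitrary choice, and <channel_ID> and <Bot_Login_Token> are placeholders just like in the script above.

#!/usr/bin/python3
# Sketch only: a persistent bot that posts whenever anything connects to a
# local trigger port. Port 9999 is arbitrary; the angle-bracket values are
# placeholders, as in the one-shot script above.
import asyncio
import discord

intents = discord.Intents.default()
client = discord.Client(intents=intents)
trigger_server = None

async def handle_trigger(reader, writer):
    # Any connection counts as "motion detected"; no data needs to be sent.
    writer.close()
    channel = client.get_channel(<channel_ID>)
    if channel is not None:
        await channel.send("Motion Detected!\nhttp://192.168.0.50:8765")

@client.event
async def on_ready():
    # on_ready can fire again after a reconnect, so only start the listener once.
    global trigger_server
    if trigger_server is None:
        trigger_server = await asyncio.start_server(handle_trigger, "127.0.0.1", 9999)

client.run('<Bot_Login_Token>')

MotionEye’s command would then be something as cheap as nc -z 127.0.0.1 9999, which returns almost instantly.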

Final Question

Am I missing anything obvious on my road map? Let me know in the comments below or on my Socials!
