Programming a Pi to Deter Cats: Part 2.01 (Robot Ethics Monologue)

Good Morning from my Robotics Lab! This is Shadow_8472, and I don’t feel like I have much progress to show this week. Let’s get started!

I continued with the tutorial I started last week and kept running into walls. I wanted a Hello World program up and running to show for this week’s post, but that simply isn’t happening unless I skimp on writing this post and brute-force it within the next couple of hours.

I suppose I could try sorting out my feelings about robot cruelty instead.

Several years ago, I wrote a paper about the future mistreatment of robots. The gist was that since animal abuse is strongly tied to human abuse, and the brain handles some robots more like people than it does animals, there should be some form of legal protection for lifelike robots by the time they come into common use. The robots themselves might not need the protection, but the people around would-be robot abusers would benefit from there being fewer abusers.

On the other hand, now that I know a little more about the prototyping process, I can see that more care would need to go into defining “abuse” lest the industry suffer.

I recently listened to a story where one of the characters was discovered to be a robot after a set of shelves fell on her and damaged a limb. When her older sister questioned their parents, they said they had found her in a dumpster, but showed no sign of having known she was artificial.

In the above story, there could be any number of reasons a lifelike robot could end up in a dumpster. She could have been an abandoned prototype. She could have carried anomalous code that produced true emotions in a line of worker robots, and someone smuggled her out instead of secretly destroying or reprogramming her. Or she could simply have outlived her usefulness to a previous owner, who wiped her memory and abandoned her.

In pretty much any of these cases, under today’s laws, robots are treated like property, though social pressure has forced the robotics company Boston Dynamics to stop showing footage of debugging their robots’ balance programs.

Where, then, should the line be drawn, if it should be drawn at all? The strongest opposing argument concerns people who could stop at abusing robots rather than moving on to humans, buying legal victims whose memories can easily be erased. And without going off on a long string of research, I don’t think I could answer which way would lead to fewer living victims.

For me, if asked to draw the line right now, I’d go easy on “abuse” performed in a professional context, as well as on robots not designed or modified to relate as artificial persons. Digital assistants are a bit fuzzy here. They are often bundled into modern operating systems, though I try to limit their scope over privacy concerns, thereby “neglecting” them.

Final Question: Should there be any laws against the mistreatment of robots, and if so, how would you word such a law so it stops the potentially harmful cases while permitting ethical or even necessary forms of “abuse”?
