Generative AI: Ethics on the Frontier

Good Morning from my Robotics Lab! This is Shadow_8472 and today I have a few thoughts about ethics when living on a frontier. Let’s get started.

Law and New Technology

Law follows innovation. A world without motor vehicles or electricity won't require cars to stop at a red light. Conversely, new technologies bring legal uncertainty. A nuclear-powered laptop might be ready for 20 years of abuse in an elementary classroom without leaking any radiation, but expect it to face more courtroom pushback than a mind-reading camera, at least until the legal system can parse the respective technologies: existing law already knows to fear "nuclear," while mind reading fits no statute yet.

Generative AI in 2024 is data hungry. More training data makes for a better illusion of understanding. OpenAI's GPT-4o reportedly can read a handwritten note and display emotion in a verbal reply in real time. If they haven't already, they will soon have a model trained on every scrap of text, video, and audio freely available, as well as whatever databases they have access to. But the legal-moral question is: what is fair game?

Take drones as a recent but legally more mature point of comparison. Generally speaking, drones should be welcome wherever recreational R/C aircraft already are. Hover as though you might be spying on someone expecting privacy, though, and there might be trouble. Laws defining the boundaries between these and similar behaviors protect drone enthusiasts and homeowners alike. Before that compromise was solidified, the best anyone could do was not be a jerk while flying or complaining.

The AI Art War

But not everyone’s idea of jerk behavior is the same. Many AI trainers echo the refrain, “It’s not illegal, so we can scrape.” Then digital artists on hard times see AI duplicating their individualized styles, and they fight back. Soon, jerks are being jerks to jerks because they’re both jerks.

Model trainers practically need automated scraping, which precludes the opt-in consent model artists want. Trainers trying not to be jerks can respect name blacklists, but improperly tagged re-uploads sneak in anyway. Artists can use tools like Glaze and Nightshade to poison training sets, but so far it’s just a game of cat and mouse.
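To make the blacklist weakness concrete, here is a minimal Python sketch of the filtering step a well-meaning scraper might run. The record layout, the “artist” and “tags” fields, and the blacklisted names are all hypothetical illustrations, not any real trainer’s pipeline.

```python
# Hypothetical sketch: filtering a scrape against an artist opt-out blacklist.
# The record layout ("artist", "tags") is an assumption for illustration only.

ARTIST_BLACKLIST = {"example_artist"}  # names that asked to be excluded

def passes_blacklist(item: dict) -> bool:
    """Keep the item only if no blacklisted name is credited anywhere.

    This only works when uploads are tagged honestly; a re-upload that
    drops the artist's name gives the filter nothing to match against.
    """
    credited = {item.get("artist", "").lower()}
    credited.update(tag.lower() for tag in item.get("tags", []))
    return ARTIST_BLACKLIST.isdisjoint(credited)

scraped_items = [
    {"artist": "example_artist", "tags": ["landscape"]},   # excluded by name
    {"artist": "reposter42", "tags": ["example_artist"]},  # excluded by tag
    {"artist": "reposter42", "tags": ["landscape"]},       # uncredited re-upload: kept
]

training_set = [item for item in scraped_items if passes_blacklist(item)]
print(len(training_set))  # 1 -- the uncredited re-upload sneaks in anyway
```

The last record shows the failure mode described above: strip the artist’s name off a re-upload and the filter has nothing left to match against.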

Those are the facts, stated as objectively as I can manage. My thoughts are that artists damage their future livelihood more by excluding their work from training data. The whole art market will be affected as they lose commissioners to a machine that does “good enough.” Regulars who care about authentic pieces will be unaffected. Somewhere between these two groups are would-be forgers of their favorite style and people using AI to shop for authentic commissions. I expect the latter group to be larger, so the moral decision is to make an inclusive model.

At the same time, some countries recognize a right to be forgotten. Verbally abusing AI art jerks provides digital artists with a much-needed sense of control. While artists’ livelihoods are threatened on many sides, AI is approachable enough to attack, so they vent where they can. I believe most of the outcry is overreaction, but remember I’m biased in favor of Team Technology, though I am not wholly unsympathetic to their cause. I am in favor of letting them exclude themselves, just not for the reasons they would rather hear.

Takeaway

I see the AI situation in 2024 as comparable to China’s near monopoly on consumer electronics and its open secret of human rights violations. In theory you could avoid unethically sourced consumer goods, but oftentimes going without is not an option. You can then see the situation as forcing you to support immoral practices, or you can see yourself as making the effort to find the best, though only, reasonable option available. The same applies to AI. All other factors being equal, I intend to continue using AI tools as my conscience allows.

Final Question

Do you disagree with my stance? Feel free to let me know in the comments below or on my Socials!
