- cross-posted to:
- technology@beehaw.org
- reddit@lemmy.world
They always were.
Only now they’ve agreed to pay Reddit for it. This is what their third-party app lockdown was really all about.
They’re helping themselves to your Lemmy comments for free, as that’s just how it’s designed. If you post anything publicly anywhere, it’s getting slurped up by a bot somewhere.
I’m not a lawyer. But isn’t the reason they had to go to reddit for permission that users hand over ownership to reddit the moment you post? And since there’s no such clause on Lemmy, they’d have to ask the actual authors of the comments for permission instead?
Mind you, I understand there’s no technical limitation that prevents bots from harvesting the data; I’m talking about the legality. After all, public does not equate to public domain.
users hand over ownership to reddit the moment you post
Not ownership. Just permission to copy and distribute freely. Which basically is necessary to run a service like this, where user-submitted content is displayed.
And since there’s no such clause on Lemmy, they’d have to ask the actual authors of the comments for permission instead?
It’s more of a fuzzy area, but simply by posting on a federated service you’re agreeing to let that service copy and display your comments, and to sync with other servers/instances so they can copy and display your comments to their users. It’s baked into the protocol that your content will be copied automatically all over the internet.
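To make that concrete, here is roughly what a federated comment looks like on the wire. This is a hand-written sketch of an ActivityPub Create/Note activity, not captured Lemmy output; the field values are made up, and the point is the addressing to the public collection, which invites any server (or scraper) to fetch and store the content.

```python
import json

# A hand-written approximation of an ActivityPub "Create" activity carrying
# a comment (a "Note"). Values are invented; the shape follows the
# ActivityStreams vocabulary that Lemmy/Mastodon-style servers federate with.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://lemmy.example/u/somebody",             # hypothetical user
    "to": ["https://www.w3.org/ns/activitystreams#Public"],  # world-readable
    "object": {
        "type": "Note",
        "id": "https://lemmy.example/comment/12345",         # hypothetical id
        "content": "They always were.",
        "published": "2024-02-20T12:00:00Z",
    },
}

# Every subscribed instance receives a copy of this JSON and stores it.
print(json.dumps(activity, indent=2))
```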
Does that imply a license to let software be run on that text? Does it matter what the software does with it, like display the content in a third party Mobile app? What about when it engages in text to speech or braille conversion for accessibility? Or index the page for a search engine? Does AI training make any difference at that point?
The fact is, these services have APIs, and the APIs allow for the efficient copying and ingestion of the user-created information, with metadata about it, at scale. From a technical perspective, scraping is obviously easy. But from a copyright perspective, submitting your content into that technical reality arguably grants implicit permission to copy, maybe even for things like AI training.
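For instance, Lemmy’s own HTTP API hands out comments in bulk with no authentication. A minimal sketch of paging through recent comments follows; the endpoint path and response field names reflect my understanding of the current v3 API and should be checked against the instance’s docs.

```python
import requests

# Page through recent public comments on a Lemmy instance. The /api/v3/
# endpoint and the response shape are assumptions based on the current API;
# verify against the instance's documentation before relying on them.
BASE = "https://lemmy.world"  # any public instance

for page in range(1, 4):
    resp = requests.get(
        f"{BASE}/api/v3/comment/list",
        params={"sort": "New", "limit": 50, "page": page},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["comments"]:
        comment = item["comment"]
        print(comment["published"], comment["ap_id"])
        print(comment["content"][:80])
```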
This form of propaganda is my pet peeve. They’re not “your posts”: as soon as you put something out in public, you don’t get to have your cake and eat it too. It’s out there; you shared it. Don’t share it if you don’t want humanity to ingest and use it.
You’re technically right, but nobody anticipated, and therefore nobody agreed to, their posts being used for training LLMs.
Public information is public information.
Oh boy have I bad news for you. You ever heard of copyright?
Have you ever heard of fair use?
Finally found a use for MS Edge, loaded up Nuke Reddit History and removed all comments and posts: https://microsoftedge.microsoft.com/addons/detail/nuke-reddit-history/bklbcgohenjegdibgmppligaapohkgip
Hate to break it to you, but the time to do that was over a year ago, and even then it wasn’t ever really a sure thing - we don’t really know what their backup policies are around that stuff.
This is what the former power user community that made an exodus from Reddit roughly a year ago has been trying to communicate, but a ton of people here seem to enjoy keeping their toes in the water over there, with rather predictable consequences (literally, the post we’re commenting on).
All that said: I am very much looking forward to the absolutely titanic lawsuit around GDPR I’m sure is in the works over this.
Some day historians will be able to look back at this moment and be able to determine it was what caused ChatGPT to become horny and weird.
Only an idiot would decide to mindlessly trawl Reddit to train an LLM. They’ll be confused when their model is suddenly confidently wrong about everything, and they’ll have no clue why.
You are a hundred percent right, but how many idiots are there out there?
Uncountably many
So they filled reddit with bot-generated content, and now they’re selling the same stuff back, likely to the company that generated most of it.
At what point can we call an AI inbred?
This is actually a thing. It’s called “Model Collapse”.
“Model collapse” can be easily avoided by keeping old human data with new synthetic data in the training set. The old archives of Reddit content from before there was AI are still around.
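In code, the mitigation is unglamorous: pin a fixed share of known-human text in every training mix. A toy sketch, where the file names and the 90/10 split are arbitrary choices for illustration:

```python
import random

# Toy illustration of guarding against model collapse by anchoring the
# training mix with pre-AI human-written text. File names and the ratio
# are hypothetical choices for the example.
HUMAN_SHARE = 0.9

def load_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

human = load_lines("reddit_dump_2022.txt")        # hypothetical archive
synthetic = load_lines("model_outputs_2024.txt")  # hypothetical new data

# Cap the synthetic portion so human text stays at HUMAN_SHARE of the mix.
n_synth = int(len(human) * (1 - HUMAN_SHARE) / HUMAN_SHARE)
mix = human + random.sample(synthetic, min(n_synth, len(synthetic)))
random.shuffle(mix)
print(f"{len(human)} human + {len(mix) - len(human)} synthetic examples")
```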
A model trained on jokes about bacon, narwhals, and rage comics.
By “old archives” I mean everything from 2022 and earlier.
But there were still bots making shit up back then. r/SubredditSimulator was pretty popular for a while, and repost and astroturfing bots were a problem for years on Reddit.
Existing AIs such as ChatGPT were trained in part on that data, so obviously they’ve got ways to make it work. They filtered out some stuff, for example; the “glitch tokens” such as solidgoldmagikarp were evidence of that.
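The way glitch tokens arise is mechanical: a string was frequent enough in the tokenizer’s training data to earn a vocabulary slot, but was then filtered out of the model’s training data, so the model never learned what the token means. A rough way to hunt for candidates, using the tiktoken library and a stand-in corpus file (both my choices for the example, not anything OpenAI has published about their actual filtering):

```python
from collections import Counter

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# A local text file standing in for "the training corpus" (hypothetical).
with open("corpus_sample.txt", encoding="utf-8") as f:
    seen = Counter(enc.encode(f.read(), disallowed_special=()))

# Tokens that exist in the vocabulary but never occur in the corpus sample
# are candidates: the tokenizer learned them, the model may never have.
candidates = []
for token_id in range(enc.n_vocab):
    try:
        token = enc.decode([token_id])
    except KeyError:  # gaps and special ids in the vocabulary
        continue
    if token_id not in seen:
        candidates.append(token)

print(len(candidates), candidates[:20])
```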
BRB - changing my entire 15 year reddit comment history to “Fuck Spez”. LOL.
Know any bots or ways to perma delete all Reddit comments?
Reddit has backups; permanently isn’t an option.
They don’t keep multiple versions, though: edit it and then delete it, and it’s gone. They’ve disabled all the tools to do it, so it’s manual or nothing now.
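For reference, those tools mostly boiled down to a loop like this over the official API, which still works for personal scripts if you register your own credentials. A minimal sketch using the PRAW library; the credentials are placeholders for a “script” app registered under your account.

```python
import praw

# Overwrite-then-delete every comment on the account. Credentials are
# placeholders for a "script" app registered at reddit.com/prefs/apps.
reddit = praw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    username="USERNAME",
    password="PASSWORD",
    user_agent="comment-shredder/0.1 by USERNAME",
)

for comment in reddit.user.me().comments.new(limit=None):
    comment.edit("Fuck Spez")  # overwrite first, in case deletion is soft
    comment.delete()
    print(f"shredded {comment.id}")
```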
They just reload a previous cached comment; it doesn’t matter how many times you edit or delete, it’s all logged and backed up.
So it’s going to be a libtarded libtard AI that doesn’t represent the majority of the people, got it.