A while ago we had a post with a comic that was a bit controversial due to it being generated by genAI, but we did not explicitly have a rule against it.
We wanted to discuss this and ask the community, but this had apparently already been a topic on feddit.uk for a while and they have made an instance rule about it (announced in this post).
Since the buyeuropean community is on feddit.uk, the feddit.uk rules apply to this community, and therefore I wanted to announce this new rule so it doesn’t come as a surprise.
Copy of the post body text from the announcement of this rule on feddit.uk:
So no:
- AI generated memes or images
- AI generated answers to questions
edit: this applies to feddit.uk communities, we won’t block AI art communities on other instances or sanction our users for posting on them.
Weird rule. I’m against it. I view AI art neutrally, it’s just a tool.
Thank you for announcing it, I never noticed any AI generated content here, but still better safe than sorry
How will you possibly enforce this?
Well, like any other rule. The community helps a lot with this.
Most of the time it’s clear and OP doesn’t hide the fact - in those cases we hope OP checks the rules before posting and then decides not to.
But let’s say some popular post gets a lot of comments, and one of the comments is just two lines of harmless AI text, so no one pays attention or knows it’s genAI and therefore no one reports it: we won’t notice and the comment that technically breaks the rules stays. The commenter has either not read the rules or, if they have, is celebrating their victory at home for passing the great mod filter wall.
Mods are just normal users who are part of this community and volunteered to moderate it when they can. Mostly we have been acting on reports by other users of this community, and we filter out troll posts and comments that are e.g. racist, transphobic, etc.
We don’t have sophisticated modding tools that parse every post and comment; if no one notices or reports rule-breaking content, it won’t be acted upon.
How will you know?
deleted by creator
oh why?
because LLMs can’t differentiate between fact and fiction, which is quite important when trying to determine the origin of a product? and because AI generated content is, most of the time, low effort garbage?
What if the text on an image is factual but the accompanying stock photo is just an AI generated one? What’s the harm and/or who cares?
if you use an AI-generated header for your article, then I’m going to assume the text has been AI-generated, too. and I’m not going to bother reading something that no one could be bothered to write.
People have tried so damn hard to be objective. To take their own subjectivity out of their writings.
But that’s impossible.
AI can do just that. It can analyse far more data than you can even imagine.
It’s the future.
AI is never objective. It’s always influenced by its training set and its parameters. What data is it going to analyse? Where does that data come from? And even if it were: choosing to write about one thing instead of another is also bias.
Humans are also never objective. Which is good. I’d rather know the biases of the author instead of some fake objectivity.
Funnily enough, the best explanation in this thread was just me copy-pasting it from le chat mistral.
It simply gave a good explanation of how it works. Why it can’t be objective.
It’s removed though.
Objectivity is the wrong word then. I seek to know multiple angles all at once.
Nobody on this thread is pro AI, but that’s insane, as it’s one of the fastest growing markets. So there’s a lot of information lacking here.
“AI” doesn’t have a mind of its own to formulate an “objective” opinion, it just regurgitates whatever it’s being fed, and what it’s being fed is our biases.
It objectively states a summary of all of our combined biases. Which is valuable.
What else are you going to do? Humans are always going to search for information that supports their own bias.
AI forces them to read through bullet points that go against their own bias. It lowers the effect of polarisation if this is done on a large scale.
it objectively
nope, it doesn’t have a way of telling what’s objective and what isn’t.
Which is valuable.
So basically we should stop funding le chat mistral ai and miss out on a market just like we did with smartphones?
You think that’s a good idea buddy? Not supporting our own products?
oh no, not missing out on the technology that hallucinates false information and makes fake people with six fingers, for a meagre cost of half an Amazon jungle per prompt! the horror!
Are you too young to have gone through past innovations? Have you not used the internet in 2002? YouTube was laughably bad back when it started. Microsoft was just a basic company.
You don’t know that AI will be improved upon? Are you this ignorant?
The Belgian government already made it law to use Peppol invoices. That’s so that AI can automate the bookkeeping and so that governments will have all the information they need in order to tax correctly.
Damn fools on this platform
I want to mention I’m personally not against AI generated content, and I don’t know why so many people seem against all forms of AI, especially when it comes to images. But I am against wrong information and low-effort crap, so I will just say: you do you, feddit.uk, and good of our community to follow the instance.
I’m against it because of the questionable ways AI gets trained (stealing art or books for example) and also because of the environmental impacts.
With the unethical training habits and energy consumption from megacorps, in addition to just being brain-killing slop, AI generated content should have no place in social media. https://en.wikipedia.org/wiki/The_Sorcerer's_Apprentice
I agree those two are bad things, but to me that’s not enough reason to ban it entirely.
You’re ignoring one very problematic aspect: these artists and authors they’re stealing from? They have no way to opt-in or opt-out. These multi-billion dollar companies can just slurp up whatever they want, and they do. What’s your favorite web comic artist or indie musician going to do? Sue them? With what money?
Nowhere is consent a consideration, and until these companies start acting in good faith instead of like billionaires (fat chance), they should not be allowed to run their slop generators.
If you generally mean “machine learning,” I agree that there are good applications, such as in medicine. The arts, though? It has no business there.
What’s your take on AI like Adobe’s Firefly? It’s supposedly trained on their own licensed stock images, the artists get paid, and they can opt in / opt out.
Edit: How on earth do people downvote a question? I want to know something. Why bury my inquiry? Did you guys actually read my post? I’m not defending any company. I’m ASKING about what the company claims, to LEARN.
It was not; their claims are not verifiable, as they also used “third party datasets” without disclosing which ones. Also, opt-out options supposedly appeared only after they had used their stock to train/fine-tune Firefly, which contradicts their promise of notifying users before doing so and is likely illegal in most European countries. To add to that, their stock is not curated/moderated and anyone can upload an artist’s whole collection there without agreement (opt-out is a deeply flawed system that protects no one), not to mention that the “pay” the contributors received was absolutely ridiculous…
Thanks for clarifying. Very informative. I agree with your POV, it’s logical. I still don’t get why people downvoted my question.
Still disagree, although I agree that is a very problematic aspect. I think the AI companies have been given too much freedom. I’m also fine with everybody choosing not to use it. I also agree artists were fucked over and deserve artistic credit and financial compensation, and I will support them getting this. I myself have not generated an image that looks like something from Ghibli, for example. But still, all things considered, I don’t think banning all AI generated content is a reasonable thing to do. That has more to do with my world view concerning individual freedom than with how I view AI generated things and the behaviour of AI companies.