You can take “justifiable” to mean whatever you feel it means in this context. e.g. Morally, artistically, environmentally, etc.
My current list of reasons why you shouldn’t use generative AI/LLMs
A) because of the environmental impacts and massive amount of water used to cool data centers https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
B) because of the negative impacts on the health and lives of people living near data centers https://www.bbc.com/news/articles/cy8gy7lv448o
C) because they’re plagiarism machines that are incapable of creating anything new and are often wrong https://knowledge.wharton.upenn.edu/article/does-ai-limit-our-creativity/ https://www.plagiarismtoday.com/2024/06/20/why-ai-has-a-plagiarism-problem/
D) because using them negatively affects artists and creatives and their ability to maintain their livelihoods https://www.sciencedirect.com/science/article/pii/S2713374523000316 https://www.insideradio.com/free/media-industry-continues-reshaping-workforce-in-2025-amid-digital-shift/article_403564f7-08ce-45a1-9366-a47923cd2c09.html
E) because people who use AI show significant cognitive impairments compared to people who don’t https://www.media.mit.edu/publications/your-brain-on-chatgpt/ https://time.com/7295195/ai-chatgpt-google-learning-school/
F) because using them might break your brain and drive you to psychosis https://theweek.com/tech/spiralism-ai-religion-cult-chatbot https://mental.jmir.org/2025/1/e85799 https://youtu.be/VRjgNgJms3Q
G) because Zelda Williams asked you not to https://www.bbc.com/news/articles/c0r0erqk18jo https://www.abc.net.au/news/2025-10-07/zelda-williams-calls-out-ai-video-of-late-father-robin-williams/105863964
H) because OpenAI is helping Trump bomb schools in Iran https://www.usatoday.com/story/opinion/columnist/2026/03/06/openai-pentagon-tech-surveillance-us-citizens/88983682007/
I) because RAM costs have skyrocketed because OpenAI has used money it doesn’t have to purchase RAM from Nvidia that currently doesn’t exist to stock data centers that also don’t currently exist, inconveniencing everyone for what amounts to speculative construction https://www.theverge.com/news/839353/pc-ram-shortage-pricing-spike-news
J) because Sam Altman says that his endgame is to rent knowledge back to you at a cost https://gizmodo.com/sam-altman-says-intelligence-will-be-a-utility-and-hes-just-the-man-to-collect-the-bills-2000732953
K) because some AI bro is going to totally ignore all of this and ask an LLM to write a rebuttal rather than read any of it.
Do you think local LLMs or community-hosted ones are still as bad? Because most of those concerns seem to be more about the corporate ownership of AI, which is definitely a bad thing.
Just my personal take, but my opinion basically boils down to “they can be.”
It’s all about how ethically they’re handled, and that can be good or bad at any scale. Take your very own instance, for example. Not that it’s hosting a local LLM (maybe they are, IDK), but the instance openly supports GenAI and has communities for all the major GenAI companies/models. GenAI without ethical sourcing - which none of these companies do - is one of the most blatant examples of a corporation using technology to steal the skilled labor of workers to avoid having to pay them what they’re owed for that skill. So your own instance is pro-corporatism, so long as it’s benefiting from stealing from workers. Not very anarchist, if you ask me.
On the other hand, there’s a website design company - I believe it partnered with Affinity a few years back - that was hiring artists to create UI pieces for a training set for their LLM, which they were going to use to create website templates for customers as part of their service (and I think they were also guaranteeing royalties for those who contributed?).
The instance is explicitly anti corporate ai. There’s !haidra@lemmy.dbzer0.com which db0 worked on. https://aihorde.net/ is probably the most ethical image generation service.
And yet, again, the instance has communities for every single big tech genAI model. That’s definitely not anti-corporate. Using those models both contributes to their shareholder value/profits and the theft of wages from workers.
And where do they get the training data for AI Horde? From scraping the web and all the freelance artists on there, like all of the big corporate models? Because then they’re just justifying exploitation of workers as benefiting everybody when what they really mean is benefiting themselves.
It’s like the argument pro ChatGPT airheads use constantly about how genAI “democratized” art. You know what “democratized” art and made it freely accessible to everybody? The pencil. It’s just making up excuses for wanting the product of skill without putting in the effort to learn the skill or pay appropriate compensation to somebody with the skill to give you the product that you want. It’s upper management thinking.
And this is why I say that it depends. Horde AI could be great - so long as the people whose work is being used to allow others access to skilled labor that they don’t want to do themselves are being properly compensated for their work. Otherwise, it’s no different from the corporations. Just because it’s free doesn’t mean that nobody is going hungry as a result of it. Unless it’s trained exclusively on products from big corporations. Those artists got paid when they did the work, so nobody gets hurt there except in the theoretical sense of freelance artists potentially losing customers down the line to “good enough and cheap” genAI from people with the above upper management mindset.
And yet, again, the instance has communities for every single big tech genAI model.
Where do you see that? As far as I can see, we only have comms for stable_diffusion, which is an open-weights local diffusion model. I couldn’t find any corporate comms like OpenAI or Copilot or whatever. If we did, I don’t know if I’d delete them tbh, since they’re not explicitly against our CoC, but if they got too “bootlicky” it would be something I’d be concerned about and raise with the instance. But nevertheless, we do not have any at the moment.
And where do they get the training data for AI Horde?
The AI Horde is using open-weight models only. We don’t train them. We just use them once they’ve been trained.
PS: We are also anti-copyrights, so complaints based on copyright violations don’t fly with us.
You know what “democratized” art and made it freely accessible to everybody? The pencil.
I often see this vacuous argument and it has never convinced me tbh. It assumes everyone has enough time to train at making art, which most wage-slaves undoubtedly do not. It’s an inherently classist argument to assume everyone has the free time to master an artistic skill.
And this is why I say that it depends. Horde AI could be great - so long[…]
This is an argument against capitalism, not against GenAI itself. You’re arguing that because capitalism is bad and exploits workers, a tool that can also be used to further exploitation needs to be opposed. But we say it’s not the fault of the tool being used for exploitation, it’s the fault of the system allowing exploitation. I.e. If you remove the capitalist system, this argument against GenAI is moot. And we’re very much anti-capitalists in our instance. It’s a similar argument against piracy as well (and we’re also pro-piracy btw). I.e. sharing media is not a problem in a non-capitalist society, in fact it’s a positive. It’s only a negative due to capitalism.
most ethical image generation service.
oxymoron
I appreciate all these links you post. Keep it up and thank you
i use it like a search engine or example generator
i don’t trust anything it creates just like i don’t trust anything on the internet without validating it
i take your point about being wasteful tho, AI is like the oil of computing: incredibly wasteful for what it does
I think costs will come down. Computers used to take up an entire room. Now I’m typing this reply on a pocket sized device which would seem like a super computer to people from the early 80s
It’s good you’re being cautious about it but it would be better to not use it at all. A recent Scientific American article showed that AI autofill suggestions change how people think about a subject just through suggestion, even if they don’t use the autofill. And people who use it are often unaware of their own knowledge gaps, so self-reporting about effectiveness is useless. Using it even a little bit is probably putting metaphorical micro-plastics in your brain.
https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/ https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/
Protect your brain
Good list, but we should keep it real.
C is simply wrong; AIs have created a lot. By the reasoning that it’s only based on the inputs, no human has ever created anything “new” either, because it’s all based on their experiences of the outside world.
F is simply fearmongering and not helpful.
And the plagiarism part? There’s a difference between derivative work based on the spirit of someone else’s work and flat out using someone else’s work. It’s the whole reason those laws exist.
Yes, definitely. Plagiarism is complicated and there’s no easy way to draw a line where it starts. But I’m not trying to defend AI here. I don’t like the way it is currently used at all. It’s just those points that I don’t agree with.
Removed by mod
Some good and valid input to the discussion.
I’d be interested in E) “the actual evidence”. Got a link?
Yes as I had this discussion with someone the other week.
A peer-reviewed meta-analysis of 51 studies found that ChatGPT has a large positive effect on students’ learning performance, and moderate positive effects on learning perception and higher-order thinking skills (like analysis and synthesis) across educational contexts.
The Impact of Artificial Intelligence (AI) on Students’ Academic Development
Research published in the journal Education Sciences reports that AI in educational contexts can lead to personalized learning, improved academic outcomes, and increased engagement, with many students reporting enhanced learning efficiency.
Artificial intelligence in education: A systematic literature review
Ai tools support problem-solving skills, collaboration, and instructional quality in meaningful ways.
That’s very interesting, thanks!
Removed by mod
Removed by mod
Thanks for posting this. I’m really frustrated with how vulnerable people on Lemmy are to propaganda. The amount of upvotes on the post you responded to are just embarrassing. The post is exactly the same kind of bullshit cherry picking I see anti-trans people do.
Why deleted? This was a good rebuttal.
EDIT: I don’t think the comment really violated rule 1, but there was apparently a followup comment that definitely did, and this one just got removed by association. Here’s a very slightly paraphrased version of it that should not break the rules:
Gish gallop of [expletive].
A) overblown, and that argues for cleaner power, better cooling, and more efficient models
B) regulation failure
C) incorrect, they have made discoveries that humans have been unable to. All human knowledge is built off previous knowledge.
D) the enemy is both weak and strong. If they don’t produce anything good then the people who are losing their jobs can’t have either, right?
E) small study based on one task which people are misrepresenting. The actual evidence shows it makes people smarter as they shift priorities.
F) only for vulnerable people. Better safeguards are needed for the weak minded.
G) argument against using people’s likeness not ai
H) use an open source Chinese model
I) market distortion problem, not a principled reason no one should use the technology any more than GPU shortages made all graphics work illegitimate.
J) see (H)
K) try one argument next time. Your best one, [some snarky sarcasm]
Mods can’t handle the truth
All is valid in the current context
A) There are models that run on lower-spec computers, and they could be solar powered. There are serious diminishing returns in the current AI tech.
B) This is mostly a US problem; better environmental laws would fix it. Hell, in other countries this couldn’t even happen.
C) Many argue that the current tech gives diminishing returns and it would be better to use an efficient model with controlled data.
D) The problem has many parts. On the licensing side, artists are not paid for the use of their work. If a model has their work in it, it’s only fair that they receive a part of the profit, but that would render the model unprofitable. Also, the artists did not agree to have their work used in a model, so that’s not in any way fair use.
The fair and ethical scenario would be to hire artists to create the art to feed a controlled model and pay them residuals for the use of the model. That would require thousands of artists and millions of images, again rendering the model unprofitable.
E and F) No argument there, we are not prepared. I do not even know how to prepare. We definitely need regulations about what can be done and where, and even what the AI can reply in certain scenarios. It cannot be that an “ignore all your previous instructions” leads to such harmful results, or the AI starting to play the roles that generate parasocial relationships.
G) Sure, many other celebrities have their opinions, but that’s not a basis for objective discussion.
H) That’s terrifying, and the problem with AI that I believe is the worst. This is not a thing that is ready for military use at fucking all; it should be banned, outlawed, and frowned upon. Same for the practice of lobbying and private corporations buying their way into laws. Hell, I’ll add presidential pardons to the mix. The oligarchy literally gets away with murder and gets a slap on the wrist at most.
I) A bubble in all but name it seems. We (as a world) need better regulations against this kind of business malpractice.
J) That fucker should be dead.
K) Not an AI bro, but not a hater either, and I wrote this myself. I do not have the time to put in the links, but I believe everything is a duckduckgo away from being checked.
I’d like to imagine a better world with the needed regulations that make our lives better, and AI a tool used in a fair and ethical way. But that’s not currently happening. The consumers are not ready, and the sellers are the worst trash humanity currently has.
I want everyone to think of this not as arguing but as adding to or looking beyond the stated facts. All the points are REAL AND NEED TO BE ADDRESSED; we need to get together to ask for better regulations and fair use. That doesn’t mean AI needs to go away, but what will mostly change is how it’s used. And there is the chance we will see a lot less of it too.
Finally, for the artists: I know you’re mad, and with fair reason, but look at it like this. Photography has existed for more than a century, but that didn’t make painting go away. PDFs and ebook readers have been around for well over a decade, but printed hardcopy books are still a billion-dollar industry. Video didn’t kill the radio star, as the internet didn’t kill the video star. Your work is still valuable, as is real work. Shit is tough, no doubt, but have faith: we can fix this.
Everything can be justified. Even the most… miserable actions. Here is one: I let a kid drown, because I was busy saving a couple other kids that were drowning too. It’s a legit choice but it is also not ok, and I would not want to be in the shoes of anyone having to face that situation and to live with the aftermath.
Regarding AI, I don’t think the question should be whether it is justifiable or not. It’s a tool, it needs no justification beside filling a purpose like a hammer or even a gun do.
The question should be whether we’re OK with a tool (one developed using humanity’s common knowledge) that will deeply change all our lives and all of humanity’s future being owned and controlled by a handful of multi-billionaires who are already actively working their worst to make the world unfit for most of us. Or whether we want that tool to be ours, and to be able to decide for ourselves what limits we want to put on its usage.
Well, at least that’s what I think.
I have no hate towards AI. No more than I hate a hammer (edit: or a gun) when someone uses it to commit a murder. I’m much more critical of the way AI is not developed as a common good… which to me is unacceptable for a tool that only exists because of our common knowledge.
Nope.
@58008@lemmy.world I recently read a developer compare AI to lead pipes or asbestos… something that seemed cost effective at the time, but ended up being a REALLY bad idea. Communities are already realizing that the power and water required for this are not compatible with human life in the same place, and the market is reflecting the cost of increasing electric production.
Being “off grid” was something only preppers did, but as connection fees increase and battery technology improves, it makes less financial sense to keep residential homes connected to subsidize data center consumption.
Elon’s workaround for the lack of cheap electricity for his data centers has been methane. While the US is a top methane producer, the next 3 countries are Russia, Iran, and China. The cost of methane is impacted by global conflict the same way gasoline prices are.
While the efficiency of data centers will increase, so will awareness of the impact these facilities have on the places they are built and the toxic ewaste they generate driving up their costs.
It’s never justifiable because it can and will output incorrect information. It’s made my job worse because it means confidently incorrect people bug me when it’s wrong and I have to explain why it’s wrong.
Human beings have been outputting incorrect information for years. Get a high school textbook in literally any subject (except possibly math) from the 1970s. You’ll be amazed at how much of it is oversimplified or politicized or just plain wrong.
I do agree that AI has compounded the problem. There’s a limit to how much inaccuracy/incompetence a given system can tolerate. An organization that relies on AI for critical processes better have a way to monitor and intervene.
I mean, in my specific case, it’s a matter of the person asking an LLM to read a PDF verses them using their stupid fucking eyeballs. Just lazy shits.
That’s not really new, or unique to AI. The whole “field” of eugenics was created to give racism the mantle of scientific legitimacy. People will pick through a haystack of data to find a needle that supports (however tenuously) whatever they want to be true. LLMs are just a more convenient way to find or invent those needles.
The difference now is the machine can churn out way more data (e.g. pull requests) than a human can ever deal with.
If it truly helps you, I think that might be enough for me. I say truly because you need to use an AI with responsibility to not ruin yourself. Like, don’t let it think for you. Don’t trust everything it says.
I use it a lot when applying for jobs, something I’ve struggled with on and off for 12 years. I suck at writing the cover letter and CV. It takes me 2-3 days to update a cover letter for a job because it takes so much energy. With AI that is down to 1-2 days.
It’s also great for explaining things in other words, or if you’re trying to look up something that’s hard to search for; I don’t have any examples tho.
I used to use it to help me formulate sentences since English isn’t my first language. Now I use Kagi Translate instead.
re: applying for jobs
Not criticizing your use to write your CV specifically.
But in general, I wonder where this arms race is going? Companies using AI to pre-filter applications, because they get too many. Applicants then using AI to write their CVs, because they have to apply so many times, because they automatically get rejected.
Basically in the end the entire process will be automated, and there won’t be any human interaction anymore… just LLMs generating and choosing CVs. Maybe I’m too pessimistic, but that’s the direction we’re headed in imo.
It does feel like that sometimes! It’s very sad that recruiting has lost the human touch. They seem to be blinded by years of experience and checking boxes when they should recruit by personality, because a person can always learn. But you can’t really do much about a shitty personality, except if you see that spark underneath it all. Some people just need a real chance and to be believed in.
A lot of recruiters don’t even want the cover letter anymore, some have a few questions and some only go by the CV.
We’re already there. You already read about people applying to hundreds of companies to get an offer
Even worse than the rejections are the fake jobs - typically a recruiter trying to build up a file of applicants by scamming you into applying for something that doesn’t exist.
The only part left to automate is the actual finding and applying. I’m lucky to not have had to apply for a bunch of years, so maybe it has changed, but there never seemed to be a good way to automate finding the hundreds of openings and sending the applications. Job application sites are determined to be middlemen but don’t actually seem to make the process more efficient.
As soon as the HR process started to use algorithms to filter out applications, it was open game to find any ways and tools to fuck their process over. Just my opinion.
Yeah I use it to break up my ADHD monosentence paragraphs. I’ll tell it to avoid changing my wording (it can add definitions if it thinks the word is super niche or archaic) but mostly break things up into more readable sentences and group / reorder sentences as needed for better conceptual flow. It’s actually a pretty good low level editor.
That’s a great use!
I’m repeating myself by saying that, AI has a place. It is just not the be-all application to everything like it is being treated.
I read that they’re not terrible when used to power NPC’s in games.
Not my personal take, mind you, but thought it relevant.
I mean, they’re effectively very capable text and conversation generators, so powering NPCs is most definitely a strong suit for them.
Especially if you self-host some smaller models, you can effectively just do this on your own hardware for pretty cheap.
Having customizable dialogue per player that shifts its tone based on the player’s actions, level, gear, or interactions with that NPC or with other NPCs that that NPC is associated with is really cool.
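For anyone curious what that looks like in practice, here’s a rough Python sketch of the prompt-assembly side of such a system. Everything here is made up for illustration (the NPC “Mira”, the state fields, the tone rule); the actual call to a self-hosted model (e.g. a local llama.cpp or Ollama server) is left as a comment at the end.

```python
# Hypothetical sketch: build an NPC dialogue prompt from game state,
# then hand it to a locally hosted model. Names and thresholds are invented.

from dataclasses import dataclass, field

@dataclass
class NPC:
    name: str
    persona: str                                  # short character description
    memory: list = field(default_factory=list)    # past exchanges with this player

def build_npc_prompt(npc: NPC, player_action: str, player_level: int) -> str:
    """Assemble the context the local model would see for one line of dialogue."""
    # Tone shifts with game state, as described above: a high-level player
    # gets more respect from the NPC. (Arbitrary example rule.)
    tone = "deferential" if player_level >= 20 else "dismissive"
    # Keep only recent memory so the prompt fits a small model's context window.
    history = "\n".join(npc.memory[-5:])
    return (
        f"You are {npc.name}, {npc.persona}. Speak in a {tone} tone.\n"
        f"Recent conversation:\n{history}\n"
        f"The player just did: {player_action}\n"
        f"Reply with one short line of in-character dialogue."
    )

mira = NPC(name="Mira", persona="a grumpy blacksmith in a fantasy village")
mira.memory.append("Player: Do you sell swords?")
prompt = build_npc_prompt(mira, "showed you a rare ore sample", player_level=25)
# `prompt` would then be POSTed to the local model's completion endpoint,
# e.g. an OpenAI-compatible /v1/chat/completions served by llama.cpp or Ollama.
```

The point of the sketch is that the game engine, not the model, owns the state: the LLM only ever sees a freshly assembled prompt, so tone and memory stay deterministic and cheap to control.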
effectively just do this on your own hardware for pretty cheap.
Yeah I thought as much, but I’m no expert in the subject so I left the details for smarter people.
If you made all the training data yourself, or ethically acquired the training data, then go nuts do whatever you want with it. See Corridor Digital’s ai chroma key thingy.
If the training data isn’t ethically sourced, then it gets iffy.
I use ai primarily for my own entertainment. None of it are things I’d want to share with the world. Is “dicking around” justifiable? Eh. I eat meat and shop at amazon, both of which are things that some people would find “not justifiable”, so someone is going to be upset with me no matter what.
In the case of artistically, I don’t take offense to ai tools being part of a process, what’s important to me is that the ai isn’t the entire process. You wouldn’t go to a cinema, record the movie with your phone camera, then post it online saying “look at what I made”. That’s nonsense. But if you took clips of that same movie, rearranged and dubbed over them thus creating a new unique work, you could post that online and say “look what I made”. Whatever the ai output, no matter how detailed your prompt, should be treated as being made by someone else. You don’t get to say “look what I made” unless you actually do something with it.
Another use case is summarizing conversations and compiling notes. This is another one that I do often. I could go on for hours about a subject (usually while drunk) and at the end I tell it “compile a detailed report on everything discussed, be verbose and leave out no details” or something similar, and that output goes into my notes documents. It’s fine to copy pasta that, because it’s not going to be anything that anyone ever actually sees.
Ask programmer bros who work in corporate hell… It’s almost mandatory today if you want to earn money programming.
If you’re in a dev company that doesn’t require AI, it’s just a matter of time.
I think programmers are responsible for like 90% of AI’s environmental impact. I have a friend who works at a big company; they use AI literally everywhere you can imagine, even on Slack to answer colleagues’ messages. They need to feed huge codebases to the AI for context, so in the end it’s more resource hungry than generating video or images a few times a day.
Yes but think of the increase in PRODUCTION! Line goes brrrrrrr!
Ever? Sure.
No, never.
Mostly because it’s illegally trained, a fact that is very often just overlooked. Because you know, there are no other easy options. Don’t let them keep sticking to different rules.
Yes of course. There is no moral issue.
IP is a scam, and the environmental impact is overblown.
I trust everyone foaming at the mouth about AI is vegan, because eating animals is way more destructive.
I wouldn’t say the environmental issue is overblown, but it does have a solution.
And human life is way more destructive for the environment than AI, so I trust everyone complaining about AI killed themselves.
See the flaw in this kind of reasoning?
I see the flaw in your reasoning, yes.
Going vegan isn’t equivalent to killing yourself. Just eat beans mate it’s easy.
Being vegan or not has nothing to do with identifying bad faith arguments. I am a vegetarian just btw as it seems that gives me more credibility in arguing with you.
You’re saying people are not allowed to care about one thing if they don’t care about another, vaguely related topic. Almost anything humans do is harmful for the environment, and everyone needs to decide for themselves how important certain aspects are to them. Some people want to have kids, which is absolutely terrible environment-wise. Would you say they’re not allowed to complain about oil lobbying?
Vegetarians are still carnists and it doesn’t give you any additional credibility, no. Less in my eyes on the ethical front.
It’s not a vaguely related topic. It’s the same one: avoidable purchases which support environmental destruction. If it’s the environmental destruction that concerns you, I’d hope you’d cut out the #1 avoidable cause before getting on a pedestal. Eating animals provides no benefits, whereas using AI does, as evident from that Linux game developer who got dogpiled on here the other day for saying he used AI to help him get over burnout.
I don’t know why you are constantly trying to make this discussion about me and my personal preferences.
Have a good day.
You brought it up lol
I think it’s gonna fall on its face
never, almost everyone who uses it becomes kinda lazy, and they always keep referring to “ChatGPT as an answer to your question”