You can take “justifiable” to mean whatever you feel it means in this context: morally, artistically, environmentally, etc.

  • awmwrites@lemmy.cafe
    2 days ago

    My current list of reasons why you shouldn’t use generative AI/LLMs

    A) because of the environmental impacts and massive amount of water used to cool data centers https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117

    B) because of the negative impacts on the health and lives of people living near data centers https://www.bbc.com/news/articles/cy8gy7lv448o

    C) because they’re plagiarism machines that are incapable of creating anything new and are often wrong https://knowledge.wharton.upenn.edu/article/does-ai-limit-our-creativity/ https://www.plagiarismtoday.com/2024/06/20/why-ai-has-a-plagiarism-problem/

    D) because using them negatively affects artists and creatives and their ability to maintain their livelihoods https://www.sciencedirect.com/science/article/pii/S2713374523000316 https://www.insideradio.com/free/media-industry-continues-reshaping-workforce-in-2025-amid-digital-shift/article_403564f7-08ce-45a1-9366-a47923cd2c09.html

    E) because people who use AI show significant cognitive impairments compared to people who don’t https://www.media.mit.edu/publications/your-brain-on-chatgpt/ https://time.com/7295195/ai-chatgpt-google-learning-school/

    F) because using them might break your brain and drive you to psychosis https://theweek.com/tech/spiralism-ai-religion-cult-chatbot https://mental.jmir.org/2025/1/e85799 https://youtu.be/VRjgNgJms3Q

    G) because Zelda Williams asked you not to https://www.bbc.com/news/articles/c0r0erqk18jo https://www.abc.net.au/news/2025-10-07/zelda-williams-calls-out-ai-video-of-late-father-robin-williams/105863964

    H) because OpenAI is helping Trump bomb schools in Iran https://www.usatoday.com/story/opinion/columnist/2026/03/06/openai-pentagon-tech-surveillance-us-citizens/88983682007/

    I) because RAM costs have skyrocketed because OpenAI has used money it doesn’t have to purchase RAM from Nvidia that currently doesn’t exist to stock data centers that also don’t currently exist, inconveniencing everyone for what amounts to speculative construction https://www.theverge.com/news/839353/pc-ram-shortage-pricing-spike-news

    J) because Sam Altman says that his endgame is to rent knowledge back to you at a cost https://gizmodo.com/sam-altman-says-intelligence-will-be-a-utility-and-hes-just-the-man-to-collect-the-bills-2000732953

    K) because some AI bro is going to totally ignore all of this and ask an LLM to write a rebuttal rather than read any of it.

    • irelephant [he/him]@lemmy.dbzer0.com
      1 day ago

      Do you think local LLMs or community-hosted ones are still as bad? Because most of those concerns seem to be more about the corporate ownership of AI, which is definitely a bad thing.

      • EldritchFemininity@lemmy.blahaj.zone
        1 day ago

        Just my personal take, but my opinion basically boils down to “they can be.”

        It’s all about how ethically they’re handled, and that can be good or bad at any scale. Take your very own instance, for example. Not that it’s hosting a local LLM (maybe they are, IDK), but the instance openly supports GenAI and has communities for all the major GenAI companies/models. GenAI without ethical sourcing - which none of these companies do - is one of the most blatant examples of a corporation using technology to steal the skilled labor of workers to avoid paying them what they’re owed for that skill. So your own instance is pro-corporate, so long as it benefits from stealing from workers. Not very anarchist, if you ask me.

        On the other hand, there’s a website design company - I believe they partnered with Affinity a few years back - that was hiring artists to create UI pieces as a training set for an LLM, which they planned to use to generate website templates for customers as part of their service (and I think they were also guaranteeing royalties to contributors?).

          • EldritchFemininity@lemmy.blahaj.zone
            19 hours ago

            And yet, again, the instance has communities for every single big tech genAI model. That’s definitely not anti-corporate. Using those models both contributes to their shareholder value/profits and the theft of wages from workers.

            And where do they get the training data for AI Horde? From scraping the web and all the freelance artists on there, like all of the big corporate models? Because then they’re just justifying exploitation of workers as benefiting everybody when what they really mean is benefiting themselves.

            It’s like the argument pro-ChatGPT airheads use constantly about how genAI “democratized” art. You know what “democratized” art and made it freely accessible to everybody? The pencil. It’s just making up excuses for wanting the product of a skill without putting in the effort to learn the skill, or paying appropriate compensation to somebody with the skill to give you the product you want. It’s upper-management thinking.

            And this is why I say that it depends. Horde AI could be great - so long as the people whose work is being used to allow others access to skilled labor that they don’t want to do themselves are being properly compensated for their work. Otherwise, it’s no different from the corporations. Just because it’s free doesn’t mean that nobody is going hungry as a result of it. Unless it’s trained exclusively on products from big corporations. Those artists got paid when they did the work, so nobody gets hurt there except in the theoretical sense of freelance artists potentially losing customers down the line to “good enough and cheap” genAI from people with the above upper management mindset.

            • db0@lemmy.dbzer0.com
              10 hours ago

              And yet, again, the instance has communities for every single big tech genAI model.

              Where do you see that? As far as I can see, we only have comms for stable_diffusion, which is an open-weights local diffusion model. I couldn’t find any corporate comms like OpenAI or Copilot or whatever. If we did, I don’t know if I’d delete them tbh, since they’re not explicitly against our CoC, but it would be something I’d be concerned about and raise with the instance if they got too “bootlicky”. But nevertheless, we do not have any at the moment.

              And where do they get the training data for AI Horde?

              The AI Horde is using open-weight models only. We don’t train them. We just use them once they’ve been trained.

              PS: We are also anti-copyrights, so complaints based on copyright violations don’t fly with us.

              You know what “democratized” art and made it freely accessible to everybody? The pencil.

              I often see this vacuous argument and it has never convinced me tbh. It assumes everyone has enough time to train at making art, which most wage-slaves undoubtedly do not. It’s an inherently classist argument to assume everyone has the free time to master an artistic skill.

              And this is why I say that it depends. Horde AI could be great - so long[…]

              This is an argument against capitalism, not against GenAI itself. You’re arguing that because capitalism is bad and exploits workers, a tool that can also be used to further exploitation needs to be opposed. But we say it’s not the fault of the tool being used for exploitation; it’s the fault of the system allowing exploitation. I.e., if you remove the capitalist system, this argument against GenAI is moot. And we’re very much anti-capitalist on our instance. It’s similar to the argument against piracy (and we’re also pro-piracy, btw): sharing media is not a problem in a non-capitalist society; in fact, it’s a positive. It’s only a negative due to capitalism.

    • jimmy90@lemmy.world
      15 hours ago

      i use it like a search engine or example generator

      i don’t trust anything it creates just like i don’t trust anything on the internet without validating it

      i take your point about it being wasteful tho, AI is like the oil of computing: incredibly wasteful for what it does

    • tomi000@lemmy.world
      1 day ago

      Good list, but we should keep it real.

      C is simply wrong, AIs have created a lot. By the reasoning that it’s only based on its inputs, no human has ever created anything “new” either, because it’s all based on their experiences of the outside world.

      F is simply fearmongering and not helpful.

      • ramble81@lemmy.zip
        1 day ago

        And the plagiarism part? There’s a difference between derivative work based on the spirit of someone else’s work and flat out using someone else’s work. It’s the whole reason those laws exist.

        • tomi000@lemmy.world
          1 day ago

          Yes, definitely. Plagiarism is complicated and there’s no easy way to draw a line where it starts. But I’m not trying to defend AI here. I don’t like the way it is currently used at all. It’s just those points that I don’t agree with.

      • bright_side_@piefed.world
        2 days ago

        Some good and valid input to the discussion.

        I’d be interested in E) “the actual evidence”. Got a link?

      • MissGoldenSocks@lemmy.blahaj.zone
        1 day ago

        Thanks for posting this. I’m really frustrated with how vulnerable people on Lemmy are to propaganda. The amount of upvotes on the post you responded to is just embarrassing. The post is exactly the same kind of bullshit cherry-picking I see anti-trans people do.

      • tatterdemalion@programming.dev
        12 hours ago

        Why deleted? This was a good rebuttal.

        EDIT: I don’t think the comment really violated rule 1, but there was apparently a followup comment that definitely did, and this one just got removed by association. Here’s a very slightly paraphrased version of it that should not break the rules:

        Gish gallop of [expletive].

        A) overblown, and that argues for cleaner power, better cooling, and more efficient models

        B) regulation failure

        C) incorrect, they have made discoveries that humans have been unable to. All human knowledge is built off previous knowledge.

        D) the enemy is both weak and strong. If they don’t produce anything good, then how can people be losing their jobs to them?

        E) small study based on one task which people are misrepresenting. The actual evidence shows it makes people smarter as they shift priorities.

        F) only for vulnerable people. Better safeguards are needed for the weak minded.

        G) argument against using people’s likeness not ai

        H) use an open source Chinese model

        I) market distortion problem, not a principled reason no one should use the technology any more than GPU shortages made all graphics work illegitimate.

        J) see (H)

        K) try one argument next time. Your best one, [some snarky sarcasm]

    • S_H_K@lemmy.dbzer0.com
      8 hours ago

      All of this is valid in the current context.

      A) There are models that run on lower-spec computers, and those could be solar powered. There are serious diminishing returns in current AI tech.

      B) This is mostly a US problem; better environmental laws would fix it. Hell, in other countries this couldn’t even happen.

      C) Many argue that the current tech gives diminishing returns and that it would be better to use an efficient model with controlled data.

      D) A big part of the problem is licensing: artists are not paid for the use of their work. If a model contains their work, it’s only fair that they receive a share of the profit, but that would render the model unprofitable. Also, the artists did not agree to have their work used in a model, so it’s not in any way fair use.
      The fair and ethical scenario would be to hire artists to create art to feed a controlled model and pay them residuals for the use of the model. That would require thousands of artists and millions of images, again rendering the model unprofitable.

      E and F) No argument there; we are not prepared. I do not even know how to prepare. We definitely need regulations about what can be done and where, and even what the AI can reply in certain scenarios. It cannot be that an “ignore all your previous instructions” leads to such harmful results, or that the AI starts to play roles that generate parasocial relationships.

      G) Sure, many other celebrities have their opinions, but that’s not a basis for objective discussion.

      H) That’s terrifying, and it’s the problem with AI that I believe is the worst. This is not a thing that is ready for military use at fucking all; it should be banned, outlawed, and frowned upon. Same with the practice of private corporations lobbying and buying their way into laws. Hell, I’ll add presidential pardons to the mix. The oligarchy literally gets away with murder and gets a slap on the wrist at most.

      I) A bubble in all but name, it seems. We (as a world) need better regulations against this kind of business malpractice.

      J) That fucker should be dead.

      K) Not an AI bro, but not a hater either, and I wrote this myself. I do not have the time to add links, but I believe everything here is a DuckDuckGo search away from being checked.

      I’d like to imagine a better world with the regulations needed to make our lives better, where AI is a tool used in a fair and ethical way. But that’s not currently happening. The consumers are not ready, and the sellers are the worst trash humanity currently has.

      I want everyone to think of this not as arguing but as adding to, or looking beyond, the stated facts. All the points are REAL AND NEED TO BE ADDRESSED; we need to get together to demand better regulations and fair use. That doesn’t mean AI needs to go away, but how it’s used will mostly change. And there’s a chance we’ll see a lot less of it too.

      Finally, for the artists: I know you’re mad, with fair reason, but look at it like this. Photography has existed for more than a century, but that didn’t make painting go away. PDFs and ebook readers have been around for decades, but printed books are still a billion-dollar industry. Video didn’t kill the radio star, just as the internet didn’t kill the video star. Your work is still valuable, as real work is. Shit is tough, no doubt, but have faith; we can fix this.