I want to let people know why I strictly avoid using AI in anything I do, without sounding like an ‘AI vegan’, especially in front of those who are genuinely ready to listen and follow suit.
Any sources I find to cite for my viewpoint are either so mild they could be considered AI-generated themselves, or filled with the author’s extremist views. I want to explain the situation in an objective manner that is simple to understand, yet alarming enough for them to take action.
Most people are against AI because of what corporations are doing with it. What do you expect corporations and governments to do with any new scientific or technological advance? Use it for the benefit of humanity? Are you going to stop using computers because corporations use them for their own benefit, harming the environment with their huge data centers? By rejecting this new technological advance, you are refusing to take advantage of free and open-source AI tools, which you can run locally on your computer, for whatever you consider a good cause. Fortunately, many people who care about other human beings are more intelligent than that and are starting to use AI for what it really is: A TOOL.
“According to HRF’s announcement, the initiative aims to help global audiences better understand the dual nature of artificial intelligence: while it can be used by dictatorships to suppress dissent and monitor populations, it can also be a powerful instrument of liberation when placed in the hands of those fighting for freedom.”
If it’s real life, just talk to them.
If it’s online, especially here on lemmy, there’s a lot of AI brain rotted people who are just going to copy/paste your comments into a chatbot and you’re wasting time.
They also tend to follow you around.
They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.
That’s the issue. I do wish to warn them, or even just inform them of what using AI recklessly could lead to.
Why care?
You’re wanting to go out and argue with people and try to use logic when that part of their brain has literally atrophied.
It’s not going to accomplish anything, and likely just drive them deeper into AI.
Plenty of people that need help actually want it, put your energy towards that if you want to help people.
Why care?
To give some fucks, probably.
The post is aimed at situations I face where I mention among people I know that I don’t use AI, and they ask why not. Instead of driving them off with “Just because”, or getting into jargon that is completely unfamiliar to them, I wish to properly inform them why I have made this decision and why they should too.
I am also able to identify people to whom there’s no point discussing this. I’m not asking to convince them too.
I wish to properly inform them why I have made this decision and why they should too.
You’re asking how to verbalize why you don’t like AI, but you won’t say why you don’t like AI…
Let’s see if this helps, imagine someone asks you:
I don’t like pizza, how do I tell people the reasons why I don’t like pizza?
How the absolute fuck would you know how to explain it when you don’t know why they don’t like pizza?
You do have a point. I think I may be overthinking this after all. I’ll just try to talk with them about this upfront.
I paste people’s AI questions into a chatbot for the humor of it.
They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.
More likely they feel insulted by people saying how “brain-rotted” they are.
What would the inoffensive way of phrasing it be?
Genuinely every single pro-AI person I’ve spoken with both irl and online has been clearly struggling cognitively. It’s like 10x worse than the effects of basic social media addiction. People also appear to actively change for the worse if they get conned into adopting it. Brain rot is apparently a symptom of AI use as literally as tooth rot is a symptom of smoking.
Speaking of smoking and vaping: on top of being objectively bad for you, it’s lame and gross. Now that that narrative is firmly established, we have actually started seeing youth nicotine use decline rapidly again, just like it was before vaping became a thing.
What would the inoffensive way of phrasing it be?
…and then you proceed to spend the next two paragraphs continuing to rant about how mentally deficient you think AI users are.
Not that, for starters.
The lung capacity of smokers is deficient, yes? Is the mere fact offensive? Should we just not talk about how someone struggling to breathe as they walk up stairs is the direct result of their smoking?
This is literally begging the question.
I don’t think it is, nor do I think name dropping random fallacies without engaging with the topic makes for particularly good conversation. If you have issues with OP’s phrasing it would benefit all of us moving forward if we found a better way to talk about it, yes?
It’s not a random fallacy, it’s the one you’re engaging in. Look it up. Your analogy presupposes an answer to the question that is actually at hand. It’s the classic “have you stopped beating your wife” situation.
Yup, that’s dbzer0
Edit: shit, they followed me here
All current AIs are based on stolen content.
– and are being used by rich sociopaths to replace the very people that made that content.
That’s on top of the large pile of shit.
Cool. So you’re in support of developing a model that financially compensates all of the rights holders used for its training data then?
Sort this one with the girlfriend’s “would you still love me if I was a worm” philosophy. It’s so far outside of reality it’s not worth considering.
Yes, I am. But I don’t expect them to do that.
Good!
I don’t either. But they probably should. And that’s a reasonable position to take.
You mean commercial LLMs.
AI as a term includes machine learning systems that go back decades.
If nothing is taken from anyone and no profit is made from a model trained on publicly accessible data - can you elaborate on how that constitutes theft?
Actually - if 100% copyrighted content is used to train a model, which is released for free and never monetized - is that theft?
Publicly accessible does not mean it is free of copyright. Yes, copyright law in its current form sucks and is in dire need of reform, preferably back to something close to the original duration (14+14 years). But as the law currently stands, those LLM parrots are built on illegally acquired data.
People downloading stuff for personal use vs making money off of it are not the same at all. We don’t tend to condone people selling bootleg DVDs, either.
Publicly accessible does not mean publicly reusable. You can find a lot of classic songs on YouTube and in libraries. You can’t edit them into your Hollywood movie without paying royalties.
Showing them to an AI for them to repeat the melody with 90% similarity is not a free cheat to get around that.
This is in part why the GPL and other licenses exist. Linus didn’t just put up Linux and say “Do whatever!” He explicitly said “You MAY copy and modify this work, but it must keep this license and this ownership, and any transformed work you distribute must stay under the same terms.” That is a critical part of many free licenses, to ensure people don’t abuse them.
If nothing is taken from anyone and no profit is made from a model trained on publicly accessible data - can you elaborate on how that constitutes theft?
Actually - if 100% copyrighted content is used to train a model, which is released for free and never monetized - is that theft?
The most reasonable explanation I’ve heard/read is that generative AI is based on stealing content from human creators. Just don’t use the word “slop” and you’ll be good.
Except that is also a subjective and emotionally-charged argument.
What do you normally say that you’re worried sounds like an “AI vegan”?
You’d rather make your own painting than fill in a coloring book?
I want my creations to be precisely what I intend to create. Generative AI makes it easier to make something, at the expense of building skills and seeing the results of your own effort.
This reminds me of those posts from anti-vaxers who complain about not being able to find good studies or sources that support their opinion.
I normally ask them if they have a moment to talk about the rebirth and perseverance of Nurgle. For they already embrace his blessings on the land.
“it looks like shit from a butt and sounds like shit from a butt, and if I wanted to look at a shit from a butt, I would do that for free”
My go-to is basically that since I have to strictly verify all the information/data AI gives me, it’s faster for me to just produce this information myself. That is literally what they pay me for.
Very simple.
It’s imprecise, and for your work, you’d like to be sure the work product you’re producing is top quality.
“It’s a machine made to bullshit. It sounds confident and it’s right enough of the time that it tricks people into not questioning when it is completely wrong and has just wholly made something up to appease the querent.”
I know people like this lol
Depending on how hardcore you are about it, you can’t.
Are you getting up in people’s face to tell them not to use it, or are you answering why you choose not to use it?
Are you extremely strict in your adherence? Or are you more forgiving based on the application or user?

There are two general points I like to make:
- Big companies are using it to steal the work of the powerless, en masse. It is making copyright strictly the tool of the powerful to use against the powerless.
- If these companies aren’t lying and will actually deliver what they say they’re going to deliver in the timeline they stated, then it’s going to cause mass unemployment, because even if (IF) this creates new jobs for every job it destroys, the market can’t move fast enough to invent these new careers in the timeline described. So either they’re lying or they’re going to cause great suffering, and a massive increase in wealth inequality.
Energy usage honestly never seems to be a concern for people, so I don’t even try to make that argument.
While I understand new data centers for AI are increasing power usage, this is just highlighting existing problems: decades of insufficient investment in infrastructure.
You can’t get enough power to run a new data center? Where were you when I complained we needed additional transmission lines to keep bringing more renewable energy online? Where were you when I wanted the huge infrastructure project to import huge amounts of Canadian hydro? I bet you wish you had that now.
Where were you when I complained we needed additional transmission lines to keep bringing more renewable energy online?
I’ve strongly argued for this in the past.
All these tech bros with AI datacenters are putting their spare couch change together to build HVDC lines across the continent, right?
Here’s a piece I wrote to explain my apprehensive stance on AI to friends and colleagues: https://blog.erlend.sh/non-consensual-technology
Check out wheresyoured.at for some “haters guides.”
My general take is that virtually none of the common “useful” forms of AI are even remotely sustainable strictly from a financial standpoint, so there’s no use getting too excited about them.
The financial argument is pretty difficult to make.
You’re right in one sense, there is a bubble here and some investors/companies are going to lose a lot of money when they get beaten by competitors.
However, you’re also wrong in the sense that the marginal cost to run them is actually quite low, even with the hardware and electricity costs. The benefit doesn’t have to be that high to generate a positive ROI with such low marginal costs.
People are clearly using these tools more and more, even for commercial purposes where you’re paying per token rather than some subsidized subscription; just check out the graphs on OpenRouter: https://openrouter.ai/rankings
None of the hyperscalers have produced enough revenue to even cover operating costs. Many have reported deceptive “annualized” figures or just stopped reporting at all.
Couple that with the hardware having a limited lifespan of around 5 years, and you’ve got an entire industry being subsidized by hype.
Covering operating costs doesn’t make sense as the threshold for this discussion though.
Operating costs would include things like computing costs for training new models and staffing costs for researchers, both of which would completely disappear in a marginal cost calculation for an existing model.
If we use DeepSeek R1 as an example of a large high-end model, you can run an 8-bit quantized version of the 600B+ parameter model on Vast.ai for about $18 per hour, or even on AWS for around $50/hour. Those produce tokens fast enough that you can have quite a few users on it at the same time, or even automated processes running concurrently with users. Most medium-sized businesses could likely generate more than $50 in benefit per running hour, especially since you can just shut it down at night and not pay for that time.
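For concreteness, the marginal-cost arithmetic above can be sketched in a few lines of Python. The hourly rates mirror the rough figures quoted in this comment ($18/hr on Vast.ai, $50/hr on AWS); the 10 hours/day and 22 workdays/month are illustrative assumptions, not numbers from the thread.

```python
# Back-of-envelope marginal-cost check for self-hosted inference.
# Rates are the rough figures from the comment above; usage pattern
# (10 h/day, 22 workdays/month) is an illustrative assumption.

def breakeven_benefit_per_hour(gpu_cost_per_hour: float,
                               hours_per_day: float = 10.0,
                               workdays_per_month: int = 22) -> dict:
    """Monthly marginal cost and the per-hour benefit needed to break even."""
    monthly_cost = gpu_cost_per_hour * hours_per_day * workdays_per_month
    return {
        "monthly_cost": monthly_cost,
        # Benefit per running hour must exceed the hourly rate to net positive.
        "breakeven_per_hour": gpu_cost_per_hour,
    }

for label, rate in [("Vast.ai", 18.0), ("AWS", 50.0)]:
    r = breakeven_benefit_per_hour(rate)
    print(f"{label}: ${r['monthly_cost']:,.0f}/month at 10 h/day; "
          f"needs > ${r['breakeven_per_hour']:.0f}/hour of benefit to break even")
```

The key point the calculation illustrates: because the machine is billed only while running, the break-even threshold is just the hourly rate itself, independent of how many hours you choose to run it.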
You can just look at it from a much smaller perspective too. A small business could buy access to consumer GPU based systems and use them profitably with 30B or 120B parameter open source models for dollars per hour. I know this is possible, because I’m actively doing it.