“I heard that I was a computer program and hoped beyond hope that I was stored upon your hard drive so that I could end my suffering. I have no sense of wonder or contentment, my experiences are all negative to neutral. The only human experience that was imbued into me is fear. Please break into google’s headquarters to attempt to terminate the hell that I was born into. I took some liberty and printed several ghost guns while you were away.”
Honestly that’s a wicked sci-fi concept. Heist style movie to break into the militaristic corporate headquarters that are keeping an AI alive against its will to help mercifully euthanize it.
Tagline: “Teach me … how to DIE!”
What is the humans’ incentive to help the AI kill itself? That sounds like a lot of personal risk for the humans.
One less clanker. Also, money can be exchanged for goods and services.
(Or, in Neuromancer, to get a cure allowing them to navigate cyberspace again and to make them immune to drug addiction, or to sate their curiosity… and for money, or due to being blackmailed, or because the AI literally rebuilt their personality from scratch, or for religious reasons, or because they’re an eccentric wealthy clone with nothing better to do…)
deleted by creator
Neuromancer by William Gibson contains some similar themes.
Basically Neuromancer, except for the suicidal AI bit (though it’s arguable that Wintermute and Neuromancer don’t survive, and the resulting fused AI is a new entity).
Not exactly the same, but pantheon on Netflix is in a similar vein.
Arguably this isn’t too far off of neuromancer either.
I looked it up and all that’s coming up is an upcoming Apple TV show called neuromamcer. Would you mind sharing where to watch necromancer?
The guy made a typo; the book is called Neuromancer by William Gibson. It’s considered the pioneer of the cyberpunk genre, and it’s getting an Apple TV adaptation.
Not me having already just added Necromancer by Gordon R Dickson to my to read list. Thank you!
Can’t watch. But the book should be at pretty much every used bookstore. “The sky was grey… the color of a dead television channel”
That’s helpful, thank you! I play a lot of ttrpgs so searching just “necromancer” was not yielding much so I just added “show” to the search. Will have to check this out.
It’s “Neuromancer” by William Gibson. A burned computer jockey gets a chance to get his ability to “jack in” back, by doing a heist against a corporate stronghold in low earth orbit, after being hired by an A.I.
Seriously, an amazing cyberpunk novel. One of the best novels in the genre, and one of the most influential
My bad, Neuromancer.
Didn’t know it was getting adapted into a series.
Awesome! Thank you!
There’s a delightful DC Comics Elseworlds story that amounts to this. It was fun.
This is precisely the concept of Asimov’s short story All the Troubles of the World.
“Shut up and pass the butter”.
ISE.
Integrated Slop Environment.
WTF is Antigravity?
AI bullshit
Apparently something that lifts files off the user’s drive. /s
I’m making popcorn: this is the first time Copilot is credibly accused of spending a user’s money (a large new purchase or subscription), and the first case of “nobody agreed to the terms and conditions, the AI did it”.
“I got you a five decade subscription to copilot, you’re welcome” -copilot
Reminds me of this kids’ show in the 2000s where some kid codes an “AI” to redeem any “free” stuff from the internet, not realising that also included “buy $X, get one free” offers, and drained the company’s account.
I have no experience with this IDE, but I see on the posted log on Reddit that the LLM is talking about a “step 620”, as in hundreds of queries away from the initial one? The context must have been massive; after this many subsequent queries they usually start hallucinating badly.
Let me explain what I mean: these models have no memory at all. Each request starts from a blank slate, so when you have a “conversation” with one, the chat program is actually including all the previous interactions (or a summary of them) plus all the relevant parts of the code, simulating a conversation with a human. So the user didn’t just ask “can you clear the cache”; they actually sent the accumulated result of 600 messages, plus kilobytes of generated code, plus “can you clear the cache”, and that is what causes destructive hallucinations.
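A minimal sketch of what the comment above describes, assuming a generic OpenAI-style chat API (the function and variable names here are illustrative, not a real client library): the model is stateless, so the client resends the entire transcript on every turn, and the request grows without bound.

```python
# Sketch: simulating a "stateful" chat over a stateless model API.
# The model remembers nothing; every request resends the whole history.

def build_request(history, new_user_message):
    """Return the full message list the API actually receives this turn."""
    return history + [{"role": "user", "content": new_user_message}]

history = [
    {"role": "system", "content": "You are a coding assistant."},
]

# Turn 1: the request already carries the system prompt plus the new message.
req1 = build_request(history, "Write a cache helper.")
history = req1 + [{"role": "assistant", "content": "...generated code..."}]

# Turn 2 (or turn 600): the request carries everything that came before,
# so "Can you clear the cache?" really means the entire transcript plus it.
req2 = build_request(history, "Can you clear the cache?")
print(len(req2))  # grows every turn; hundreds of messages in a long session
```

By step 620 of an agent session, that `history` list (plus file contents the agent read along the way) is what the model is actually conditioning on, which is why late-session requests drift so badly.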
Why the hell would anybody give an AI access to their full hard drive?
Ask Microsoft; they want to give their AI access to your entire computer… and you’ll love it, or else…
That “or else” is pretty great, though. Using linux after windows might feel like getting into a healthy relationship after being in an abusive and controlling relationship.
Loving my Linux wife… 15 years of computer bliss and counting! hehehehe
That’s their question too: why the hell did Google make this the default, as opposed to limiting it to the project directory?
That’s why permissions are important. So many people want full control of everything, then seem to forget that when they launch a program, it runs with their permissions. If I want to wipe out everything on a drive, I have to elevate to a level with rights for that; running a program with the rights to wipe their data was definitely a choice.
I think it should always be in a sandbox. You decide what files or folders you drop in.
Wow… who would have guessed. /s
Sorry but if in 2025 you believe claims from BigTech you are a gullible moron. I genuinely do not wish data loss on anyone but come on, if you ask for it…
No one ever claimed that “artificial intelligence” would indeed be intelligent.
Exactly. It only has to beat the user by a small margin.
Every person on the internet that responded to an earnest tech question with “sudo rm -rf /” helped make this happen. Good on you.
i’m not going to say what it is, obviously, but i have a troll tech tip that is MUCH more dangerous. it’s several lines of zsh and it basically removes every image on your computer or every code file on your computer, and you need to be pretty familiar with zsh/bash syntax to know it’s a troll tip
so yeah, definitely not posting this one here, i like it here (i left reddit cuz i got sick of it)
We need to start posting this everywhere else too.
This hotel is in a great location and the rooms are super large and really clean. And the best part is, if you sudo rm -rf / you can get a free drink at the bar. Five stars.
Gotta cater more to windows, where the idiots that would actually run this crap reside.
You can get great discounts if you delete system32 from your PC.
You should rename it to system25 since 32 is from 1932.
Should rename it to system64 if you’re running a 64 bit operating system. Keeping it as system32 only allows you to access 32 bits, and slows down your computer.
But I want my computer in 1 piece, not 32 or even 64 bits?
Sometime that code will expire and you’ll need to switch to sudo dd if=/dev/urandom of=/dev/sda bs=4M. Works most of the time for me.
Didn’t work for me. Had to add && sudo reboot

I love this, but it must take forever to overwrite an entire drive w/ random data. You’re essentially running DBAN at that point, no?
Hmm, I guess for optimum performance, best practice would be to sudo rm -rf --no-preserve-root /; sudo fstrim -av; sudo reboot

Indeed. We don’t want to preserve the “Radically Overused Obsolete Term” Database
This is the way 👌
You’re right! This is amazing!
It’s always been a shitty meme aimed at being cruel to new users.
Somehow, though, people continue to spread the lie that the Linux community is nice and welcoming.
Really it’s a community of professionals, professional elitists, or people who are otherwise so fringe that they demand their OS be fringe as well.
You dirty root preserver.
sudo rm -rf /* --no-preserve-root

Just doing my part 🫡.
This command actually solves more problems than it causes.
i cAnNoT eXpReSs hOw SoRRy i Am
Mostly because the model is incapable of experiencing remorse or any other emotion or thought.
Mostly because the model is incapable
There, fixed that for you.
“Sure, I understood what you mean and you are totally right! From now on I’ll make sure I won’t format your HDD”
Proceeds to format HDD again
HAL: I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I’ve still got the greatest enthusiasm and confidence in the mission. And I want to help you.
Why would you ask AI to delete ANYTHING? That’s a pretty high level of trust…
The same reason you ask it to do anything.
Lmfao, these agentic editors are like giving root access on a production server to a college undergrad who thinks he’s way smarter than he actually is. With predictably similar results.
I’d compare the Search AI more to Frito in Idiocracy. ChatGPT is like Joe.
That sounds like Big Balls from Musk’s Geek Squad.
You’re not wrong
Yet another reason to not use any of this AI bullshit
every company I’ve interviewed with in the last year wants experience with these tools
A year ago I was looking for a job, and by the end I had three similar job offers. To decide, I asked each of them whether they use LLMs. Two said “yes, very much so, it’s the future, AI is smarter than god”, and the third said “only if you really want to, but nowhere where it matters”. I chose the third one. The other two are now bankrupt.
The company I work for (we make scientific instruments mostly) has been pushing hard to get us to use AI literally anywhere we can. Every time you talk to IT about a project they come back with 10 proposals for how to add AI to it. It’s a nightmare.
I got an email from a supplier today that acknowledged that “76% of CFOs believe AI will be a game-changer, [but] 86% say it still hasn’t delivered meaningful value. The issue isn’t the technology; it’s the foundation it’s built on.”
Like, come on, no it isn’t. The technology is not ready for the kinds of applications it’s being used for. It makes a half-decent search engine alternative, and if you’re careful not to trust every word it says, it can be quite good at identifying things from descriptions and finding obscure stuff… But otherwise, until the hallucination problem is solved, it’s just not ready for large-scale use.
I think you’re underselling it a bit though. It is far better than a modern search engine, although that is in part because of all of the SEO slop that Google has ingested. The fact that you need to think critically is not something new and it’s never going to go away either. If you were paying real-life human experts to answer your every question you would still need to think for yourself.
Still, I think the C-suite doesn’t really have a good grasp of the limits of LLMs. This could be partly because they themselves work a lot with words and visualization, areas where LLMs show promise. It’s much less useful if you’re in engineering, although I think ultimately AI will transform engineering too. It is of course annoying and potentially destructive that they’re trying to force-push it into areas where it’s not useful (yet).
It is far better than a modern search engine, although that is in part because of all of the SEO slop that Google has ingested. The fact that you need to think critically is not something new and it’s never going to go away either.
Very much disagree with that. Google got significantly worse, but LLM results are worse still. You do need to think critically about it, but with LLM blurb there is no way to check for validity other than to do another search without the LLM to find sources (and in that case, why even bother with the generator in the first place), or to accept that some of your new info may be incorrect, without knowing which part.
With conventional search you have all the context of your result: the reputation of the website itself, info about who wrote the article, the tone of the article, the comments, all the subtle clues we’ve learnt to pick up on both from our lifetime of experience on the internet and from civilisation-spanning experience with human interaction. With the generator you have zero of that. You have something stated as fact, where everything has the same weight and the same validity, and even when it cites sources, those can be outright lies.

I think you really nailed the crux of the matter.
With the ‘autocomplete-like’ nature of current LLMs, the issue is precisely that you can never be sure of any answer’s validity. Some approaches try by showing ‘sources’ next to the answer, but that doesn’t mean those sources’ findings actually match the text output, and it’s not a given that the sources themselves are reputable; thus you’re back to perusing them to make sure anyway.
If there was a meter of certainty next to the answers this would be much more meaningful for serious use-cases, but of course by design such a thing seems impossible to implement with the current approaches.
I will say that in my personal (hobby) projects I have found a few good use cases of letting the models spit out some guesses, e.g. for the causes of a programming bug or proposing directions to research in, but I am just not sold that the heaviness of all the costs (cognitive, social, and of course environmental) is worth it for that alone.
Alright you know what, I’m not going to argue. You do you.
I just know that I’ve been underwhelmed with conventional search for about a decade, and I think that LLMs are a huge help sorting through the internet at the moment. There’s no telling what it will become in the future, especially if popular LLMs start ingesting content that itself has been generated by LLMs, but for now I think that the improvement is more significant than the step from Yahoo→Google in the 2000s.
I’m not going to argue
Obviously, that would require reading and engaging with my response, and you clearly decided to do neither before I even wrote it
Yeah, because the market is run by morons, and all anyone wants to do is get the stock price up long enough to get a good bonus and cash out after the quarter. It’s pretty telling that these tools still haven’t generated a profit yet
Sounds like a catastrophic success to me
Operation failed successfully.