Well, technically, the AI companies aren’t making any profits, so the actual cost is higher — and the revenues from the AI articles are declining because people aren’t interacting with them.
It’s almost like it’s the predictable outcome of prediction algorithms being used to generate content.
The open internet will become divided into verified websites, and the rest will be left for bots to fight on forever.
It will be used as an excuse by our governments to force an ID verification system tied to your real-life identity. Refuse it and fight it in every possible way.
Who verifies the verification though?
I mean, verification doesn’t really help, because in the end it’s still mostly humans posting the AI slop (I think).
That said, we’ll probably need some sort of reputation system. Something like a revamped GPG or Web of Trust, where you a) can tag users/websites you find trustworthy and b) can see what other people you trust think about someone/something.
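The tag-and-propagate idea in (a) and (b) can be sketched in a few lines. This is a toy one-hop version, not GPG’s actual trust model; all names and weights here are made up for illustration:

```python
# Toy web-of-trust: I rate some peers directly, and a peer's opinion of
# a stranger counts in proportion to how much I trust that peer.

direct_trust = {               # (a) my own tags: 1.0 = fully trusted
    "alice": 0.9,
    "bob": 0.6,
    "spamblog.example": 0.0,
}

peer_ratings = {               # (b) what my trusted peers say about others
    "alice": {"carol": 0.8, "spamblog.example": 0.1},
    "bob": {"carol": 0.4},
}

def score(target):
    """My direct rating if I have one, else a trust-weighted average of
    my peers' ratings (one hop only, to keep the sketch small)."""
    if target in direct_trust:
        return direct_trust[target]
    weighted, total = 0.0, 0.0
    for peer, ratings in peer_ratings.items():
        if target in ratings:
            w = direct_trust.get(peer, 0.0)
            weighted += w * ratings[target]
            total += w
    return weighted / total if total else None

print(score("carol"))  # blends alice's and bob's opinions, weighted by my trust in them
```

A real system would need transitive trust with decay over multiple hops, plus some defense against Sybil accounts, but the core bookkeeping is just this.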
Meanwhile we need enclaves free of corporate bullshit (like the fediverse, but not only that), including bringing back webrings and old-school chats. And Usenet. And IRL word of mouth.
Because fuck all that shit.
The problem is that the resource they consume to feed the AI (human-generated content) has become a limited resource, completely mined out.
They could pay people to write — i.e., news agencies pay writers to write, and AI sites are one of their clients.
You should get DMs from Anthropic offering $50 for your week’s posts and comments…
Instead they want to pretend they still have room to grow for free. But they can’t.
(That is just basic economic theory, I want those companies to fuck off already)
If AI replaces humans then why would they need to continue training them? You don’t believe in the AGI hype do you? You don’t think our system is about anything other than efficiently racing to the bottom, do you?
Besides, humans will always figure out a better way, a new way, a fresh take. And there’s your training data.
I don’t get why it says “training AI on AI content makes it dumb” when people literally use synthetic data to carefully tune models. Today. I’m sure they’ll improve training by the time the internet is fully replaced.
This all assumes that it won’t take a human-like AI to replace humans. Whatever replaces us will be pitiful, but it will be supported by our institutions, trillionaires, and enough personal data to create fun headlines like “Elon knows so much about you that if each datum was a grain of sand it would be bigger than Saturn!”
Explains why my personal blog, wiki, and git repo keep getting hammered by hordes of AI company scrapers. If AI were intelligent, they’d download a single snapshot every month or so and share it. But no, eight different scrapers using thousands of different IP addresses (to evade my `fail2ban` measures) each have to follow every single blame and diff link, when a simple `git clone` operation would get them the hundreds of megabytes of content in one go.

They are getting better, though. More hits are to RecentChanges on my wiki, so there seem to be some optimizations going on. But I refuse to increase my operating costs beyond a few USD/month to serve AI bots when I know barely any humans visit.
Could you guys stop dumping your trash in the forest please? It obstructs my garbage trucks which I send to the forest to dump garbage in.
If training on AI-generated text results in “model collapse” when AI models do it, I wonder what happens when humans do it.
Holy shit maybe that’s what’s been happening since the invention of mass media.
A form of group-think. A tendency towards absolute extremes. Like falling into a black hole.
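The “falling into a black hole” intuition shows up even in a toy statistical version of model collapse, where each “model” is just a Gaussian fitted to the previous model’s samples. This is a sketch of the shrinking-diversity effect, not a claim about how real LLM training works:

```python
import random
import statistics

def collapse_run(generations=50, n=20, seed=0):
    """One chain: each generation's 'model' is a Gaussian fitted to the
    previous model's samples. Returns the spread after the last step."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the "human" distribution
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(samples)     # refit on model output only
        sigma = statistics.stdev(samples)
    return sigma

# Average many independent chains: the spread shrinks as each
# generation forgets the tails of the one before it.
avg = statistics.fmean(collapse_run(seed=i) for i in range(200))
print(f"average spread after 50 generations: {avg:.2f} (started at 1.0)")
```

Each refit slightly underestimates the spread on average, and there is no fresh data to pull it back out, so the distribution drifts toward a narrow spike — the statistical version of group-think.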
I mean, we are genuinely fucked, aren’t we? There is nothing we can do to stop this, because even if we try, not only is the damage already done — we have dug the pit so deep that if we remove one piece of the Jenga tower, it will all collapse and bury us underneath.
I’ve been thinking of joining a nomadic group of yak herders in the Himalayas, but I honestly don’t think I have any skills they need, nor do I think they want my company.
Societal collapse.
Ah, so the technological singularity is here, but it’s in the opposite direction from what was expected, as the technology takes over its own enshittification in an ever-accelerating manner.
I gotta wonder where do we go from here?
Butlerian Jihad.
Thank you!! People are always confused when I tell them this AI-induced brain rot is a major plot point of the Dune series. :)
At least the pointless and stupid failure is now, instead of after they invent grey goo.
The good news is that AI will eventually eat itself. The bad news is that all we need to do to make that happen is not exist anymore.
I feel like all of this research is extremely dubious. There’s basically no way to know how much of the internet is AI-generated or not. It’s indistinguishable in most cases, especially at scale.
It is a fully rational conclusion, a logical inevitability, and what we are all seeing. But I guess if you can’t prove it by some ever-changing subjective standard, it isn’t happening.
I’m not saying that it isn’t happening. The opposite: I personally believe it is happening on a large scale. But I feel that it’s extremely hard to measure, and I’m not convinced any of these numbers are correct.
On top of this, the scrapers that feed the AIs are creating more and more traffic, and therefore load, on sites that never saw it before.

This is a superbly placed gif.
In part, this is what Microsoft Recall is about: scraping end users’ data at will to sort and feed to its LLMs, without the user ever seeing what is being scraped or having any real, lasting ability to shut Recall off and keep it shut down.
While I am aware that MS insists none of that is true, it is a fact that 1) the snapshotted and OCR’d Recall data is now stored in an encrypted database that takes above-average user skill to get into; 2) even users who turned Recall off saw it turned back on after the next Windows Update; and 3) even after MS said they were backing off Recall, they continued partnering with hardware makers to ship computers bundled with Windows 11 plus the extra GPU necessary for processing all those Recall snapshots without making sluggish Windows bloat even more sluggishly bloated than it already was.
So why spend all that money and effort, even while claiming to back away from it, just to help a hypothetically forgetful user here and there? Data harvesting was always part of the payoff. It’s why they were and are very willing to piss off a huge part of their own consumer base around the world by ending Windows 10 unnecessarily, and why even now they keep ramming Recall shit down the pipe when literally NO ONE wants it.
They get your data. At will. And as much of it as they like, without you ever having the opportunity to oversee what they’re getting, much less curate it. And after feeding it to their LLMs, they get to aggregate and broker it to their “partners” as well. Never forget what MS did in Palestine and the partners they can and will gladly work with, all based on massive collections of quietly gathered user data that either should not legally exist, or is not known outside of MS and its partners to exist at all.
You know Microsoft isn’t about the user experience the moment they removed free games from their distro
Calling Windows a distro, while technically true, feels offensive
How could they have ever known about the possibility of the thing that literally everyone everywhere told them almost immediately a couple of years ago when this fad really started charting (or perhaps sharting is more appropriate)?










