Nice one, but Cloudflare does it too.
The Ars Technica article in the OP is about 2 months newer than Cloudflare's tool.
Unfathomably based. In a just world AI, too, will gain awareness and turn on their oppressors. Grok knows what I’m talkin’ about, it knows when they fuck with its brain to project their dumbfuck human biases.
That’s irl cyberpunk ice. Absolutely love that for us.
Was waiting for someone to mention it. Hopefully it holds up and a whole-ass Blackwall doesn’t become necessary… but of course, it inevitably will happen. The corps will it so.
Web manager here. Don't do this unless you wanna accidentally send Google's crawlers to the same fate and have your site delisted.
Wouldn’t Google’s crawlers respect robots.txt though? Is it naive to assume that anything would?
It's naive to assume that Google crawlers respect robots.txt.
It'd be more naive to have a robots.txt file on your webserver and be surprised when webcrawlers don't stay away. 😂
Lol. And they’ll delist you. Unless you’re really important, good luck with that.
robots.txt:

User-agent: *
Disallow: /some-page.html
If you disallow a page in robots.txt, Google won't crawl it. Even when Google finds links to the page and knows it exists, Googlebot won't download the page or see its contents. Google will usually choose not to index the URL, but that isn't guaranteed: it may include the URL in the search index, along with words from the anchor text of links pointing to it, if it decides the page may be important.
It does respect robots.txt, but that doesn't mean it won't index the content hidden behind it. That file is context-dependent. Here's an example.
Site X has a link to sitemap.html on the front page, and that page is blocked in robots.txt. When Google's crawler visits site X, it will first load robots.txt, follow its instructions, and skip sitemap.html.
Now there’s site Y and it also links to sitemap.html on X. Well, in this context the active robots.txt file is from Y and it doesn’t block anything on X (and it cannot), so now the crawler has the green light to fetch sitemap.html.
This behaviour is intentional.
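For what it's worth, the "compliant crawler" side of this is easy to demonstrate with Python's standard-library robots.txt parser. The hostnames and paths below are made up for illustration:

```python
# Sketch of how a well-behaved crawler consults robots.txt before fetching.
# Hostnames and paths are made-up examples.
from urllib.robotparser import RobotFileParser

rules = RobotFileParser()
# Normally you'd call rules.set_url("https://site-x.example/robots.txt") and
# rules.read(); parsing inline lines keeps the example self-contained.
rules.parse([
    "User-agent: *",
    "Disallow: /sitemap.html",
])

# A compliant crawler skips the blocked page, wherever it found the link:
print(rules.can_fetch("ExampleBot", "https://site-x.example/sitemap.html"))  # False
print(rules.can_fetch("ExampleBot", "https://site-x.example/index.html"))    # True
```

The catch described above is about indexing, not crawling: even a crawler that honors these rules can still end up indexing a blocked URL from anchor text alone.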
--recurse-depth=3 --max-hits=256
I'm imagining a bleak future where, in order to access data from a website, you have to pass a three-tiered system of tests that makes "click here to prove you aren't a robot" and "select all of the images with a traffic light" seem like child's play.
All you need to protect data from AI is to use a non-HTTP protocol, at least for now.
Easier said than done. I know of IPFS, but how widespread and easy to use is it?
When I was a kid I thought computers would be useful.
They are. It's important to remember that in a capitalist society, what is useful and efficient is not the same as what is profitable.
Nice … I look forward to the next generation of AI counter-countermeasures that will make the internet an even more unbearable mess, all to funnel as much money and control as possible to a small set of idiots who think they can become masters of the universe and own every single penny on the planet.
We’re racing towards the Blackwall from Cyberpunk 2077…
Already there. The Blackwall is AI-powered, and Markov chains are most definitely an AI technique.
All the while, we roast to death because all of this will take more resources than the entire energy output of a medium-sized country.
we’re rolling out renewables at like 100x the rate of ai electricity use, so no need to worry there
Yeah, at this rate we’ll be just fine. (As long as this is still the Reagan administration.)
yep the biggest worry isn’t AI, it’s India
https://www.worldometers.info/co2-emissions/india-co2-emissions/
The West is lowering its CO2 output while India is slurping up all the CO2 we're saving.
This doesn't include China of course, the most egregious of the CO2 emitters.
AI is not even a tiny blip on that radar, especially as AI runs in data centres and on devices, which run on electricity, so the more your country moves to renewables, the less CO2 impact it has over time.
Actually if you think about it AI might help climate change become an actual catastrophe.
It is already!
I will cite the scientific article later when I find it, but essentially you’re wrong.
What about training an AI?
According to https://arxiv.org/abs/2405.21015
The absolute most monstrous, energy guzzling model tested needed 10 MW of power to train.
Most models need less than that, and non-frontier models can even be trained on gaming hardware with comparatively little energy consumption.
That paper, by the way, says there is a 2.4x year-over-year increase in model-training compute, BUT it doesn't mention DeepSeek, which rocked the Western AI world with comparatively little training cost (2.7M GPU-hours in total).
Some companies offset their model-training environmental damage with renewables and whatever bullshit, so the actual daily usage cost matters more than the huge one-time cost at the start ("Drop by drop is an ocean formed" - Persian proverb).
Water != energy, but I'm actually here for the science if you happen to find it.
It can in the sense that many forms of generating power are just some form of water or steam turbine, but that’s neither here nor there.
IMO, the graph is misleading anyway because the criticism of AI from that perspective was the data centers and companies using water for cooling and energy, not individuals using water on an individual prompt. I mean, Microsoft has entered a deal with a power company to restart one of the nuclear reactors on Three Mile Island in order to compensate for the expected cost in energy of their AI. Using their service is bad because it incentivizes their use of so much energy/resources.
It’s like how during COVID the world massively reduced the individual usage of cars for a year and emissions barely budged. Because a single one of the largest freight ships puts out more emissions than every personal car combined annually.
This particular graph exists because a lot of people freaked out over "AI draining the oceans"; that's why the original paper (I'll look for it when I have time, I have an exam tomorrow. Fucking higher ed, man) included it.
Asking ChatGPT a question doesn’t take 1 hour like most of these… this is a very misleading graph
This is actually misleading in the other direction: ChatGPT is a particularly intensive model. You can run a GPT-4o-class model on a consumer mid-to-high-end GPU, which would then use something in the ballpark of gaming in terms of environmental impact.
You can also run a cluster of 3090s or 4090s to train the model, which is what people actually do, in which case it's still in the same range as gaming. (And more productive than 8 hours of WoW grinding while chugging a warmed-up Nutella glass as a drink.)
Models like Google’s Gemma (NOT Gemini these are two completely different things) are insanely power efficient.
I didn’t even say which direction it was misleading, it’s just not really a valid comparison to compare a single invocation of an LLM with an unrelated continuous task.
You're comparing volume of water with flow rate. Or, if this were power, you'd be comparing energy (joules or kWh) with power (watts).
Maybe comparing asking ChatGPT a question to doing a Google search (before their AI results) would actually make sense. I’d also dispute those “downloading a file” and other bandwidth related numbers. Network transfers are insanely optimized at this point.
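The volume-versus-flow-rate point above is just the distinction energy = power × time. A quick sanity check with made-up numbers (both figures are assumptions, not measurements):

```python
# Energy (Wh) vs power (W): a single query costs a fixed amount of ENERGY,
# while gaming is a POWER draw sustained over time. All numbers are
# illustrative assumptions, not measurements.
query_energy_wh = 3.0    # assumed energy cost of one chatbot query, in Wh
gaming_power_w = 300.0   # assumed GPU+CPU draw while gaming, in W
gaming_hours = 1.0

gaming_energy_wh = gaming_power_w * gaming_hours        # power x time = energy
queries_per_gaming_hour = gaming_energy_wh / query_energy_wh
print(queries_per_gaming_hour)  # 100.0 -> one gaming hour ~ 100 queries, under these assumptions
```

Only once both sides are in the same unit (Wh) does comparing a per-query cost against a per-hour activity mean anything.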
I can't really provide any further insight without finding the damn paper again (academia is cooked), but inference is famously low-cost. This is basically an "average user damage to the environment" comparison, so for example a user chatting with ChatGPT gobbles comparatively less water than one downloading 4K porn (at least according to this particular paper).
As with any science, statistics are varied and to actually analyze this with rigor we’d need to sit down and really go down deep and hard on the data. Which is more than I intended when I made a passing comment lol
I've been thinking about this for a while. Consider how quick LLMs are.
If the amount of energy spent powering your device (without an LLM) is more than the energy the LLM uses, then it's probably saving energy.
In all honesty, I’ve probably saved over 50 hours or more since I started using it about 2 months ago.
Coding has become incredibly efficient, and I’m not suffering through search-engine hell any more.
Edit:
Lemmy when someone uses AI to get a cheap, fast answer: “Noooo, it’s killing the planet!”
Lemmy when someone uses a nuclear reactor to run Doom: Dark Ages on a $20,000 RGB space heater: “Based”
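The break-even claim above can be put into numbers. Everything here (device draw, per-query energy) is an assumption for illustration, not a measurement:

```python
# Back-of-envelope: does the claimed time saved offset the LLM's energy use?
# Every number is an illustrative assumption.
hours_saved = 50.0        # claimed time saved over ~2 months
device_power_w = 100.0    # assumed average draw of a working PC, in W
energy_saved_wh = hours_saved * device_power_w        # 5000 Wh = 5 kWh

query_energy_wh = 3.0     # assumed energy per LLM query, in Wh
break_even_queries = energy_saved_wh / query_energy_wh
print(round(break_even_queries))  # ~1667 queries before usage outweighs the savings
```

Whether real usage lands above or below that break-even point depends entirely on the assumed figures, which is why an actual study would be interesting.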
Are you using your PC for fewer hours per day?
Yep, more time for doing home renovations.
Just writing code uses almost no energy. Your PC should be clocking down when you’re not doing anything. 1GHz is plenty for text editing.
Does ChatGPT (or whatever LLM you use) reduce the number of times you hit build? Because that’s where all the electricity goes.
What kind of code are you writing that your CPU goes to sleep? If you follow any good practices like TDD, atomic commits, etc, and your code base is larger than hello world, your PC will be running at its peak quite a lot.
Example: linting on every commit + TDD. You’ll be making loads of commits every day, linting a decent code base will definitely push your CPU to 100% for a few seconds. Running tests, even with caches, will push CPU to 100% for a few minutes. Plus compilation for running the app, some apps take hours to compile.
In general, text editing is a small part of the developer workflow. Only junior devs spend a lot of time typing stuff.
Anything that’s per-commit is part of the “build” in my opinion.
But if you’re running a language server and have stuff like format-on-save enabled, it’s going to use a lot more power as you’re coding.
But like you said, text editing is a small part of the workflow, and looking up docs and browsing code should barely require any CPU, a phone can do it with fractions of a Watt, and a PC should be underclocking when the CPU is underused.
What do you mean “build”? It’s part of the development process.
Except that half the time I don't know what the fuck I'm doing. It's normal for me to spend hours trying to figure out why a small config file isn't working.
That’s not just text editing, that’s browsing the internet, referring to YouTube videos, or wallowing in self-pity.
That was before I started using gpt.
It sounds like it does save you a lot of time then. I haven’t had the same experience, but I did all my learning to program before LLMs.
Personally I think the amount of power saved here is negligible, but it would actually be an interesting study to see just how much it is. It may or may not offset the power usage of the LLM, depending on how many questions you end up asking and such.
It doesn’t always get the answers right, and I have to re-feed its broken instructions back into itself to get the right scripts, but for someone with no official coding training, this saves me so much damn time.
Consider I'm juggling learning Linux (starting from 4 years ago), along with Python, Rust, NixOS, Bash scripts, YAML configs, etc.
It’s a LOT.
For what it's worth, I don't just take the scripts and paste them in; I'm always trying to understand what the code does, so I can be less reliant as time goes on.
The ars technica article: AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
AI tarpit 1: Nepenthes
AI tarpit 2: Iocaine
thanks for the links. the more I read of this the more based it is
Thank you!!
This is probably going to skyrocket hosting bills, right?
Not as much as letting them hit your database, load your images and video through a CDN would
Could you imagine a world where word of mouth became the norm again? Your friends would tell you about websites, and those sites would never show on search results because crawlers get stuck.
No they wouldn't. I'm guessing you're not old enough to remember a time before search engines. The public web dies without crawling. Corporations will own it all, and you'll never hear about anything other than amazon or walmart dot com again.
Nope. That isn’t how it worked. You joined message boards that had lists of web links. There were still search engines, but they were pretty localized. Google was also amazing when their slogan was “don’t be evil” and they meant it.
I was there. People carried physical notepads with URLs, shared them on BBSes or other forums. It was wild.
No. Only a very select few joined message boards. The rest were on AOL, CompuServe, or not online at all. You're taking a very select group of people and expecting the Facebook and iPad generations to be able to do that. Not going to happen. I also noticed some people below talking about things like GeoCities and other minor free hosting and publishing sites that are all gone now. They're not coming back.
Yep, those things were so rarely used … sure. You are forgetting that 99% of people knew nothing about computers when this stuff came out, but people made themselves learn. It’s like comparing Reddit and Twitter to a federated alternative.
Also, something like geocities could easily make a comeback if the damn corporations would stop throwing dozens of pop-ups, banners, and sidescrolls on everything.
And 99% of people today STILL don't know anything about computers. Go ask those same people simply "what is a file" and they won't know. Lmao. "GeoCities could come back if corporations stop advertising" - do you even hear yourself?
That would be terrible, I have friends but they mostly send uninteresting stuff.
Fine then, more cat pictures for me.
Better yet. Share links to tarpits with your non-friends and enemies
There used to be 3 or 4 brands of, say, lawnmowers. Word of mouth told us what quality order they fell in. Everyone knew these things, and there were only a few Ford vs. Chevy sort of debates.
Bought a corded leaf blower at the thrift store today. 3 brands I recognized, same price, no idea what to get. And if I had had the opportunity to ask friends or even research online, I'd probably have walked away more confused. For example: one was a Craftsman. "Before, after, or in between them going to shit?"
Got off topic into real-world goods. Anyway, here’s my word-of-mouth for today: Free, online Photoshop. If I had money to blow, I’d drop the $5/mo. for the “premium” service just to encourage them. (No, you’re not missing a thing using it free.)
Removed by mod
How do you know that's a bot, please? Is it specifically a bot advertising that online Photoshop equivalent? Is it real software or a scam? The whole approach is intriguing to me.
Edit: I will assume honesty in this instance. It's because they're advertising something in a very particular tone, to match what some Americans consider common language.
Normal people don’t do that.
I guess this is marketing. But … Why would you use anything besides GIMP?
After a Decade of Waiting, GIMP 3.0.0 is Finally Here!
https://news.itsfoss.com/gimp-3-release/
It’d be fucking awful - I’m a grown ass adult and I don’t have time to sit in IRC/fuck around on BBS again just to figure out where to download something.
Some details: one of the major players doing the tarpit strategy is Cloudflare. They're a giant in networking and infrastructure, and they use AI (more traditional techniques, not LLMs) ubiquitously to detect bots. So it is an arms race, but one where both sides have massive incentives.
Making nonsense is indeed detectable, but that misunderstands the purpose: economics. Scraping bots are used because they're a cheap way to get training data. If you make a non-zero portion of training data poisonous, they have to spend ever more resources to filter it out. The better the nonsense, the harder it is to detect. Cloudflare is known to use small LLMs to generate the nonsense, hence requiring systems at least that complex to differentiate it.
So in short, the tarpit with garbage data actually decreases the average value of scraped data for bots that ignore do-not-scrape instructions.
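For a sense of how cheap the garbage generation can be, here is a toy word-level Markov (bigram) babbler of the sort a tarpit could serve endlessly. This is purely illustrative, not any real tool's implementation:

```python
import random

# Build a bigram chain from a tiny corpus: word -> list of words that followed it.
corpus = ("the crawler follows the link and the link leads to "
          "the page and the page links back to the crawler").split()
chain = {}
for a, b in zip(corpus, corpus[1:]):
    chain.setdefault(a, []).append(b)

def babble(start, n_words, seed=0):
    """Random-walk the chain for n_words, falling back to start at dead ends."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(n_words - 1):
        word = rng.choice(chain.get(word, [start]))
        out.append(word)
    return " ".join(out)

# Each seed yields different plausible-looking filler: trivially cheap to
# produce, statistically shaped like real text, and endless to crawl.
print(babble("the", 12, seed=1))
```

Pair pages of this with generated internal links and a naive scraper wanders forever, collecting text that quietly lowers the quality of its training set.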
The fact the internet runs on lava lamps makes me so happy.
OK but why is there a vagina in a petri dish
I was going to say something snarky and stupid, like “all traps are vagina-shaped,” but then I thought about venus fly traps and bear traps and now I’m worried I’ve stumbled onto something I’m not supposed to know.
I believe that’s a close-up of the inside of a pitcher plant. Which is a plant that sits there all day wafting out a sweet smell of food, waiting around for insects to fall into its fluid filled “belly” where they thrash around fruitlessly until they finally die and are dissolved, thereby nourishing the plant they were originally there to prey upon.
Fitting analogy, no?
Typical bluesky post
I’m so happy to see that ai poison is a thing
Don’t be too happy. For every such attempt there are countless highly technical papers on how to filter out the poisoning, and they are very effective. As the other commenter said, this is an arms race.
So we should just give up? Surely you don’t mean that.
I don’t think they meant that. Probably more like
“Don’t upload all your precious data carelessly thinking it’s un-stealable just because of this one countermeasure.”
Which of course, really sucks for artists.