I created this account two days ago, but one of my posts ended up in the (metaphorical) hands of an AI powered search engine that has scraping capabilities. What do you guys think about this? How do you feel about your posts/content getting scraped off of the web and potentially being used by AI models and/or AI powered tools? Curious to hear your experiences and thoughts on this.
# Prompt Update
The prompt was something like, “What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?” Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate than the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.
It even talked about this very post on item 3 and on the second bullet point of the “Notable Posts” section.
For more information, check this comment.
Edit¹: This is Perplexity. Perplexity AI is an advanced conversational search engine that enhances the research experience by providing concise, sourced answers to user queries. It operates by leveraging AI language models, such as GPT-4, to analyze information from various sources on the web. It employs data scraping to gather information from online sources, which it then feeds to its large language models (LLMs) for generating responses to user queries. The scraping is done by automated crawlers that index and extract content from websites, including articles, summaries, and other relevant data. (12/28/2024)
Edit²: One could argue that data scraping by services like Perplexity raises privacy concerns because it collects and processes vast amounts of online information without explicit user consent, potentially including personal data, comments, or content that individuals posted without expecting it to be aggregated and/or analyzed by AI systems. One could also argue that this indiscriminate collection raises questions about data ownership, proper attribution, and the right to control how one’s digital footprint is used in training AI models. (12/28/2024)
Edit³: I added the second image to the post and its description. (12/29/2024).
If I have no other choice, then I’ll use my data to reduce AI to an unusable state, or at the very least a state where it’s aware that everything it spews out happens to be bullshit and ends each response with something like “but what I say likely isn’t true. Please double-check with these sources…” or something productive that reduces the reliance on AI in general.
How do you feel about your content getting scraped by AI models?
I think famous Hollywood actress Margot Robbie summed my thoughts up pretty well.
I don’t like it, but I accept it as inevitable.
I wouldn’t say I go online with the intent of deceiving people, but I think it’s important in the modern day to seed in knowingly false details about your life, demographics, and identity here and there to prevent yourself from being doxxed online by AI.
I don’t care what the LLMs know about me if I am not actually a real person, even if my thoughts and ideas are real.
I tested it out; it’s not really very accurate and seems to mix users up, but scraping has been a thing for decades, this isn’t new.
Did you specifically inquire about content from your own profile? Can you share the prompt? And how close to the source material was its response?
The prompt was something like, “What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?” Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate than the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.
It even talked about this very post on item 3 and on the second bullet point of the “Notable Posts” section.
However, when I ran the same prompt again (or similar prompts), it started hallucinating a lot of information. So, it seems like the answers are very hit or miss. Maybe that’s an issue that can be solved with some prompt engineering and as one’s account gets more established.
If only there were some way to make any attempt at building an accurate profile of one’s online presence via data scraping completely useless, by masking one’s own presence within the vast quantity of online data of someone else, let’s say, for example, a famous public figure.
But who would do such a thing?
As I live and breathe, it’s the famous Margot Robbie herself!
Can’t wait for someone to ask an LLM “Hey tell me what Margot Robbie’s interests are” only for it to respond “Margot Robbie is a known supporter of free software, and a fierce proponent of beheading CEOs”.
OMG, the real Margot Robbie
Nothing I say is of any real value even to the people I reply to, much less the world at large. Frankly, I hope someone uses my data to write Apple a decent fucking autocorrect. Otherwise, I don’t care.
I don’t like it, that’s why I like to throw in just a cup or two of absolute bullshit with just a pinch of cilantro. Then top it off with a firm jiggle to get that last drop out from the tip.
I couldn’t even imagine speaking like this at first, but once you get used to it the firmness just slides right in and gives you a sense of fulfillment that you can’t find anywhere else but home.
When the cows come home to roost, you know it’s time to hang up your hat, take off your pants, and slide on the ice.
Nothing I can do about it. But I can occasionally spew bullshit so that the AI has no idea what it’s doing as well. Fire hydrants were added to Minecraft in 1.16 to combat the fires in the updated nether dimension.
The fediverse is largely public, so I would only post public info here. Ergo, I don’t give a shit what the public does with it.
I don’t think it’s unreasonable to be uneasy with how technology is shifting the meaning of what “public” is. It used to be that walking the dog meant my neighbors could see me on the sidewalk while I was walking. Now there are Ring cameras, etc., recording my every movement, and we’ve seen that abused in lots of different ways.
The internet has always been a grand stage, though. We’re like 40 years into this reality at this point.
I think people who came of age during Facebook missed that memo, though. It was standard, even explicitly recommended, to never use your real name or post identifying information on the internet. Facebook kinda beat that out of people under the guise of “only people you know can access your content, so it’s ok”. People were trained into complacency, but that doesn’t mean the nature of the beast ever changed.
People maybe deluded themselves that posting on the internet was closer to walking their dog in their neighbourhood than it was to broadcasting live in front of international film crews, but they were (and always have been) dead wrong.
We’re like 40 years into this reality at this point.
We are not 40 years into everyone’s every action (online and, increasingly, even offline via location tracking and facial recognition cameras) being tracked, stored in a database, and analyzed by AI. That’s both brand new and way worse than even what the pre-Facebook “don’t use your real name online” crowd was ever warning about.
I mean, yes, back in the day it was understood that the stuff you actively write and post on Usenet or web forums might exist forever (the latter, assuming the site doesn’t get deleted or at least gets archived first), but (a) that’s still only stuff you actively chose to share, and (b) at least at the time, it was mostly assumed to be a person actively searching who would access it – that retrieving it would take a modicum of effort. And even that was correctly considered to be a great privacy risk, requiring vigilance to mitigate.
These days, having an entire industry dedicated to actively stalking every user for every passive signal and scrap of metadata they can possibly glean, while moreover the users themselves are much more “normie”/uneducated about the threat, is materially even worse by a wide margin.
Our choices regarding security and privacy are always compromises. The uneasy reality is that new tools can change the level of risk attached to our past choices. People may have been OK with others seeing their photos but aren’t comfortable now that AI deep fakes are possible. But with more and more of our lives being conducted in this space, do even knowledgable people feel forced to engage regardless?
People think there are only two categories, private and public, but there are now actually three: private, public, and panopticon.
But what if a shitposting AI posts all the best takes before we can get to them?
Is the world ready for High Frequency Shitposting?
Is the world ready for High Frequency Shitposting?
The Lemmy world? Not at all. Instances have no automated security mechanisms. The mod system, consisting mostly of self-important ***s, would break down like straw. Users wouldn’t hold back; they’d write complaints in exponential numbers, or give up using Lemmy within days…
deleted by creator
I couldn’t agree more!
Do you own your own words?
I expect all my public posts to be scraped, and I’m fine with that. I’m slightly biased towards it if it’s for code generation.
It’s Perplexity AI, so it’ll do web searches on demand. You asked about your username, so it searched for your username on the web. Fediverse content is indexed, even content from instances that block web crawling (e.g. via robots.txt, or via UA blacklisting on the server side), because the content gets federated to servers that are indexed by web crawlers.
Now, when it comes to offline models and pre-trained content, the way transformers work will often “scramble” the art and the artist. If content doesn’t explicitly mention the author (and also if the content isn’t well spread across different sources), LLMs will “know” the information you posted online, but they won’t be capable of linking such content to you when asked about it.
Let me give an example: suppose you coined a unique quote. Nobody else wrote it. You published it on Lemmy. Your quote becomes part of the training data for GPT-n or any other LLM out there. When anyone asks them “Who said the quote ‘…’?”, it’ll either hallucinate (e.g. citing a very random famous writer) or it’ll say something like “I don’t have that information”.
It’s why AIs are often (and understandably) called plagiarists by anti-AI people: AIs don’t cite their sources. Technically, current state-of-the-art transformers can’t even do so, because LLMs are, under the hood, some fancy-crazy kind of “Will it blend?” applied to entire corpora from across the web: AI devs gather all the data they possibly can (legally or illegally), drop it all inside the “AI blender cup”, and voilà, an LLM is trained, without actually storing each piece of content entirely, just their statistical associations.
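The “statistical associations, not stored content” point can be illustrated with a toy example. This is only a sketch, not how real transformers work: a bigram model keeps word-pair counts, so the training text itself is not archived verbatim, yet the model still “knows” what tends to follow what.

```python
# Toy illustration of learning statistics rather than storing documents.
# Real LLMs are vastly more complex, but the principle is similar:
# after training, only counts remain, not the original sentences.
from collections import Counter, defaultdict

def train_bigrams(corpus: list[str]) -> dict[str, Counter]:
    """Count, for each word, how often each other word follows it."""
    model: dict[str, Counter] = defaultdict(Counter)
    for text in corpus:
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def most_likely_next(model: dict[str, Counter], word: str):
    """Predict the most frequent follower of `word`, if any was seen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams([
    "the quick brown fox",
    "the quick red fox",
])
```

Asking `most_likely_next(model, "the")` yields `"quick"`, but no function here can reproduce either training sentence in full, which is roughly why attribution is so hard.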
I understand that Perplexity employs various language models to handle queries and that the responses generated may not come directly from the training data used by those models, since a significant portion of the output comes from what it scraped from the web. However, a significant concern for some individuals is the potential for their posts to be scraped and also used to train AI models, hence my post.
I’m not anti-AI, and I see your point that transformers often dissociate content from its creator. However, one could argue this doesn’t fully mitigate the concern. Even if the model can’t link the content back to the original author, it’s still using their data without explicit consent. The fact that LLMs might hallucinate or fail to attribute quotes accurately doesn’t resolve the potential plagiarism issue; instead, it highlights another problematic aspect of these models, imo.
Is it scraping or just searching?
RAG is a pretty common technique for making LLMs useful: the LLM “decides” it needs external data, and so it reaches out to a configured data source. Such a data source could be just plain ol’ Google. I think their documentation will help shed some light on this. Reading my edits will hopefully clarify that too. Either way, I always recommend reading their docs! :)
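The RAG flow described above can be sketched in a few lines. This is a deliberately minimal, hypothetical example: real systems use embedding search and an actual LLM call, not keyword overlap, and the function names here are made up for illustration.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# 1) retrieve documents relevant to the query, 2) stuff them into the
# prompt the LLM actually receives. Keyword overlap stands in for a
# real retriever (embeddings, web search, etc.).

def retrieve(query: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_prompt(query: str, documents: dict[str, str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {s}" for s in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

docs = {
    "post1": "llama posts about AI scraping on Lemmy",
    "post2": "a recipe for sourdough bread",
}
prompt = build_prompt("what does llama post about on Lemmy", docs)
```

The key point for the scraping discussion: with RAG, your content doesn’t need to be baked into the model’s weights at all; it just needs to be reachable at query time.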
I guess after a bit more consideration, my previous question doesn’t really matter.
If it’s scraped and baked into the model; or if it’s scraped, indexed, and used in RAG; they’re both the same ethically.
And I generally consider AI to be fairly unethical.
I run my own instance and have a long list of user agents I flat out block, and that includes all known AI scraper bots.
That only prevents them from scraping from my instance, though, and they can easily scrape my content from any other instance I’ve interacted with.
Basically I just accept it as one of the many, many things that sucks about the internet in 2024, yell “Serenity Now!” at the sky, and carry on with my day.
I do wish, though, that other instances would block these LLM scraping bots but I’m not going to avoid any that don’t.
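The user-agent blocking described above could look something like this sketch. The bot names are examples of commonly published AI crawler UAs; an actual deployment would typically do this at the reverse proxy (nginx, Caddy) before requests ever reach the instance, and the list would be much longer.

```python
# Sketch of user-agent blocklisting for AI scraper bots.
# Substring matching is intentionally loose, since crawlers embed
# their name inside a longer User-Agent string.

BLOCKED_UA_SUBSTRINGS = [
    "GPTBot",           # OpenAI's crawler
    "CCBot",            # Common Crawl
    "ClaudeBot",        # Anthropic
    "Google-Extended",  # Google AI training opt-out token
    "PerplexityBot",    # Perplexity
]

def is_blocked(user_agent: str) -> bool:
    """Return True if the request's User-Agent matches a known AI scraper."""
    ua = (user_agent or "").lower()
    return any(bot.lower() in ua for bot in BLOCKED_UA_SUBSTRINGS)
```

As noted in this thread, this only helps against crawlers that honestly identify themselves; a bot that lies about its User-Agent sails right through.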
you might be interested to know that UA blocking is not enough: https://feddit.bg/post/13575
the main thing is in the comments
I mean, I don’t really take issue with the “use my comments” part, but I do take issue with the scraping part: there are APIs for getting content, which would make it a lot easier on my system, but these bots really do it the stupidest way, with many hundreds of requests per hour. Therefore I had to put in a system to find and ban them.
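A find-and-ban system like the one described above usually boils down to rate-based detection: flag any client that exceeds a request budget within a sliding time window. This is a sketch with made-up thresholds; the actual ban action (firewall rule, fail2ban, proxy deny list) is out of scope.

```python
# Sliding-window request counter: flags clients exceeding a per-hour
# budget, the typical signature of a dumb scraper bot.
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 3600
MAX_REQUESTS = 300  # hypothetical per-hour budget

_requests: dict[str, deque] = defaultdict(deque)

def should_ban(client_ip: str, now: Optional[float] = None) -> bool:
    """Record one request and report whether the client exceeded the budget."""
    now = time.monotonic() if now is None else now
    q = _requests[client_ip]
    q.append(now)
    # Drop timestamps that fell out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS
```

In practice you would key this on IP ranges or ASNs rather than single addresses, since scrapers rotate IPs.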
There are at least one or two Lemmy users who add a CC or non-AI license footer to their posts. Not that it’d do anything, but it might be fun to try and get the LLM to admit it’s illegally using your content.
Those… don’t hold any weight lol. Once you post on any website, you typically grant the website owner a broad license to your content. That’s what gives them permission to relay your message to anyone reading the website. Copyright doesn’t do anything to restrict readers of the content (i.e. model trainers), only publishers.
It’d be hilarious if the model spat out the non-AI license footer in response to a prompt.
I did tell one of them a few months ago that all they’re going to do is train the AI that sometimes people end their posts with useless copyright notices. It doesn’t understand anything. But superstitious monkeys gonna be superstitious monkeys.
Sadly it hasn’t been proven in court yet that copyright even matters for training AI.
And we damn well know it doesn’t for Chinese AI models.
Don’t give me any ideas now >:)