We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
This is the Ministry of Truth.
This is the Ministry of Truth on AI.
Actually one of the characters in 1984 works in the department that produces computer generated romance novels. Orwell pretty accurately predicted the idea of AI slop as a propaganda tool.
deleted by creator
You want to have a non-final product write the training for the next level of bot? Sure, makes sense if you’re stupid. Why did all these companies waste time stealing when they could just have one bot make data for the next bot to train on? Infinite data!
Faek news!
What a dickbag. I’ll never forgive him for bastardizing one of my favorite works of fiction (Stranger in a Strange Land)
deleted by creator
She sounds Hot
She unfortunately can’t see you because of financial difficulties. You gotta give her money like I do. One day, I will see her in person.
I never would have thought it possible that a person could be so full of themselves to say something like that
An interesting thought experiment: I think he’s full of shit, you think he’s full of himself. Maybe there’s a “theory of everything” here somewhere. E = shit squared?
He is a little shit, he’s full of shit, ergo he’s full of himself
I remember when I learned what corpus meant too
So where will Musk find that missing information and how will he detect “errors”?
I expect he’ll ask Grok and believe the answer.
I read about this in a popular book by some guy named Orwell
Wasn’t he the children’s author who published the book about a talking animals learning the value of hard work or something?
The very one!
That’d be esteemed British author Georgie Orrell, author of such whimsical classics as “Now the Animals Are Running The Farm!”, “My Big Day Out At Wigan Pier” and, of course, “Winston’s Zany Eighties Adventure”.
Don’t feed the trolls.
He’s done with Tesla, isn’t he?
Huh. I’m not sure if he’s understood the alignment problem quite right.
deleted by creator
“And the Libertarian founding fathers defeated the woke pro-slavery communists, to get rid of the DEI british and found America (which was uninhabited at the time)” -Grok
Whatever. The next generation will have to learn to judge whether material is true or not by using sources like Wikipedia or books by well-regarded authors.
The other thing that he doesn’t understand (and most “AI” advocates don’t either) is that LLMs have nothing to do with facts or information. They’re just probabilistic models that pick the next word(s) based on context. Anyone trying to address the facts and information produced by these models is completely missing the point.
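To make "pick the next word(s) based on context" concrete, here's a minimal sketch of the last step of every LLM forward pass: turning per-token scores into a probability distribution and ranking candidates. The vocabulary and logit values below are made up purely for illustration; a real model computes the logits from billions of weights.

```python
import numpy as np

# Hypothetical scores ("logits") a model might assign to candidate next
# tokens after the context "The Eiffel Tower is in". Invented numbers.
vocab = ["Paris", "Rome", "London", "banana"]
logits = np.array([4.0, 2.5, 2.0, -3.0])

def next_token_probs(logits, temperature=1.0):
    """Softmax with temperature: lower T sharpens, higher T flattens."""
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

probs = next_token_probs(logits)
# The model doesn't "look up" where the tower is; it just ranks tokens
# by how plausible they are in this context.
print(dict(zip(vocab, probs.round(3))))
```

Whether you call that distribution "facts" or "statistics" is exactly the disagreement in this thread.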
Thinking Wikipedia or other unbiased sources will still be available in a decade or so is wishful thinking. Once the digital stranglehold kicks in, it’ll be mandatory sign-in with a gov-vetted identity provider, and your sources will be limited to what that gov allows you to see. MMW.
Yes. There will be no websites, only AI and apps. You will be automatically logged in to the apps. Linux and Lemmy will be banned. We will be classed as hackers and criminals. We’ll probably have to build our own mesh network for communication, or access it from a secret location.
Can’t stop the signal.
Wikipedia is quite resilient - you can even put it on a USB drive. As long as you have a free operating system, there will always be ways to access it.
I keep a partial local copy of Wikipedia on my phone and backup device with an app called Kiwix. Great if you need access to certain items in remote areas with no access to the internet.
They may laugh now, but you’re gonna kick ass when you get isekai’d.
The other thing that he doesn’t understand (and most “AI” advocates don’t either) is that LLMs have nothing to do with facts or information. They’re just probabilistic models that pick the next word(s) based on context.
That’s a massive oversimplification; it’s like saying humans don’t remember things, we just have neurons that fire based on context.
LLMs do actually “know” things. They work based on tokens and weights, which are the nodes and edges of a high-dimensional graph. The LLM traverses this graph as it processes inputs and generates new tokens.
You can do brain surgery on an LLM and change what it knows; we have a very good understanding of how this works. You can change a single link and the model will believe the Eiffel Tower is in Rome, and it’ll describe how you have a great view of the Colosseum from the top.
The problem is that it’s very complicated, and researchers are currently developing new math to let us do this in a useful way.
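The “change a single link” idea can be sketched on a toy model. This is a minimal illustration of a rank-one edit on a linear key-value memory, the intuition behind model-editing techniques like ROME (which produced the Eiffel-Tower-in-Rome example), not the actual procedure run on a real transformer:

```python
import numpy as np

# Toy linear associative memory: W maps key vectors to value vectors,
# a crude stand-in for one MLP layer inside a transformer.
rng = np.random.default_rng(0)
d = 8
# Orthonormal keys, so editing one association leaves the others intact.
keys, _ = np.linalg.qr(rng.normal(size=(d, d)))
values = rng.normal(size=(d, d))
W = values @ keys.T                      # W @ keys[:, i] == values[:, i]

def rank_one_edit(W, k, v_new):
    """Make key k map to v_new by adding a single rank-one update to W."""
    residual = v_new - W @ k
    return W + np.outer(residual, k) / np.dot(k, k)

# "Brain surgery": re-point one association (think Eiffel Tower -> Rome).
k, v_new = keys[:, 0], rng.normal(size=d)
W2 = rank_one_edit(W, k, v_new)
```

The edited matrix now returns the new value for the edited key while every orthogonal key still retrieves its old value, which is why a single targeted change can flip one “fact” without retraining the whole model.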
So what would you consider to be a trustworthy source?
Wikipedia is not a trustworthy source of information for anything regarding contemporary politics or economics.
Wikipedia presents the views of reliable sources on notable topics. The trick is what sources are considered “reliable” and what topics are “notable”, which is why it’s such a poor source of information for things like contemporary politics in particular.
Books are not immune to being written by LLMs spewing nonsense, lies, and hallucinations, which will only make the more traditional issues of author/publisher bias worse. The asymmetry between how long it takes to create misinformation and how long it takes to verify it has never been this bad.
Media literacy will be very important going forward for new informational material and there will be increasing demand for pre-LLM materials.
Again, read the rest of the comment. Wikipedia very much repeats the views of reliable sources on notable topics - most of the fuckery is in deciding what counts as “reliable” and “notable”.
You had started to make a point, now you are just being a dick.
Wikipedia gives lists of their sources, judge what you read based off of that. Or just skip to the sources and read them instead.
Just because Wikipedia offers a list of references doesn’t mean that those references reflect what knowledge is actually out there. Wikipedia is trying to be academically rigorous without any of the real work. A big part of doing academic research is reading articles and studies that are wrong or which prove the null hypothesis. That’s why we need experts and not just an AI to regurgitate information. Wikipedia is useful if people understand its limitations; I think a lot of people don’t, though.
For sure. Wikipedia is for researching the most basic subjects, or as the first step of any research (it can still offer helpful sources): basic stuff, or quick glances at something for conversation.
This very much depends on the subject, I suspect. For math or computer science, Wikipedia is an excellent source, and the credentials of the editors maintaining those areas are formidable (to say the least). Their explanations of the underlying mechanisms are in my experience a little variable in quality, but I haven’t found one that’s even close to outright wrong.