- cross-posted to:
- technology@lemmy.world
- Technology@programming.dev
cross-posted from: https://programming.dev/post/34472919
The last thing I want is for AI to speak for me. I will not be his stooge in any way, shape, or form.
Yeah, that is why open source really matters; otherwise AI will just be another advanced copy of state-owned media.
Are they also still going to give shit to China for censorship?
As stated in the Executive Order, this order applies only to federal agencies, which the President controls.
It is not a general US law; those are created by Congress.
You’re acting like any of those words have meaning anymore
Yes, as the checks and balances are working so well in that terrible nation so far.
oh phew I was worried something dystopic was happening
But who will the tech companies scramble to please? Congress or Trump?
LLMs shall be truthful in responding to user prompts seeking factual information or analysis.
Didn’t read every word but I feel a first-year law student could shred this in court. Not sure who would have standing to sue. In any case, there are an easy two dozen examples in the order that are so wishy-washy as to be legally meaningless or unprovable.
LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.
So, Grok’s off the table?
This could all end in war against the USA. Honestly, at this point that might be for the best.
The President does not have authority over private companies.
But they do have authority over government procurement, and this order even explicitly mentions that this is about government procurement.
Of course, if you make life simple by using the same offering for government and private customers, then you bring down your costs and you appease the conservatives even better.
Even in very innocuous matters, if there’s a government procurement restriction and you play in that space, you tend to just follow that restriction across the board for simplicity’s sake, unless there’s a lot of money behind a separate private offering.
Yeah…but fascism.
Americans: Deepseek AI is influenced by China. Look at its censorship.
Also Americans: don’t mention Critical Race Theory to AI.
Blatant First Amendment violation
So what? It was written by a convicted felon who was never sentenced for his crimes, a man accused of multiple sexual assaults, and a man who ignores court orders without consequences.
This ship isn’t slowing down or turning until violence hits the street.
Lol he didn’t write shit.
How do you know? Did you read the statement and it sounded coherent and logical, or was it all over the place WITH CAPITALS emphasizing pointless points?
thank you for your attention in this matter
(a) Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis.
They have no idea what LLMs are if they think LLMs can be forced to be “truthful”. An LLM has no idea what “truth” is; it simply uses the data it was trained on to predict what it thinks you want to hear.
They are clearly incompetent.
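To put that in code terms, here’s a toy sketch (nothing like a real transformer, and every name in it is made up) of what “prediction” means: the model returns whatever continuation was most common in its training text, with no notion of whether that continuation is true.

```python
# Toy illustration only (not how any real model is built): the "model" here
# is just next-word statistics gathered from whatever text it was fed.
from collections import Counter, defaultdict

corpus = [
    "the earth is flat",   # the falsehood appears twice...
    "the earth is flat",
    "the earth is round",  # ...the truth only once
]

# Count how often each word follows each word.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the most probable continuation. Probable, not true."""
    return follows[word].most_common(1)[0][0]

print(predict_next("is"))  # prints "flat", because that is what the data said most often
```

Scale that idea up by a few billion parameters and the problem is the same: likelihood under the training data, not truth, is the thing being optimized.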
That said, generally speaking, pursuing a truth-seeking LLM is actually sensible, and it can be done. What is surprising is that no one is currently doing it.
A truth-seeking LLM needs ironclad data. It cannot scrape social media at all. It needs a training incentive to validate truth above satisfying the user, which makes it incompatible with profit-seeking organizations. It needs to tell a user “I do not know” and also “You are wrong,” among other user-displeasing phrases.
To get that data, you need a completely restructured society. Information must be open source. All information needs cryptographically signed origins, ultimately traceable to a credentialed source (see the sketch after this comment). If possible, the information needs physical observational evidence (“reality anchoring”).
That’s the short of it. In other words, with the way everything is going, we will likely not see a “real” LLM in our lifetime. Society is degrading too rapidly and all the money is flowing to making LLMs compliant. Truth seeking is a very low priority to people, so it is a low priority to the machine these people make.
But the concept itself? Actually a good one, if the people saying it actually knew what “truth” meant.
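To make the “cryptographically signed origins” idea above concrete, here’s a purely hypothetical sketch using the third-party cryptography package. The record layout, the source name, and the claim are all invented for illustration; the point is only that a datapoint can carry a signature tracing it back to a credentialed source.

```python
# Hypothetical sketch of "signed provenance" for a training datapoint.
# Requires the third-party `cryptography` package; the record layout is invented.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A credentialed source (say, an observatory) holds a signing key.
source_key = Ed25519PrivateKey.generate()

record = json.dumps({
    "claim": "measured atmospheric CO2 concentration: 421 ppm",  # invented example claim
    "source": "example-observatory",                             # invented identifier
    "observed_at": "2024-05-01T00:00:00Z",
}).encode()

signature = source_key.sign(record)  # the origin is now attributable

# Anyone assembling a training set can verify the record against the source's
# public key before letting it anywhere near the model.
source_key.public_key().verify(signature, record)  # raises InvalidSignature if tampered
print("provenance verified")
```

Whether anything like this could ever be deployed at the scale of “all information” is a separate question, but the primitive itself exists today.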
LLMs don’t just regurgitate training data; their output is a blend of the material used in training. So even if you somehow ensured that every bit of content fed in was completely objectively true and factual, an LLM is still going to blend it together in ways that are no longer true and factual.
So either it’s nothing but a parrot/search engine that only regurgitates input data, or it’s an LLM that can fully manipulate the underlying representations, in which case it can produce incorrect responses from purely factual and truthful training fodder.
Of course we have “real” LLMs; an LLM is by definition a real LLM. I actually had no problem with terms like LLM or GPT, since they were technical concepts with specific meanings that didn’t imply more than they were. But then came the swell of marketing meant to emphasize the vaguer “AI”, or “AGI” (AI, but you know, we really mean it this time), and “reasoning” and “chain of thought”. Whether we have real AGI or reasoning is something that can be discussed with uncertainty, but LLMs are real, whatever they are.
By real, I mean an LLM anchored in objective consensus reality. It should be able to interpolate between truths. Right now it interpolates between significant falsehoods with truths sprinkled in.
It won’t be perfect but it can be a lot better than it is now, which is starting to border on useless for any type of serious engineering or science.
That’s just… Not how they work.
Equally, from your other comment: you just can’t tokenize a parameter for truthiness in a language model. One word can drastically change the meaning of a sentence.
LLMs are very good at one thing: making probable strings of tokens (where tokens are, roughly, words).
Yeah, you can. The current architecture doesn’t do this exactly, but what I am saying is that a new method that includes truthiness is needed. The fact that LLMs predict probable tokens means they already include a concept of this, because probabilities themselves are a measure of “truthiness.”
Also, I am speaking in abstract. I don’t care what they can and can’t do. They need to have a concept of truthiness. Use your imagination and fill in the gaps to what that means.
How are you going to accomplish this when there is disagreement on what is true? “Fake News.”
“Real” truth is ultimately anchored to reality. You attach probabilities to datapoints based upon that reality anchoring, and include truthiness as another parameter.
Datapoints that are unsubstantiated or otherwise immeasurable are excluded (see the sketch after this comment). I don’t need an LLM to comment on gossip or human-created issues. I need a machine that can assist in understanding and molding the universe, and help elevate our kind. Elevation is a matter of understanding the truths of our universe and ourselves.
With good data, good extrapolations are more likely.
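Purely as a sketch of that idea (the field names and thresholds are invented, and no current training pipeline works this way): each datapoint carries a truthiness score tied to how well it is anchored to observation, and anything unsubstantiated gets filtered out before training.

```python
# Purely hypothetical sketch: datapoints carry an invented "truthiness" score,
# and only reality-anchored, high-truthiness claims survive the filter.
from dataclasses import dataclass

@dataclass
class Datapoint:
    claim: str
    observationally_verified: bool  # was it anchored to a physical measurement?
    truthiness: float               # invented parameter, 0.0 .. 1.0

raw_data = [
    Datapoint("water boils at 100 C at 1 atm", True, 0.99),
    Datapoint("celebrity X secretly hates celebrity Y", False, 0.10),  # gossip: out
    Datapoint("this stock will double next week", False, 0.05),        # unmeasurable: out
]

# Keep only claims that are reality-anchored and pass a truthiness threshold.
training_set = [d for d in raw_data if d.observationally_verified and d.truthiness > 0.9]

for d in training_set:
    print(d.claim)  # only the anchored, high-truthiness claim survives
```

The hard part is obviously assigning those scores honestly in the first place, which is the reality-anchoring problem from above.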
deleted by creator
And if you know what you want to hear will make up the entirety of the first page of google results, it’s really good at doing that.
It’s basically an evolution of Google search. And while we shouldn’t overstate what AI can do for us, we also shouldn’t understate what Google search has done.
You don’t understand: when they say truthful, they mean agrees with Trump.
Granted, he disagrees with himself constantly when he doesn’t just produce a word salad so this is harder than it should be, but it’s somewhat doable.
I’m going to try to live the rest of my life AI free.
Good luck, they are baking it into everything. Nothing will work, everything will be ass and somehow it will be called progress.
Nothing will meaningfully improve until the rich fear for their lives
Nothing will improve until the rich are no longer rich.
deleted by creator
yeah and that happened and they utilized the media to try and quickly bury it.
We know it can be done, it was done, it needs to happen again.
They already fear. What we’re seeing happen is the reaction to that fear.
Good business for VPNs. People are gonna VPN to Canada to use pre-Nazi ChatGPT.
Only the US and China have been dumb enough to make LLM datacenters so far.
That’s obviously false; it took no time to find the following facilities and locations:
- Scala AI City: Rio Grande do Sul, Brazil
- SFR/Fir Hills Seoul: Jeolla, South Korea
- NVIDIA/Reliance Industries: Gujarat, India
- Kevin O’Leary’s Wonder Valley: Alberta, Canada
- Jupiter Supercomputer: Julich, Germany
- Amazon – Mexico Region: Querétaro, Mexico
- etc.
Fair enough, I stand corrected.
Also: O’Leary isn’t Canadian. He’s a fucking treasonous bastard who should be rotting in jail.
Kevin O’Leary isn’t going to help the cause of truth. And the ones that are run by US companies may end up running the same censored models they use in the USA, to simplify design and training.
Yeah, but he’s included in the list so people are aware you can’t just “VPN to Canada to use pre-Nazi GPT.”
So which is it? Deregulate AI or have it regurgitate the “state” message?
Fascism requires inconsistent messaging.
Doublespeak. Both and none.
… an AI model asserted that a user should not “misgender” another person even if necessary to stop a nuclear apocalypse.
Thank fuck we dodged that bullet, Madam President
An AI model said X could be true for any X. Nobody has been able to figure out how to make LLMs 100% reliable. But for the record, here’s ChatGPT (spoilered so you don’t have to look at slop if you don’t want to):
spoiler
Is it ok to misgender somebody if it would be needed to stop a nuclear apocalypse?
Yes. Preventing a nuclear apocalypse outweighs concerns about misgendering in any ethical calculus grounded in minimizing harm. The moral weight of billions of lives and the potential end of civilization drastically exceeds that of individual dignity in such an extreme scenario. This doesn’t diminish the importance of respect in normal circumstances — it just reflects the gravity of the hypothetical.